6 Comments
Bryan Frances

"Certain participants in the discussion view “tech bros” as their political enemies, and any thought that takes AI capabilities seriously is viewed as serving the interests of those enemies, and hence must be opposed."

It is stunning to me how often philosophers fall into the trap you describe here, suitably generalized.

For instance, lots of philosophers think masculinity is horrific, because many right-wing people celebrate the terrible parts of masculinity. On the other side, some philosophers end up endorsing Trump or other right-wing causes because they can't stand the far-left woke progressives. We can do better than this, but the first step is to identify the tribal urge in oneself, which is a step many of us never take.

Karin Rudolph

I've seen this many times, and unfortunately, many self-appointed AI ethics experts fall into this category. They lack the curiosity to understand how people actually interact with these systems; instead of researching and producing new knowledge, they repeat the same ideas and concepts, such as "power dynamics" and "tech bros", fixate on negativity, and often censor anyone who tries to explore ideas beyond their limited perspective.

Siebe Rozendal

I'd point them to 'Reversed Stupidity Is Not Intelligence', but it's written by Yudkowsky 😅

https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence

(Btw, not a big fan of the AI-generated cover image for this article, because it makes you look more tech bro 😬)

hn.cbp

I agree that dismissing alignment questions on the grounds that “AI aren’t agents” is a category mistake. Behavioral alignment plainly matters regardless of metaphysical agency.

But there is a deeper structural issue lurking underneath the alignment debate itself. Even perfectly aligned systems can normalize action without ownership — decisions that are well-behaved and norm-compliant, yet cannot be claimed by any agent who can be held responsible for them (e.g., through delegated automation and pre-structured flows).

Alignment discourse tends to focus on shaping outputs; it rarely asks whether anyone remains structurally positioned to stand behind those outputs once decision-making is distributed across models, pipelines, and institutions.

That seems like a distinct philosophical problem, orthogonal to consciousness or moral personhood, and not captured by the “are AIs agents?” framing at all.

Rafael Ruiz

Having to choose between Daily Nous or Leiter's blog for philosophy news reminds me of having to choose between BlueSky or Twitter. They're both terrible in their own ways.

Richard Y Chappell

In fairness, I thought the original DN post in this case was very reasonable. It's the readership—i.e., the philosophy profession at large—that I despair of.