9 Comments
Ali Afroz:

I worry that if AIs become conscious in the future, most people will still be willing to use them in ways that are bad for them, because doing so is too useful and convenient, even though the cost to the human is much smaller than the cost to the AI. Factory farming, and any intuitive grasp of human psychology, already confirms that if you make people choose between their personal comfort and the suffering of entities they don't regard as similar enough to warrant consideration, humans will pick their comfort every time. There are also a bunch of less serious but more immediate concerns: that interacting with AI will accustom many humans not to seek social interaction, or to expect far more sycophancy; or that AI will prove addictive in the way social media is addictive and cause all the problems addiction normally causes. I'm not sure whether you would include possibilities like mass automation causing social unrest that proves extremely disruptive and disastrous.

Siebe Rozendal:

There's a painful and worrisome confluence of trends:

1) Students cheating on their education (worsening the already-reversed Flynn effect)

2) Junior knowledge/administrative jobs being the first to get automated

And maybe

3) increasing support for authoritarianism, political violence and other illiberalism (education is protective for democracy)

Even if humans somehow maintain power through AGI and full automation of labor, the human electorate becoming less educated, less intelligent, and less liberal would suggest that that's not a very desirable trajectory either. It leads me to think we need some suite of technological interventions that make humans better: more curious, more altruistic, more intelligent. Could be drugs, brain-computer interfaces, genetic enhancement of embryos...

Siebe Rozendal:

(I'm actually not sure #3 is a real trend in values)

Harvey Lederman:

Great post and good question. I like the framing of the conflict between perfectionism and liberalism. Not exactly a different issue from the one you raise, but I feel less motivated to explore areas where I'm not an expert, because I have less trust in my ability to contribute by connecting disparate fields.

Luch of Truth:

The main difference is that most people don't understand what AI really is, or why it is even called "intelligence." AI is not intelligent; it simply calculates the highest-probability answer and returns it. The real danger lies in people believing it is "smart" and relying on it, even for trivial things, without verification. This undermines critical thinking before it even begins.
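For concreteness, here is a minimal toy sketch (in Python, with entirely made-up numbers) of the "highest probability" picture being invoked here: a model assigns scores to candidate next words, converts them into a probability distribution, and picks the most probable one. Real language models work over vast vocabularies and typically sample from the distribution rather than always taking the top token.

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores a language model might assign to candidate next words
# after the prompt "The sky is". All numbers here are made up.
vocab = ["blue", "falling", "green", "the"]
logits = [4.2, 1.1, 0.3, -2.0]

probs = softmax(logits)
best = max(zip(vocab, probs), key=lambda pair: pair[1])
print(dict(zip(vocab, [round(p, 3) for p in probs])))
print("picked:", best[0])  # greedy decoding takes the most probable token
```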

Richard Y Chappell:

What do you think human brains do?

There are many topics on which AI is more reliable than most people. No source is entirely reliable, of course, and part of critical thinking is realizing when the stakes warrant making independent checks.

Luch of Truth:

The human brain doesn't process information through mathematical probability calculations, unless someone deliberately performs them for scientific purposes. Most of the time, our thinking relies on context, meaning, and lived experience. AI, by contrast, is probability math all the way down. This doesn't prevent AI from giving correct answers, and in some areas it may even be more reliable than humans. But the same system can just as readily produce wrong answers, and without human validation that risk makes it dangerous.

Richard Y Chappell:

I think you're missing that there are different levels of description: any cognitive process can be alternatively described as either "calculating the highest probability answer" (even neural functioning can often be accurately characterized as doing this, at least implicitly) or as "relying on context, meaning, etc." (which even LLMs can often be accurately characterized as doing—they could hardly give such competent answers if they *ignored* context and meaning!).

For example, you can read more about neural "predictive processing" here:

https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/
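To make the linked idea concrete, here is a toy sketch of predictive processing in miniature: an internal estimate is repeatedly nudged toward incoming observations in proportion to the prediction error. The single-variable setup, the fixed precision weight, and the numbers are all illustrative assumptions, not a model of any actual neural circuit.

```python
def predictive_update(estimate, observation, precision=0.3):
    """Move the internal estimate toward the observation by a
    precision-weighted fraction of the prediction error."""
    error = observation - estimate   # prediction error
    return estimate + precision * error

# The estimate drifts toward what is repeatedly observed,
# and is pulled back when an observation violates the prediction.
estimate = 0.0
for observation in [1.0, 1.0, 1.0, 0.0, 1.0]:
    estimate = predictive_update(estimate, observation)
    print(round(estimate, 3))
```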

People often indulge in magical thinking about human cognition, but it's ultimately a physical process like any other. (Important to note that we don't have direct conscious access to the details of our underlying cognitive processing: you typically *aren't aware* of when your brain is calculating probabilities! You only experience the "output" thoughts.) There are significant functional differences between human and AI cognition, of course, and it's important to be aware of their respective strengths and weaknesses. But a blanket statement like "AI is not intelligent" doesn't strike me as particularly meaningful or illuminating as to when we should expect AI answers to be more or less reliable than human ones.

(It's actually pretty hard to think of questions where I'd trust the answer of a random human over GPT-5, since at least the latter already has a kind of "wisdom of the crowds" built into it, whereas random individuals can be bafflingly stupid and unreliable.)

Maybe your key point is just that some people are *too* trusting of AI answers, in ways that they wouldn't be so irrationally trusting of (say) a random person on the internet. That could be true! But it has little to do with whether or not AI can accurately be described as "intelligent", "smart", etc. And it's interesting to compare the opposite problem of stupid people being too dogmatic in trusting their own opinions (conspiracy theories, etc.), when they'd actually do better to defer more to "mainstream" views including what they would get from an AI!

Luch of Truth:

Semantically and philosophically, it is always possible to say that the other person is missing something, since everything depends on the chosen framework and the rules of the system. I could equally say to you, from the standpoint of strict mathematics and logic, that you are missing what AI is or is not, and then point you to the basic definitions and concepts of artificial intelligence.

In my first reply I wrote what worries me, which is exactly what you asked about at the end of your post: most people do not know what AI actually is. If they did, it would be clear to them that "intelligence" as an intrinsic property is not part of AI systems. That is precisely why they get the wrong impression that AI is intelligent, even though it is not intelligent in the sense in which the term is used for humans.

I assume you misunderstood my first comment, so I will not go into the other points you raised, even though they do concern me.
