Discussion about this post

Ali Afroz

I worry that if AIs become conscious in the future, most people will still be willing to use them in ways that are bad for them, because it's too useful and convenient, even when the cost to the human is much smaller than the cost to the AI. Factory farming, and any intuitive grasp of human psychology, already confirms that if you make people choose between their personal comfort and the suffering of entities they don't regard as similar enough to warrant consideration, humans will pick their comfort every time. There are also a bunch of less serious but more immediate concerns: that interacting with AI will accustom a lot of humans not to seek social interaction, or to expect a lot more sycophancy, or that AI will prove addictive the way social media is addictive and cause all the problems addiction normally causes. I am not sure whether you would include possibilities like mass automation causing social unrest that proves extremely disruptive and disastrous.

Siebe Rozendal

There's a painful and worrisome confluence of trends:

1) Students cheating on their education (worsening the already reversed Flynn effect)

2) Junior knowledge/administrative jobs being the first to get automated

And maybe

3) Increasing support for authoritarianism, political violence, and other illiberalism (education is protective of democracy)

Even if humans somehow maintain power through AGI and the full automation of labor, the human electorate becoming less educated, intelligent, and liberal would suggest that that's not a very desirable trajectory either. It leads me to think we need some suite of technological interventions that make humans better: more curious, more altruistic, more intelligent. Could be drugs, brain-computer interfaces, genetic enhancement of embryos...
