Anti-AI Ideology Enforced at r/philosophy
Are Reddit mods abusing their power?
I was surprised to find myself temporarily banned (for 3 days) from Reddit’s r/philosophy for attempting to share my recent post ‘The Gift of Life’. The post struck me as a good candidate for being of general philosophical interest: it’s on an important topic, substantively addressing popular anti-natalist arguments, but still (I hope) clear and easy to read. But it turns out the philosophy subreddit mods have a rule against sharing content that includes any AI-generated images.
Now, I’d understand having a rule against submitting AI-written articles: they may otherwise worry about being inundated with “AI slop”, and community members may reasonably expect to be engaging with a person’s thoughts. But of course my articles are 100% written by me—a flesh-and-blood philosopher, producing public-philosophical content of a sort that people might go to an official “philosophy” subreddit to look for. The image is mere background (for purposes of scene-setting and social media thumbnails). I’m reminded of my middle-school teacher who wouldn’t let me submit my work until I’d drawn a frilly border around it. Intelligent people should be better able to distinguish substantive from aesthetic content, and to know when to focus on the former.
When I messaged the mods, checking whether they really meant to exclude 100% human-written philosophical content (from a professional philosopher, in fact), just because it’s supplemented with an AI image, they replied that the rule is “well justified given the harms that AI poses overall.”
This is, in my professional opinion, not remotely justified. Rather, I think the mods are abusing their position to impose their personal ideology on r/philosophy users in a way that is contrary to the interests of the philosophical community that they are supposed to be stewarding. I don’t have any strong personal attachment to Reddit (I’ve only tried sharing posts there a few times), but bad behavior annoys me and the underlying dispute is kind of interesting, so I’ll explain my reasoning in more depth below. (If it results in the bad policy being revised, all the better.)
Norms of Neutrality
Internet spaces can be loosely divided into “personal” and “public” communities. Personal spaces, like this blog, are subject to the arbitrary whims of their creators/moderators. Since this blog is my space, I can block or ban anyone for any reason, and that’s my prerogative.1 If I make a mess of my personal online space, it’s a wasted opportunity to do better, but it’s “harmless” enough in a sense: I didn’t have to create this space in the first place, and my having done so doesn’t prevent anyone else from offering something better that might displace it.
Public communities are different. They inhabit a space that is in some sense privileged as the space for a community of their sort. This generates duties of good stewardship: to adhere to certain neutrality norms, and not to abuse one’s moderating powers to advance personal interests, ideologies, etc. Now, I take it that r/philosophy is a public community in this sense. It would (I assume) be very difficult for a competitor to displace r/philosophy as the subreddit that members of the general public find when looking for a place to discuss philosophy. So if the mods make a mess of it, that has real costs for the quality of philosophical discourse and community on Reddit. The mods accordingly have duties of good stewardship to implement (only) rules that are conducive to creating a good philosophy subreddit, not rules that merely advance their personal interests, ideologies, etc.
Absolute Opposition to AI Images Is Ideological
Reasonable people can vary in their overall attitudes towards AI. (Personally, I find AI pretty exciting, though I certainly worry about the potential downsides, and would like to see it better regulated. Still, I do expect the overall effect of the technology to be positive, even though I understand and sympathize with the concerns of many who disagree. One exception to my sympathy: those who think that the “pirate” training of generative models renders them unethical to use. Those people are just confused about the ethics of intellectual property.)
What I find completely undeniable is that AI, like any powerful tool, has both good and bad potential uses. One such good use, in my opinion, is to provide free graphical design/illustration capabilities to ordinary people for whom these services would otherwise be out of reach. If I want an illustration to go along with one of my posts—which is a perfectly reasonable thing for a writer to want—I can get a custom image, roughly approximating my specifications, created for free in a matter of seconds by using AI. That’s very useful! Sometimes it can even be philosophically illuminating, as in my (AI-generated) illustration of shuffling around expected value. To require that illustration to be removed from the post would harm philosophical understanding, for no good purpose. Most of my illustrations are not so essential; but I still like them well enough, and I don’t think r/philosophy mods have any business policing the aesthetic choices that individual philosophers make about how to present their public work. (Readers are, of course, free to simply stop reading my blog if they find the aesthetics too distasteful, ideologically offensive, or whatever.)
If I had to make my illustrations myself, this would not only be a horrendous waste of my time, but I can also assure you that the results would not be pretty.
So there are good reasons for philosophers, like myself, to instead use AI-generated images to illustrate our work.2 Moreover, there is no harm in our doing so. (The energy use is trivial—equivalent to about 5.5 seconds of microwaving according to the MIT Technology Review.) So why would the r/philosophy mods seek to prohibit philosophers from presenting their work in a way that is (i) perfectly reasonable, and (ii) harmless?
The only answer I got invoked “the harms that AI poses overall.” This is utterly unreasonable. First: as we’ve already seen, reasonable people can disagree about whether AI is overall good or bad. It is not the place of the r/philosophy mods to impose their personal opinions on philosophers who reasonably disagree with them. Second: even if they’re right that the technology is overall bad, it is not, in general, reasonable to ban or prevent people from doing something that is itself reasonable and harmless just because it shares a category with other things that are harmful.
For comparison: suppose that, citing Netanyahu’s actions in Gaza, they banned all contributions featuring work from Jewish people. That would obviously be outrageous. We all know that it isn’t decent to judge individual people based on their demographic categories. But the underlying principle generalizes beyond just people. If it turns out that (a certain kind of) shepherding is good for sheep, it would clearly be unreasonable to object to good shepherding on the grounds of “the harms that animal agriculture poses overall” (due to the factory-farming of chickens, say). Good or harmless acts do not become objectionable just because they can be lumped together in a broad category with (relevantly different) harmful acts. But that is precisely what the r/philosophy mods are doing with their blanket anti-AI policy.
In general, ignoring relevant differences and treating everything in a broad category as taboo is a sign of lazy ideological thinking. People who are ideologically opposed to AI are tempted to regard every use of AI as objectionable. But this is obviously misguided. I’d encourage everyone to try to think more clearly and carefully. Still, individuals are free to be arbitrarily ideological in their personal spaces, if they wish. Good stewards of public communities are not. They have a moral duty to avoid imposing their ideological opinions on other members of the community; otherwise, they risk depriving their community of good and relevant content without adequate public justification. (Note that there is nothing in the nature of philosophy, or in the internal norms and purposes of a philosophy subreddit, that would justify imposing a “no AI illustrations” policy against the wishes of other members. People should have a right to present and illustrate their philosophical contributions as they deem best.)
When Should AI Content Be Prohibited?
Subreddits and similar public communities may most reasonably ban AI-generated content when the ban applies specifically to core content (what constitutes the basis of the submission) as opposed to mere background (like frilly borders or personal aesthetic choices on the part of the author/contributor). As mentioned at the start of this post, it would seem reasonable for r/philosophy to ban AI-written articles.3 But note that this is not the same as an absolute ban even on AI-generated text (let alone other media). An article on AI ethics, for example, may have good reason to quote some AI text, and it would be obviously unreasonable to prohibit this.
As this example shows, an article may contain AI-generated content, in some suitably demarcated and separated way, without thereby qualifying as an AI-written article in any objectionable sense. The important thing, I take it, is that the core content of the article is authentically human, and any AI-generated content is suitably “cordoned off” in a way that is clear to the reader and doesn’t undermine the authenticity of the work as a whole.
Illustrations supplementing a philosophy text obviously meet this criterion.
1. As it happens, I care to be reasonable, and so try to only ban people for good reasons, e.g. disrespectful or otherwise norm-violating comments. I may write up a more explicit “comments policy” someday, though it hasn’t seemed especially necessary to date.
2. I haven’t done so in this post, so that I can share my objections to the subreddit without breaking their rules. :-)
3. It’s an interesting question how to think of AI-“assisted” writing. While I don’t currently use AI in this way myself, I don’t personally have any objection to others using AI to improve their writing. As a consumer of ideas, I’d like to read whatever is the best expression of the idea they want to convey. On the other hand, if I’m in a space for public discourse and dialogue (like Reddit), I do want to be interacting with other people and their ideas, not just having them serve as intermediaries blindly copying and pasting LLM text. (I could easily find chatbots myself if that were what I wanted.) Since it’s hard to assess at what point down the slippery slope the human aspect of the interaction gets undermined, I guess I can understand a policy of keeping off the slope entirely, at least until we have better norms for navigating these tricky issues more precisely. (Though I’d personally be more inclined to just trust people not to waste their time acting as mere LLM intermediaries.)
Update: While my attempt to share this on r/philosophy was blocked by a Reddit filter of some sort, someone else managed to post it, and it has sparked quite a discussion!
https://www.reddit.com/r/philosophy/comments/1mmr13z/antiai_ideology_enforced_at_rphilosophy/
Who are the r/philosophy mods?
I’m sympathetic to what you say here about public vs. private online spaces, but I worry I’m biased by my views of the underlying first-order question about AI.
I’m particularly struck by the recommendations I see here, on this thread and elsewhere, to the effect that people who would otherwise use AI images should instead find public-domain images or take their own pictures. When you consider the range of highly specific images people use AI to create, this is obviously quite often a non-starter. You’re asking people to spend huge amounts of time searching for images that may not exist, or creating them themselves, when the image typically isn’t all that important. The actual alternative to using AI images is almost always not using images at all. Maybe that’s fine; images typically aren’t all that important. But people should be honest about the tradeoff.