While my attempt to share this on r/philosophy was blocked by a Reddit filter of some sort, someone else managed to post it, and has sparked quite a discussion!
If you think this is bad, you should see the r/badphilosophy subreddit. I find it genuinely disturbing that some of the mods in that community have academic jobs, and presumably the ability to make hiring and tenure decisions on committees.
Wow! Substack is the only online community I've participated in primarily to discuss ideas (never was on Twitter, and mostly use Facebook for personal stuff), and getting a glimpse of reddit mostly makes me grateful for what it's like over here. I'm sure reddit is big, and I've heard there's lots of good stuff. Maybe r/philosophy is better on other topics...
I stopped using reddit about a year and a half ago, the same time I stopped using facebook. Before then I was moderately active on /r/philosophy and generally found it fine. But it does sound like social media generally has been on a downward spiral the past couple years.
I'm sympathetic to what you say here about public vs private online spaces, but I worry I'm biased by my views of the underlying first order question about AI.
I'm particularly struck by the recommendations I see here on this thread and elsewhere to the effect that people who would otherwise use AI images should instead find public domain images, or take their own pictures. When you consider the range of highly specific images people use AI to create, this is obviously quite often a non-starter. You're asking people to spend huge amounts of time searching for images that may not exist, or creating them themselves, when the image typically isn't all that important. The actual alternative to using AI images is almost always not using images at all. Maybe that's fine; images typically aren't all that important. But people should be honest about the tradeoff.
To be fair, I think the main point of pointing out that you can just go get a photo is that if you’re really hell-bent on having an image, there’s nothing stopping you from getting one. It’s just more inconvenient. So it’s more permissive than simply banning all images and shouldn’t be treated as comparable to that.
It’s much closer to a ban than you think. A lot of people draw distinctions based on what is possible vs what is impossible, but I think it’s much more useful to draw distinctions based on what is likely vs what is unlikely. For instance, a lot of governments are required to hold public outreach meetings in person, because everyone theoretically *can* attend, while some people are excluded from online meetings due to lack of connectivity. But making it theoretically possible for everyone yet extremely unlikely for all but a few is much worse for actual outreach than making it likely for a good number.
I agree in some situations, but it depends on what’s important in the given situation, because when it comes to freedom, your approach of focusing on what’s likely appears to ignore the distinction between things that are unlikely because they are very difficult and things that are unlikely because you don’t care about having them very much and so don’t put in much effort to get them. Of course, if for some reason you are specifically hung up on having a photo precisely customised to the situation, it’s pretty difficult to get one, but most people don’t mind very much if all they have is a photo in the rough ballpark of what they want.
The fact that there is a huge difference in likelihood between people showing up at an in-person public outreach session vs an online one shows that people *do* care, but the thing that is theoretically possible is very aversive for some reason. Similarly, the fact that there is a huge difference in probability between people using images when they are stock photos vs custom made AI images indicates that they *do* care about having the images but that the stock photographs have some serious defect that makes them much less usable (namely, that they don’t illustrate the point that you wanted an image to illustrate).
You are ignoring the possibility that people like using AI images when they can get them for basically zero effort, but don’t value them much, and so are unwilling to put in even small amounts of time to find a photo in the ballpark of what they want. For example, in economics, increases in the price of certain goods can lead to rather large reductions in demand, depending on the elasticity of demand for the good in question. So the number of people who stop doing the thing in response to barriers doesn’t tell you that they care a lot. Depending on the context, it might even be evidence that they care very little, because if they cared a lot, they’d be willing to put in the work to circumvent whatever restriction you put in place. Indeed, I’ve heard that lots of research indicates that a surprisingly large number of people will stop in the face of trivial barriers, although I have mostly heard this through hearsay from others rather than reading the research myself.
Yeah, it's notable that in my ~18 years of blogging prior to AI I basically just never used illustrations in my posts. Reverting to that practice wouldn't be the end of the world, but would involve losing a capability that I (somewhat) value, and to lose (somewhat) valuable options for no good reason seems... bad.
If you want a custom image, you can certainly spend hundreds or thousands of dollars and days to weeks going back and forth with a graphic designer, or learn to use image editing tools yourself.
Whether increasing the time and money cost of getting a custom image many orders of magnitude amounts to the same thing as a ban or not--I'd say not; a 10000% tax isn't a ban, it's a 10000% tax--one shouldn't make out that it's easy to substitute non AI art for AI art.
To be fair, I suppose the more likely form of substitution would involve relaxing the specificity of the image you're looking for. Basically giving up on a custom image, and looking for something generally in the ballpark of your topic.
Yes, that is the reason I myself never used the phrase custom image. You can use graphics just fine; they just won’t be custom images. You might find it annoying to have to use a photo that is only in the ballpark of what you want, but describing this as a 10000% tax strikes me as rhetorically excessive, for approximately the same reason you found the original calls to just use a photo annoying.
There's a reason I *did* use the phrase custom image: because that's what's de facto banned (or subject to an astronomical tax). Generic clip art isn't the same thing as a custom image. Talking about how graphics aren't banned is sleight of hand that obscures the actual trade-off. Better to say that there's no great loss in not being able to customize graphics.
At least you only got banned for 3 days! I got permanently banned for linking this post (https://linch.substack.com/p/why-reality-has-a-well-known-math), despite making a substantive philosophical point, having an article with a reasonable number of upvotes, and prior interested engagement from other related subs.
The AI image in question (a shrimp physicist trying to do calculations at the bottom of a waterfall) also clearly did not have anything remotely like it in the public domain. And I clearly labeled AI use rather than hide it.
This is in keeping with reddit's long-standing policies.
They were opposed to bronze replacing stone, then iron replacing bronze. Nothing from the printing press was allowed, they required hand written posts. They prohibit the burning of fuel in internal combustion engines and the generation and distribution of electricity. And they will certainly oppose the Internet should they learn about its existence.
I think if you mistakenly believe that AI image-use is IP theft, it makes sense to exclude posts with AI images. Like philosophy conferences that only serve vegan food, I doubt there’s any objection to the practice that isn’t just an object-level objection to the ideology that underlies it.
I think it's more complicated! I agree there's no independent objection to conferences serving (i.e. providing) only vegan food. But I think it *would* be objectionably illiberal for the conference to *police* participants' diets, e.g. by checking bags at the door and removing any home-made ham sandwiches that participants have brought along for themselves.
So I think a lot depends on which actions properly fall within the sphere of influence of the host (e.g. what meals THEY will provide for participants) versus which are private choices for participating individuals to make for themselves (i.e. what THEY will bring/eat).
How is the belief "mistaken"? image-gen-ai training datasets have been scraped from the portfolios of artists with neither their knowledge nor consent. That sounds an awful lot like theft to me.
I hadn't seen that post, no. And frankly I find it quite slanderous to equate people who believe "rich tech companies should stop trying to destroy the ability of non-rich people to make a living doing art" with people who believe "taxation is theft". The reality of living in a society centered around private property rights is that people need a certain amount of control or ownership over their ability to work so that they don't, you know, die. What do you think the end-game is for all this gen-ai training? Do you think this leads to a world where people are more or less free? Because from my position as somebody who works in tech and has familiarity with how these tools work (not to mention knowledge of what kinds of data sets are being aggregated by so many giant companies), I say all signs point to the latter. And many people employed by these companies understand on some level that what they are doing is, in fact, tantamount to theft. You'll have a hard time getting them to publicly cop to this though since, after all, 'It is difficult to get a man to understand something, when his salary depends on his not understanding it.'
At any rate, your willingness to use a term like "neo Luddite" to denigrate a position is pretty telling.
(1) A deontological accusation of "theft", which applies regardless of the consequences; and
(2) An empirical prognosis that you expect gen-AI to prove detrimental to humanity on the whole.
I think #2 is a super-important question, and I respect people who worry that it may turn out badly. I think you're wildly overconfident if you think you can "almost certainly" predict the effects of this transformative technology. As mentioned in the OP, I'm tentatively optimistic about it myself. But this is really the important question that people should be discussing.
Type-#1 deontological moralizing about "theft" is just a sideshow. If I'm right that the consequences of the technology are overall empowering of human capacities, then we should not regard it as "theft". Property rights and their limits -- ESPECIALLY when we're talking about intellectual property -- should be determined with reference to human interests. The failure to acknowledge this is PRECISELY how the "AI is theft" crowd is similar to the right-wing propertarian "taxation is theft" crowd. Both extremes make the mistake of thinking that property rights are a kind of "first principle", rather than a pragmatic social construction that should be designed to promote human interests.
P.S. I think it's "pretty telling" that instead of engaging with my arguments on their merits, you're hunting for signal phrases that "tell" you something ad hominem about the author, and how you might seek to cluster them together with others you regard as "morally dubious".
Any technology that can be used for repression and control will be used for repression and control. LLMs right now are operating with a loss-leader model: The goal is to elicit dependency on these tools, and to later recoup today's losses by massively raising prices once these tools are too deeply embedded in customer processes to be excised without jeopardizing entire tech stacks and delivery pipelines. Make no mistake, the public availability of tools like ChatGPT or Midjourney is a matter of marketing, not altruism. Once the "make a big profit" lever has been pulled, only those who can pay through the nose will be able to benefit, and you'll have one more technology category that helps to further cement massive class inequality.
It's strange to me that you make these comparisons to right-wingers while turning a blind eye to the glaring problem of a right-wing economic order that seeks only to impose austerity and extract value. In the context of the current mode of production, I fail to see how scraping artist's portfolios and driving them out of work is in any way congruent with any notion of "human interests". Seems quite the opposite to me.
I'm not qualified to argue with you about deontology, but I understand the workings of our present economic order and of this industry in particular. We need a lot more skepticism from academics if we're ever going to see anything approaching the utopian vision you're laying out here, imo. I don't like it when academics carry water for big business.
And fair enough on that last point, I was more hostile than I should have been.
To be fair, everything you say is true of the printing press, which has absolutely been a tool of repression and control, but it’s not so bad that I don’t want to avail myself of it.
I agree with you on the substance of whether it would be desirable for this policy not to exist, but I think the moderators have the right to regulate the community in this way, and their behaviour is not unreasonable.

Firstly, any subreddit is a community of online individuals, even though the general public may occasionally use it. It is reasonable for the people who engage with the content the most to have more influence, and moderators tend to be people whom a community generally trusts and agrees with, so unless there is a specific reason to expect the community disagrees with the moderators here, I think they should in fact count as acting as agents for a private community.

Secondly, from their own point of view, they are engaging in good stewardship by incentivising less use of a technology they consider harmful, the same way you might encourage people to boycott a company that uses slave labour or factory farming. I think it is fundamentally not possible to have a system where people only go by their best judgement in matters they consider uncontroversial, since what is controversial is itself a topic of dispute; it’s simpler to just let communities do as they wish and go by their best judgement in ordinary circumstances.

I personally think it’s kind of like the online equivalent of not drinking wine at a club with an explicit no-alcohol policy, the management having decided that alcohol is bad for personal health and society. Even if you trust yourself to drink responsibly and think it’s unreasonable for such a rule to exist in these circumstances, you should still think that, as a social matter, it’s good and right for the club to be able to regulate itself this way. So I would agree with trying to persuade the moderators to change the policy, but I think your criticism goes beyond that and suggests that they acted badly here, which I think is not warranted.
To give another analogy: when activists get a government to ban factory farm products that do not meet certain standards, they are imposing their own private opinion on other members of the community, some of whom likely disagree with their opinions on factory farming and think they are being as unreasonable as you think the moderators are being here. Still, most people would agree that there is nothing fundamentally unreasonable about the activism. And unlike governments, the moderators are constrained by exit rights to a much greater extent, because if they got sufficiently out of hand, realistically the users and general public would exit and start up a different community to serve this function. That is not too strong a constraint, but it is certainly much more constraining than anything a government has to face. So if it’s okay for people in a government to impose their own private opinions on the whole society when they think they are reasonable, it has to be okay for moderators to impose their private will on the community, especially if they think the majority of the community agrees. Of course, realistically it’s not the majority that counts, only the preference-weighted opinion of the community: members of the general public who don’t care much about philosophy count for near zero, whereas users who spend a lot of time there and have strong preferences about it count for a lot more.
It's even worse than what's described here. EDIT: Confirmed that even work written in Google Docs and Microsoft Word is banned due to those applications' integration of LLMs, see below.
The rule is specifically that no AI-generated or AI-assisted content can appear on the linked page. I spoke with the mods about being approved for self-promotion on there, and I asked whether the fact that I was at the time using an AI-generated profile picture prohibited me from posting, to which they said they would have to discuss and get back to me (they never did). They were so deep in this position that they weren't sure whether the small byline picture, which is hardly even discernible, would somehow detract from the user experience.
To make it even stranger, I asked them if I would also need to proactively disable comments just in case someone else with an AI profile picture were to comment, which would have the same end effect as my own profile picture. They were able to immediately tell me that was not a problem and comments could stay on.
Even stranger than that, I asked whether it would be okay to post an article about AI images that used AI images as demonstrations. That was also okay by them.
I also asked whether it was okay to use Google Docs or Microsoft Word to draft articles, as they now include AI features that pop up automatically, and which I didn't think I could disable, meaning that writing in those applications would be "AI-assisted", even if only unintentionally. They said that as long as I didn't use the AI features it was okay, but they clearly misunderstood, as the point was that you can't *not* use them. So, if you're being really careful about the rule, Google Docs and Microsoft Word are also prohibited. And obviously so is Google, as AI summaries instantly appear and could potentially influence your thought process while writing your piece.
EDIT: I looked back at the screenshots I took of the conversation with the mods, and they said "If Google Docs is now using LLMs in suggestions that would too" (the "too" referring to disqualifying me from posting).
Eventually, I ended up changing my profile picture to what it is now, which is a collage of some real images and AI images. After not having heard back from them, I asked if this was now acceptable, and they never answered me about that either.
In the end, I got banned from Reddit anyway when an automoderator incorrectly flagged one of my comments that was about a sensitive topic (moral issues with porn). The moderators of that subreddit, one of whom is a Buddhist monk, appealed to Reddit admins on my behalf, but the admins got back to me and said that I violated Reddit rules and my account is permanently banned (post about that here: https://ottotherenunciant.substack.com/p/i-was-banished-from-reddit-by-a-robot).
I honestly didn't realize how bad Reddit is until recently.
This post contains an honest, logical argument, and that’s why it was mocked on reddit. You should have written a post about the poor situation of struggling writers, desperate for clicks in order to make ends meet, needing to use AI images in order to attract readers and fight against the capitalist machine.
It would have gotten 1000 upvotes and everyone would be saying that banning AI images is tyranny.
I'm not crazy about the /r/philosophy policy, but I saw a year or so ago that this was the way the wind was blowing (and I'm generally less positive about AI than you are). Fortunately, I found an alternative: my university has a university-wide contract with Adobe, which includes Adobe Stock. That leads to a win-win: I've got access to a very large array of quality photos and other art that I can use to illustrate my posts, and their creators get paid for it (by Adobe) without me having to shell out anything. Not everybody has access to that, which is why I don't like the /r/philosophy policy, but for those who do, I think it is a better choice than AI art: where possible, it is good to support human artistic creators.
It's an interesting question whether your last sentence is true. Is taking stock photos such a great use of human labor? I guess it depends what else they would be doing. If there was another use of their time and efforts that did more to help others (e.g. working as a childcare provider), why would I want to prop up people doing less-helpful work, just because it's coded as "artistic"? (Anyone is, of course, free to pursue photography as a hobby, even if they can't make a living from it.)
I'd rather aid go to kids at risk of malaria than artists at risk of needing to find another job. Of course, there's no immediate causal path from using AI over stock photos to getting more money donated to effective charities. But if more people shared my attitudes, companies/universities might save money from dropping Adobe in favor of cheaper AI graphics, the savings could allow them to improve salaries, and we could donate our extra salaries to life-saving aid.
You're free to act on your aesthetic preferences. But it's obnoxious to presume that others must share them as a condition of even participating in a public space.
When you say “it is obnoxious to presume that others must share them,” that is equally indicting of your own post.
Further, many, if not all, aesthetic choices are political choices, and all political choices are ethical ones, so there is no aesthetic high ground upon which a man can make a non-ethical stand.
There's a well-developed and highly principled liberal tradition that is not undermined merely by noting that liberal norms are intolerant of illiberalism. The point is to create a culture and range of spaces (of varying norms) in which as many people as possible are free to act upon their own preferences without creating conflict or imposing harms on others.
"Everyone must conform to my aesthetic preferences as a precondition to participation in the public sphere" is a blatantly illiberal position. I'm not presuming that anyone has to share my first-order preferences. I do assert that they ought to share my higher-order commitment to liberalism, i.e. respecting others' rights and freedoms to the greatest extent compatible with respecting like rights and freedoms for all others.
I think demanding that everyone conform to respecting the rights and preferences of everybody else to the maximum extent possible is itself a very first-order position, and in practice most actual public institutions have never even pretended to adhere to such a demanding standard. So unless you water it down, it’s effectively demanding that public institutions conform to a highly controversial philosophical commitment of yours. For example, are governments and universities acting contrary to liberalism when they subsidise art they consider good, or refuse to fund medical treatments they consider of controversial ethical standing? Is it unethical under your view for me to use government to ban public nudity because it does not conform to my aesthetics? Does it violate liberalism for the state to actively promote marriage as a social institution? You might point to the number of people who hold the preference, but at that point you are no longer using liberalism and are instead doing utilitarianism the old-fashioned way. And of course, utilitarianism is itself highly controversial. Remember, if your public spaces are conforming to the views of the majority, you aren’t ideologically neutral, you are just conforming to the majority ideology, which is fine, but certainly not neutral. Also, liberalism in your sense is in any case not the majority position, since most people think it’s fine to impose your first-order views on others. For example, the reason most people object to things like banning alcohol or marijuana or public nudity has absolutely nothing to do with ideological neutrality and everything to do with believing that these policies are bad on the merits. Even most people in the public sphere who talk of ideological neutrality have the sense not to consistently follow this principle.
And respecting the rights and freedom of others to the maximum extent possible is in any case, unless you articulate the view very weirdly, not ideological neutrality or even the majority ideology, but straight-up consequentialism, assuming that respecting them to the maximum extent possible means maximising. Even if you don’t articulate the view this way, it’s just hard to see how such a doctrine isn’t a full-fledged ethical philosophy that will effectively dictate all first-order views. For example, under your view, immigration restrictions seem very hard to justify, since they greatly restrict the rights and freedom of others for minor benefits to the natives. Whatever someone might call this position, it isn’t ideological neutrality in any sense.
And again, ideological neutrality isn’t a good thing. Is it bad that most universities and schools refuse to teach creationism, even though 40% of the population believes in it? The answer is pretty obviously that it’s good they don’t teach it, even though this is pretty clearly taking an obviously ideological stand in a dispute where half the country is on the other side.
There's a huge literature in political philosophy addressing all these sorts of questions (i.e. post-Rawlsian political liberalism). There are easy cases and hard cases when it comes to understanding what "liberal neutrality" calls for, and when it is or isn't desirable. I don't think the existence of hard cases (e.g. drug laws) undermines the case for a default presumption in favor of liberalism that can help us to adjudicate the easy cases (gatekeepers abusing their positions to impose their personal ideologies).
For example, I think it is OBVIOUSLY inappropriate for professors to penalize students for defending different moral or political views from the professor's own. Broadly liberal norms can help us to appreciate this, and if you reject liberalism wholesale you'll have much more trouble articulating what is wrong with the dogmatic professor.
I don't think this implies "anything goes", e.g. teaching creationism. Of course there CAN be sufficiently good reasons to "take an ideological stand" in some sense, and in some circumstances. A lot depends on the purpose of the activity -- one could not do a good job of academic teaching without using academic judgment, which involves ruling out some ideologies as "not worth discussing / taking seriously". But again, that hardly implies that EVERY ideological act is unimpeachable. Details and context matter!
I am somewhat familiar with this literature, although it is an amateur familiarity, so almost certainly quite a bit less than yours, and have always found this particular brand of liberalism unpersuasive. I personally regard animal welfare laws as a reductio ad absurdum of this position, although I suspect you have a well-developed argument for why you think this doesn’t fly.
I don’t think you need ideological neutrality, or the pretence that you don’t have first-order ideological preferences while running a public institution, to criticise the professor, because their behaviour obviously creates bad incentives for truth-seeking. I believe there are certain second-order rules whose defence advances my first-order ideological preferences, like maximising well-being. I am even fine with compromises like not taking sides between two political parties, even if you think one of the parties is better on the merits, for pragmatic reasons; but that’s not ideological neutrality. It’s just acknowledging the practical benefits of not needlessly antagonising powerful political actors, and not needlessly hurting the sentiments of half the country for little reason while damaging your own credibility with these people. In fact, if I were in China and managing a public institution, I would support the Communist Party in my public-facing communication, for approximately the same reasons. It should be obvious that these reasons don’t have much to do with ideological neutrality and everything to do with practical compromise and win-win deals with other people who have different ideologies.
To be clear, I’m not suggesting that the freedom of other people doesn’t matter, but that has nothing to do with ideological neutrality. I just think that preference satisfaction and well-being are often helped by giving people freedom, which is obviously not an ideologically neutral opinion.
Obviously, I assume you have a sophisticated theory of why you think liberalism matters, but this was mostly to illustrate that I think your example with the professor doesn’t actually demonstrate that we need political liberalism in the sense you’re using it here. And in fact, I think so few people in the real world actually believe this that it would violate the principles of political liberalism to run public institutions using it, since it is a highly controversial ideology that most people do not agree with.
On second thought, I think I was probably a bit too incendiary while writing the previous comment. My apologies if I came across as rude. Unfortunately, I am on my phone right now, so I can’t edit the comment without an enormous headache. Although I agree with everything substantive it says, I should probably have written it differently.
You are retreating to a claim from precedence without dealing with the fact that you are expressing a preference as a truth and ignoring that it is not simply taste but a deeply problematic question of art, authorship, agency, and many other facets.
It feels less like a thoughtful discussion and more like a petty grievance. You have been granted the gift of a fascinating problem. Explore it.
In my exchanges with the subreddit moderators about my own work, they told me that articles written in Google Docs are banned under the AI rule as well. I asked if I would be allowed to post articles written in Google Docs, given that Google Docs makes automatic suggestions that I couldn't disable, and their response was:
"If Google Docs is now using LLMs in suggestions that would too" (the "too" referring to disqualifying me from posting). As far as I can tell, Google Docs does use LLMs for suggestions, as does Microsoft Word.
So, in the quest against slop, all work written in Google Docs and Microsoft Word is banned from r/philosophy.
Perhaps, but we are not choosing between the beige sheets and the blue. We are choosing between problematic product and non-problematic.
I would dress better if I stole money and bought Savile Row suits.
We can debate whether AI slop is derivative or appropriative, but the stench of theft is on it, and the distinction is neither free speech nor the liberal tradition.
I’m much more sympathetic to the line that private property of any sort (including intellectual property) is theft than I am to the line that contemporary AI generated images are theft.
And that is an arguable point of view for certain. My point is that the opposite is also defensible, meaning a moderator who chooses one is not doing an arbitrary act but picking a position that can be reasonably defended, even if not universally accepted.
I think they’re thinking of it like a ban on articles illustrated with images of meat, or a ban on articles illustrated with images of child sexual abuse. If you think that the use of AI is a substantive harm, then you might want to prohibit it even (perhaps especially) when it is used in a frivolous way.
I think they’re wrong about AI, but that seems like it has to be the way to interpret a ban like this.
I think it would probably be inappropriate for a general philosophy subreddit to ban articles illustrated with images of meat, on similar grounds of violating ideological neutrality. (A veganism subreddit would be a different matter.)
Images of child sexual abuse are surely illegal? But even supposing that they weren't, graphic violence is sufficiently distressing to a wide range of people that it makes sense to treat it as "prohibited by default" - not the sort of thing that people should have to be exposed to in order to participate in public life.
It's an interesting question how sensitive we should be to less mainstream sensibilities (the meat example may be a good one for these purposes). But I take it that nobody is caused *emotional distress* by seeing AI art, like seeing graphic violence. They just don't like it. That seems really importantly different to me.
I disagree with your claim that a subreddit is a "public community" and draw a distinction between r/philosophy and your blog. You have comments turned on here - if someone were to post something offensive, abusive, whatever - you could turn off comments or hide that specific comment. It is a public posting area on your blog - much like subreddits are public posting areas on a company-owned site. If you decide to go to Walmart and not wear a shirt, they will ask you to leave. While these places are open to the public, there are rules and guidelines that are enforced for whatever reasons - whether we agree or not.
Personally, I believe the mods' decision was a bit of a strict interpretation of that rule. But I'm not a mod - neither are you. The good news is if you want things to be different, you can become a mod. :)
It's an interesting philosophical question whether there are ANY limits to the norms that gatekeepers may impose in their spaces. As a possible test case: Do you think it would be OK for the mods to require every comment to end with a declaration of loyalty to Donald Trump (and ban anyone who refused to comply)?
My view is that a personal blog could require this, and they would soon just lose their non-MAGA readers, and that's no big deal. But my section on neutrality norms explains why I think it would be less appropriate for a space like r/philosophy - a privileged meeting space for any member of the public who wants to discuss philosophy on reddit - and we should instead expect them to abide by norms of liberal neutrality. I think that's what good stewardship of such a privileged space requires.
I don't care enough about Reddit to personally work towards improving their moderation policies by any means other than sharing these philosophical reflections about when liberal neutrality is plausibly called for.
I do feel the pull of the idea that because this subreddit has this name, which feels like a piece of the public commons that it has enclosed, it should run itself in a more neutral way, even on questions where they are convinced a majority of the public is just constantly committing a major wrong. It would be much less problematic to insist on this stance on a subreddit whose name was more of a specific community brand, rather than the general English term for a discipline.
That's what subreddits are - little rooms with gate keepers - and why people leave subs, create their own, or choose not to use Reddit. They are flavors of ice cream, some have nuts - so you choose a different flavor. r/philosophy is not so elite that being denied or shunned by the mods should make anyone feel like they've experienced rejection. Member numbers tend to lend to feelings of power - and we all know what comes with great power.
Here is what I propose - let them keep their rules and stringent interpretations - create your own philosophy subreddit. Every time you write a blog post, post it to your subreddit as well. I'm positive that you will not be the only member relieved to have an alternative.
My intent is not to be critical of your experience - I agree it was an unnecessarily rigid interpretation - absolutely ridiculous. However, I would not advocate shutting down or forcing any subreddit to accept what they clearly do not want as a result of an overzealous mod team. I choose to advocate for spreading awareness (as you're doing) and letting people choose whether or not they still want to be a participant in that particular forum. If the mod team insists on alienating members, they will be left moderating themselves.
PS if you want to declare MAGA loyalty there's a subreddit for that too - r/conservative. Seriously.
I'll do a Buddhist comment here. All concepts are illusory, in the sense that they are not accurate representations of reality. Also, anything connected to strengthening the self is immensely dangerous.
On that ground it can perhaps be said that from a Buddhist point of view private property is a dangerous illusion.
But so is the notion that everything is a free ride.
I think it's worth differentiating power balances.
It's generally quite ok and productive to commit piracy against large media corporations. It can foster innovation and hurts no-one.
It is not so good to appropriate the hard-won innovations of individual artists who may even be struggling financially. It hurts someone. And it may stifle aesthetic innovation.
The counter-argument is that intellectual property rights have been created specifically to incentivise innovation, and by damaging them, you are damaging social welfare as a whole by discouraging innovation, which is already insufficiently incentivised on account of how difficult it is to capture the benefits of an idea. Also, even big corporations are composed of flesh-and-blood humans, so the idea that you’re hurting nobody by hurting a big corporation is an illusion. In reality, there are real humans struggling to support themselves who suffer when a big company does badly financially.
Intellectual property laws have also changed over the decades, sometimes because private interests have gotten control of the legislative levers, and sometimes because changes of technology have made new rules more appropriate. It’s really hard to tell them apart, especially with new changes that haven’t even been enacted.
In specifics tho... anything that reduces the amount of AI generated VISUAL SLOP -- and the vast amount of AI generated visuals are slop, on the one hand unnecessary and on the other, arguably setting standards for the expected and possibly required, ie setting the baseline/benchmark -- feels to me like a Good Thing. I absolutely admit it's an ideological/value driven position but every time I "unblock pictures" on an email newsletter vaguely hoping for a photo or a graph and see slop, a tiny bit of my enthusiasm for engaging with the text dies inside me. So there's that.
I thus propose, via the time honoured method of anecdotal introspection, that while the ideological opposition to AI overall would be arguably inappropriate for a public square like a big subreddit, a SPECIFIC opposition to visual AI slop is well justified, and even in certain senses more justified than opposing all LLM generated text (tho slippery slopes slip). I think that the frilly border comparison is off because in the vast majority of cases what they're doing is banning frilly borders and in an environment overloaded with frilly borders that feels like a good thing.
To be clear, I think it would also be unreasonable for a teacher to refuse to accept student work until the student *removed* their frilly borders. Gatekeepers of intellectual spaces should often just disregard background aesthetics, and allow individual content creators to express themselves how they like!
(If enough of their audience expresses distaste, creators may voluntarily choose to change their approach. But it seems awfully illiberal for gatekeepers of a shared space to try to force them to do so. It's not like there's any such thing as an environment's being *objectively* "overloaded with frilly borders" -- there are just different tastes.)
Interesting! I would have thought that most people would find it reasonable for an instructor to insist on basic formatting of a paper! I think I (and most philosophy professors) am a bit unusual in not caring what bibliography format students use, but if a student turned in papers written in green comic sans on a purple background, I probably would insist on them reformatting it. And when you submit a paper to a journal, you expect them to eventually insist on their own very specific formatting requirements.
This is the beauty and the pain of Reddit. You get instant access to large audiences, because you don't have to build a personal large following.
The cost is that posts get removed and you get banned *way* more easily than other platforms.
If you want to post on the platform, you just have to pay the price.
Which, admittedly, most people don't want to pay. It's way more painful than other platforms due to this + the downvote button + anonymity leading to far worse behavior than on most platforms.
While my attempt to share this on r/philosophy was blocked by a Reddit filter of some sort, someone else managed to post it, and has sparked quite a discussion!
https://www.reddit.com/r/philosophy/comments/1mmr13z/antiai_ideology_enforced_at_rphilosophy/
Disturbing to see how unreasonable and closed-minded the members of such a prominent philosophy community are…
Oh, man. This guy definitely does not Reddit.
All good. You're probably better off, my dude.
If you think this is bad, you should see the r/badphilosophy subreddit. I find it genuinely disturbing that some of the mods in that community have academic jobs, and presumably the ability to make hiring and tenure decisions on committees.
Wow! Substack is the only online community I've participated in primarily to discuss ideas (never was on Twitter, and mostly use Facebook for personal stuff), and getting a glimpse of reddit mostly makes me grateful for what it's like over here. I'm sure reddit is big, and I've heard there's lots of good stuff. Maybe r/philosophy is better on other topics...
I stopped using reddit about a year and a half ago, the same time I stopped using facebook. Before then I was moderately active on /r/philosophy and generally found it fine. But it does sound like social media generally has been on a downward spiral the past couple years.
Who are the r/philosophy mods?
I'm sympathetic to what you say here about public vs private online spaces, but I worry I'm biased by my views of the underlying first order question about AI.
I'm particularly struck by the recommendations I see here on this thread and elsewhere to the effect that people who would otherwise use AI images should instead find public domain images, or take their own pictures. When you consider the range of highly specific images people use AI to create, this is obviously quite often a non starter. You're asking people to spend huge amounts of time searching for images that may not exist, or creating them themselves, when the image typically isn't all that important. The actual alternative to using AI images is almost always not using images at all. Maybe that's fine; images typically aren't all that important. But people should be honest about the tradeoff.
To be fair, I think the main point of pointing out that you can just go get a photo is that if you’re really hell-bent on having an image, there’s nothing stopping you from getting one. It’s just more inconvenient. So it’s more permissive than simply banning all images and should not be treated as comparable to that.
It’s much closer to a ban than you think. A lot of people draw distinctions based on what is possible vs what is impossible, but I think it’s much more useful to draw distinctions based on what is likely vs what is unlikely. For instance, a lot of governments are required to hold public outreach meetings in person, because everyone theoretically *can* attend, while some people are excluded from online meetings due to lack of connectivity. But making it theoretically possible for everyone yet extremely unlikely for all but a few is much worse for actual outreach than making it likely for a good number.
I agree in some situations, but it depends on what’s important in the given context. When it comes to freedom, your approach of focusing on what’s likely appears to ignore the distinction between things that are unlikely because they are very difficult and things that are unlikely because you don’t care about having them very much and so don’t put in much effort to get them. Of course, if for some reason you are specifically hung up on having a photo precisely customised to the situation, it’s pretty difficult to get one, but most people don’t mind very much if all they have is a photo in the rough ballpark of what they want.
The fact that there is a huge difference in likelihood between people showing up at an in-person public outreach session vs an online one shows that people *do* care, but the thing that is theoretically possible is very aversive for some reason. Similarly, the fact that there is a huge difference in probability between people using images when they are stock photos vs custom made AI images indicates that they *do* care about having the images but that the stock photographs have some serious defect that makes them much less usable (namely, that they don’t illustrate the point that you wanted an image to illustrate).
You are ignoring the possibility that people like using AI images when they can get them for basically zero effort, but don’t value them much, and so are unwilling to put in even small amounts of time to find a photo in the ballpark of what they wish. For example, in economics, increases in the prices of certain goods can lead to rather large reductions in demand, depending on the elasticity of demand for the good in question. So the number of people who stop doing the thing in response to barriers doesn’t tell you that they care a lot. Depending on the context, it might even be evidence that they care very little, because if they cared a lot, they’d be willing to put in the work to circumvent whatever restriction you put in place. Indeed, I’ve heard that lots of research indicates that a surprisingly large number of people will stop in the face of trivial barriers, although I have mostly heard this through hearsay rather than reading the research myself.
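The elasticity point can be made concrete with a small sketch. This uses the standard textbook constant-elasticity demand curve (an illustrative assumption of mine, not anything from this thread): the same price increase produces wildly different drops in demand depending on how much people care about the good.

```python
def demand_after_price_change(q0: float, price_ratio: float, elasticity: float) -> float:
    """Constant-elasticity demand: Q = Q0 * (P/P0) ** elasticity.

    q0: initial quantity demanded
    price_ratio: new price divided by old price (1.5 = a 50% increase)
    elasticity: price elasticity of demand (negative for ordinary goods)
    """
    return q0 * price_ratio ** elasticity

# Start from 100 units demanded, then raise the "price" by 50% (ratio 1.5).
elastic = demand_after_price_change(100.0, 1.5, -2.0)    # elastic good: ~44 units remain
inelastic = demand_after_price_change(100.0, 1.5, -0.2)  # inelastic good: ~92 units remain
```

So whether raising the effective price of images (in time and hassle) wipes out most image use, or barely dents it, depends entirely on how much people value them, which is the commenter's point: a large drop-off in response to a small barrier is consistent with people valuing the thing only slightly.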
Yeah, it's notable that in my ~18 years of blogging prior to AI I basically just never used illustrations in my posts. Reverting to that practice wouldn't be the end of the world, but would involve losing a capability that I (somewhat) value, and to lose (somewhat) valuable options for no good reason seems... bad.
If you want a custom image, you can certainly spend hundreds or thousands of dollars and days to weeks going back and forth with a graphic designer, or learn to use image editing tools yourself.
Whether increasing the time and money cost of getting a custom image many orders of magnitude amounts to the same thing as a ban or not--I'd say not; a 10000% tax isn't a ban, it's a 10000% tax--one shouldn't make out that it's easy to substitute non AI art for AI art.
To be fair, I suppose the more likely form of substitution would involve relaxing the specificity of the image you're looking for. Basically giving up on a custom image, and looking for something generally in the ballpark of your topic.
Yeah, I think that's right, but it is a cost and it's annoying when people don't acknowledge that.
Yes, there is a reason I myself never used the phrase "custom image". You can use graphics just fine; they just won’t be custom images. You might find it annoying to have to use a photo that is only in the ballpark of what you want, but describing this as a 10000% tax strikes me as rhetorically excessive, for approximately the same reason you found the original calls for just using a photo annoying.
There's a reason I *did* use the phrase custom image; because that's what's de facto banned (or subject to an astronomical tax). Generic clip art isn't the same thing as a custom image. Talking about how graphics aren't banned is sleight of hand that obscures the actual trade off. Better to say that there's no great loss in not being able to customize graphics.
The mods are listed here: https://www.reddit.com/mod/philosophy/moderators/
(Anonymous Redditors afaict, though I assume they've been involved with the community for a long time.)
Just curious. It's not a community I know at all.
At least you only got banned for 3 days! I got permanently banned for linking this post (https://linch.substack.com/p/why-reality-has-a-well-known-math), despite making a substantive philosophical point, having an article with a reasonable number of upvotes, and prior interested engagement from other related subs.
The AI image in question (a shrimp physicist trying to do calculations at the bottom of a waterfall) also clearly did not have anything remotely like it in the public domain. And I clearly labeled AI use rather than hide it.
This is in keeping with reddit's long-standing policies.
They were opposed to bronze replacing stone, then iron replacing bronze. Nothing from the printing press was allowed, they required hand written posts. They prohibit the burning of fuel in internal combustion engines and the generation and distribution of electricity. And they will certainly oppose the Internet should they learn about its existence.
I think if you mistakenly believe that AI image-use is IP theft, it makes sense to exclude posts with AI images. Like philosophy conferences that only serve vegan food, I doubt there’s any objection to the practice that isn’t just an objection to the object-level critique, i.e. the ideology that underlies it.
I think it's more complicated! I agree there's no independent objection to conferences serving (i.e. providing) only vegan food. But I think it *would* be objectionably illiberal for the conference to *police* participants' diets, e.g. by checking bags at the door and removing any home-made ham sandwiches that participants have brought along for themselves.
So I think a lot depends on which actions properly fall within the sphere of influence of the host (e.g. what meals THEY will provide for participants) versus which are private choices for participating individuals to make for themselves (i.e. what THEY will bring/eat).
How is the belief "mistaken"? image-gen-ai training datasets have been scraped from the portfolios of artists with neither their knowledge nor consent. That sounds an awful lot like theft to me.
Did you miss the link in my post? See: https://www.goodthoughts.blog/p/theres-no-moral-objection-to-ai-art
I hadn't seen that post, no. And frankly I find it quite slanderous to equate people who believe "rich tech companies should stop trying to destroy the ability of non-rich people to make a living doing art" with people who believe "taxation is theft". The reality of living in a society centered around private property rights is that people need a certain amount of control or ownership over their ability to work so that they don't, you know, die. What do you think the end-game is for all this gen-ai training? Do you think this leads to a world where people are more or less free? Because from my position as somebody who works in tech and has familiarity with how these tools work (not to mention knowledge of what kinds of data sets are being aggregated by so many giant companies), I say all signs point to the latter. And many people employed by these companies understand on some level that what they are doing is, in fact, tantamount to theft. You'll have a hard time getting them to publicly cop to this though since, after all, 'It is difficult to get a man to understand something, when his salary depends on his not understanding it.'
At any rate, your willingness to use a term like "neo Luddite" to denigrate a position is pretty telling.
You're running together two issues:
(1) A deontological accusation of "theft", which applies regardless of the consequences; and
(2) An empirical prognosis that you expect gen-AI to prove detrimental to humanity on the whole.
I think #2 is a super-important question, and I respect people who worry that it may turn out badly. I think you're wildly overconfident if you think you can "almost certainly" predict the effects of this transformative technology. As mentioned in the OP, I'm tentatively optimistic about it myself. But this is really the important question that people should be discussing.
Type-#1 deontological moralizing about "theft" is just a sideshow. If I'm right that the consequences of the technology are overall empowering of human capacities, then we should not regard it as "theft". Property rights and their limits -- ESPECIALLY when we're talking about intellectual property -- should be determined with reference to human interests. The failure to acknowledge this is PRECISELY how the "AI is theft" crowd is similar to the right-wing propertarian "taxation is theft" crowd. Both extremes make the mistake of thinking that property rights are a kind of "first principle", rather than a pragmatic social construction that should be designed to promote human interests.
P.S. I think it's "pretty telling" that instead of engaging with my arguments on their merits, you're hunting for signal phrases that "tell" you something ad hominem about the author, and how you might seek to cluster them together with others you regard as "morally dubious".
Any technology that can be used for repression and control will be used for repression and control. LLMs right now are operating with a loss-leader model: The goal is to elicit dependency on these tools, and to later recoup today's losses by massively raising prices once these tools are too deeply embedded in customer processes to be excised without jeopardizing entire tech stacks and delivery pipelines. Make no mistake, the public availability of tools like ChatGPT or Midjourney is a matter of marketing, not altruism. Once the "make a big profit" lever has been pulled, only those who can pay through the nose will be able to benefit, and you'll have one more technology category that helps to further cement massive class inequality.
It's strange to me that you make these comparisons to right-wingers while turning a blind eye to the glaring problem of a right-wing economic order that seeks only to impose austerity and extract value. In the context of the current mode of production, I fail to see how scraping artist's portfolios and driving them out of work is in any way congruent with any notion of "human interests". Seems quite the opposite to me.
I'm not qualified to argue with you about deontology, but I understand the workings of our present economic order and of this industry in particular. We need a lot more skepticism from academics if we're ever going to see anything approaching the utopian vision you're laying out here, imo. I don't like it when academics carry water for big business.
And fair enough on that last point, I was more hostile than I should have been.
To be fair, everything you say is true of the printing press, which has absolutely been a tool of repression and control, but it’s not so bad that I don’t want to avail myself of it.
I agree with you on the substance of whether it would be desirable for this policy to exist, but I think the moderators have the right to regulate the community in this way, and their behaviour is not unreasonable. Firstly, any subreddit is a community of online individuals, even if the general public may occasionally use it. It is reasonable for the people who engage with the content more to have more influence, and moderators generally tend to be people whom the community trusts and agrees with, so unless there is a specific reason to expect that the community disagrees with the moderators here, I think this should in fact count as them acting as agents for a private community. Secondly, from their own point of view, they are practising good stewardship by incentivising less use of a technology they consider harmful, the same way you might encourage people to boycott a company that uses slave labour or factory farming. I think it is fundamentally not possible to have a system where people only go by their best judgement in matters they consider uncontroversial, since what is controversial is itself a topic of dispute; it’s simpler to just let communities do as they wish and go by their best judgement in ordinary circumstances. I personally think it’s the online equivalent of not drinking wine at a club with an explicit no-alcohol policy, the club’s management having decided that alcohol is bad for personal health and society. Even if you trust yourself to drink responsibly and think it’s unreasonable for such a rule to exist in these circumstances, you should still think that, as a social matter, it’s good and right for the club to be able to regulate itself this way. So I would agree with trying to persuade the moderators to change the policy, but I think your criticism goes beyond that and suggests that they acted badly here, which I think is not warranted.
To give another analogy: when activists get a government to ban factory-farm products that do not meet certain standards, they are imposing their own private opinion on other members of the community, some of whom likely disagree with their opinions on factory farming and think that they are being as unreasonable as you think the moderators are being here. Still, most people would agree that there is nothing fundamentally unreasonable about the activism. And unlike governments, the moderators are constrained by exit rights to a much greater extent, because if they got sufficiently out of hand, realistically the users and the general public would exit and start up a different community to serve this function. That is not too much of a constraint, but it is certainly much more constraining than anything a government faces. So if it’s okay for people in a government to impose their own private opinions, when they think they are reasonable, on the whole society, it has to be okay for moderators to impose their private will on the community, especially if they think that the majority of the community agrees. Of course, realistically it’s not the majority that counts, only the preference-weighted opinion of the community: members of the general public who don’t care much about philosophy count for near zero, whereas users who spend a lot of time there and have strong preferences about it count for a lot more.
It's even worse than what's described here. EDIT: Confirmed that even work written in Google Docs and Microsoft Word is banned due to those applications' integration of LLMs, see below.
The rule is specifically that no AI-generated or AI-assisted content can appear on the linked page. I spoke with the mods about being approved for self-promotion on there, and I asked whether the fact that I was at the time using an AI-generated profile picture prohibited me from posting, to which they said they would have to discuss and get back to me (they never did). They were so deep in this position that they weren't sure whether the small byline picture, which is hardly even discernible, would somehow detract from the user experience.
To make it even stranger, I asked them if I would also need to proactively disable comments just in case someone else with an AI profile picture were to comment, which would have the same end effect as my own profile picture. They were able to immediately tell me that was not a problem and comments could stay on.
Even stranger than that, I asked whether it would be okay to post an article about AI images that used AI images as demonstrations. That was also okay by them.
I also asked whether it was okay to use Google Docs or Microsoft Word to draft articles, as they now include AI features that pop up automatically, and which I didn't think I could disable, meaning that writing in those applications would be "AI-assisted", even if only unintentionally. They said that as long as I didn't use the AI features it's ok, but they clearly misunderstood, as the point was that you can't *not* use them. So, if you're being really careful about the rule, Google Docs and Microsoft Word are also prohibited. And obviously so is Google, as AI summaries instantly appear and could potentially influence your thought process while writing your piece.
EDIT: I looked back at the screenshots I took of the conversation with the mods, and they said "If Google Docs is now using LLMs in suggestions that would too" (the "too" referring to disqualifying me from posting).
Eventually, I ended up changing my profile picture to what it is now, which is a collage of some real images and AI images. After not having heard back from them, I asked if this was now acceptable, and they never answered me about that either.
In the end, I got banned from Reddit anyway when an automoderator incorrectly flagged one of my comments that was about a sensitive topic (moral issues with porn). The moderators of that subreddit, one of whom is a Buddhist monk, appealed to Reddit admins on my behalf, but the admins got back to me and said that I violated Reddit rules and my account is permanently banned (post about that here: https://ottotherenunciant.substack.com/p/i-was-banished-from-reddit-by-a-robot).
I honestly didn't realize how bad Reddit is until recently.
This post contains an honest, logical argument, and that’s why it was mocked on reddit. You should have written a post about the poor situation of struggling writers, desperate for clicks in order to make ends meet, needing to use AI images in order to attract readers and fight against the capitalist machine.
It would have gotten 1000 upvotes and everyone would be saying that banning AI images is tyranny.
As somebody working on an AI pause until we know how to build it safely - I agree with you.
Using AI to make accompanying images seems harmless. In fact, it seems very net positive.
I'm not crazy about the /r/philosophy policy, but I saw a year or so ago that this was the way the wind was blowing (and I'm generally less positive about AI than you are). Fortunately, I found an alternative: my university has a university-wide contract with Adobe, which includes Adobe Stock. That leads to a win-win: I've got access to a very large array of quality photos and other art that I can use to illustrate my posts, and their creators get paid for it (by Adobe) without me having to shell out anything. Not everybody has access to that, which is why I don't like the /r/philosophy policy, but for those who do, I think it is a better choice than AI art: where possible, it is good to support human artistic creators.
It's an interesting question whether your last sentence is true. Is taking stock photos such a great use of human labor? I guess it depends what else they would be doing. If there was another use of their time and efforts that did more to help others (e.g. working as a childcare provider), why would I want to prop up people doing less-helpful work, just because it's coded as "artistic"? (Anyone is, of course, free to pursue photography as a hobby, even if they can't make a living from it.)
I'd rather aid go to kids at risk of malaria than artists at risk of needing to find another job. Of course, there's no immediate causal path from using AI over stock photos to getting more money donated to effective charities. But if more people shared my attitudes, companies/universities might save money from dropping Adobe in favor of cheaper AI graphics, the savings could allow them to improve salaries, and we could donate our extra salaries to life-saving aid.
Most of us are sick of slop. Go outside and take a picture. Cameras are amazing. Find a photo in the public domain.
You're free to act on your aesthetic preferences. But it's obnoxious to presume that others must share them as a condition of even participating in a public space.
Every position is biased. Even yours.
Your claim that “it is obnoxious to presume that others must share them” is equally indicting of your own post.
Further, many, if not all, aesthetic choices are political choices, and all political choices are ethical ones, so there is no aesthetic high ground upon which a man can make a non-ethical stand.
There's a well-developed and highly principled liberal tradition that is not undermined merely by noting that liberal norms are intolerant of illiberalism. The point is to create a culture and range of spaces (of varying norms) in which as many people as possible are free to act upon their own preferences without creating conflict or imposing harms on others.
"Everyone must conform to my aesthetic preferences as a precondition to participation in the public sphere" is a blatantly illiberal position. I'm not presuming that anyone has to share my first-order preferences. I do assert that they ought to share my higher-order commitment to liberalism, i.e. respecting others' rights and freedoms to the greatest extent compatible with respecting like rights and freedoms for all others.
I think demanding that everyone conform to respecting the rights and preferences of everybody else to the maximum extent possible is itself a very first-order position. In practice, most actual public institutions have never even pretended to adhere to such a demanding standard, so unless you water it down, you're effectively demanding that public institutions conform to a highly controversial philosophical commitment of yours. For example, are governments and universities acting contrary to liberalism when they subsidise art they consider good, or refuse to fund medical treatments they consider of controversial ethical standing? Is it unethical on your view for me to use government to ban public nudity because it does not conform to my aesthetics? Does it violate liberalism for the state to actively promote marriage as a social institution?

You might point to the number of people who hold a given preference, but at that point you are no longer doing liberalism; you're doing utilitarianism the old-fashioned way. And utilitarianism is itself highly controversial. Remember: if your public spaces conform to the views of the majority, you aren't ideologically neutral, you are just conforming to the majority ideology, which is fine, but certainly not neutral. Liberalism in your sense is in any case not the majority position, since most people think it's fine to impose their first-order views on others. For example, the reason most people object to things like banning alcohol, marijuana, or public nudity has absolutely nothing to do with ideological neutrality and everything to do with believing that these policies are bad on the merits. Even most people in the public sphere who talk of ideological neutrality have the sense not to follow this principle consistently.
And respecting the rights and freedoms of others to the maximum extent possible is in any case, unless you articulate the view very weirdly, not ideological neutrality or even the majority ideology, but straight-up consequentialism, assuming that respecting them "to the maximum extent possible" means maximising. Even if you don't articulate the view this way, it's hard to see how such a doctrine isn't a full-fledged ethical philosophy that will effectively dictate all first-order views. Under your view, for example, immigration restrictions seem very hard to justify, since they greatly restrict the rights and freedoms of others for minor benefits to natives. Whatever someone might call this position, it isn't ideological neutrality in any sense.
And again, ideological neutrality isn't a good thing. Is it bad that most universities and schools refuse to teach creationism, even though 40% of the population believes in it? Pretty obviously, it's good that they don't teach it, even though this is clearly taking an ideological stand in a dispute where half the country is on the other side.
There's a huge literature in political philosophy addressing all these sorts of questions (i.e. post-Rawlsian political liberalism). There are easy cases and hard cases when it comes to understanding what "liberal neutrality" calls for, and when it is or isn't desirable. I don't think the existence of hard cases (e.g. drug laws) undermines the case for a default presumption in favor of liberalism that can help us to adjudicate the easy cases (gatekeepers abusing their positions to impose their personal ideologies).
For example, I think it is OBVIOUSLY inappropriate for professors to penalize students for defending different moral or political views from the professor's own. Broadly liberal norms can help us to appreciate this, and if you reject liberalism wholesale you'll have much more trouble articulating what is wrong with the dogmatic professor.
I don't think this implies "anything goes", e.g. teaching creationism. Of course there CAN be sufficiently good reasons to "take an ideological stand" in some sense, and in some circumstances. A lot depends on the purpose of the activity -- one could not do a good job of academic teaching without using academic judgment, which involves ruling out some ideologies as "not worth discussing / taking seriously". But again, that hardly implies that EVERY ideological act is unimpeachable. Details and context matter!
I am somewhat familiar with this literature, although mine is an amateur familiarity, so almost certainly quite a bit less than yours, and I have always found this particular brand of liberalism unpersuasive. I personally regard animal welfare laws as a reductio ad absurdum of this position, although I suspect you have a well-developed argument for why you think this doesn't fly.
I don’t think you need ideological neutrality, or the pretence that you have no first-order ideological preferences while running a public institution, to criticise the professor: their behaviour obviously creates bad incentives for truth-seeking. I believe there are certain second-order rules whose defence advances my first-order ideological preferences, like maximising well-being. I am even fine with compromises like not taking sides between two political parties, even if you think one of the parties is better on the merits, for pragmatic reasons. But that’s not ideological neutrality; it’s just acknowledging the practical benefits of not needlessly antagonising powerful political actors, not needlessly hurting the sentiments of half the country for little reason, and not damaging your own credibility with those people. In fact, if I were in China managing a public institution, I would support the Communist party in my public-facing communication, for approximately the same reasons. We should be clear that these reasons don’t have much to do with ideological neutrality and everything to do with practical compromise and win-win deals with people who have different ideologies.
To be clear, I’m not suggesting that the freedom of other people doesn’t matter, but that has nothing to do with ideological neutrality. I just think that preference satisfaction and well-being are often helped by giving people freedom, which is obviously not an ideologically neutral opinion.
Obviously, I assume you have a sophisticated theory of why liberalism matters, but this was mostly to illustrate that I don’t think your professor example actually demonstrates that we need political liberalism in the sense you’re using it here. In fact, I think so few people in the real world actually believe this that running public institutions on these principles would itself violate political liberalism, since it is a highly controversial ideology that most people do not agree with.
On second thought, I think I was probably a bit too incendiary while writing the previous comment. My apologies if I came across as rude. Unfortunately, I am on my phone right now, so I can’t edit the comment without an enormous headache. Although I agree with everything substantive it says, I should probably have written it differently.
No worries! I wasn't bothered at all, but appreciate the consideration.
You are retreating to a claim of precedent without dealing with the fact that you are expressing a preference as a truth, and ignoring that this is not simply a matter of taste but a deeply problematic question of art, authorship, agency, and many other facets.
It feels less like a thoughtful discussion and more like a petty grievance. You have been granted the gift of a fascinating problem. Explore it.
Well said sirrah
In my exchanges with the subreddit moderators about my own work, they told me that articles written in Google Docs are banned under the AI rule as well. I asked if I would be allowed to post articles written in Google Docs, given that Google Docs makes automatic suggestions that I couldn't disable, and their response was:
"If Google Docs is now using LLMs in suggestions that would too" (the "too" referring to disqualifying me from posting). As far as I can tell, Google Docs does use LLMs for suggestions, as does Microsoft Word.
So, in the quest against slop, all work written in Google Docs and Microsoft Word is banned from r/philosophy.
Slop is annoying, but it’s better than what you’re going to get if you try to illustrate most philosophical posts with cameras and stock photography.
Perhaps, but we are not choosing between the beige sheets and the blue. We are choosing between a problematic product and a non-problematic one.
I would dress better if I stole money and bought Savile Row suits.
We can debate whether AI slop is derivative or appropriative, but the stench of theft is on it, and that distinction is a matter of neither free speech nor the liberal tradition.
I’m much more sympathetic to the line that private property of any sort (including intellectual property) is theft than I am to the line that contemporary AI generated images are theft.
And that is an arguable point of view for certain. My point is that the opposite is also defensible, meaning a moderator who chooses one is not doing an arbitrary act but picking a position that can be reasonably defended, even if not universally accepted.
I think they’re thinking of it like a ban on articles illustrated with images of meat, or a ban on articles illustrated with images of child sexual abuse. If you think that the use of AI is a substantive harm, then you might want to prohibit it even (perhaps especially) when it is used in a frivolous way.
I think they’re wrong about AI, but that seems like it has to be the way to interpret a ban like this.
I think it would probably be inappropriate for a general philosophy subreddit to ban articles illustrated with images of meat, on similar grounds of violating ideological neutrality. (A veganism subreddit would be a different matter.)
Images of child sexual abuse are surely illegal? But even supposing that they weren't, graphic violence is sufficiently distressing to a wide range of people that it makes sense to treat it as "prohibited by default" - not the sort of thing that people should have to be exposed to in order to participate in public life.
It's an interesting question how sensitive we should be to less mainstream sensibilities (the meat example may be a good one for these purposes). But I take it that nobody is caused *emotional distress* by seeing AI art, like seeing graphic violence. They just don't like it. That seems really importantly different to me.
I disagree with your claim that a subreddit is a "public community" and draw a distinction between r/philosophy and your blog. You have comments turned on here - if someone were to post something offensive, abusive, whatever - you could turn off comments or hide that specific comment. It is a public posting area on your blog - much like subreddits are public posting areas on a company-owned site. If you decide to go to Walmart and not wear a shirt, they will ask you to leave. While these places are open to the public, there are rules and guidelines that are enforced for whatever reasons - whether we agree or not.
Personally, I believe the mods' decision was a bit of a strict interpretation of that rule. But I'm not a mod - neither are you. The good news is if you want things to be different, you can become a mod. :)
It's an interesting philosophical question whether there are ANY limits to the norms that gatekeepers may impose in their spaces. As a possible test case: Do you think it would be OK for the mods to require every comment to end with a declaration of loyalty to Donald Trump (and ban anyone who refused to comply)?
My view is that a personal blog could require this, and they would soon just lose their non-MAGA readers, and that's no big deal. But my section on neutrality norms explains why I think it would be less appropriate for a space like r/philosophy - a privileged meeting space for any member of the public who wants to discuss philosophy on reddit - and we should instead expect them to abide by norms of liberal neutrality. I think that's what good stewardship of such a privileged space requires.
I don't care enough about Reddit to personally work towards improving their moderation policies by any means other than sharing these philosophical reflections about when liberal neutrality is plausibly called for.
I do feel the pull of the idea that because this subreddit has this name, which feels like a piece of the public commons that it has enclosed, it should run itself in a more neutral way, even on questions where they are convinced a majority of the public is just constantly committing a major wrong. It would be much less problematic to insist on this stance on a subreddit whose name was more of a specific community brand, rather than the general English term for a discipline.
Then again, /r/trees did become what it is.
That's what subreddits are - little rooms with gate keepers - and why people leave subs, create their own, or choose not to use Reddit. They are flavors of ice cream, some have nuts - so you choose a different flavor. r/philosophy is not so elite that being denied or shunned by the mods should make anyone feel like they've experienced rejection. Member numbers tend to lend to feelings of power - and we all know what comes with great power.
Here is what I propose - let them keep their rules and stringent interpretations - create your own philosophy subreddit. Every time you write a blog post, post it to your subreddit as well. I'm positive that you will not be the only member relieved to have an alternative.
My intent is not to be critical of your experience - I agree it was an unnecessarily rigid interpretation - absolutely ridiculous. However, I would not advocate shutting down or forcing any subreddit to accept what it clearly does not want as a result of an overzealous mod team. I choose to advocate for spreading awareness (as you're doing) and letting people choose whether or not they still want to participate in that particular forum. If the mod team insists on alienating members, they will be left moderating themselves.
PS if you want to declare MAGA loyalty there's a subreddit for that too - r/conservative. Seriously.
I'll do a Buddhist comment here. All concepts are illusory, in the sense that they are not accurate representations of reality. Also, anything connected to strengthening the self is immensely dangerous.
On that ground it can perhaps be said that from a Buddhist point of view private property is a dangerous illusion.
But so is the notion that everything is a free ride.
I think it's worth differentiating power balances.
It's generally quite ok and productive to commit piracy against large media corporations. It can foster innovation and hurts no-one.
It is not so good to appropriate the hard-won innovations of individual artists who may even be struggling financially. It hurts someone. And it may stifle aesthetic innovation.
The counterargument is that intellectual property rights were created specifically to incentivise innovation, and that by damaging them you damage social welfare as a whole by discouraging innovation, which is already insufficiently incentivised on account of how difficult it is to capture the benefits of an idea. Also, even big corporations are composed of flesh-and-blood humans, so the idea that you're hurting nobody by hurting a big corporation is an illusion. In reality, there are real humans struggling to support themselves who suffer when a big company does badly financially.
Intellectual property laws have also changed over the decades, sometimes because private interests have gotten control of the legislative levers, and sometimes because changes of technology have made new rules more appropriate. It’s really hard to tell them apart, especially with new changes that haven’t even been enacted.
They generally make millions of dollars in profit from Hollywood blockbusters.
Although I would agree that it could hurt real people sometimes.
It should be evaluated on a case by case basis.
Agreed in principle.
In specifics, tho... anything that reduces the amount of AI-generated VISUAL SLOP -- and the vast majority of AI-generated visuals are slop: on the one hand unnecessary, and on the other arguably setting the standard for what is expected and possibly required, i.e. setting the baseline/benchmark -- feels to me like a Good Thing. I absolutely admit it's an ideological/value-driven position, but every time I "unblock pictures" on an email newsletter, vaguely hoping for a photo or a graph, and see slop, a tiny bit of my enthusiasm for engaging with the text dies inside me. So there's that.
I thus propose, via the time-honoured method of anecdotal introspection, that while ideological opposition to AI overall would arguably be inappropriate for a public square like a big subreddit, a SPECIFIC opposition to visual AI slop is well justified, and even in certain senses more justified than opposing all LLM-generated text (tho slippery slopes slip). I think the frilly-border comparison is off because, in the vast majority of cases, what they're doing is banning frilly borders in an environment already overloaded with frilly borders, and that feels like a good thing.
PS. And I'm mostly hopeful about AI overall!!
To be clear, I think it would also be unreasonable for a teacher to refuse to accept student work until the student *removed* their frilly borders. Gatekeepers of intellectual spaces should often just disregard background aesthetics, and allow individual content creators to express themselves how they like!
(If enough of their audience expresses distaste, creators may voluntarily choose to change their approach. But it seems awfully illiberal for gatekeepers of a shared space to try to force them to do so. It's not like there's any such thing as an environment's being *objectively* "overloaded with frilly borders" -- there are just different tastes.)
I entirely agree re: gatekeepers of quasi public spaces. It was mostly a side bar on my perceived dangers of visual slop ;)
Interesting! I would have thought that most people would find it reasonable for an instructor to insist on basic formatting of a paper! I think I (and most philosophy professors) am a bit unusual in not caring what bibliography format students use, but if a student turned in papers written in green comic sans on a purple background, I probably would insist on them reformatting it. And when you submit a paper to a journal, you expect them to eventually insist on their own very specific formatting requirements.
This is the beauty and the pain of Reddit. You get instant access to large audiences, because you don't have to build a personal large following.
The cost is that posts get removed and you get banned *way* more easily than other platforms.
If you want to post on the platform, you just have to pay the price.
Which, admittedly, most people don't want to pay. It's way more painful than other platforms due to this, plus the downvote button and the anonymity leading to far worse behavior than on most platforms.