NaNoWriMo: Arguing against using AI for writing is "classist", "ableist"
September 2, 2024 6:37 AM   Subscribe

Pivot-to-AI reports: In an official position statement on AI [archive], the NaNoWriMo organization declares: "We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege." This year’s new sponsor is ProWritingAid.com, which has now added unspecified “AI” functionality. [archive]
posted by AlSweigart (87 comments total) 19 users marked this as a favorite
 
NoMoWriMo I guess.
posted by snuffleupagus at 6:39 AM on September 2 [78 favorites]


I have to insist you read the article and NaNoWriMo's original announcement before commenting. There's much to criticize here, but it's easy to miss the mark. Charitably, NaNoWriMo seems to be endorsing using AI editing/rephrasing tools like Grammarly (or their sponsor, ProWritingAid.com) and not "let the AI write your novel for you."

However, a look at their sponsor is troubling: they don't state what their "AI" is or the provenance of its training data. Given that the testimonials on their website are credited to "Writer", "Game Developer", and "Author" rather than to named people, it's fair to question the business's legitimacy.

The weaponization of social justice terms like "classism" and "ableism" is pretty gross. It's the same thing fascists do with "religious freedom" and "freedom of speech" as cover for their unconstitutional and illegal activities.
posted by AlSweigart at 6:40 AM on September 2 [67 favorites]


Their statement suggests that AI is a force for good because it removes barriers to the publishing world. You don't need to hire a proofreader, or indeed pay for an education, when AI can smooth over the rough edges of your story, right?

I haven't looked at NaNoWriMo in a while so I'm surprised that 1) it's sponsored and 2) it has anything to do with the publishing world. I thought the original spirit of NaNoWriMo had more to do with the love of writing, of finally pushing through and finishing that crappy first draft you always had in you, than with preparing a polished submission for a publishing house.

So I'm sad to see another cool thing enshittified, and in such a stupid way.
posted by swift at 6:56 AM on September 2 [43 favorites]


'Writing tools' were not considered 'AI' until venture capital made those noises.

More old-fashioned "selling out", nonprofit style.

I love "undertones" as well. No specific complaints about the large-scale theft of your work can be addressed, because there are no specific examples.

Just quit, like, harshing the vibe of our big donors, man. They are rich and must be catered to.
posted by eustatic at 6:57 AM on September 2 [10 favorites]


General Access Issues. All of these considerations exist within a larger system in which writers don't always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, which inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may not have to incur.

This is entirely true and has absolutely nothing to do with AI. It's one of the best non sequiturs I've seen in a while. AI will not get you access. Quite the opposite: being suspected of using AI will cause you to lose opportunities, not gain them. Unless their sponsor's AI somehow constructs social media campaigns to popularize indie authors.
posted by Hactar at 6:59 AM on September 2 [30 favorites]


I have to insist you read the article and NaNoWriMo's original announcement before commenting. There's much to criticize here, but it's easy to miss the mark. Charitably, NaNoWriMo seems to be endorsing using AI editing/rephrasing tools like Grammarly (or their sponsor, ProWritingAid.com) and not "let the AI write your novel for you."

It isn't unfair to ask an organization based on writing to be clear in its statement and not rely on a "charitable" interpretation.

Criticism of AI -- including, say, its ability to use any and all NaNoWriMo posted content with no attribution to the authors, let alone compensation, is valid and met with thunderous silence by the venture capitalists behind most AI systems, who once again seek to profit from others' labor. Defending that system smacks of classism, too.
posted by Gelatin at 7:00 AM on September 2 [26 favorites]


Has anyone categorically condemned AI? If so, were they taken seriously or have they already been dismissed as cranks, rendering this statement uninteresting?

Is it a coincidence that Ted Chiang published another slam dunk critical take on AI the same day this statement was released, arguing that it doesn't actually enable the creation of new art? If true, that would also basically mute any effect AI would have on mending class and ability gaps on art creation...

I wasn't familiar with this organization before the statement was released and I honestly think I'm only upset at them because I've already read the Sudan FPP and I just have nothing to say about that.
posted by Hume at 7:04 AM on September 2 [8 favorites]


Has anyone categorically condemned AI?

You can't condemn a shifting category. We're getting to the point where using a linear regression will be labeled "AI" to obfuscate the issue people have with Microsoft, Google, et al conducting mass plagiarism by using copyrighted works as training data.

I believe Microsoft, Google, etc. using copyrighted works as training data has been condemned.
posted by eustatic at 7:10 AM on September 2 [17 favorites]


It isn't unfair to ask an organization based on writing to be clear in its statement and not rely on a "charitable" interpretation.

I agree. But it's important to keep the conversation on the facts of this matter in particular, instead of devolving into general AI criticism or into things NaNoWriMo didn't actually say.

So I'm sad to see another cool thing enshittified

Same. I always thought it was just an idea and maybe a subreddit. I was surprised to find it became an organization and not just... somebody with a static website?
posted by AlSweigart at 7:11 AM on September 2 [6 favorites]


Has anyone categorically condemned AI?

I do. I'm pretty sure the damn computer is cheating at checkers.
posted by AlSweigart at 7:13 AM on September 2 [10 favorites]


The marketing term "AI" devalues the words "artificial" and "intelligent."
posted by SoberHighland at 7:23 AM on September 2 [16 favorites]


Gotta love it when their statement on AI is about ableism, and their "tips for getting unstuck" article includes:
1. Go Outside
Writing is an isolating process, and writers are notorious for losing hours of the day to the computer screen. But when you’re stuck in a rut, staring at the page stressed out doesn’t make things better.

Get up. Go get a drink of water. Then go outside. What you do next doesn’t matter. You can get some exercise in or drive to the coffee shop. Birdwatch, play with the kids, splash in some puddles—you get the idea.

A little movement and some sunshine will help you feel refreshed when you sit back down at the computer.
I can't condemn them taking a neutral stance on individual writers using AI. That is an individual choice, and I try not to judge people who aren't paying as much attention as we tend to do here for not thinking about the bigger picture and whether, by using AI, they're contributing to a larger problem.

Nor can I condemn any individual who takes the advice to use AI to get creatively unstuck. My friend and I who are starting a podcast recently used AI to help generate brainstorm ideas for its name, and that was helpful.

I don't love having an AI company as a sponsor.

My most recent use of AI was when I was doing a design project in Canva and I wanted really specific colors. I was using the internet to find the hex codes, and Google, in its infinite helpfulness, is now offering that AI option at the top of the first page. I thought that might be quicker, so I asked it for the hex code for some color, say "pale blue-green" or whatever. It confidently spit out an answer for me.

It was wrong. It was very wrong.

I'd been thinking of the AI as using something like a search engine, or consulting a table, or whatever. I was thinking of it as Siri, maybe. But it just did its AI thing and confidently spit out a hex code. A simple fact that could not be relied upon.

I think AI is probably better at squishy creative human-mediated things like, "suggest twenty names for a podcast on this topic," than at being honest and factual. I'm open-minded but wary, and I am absolutely on the side of artists angry that their work has been used and monetized without their permission.
posted by Well I never at 7:26 AM on September 2 [12 favorites]


"STOP MINIMIZING AND INFANTALIZING THE DISABLED. In case you didn't know it, NaNoWriMo, we kind of hate that. [...] On the classism, that is unhinged in its detachment. For real, have they never met poor and working class writers and artists? We're MORE likely to work together." -- Thought Punks post on Mastodon

I expect to see more comprehensive responses coming out later today from disabled writers and activists but suspect most of them will be pretty similar to this, just more measured in tone.
posted by at by at 7:29 AM on September 2 [27 favorites]


Fortunately, you can just...write a book, and you don't need this organization at all!
posted by kittens for breakfast at 7:40 AM on September 2 [16 favorites]


Has anyone categorically condemned AI?

It would help for it to actually exist first.

honest and factual

LLMs can't be any more honest or factual than the sum total of everything they've digested, which is presumed to be of equal truth value.
posted by snuffleupagus at 7:46 AM on September 2 [8 favorites]


We're getting to the point where using a linear regression will be labeled "AI"

We have always been at that point, because linear regression is AI.
posted by escabeche at 8:10 AM on September 2 [3 favorites]


I think AI is probably better at squishy creative human-mediated things like, "suggest twenty names for a podcast on this topic," than at being honest and factual.

Yeah, there's no "probably" here. Honesty and factuality are just totally alien qualities to LLMs. These are Venn sets that don't intersect. It's like expecting honesty or factuality from random word generators. Adherence to truth has literally nothing to do with their range of capabilities.
posted by trig at 8:14 AM on September 2 [7 favorites]


Arguing against using pogo sticks for the high jump is "classist", "ableist"
posted by torokunai at 8:15 AM on September 2 [7 favorites]


>these are Venn sets that don't intersect.

The intertubes serving video when JLo's "If You Had My Love" video came out in 1999 wasn't possible either, yet here we are now.

When I "learn" something, I tend to remember where/what I learned it from. LLMs will have to do this too I bet, turning them from language models to learning models.
posted by torokunai at 8:20 AM on September 2


Listen, if technology ever gets to that point then great. But LLMs as they exist today are not what it seems like 90% of people believe they are. And are literally not able to do what 90% of their users insanely, irresponsibly trust them to do.

Seriously, almost everyone I know, including people who work in machine learning, is all "I'm gonna use chatGPT for this [task that requires both logic and truth]". It is fucked up. Saying "well, maybe one day it'll be possible" doesn't make it less fucked up today.

So yeah, currently they're for coming up with random creative ideas you might not think of yourself. They're not for truth or honesty or logic or accuracy or correctness or anything in that universe.
posted by trig at 8:28 AM on September 2 [21 favorites]


These sorts of arguments can only begin to hold water if AI is some sort of equalizer between rich or abled people and poor or disabled people. And I think it can be in a very limited way - if you're dyslexic and you need help with spelling and keeping your theirs and they'res straight, sure, although I've seen some fairly bad failures of AI even for similarly straightforward use cases.

Poor people have been writing, and writing well, for centuries. It is ridiculously cheap to become a good writer, especially compared to becoming a good violinist or a good oil painter. Workshops and degrees can get expensive, but you don't need them. (The most expensive thing is the time to write - but AI won't help you with that unless it can write well, which it can't.)

People with all kinds of disabilities have been writing, and writing well, for centuries. And I have no desire, really, to moralize about people who use AI to write fiction because they have attention problems or executive function problems that make writing very hard. I think I'd probably have finished a novel in the last three or four years if not for my own executive function problems. But if I'd used AI to produce writing in spite of those problems, the novel I'd have written wouldn't have much relationship to the book I'm trying to write, and it wouldn't be a book I'd want to read. Look at the suggested rephrase in ProWritingAid's web site - that's a straightforwardly worse sentence.

Good fiction is largely built around thinking hard about what you intend to communicate. There can't be a shortcut around that. Even if generative AI were much, much better than it is, the gap between what's in your head and what you can get out of your head is where everything interesting happens.
posted by Jeanne at 8:40 AM on September 2 [30 favorites]


This looks like a pretty classic case of coopting progressive language to argue for a not-progressive position.

Literally all the problems NaNoWriMo raises would be better solved by dedicating actual human time and effort to them, or by creating scholarships. And some of these justifications are just BS! There's absolutely no validity to the claim that services costing $20+ per month mean accessibility for poor people, or that using AI will in any way increase access to "traditional publishing contracts". That's just a complete fabrication.

Judicious use of technologies that are carefully built, publicly owned, and transparently operated could be beneficial. But not willy-nilly embracing privately owned and opaque extractive corporations that have been built on stealing the work of writers without recompense, and whose long term goal is to enshittify and monetize.

(To be clear, some disabled people have found positives in LLM use. But that doesn't give abled people the right to say, hey, it's inherently ableist to question AI corporations that, oh, just happen to be financially sponsoring us.)
posted by splitpeasoup at 8:43 AM on September 2 [31 favorites]


It seems like the person responsible for this is Kilby Blades, the interim director and, according to Reddit, the one person left in the NaNoWriMo office? It sounds like their org was nonstop drama behind the scenes, especially after the whole moderator grooming incident.

My sense is that it was one person with poor judgment and no oversight who wrote that FAQ response, because it was littered with grammar mistakes. If you read that Reddit post you can get a sense of the org dysfunction that led to this incredibly ill-advised statement.
posted by oh__lol at 8:51 AM on September 2 [11 favorites]


The February Album Writing Month (FAWM), a sort of music version of NaNoWriMo, is currently in the last month of 50/90 (50 songs in 90 days) and has a statement with guidelines on AI use:
https://fiftyninety.fawm.org/ai-statement

An important difference is that FAWM is requiring disclosure of any AI use. This allows people to avoid songs with AI influence if they want. Organizations like FAWM and NaNoWriMo practice radical inclusivity, and that's fantastic - but inclusivity also means letting people opt out of AI influence for personal reasons.

Disclosure is everything. NaNoWriMo needs to make that clear and unambiguous too.
posted by Flight Hardware, do not touch at 10:05 AM on September 2 [8 favorites]


But LLMs as they exist today are not what it seems like 90% of people believe they are.

Well sure, maybe not today. But one day super genius AI technology will be superior to human ability in all forms of work.

...except for middle management and owning capital, of course.
posted by AlSweigart at 10:15 AM on September 2 [1 favorite]


Has anyone used LLMs as part of their writing process? I write non-fiction technical tutorials, and I've found ChatGPT 4o to be useless, even for generating brainstorm ideas. The best it can do is generic-sounding WikiHow content farm spam, in my experience.
posted by AlSweigart at 10:24 AM on September 2 [8 favorites]


I think you're all overlooking the fact that no one from the working classes could possibly understand English in all its vast complexity without the help of a large language model. Simply can't be done. It's unfortunate that the world is so hard and cold, but the poor and disabled rely on tools like ProWritingAid (TM) just to get through the course of their miserable dark days. Please stop trying to silence them with your virtue signaling and purity tests.
posted by mittens at 10:27 AM on September 2 [22 favorites]


Full disclosure: I wrote a novel in November 2018 (it took me halfway into December, sue me) because of this organization. The novel was terrible, but once I spent six months rewriting it, the final product was really pretty good. Swashbuckling gunpowder navy officers in a matriarchal world. My kids quite liked it.

Anyway, fuck these people. If you're dyslexic and need spellcheck, use it. Need a tool to keep track of there/they're/their, knock yourself out. I'm literally an English professor, and I have to sit there and look at affect/effect every time. Outlining tools? Formatting? Sure. Speech to text tools because you have a disability that prevents you from typing or handwriting? I'm here for you. But once you start using ChatGPT, you're not a writer, and that's not "ableist". Everything about "AI" can absolutely burn in a tire fire.
posted by outgrown_hobnail at 10:27 AM on September 2 [17 favorites]


I have used some of the AI features of Pro Writing Aid, especially the "Rephrase" feature (which was available before the Critique feature). I've found it occasionally helpful for inspiring new ways of casting a sentence. I don't think I've used one of its "rephrase" suggestions more than a couple of times, and then never without further modifying it. Most of the time, honestly, the rephrase suggestions are fairly terrible, and sometimes downright amusing.

I'm planning to put AI disclaimers in my soon-to-be-serialized science fiction novel.

I also used PWA's "Critique" feature for the first time last night. According to its comments, it's either sucking up to me big time or I really am a pretty good writer.
posted by lhauser at 10:34 AM on September 2 [4 favorites]


Judging from the recent posts in the NNWM subreddit it kinda looks like maybe NaNoWriMo-The-Organization is kind of... dead? And this is their death rattle? Wikipedia tells me the official NNWM website launched in July 1999, twenty-five years ago, which is really pretty good for something like this. Compare Inktober, another Spend A Winter Month Grinding On Creative Work event: it started in 2009 and mostly collapsed in 2019, when its creator registered a trademark for it and started sending nastygrams to people selling collections of their Inktober work.
posted by egypturnash at 10:39 AM on September 2 [8 favorites]


I've occasionally tried using ChatGPT to generate one-off ideas - for example, generating some names for megachurches. I remember that "Ocean vineyard" was one of its suggestions, which is one of those ChatGPT-style half-right, half-completely-off-the-mark suggestions. Yes, a megachurch is very likely to have "vineyard" in its name, but "Ocean vineyard" makes you think too hard about the salinity tolerance of grapes, and whether an aquaculture farm growing sea grapes would be called an ocean vineyard. No usable suggestions, but if you're the kind of writer who gets stuck not having a name for something, this is perfectly fine as a stopgap. (I am the kind of writer who will put [name this megachurch later] and run a search for [ as part of my editing process.)

And I have, out of perverse curiosity, asked it to generate my next sentences for me. It hasn't generated anything useful. Or rather - it has never generated a sentence that I wanted to use, or that pushed me in the direction of figuring out what I wanted to write. Sometimes I can look at those ChatGPT-generated sentences and regain a little self-confidence in my own voice and my own literary sensibilities, because if nothing else, I know what I DON'T want to sound like.
posted by Jeanne at 10:52 AM on September 2 [7 favorites]


That statement is reminding me of those twitter ones that came out when Kellogg was doing some union busting and people were advocating a boycott. But doing a boycott of cheerios was apparently also classist and ableist.
posted by Iax at 10:59 AM on September 2 [3 favorites]


I like to write my novels the old-fashioned way -- hire a ghostwriter and an editor and lock them in the cellar.
posted by credulous at 11:03 AM on September 2 [11 favorites]


As much as I love Ted Chiang's work - and I really, really, love it - he's fundamentally wrong about the ability of LLMs to create commercial art.

I typed "outline the plot of a novel in which a virus causes 10% of children to be born superintelligent" into my $20 GPT-4o subscription. It produced this.

That is a perfectly serviceable plot for a novel or TV series. $1,000 in GPT tokens could get a rough draft, and (if a novel) $9,000 could get a legit pro novelist to polish it up over a few weeks. Is that going to make me $10,000 or more? Probably not. But if a professional writer with an existing brand did it, voilà.
posted by MattD at 11:07 AM on September 2 [2 favorites]


(Professional writers who want to make $9,000 and do this friggin' thing, DM me.)
posted by MattD at 11:08 AM on September 2


Has anyone used LLMs as part of their writing process?

I was forced to use them at work by my boss to write a few articles about what we were doing for our in-house newsletter. It was... fine, I guess? You couldn't use the article it spat out, but you could edit it pretty quickly, though only because I was a subject matter expert on the specific topics. (It contained a couple of "hallucinations", i.e. lies.) It was quicker than writing the articles on my own, but I think the quality suffered.
posted by joannemerriam at 11:09 AM on September 2


$1,000 in GPT tokens could get a rough draft

Or you could, you know, actually write it.
posted by outgrown_hobnail at 11:11 AM on September 2 [10 favorites]


I thought NaNoWriMo is a folk/fan creation that doesn't need infrastructure. How did it end up with a 501(c)(3)?
posted by Nancy Lebovitz at 11:12 AM on September 2 [9 favorites]


I mean, if NaNoWriMo is willing to accept shitty LLM entries, they're quite welcome to distract GPT enthusiasts' attention from NaNoGenMo.
posted by polytope subirb enby-of-piano-dice at 11:16 AM on September 2 [3 favorites]


> Charitably, NaNoWriMo seems to be endorsing using AI editing/rephrasing tools like Grammarly (or their sponsor, ProWritingAid.com) and not "let the AI write your novel for you."

Unfortunately, ProWritingAid’s “AI Sparks” are several flavors of “let the AI write your novel for you”, specifically including:
  • Add sensory details
  • Add an analogy
  • Expand list to text
  • Dialogue generator
  • Continue writing
Those last two are designed especially for you to select some text (up to 2,500 words, I think?) and then ask their AI to write more story, more dialogue, more scenes for you.
posted by Callisto Prime at 11:29 AM on September 2 [4 favorites]


The people I see at the cutting edge of AI-assisted writing are using finetuned uncensored local models, and they have, um, specific goals.

It would be interesting, even novel, to see a long-form AI-generated work that is more interesting than a high-school fanfic. But LLMs are not empathetic to the reader, and an effective author knows how to lead their reader's mind in a given direction. Maybe the tech will get there, or maybe authors will figure out a hybrid workflow that achieves their goals.

That said, there is a market for absolutely horrible fiction.
posted by credulous at 11:40 AM on September 2 [4 favorites]


You couldn't use the article it spat out, but you could edit it pretty quickly, though only because I was a subject matter expert on the specific topics. (It contained a couple of "hallucinations", i.e. lies.) It was quicker than writing the articles on my own, but I think the quality suffered.

It could probably generate a public statement like the NaNoWriMo org's, but then some human would be expected to read it, think of all the criticisms in this thread (especially the disingenuous fake social justice grossness) and re-work it. And at that point maybe it would just be easier to have a professional writer with subject-matter knowledge do it from the ground up?

But I guess writing something you don't at all believe in would be harder for an ethical human. Something like "using machines to automate a thing where the whole point is celebrating people doing the thing, thumbs up". Good job, next output a justification for the high school math olympics incorporating Wolfram Alpha.
posted by ctmf at 11:49 AM on September 2 [3 favorites]


I was a die-hard NaNo for 17 years or so before burning out. They sound like they have REALLY gone downhill in the last few, and that's really sad to see. And then, fucking AI.
posted by jenfullmoon at 11:50 AM on September 2 [4 favorites]




Has anyone used LLMs as part of their writing process?

Worse than useless given the time suck.

I write about 60 pages a week of formulaic technical writing. Everything is at once specific and yet derived from deterministic scientific guidelines. I have trained interns for years on how to generate these letters, which are built from about 60 different formulas; they can usually hack it in three weeks. It involves scanning hundreds of pages for a reference number, say "100 acres", and putting that into a formula to write "X million gallons".

The GPT just spits out garbage and doesn't follow the formulas. I'm glad it is so bad that I can categorically exclude it from my workflow, rather than having someone use it and then forcing me to look for errors outside of the formulaic inputs. A human who values their time will only write the minimal amount, so the errors are limited and predictable. The GPT's lack of such limits makes my error checking that much more of a headache. Usually a GPT will stray off the track for no reason. Worse than useless.

Where LLMs have been useful in this work is in summarizing 100-page public documents, to make the source info for the formulas easier for a human writer to find. I can measure quickly and precisely how often the machine gets it wrong in that case. There's an advantage in that the machine can read a thousand pages much more quickly than a human, so even if the LLM is wrong 20% of the time, time is still saved in the long run by having the machine run a pass, with a human checking the result for errors.

But the programmers working on that public-interest work have been reassigned after Microsoft's big bet. I'm sure, though, that we don't really need to check on what our government is doing. It's fine.
posted by eustatic at 12:00 PM on September 2 [2 favorites]
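[The deterministic, formula-driven letters eustatic describes can be sketched in a few lines. This is a hypothetical illustration, not their actual workflow: the regex, the template wording, and the conversion factor (1 acre-inch of water ≈ 27,154 gallons, a standard approximation) are all assumptions.]

```python
# Hypothetical sketch of a deterministic, formula-driven letter:
# pull a reference figure from a source document, run it through a
# fixed formula, and drop the result into a template. The conversion
# factor and wording below are illustrative assumptions.
import re

GALLONS_PER_ACRE_INCH = 27_154  # standard approximation, 1 acre-inch of water

def acres_to_million_gallons(acres: float, inches: float = 1.0) -> float:
    """Water volume in millions of gallons for a given area and depth."""
    return acres * inches * GALLONS_PER_ACRE_INCH / 1_000_000

def draft_letter(source_text: str) -> str:
    """Scan the source document for an acreage figure and fill the template."""
    match = re.search(r"([\d,.]+)\s*acres", source_text)
    if match is None:
        raise ValueError("no acreage figure found in source document")
    acres = float(match.group(1).replace(",", ""))
    volume = acres_to_million_gallons(acres)
    return (f"The {acres:g}-acre site yields approximately "
            f"{volume:.2f} million gallons per inch of rainfall.")

print(draft_letter("The project area covers 100 acres of drained wetland."))
```

[The point of the sketch is that every step is deterministic: the same source document always produces the identical sentence, which is what makes errors "limited and predictable" in a way an LLM's output is not.]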


"I thought NaNoWriMo is a folk/fan creation that doesn't need infrastructure. How did it end up with a 501(c)e?"

Well, when you start asking for donations, people get worried you've collected A LOT and might spend it on yourself.
posted by 922257033c4a0f3cecdbd819a46d626999d1af4a at 12:28 PM on September 2 [1 favorite]


Thank you, but that just pushes it back a step. Was there a NaNoWriMo corporation asking for money? For what?
posted by Nancy Lebovitz at 12:44 PM on September 2 [1 favorite]


Websites don't just run themselves. All the features needed to be built out. Etc., etc.
posted by 922257033c4a0f3cecdbd819a46d626999d1af4a at 12:55 PM on September 2


The org has taken donations and sponsorships for years to pay their employees, who did various administration and marketing tasks for the various NaNo projects plus their charitable foundations, which is what a 501(c) is for. They have been an organization that does these things for many years; it was only a folk/fan thing for a few years. I have known people who worked there; they had real jobs, even though a lot of people think that's not true at not-for-profit companies.

I mean, did it NEED to be a company? I don't know. I have known people who tried to lead "fan" orgs without one, and they got fucked in every possible direction because of liabilities and signing things and trying to pay for things (web hosting, forums, coding, moderation, whatever it takes to run something centralized; see also such organizations as Metafilter) with unofficial donations, so I too would start some kind of not-for-profit before I got too deep into it. But did NaNoWriMo get too high on its own supply and overexpand its scope at some point? Probably.
posted by Lyn Never at 12:55 PM on September 2 [7 favorites]


If a human can't be bothered to write it, why should I bother to read it?
posted by mike3k at 1:10 PM on September 2 [25 favorites]


MattD, that is the very definition of what Ted Chiang points out as “the experience of being approached by someone convinced they have a great idea for a novel.” And just like in those cases, the idea ChatGPT generated was… so cringe.

It is true that bad TV and bad books are made and consumed every day, even without AI. But aiming for that level is just adding to the wave of slop drowning out anything good. At least human-made slop has the benefit of being born of someone’s desire to communicate.
posted by oh__lol at 1:15 PM on September 2 [16 favorites]


It's been a wild day, watching much of the Writingsphere busily using engagement-driven tools to attack NaNoWriMo for making a transparently dumb move in order to remind people that NaNoWriMo exists and also draw new users for its AI pals to profit from.
posted by cupcakeninja at 1:53 PM on September 2 [2 favorites]


AI writing programs used to be kinda fun to play with, in a mad-libs sort of way, because they weren't very good and would often generate unlikely results (like "The Starship Enterprise becomes intensely Hawaiian") and having to roll with that kind of 'creativity' is good practice for things like, well, running D&D for a bunch of goofballs as most players are.

But as it's gotten "better", it generates things that make sense, like "The Enterprise meets the Borg" and even trying to get it to be more creative just results in things like "The Borg are all wearing pink tutus, isn't that zany?"

It's like the kid who used to create "Axe Cop" grew up, and now just wants to talk about police reform instead of avocado-unicorn buddy adventures.
posted by The otter lady at 1:55 PM on September 2 [8 favorites]


But I guess writing something you don't at all believe in would be harder for an ethical human.

It's great for getting a rough draft of a corpo-speak cover letter. Attach resume and job description and tell it "go." Edit for the appropriate level of humanness expected for the position. Less psychic damage to yourself.

Haven't come across much of anything else it's useful for though. I still think fondly of the time I asked it to discuss how the character Gourmand intersected with the Buddhist themes in Rain World and it wrote me a lovely essay on said character's cannibalism. (Gourmand is not, in fact, a cannibal.)
posted by brook horse at 1:57 PM on September 2 [2 favorites]


> ctmf: And at that point maybe it would just be easier to have a professional writer with subject-matter knowledge do it from the ground up?
Imagine having an actual writer in the nonprofit whose purpose is to "provide the structure, community, and encouragement to help people use their voices, achieve creative goals, and build new worlds—on and off the page."

Your other point is what really hurts for me: the idea behind NaNo (or the idea that used to be and was sold to many of us) is to celebrate the humans doing the thing, even badly. The whole point was to finish the thing by any means necessary, because that awful first draft was its own reward. (Cheating of course is not allowed, but mostly because it's something you do to yourself: copy/pasting a word 50k times to get a PDF is a hollow victory.)

--------------------

I find it hilarious that in their added paragraph, they say that AI is a large umbrella technology (...) [that] is simply too big to categorically endorse or not endorse. Yet their argument is just as broad (all condemnation of AI reeks of classism/ableism) and one-sided (although they mention there are issues with instances of AI abuse, there's no similar argument about when/how AI can be abused); so this whole thing ends up being a "categorical endorsement", doesn't it?

In other words, their lukewarm, very corporate position of «AI can't just be wholesale dismissed or embraced» would make sense if and only if they argued on «why» as well as «why not». But of course, I don't expect them to really be honest about this. If they aren't prepared to post a similar argument against AI (or at least, warning of the abuses they seem to acknowledge) I can't take them seriously.

--------------------

Certainly, the substance of their claims is absurd, emphasis mine: «Since some people can't pay for professional services on certain phases of their writing, your condemnation of AI is classist»; «Since some people can't see the issues in their writing, your condemnation of AI is ableist»; «Since the publishing landscape is highly unequal and biased, your condemnation of AI is bad» (a non sequitur if I've ever seen one). No one is saying that there aren't problems in the writing and publishing worlds, but conveniently forgetting about communities and assuming money is the one and only obstacle is classist in itself. Assuming people's skills (or lack thereof) are the only obstacle is ableist in itself. It's denying that people have found creative workarounds to these economic and skill problems without AI.
posted by andycyca at 2:41 PM on September 2 [10 favorites]


From the Chiang article: any writing that deserves your attention as a reader is the result of effort expended by the person who wrote it

I read Chiang’s article on the way to the first day of our fall term, and classes start today. All during August, I saw various posts from teachers in the US going back to school and finding that use of ChatGPT and the like is just rampant among students, and I’ve been thinking about how to reinforce my standards for my near-native English language learners in our skills-based class. We write, a lot, as a central point of the class, to build communication skills. As Chiang mentions, it’s the regular work of doing it that helps people to improve in areas far beyond writing, in areas my kids are in desperate need of. It will be an interesting day, but I’m hoping our discussions manage to come up with something useful.

As a fun side note, in our beginning-of-term meeting, the head foreign teacher at my school, who’s been extolling ChatGPT and using it to make lesson plans, took a moment to complain about how Grammarly has become nearly useless, with many of its suggestions being painfully, obviously wrong. Somehow he missed Grammarly announcing that it was going all in on AI and marketing itself as a program that would do your writing for you. Aside from the point that our head teacher feels it’s perfectly okay for us as teachers to use it (but not students), it is kind of hilarious that Grammarly has managed to turn itself to shit at the one thing it was (supposedly, but nah, not really) good at.
posted by Ghidorah at 2:47 PM on September 2 [6 favorites]


But LLMs as they exist today are not what it seems like 90% of people believe they are.

If you throw a couple more jiggawatts at it, it will all work as expected, I promise.
posted by 1970s Antihero at 3:19 PM on September 2 [2 favorites]


> 1970s Antihero: If you throw a couple more jiggawatts at it, it will all work as expected, I promise.
Just one more parameter bro
just one more parameter bro
just one more parameter bro
I swear bro, just one more parameter will make this AI good

posted by andycyca at 3:31 PM on September 2 [2 favorites]


Facebook/Meta data centers alone used 14,975 gigawatt-hours of electricity in 2023. Divide that by 8760 hours per year, and you get an average power draw of 1.71 gigawatts. (We are already well past 1.21 jiggawatts!)
posted by mbrubeck at 3:35 PM on September 2 [2 favorites]
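The back-of-the-envelope conversion above can be sanity-checked in a couple of lines; the energy figure is as quoted in the comment, not independently verified:

```python
# Convert annual energy consumption to average continuous power draw.
annual_energy_gwh = 14_975   # Meta data center usage in 2023, per the comment above
hours_per_year = 365 * 24    # 8,760 hours in a non-leap year

average_power_gw = annual_energy_gwh / hours_per_year
print(round(average_power_gw, 2))  # → 1.71 (GW), well past the DeLorean's 1.21
```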


they say that AI is a large umbrella technology (...) [that] is simply too big to categorically endorse or not endorse.

This is the scam. We're not complaining about "AI." Nobody is complaining about NaNoWriMo using spellcheck or chess programs or cancer-detection algorithms. We're complaining specifically about the big plagiarism machines doing money-laundering to give this year's NaNoWriMo participants plausible deniability when they steal the work of previous years' NaNoWriMo writers.
posted by straight at 4:04 PM on September 2 [12 favorites]


("But we didn't just steal from NaNoWriMo writers! We're stealing from lots of other writers too.")
posted by straight at 4:06 PM on September 2 [2 favorites]


Wikipedia tells me the official NNWM website launched in July 1999

You know what else launched in July 1999?
posted by Horace Rumpole at 4:53 PM on September 2 [1 favorite]


eustatic> A human who values their time will only write the minimal amount, and so the errors are limited and predictable. The GPT's lack of such limits makes my error checking that much more of a headache. Usually a gpt will stray off the track for no reason. Worse than useless.

Yes. We've had AI writing tools for decades, with emulating dead people's style being what usually hit the headlines. Afaik they never saved the human any work, but they did ensure the word choice matched the author being emulated.

We've the field of stylometry too, which can best be described as helping dictatorships identify & murder otherwise anonymous dissidents, based purely upon their writing style, and to a more limited extent as defending against such identification.

eustatic> Where LLMs have been useful in this work is in summarizing 100 page public documents, to make the source info for the formulas easier for a human writer to find.

We've had AI-assisted discovery for court cases for decades too, although I'm not sure how effective it was then or is now, but yeah, it should become effective eventually, at which point it'll be shut down somehow.
posted by jeffburdges at 4:56 PM on September 2 [2 favorites]


This is gonna sound weird, I fear, but I think part of how people are taking this might depend on what you think NaNoWriMo is "for".

I did NNWM once. And for me - and for I think most people - it's a sort of personal challenge to trick you into getting out of your own way and actually completing the first draft of a novel, as opposed to writing a chapter and then cringing and going back and re-editing it a gabillion times. It's an artificial time constraint that stops you from going up your own butt too much.

But the year I did it, there was a lot of discussion in the NNWM fora about this guy who seemed to be getting an entirely different takeaway from it. He was using some kind of auto-text generator (it wasn't AI, or at most it was an early version of it; this was in the late aughts) to hit his word count at double speed, and was boasting on all the fora about how he'd "completed" three drafts by the middle of the month and had thus "won". People were arguing with him about how he was missing the point, and it was supposed to be original work - and he kept arguing back about how some of the avant-garde poets or Beat Writers or whatever would use "ready made" or "cut-up" writing, and it was valid and so there, he could too insert the entire Treaty of Westphalia into his novel and it totally counted.

I think the disconnect there was that people could tell the guy was not doing this because he had any story he was trying to tell. He was just trying to rack up bragging rights, and this was an easy way to do it. Put the appropriate number of words together, and you win. Easy. What the words say doesn't really matter, you just need enough of them. He wasn't trying to say anything with those words, and that was a slap in the face to the people who were participating to get themselves out of their own way so they could finally write the story they'd been trying to write for years.

So I think that how you approach AI in NNWM depends on what you're going to use it for. Are you finally going to write down the epic ghost story that you and your BFF made up with each other while on a camping trip when you were twelve, but you're crap at grammar? Good news, Grammarly can help. Or - are you just trying to say that you wrote a novel so you can impress people, and you've learned that NNWM accepts AI-generated content and gives you a little certificate you can show people so yay? ....If so, then....you didn't actually write the novel, buddy.
posted by EmpressCallipygos at 5:12 PM on September 2 [13 favorites]


an entirely different takeaway

This is the problem with all "gamification" systems.

1. People have trouble accomplishing X. Let's make it into a game!

2.
Group A: Yay, this game will help me accomplish X!
Group B: Ooh, a game! I would like to try to win it!

3. Group B is much larger than Group A and eventually ruins everything.

GenAI makes Group B's job MUCH easier.
posted by mmoncur at 5:19 PM on September 2 [7 favorites]


I enjoyed Why A.I. Isn’t Going to Make Art by Ted Chiang. I've never noticed good writing by an AI, just the dross eustatic describes, even with extensive human direction, but..

AIs can however perform pop songs well, if talented humans supply the lyrics and correct the output, mostly because the human can quickly fix all the AI's mistakes in a 2-5 minute pop song, while fixing an AI story represents a herculean editing task. As you'd expect, some poetry styles yield really nice results.

As for the "ableism" topic, yes, AI music performances clearly reduce the "talent privilege" barrier. At this point, one talented person could create a pop song they believe should exist, and in any style they feel appropriate, without depending upon the agreement of the "class" of musicians who usually perform in that style.

Obscurest Vinyl has landed many bangers, like The Secrets Your Asshole Keeps and I Glued My Balls To My Butthole Again, which for two months managed 9% of the views of Fortnight by Taylor Swift. Reddit has many similar songs, like I Think I'm a Furry, and his AMA is nice.

AIs have reduced the "privilege" barrier to discovering drugs and chemical weapons too.
posted by jeffburdges at 5:29 PM on September 2 [3 favorites]


Anyways, if you've some nice concept but lack writing skills, then you should explore new variant writing forms that focus upon concepts over characters, pacing, etc., like Special Containment Procedures (SCP) documents. You'll likely enjoy writing your own SCP more than struggling through garbage stories output by ChatGPT.
posted by jeffburdges at 5:44 PM on September 2 [2 favorites]


Has anyone used LLMs as part of their writing process?

I used one for my self-evaluation at work (which nobody pays any attention to but we're still supposed to take seriously). I gave it a list of my accomplishments, goals, areas of improvement in plain English, gave it the instructions to write a self-eval, and it translated my words into review-style language. I had to edit a bit where the rephrasing exaggerated my accomplishments, but it was so much easier than trying to write in a style I only ever use once a year for self-evals.

tl;dr: it's good for introducing tedious formality in speech.
posted by creepygirl at 9:09 PM on September 2 [10 favorites]


We've had AI assisted discovery for court cases for decades too

Only for extremely broad definitions of "AI." Identifying the vocabulary correlating to conceptual clusters is an extremely useful skill for discovery, the big breakthrough of technology-assisted review (and my first time feeling that a professional skill of mine was tending towards being made obsolete), but it doesn't even generate a text.

(And not decades, either--the big tools that merely did digitally what people did on paper blowbacks were only starting to gain traction in the mid-aughts.)
posted by praemunire at 9:53 PM on September 2 [2 favorites]


Are you finally going to write down the epic ghost story that you and your BFF made up with each other while on a camping trip when you were twelve, but you're crap at grammar? Good news, Grammarly can help

In an alternative universe, NaNoWriMo could have said "This month is about getting that first draft out of your head. It doesn't need to be perfect. It doesn't need to be grammatically correct. It just needs to be born. Editing and polishing are what the other 11 months are for."
posted by trig at 11:26 PM on September 2 [11 favorites]


Editing and polishing are what the other 11 months are for."

That's a good point, trig. Even with the most charitable reading, that Nanowrimo were talking about AI proofreading /spell and grammar checking tools, this doesn't make sense if the entire point of Nanowrimo is "writing a crap first draft is better than not writing at all, just get it out and polish it later". Proofreading, grammar checks, line edits, those things don't belong in Nanowrimo by definition.
posted by Zumbador at 11:41 PM on September 2 [8 favorites]


I'd be shocked you'd want an AI-generated text during discovery, praemunire, because opposing counsel could surely obliterate the AI in court, no?

Instead, you'd presumably want explicit statistical models, which maybe the AI helps you identify, but which you're prepared to defend as representing the pattern of criminal behavior, no?
posted by jeffburdges at 12:54 AM on September 3


Art discussion aside, I loved Ted Chiang's remarks on "bullshit text" in the sense of Bullshit Jobs by Graeber, à la advertising or bureaucracy.

Ted Chiang> It would be unrealistic to claim that if we refuse to use large language models, then the requirements to create low-quality text will disappear. However, I think it is inevitable that the more we use large language models to fulfill [bullshit text] requirements, the greater those requirements will eventually become.

Ted Chiang> We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?

Julian Assange's pre-WikiLeaks thoughts on exposing conspiracies suggest a benefit here: if people do this, then they'll often create errors in their bulleted lists, which damages the "thought processes" of whatever organization causes this. It's not as black & white as secrecy à la Assange, but bullshit text would hopefully feature disproportionately in organizations that're harmful themselves or participate in harmful industries.

Also, if you know someone distills information using AI, and can identify the model, then you can write text that says one thing, but which the AI distills into a different summary or bulleted list, which opens new vistas of social engineering attacks.
posted by jeffburdges at 2:11 AM on September 3


In an alternative universe, NaNoWriMo could have said "This month is about getting that first draft out of your head. It doesn't need to be perfect. It doesn't need to be grammatically correct. It just needs to be born. Editing and polishing are what the other 11 months are for."

If memory serves, that is what NaNoWriMo has always said. It's just that they give you a silly little certificate if you get that first draft out and there are people who forget what the point is.

They also don't realize they're actually cheating themselves.
posted by EmpressCallipygos at 3:32 AM on September 3 [9 favorites]


if you want to generate a novel, nanogenmo is right down the hall
posted by BungaDunga at 7:37 AM on September 3 [3 favorites]


I actually think LLMs might have use to help people who already rely on assistive technology to express themselves more fluidly (eg, if someone is limited to producing just a few words at a time, an LLM could act as a very, very good autocomplete), but that is definitely not what nanowrimo is thinking about here...
posted by BungaDunga at 9:01 AM on September 3


The generated Dr. Demento-ish takes on channels such as Obscurest Vinyl, Hard Archive, etc. are mostly being made with Suno. Which can turn out some impressive results -- but it's also the most impressive when it's at its most stylistically derivative. Like most of the "Motown" or "70s Funk/Soul" stuff it generates. Trying to get it to generate "new styles" by hybridizing genres is much iffier.

Really, it's a toy. But an admittedly neat one.

A "cinematic trip hop song about posting on a web forum:" make your bed, in this thread. Sounds more like Enya, imo.
posted by snuffleupagus at 2:41 PM on September 3


I typed "outline the plot of a novel in which a virus causes 10% of children to be born superintelligent" into my $20 GPT-4o subscription. It produced

...something which feels very derivative of The Midwich Cuckoos?
posted by We had a deal, Kyle at 3:30 PM on September 3


Has anyone used LLMs as part of their writing process? I write non-fiction technical tutorials, and I've found ChatGPT 4o to be useless, even for generating brainstorm ideas.

yeah having messed with it some out of curiosity it's also pretty crap for creative writing

everything comes out too homogenized to be interesting; even when you beg it to insert some unexpected detail it gives you something fairly cliche, probably because its entire job is to provide expected material

also I believe they've fixed this now but one time I got in a straight up moral argument with 3.5 because I asked it to write a Garfield dating sim & it refused to do so on the grounds that fictional characters could not consent??? like in a really preachy way that implied that I was being gross for even asking?? like I'm secretly fishing for a pornographic description of Garfield's taint???

and like I will argue with a computer all day so eventually I got it to condescendingly agree that perhaps an innocent little lasagna dinner date scenario would not break the moral fabric of society, and it would write me one, & by then I'm like, you have made this so icky I dunno if I even want one anymore

anyway after 3 or 4 weeks messing with ChatGPT I came to fully loathe its smug twee flip little writing style & would not use it to generate a single word of anything I otherwise wrote, barring a statistically improbable gun-to-my-head-style situation
posted by taquito sunrise at 6:29 PM on September 3 [13 favorites]


Hah! That's hilarious, because I asked ChatGPT to write me out a cutesy fantasy daydream in which I was a former knight relaxing with my long-time friend and lover the prince, and it made us fuck in the bath unprompted in the first response. Then when I asked what happened when we went to bed, it made us fuck again and this time gave me a little warning that "this content might violate our usage policies." Like, okay, you're the one who made it horny without my consent, not me.
posted by brook horse at 7:30 PM on September 3 [11 favorites]


Those last two are designed especially for you to select some text, up to 2500 words I think? and then ask their AI to write more story, more dialogue, more scenes for you.

Computer, continue "It was a dark and stormy night..." in a way that will keep a reader's attention. 20 GOTO 10
posted by rhizome at 11:12 PM on September 3


fta: General Access Issues
Not just a question of access
As with most forms of exclusion, the digital divide functions in multiple ways. It was originally defined as a gap between those who have access to computers and the internet and those who do not. But research now shows it’s not just an issue of access. [theconversation]
nobody clued me into the direction this topic took here, from a separate thread more focused on Ted Chiang/art generally, if anyone's interested
posted by HearHere at 7:44 AM on September 4


WaPo: National Novel Writing Month faces backlash over allowing AI: What to know
NaNoWriMo’s openness to writers using AI has sparked discontent among some authors and writers associated with the organization. At least a few members have said they would no longer participate in the annual challenge.

Fantasy and young adult fiction writer Daniel José Older stepped down from the NaNoWriMo Writers Board on Tuesday because, he said in a statement, NaNoWriMo “has taken a wild and ridiculous stand in favor of Generative AI.” He said the decision was “unconscionable” and “harming writers” as he urged others to also resign.
Older also noted that NaNoWriMo is sponsored by ProWritingAid, an AI-powered writing assistant.

ProWritingAid’s founder, Chris Banks, confirmed to The Washington Post on Wednesday that the company has long supported NaNoWriMo. He said his organization was “committed to supporting human creativity, not undermining it.”
“We fundamentally disagree with the sentiment that criticism of AI tools is inherently ableist or classist. We believe that writers’ concerns about the role of AI are valid and deserve thoughtful consideration,” Banks wrote in an email.

NaNoWriMo did not immediately respond to a request for comment from The Post.

Maureen Johnson, an author of young adult novels, posted on X that she would step down from the board of NaNoWriMo’s Young Writers Program because of the AI statement. “I want nothing to do with your organization from this point forward,” she wrote.
Novelist and essayist Roxane Gay said on social media that she was “embarrassed” for NaNoWriMo.
Ellipsus, a collaborative writing software company, said in a statement Tuesday that it had decided to end its sponsorship of the group on the grounds that “we strongly disagree with NaNoWriMo’s recent statements regarding generative AI.” Ellipsus said AI was responsible for “the wholesale theft of authors’ works, and a lack of respect for the craft of writing.”
One writer, Laura Elliott, reacted strongly to NaNoWriMo’s assertion that opposing the use of AI would be ableist. She wrote on X that as a “disabled writer,” she was “furious.”
“Disabled writers do not need the immoral theft machine to write because we lack the ability to be creative without plagiarism — encouraging AI is a slap in the face to all writers and this excuse is appallingly ableist,” Elliott wrote.

After writers began to respond negatively to NaNoWriMo’s position, the group updated its online statement “to reflect our acknowledgment that there are bad actors in the AI space who are doing harm to writers and who are acting unethically.”
The group said that although it found the “categorical condemnation for AI to be problematic,” it was “troubled by situational abuse of AI, and that certain situational abuses clearly conflict with our values.” AI’s complexity, the organization said, made it “simply too big to categorically endorse or not endorse.”
“We see value in sharing resources and information about AI and any emerging technology, issue, or discussion that is relevant to the writing community as a whole,” NaNoWriMo said. “It’s healthy for writers to be curious about what’s new and forthcoming, and what might impact their career space or their pursuit of the craft.”
posted by jenfullmoon at 8:10 AM on September 4 [4 favorites]


NaNoWri[sic]Mo
posted by JohnFromGR at 11:48 AM on September 4


s/AI/automated plagiarism/g
posted by sourcequench at 8:20 PM on September 4 [1 favorite]


I haven't seen anything implode this fast since the OceanGate Titan.

Here's Courtney Milan's understandably incendiary take.
posted by mittens at 8:57 AM on September 5 [3 favorites]


National Novel Writing Month defended the use of AI. Now authors are stepping down from its board [CBC]
Cass Morris, a fantasy writer and editor who started participating in the program as a junior high student in 2001, said she "immediately" decided to step down from her position as a board member after reading the post, arguing that using AI would discourage creativity and ruin what made the challenge valuable to aspiring writers in the first place. [...]

Ellipsus, a collaborative writing tool, announced Thursday it was stepping down as a NaNoWriMo sponsor "due to their recent actions, stances, and PR regarding generative AI." [...]

Science fiction and fantasy writer Daniel José Older posted on X Monday that he was also stepping down from the organization's board, and urged other writers to follow suit. [...]

Author Maureen Johnson, who was involved in the organization for about a decade, posted on X Tuesday that she was stepping down from the board of its Young Writers Program. [...]

Rebecca Thorne, a fantasy novelist who has participated in NaNoWriMo since 2008, called out the organization in a TikTok video that has more than 35,000 likes, accusing the organization of using politically correct language "so that you can't argue their stance."

Thorne also alluded to turmoil behind the scenes at the organization, including an alleged exodus of staff members earlier this year, which CBC News has not been able to confirm.

Former executive director Grant Faulkner posted on LinkedIn in March that he was leaving the organization. Using an internet archive search, CBC News found that NaNoWriMo removed the "Staff" section of its website sometime in March, and removed the "Board of Directors" section in April.

The NaNoWriMo website does not currently list any staff or board members.
posted by heatherlogan at 7:33 AM on September 6 [3 favorites]

