Is homework out of touch? Or are the children wrong?
February 15, 2025 4:33 PM   Subscribe

In the time of ChatGPT, "is there some way for adults to force teens to still do homework? Or to convince them they should want to?" The search for an answer, or at least some clarity, touches on the history of the five-paragraph essay, past letter writing culture, whether it's OK to use AI to write your wife a birthday card, and talks with John Warner, an author and long-time writing teacher who recently published More Than Words: How to Think About Writing in the Age of AI. Also featured are researchers at Harvard's Center for Digital Thriving, on what they're learning from talking with Gen Z about tech and LLMs.

(Sorry, there is no transcript. The episode is about 45 minutes long if you skip through the ads.)
posted by coffeecat (82 comments total) 31 users marked this as a favorite
 
I'm not here to defend generative AI. I disapprove of the technology for multiple reasons. That said, it is patently obvious we do not live in a world in which diligence, effort and ethics are rewarded. Rather, fortune favors the venal, the corrupt, the cruel, the unprincipled. The guiding directive is no longer "Don't get caught"; it has become "If you break enough laws at once they can't touch you." Even a teenager with no more than typical perspicacity is aware of this. Where's the incentive not to use ChatGPT to shave a couple hours off your homework and play video games instead, when there's a real chance your entire family could be forcibly transported to "wellness camps" in the very near future?
posted by Faint of Butt at 5:22 PM on February 15 [38 favorites]


I have ADHD and ChatGPT has been the greatest disability aid I’ve ever encountered. Where before I would have an idea buzzing in my mind while sitting in front of a blank screen and a blinking cursor, now I excitedly tell ChatGPT about my thoughts. I particularly enjoy using the voice interface with my earbuds, so I don’t even need to use a screen or keyboard. We have a fun conversation, and I ask ChatGPT to ask me questions, which spark even more ideas.

Later, at my desk, I ask ChatGPT to summarize our conversation, and I edit the result. Instead of spending a torturous week writing or simply giving up on my idea, I get to share it with my colleagues and friends.

Is this me lazily using ChatGPT to shave a few days off an awful experience so I can play video games? Yes! And I really like the games I play. And, thank god - my life is so much improved, and I think so are my friends’ and colleagues’ lives and work.

Sure, some people complain that sometimes it seems like an AI wrote it. When I hear that, I try to figure out what they’re actually criticizing. It’s usually that the writing was too verbose, repetitive, or too vague. I’ve been learning more about style and editing than I did in the previous two decades, perhaps because people feel more comfortable offering some, any criticism if they can blame it on something other than me.

But “it seems like an AI wrote this” is a useless, hollow critique. Who cares how someone wrote something? I bet they even used a spellchecker! Ultimately, they’re offering you something to read and respond to. Take it seriously, and be brave enough to offer real criticism if it’s warranted. If it doesn’t make sense or is too vague, just ask questions and engage them as a writer.

Students will be students. I really think the best thing we can do is treat each other seriously and with respect. If you engage with someone consistently, perhaps they’ll rise to your expectations.
posted by AstroCatCommander at 6:53 PM on February 15 [23 favorites]


It also depends on whether one's teenager has been informed that if they want to become proficient at something more than merely being an information retrieval, processing, and presentation/misrepresentation machine, or to achieve objectives beyond merely what is asked of them, they'll be well served by being taught to exercise mental musculature that can't be helped by machine-learning/"AI". Does one merely want one's child to learn how to impress teachers and please the test-taking machinery? Is what will be achieved by doing that worth what will be lost? Reflection, analysis, struggle, risk, failure, re-integration, etc. are all important for many professional endeavors ... and just fucking living. I suppose there's a place for ChatGPT and the like even amongst such pursuits, but one won't become a good thinker by jumping at every opportunity to Find The Answer Something You Can't Vet Says Is Right, and avoiding the necessary frictions that come with thinking. Some of the greatest thinkers in humanity were poor students. If we ever do invent a time machine and go back to give Newton or Faraday or Churchill or Twain or Edison or Jobs some ML prompt, we are all well and truly fucked.

ChatGPT isn't bad and shouldn't be avoided at all costs, but it's a tool for certain jobs, good at some and terrible at others, and one of the most important aspects of being a teenager pursuing an education is far, far from Finding The Right Answer As Fast As Possible. As Twain famously said, "I never let my schooling get in the way of my education." Hopefully the current (and future!) generations learn to never let an "AI" get in the way of their educations.
posted by jerome powell buys his sweatbands in bulk only at 7:10 PM on February 15 [14 favorites]


I mean, what I liked about this reporting is that it points out that the fact that students can easily use LLMs to cheat is in part rooted in how poorly writing is taught (i.e. the dreaded five-paragraph essay). As the host observes, the formulaic nature of most writing assignments makes them great fits for an LLM to write. I agree with John Warner's overall argument that you need assignments that students will actually want to write, because writing can be fun. It's hard, but not impossible (I'm still proud of one writing assignment I designed that college students in a general education course I taught always really got into). I also found it interesting that the Center for Digital Thriving researchers report that students are distinguishing between what they see as ethical uses of LLMs (busy work) vs. unethical uses.

Who cares how someone wrote something? I bet they even used a spellchecker!

Well, like the piece gets at, the experience of writing from a blank page, all on your own, is a fundamentally different experience from what the LLMs do, whereas a computer spellcheck and a human spellcheck are basically the same. Writing and re-writing is a form of thinking - the LLMs are not thinking - and a human generating content out of an LLM is not wrestling with their own thoughts in the same way as a human working from a blank page. Anyway, the podcast episode does not demonize LLMs – it argues that knee-jerk bans from schools are unproductive, for example. And this report on how Gen Z is using AI is coming at the topic with curiosity and an open mind.
posted by coffeecat at 7:11 PM on February 15 [15 favorites]


I understand what you're saying generally, jerome etc, but given the unethical way ChatGPT was and continues to be trained - largely over the objections of the people who created the data and contributed it to the great commons of the Web for other people to use themselves, not to fatten the bottom line of an increasingly terrible company - I dispute your statement that it shouldn't be avoided at all costs. Let them create it from ethically sourced data first; then I might consider it.
posted by JHarris at 7:14 PM on February 15 [24 favorites]


JHarris, thanks for that. I appreciate the perspective, and my own opinion is that largely it should be avoided at all costs, driven somewhat by the points you raise. I have extremely low regard for most "AI" companies and even less for the thieving spineless liars who lead them. But it is a conundrum. Should I guide my teenager not to drive a car because of the harms caused during the extraction of the petroleum that powers it, or the negative health consequences of the particulate matter produced as tires and brake pads are ground down? Rhetorical question. I don't take great issue with what you've written, and am going to think on it some more. Using only the contents of my cranium.
posted by jerome powell buys his sweatbands in bulk only at 7:25 PM on February 15 [6 favorites]


actually chatgpt is very bad. tools are not neutral instruments of our will but they shape our world and set the grooves on which our lives travel. i think the llms should be destroyed, and made illegal globally. i do not think the AGI will take over skynet style. that’s stupid. i think they will lead to the atrophy of the human spirit, the transformation of humanity into a receptacle for what is most probable. they are an engine of terrifying conformity. once the model is trained, all of its possibilities are set. all that is left is the endless repetition of the same, the well trodden, the obvious. we move forward as a species through leaps into the abyss, swerves of creative spontaneity, the urge to oppose the present with another possible future. the llm knows no future, but is stuck in an eternal present, the moment of deployment. it’s technological totalitarianism, and if children born today learn to write only with these models, to create art with them, to research with them, there is little hope that they will retain the imaginative faculty to dream that another world is possible, or the spontaneous spirit that will make such a world possible. the llm is stasis. it is death, and it will kill us if we let it
posted by dis_integration at 7:34 PM on February 15 [45 favorites]


I am a high school teacher and it should be banned unless the assignment is specifically to use it. Many students now reflexively use LLMs even when it would be easier to just answer the question themselves - like a one-paragraph "what did you find most interesting about the reading? What did you find least interesting?" kind of prompt.

There may be some interesting edge cases, but teaching high school is a lot of the time about thinking about what would work best for MOST students. The LLMs ain't it.

Also, in our current political climate where the oligarchs are all interested in LLMs I just don't think we can trust these tools to stay even as "neutral" as they currently are. They will become instruments of social control. If teens need them to think and they are poisoned towards certain political ideologies we are all doomed.
posted by subdee at 7:36 PM on February 15 [38 favorites]


I care whether someone wrote me a card or couldn't be bothered to do it. Same for homework or a work email. The stakes and relationships are different, but a lot of the reasons it matters to me are the same.

"Sounds like ai content" from me means not only is the prose vapid and annoying and probably way too long, but I also can't really trust anything in it either. And that makes me feel like the sender is wasting my time. And if you'd rather waste my time than spend yours to think a little about communicating your thoughts to me, then I'm disinclined to spend any of my time caring about or trying to understand what you've sent me.
posted by SaltySalticid at 7:39 PM on February 15 [31 favorites]


In general the answer is probably to stop assigning homework tho. I think the research on whether homework even works is pretty mixed to begin with.

When I assign homework that can be answered with ChatGPT etc I grade purely on whether students actually answered the question (and not some other, related question) and used specifically what we discussed in class to answer it, or something else they could plausibly have thought of. Sometimes I also look at how much time they took, if we're using a program that tracks that. At this point I don't even care if what they said was correct, just whether they tried to think through the question.

For example, I teach an engineering class. We do all the building, testing, programming, etc in class and the homework will be for students to evaluate what worked or didn't work, in 1-3 complete sentences. Really just want to see their thoughts here.
posted by subdee at 7:45 PM on February 15 [12 favorites]


You can't learn eg how to do calculus without doing calculus though. Lots of things work that way, math is just what I've taught and know the most.

Sure, maybe a lot of that can be done in class and not at home, but then you have to either go much slower or have the students do the teaching at home so they can free up time to do the homework at school. And that can also work.

But a general principle of learning is that you usually have to put in some time somewhere if you want to gain skills and knowledge.
posted by SaltySalticid at 7:56 PM on February 15 [24 favorites]


I expect that the curriculum will adapt to the technology. My kids are learning maths with an assumption that they have a calculator capable of symbolic algebra, which feels like cheating to me! But when I was learning, I had a calculator that meant I never had to learn logarithm tables or slide rules, so I was probably cheating according to someone's definition.

It doesn't seem likely that any punishment is coming for AI companies' scraping and profiting from other people's copyrighted works. It's not like a Napster moment where the copyright holder is a large well-resourced oligopoly vs a collection of individuals and smaller companies: this time the thieves are the well-resourced oligopoly and their victims are the collection of individuals and smaller companies.
Many students now reflexively use LLMs even when it would be easier to just answer the question themselves
I am frequently astonished by the things my kids will ask Siri to do for them (set a timer? tell the time? multiply two numbers under 100?) instead of using their incredibly capable device directly or just their own mental faculties. But this also makes me think they will grow up preferring to believe that the convenience of LLMs to generate text justifies the means used to train them, a bit like how I was pretty happy to use Napster and even used to argue that, hey, the record companies were evil anyway, who was I hurting?
posted by pulposus at 8:45 PM on February 15 [6 favorites]


My wife switched to a flipped classroom some time ago so she may be dodging a bullet here. Students are asked to read or watch material at home and what is normally assigned as "homework" is done in class where she can help.
posted by charred husk at 9:01 PM on February 15 [17 favorites]


But “it seems like an AI wrote this” is a useless, hollow critique. Who cares how someone wrote something?

For me it's a signal that maybe the purported author didn't put any work into it at all. I'm not going to invest time on stuff that might be nonsense that can't even be productively engaged with because even the ostensible author didn't read it. How can someone take feedback when they didn't write the dang thing?

Now, obviously, there are ways to use LLMs that are more of a collaboration or tool, but for me as a reader "sounding like ChatGPT" is still a signal that maybe this text doesn't have any thought at all behind it, which is a real turn-off.
posted by BungaDunga at 9:26 PM on February 15 [21 favorites]


If it doesn’t make sense or is too vague, just ask questions and engage them as a writer.

and, sorry, more specifically: how can I engage with someone as a writer when they didn't write it? For example, if someone asks ChatGPT to write about a topic and it more or less recapitulates the Wikipedia article, it would be no more productive to engage with the purported writer than if they just had ripped it direct from Wikipedia. Now, again, I don't think using ChatGPT as part of a writing process taints the whole thing, but in the case of "oh I just asked chatgpt to write an essay and pasted the result into Google Docs" there's no author to engage with.
posted by BungaDunga at 9:38 PM on February 15 [17 favorites]


This type of conversation tends to make me think I’m on crazy pills. The output of ChatGPT is formulaic, basic and generally trash beyond the “whoa, my microwave just wrote a poem” gee-whiz.

It shouldn’t be allowed anywhere near an essay-writing class. The tech is well suited for grunt tasks like self-documenting code. Even then, it needs editing.
posted by mathjus at 10:09 PM on February 15 [19 favorites]


I see the five-paragraph essay becoming more important because of AI, because it can be completed in a single class setting. I may be teaching writing soon, and what I would probably do is give students a choice: write the essay longhand, like in the bad old days, at a desk, or do it in a computer lab on word processors, but with air-gapped machines.

I don't see how having AI assistance while learning to write will help students in any way over not having it.
posted by JHarris at 10:34 PM on February 15 [9 favorites]


Ultimately, they’re offering you something to read and respond to. Take it seriously, and be brave enough to offer real criticism if it’s warranted. If it doesn’t make sense or is too vague, just ask questions and engage them as a writer.

I mean, i'm not interested in taking the time to read and engage with something that someone else didn't take the time to write? I don't have enough time or energy for all the actual people in my life; if i suspect you're a machine, i'm just going to ignore you.
posted by adrienneleigh at 10:54 PM on February 15 [30 favorites]


In my writing classes, I try to help my students understand that whatever they are writing is in competition with the entire world for the reader's attention, and that they are asking for the reader's time, something the reader will never be able to replace. I try to get them to understand that there is a certain level of responsibility and respect that writing needs to proceed from: is the writer aware of the reader's time, and respectful of it?

When something feels like ChatGPT or any of the other LLMs, to me, that goes against pretty much everything I'm trying to teach to my kids. If the writer can't take the time to organize their thoughts, to work on what they want to say to the reader in a concise, useful way, essentially the writer is telling the reader "my time is more important than yours." Yes, ChatGPT can throw together a ten-page essay for you in mere minutes, but now the reader is supposed to devote more time to the text than the writer did?

Sorry, no. If you want to communicate with another person, start by respecting the time they are devoting to the communication. If you're not willing to give them that level of effort, to devote that much of yourself to the communication, why should they? If it's important enough to be heard, shouldn't it be important enough to think it through clearly? If it's important enough to be read, shouldn't it matter enough to spend some time writing it?
posted by Ghidorah at 10:56 PM on February 15 [34 favorites]


Or, on posting, what adrienneleigh said.
posted by Ghidorah at 10:57 PM on February 15 [2 favorites]


Ghidorah: but you said it much better! (Probably because you're a writing teacher! And i'm a better editor than i am a writer (but i still don't use fucking LLMs.))
posted by adrienneleigh at 10:58 PM on February 15 [2 favorites]


I just left a comment in the MeTa thread about LLMs that is relevant here too, so i am incorporating it here by reference. It's about the assertion that one cannot reliably tell LLM output from human output.
posted by adrienneleigh at 11:08 PM on February 15 [5 favorites]


In my last substantive piece of writing about AI, I wrote (NB: I wrote): "ChatGPT is to improving one’s academic writing what taking a taxi is to learning how to drive."

Taxis have their place, and for people who don't know how to drive they're especially useful. But a taxi ride is completely orthogonal to a driving test.

The issue here isn't producing the "best" possible five-paragraph essay, it's about assessing the students—as in, figuring out what they do and don't know and what they are and aren't capable of. "They" means the students, not their plastic pal who's fun to be with. If teachers don't know how much of a piece of writing is yours, they can't know how much help you need or whether they've explained things well.

That's true whether it's the big end-of-year make-or-break assessment or a weekly homework task. Getting students to produce stuff, whether it's writing stuff or saying stuff or doing stuff, is an essential part of teaching, and portraying some kinds of assessment as "busywork" betrays a fundamental misunderstanding of what it's for.

You might protest that because of how your brain works you can't write as effectively as you otherwise might, and therefore it's okay to use ChatGPT to improve your work. In some contexts, it might be—I'm trying to keep an open mind about that—but not in the context of assessing whether or not you can write effectively.

It's hard to contemplate that aspects of our bodily or mental health or our personal circumstances constrain us and limit our potential. We quite reasonably push back against it. As a society, we've developed all sorts of ways to address those limitations and constraints: medication, education, anti-discrimination laws and assistive technologies. They're all worthwhile and important. Medicine keeps people alive. I use steroids daily to prevent my asthma from flaring up, so that I don't end up in hospital on an oxygen mask. But if I were using steroids to enhance my performance in international sporting competitions, I'd end up losing my medals and getting banned.

Teachers assessing their students are like doctors diagnosing their patients: they need to understand the unassisted or unmedicated person in order to teach or treat them. Perhaps the outcome of their assessment will be "this child can't write because their mind just isn't built that way, so if they end up in white-collar work they would benefit from using genAI as an assistive technology". But they have to assess the child first, and that means allowing enough time to see what the child can do without assistance. When it comes to matters of the mind, that can take a while. If a child is missing a leg, you don't need much time to figure out that they need a prosthetic or a wheelchair to get around. If a child can't write well, all you know is that they can't write well yet.

Promoting genAIs as assistive technologies also assumes morally neutral genAIs that aren't being used by billionaires to undermine entire economies and societies out of their own self-interest. As technologies go, genAIs in 2025 are techno-trousers: superficially appealing inventions that have been taken over by bad guys and threaten to walk civilisation off a cliff.
posted by rory at 12:24 AM on February 16 [22 favorites]


I can't write about this topic without also noting just how demoralising it is to be a teacher marking assignments in the age of ChatGPT.

I mark hundreds of pieces of work in any one year, most of them two to five thousand words long, some of them fifteen thousand words long, and a few eighty thousand words long. A lot of marking over the past twenty-odd years. Knowing, now, that a significant proportion of it is being spat out by a genAI—and I know, not just because some of it includes ChatGPT-hallucinated references but because the collective writing style of the past two years compared with the previous fifteen has changed—is utterly draining. It saps your will to keep reading, to keep plodding through the work to give it a mark and some useful written feedback at the end of it. Marking forty-odd papers at a time was always a forced march, but at least before you knew you were engaging with actual people who might learn something from your response to their work. Now what are we doing? We're "feeding back" to a person who clearly doesn't give a shit, because they used a machine that can't give a shit.

Not entirely, of course. Many students—maybe most (please let it be most)—are still writing their own assignments and wanting to learn. That thought has kept me going so far. But the ones leaning on ChatGPT have pissed in the pool, and I'm not enjoying swimming in it.

(And all this is in a university context. I feel even more for school teachers, who are assessing essays that won't even have telltale fake references in them.)
posted by rory at 12:52 AM on February 16 [44 favorites]


pulposus: when I was learning, I had a calculator that meant I never had to learn logarithm tables or slide rules
I'm so old I was [?1971] taught and used Charlier's check to catch errors when I was calculating mean and std.dev. with pencil and paper. Back then, I could tally up long columns of numbers as well as Bob Cratchit; now I reach for a calculator for (2025 - 1971 =).
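(If memory serves, the check is just the expansion of (x+1)²: you tally Σf(x+1) and Σf(x+1)² in spare columns and confirm that Σf(x+1) = Σfx + N and Σf(x+1)² = Σfx² + 2Σfx + N, where N = Σf. Any slip in the sums feeding the mean or std.dev. shows up as a mismatch.)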

Whatever about ChatGPT, homework is a) too much b) the tail wagging the dog. When we came back to Ireland in 1990, our chap was 14 and we sought advice about Irish practice w.r.t. homework. A slightly older family friend affirmed that he did 3 hours of homework every M-F evening. That never happened in our gaff, not even close. Those boys both grew up to be engineers and dads in their turn. The deficits in schoolwork/ homework balanced by moderate amounts of drinking, snogging, cooking, comics and video games . . . a holistic education.

2013-2020 I taught in a tech college. All student submissions had to be passed through Turnitin, a plagiarism detector. If our students had excelled as teenage academics, more would have gone to University; many of them struggled with writing e.g. research project reports. Turnitin typically found that 10-25% of submissions had been more-or-less lifted from elsewhere. Except one fellow whose report on the evolution of Schmallenberg Virus clocked in at 3%. It was science, Jim, but not as we know it.
posted by BobTheScientist at 1:19 AM on February 16 [3 favorites]


To reply to the first comment, "many" is a total weasel word.
posted by DeepSeaHaggis at 1:44 AM on February 16


Teachers assessing their students are like doctors diagnosing their patients: they need to understand the unassisted or unmedicated person in order to teach or treat them.

In an ideal world, I would not disagree with this. But teachers (especially at the university level where I teach, but also at all other levels) are not trained in assessing learning differences. Assessing learning differences is an entirely different specialty that requires its own years of specialized education and a distinct degree. And my experience is that a minority but still distressingly large number of my peers hold prejudicial attitudes contrary to how the average student learns and most definitely contrary to the needs of most non-average students.

In the medical analogy, teachers are the family doctor (or nurse practitioner - at the university level, where we aren’t required to have a degree or special training in education, though some of my colleagues certainly do). And there are many, many stories of a family doctor’s misdiagnosis leaving a separate issue the patient is having untreated, or of the doctor not even asking the right questions or doing the right tests - analogous to poorly designed learning assessments.


That is all beside the point, though, in a discussion of the ethics of students using LLMs; or of how many of our educational institutions are under-resourced and expected to run as factories by the politicians setting budgets and educational policy; or of the lack of knowledge and skill in using flipped classrooms and active learning methods that would enable teachers/instructors to more accurately and authentically assess students. (When done at a similar skill level as more teacher-centered methods, those student-centered methods have been repeatedly shown in educational research to be far more effective for student learning - but teachers often have less experience with them, and less support as they learn to teach with them.)
posted by eviemath at 4:43 AM on February 16 [3 favorites]


I teach at the university level. Much of what I teach involves writing. All generative AI should be burnt to the ground, full stop. I do run into students like AstroCatCommander way upthread, where that blinking cursor on the blank page just drives them mad because of the way their brain is wired. And it took me a while, but I found a way around it.

On the first day of class, I hand out blank sheets of paper in some weird color I found in the copy room and give them a pen* and a personal prompt: last month's was "Tell us about an experience where you suddenly had to look around and think 'am I the only one here who doesn't see what's going on?' How did you deal with it?" This is a silly enough prompt to pique people's interest, and it's personal, so they're great about spending half an hour scribbling. After we're done, anyone who wants to have their story read aloud gets to hand it to someone else, who reads it. It works great as a first-day icebreaker, AND it gives me a secret power. I then point them to the syllabus, where in boldface with a big old black rectangle it says that if I catch you using generative AI for any reason, you fail and get to go talk to the dean of students. And now, I have a good sample of what you really sound like when you write, so don't try to slip ChatGPT by me.

But then I say if you're the kind of person who just fritzes out at the blank page with blinking cursor, it's okay to throw prompts at AI. Ask it to give you a couple of paragraphs on the topic, and then don't copypaste a word of it, but just use it as a rough general guide for how to structure your writing. This seems to do the trick for most people who are intimidated by writing, which to be fair to them is one of the worst-taught of all subjects/skills.

It helps that most of what I teach is close reading of literary texts, and LLMs cannot do this at all: they spit out obvious nonsense. I showed this to them by asking ChatGPT to explain the first three pages of Neuromancer to us, after we had spent 90 minutes talking about it, and everyone was able to see what hot garbage it was. Though it also turned out to be comedy gold.

If it's just a dumb busywork assignment, it's hard to blame students for wanting a quick and easy way out. And you're never going to convince everyone that doing the hard work of learning to write is worthwhile. But so far, it's proved pretty successful.

* You would not believe the number of students who show up to class without a writing utensil. It shouldn't boggle me, but it does.
posted by outgrown_hobnail at 6:05 AM on February 16 [32 favorites]


We're "feeding back" to a person who clearly doesn't give a shit, because they used a machine that can't give a shit.

LLM sludge work deserves LLM sludge feedback along with a nice consistent D- grade.
posted by flabdablet at 6:08 AM on February 16 [4 favorites]


And the teachers can be AI and we can leave humans out of the process entirely.
posted by girandole at 6:15 AM on February 16


Speaking as a former teacher let me tell you two great truths that no parent wants to hear:

1) Outside some VERY specific things, homework is worthless.

2) Spelling tests are always worthless.

Every teacher knows this. The only time homework has any possible value is when working on something where repetition is key to memorization, and even then only somewhat. I'm not counting essays here as homework; that's a whole different thing.

So why do teachers assign homework and spelling tests despite knowing that both are total wastes of everyone's time?

Because parents demand it.

If a teacher tries to end spelling tests the parents flip their shit, same with homework.

Even before GPT there were homework apps that'd scan math homework and give you both the answer and the steps, and of course every kid knew that if they googled the questions then odds were good they'd find the same site the teacher got them from, along with the answers.

The solution to the problem of kids using GPT for homework is simple: stop giving kids homework. It's pointless bullshit.
posted by sotonohito at 7:10 AM on February 16 [13 favorites]


But “it seems like an AI wrote this” is a useless, hollow critique. Who cares how someone wrote something?

This has been addressed extremely well by others upthread...but I will add this data point: Among other things, I produce 5-minute video tutorials that are very tightly scripted in an attempt to convey as much information as possible.

A final script that has gone through multiple revisions and proofreading typically comes in between 750 and 850 words. Any more than 850 words and it's going to be too long - guaranteed.
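(That range is mostly arithmetic: tutorial narration tends to run somewhere around 150-170 words per minute, and five minutes at that pace works out to 750-850 words.)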

I often receive proposals or draft scripts from potential instructors, and recently I got one that was around 3000 words, so 4x the length that it needed to be. That by itself was actually not a red flag; many people who are passionate about their subject write far more than necessary, and the hard part is editing it down.

I got a couple of paragraphs in and realized it was a few threads of an idea that were supplemented by LLM garbage. (I should mention that this individual was an expert in his field for more than 25 years, and yet...) What I should have done is stopped reading right there and then...but I'm a professional and in good faith it's my job to read things to the end and offer feedback.

In the end, it was a complete waste of my time. COMPLETE. WASTE. I've read hundreds of these draft scripts over my career and my default and reflexive mode is to put on my "feedback hat" as I'm reading. I was so angry by the end of the script because there were no grounds for criticism or discussion except "Do this over again completely and in your own words." (In actuality, I decided I would not be able to work with this person in any capacity, because...uh, they were the type of person who thought it was OK to submit this to begin with.)

So yeah, it matters how someone wrote something, because it puts the onus on the reader to add yet another layer to the already difficult (and time-consuming) job of analyzing and critiquing writing and the ideas behind it.
posted by jeremias at 7:12 AM on February 16 [14 favorites]


A well designed homework assignment (which is not all of them, to be sure) is exercise for the mind. Getting ChatGPT to do it for you renders the whole thing as meaningless as building a machine to lift weights for you. I can, begrudgingly, acknowledge that there may be uses in which it is a beneficial tool to help people think; the problem lies in using it as a substitute for thinking.
posted by Horace Rumpole at 7:15 AM on February 16 [5 favorites]


But “it seems like an AI wrote this” is a useless, hollow critique. Who cares how someone wrote something?

Well, as someone who isn't reflexively and virulently anti-AI, my answer would be that the person didn't write it. AI has its place in a lot of things, but if you think you wrote an essay by punching the prompt into an LLM then you're simply wrong. You didn't write that; the AI did. It wasn't an aid, or a help - you just outsourced your writing to a third party, and it's no different from back in the pre-LLM days when a student might bribe a smarter student to write a paper for them.

Now, you can question whether trying to teach everyone how to write a few comprehensible and semi-decent paragraphs is worthwhile; given how many adults passed high school and are demonstrably incapable of writing even a few short sentences, I can agree that either we're doing something wrong or we just can't do it.

But if you suffer from the delusion that putting the words "rite an esay about george washington" into the prompt box in GPT is just a different way of writing an essay then you're simply wrong.
posted by sotonohito at 7:18 AM on February 16 [6 favorites]


most of what I teach is close reading of literary texts, and LLMs cannot do this at all

I also teach close reading of literary texts, and this was not my experience at all when I tried feeding some sonnets (Spenser, Sidney) to ChatGPT-3 a couple of years ago, using specific prompts (e.g., focus on explaining the metaphors, literary devices, and so on). Its output was not brilliant, but was above the level of a good portion of my actual students. I would caution against the view that it's going to be easy to tell when it's an AI doing a close reading of a literary text due to obvious nonsense. If any thought at all is given to writing the prompts, it can produce relatively proficient work.
posted by demonic winged headgear at 7:41 AM on February 16 [2 favorites]


My experience with LLMs is that they almost always work best when used, as a tool, by a human who is already an expert. Those folks typically seem to use the LLM to generate small results (eg a paragraph, not an essay) and then iterate. It speeds their work up rather than replacing it. I’ve seen genuinely amazing work done that way! But that relies on the fact that the human knows what they are doing!

The same process seems like a huge impediment, rather than an aid, if the human is in the early stage of learning. And especially students whose goal is less “learn how to write” and more “pass this class I can’t see the point of”. At that point it’s a substitution, not a tool.

In most topics I’d be happy to say “just flip the classroom”. Homework generally adds little value. But there are some topics, especially at the university level, that really do just require working through the same concept for many hours. Learning to solve problems. And it’s hard to see how you could fit that many hours into a classroom.

(Because it feels necessary to preempt these two common criticisms on MetaFilter: the idea that LLMs cannot be used productively is simply false. I am not great with them myself, but I’ve seen too many sincerely excellent people use them to accelerate their work to dismiss them out of hand.

And the idea that this tech can be banned at this stage seems extremely unrealistic. Even if model training were severely restricted on environmental grounds — which I would support! — the set of “models you can run on a MacBook” is now so high quality that we’ll have to deal with these issues in education even if we close all the data centers.)
posted by learning from frequent failure at 7:42 AM on February 16 [8 favorites]


Warner's been doing great work on teaching writing in the age of AI. In addition to his book he's a regular columnist at Inside Higher Ed.

Another great resource is the Refusing GenAI in Writing Studies website from Maggie Fernandes, Jennifer Sano-Franchini, and Megan McIntyre.
posted by audi alteram partem at 7:53 AM on February 16 [4 favorites]


Just to bring it back to the topic of the podcast episode linked in the FPP, the question it grapples with is not whether AI should be used by children to learn to write, but rather: in the age of LLMs, how can adults (teachers and parents) advocate for the value of learning to write without LLMs? How do we convince teens that placing some limits on their technology use is good not just for their learning but for their well-being? As someone who spent a little over a decade teaching in college classrooms, I can tell you draconian "all stick" approaches won't work - at least not for a lot of students - you need a "carrot."

The "carrot" is that writing can be fun. It can help one develop their "voice" or refine one's thoughts, or connect with strangers, etc. When writing a dissertation, I always had a general idea of what I wanted each chapter to be, but I never fully understood my thoughts until I had spent days writing them out - that process could be frustrating, but it was also often fun. How to capture that in shorter assignments appropriate for high schoolers? It's a worthwhile question, and I don't think the answer is to keep on with the five-paragraph essay.

The part of the episode I found particularly striking was when John Warner mentions one diligent student who came to him in one of his college writing courses and showed him a detailed list of writing rules one of her former HS teachers had about what transition words to use, how to structure an introduction, etc. She wanted to make sure he approved. And as it then goes on to discuss, this very formulaic way that the five-paragraph essay gets taught is essentially teaching students how to write like an LLM - no wonder some kids are opting to just ask ChatGPT.

The section of clips from homework "influencers" on how to use LLMs and not get caught was mildly terrifying.
posted by coffeecat at 8:32 AM on February 16 [7 favorites]


I think that kids today still care about being "authentic." Real. Whatever. It still sucks to be a poser, to not have an original thought, right? Then don't use AI.

Just say no never works, I'm aware, I was a D.A.R.E. kid. But AI is deeply uncool the way real art isn't.
posted by tiny frying pan at 8:46 AM on February 16 [2 favorites]


The purposes of school are:
1) To physically warehouse children so parents can work.
2) To rank and gatekeep children so class structure can reproduce itself.
3) To accustom children to the type of working environment the economy will need them for: obedient factory workers and retail clerks, entertaining performers, etc.
4) To inculcate the values of the system of political economy.
5) To modify rewards and behavior so that children learn to pursue artificial social goals instead of natural impulses.

So ... homework (including writing) used to play a role in much of that, and now does not. Software/tech handles the babysitting, the ranking, the gamified work environment, the indoctrination and the behavior nudging. So ChatGPT and video games are about all we need the schools to provide (that and pop-tarts). The benefits of being able to write, instead of deploying writing-software, are a luxury for those who want it but not a need at scale. It's a pity, but like horsemanship and smithing, it's now just a hobbycraft.
posted by No Climate - No Food, No Food - No Future. at 9:03 AM on February 16 [4 favorites]


What a stupid enumeration of the "purposes" of school.

pursue artificial social goals instead of natural impulses.

Lol.

Sounds like school's pointless. May as well bin it. 🙄
posted by DeepSeaHaggis at 9:10 AM on February 16 [3 favorites]


It’s terrifying to wade into ChatGPT discussions on MetaFilter, but I have a counterpoint use case: I use it as a tool to bounce ideas back and forth (usually its ideas are stupid but it jogs my thinking), then build outlines that describe my ideas. I then feed it my ideas to write, with descriptions of tone and voice and tense etc. When I’m satisfied with the output, I then get into my word processor and edit it. I edit it thoroughly, and I have no problem calling the work of fiction I’ve created my writing by the time it’s through. LLMs help me get past the blank page, which my ADHD makes feel completely insurmountable. They have made it possible for me to be a writer to the full extent that being a writer is part of my identity.

That said, the thesis of this article applies. I can only do this because I already know how to write and edit. But I also remember being in high school in ‘98-‘01 and not being allowed to use the internet for assignments, which was obviously stupid then and patently impossible now. The technology isn’t going away, for better or for worse, and pedagogy needs to catch up, fast.

All the usual other caveats about various companies being evil and using AI to replace workers apply.
posted by BuddhaInABucket at 9:14 AM on February 16 [3 favorites]


I use it as a tool to bounce ideas back and forth (usually its ideas are stupid but it jogs my thinking)

Like I said, this is legit, for that blank page anxiety. But, due respect, I would have a tremendous problem describing the final product as "your" writing.
posted by outgrown_hobnail at 9:20 AM on February 16 [1 favorite]


I have accepted I’ll never convince some people. But the ideas are mine and by the end of multiple drafts the words are mine. Where do we draw the line between “I wrote this” and “A machine wrote this”? Language is a kind of technology we use to communicate. Writing is a technology to transcribe language. LLMs are a technology I use to expedite my writing and overcome a handicap.

We use tools to augment every other human activity. The reasons why this particular tool is so controversial have more to do with the people and companies behind it, and with how it is used by other companies to displace and disadvantage labor. The tool itself is simply a tool and can be used for any end that language can be used for, good or evil.
posted by BuddhaInABucket at 9:28 AM on February 16 [3 favorites]


Parent here. As it happens, my day job involves working with "AI" (LLMs and Langchain), but my comment is about the larger context here.

I think we really missed an opportunity with lockdown to earnestly inquire into our approach to raising and schooling children. Despite some progress, our current approach is largely based on regimentation, conformity, and compliance. Those of our kids who are PDAers or otherwise neurodivergent are the canaries in this coal mine, but instead of listening to them we try to punish and suppress them.

What we can learn from some indigenous societies is that by simply working with, instead of against, a child's natural desire to learn, explore, and socialize, we can do so much better - especially if the child has not already learned to suppress and subvert their own intrinsic motivations under the boot of external adultist ones.

As for AI: if we look into our hearts, we know the right thing to do. Instead of punishing kids for using what is around them, perhaps we should look at what adults are doing here? Pushing billions of dollars and fossil fuel into subsidizing giant, greedy, opaque oligarch empires which steal our data but guard their training pipes? What instead if we force these companies to fully disclose their models - not merely open weights, but truly open source reproducible from first principles? What if we allow kids to create their own AI algorithms, not mindlessly consume corporate ones? What if instead of centering the corporate impulse to emulate humans with machines we tried imaginative ways for machines to do things that don't consist of impersonating humans? What if we severely punish tech companies that promote mindless engagement and addiction, instead of rewarding these corporations and then punishing the kids that use their creations?

Liberate kids. Constrain corporate greed.
posted by splitpeasoup at 9:29 AM on February 16 [12 favorites]


As a parent, I agree with sotonohito: homework is bullshit. Its main effects are to cut severely into the daily downtime that kids need in order to remain effective learners and to create opportunities for conflict between parents and children that both would be better off without.

When I was a kid, homework began in third form (year 9), the rationale being to teach students the self-directed study skills they'd need after leaving school and going on to higher education. Since then I've seen it creep younger and younger until even kindergarten kids are routinely expected to do it now. It's so stupid.

I count this as one of those ways in which education has definitely gone backwards over the years, along with routinely walking to and from school now having been entirely supplanted by twice-daily SUV traffic jams.
posted by flabdablet at 10:00 AM on February 16 [6 favorites]


This Ben Thompson piece goes pretty deep on LLM-assisted writing and echoes some of the themes I’ve seen. Excerpt from his assessment of a writing experiment he gave to OpenAI Deep Research, a $200/mo service:
The second answer was considerably more impressive. This question relied much more heavily on my previous posts, and weaved points I've made in the past into the answer. I don't, to be honest, think I learned anything new, but I think that anyone encountering this topic for the first time would have. Or, to put it another way, were I looking for a research assistant, I would consider hiring whoever wrote the second answer.
Successful uses of LLMs for professional work look a lot like successful uses of internships. Interns are generally but not specifically informed, motivated but not self-directed, and require regular check-ins and additional work from supervisors to succeed. In the context of corporate work I’m seeing a lot of companies trade this kind of production for the non-LLM benefit of internships that produce new experts. The present benefits are there, but it feels a bit like eating your seed corn? Where will tomorrow’s contributors come from as companies swap out their junior staff for robots?

In the context of education I can’t think of a reason why a high school or college student should hire an intern.
posted by migurski at 10:53 AM on February 16 [5 favorites]


The solution to the problem of kids using GPT for homework is simple: stop giving kids homework.

I agree with much of your longer post sotonohito, but another answer could be to just totally overhaul homework and learning objectives. For example, in a first-year college writing class I had assigned a graphic novel, and I wanted the students to really think a bit about how visual material is a form of communication because I didn't want them to just rush through reading the text bubbles. So I assigned them as homework "pick one panel and describe all of the visual information contained within that panel - make your description as detailed as possible." And then the next part of the assignment was to get them to reflect on the experience of close looking and translating the visual to text. Students really liked this! Even the "bad" students. They generally didn't find it to be "busywork" but a practice in thinking in a different way that was interesting, if not kinda fun.

Likewise, there's Warner's example of a writing assignment from when he was a kid: write instructions for how to make a peanut butter and jelly sandwich, then follow your own instructions - which, because he forgot to mention a knife, meant getting his hands messy. And so he learned that writers need to think about their audience. He's in his 50s, and he still remembers that assignment - clearly homework can have an impact!

Of course, as long as we are under the shadow of "No Child Left Behind" testing bullshit, we'll never get there. But I'm heartened that various thoughtful people are thinking about reform, because good lord is it needed.
posted by coffeecat at 11:10 AM on February 16 [4 favorites]


stop giving kids homework. It's pointless bullshit

tbh I'm not sure how you'd learn math without doing problem sets, and unless you're doing the flipped-classroom thing, aren't they going to be done outside of class time by necessity?
posted by BungaDunga at 11:15 AM on February 16 [2 favorites]


I can't seem to delete my previous comment, but I'll rephrase.

However joyous and useful the content of an education can be for some students, some teachers and some communities/fields of knowledge, the institution of full-time compulsory youth education operates under different imperatives. The pandemic was a useful opportunity to make this more obvious. From radio and tv to digital libraries and learning software, online and irl communities of hobbyists, the task of delivering content and testing skills has long been liberated from the school for those who want to learn something.

Compulsory homework for the reluctant student is now technologically impractical for educational purposes: as remote-learning laptop surveillance-state techniques reveal, we can't stop cheating, but we can make onerous and privacy-intrusive requirements.

Driving a car 26 miles doesn't prepare the driver for running a marathon, but we don't need to make marathon running a compulsory requirement of having an income. School - prison = ?
posted by No Climate - No Food, No Food - No Future. at 12:52 PM on February 16 [4 favorites]


Where do we draw the line between “I wrote this” and “A machine wrote this”?

When a machine writes it. In that case, you didn't write it, and so it is not yours.

I've been editing some friends' books lately. It's so much fun, and I get to add a million of my thoughts to the conversation I'm having with their work.

But the books are their books. They wrote them. Not me. Even if I had edits for every single sentence, these would not be my books.

This has to count. If communication with other people is to mean anything, the ultimate source of those words, and the care behind them, has to make a meaningful difference.

I have erased a bunch of comments I've written in this thread, because other people are making good points and I don't want to detract from those. I love me some LLMs and use them all the time and have lots of opinions about them but nothing they say is mine, none of their prose is mine, nothing they come up with is something I should or would claim, if nothing else because oh my god I can write so much better than them. Because I'm human, and because my species has spent millions of years evolving as social animals who care about communication. Sounding boards? Sure! Extremely expensive alternative to Google? You betcha! But generating prose to offer to someone else as my own words?
posted by mittens at 1:19 PM on February 16 [13 favorites]


When I think of ChatGPT and the like, I think of when I was laid off and wrote a farewell email to my colleagues. One of my colleagues, who was also laid off, sent a farewell email too, and lo and behold, he had copied, word for word, about 50% of what I had said in my email - my personal sentiments.

People have always plagiarized to avoid research or make themselves look more impressive, to lie about accomplishments. But the use of these tools to generate your innermost human thoughts is pretty unsettling to me.
posted by girlmightlive at 1:26 PM on February 16 [9 favorites]


tbh I'm not sure how you'd learn math without doing problem sets, and unless you're doing the flipped-classroom thing, aren't they going to be done outside of class time by necessity?

Are you talking grade school? They do the problem sets in class, in addition to receiving instruction in class. Other exploratory and creative activities that are equally important in learning math also happen in class, as does self-paced learning, structured and scaffolded to help students learn how to learn - which wouldn’t require the direct involvement of the teacher, but which should still be done in class, because kids need down time and time to be kids, and because lots of research clearly shows that the primary outcome of homework at the grade school level is to reproduce and exacerbate socioeconomic differences between students. That’s why kids go to school and are in class for a fair chunk of each day.

At the university level? Yes, that’s where a flipped classroom model can be useful. Supporting first year students in their study skills and supporting instructors in developing effective courses in this teaching model are both key to the success of such initiatives, of course. (Which is also true of more traditional educational models, it’s just that such support for students and teachers in those models has often occurred already previously.)
posted by eviemath at 1:35 PM on February 16 [3 favorites]


Once our writing program director gave faculty at a workshop (higher ed, undergrad level) two short papers written by students and asked how we would grade them. Most gave the technically correct paper the higher grade, but she pointed out that the one that appeared less polished had some real thought and originality, and the correct one didn’t. (Incidentally, the correct one was from a student from a good suburban US high school, and the expressive one was by a first-generation student who did not come from a community with well-funded schools.)

So much depends on what both teachers and students think writing assignments are for. Is it to learn how to write grammatically correct and fluid prose or is it to learn how to think or is it to learn how to convey what you think in a way that others will find informative or is it to show you have mastered some material or to practice writing in a particular style expected in a profession or…. Or maybe you assign writing because students are supposed to write papers and you are supposed to ruin many evenings and weekends marking them up because it’s how you experienced school?

For students, writing assignments are too often tests of being able to produce writing that sounds like someone else wrote it with someone else’s ideas and not too many mistakes. It’s no wonder they want to use LLMs. I think Warner has great ideas but it’s hard for both instructors and students to overcome old habits.
posted by zenzenobia at 1:37 PM on February 16 [6 favorites]


Mittens, thanks for your perspective. You’ve made a case that I am not the writer. What does that make me, then? I make detailed changes to add back the human element after ChatGPT writes down my ideas based on what I feed it. Am I a storyteller with an intern? Because I don’t think my work of art is any less capable of transmitting meaning and emotion just because an LLM created the scaffolding I worked around.
posted by BuddhaInABucket at 1:59 PM on February 16


But this is kind of the problem with LLM writing. Ideas are not worth much, in writing. Everyone has ideas. It's just what brains do. Too much focus on the value of ideas gives you something like "high concept" in movies.

Writing is execution. Writing is taking a particular context (the relationship between you and someone you're texting, the relationship between you and the boss you're emailing, the relationship between you and a mass audience you cannot see or hear but who is reading your novel), and reflecting on that context, reflecting on your audience, sometimes trying to persuade them, sometimes trying to get ahead of them to offer them a surprise. It's manipulative in a good way. You want someone to break your heart, then they have to understand what it means to have a heart.

Certainly people can jot down ideas and give them to someone else to write; James Patterson has made a career of it. But the whole problem with James Patterson is that he has become something other than a writer. He's an empty house full of ghosts.

If you read books on writing, it's the most depressing genre ever. Nothing makes you give up your faith in humanity like those books that tell you the ten, twelve, twenty-six beats all fiction has to have. "My analysis of Star Wars (or Jaws, or whatever example of blockbuster late-'70s cinema) will show you how novels work." There's this awful sentiment that, if you just have the right skeleton, boiled down until all the skin and meat have dissolved off of it, you can take it out and build a working beast out of it.

But that's not writing, that's taxidermy.

And it's what LLM writing is like. I have, out of curiosity, tried various LLMs at nearly every stage of writing, just to see what they're capable of. And it's sad. It's so sad. There's a complexity to human communication that just gets drained out. It's anticollaborative. Collaboration with another human writer constantly surprises you, keeps you on your toes, hurts your feelings sometimes, but feels like a relationship. Writing with an LLM...there's something lonely about it, talking to someone who will never really understand.

Anne Lamott had everyone talking about shitty first drafts; I've heard other folks talk about the vomit draft, or the gossip draft. It's such an essential part of writing, getting things wrong. The LLM doesn't know how to get things wrong. It doesn't know how to experiment, it doesn't know how to try and fail. The little...communication completion loop I've tried to describe before, a little perfect soap-bubble of words, is contrary to the open-endedness of a draft.

I suppose what I want to say is something like this: We fool ourselves a little with the wrong analogies in writing. There's no such thing as an outline. There's no such thing as a scaffolding we build around. Writing is a unitary thing. The style is the structure is the meaning. We cannot inject humanity into an LLM-generated text the way you might inject marinade into a turkey. It's too far gone, it's too late, it's something other than writing.
posted by mittens at 2:34 PM on February 16 [15 favorites]


Driving a car 26 miles doesn't prepare the driver for running a marathon, but we don't need to make marathon running a compulsory requirement of having an income.

It's interesting you use that comparison - it actually gets brought up in the FPP episode. People don't train for a 5K because they need to get from point A to point B faster. If that's the goal, they'd be better off biking or driving. People jog/run because they find it beneficial to their mental and/or physical health, they like getting outside, it's a good activity to do with their dog, etc.

I think everyone should be given the opportunity to learn to write, not because I think everyone will have a job that requires writing skills (although written communication is part of a lot of jobs, at least to some extent), but because, as some in the podcast argue, everyone can benefit from learning to write: ultimately, writing = thinking. And everyone should get that chance in school. But yeah, it will require a major rethinking of how schools should be structured and what the learning outcomes/goals should be.

On another note, BuddhaInABucket, your description of how you use ChatGPT sounds like you're mostly producing your own writing (if I'm reading you correctly). To me, what matters is whether you, the human, are doing the work of organizing your thoughts and crafting your voice. Feeding an LLM a prompt like "Write a five-paragraph essay arguing [x] about the Odyssey" or whatever is obviously not writing, even if you edit what it produces. On the other end of the spectrum, I've heard a journalist say they find it useful to ask Chat "What are the five top counterarguments to [x]", where [x] is an argument they want to make. They find it useful as a way to anticipate negative reactions readers might have, ones they might want to address in their article. That seems totally fine to me.
posted by coffeecat at 2:44 PM on February 16 [2 favorites]


Boy, if you only look at the negatives of school and homework, it sure seems like a trash system. Good thing there's nothing positive about it!
posted by DeepSeaHaggis at 3:10 PM on February 16 [4 favorites]


Some of my friends have gone pretty hard into trusting LLMs, and talking to them really reminds me of a CEO having a manic episode (*cough*). When people are having interactions like this, how is it not going to launch some kids off the deep end?
posted by lucidium at 3:34 PM on February 16 [2 favorites]


Mittens: I really do understand what you’re saying, but I definitely disagree. I consider the initial LLM output my garbage draft (I love "vomit draft"! I might switch to using that), and my first revision of that the real first draft. My process includes all the same meanderings and rewrites and discoveries and changed plans as what you consider “real” writing. I’ve written that way my whole life. Writing the new way takes away the parts that are torturous for me. I disagree with your point that ideas are not key to good writing: the way you execute your writing is one of your ideas. I don’t believe that struggle is what leads to good writing. It’s just struggle. I love having a tool that helps me get where I’m going without the struggle.

Coffeecat: I like the distinction you make between directing the model vs. just asking it to produce a final output. I think too many people assume that’s the only way people use it.
posted by BuddhaInABucket at 3:55 PM on February 16 [3 favorites]


I think getting past the blank page is a skill in and of itself, and maybe isn't a skill that's taught particularly well. I had to figure out that skill on my own, and I'm as ADHD as anyone. Using ChatGPT doesn't allow you to develop it, and definitely doesn't help you refine and practice it, leading to bad outcomes. If someone says your writing reads like AI, and you used AI, it doesn't matter what they're reacting to. They're correct.
posted by fnerg at 3:40 AM on February 17 [3 favorites]


Five-paragraph essays get the job done -- the essay equivalent of a canned tomato soup & grilled cheese (Kraft Singles on white bread) dinner. I'd probably still do schlock five-paragraph essays if I were in school now. It's the expected response to something like "Compare and contrast The Gift of the Magi with The Lottery" -- show you've read and grasped the broad strokes of both works and can assemble a reasonably coherent compare-n-contrast argument (which you have probably been taught to do in the preceding week's instructional time).

Doing it in rhymed couplets will not result in a passing grade. Doing a twenty-page compare-n-contrast will also not earn a passing grade, even if you have interesting things to say for the whole damn twenty pages. Writing in an authentic voice (lol, they never want this even when they say they do) and rambling happily through assorted literary stuff pertaining more-or-less to The Fall of the House of Usher in five parts (the prologue would be longer than the play), because that's more interesting to you... also not going to generate a passing grade. #askmehowIknow

I will state for the record that failing to play along, expressing your individuality and so forth, did not please either the teachers or the parents (at least mine). I tried, believe me, I tried. That was not the way. Eventually I realized that both the teachers and the parents just wanted tomato soup and grilled cheese, the same damn thing, over and over. So I did that. Generate the expected output, get a passing grade (typically a very good passing grade). And dang, given how flipping rote and canned and annoying that whole experience was, I can't blame people for using ChatGPT to do it these days.
posted by which_chick at 6:53 AM on February 17 [4 favorites]


Are you talking grade school? They do the problem sets in class, in addition to having instruction in class, and other exploratory and creative activities that are equally important in learning math also in class

I was thinking more high school-level math. But yeah, if anything, math homework is probably worse than useless in lower grades, I didn't have any at all and wouldn't have benefited from it.
posted by BungaDunga at 7:50 AM on February 17


We cannot inject humanity into an LLM-generated text the way you might inject marinade into a turkey.

It's late, my old eyes are bleary, the +1.5 screen glasses are not really cutting it, and I initially read that as "inject marmalade into a turkey" and thought I was in the wrong thread because actually that sounds kind of delicious.
posted by flabdablet at 8:44 AM on February 17 [4 favorites]


BuddhaInABucket: If we cut out the LLM and you did the same process with a human ghostwriter, would you argue that anything significant had changed?
posted by sotonohito at 8:51 AM on February 17 [1 favorite]


“Someone gave you something to read and respond to” is the worst defense of AI slop I’ve ever read.

I should spend my finite time and energy reading something another person didn’t bother spending their time and energy writing? That essentially makes me the victim of a Gish Gallop rope-a-dope. I can’t think of a more contemptuous gesture.
posted by The Monster at the End of this Thread at 11:12 AM on February 17 [6 favorites]


Are you talking grade school?

I was thinking more high school-level math.


Ah, I think we had this confusion in a previous education thread. I count high school as grade school (grades 9-12, specifically, in all of my US experience; it does vary slightly in Canada).
posted by eviemath at 12:12 PM on February 17


Sotonohito: I guess I’m just not communicating well enough how involved I am in what the LLM is putting out (or maybe I don’t know how deeply involved some authors are with their ghostwriters?). Either way, I accept that it’s a different way of doing things and not everyone is on board. I’m transparent about my process, though, and I’ll never accept anyone calling my work AI slop.
posted by BuddhaInABucket at 12:46 PM on February 17


Some points from a college writing professor:

1. The five-paragraph theme is my sworn enemy. It is a cookie-cutter garbage template that removes thought rather than encouraging it. If you're a writing teacher, you should not be seeking polished final-product papers; you should be seeking opportunities to assess how students think in response to situations that require audience awareness. I spend a ridiculous amount of time un-teaching the five-paragraph theme. Here's a clue: do you like to read five-paragraph themes? Do they strike you as engaging or thoughtful? No. I try to teach engaging and thoughtful writing.

2. There is a very large practical gap between homework in high school and homework in college. United States law and the credit hour require students to spend time on coursework both in and out of class. In my teaching, I believe the direct method of instruction works best: students will learn to write better by writing than they will by me lecturing about writing. Practice works, whether it's Michael Jordan's three hours per day practicing 3-point shots when he was in Chapel Hill or Army soldiers at sniper school putting thousands of rounds downrange. If you're going to write, learn by practice. That's my primary gripe against LLMs, but at the same time, my experience as a veteran tells me, "train like you fight." Students who are not interested in performing MeFi-style virtue signaling will use LLMs in the real world, so they should learn how to use them well. But, to the broader point, the talk about "eliminating homework" will not fly in college. Still, this semester, students are spending at least 30 minutes every class session on sustained writing, and I leave the readings and the additional instructional materials for homework.
posted by vitia at 1:21 PM on February 17 [3 favorites]


And please, can we see the "I will never use LLMs" declaration go the same way as the "Well, I don't own a TV, so. . ." cliché?
posted by vitia at 1:23 PM on February 17 [4 favorites]


FWIW, the linked author's previous book is Why They Can't Write: Killing the Five-Paragraph Essay and Other Necessities. Warner is a really smart, thoughtful writer.
posted by vitia at 1:46 PM on February 17 [1 favorite]


As someone old enough to never have to do it again, I say: Homework is bullshit.

Why in god's name can't kids come home after a long day and CHILL? Maybe one reasonably scoped assignment per class over the course of the semester, something interesting, OK. But busywork just burns kids out while attempting to teach them that work never, ever stops.
posted by 2soxy4mypuppet at 4:59 PM on February 17 [8 favorites]


Not to belabor the point, but I don't think I could have learned, say, AP Calculus in high school without doing problem sets outside class time, and it did actually prepare me reasonably well for a lot of undergrad-level math. For calculus or algebra, problem sets are the equivalent of having to read chapters of The Grapes of Wrath for English class; they're not busywork.

The blood/tears involved in some of the elementary-age homework assignments (how do you think the papier-mâché diorama is getting made?) were, indeed, a strong net negative all around.
posted by BungaDunga at 6:06 PM on February 17 [3 favorites]


Problem sets don't necessarily help much in math. Unless things have changed, math teachers go over everything again and again and again in lectures, which is way way more than enough for some. Dioramas, on the other hand... I never met diorama homework that was a waste of my time.

In conclusion, learning is a land of contrasts.
posted by surlyben at 8:51 AM on February 18


I mean, I'd say "homework is bullshit" is about as true as "education is bullshit." Homework and education can both be designed badly, and they can both be designed well. Nobody is advocating for "busywork [that] just burns kids out while attempting to teach them that work never, ever stops." Warner is a longtime critic of the contemporary education system - he might believe deeply in the potential of education/learning, but that doesn't mean he's unaware of the many current problems in the American system.
posted by coffeecat at 9:12 AM on February 18


Would everyone hate the five-paragraph theme so much if it weren't for the conclusion paragraph? I ask because that was always the killer for me. Teachers would say--with a kind of weary, half-joking seriousness--"first, tell me what you're going to tell me, then tell me, then tell me what you told me," and like, if you already know how stupid that is, why are you assigning it to me in that format? But the minute I didn't have to come up with a conclusion that directly repeated everything I'd just said (twice!), paper-writing got much easier.
posted by mittens at 9:17 AM on February 18 [1 favorite]


Lots of folks have pointed out parallels between the 5-paragraph "beginning, three things in the middle because humans like threes, end" form and other genres—like, say, the classical rhetoric "exordium, three things in the middle because humans like threes, peroratio" form. And yeah, I know plenty of exhausted teachers in high schools where even managing to communicate the tiniest amount of learning—in between the competing pressures of classroom discipline, student distractions, and relentless testing—is an enormous challenge. So I can see the pedagogical rationale for the simplest possible template.

But it's a template. A fill-in-the-blanks exercise. And if someone assigns me a fancy fill-in-the-blanks exercise and calls it "teaching," I got no problem with the student impulse to use a fancy autocomplete to fill in the blanks in that template.

The "tell me what you told me" conclusion is one variation on the genre of the fifth paragraph in that template—other variations include the "rah rah! everything's gonna be great," the "nobody knows because everybody has an opinion," and the "we need more justice for everybody." Those are moves that they're learning, and I'm fine with that, but when I talk with them one-on-one about conclusions like that, the advice I offer that most frequently gets them nodding or frowning and writing down some notes is, "Give me the 'So what.'"
posted by vitia at 11:09 PM on February 18 [2 favorites]


Clancy of Culturecat once asked me what I wanted the final word of my dissertation to be. That was a really smart question, and it's made me think hard in the decades since about what conclusions should do.
posted by vitia at 11:17 PM on February 18 [2 favorites]


There’s no way that you can tell if a decent, adult writer used AI. I think a lot of people are kidding themselves on this point. I see numerous comments here about AI writing being too long, but when I run drafts through AI, it consistently shortens my draft, adds clarity, and, in the case of personal messages, adds a bit of east coast gruffness that I edit out with my Midwest sensibilities (which inevitably adds words back). Please don’t assume that overly verbose = AI.
posted by agog at 8:17 PM on February 19


There’s no way that you can tell if a decent, adult writer used AI.

I'm not sure people are really arguing that you can always tell, no matter how heavily the AI was used. In fact, the issue of detection is tricky, as the failure of plagiarism/AI detectors has shown us. It's kind of like AI-generated images, where for a while you could reliably count fingers to judge: the technology can get ahead of many of the tell-tale signs. As for any LLM-ism in the writing--the conciliatory cheer, the length, the 'let's examine this from all sides neutrally' style--certainly you can prompt out those features, or edit them out afterward. But the labor intensity increases. If you want an LLM to mimic an actually decent, actually adult writer, it's going to take some work!
posted by mittens at 4:44 AM on February 20 [2 favorites]


I teach high school and the last few years have been very interesting with regards to AI.

At first, the students most likely to use it were high performing students who cared a lot about grades and were also carrying a heavy academic and extracurricular load, and seemed to think this could be a shortcut.

Now, many of those students seem to understand that even if the AI use is not 'caught', it will usually result in a lower grade than they could earn independently. Most of them either learn how to use AI more effectively (which can be quite time-intensive on its own, as well as intellectually engaging and educational), learn to use it more selectively (for teachers who don't care), or just do the work and get the good grades they were already getting anyway.

Teachers who don't care - these are generally teachers who might be considered 'easy As'. It doesn't mean they are necessarily bad teachers. Some have just given up on the quagmire that is grading (pressure from students, parents, administration, district, and school board, plus many other legitimate critiques of grading systems), or have run out of patience with the salesmanship part of the job. They want to focus their teaching energies on the students who are bringing their learning energies.

I say 'caught' because my colleagues and I mostly no longer officially 'catch' students using AI, even when it's very obvious. I treat it the same way I would if I suspected they were getting overly enthusiastic parental/sibling help. I know I cannot prove it. I will usually just give the work the grade the AI has earned (usually quite low, because the work is minimally responsive to the prompt and expectations and rarely succeeds at demonstrating understanding/progress/mastery of the actual skills/content being addressed). Depending on the situation, I will have conversations with the student and/or parent that don't include accusations, and depending on how that goes, I may give them an alternative opportunity to demonstrate their understanding/abilities.

One piece that often gets lost in the AI conversation is that school writing tasks are not only for building writing/communication skills. The more different ways a student actively interacts with content, the more likely they are to understand it, identify their own questions and critiques, engage with it, and remember it. AI used as a substitute for writing also generally undermines those benefits completely.

AI is not only an issue for homework. To the extent we use computers for writing and research in the classroom, it is impossible for one human being (me) to accurately surveil 26-34 students' laptops, and the district has not been able to effectively block access to AI while preserving access to the internet.

Many students at various levels of academic performance are able to understand that the purpose of written work is to exercise and improve their learning and skills, and they are willing to forgo AI if they actually believe that they can earn as high a grade with their own imperfect work as with AI-produced work (which can seem perfect to them, even if it is obviously flawed to the teacher). One thing I will sometimes do is assign a (low-weight) grade for effort/completion alongside an info-only 'grade' so they can see how they would have done if the work had been graded for quality. This can help them feel brave enough to write and get the benefits, while also letting them have the satisfaction of seeing their progress and growth.

Ultimately, students who rely on AI to substitute for learning are hurting themselves, and a teacher can only do so much for a student who is opting themselves out of learning - whether that is through AI, cell phones, substances, cutting, etc.

They will get the education that they get, and hopefully make the best of it.
posted by Salamandrous at 9:46 AM on March 1 [1 favorite]





