But We Will Realize Untold Efficiencies With Machine L-
June 19, 2024 7:27 AM

 
Look at us, resplendent in our pauper's robes, stitched from corpulent greed and breathless credulity, spending half of the planet's engineering efforts to add chatbot support to every application under the sun when half of the industry hasn't worked out how to test database backups regularly.

Yeah, that about sums it up. It's all about chasing hype and FOMO instead of actually, you know, solving problems.
posted by Ickster at 7:38 AM on June 19 [64 favorites]


Quote of the day:

I'm going to ask ChatGPT how to prepare a garotte and then I am going to strangle you with it, and you will simply have to *pray* that I roll the 10% chance that it freaks out and tells me that a garotte should consist entirely of paper mache and malice.
posted by NoMich at 7:38 AM on June 19 [38 favorites]


The title alone is cathartic!
posted by toodleydoodley at 7:41 AM on June 19 [10 favorites]


just visited for the piledrive, was not disappointed
posted by HearHere at 7:42 AM on June 19 [2 favorites]




OK, so surely now it's time to short nVidia? Right?

Or will the Brawndo S&P 500 AI immediately fire everyone if it deflates?
posted by snuffleupagus at 7:46 AM on June 19 [2 favorites]


Preach, brother.

Also: Synergy Greg
posted by lalochezia at 7:47 AM on June 19 [11 favorites]


I wish I could send this to so many people at work but it's a little sweary and I don't think it will fly. I will resort to sending it to family instead.
posted by fiercekitten at 7:53 AM on June 19 [10 favorites]


There's a meme someone posted somewhere - an account from someone talking about a recent interaction with her teenage nephew that illustrates her issue with AI. She proofread a lot of his papers for him as a favor, and he was getting good-naturedly frustrated by how many notes she always had and as a challenge asked to proofread one of her own papers. She was a bit surprised when he came back in only ten minutes to say he was done, and asked how he did it so fast. He pointed at all the red- and blue-underlined bits in the Word Document. "Oh, that doesn't mean anything, though," she said. "Yes, this word right here has a red line, but it's actually spelled right. The computer just doesn't recognize that."

"But....it has a red line."

"Yes, but the computer is wrong. Here's the word in this other document."

And the kid was baffled. And that's when she realized that the kid honestly and sincerely thought that proofreading was simply a matter of correcting all the words that the computer had flagged in its review of the document, and he had assumed that the computer was infallible. The notion of a person knowing more than a computer, or a computer being imperfect, was completely foreign to him. And she had to explain that there's only so much a computer "knows", and the human brain is much smarter still - if we use it.

She also goes on to address some of the things people are using AI for - "brainstorming ideas" is one thing. "Can't you just brainstorm with your....brain? And okay, yes, ChatGPT might come up with an idea you wouldn't have thought of - but so could another person, and the other person's idea is more likely to be correct."
posted by EmpressCallipygos at 8:00 AM on June 19 [86 favorites]


This is good and I enjoyed it.

It reminds me of the article that I just read in the Guardian about a woman whose husband died and who has gone two years without receiving probate/the power to sell their house and use their funds, and this is because the brain geniuses in the UK government decided in 2019 to "centralize" probate processing rather than let regions have their own offices and make it mostly an online process, which has been a massive, crashing disaster for everyone, one that almost anyone could have foretold. Radically changing complex and important processes is hard, and if your primary goal is to save money/pay your grifter contractor buddies, you will fuck it up.
posted by Frowner at 8:01 AM on June 19 [31 favorites]


So it is with great regret that I announce that the next person to talk about rolling out AI is going to receive a complimentary chiropractic adjustment in the style of Dr. Bourne, i.e., I am going to fucking break your neck. I am truly, deeply, sorry.

I. But We Will Realize Untold Efficiencies With Machine L -

What the fuck did I just say?


----

And then some absolute son of a bitch created ChatGPT, and now look at us. Look at us, resplendent in our pauper's robes, stitched from corpulent greed and breathless credulity, spending half of the planet's engineering efforts to add chatbot support to every application under the sun when half of the industry hasn't worked out how to test database backups regularly. This is why I have to visit untold violence upon the next moron to propose that AI is the future of the business - not because this is impossible in principle, but because they are now indistinguishable from a hundred million willful fucking idiots.

II. But We Need AI To Remain Comp-

Sweet merciful Jesus, stop talking.


----

The fight culminates with Wulfgar throwing away his weapon, grabbing the chief's head with bare hands, and begging the chief to surrender so that he does not need to crush a skull like an egg and become a murderer.

Well this is me. Begging you. To stop lying. I don't want to crush your skull, I really don't.

But I will if you make me.


----

Listen, I would just be some random dude in India if I swapped places with some of my cousins, so I'm going to choose to take that personally and point out that using the word AI as some roundabout way to sell the labor of people that look like me to foreign governments is fucked up, you're an unethical monster, and that if you continue to try { thisBullshit(); } you are going to catch (theseHands)


OK, this is Dave Barry for computer dorks, but with better opinions. And salt. I love it.
posted by snuffleupagus at 8:02 AM on June 19 [22 favorites]


That's a lot of (entertaining) words around a very reasonable premise of how to approach pretty much any new technology that has both real-world uses and a ton of hype:

"You either need to be on the absolute cutting-edge and producing novel research, or you should be doing exactly what you were doing five years ago with minor concessions to incorporating LLMs. Anything in the middle ground does not make any sense unless you actually work in the rare field where your industry is being totally disrupted right now."

I think what makes the reaction to generative AI slightly different than previous hyped technologies is that it's so much more accessible. Compare it to cloud computing - truly disruptive, but not something that most non-engineers can "touch" themselves. I didn't really get the benefits of cloud computing until I built out my own AWS stack, and that's not something that most people are going to do. It's pretty much impossible for a non-specialist to separate the (deserved) hype of cloud computing from the frothy thrash around blockchain and quantum, to take the author's examples.

With generative AI, anyone - technical or not - can load up ChatGPT or DALL-E. Your kids are using it, upper management is using it, the people who make pictures of cats dressed up as pirates on Etsy are using it.
posted by true at 8:06 AM on June 19 [4 favorites]


Turns out no one at all is ever talking about AI. But the average is skewed by AI Georg
posted by chavenet at 8:11 AM on June 19 [17 favorites]


Hello I'm a marketing exec from a major streaming service and I've pushed this essay through our innovative AI LLMs in search of The Next Thing, and I am absolutely stoked to tell you that this fall we will be releasing a new episodic thriller about the wonders of AI called Synergy Greg.
posted by mcstayinskool at 8:12 AM on June 19 [26 favorites]


I remember when EJBs (Enterprise Java Beans) came out and were all the rage and people started using them for ORM, and I said "this is ridiculous, too heavyweight and will fail under all of the inevitable tech debt" and just ignored their existence. Like, sure, they'd have applications when actual distributed systems that required RMI were a necessary part of the architecture, but the vast majority of websites don't need that. See also: SOAP.

I'm glad we live in a RESTful world now with very simple ORM and feel absolutely no regret at not digging into those bloated concepts.

AI (or, specifically, LLMs) seems similar. It will have specific, targeted applications but is not going to be Everything For Everyone Everywhere.
posted by grumpybear69 at 8:13 AM on June 19 [10 favorites]


Unless you are one of a tiny handful of businesses who know exactly what they're going to use AI for, you do not need AI for anything - or rather, you do not need to do anything to reap the benefits.

THIS OMG THIS.

My previous organization, the one I rage-quit because the CEO's idea of my job and MY ACTUAL JOB just couldn't be reconciled, is...I'm trying to be slightly circumspect here because I guess I have integrity...a community/philanthropic organization. It also does Other Things, but none of the things it does include producing software or anything at all with programming.

The CEO is on an AI kick. He wants to hire an "AI Specialist" for the org, at a time when the IT department is EXTREMELY overworked and understaffed, when they can't keep employees in the childcare portion of the org because they don't pay enough, and when people who really want to make a difference (me, several other people who have left in the last year) are leaving because C-Suite is so fucking useless, and remember: THE ORG IS NOT IN TECH. At all. Even a little. It's a people-centered organization and this dude. This dude wants to "pioneer AI in the movement."

Whatever.
posted by cooker girl at 8:15 AM on June 19 [40 favorites]


An excellent rant from an experienced software engineer.

Which is to say that when it sticks to technical details it does quite well, but it also suffers from the common issue that the narrower someone’s expertise is the more they feel qualified to comment on things way outside their field.

A degree in machine learning and consulting for a few companies does not a business expert make.

Still a fun rant though.
posted by Tell Me No Lies at 8:17 AM on June 19 [5 favorites]


The median stay for an engineer will be something between one to two years, so the organization suffers from institutional retrograde amnesia. Every so often, some dickhead says something like "Maybe we should revoke the engineering team's remote work privile - whoa, wait, why did all the best engineers leave?". Whenever there is a ransomware attack, it is revealed with clockwork precision that no one has tested the backups for six months and half the legacy systems cannot be resuscitated - something that I have personally seen twice in four fucking years. Do you know how insane that is?

Fucking preach, brother. I swear IT teams are basically seen as Dobby the fucking house elf, expected to magic up shit out of nowhere* whenever a 'senior leader' reads some bullshit article in the bullshit press written by bullshitting bullshitters out to extract all the money from such suckers by hype alone (--> see blockchain) until it all collapses in cries of fraud and mismanagement and nobody could have seen it coming, yet it's somehow now the IT team's fault that all the money got sucked away from ordinary business support services (just trivial shit like working wifi, email, and backups) and blown on the latest fad du jour, so it's time to crack the whip some more on the peons when it all collapses.

* of course, there were hordes and hordes of elves slavishly making all that stuff, but they didn't exist or count to anybody important.
posted by Absolutely No You-Know-What at 8:21 AM on June 19 [25 favorites]


It's neither artificial, nor intelligent. The term "AI" does a disservice to both those words.

Anecdote time: I work at a nonprofit (~20 million dollars annually, 70 years old, social services... children and elderly. Some DV intervention too). We have recently had meetings where senior management has brought up how we should be using AI.

Our website is 100% separated—by policy—from the IT department. The Marketing department is 100% in charge of the company website, everything from hosting issues, communicating with the host, dealing with technical questions, etc. Sure, sometimes an IT person (there are three) attends a meeting about website issues as a 'courtesy.' We had six months last year when for some reason our web host was about to completely kick us off their platform—with letters and emails saying "we no longer want your money" because our organization repeatedly failed to respond to the host's request for permission to change their hosting technology (I was not directly involved in this so I do not know the exact issue; the host needed us to simply officially say "yes" to them doing an upgrade on their end, and it never happened for over six months... probably more like a year).

As far as I can tell, the Marketing dept is in charge of the website because around 1997 or so, someone in Marketing decided to take it upon themselves to make "one of those new web-site-things!" for the organization. Ever since, the website has fallen into the realm of the Marketing department. And IT forcefully keeps itself distant from website issues. "Not our job."

This organization is talking about adopting AI into its practice of running a pre-school, home visitation for new parents and a day-center for old folks.

AI is the perfect buzzword.
posted by SoberHighland at 8:22 AM on June 19 [21 favorites]


I wish I could teach with this, but cartoonish threats of violence are still... threats of violence, and that's not something I want in my classroom.

I enjoyed the hell out of reading it, though. I recently lost a ton of respect for a major dean in my university because he went all-AI-lemming on a roomful of librarians, ethics people, and others who basically know better. Also the brand-new university librarian, who is walking into a fairly troubled organization, wants two "AI librarians" to be her first hires.

I wish I could put this in their eyeballs. Non-violently, of course.
posted by humbug at 8:26 AM on June 19 [10 favorites]


A fun angry essay by someone I don’t want to meet.
posted by Going To Maine at 8:27 AM on June 19 [6 favorites]


I just wish it wasn't making the internet more garbage than it already was.
I miss being able to find niche stuff via web searches. Now it's just LLM excreta designed for max SEO.
posted by neonamber at 8:30 AM on June 19 [11 favorites]


yes yes yes i love this thank you
posted by capnsue at 8:43 AM on June 19


you could probably paste the essay into an ai chat bot and tell it to make it less sweary and more professional
posted by seanmpuckett at 8:55 AM on June 19 [17 favorites]


https://www.businessinsider.com/ai-model-collapse-threatens-to-break-internet-2023-8
Once all the novel human-created training data has been exhausted, all that'll be left to train "the system" is the output of "the system" which means....it's regression to the mean all the way down. I don't have a proof of it yet, but I think that's where much of the bias in AI models comes from - you get a beer-drinking Alpenhorn player in lederhosen when you ask for "German" because that says "German" to the lowest-common-denominator (or a sufficient set of nodes inside a zillion-node neural network)... it's not going to get better when 99.44% of the images associated with "German" in the data are variations on that output. Look, he's driving a Porsche in this one! Or is that a Volkswagen?
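(A toy sketch of that feedback loop, purely illustrative: each "generation" refits a model to a finite sample of the previous generation's output, and the spread drifts toward collapse.)

```python
import random
import statistics

# Illustrative only: generation 0 is "trained" on real, diverse data;
# every later generation is refit to a finite sample drawn from the
# previous generation's output. The fitted spread takes a random walk
# with a downward drift, so diversity collapses toward the mean.
mu, sigma = 0.0, 1.0
for generation in range(1, 1001):
    samples = [random.gauss(mu, sigma) for _ in range(25)]
    mu = statistics.mean(samples)      # refit the model to its own output
    sigma = statistics.stdev(samples)
    if generation % 200 == 0:
        print(f"gen {generation}: spread = {sigma:.5f}")
# Typical run: the spread shrinks by orders of magnitude over the
# generations, i.e. every "German" converges on the same Alpenhorn guy.
```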
posted by adekllny at 8:56 AM on June 19 [9 favorites]


Reading this was like getting a nice massage. I wonder when we'll get a Molly White (hallowed be her name) for LLMs, as I can't imagine I'm alone in wanting a centralized "this is why not!" site to call out the garbage-hype-powered misuse of applied statistics that is being smeared all over what used to be worthwhile tools, like Search.
posted by foxtongue at 8:56 AM on June 19 [7 favorites]


the average is skewed by AI Georg
AI George Carlin? [the verge]

George Carlin, content note: the f-word
posted by HearHere at 8:57 AM on June 19


Seriously, with the recent layoffs in Silicon Valley, now is the chance for a lot of companies around the country to beef up their IT and DevOps capabilities by hiring programmers for the mundane stuff.

C suite execs around the country need to read this rant and act on it.
posted by ocschwar at 8:58 AM on June 19 [9 favorites]


foxtongue, I have been thinking the same thing, but I don't want to do it alone -- with three or more buddies to co-post I would absolutely give it a go, though.
posted by humbug at 9:04 AM on June 19 [2 favorites]


Well that was just delightful. And I feel vindicated wearing my AI-skeptic hat now.
posted by DiscourseMarker at 9:05 AM on June 19 [3 favorites]


Holy hell but I loved this.

I cannot emphasize this enough. You either need to be on the absolute cutting-edge and producing novel research, or you should be doing exactly what you were doing five years ago with minor concessions to incorporating LLMs. Anything in the middle ground does not make any sense unless you actually work in the rare field where your industry is being totally disrupted right now.

This reminded me hard of an acquaintance of mine getting super into bitcoin five or six years ago, and constantly posting on facebook about it, hyping it hard, etc. And to be fair, he seemed to know a lot more about it than the average person getting into it probably did. But I said something to him about how my suspicion was that bitcoin was going to turn out to be massively profitable for a very small number of people who either got in on the ground floor before anyone else really had even heard of it (i.e. when mining for it was relatively trivial) or those who created it to begin with. And that the grand majority of other people involved were going to get the shaft.

And he agreed! But, in his lawyer-version-of-engineers'-disease way, he was certain that he was in the first group. I haven't heard from him in a few years now, and I kinda doubt it's because he's retired to a private island.

Being in the entertainment industry, I've heard a lot about the "brain cloud" around LA, where ideas that seem self-evidently bad anywhere else (like "Let's make multiple Spider-Man movies without Spider-Man in them!") instead make total sense to the people in charge. Similarly, the few times I've been in the Bay Area, I find myself falling in love with the area and getting excited by the palpable sense of utter potential running through everyone there. And then, as soon as I'm outside of the San Francisco Brain Cloud, I remember just how much I detest this shit.

TL;DR: Fix your shit. LLMs are like a breakthrough medication for a specific ailment, and the grifters like Sam Altman et al. are on soapboxes selling them as cure-alls. They're not snake oil exactly, because they have specific uses, but they're not the cure-all for what ails your company either. Fix your shit. "AI" (and I hate calling it that) isn't a magic solution to your problems. It's almost certainly just a waste of your resources that could be better spent fixing your shit.
posted by Navelgazer at 9:20 AM on June 19 [12 favorites]


He does make a point, if a tad lede-buried, halfway through:

I am not in the equally unserious camp that generative AI does not have the potential to drastically change the world. It clearly does.

So as much as I loved the grrar here, y'all are missing the point of the blog post: he's anti-corporate-flimflam. Swap "LLM" for "blockchain" and the meaning remains identical. And I totally agree. Totally onboard with everything he says. But! But I know people that are using this new tool (New Tool) effectively in their personal and work lives.

Nvidia will likely level off as one of the biggest tech companies. These are useful tools now. No one. Absolutely NO ONE knows if LLMs will continue to grow in power or level off, but like Excel and email it's a useful tool; learn to use it for your and society's benefit.
posted by sammyo at 9:22 AM on June 19 [9 favorites]


I think AI is pursuing a plan to pass the Turing test by making actual humans dumber.
posted by dances_with_sneetches at 9:23 AM on June 19 [33 favorites]


Oh I really loved this footnote:

6. I don't actually know what 'zero-trust' architecture means, but I've heard stupid people say it enough that it's probably also a term that means something in theory but has been sullied beyond all use in day-to-day life.
posted by sammyo at 9:23 AM on June 19 [2 favorites]


The violence of "I Will Fucking Piledrive You" sits in an ugly way on the Metafilter page. I understand most of the folks here are real mad about AI, I've long since given up trying to have a meaningful conversation about the topic here on Metafilter. I was going to just ignore this post. But there's that fascist impulse on the MeFi front page every time I load it. And yes, I realize that the essay and phrase is tongue-in-cheek. It still feels bad to me.
posted by Nelson at 9:24 AM on June 19 [12 favorites]


On the one hand, the hype cycle of AI is at its peak. I replaced a computer for an aggressively non-tech-savvy family member lately, and they asked, "Does this computer come with AI?" You can't even form a consistent semantic meaning from that statement. It's like asking if (to borrow a quote from someone who is not dead) ideas sleep furiously. If people who don't know how to text are asking questions about AI, you know the hype has reached almost everyone.

I am not an AI person but have trained a few models to explore the technology. It is true that even a state of the art generative model is, unquestionably, quite stupid compared to even a 4 year old. It takes hundreds of tries, even under ideal training conditions, to solve a game that is much simpler than Tic-Tac-Toe. So far, all AI models make up for this lack of general reasoning by using huge quantities of data. If there is ample training data, such as in the case of image recognition, text prediction, etc., no problem. Just let the computer churn on the data for a while. And it works! The quality of these models is like magic compared to 20 years ago. As someone who read hours upon hours of sample documents to train speech recognition software, while. pausing. between. each. word., don't try to tell me that AI isn't important. There is absolutely a tendency for any aspect of machine learning that actually works to be retroactively removed from "AI" and put into the category of "stuff we already know how to do".

Which is not to say that the well-recognized hype cycle is not producing immense amounts of BS. There was "big data" before there was "the cloud", it just keeps going. Remember "applets"? In the early 2000s these were full-up Java applications that would run in a sandbox in your browser to prevent them from wreaking utter havoc and were supposed to take over the world. They flopped. But along came Javascript (completely unrelated to Java) and fulfilled the promise of applets and then some. Almost all web pages today are closer to applets than the original concept of text marked up with HTML. And many, many new "desktop" applications are actually web applications running in Javascript. The applet hype was completely unwarranted in a narrow sense but in the long run they really did take over the world.

So while I agree with the poster that the vast majority of problem domains that are crying "AI!" right now are not suitable for it, in the long run it is likely to be quite important, even in non-tech businesses. If (and only if) you have a large amount of input data and the outputs can be wrong without dire consequence, that will be an opportunity for AI. The poster's point about the complexity of AI models is well taken. It's effectively impossible for a non-expert to determine whether one should use something relatively simple like a random forest or Markov chain, or something sophisticated like a generative model, and then you need to set hyper-parameters, train the thing without spending your entire budget on cloud GPUs, and finally deploy it. It's pretty complicated right now. But it will get easier.
posted by wnissen at 9:27 AM on June 19 [12 favorites]


Loved every word of this. Most importantly is that he genuinely gets it: what we have right now understands language - syntax and concepts - but not the world we inhabit or the minds which needed to communicate within that world and created language to fill that need. And he is aware that may change, but that if so it will be a while yet.

He acknowledges that a) detecting students using LLMs for homework is nearly impossible (a single LLM, like the current latest ChatGPT version specifically, might be on the edge of possible, but never “all LLMs”) and b) with over 50% of students now likely doing so, are you really sure you want to double down on this?

Because the way most businesses are attempting to employ headless language models is inane, but education is front and center of institutions that are actually going to need fully separate mindsets for before and after LLMs. Right now most large businesses increasingly appear to believe this about themselves and most of them are wrong.

Reading an AI rage piece that gets this was unique and more like this, please.

OK, so surely now it's time to short nVidia? Right?

No. The time it is safe enough to hazard that is never. The markets are based on perception not reality. There are going to be contractions in usage and deployment as the glaring public failures of undercooked deployments (Google Search, MS Copilot, etc) continue to mount, but the timeline on sufficient reasoning to be useful in an assistant capacity is four years or less, with a growing consensus of 2027. Moving the goalposts on “AGI” to fit this reduced standard is complete bullshit, but it doesn’t change the basic fact that the upcoming approaches appear fairly solid and somewhat closer to how humans operate.

Basically the point at which this shit is palpably more useful is near enough that the hype on the current gen won’t have faded.

And nVidia is *years* ahead of everyone for hundreds of vector math pipelines on a chip, and if anything appear to be extending their lead. Google’s TPU is the next closest and current Google could not ship an actual product if there was a gun to the head of every member of the board and CXO suite.

Once all the novel human-created training data has been exhausted, all that'll be left to train "the system" is the output of "the system" which means....it's regression to the mean all the way down

If your role or institution is among the limited set actually threatened by LLMs really, really don’t plan on model collapse saving you. To the extent that all work on significantly furthering State of the Art models isn’t fixated on incorporating reinforcement / continuous training methods, it’s fixated on training sample efficiency with major improvements seen in the colossal model nVidia just dropped this week, and senior OpenAI employees publicly reporting progress on this front recently. Again: model collapse will not save you and you should not bank on diminishing returns in the near future.
posted by Ryvar at 9:41 AM on June 19 [11 favorites]


Yeah, I feel this. In my last salaried role, at a tech company, a company-wide mandate was for all teams to start using AI for something. It didn't seem to even matter what. But in my case, instead of just improving the product, I was supposed to do that and start getting the product ready for AI ingestion at some unknown future date.

In a current gig, because there's a lot of interest in AI and its potential impact on our industry, I'm now supporting working groups, communities of practice, and events to help people learn about it. I think learning about its potential uses and impacts can be good, and I don't mind helping steer that energy and monetary support in a productive direction. I personally remain skeptical about a lot of the supposed use cases, though. As this guy talks about, the tech isn't good enough yet to do a lot of the things people want it to do, and rights issues, data leaks, and model collapse are real concerns. So many people are just pouring their data into these machines for dubious benefit, and I don't think they're worried enough about how that info might be repurposed or mined by bad actors. But if you say anything about that, it seems like people back away in fear, like your naysaying or doubt will rub off on them. I get it—the industry I'm in has been deeply hurt by execs who dismissed and didn't adopt technology quickly enough, and so were supplanted by other products. But I'm not a Luddite. I'm perhaps more educated about the ethical and security implications of data collection than others are.

My partner is super excited about AI and its potential to assist in creative pursuits, so I'm trying to be supportive of that while maintaining my own healthy skepticism. As someone in a meeting I was in yesterday pointed out, none of the leading companies in this space can be trusted fully to use data ethically and look out for the best interests of their customers. I'm personally unhappy about the way some of the tools that have been essential to my work as an artist (e.g., Adobe products and Instagram) are now feeding the results of my hard work into their AI models against my will.
posted by limeonaire at 9:46 AM on June 19 [4 favorites]


In the interests of "not everybody has lost their entire mind," William Kilbride on AI in digital preservation and records management. Digipres is my wheelhouse, and I saw almost nothing to disagree with.

(I do take Ryvar's information about model collapse seriously. I cautiously disagree with Ryvar that it won't be a problem for "AI" the hype machine, though.)
posted by humbug at 9:48 AM on June 19 [4 favorites]


Umm.... *raises hand cautiously*

So the impression I got is that AI is going to be very, very useful and effective, because any corporation can get it, and then lay off all the people whose jobs it can do.

If you are a corporation the way you make profits is by taking equity out of the company, right? You downgrade the quality of the product, you don't test, you cut customer support, you mechanize production, you keep working conditions low so that no one sticks around long enough to earn multiple raises, you capture the regulatory authority so that fines for misconduct are laughably small in proportion to profits from misconduct. It's a standard pattern. And of course you hype your product and business so you can get investors to provide the capital to run the thing, instead of using profits, because the company exists solely as a vehicle for you to withdraw money either in the form of stocks and shares, or C-suite salaries and bonuses.

As a customer, observing the adversarial relationship that successful companies - and registered charities - have with their customers, clients and employees, I am pretty sure that is the way it works. Customers, clients and employees are something to extract money from until no more money can be extracted and if not enough money is coming in, a successful company looks for a way to increase revenue with surcharges, or shed the customers that don't bring in enough money. Same thing with employees. Value extraction means not paying employees until the plane starts moving, and requiring them to respond to after hours e-mails.

It appears to me the sole purpose of AI is to give companies a plausible reason for reducing their staff. All that hype? The across the board adoption by companies big and small? It's a tool to reduce staffing expenses. Of course they all want it. Admin and creative staff are on the chopping block.

Naturally, to get away with the abysmal products and services that will result, they all have to adopt AI, because they need to get everyone accustomed to the change, just as they got us used to our phones being unable to connect us to a knowledgeable person instead of an unhelpful menu, and the way we got used to not being able to search for information on line without being steadily shunted towards advertisements instead.

AI is not meant to work. It's meant to provide plausible deniability for an enormous wave of layoffs. And they really don't care what their marketing copy looks like, or their images, or their e-mails to customers. Why would they care? As long as they all adopt the technology they'll all reduce salary and wages expenses and have more money for the C-suite and the major shareholders.

Maybe I am wrong? It makes no sense to me otherwise.
posted by Jane the Brown at 9:54 AM on June 19 [67 favorites]


I do agree that the "I'll dropkick you" style of argumentation is a bit outdated and not necessary. The writer makes good points. They could just as easily have been made without the unfunny and dated parodic violence—feels like stuff a techy ex used to say. But that stuff was fairly easy to ignore for me, anyway.
posted by limeonaire at 9:56 AM on June 19 [4 favorites]


Maybe I am wrong? It makes no sense to me otherwise.

it's a grift. It would be wildly coincidental if it actually stood up to rigorous logic.
posted by philip-random at 9:58 AM on June 19 [2 favorites]


I have complicated feelings about "AI", LLMs, and the associated hype. I am a long-time author, and I am also a software engineer who has worked with some of the precursors to modern AI (I've worked with networks of naive Bayesian classifiers, and Markov chains).

My understanding of LLMs is that they build text by randomly selecting a token (phrase, word, or symbol) that has a high probability of following the previous token(s) based on pre-computed statistical analysis of a huge corpus of training text. Repeat this a certain number of times, and if your training data is good, you get text that is convincingly human. It's like a Markov chain, but conditioned on far more context.

Owing to the use of statistical probability of word order, if the training data describes a lot of knowledge, a sort of ghost of that knowledge is imprinted into the LLM--it will be more likely to build sentences that say true things. But as a consequence, no matter how good a model is, the strongest claim one can ever make about the output is "that's probably true." And I doubt that's good enough for most applications in the long run.
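(To make that loop concrete, here's a toy version with bigram counts standing in for learned weights; a real LLM conditions on thousands of prior tokens through a neural network, but the sampling step is the same idea.)

```python
import random
from collections import Counter, defaultdict

# Toy next-token sampler: pick each next word in proportion to how
# often it followed the previous word in the training text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:  # dead end: nothing ever followed this word
            break
        words, weights = zip(*options.items())
        word = random.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```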

The tech I'm more worried about is AI image generation--images don't need to be "true" most of the time, "convincing" is good enough. It's going to chew up and regurgitate a lot of artists and illustrators.
posted by Hot Pastrami! at 10:19 AM on June 19 [12 favorites]


The business world won't be transformed by AI, because AI generates a lot of bullshit and the business world doesn't run on that

wait
posted by clawsoon at 10:55 AM on June 19 [10 favorites]


I enjoyed reading this because:

1) I am easily convinced that people are smarter than me, and I enjoy it when smart people validate my prejudices.

~~~
It appears to me the sole purpose of AI is to give companies a plausible reason for reducing their staff. All that hype? The across the board adoption by companies big and small? It's a tool to reduce staffing expenses. Of course they all want it. Admin and creative staff are on the chopping block.
Here's a good rule of thumb: no matter who works for a company, no matter what they make, no matter what they do, no matter what their mission statement says, no matter who funded them, no matter who owns them, if a company is for-profit, profit is the only thing they're for.

This applies to everything from organic food producers to AI startups who want to relieve humans of the drudgery of work (they don't; they want to relieve humans of the drudgery of getting paid). I enjoyed John Oliver's little piece about deep-sea mining, but I was disappointed that they gave so much screen time to that dipshit owner to talk about saving the environment by mining metals for batteries. No, the only thing he wants is to make enough money to be politically powerful. AI companies are no different. They don't give a fuck about anything but money, even if they've convinced themselves (and everyone else) that they do.
posted by klanawa at 10:59 AM on June 19 [9 favorites]


Metafilter: I guess I have integrity
posted by Greg_Ace at 11:00 AM on June 19 [3 favorites]


Hot Pastrami: you’ve correctly summarized how LLMs work and how embeddings can effectively map out semantic relationships in human language (an apple is a fruit and thus plant-based/plant-composed, etc).
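(A toy illustration of that mapping, with hand-made three-dimensional vectors; real embeddings are learned, not hand-written, and have hundreds or thousands of dimensions.)

```python
import math

# Made-up vectors for illustration: related words point in similar
# directions, and cosine similarity measures how similar.
emb = {
    "apple":  [0.9, 0.8, 0.1],
    "fruit":  [0.8, 0.9, 0.2],
    "backup": [0.1, 0.0, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine(emb["apple"], emb["fruit"]))   # high: semantically related
print(cosine(emb["apple"], emb["backup"]))  # low: unrelated concepts
```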

But it’s all a static set of model weights, with the only runtime aspect being the current prompt, limited in scope to a context window - Google has recently expanded the context window into the millions of tokens in their latest models (Gemini Pro), but that doesn’t change their fixed weight nature. Best you can do to handle new contexts is arduous fine-tuning of the base model or overlaying it with a human-trained LoRA.

Human neural networks operate in realtime, continuously modifying their “weights” by reinforcing connections on-activation (or adjacent-to-activation? Any neuro folks want to fact check me?). Seldom-activated connections are slowly culled via synaptic pruning. We are all basically an ongoing, glacial-paced survival of the fittest (= most frequently accessed) competition between all of our combined synapses. Our weight-equivalents (number of dendritic connections between neurons in a synapse) are never static.

In artificial neural networks this is closer to how reinforcement-learning models work, in particular the (by LLM/Deep Learning standards severely underdeveloped) continuously trained variety. Stuff more applicable to robotics or neural-based gaming bots than linguistic translation or content generation.

But! LLMs can author reinforcement model scoring code that is consistently better than human ML experts’, zero-shot (= on the fly, for situations similar but not identical to those in their training set). Search nVidia’s Eureka demo for more on this. The rough idea of self-taught reasoning / step by step verification (sketched in code after the list) is to
1) break problems down into tons of tiny steps,
2) have the LLM write reinforcement model scoring code for each step, as with Eureka
3) spawn millions of tiny semi-randomized reinforcement models for each step, which are
4) ranked by the scoring code, then
5) cull the underperformers and finally:
6) in theory you now have some basic level of reasoning.
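(A purely illustrative sketch of that loop, since none of this is a published API: random integers stand in for the "tiny semi-randomized reinforcement models," and llm_write_scorer is a hypothetical stand-in for LLM-authored scoring code.)

```python
import random

def llm_write_scorer(step):
    # Hypothetical stand-in for step 2: pretend the LLM emitted
    # scoring code that knows the ideal behavior for this step.
    target = hash(step) % 100
    return lambda candidate: -abs(candidate - target)

def solve_step(step, population=200, survivors=20, rounds=30):
    score = llm_write_scorer(step)
    # Step 3: spawn a population of semi-randomized candidates.
    pool = [random.randrange(100) for _ in range(population)]
    for _ in range(rounds):
        pool.sort(key=score, reverse=True)  # step 4: rank by the scorer
        pool = pool[:survivors]             # step 5: cull underperformers
        pool += [random.randrange(100)      # respawn fresh candidates
                 for _ in range(population - survivors)]
    return max(pool, key=score)

# Step 1: break the problem into tiny steps; solve each with its own scorer.
steps = ["parse the question", "choose a method", "verify the answer"]
best = {s: solve_step(s) for s in steps}
```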

This is what is referred to as Q*, though the AI hype machine has made searching for this term less than helpful. I suggest “self-taught reasoning” or “step by step verification” instead.

Point is: there’s a solid path forward to considerably more in the near term, and I don’t just mean patching access to Wolfram Alpha into ChatGPT or whatever, but rather a significant expansion of capability by actively employing reinforcement models. This won’t give you Skynet or Data - agentic thinking or human response prediction isn’t on the horizon - but it should still be more useful than just emulating the human language-parsing layer in context-free, afactual isolation.

Re: image generation - there’s actually a massive economic problem here where nobody who can afford to train a major new model wants to be on the hook for the illegal varieties of deepfake pornography or political propaganda. Stability.AI’s been the only major player to release open models that weren’t censored to shit, and that does not describe their last couple releases. What I said previously about not banking on model collapse for LLMs? When it comes to the diffusion models used in image generation I’m not sure market forces and fear of bad PR are going to let the tech continue making giant leaps forward. We’ll see.
posted by Ryvar at 11:02 AM on June 19 [6 favorites]


I'm sensitive to the concerns about threats of violence, but I think the situation with "AI" is quite similar to how we've normalized the threat of violence from automobile users, and any attempt by pedestrians or bicyclists to take back our ability to safely use public spaces is met with pushback.

Let's be absolutely clear: LLMs are tools of violence. From the ways they're being used to pollute our information spaces, to the contribution to climate change from all of the excess energy use, to the water use, with no value delivered in return. They're fundamentally an imposition of externalities on all of us, making our lives worse, so that a few people can fumble through their incompetence by submitting bad pull requests to our code repositories.

Code that we'll have to fix. Language that we'll have to fix. Propaganda that we'll have to work that much harder to debunk.

So, I don't know, complaining about threats of violence in this essay kinda feels like telling people they can't act in self-defense.
posted by straw at 11:02 AM on June 19 [42 favorites]


"but the timeline on sufficient reasoning to be useful in an assistant capacity is four years or less

LLMs don't *reason* and never will. They replicate and identify patterns. This can be useful! But "sufficient reasoning" is pure nonsense.
posted by tavella at 11:16 AM on June 19 [7 favorites]


Correct: LLMs don’t. They can be a major component of a larger hybrid system that does model state in a limited capacity.
Do Large Language Models learn world models or just surface statistics? and
Efficiently Modeling Long Sequences with Structured State Spaces
(Both links courtesy kaibutsu. Because we’ve been over this many, many times before.)
posted by Ryvar at 11:34 AM on June 19 [6 favorites]


The hyperbolic violence isn't my stylistic preference.

That said, having been laid off by management who salivate at the idea of a workerless future and perhaps having made myself unemployable for politely objecting to their thinking and the harms it causes, reading the author's ferocity is at least cathartic.
posted by audi alteram partem at 11:35 AM on June 19 [11 favorites]


time to short Nvidia

I've really struggled with this, because I'm convinced that the potential impact of LLMs is way overblown, and that they're going to dump us into a recession when it's realized that all the companies going head-first into this stuff haven't quintupled their revenues.

But: in a gold rush, you don't short the shovel store. They'll use their infinite capital to pivot away to the next thing, a bunch of golden parachutes will deploy for executives of their customers who bet wrong, and legions of customer service folks and prompt engineers will be out of work.

So: bet against the whole market, I guess? That seems pretty reasonable given the state of the world at large.

This blog has been a light in the storm for me over the past year as I've struggled with how far off the rails "data science" has gotten. As a long-time reader, I'd like to assure you that the violent language here is strictly satirical. The author is from Malaysia and living in Australia - probably steeped in hyperviolent American media but without the decades of trauma that we in the US have. Not my favorite, either, but don't let it dissuade you from reading further on the blog. Just skip the three posts titled "I Will _____ You If You Mention ____ Again" and you'll be fine.
posted by McBearclaw at 11:37 AM on June 19 [8 favorites]


One minor quibble about an otherwise great rant (though perhaps dated in its Bob-the-Angry-Flower-esque over-the-toppedness): what the author attributes to "politics" in footnote 2 ("promising people things regardless of your ability to deliver") is what I would classify as a form of "salesmanship", definitely a component of politics but one that should be treated as a thing in its own right.
posted by mhum at 11:38 AM on June 19


Metafilter: I enjoy it when smart people validate my prejudices.
posted by Saxon Kane at 11:41 AM on June 19 [21 favorites]


I have been reading a lot about LLMs lately and tinkering with them in my free time and have some uncomfortable questions about human cognition now.

Which is to say I think the argument here is basically wrong -- or rather beside the point -- and that AI (LLMs) really is going to be transformative across the board.

Paradigm shifts are always somewhat apocalyptic but I don't think the author, as satisfying as the rant is, is correct, because our institutions and practices might not be that much more sophisticated than the LLMs.
posted by Ray Walston, Luck Dragon at 11:42 AM on June 19 [6 favorites]


because our institutions and practices might not be that much more sophisticated than the LLMs.

Aye. The verifiably true facts that we've squeezed out of nature have taken a huge amount of unnatural effort and commitment, and after all that effort it's surprisingly easy to convince people that the verifiably true stuff isn't true and that some bullshit that it took five minutes to come up with is true instead.
posted by clawsoon at 11:46 AM on June 19 [6 favorites]


So, if a company replaces their workers with AI, what's to prevent consumers from simply replacing the company with AI?
posted by SunSnork at 12:12 PM on June 19 [2 favorites]


It appears to me the sole purpose of AI is to give companies a plausible reason for reducing their staff.

Yes, and they require market saturation to pull it off. The initial transition from human labor to AI will create a much worse product, the way the looms that the Luddites broke made much worse cloth. Each company that wants to increase executive profit by reducing staff costs needs to make sure they are not the only business selling shitty cloth; they need a market that doesn't give the consumer any other choice or, even better, consumers who think that's the best cloth can be.
posted by tofu_crouton at 12:13 PM on June 19 [10 favorites]


So, if a company replaces their workers with AI, what's to prevent consumers from simply replacing the company with AI?

Lack of capital and class solidarity.
posted by tofu_crouton at 12:14 PM on June 19 [8 favorites]


Yeah, the essay would have been effective without the "Dave Barry on meth and steroids" style, but I think it's more about the institutions that consume the tech rather than the tech. Recall the previous essay linked here where the author posits that AGI is imminent, and it may be able to somehow preemptively destroy all nuclear weapons of an adversary. Like, please tell Elmo how that works? But some powerful people have been convinced of this, and other hand-wavey promises, and it's dangerous.
posted by credulous at 12:16 PM on June 19 [5 favorites]


or adjacent-to-activation?
directionality’s significant [ScienceDirect]
posted by HearHere at 12:20 PM on June 19


They used to say there’s a sucker born every minute. Now, I guess it’s the case there’s a sucker born every clock cycle.

In the future will corporations be only a single person sitting at a desk giving themselves their afternoon pay raise while watching as their stock ticker runs off their net worth second by second? Meanwhile, machines crank away making things for other machines?

We need new stock tickers registering indices in vileness, stupidity, and inhumanity. No short selling there.
posted by njohnson23 at 12:39 PM on June 19


It appears to me the sole purpose of AI is to give companies a plausible reason for reducing their staff.


The goal is to get AI in place everywhere, while it is still subsidized by VC money. Then, jack the price up to actually make money once it is entrenched.
posted by Dark Messiah at 12:43 PM on June 19 [6 favorites]


AI is really tempting if you're an exec who's tired of being told no, especially if you don't bother to listen to why you're being told no. What problems are they trying to solve? They don't know, but they want the pesky employees just to do it already and stop asking questions. AI is perfect for that, because it will answer confidently even if it didn't understand the question or if the question wasn't the right one to be asking.

From my personal work history I can think of a number of implementations of data dashboards and visualizations where none of the managerial class asking for them actually had any use in mind for the data. In one case a couple managers I worked with came back from a meeting having already committed to pay for a vendor product that "would combine all our data from multiple sources into one interface." I was like, "uhhh, you know I can already do that, right?" And they said, "but this puts it all in one place!" And I said, "you know I can already do that, right? What problem does this solve for you?" And they said, "it puts it all in one place! You just had to see it!" And then they paid for it, and they asked me to set up all the data sources to feed into it, and I'm pretty sure nobody ever even pulled a single practical report from it. The manager whose project it was pulled a few things to show that it had been implemented and was available for use, and I think that was that.

I think there's a broad delusion that you can just point AI at your business problems and get answers out of it, but the problem is, and will continue to be, that you have to know what question you're even asking it to solve and then you have to have enough knowledge to know whether its solution is both accurate and viable. For instance, if you ask a text-based LLM to do math, it'll fail, because it's just doing statistical analysis on what words go together and it doesn't understand numbers as representations of value. If you ask an image generator to create a downtown street scene, it'll put gibberish on all the storefronts because it doesn't understand text except as an assemblage of shapes that look letter-like. If you ask a data modeling AI to look for patterns in your data lake it might find them, but you know who's really good at finding patterns? A skilled human! Just like writers are good at writing, software developers are good at writing code, and artists are good at creating art!

And none of that even gets into the whole "plagiarism machine" aspect of it, where text LLMs were trained willy-nilly on copyrighted text (not to mention copyrighted source code on GitHub) and image generators were trained on stolen images. Good luck to the company that replaces its senior software developers and then ships broken and/or plagiarized code because there wasn't anybody there to fix the bugs or notice where the code was just stolen.

People solve problems by knowing what problem they're trying to solve in the first place, and sometimes shifting the question when other factors are involved. That's why they keep asking the boss questions about what problem he really wants to solve, before he fires them and replaces them with an AI that won't talk back or demand crazy things like a "salary," or "benefits," or "time off." In almost every case, AI isn't actually going to make the output better, but it will briefly make a company appear more profitable as it turns its products to shit.
posted by fedward at 1:19 PM on June 19 [19 favorites]


This felt good to read. "It can't think or reason" is something I find myself saying in a work context often enough that I've begun to annoy even myself
posted by signsofrain at 1:43 PM on June 19 [1 favorite]


This felt good to read. "It can't think or reason" is something I find myself saying in a work context often enough that I've begun to annoy even myself

About AI or your coworkers?
posted by clawsoon at 1:44 PM on June 19 [6 favorites]


This was a great read, and I wish the FPP wasn't built around a single inflammatory pullquote that alienates a whole lotta readers. I scrolled past this three times before I finally read it.
posted by intermod at 2:00 PM on June 19 [1 favorite]


But I know people that are using this new tool (New Tool) effectively in their personal and work lives.

There's just the small matter of short-term biosphere collapse. All of the crowing and hand-wringing over AI when we're effectively on the cusp of - if we're very lucky - a recycled Iron Age.
posted by ryanshepard at 2:06 PM on June 19 [3 favorites]


> I wonder when we'll get a Molly White (hallowed be her name) for LLMs

Molly White is already the Molly White for LLMs.
posted by thaths at 2:28 PM on June 19 [5 favorites]


intermod: I wish the FPP wasn't built around a single inflammatory pullquote

It's the title of the piece. I chose it so people would know exactly what they were getting before clicking.
Also, it's the title of the piece.
posted by signal at 2:51 PM on June 19 [12 favorites]


I wish the FPP wasn't built around a single inflammatory pullquote

Probably for the best. As you can see from the thread the amount of pent-up AI ranting was reaching dangerous levels. It's good that people had someplace to vent rather than having to insert it anywhere AI is mentioned.
posted by Tell Me No Lies at 3:02 PM on June 19 [1 favorite]


AI is really tempting if you're an exec who's tired of being told no, especially if you don't bother to listen to why you're being told no.

So many of these tech bro executives and marketers and vulture capitalists have convinced themselves that being confident is more important than being right ("Leadership!"), that an AI that answers confidently whether it knows the answer or not—that says, "Yes, I did what you commanded" whether or not it has any way of knowing what it has done—is their ideal employee.

I fear the best we can hope for is that sometimes some of the right people will get hit with some of the consequences for this obvious disaster.
posted by straight at 3:25 PM on June 19 [8 favorites]


Are we really having a tone argument?

I've posted previously about what I think of "Modern AI" and the people who push it, and this ticks almost all of the boxes. The frustration is real.
posted by Sphinx at 3:41 PM on June 19 [7 favorites]


My employer is pushing generative AI hard. I took a two day "boot camp" last month to learn along with the rest of the kids. The only moderately useful thing I've seen ChatGPT do at work is to make notes by summarizing the transcripts of Teams meetings, and it does that all right. But by design it only hits the high points, and it is very common for things that really matter to be absolutely invisible to its analysis. Tone of voice of participants, verbal asides, decisions or discussions that only take a little time but are hugely impactful.

I don't think we are all in danger of losing our jobs any time soon.
posted by Sublimity at 3:51 PM on June 19 [4 favorites]


I don't think we are all in danger of losing our jobs any time soon

I'd feel more reassured about this if my employer (and my spouse's) understood what LLMs are actually capable of rather than believing what is, to me, an obvious fantasy version of that capability.
posted by mollweide at 4:08 PM on June 19 [20 favorites]


There's just the small matter of short-term biosphere collapse. All of the crowing and hand-wringing over AI when we're effectively on the cusp of - if we're very lucky - a recycled Iron Age.

But until then, somebody is really gonna nail their KOMs.
posted by fedward at 4:28 PM on June 19 [1 favorite]


There's just the small matter of short-term biosphere collapse.

If the AI optimists are right, it'll be using more energy than Iceland by 2028, so AI will make a contribution in that area, too.
posted by clawsoon at 4:41 PM on June 19 [4 favorites]


I should have linked ‘KOM’ in that comment to haha.business.
posted by fedward at 4:43 PM on June 19


“What happened when 20 comedians got AI to write their routines,” Rhiannon Williams, MIT Technology Review, 17 June 2024
posted by ob1quixote at 6:26 PM on June 19 [1 favorite]


I laughed my butt off reading this, and I was raised a Quaker who was taught that violence is never the answer.

Fortunately I can just have AI beat up AI bros for me.
posted by UltraMorgnus at 7:09 PM on June 19 [7 favorites]


I think what makes the reaction to generative AI slightly different than previous hyped technologies is that it's so much more accessible.

Yeah, this. Way back when blockchain was the new hotness, it seemed like there was hardly any actual change on the ground, because C-suite goobers who kicked pronouncements down the line that various software or services needed blockchains had no idea what blockchains actually were or how they would affect a product. And those on the receiving end of these pronouncements knew (a) that their overlords had no actual idea what blockchains were, and (b) blockchains were not really applicable at all to their product, and so they just kept on doing what they'd always done, and the marketing and hype people could lie about there being a blockchain in there, and everybody ended up happy because nobody really cared.

But with AI, the CEO can play with ChatGPT for an hour and then ask underlings, "hey, why doesn't our product have this neat thing?" and then will not be happy until some sort of shitty LLM has been visibly embedded in something which really didn't need it.
posted by jackbishop at 4:29 AM on June 20 [11 favorites]


I admit I’m torn about the tone. The appeal to adolescent violence is an idiom I usually shy away from, and it also seems less effective to say “if you continue to do stupid thing I will harm you” than “here’s why doing stupid thing will itself harm you.” On the other hand, the “Anger Translator” shtick works for a reason. Being the reasonable one in the face of unrelenting idiocy gets exhausting.

“Synergy Greg” is probably the best trope in the whole thing, because it so concisely sums up a character that needs no further elaboration to anybody in the industry, and that’s exactly who’s furiously turning the crank of the AI hype machine.

The best thing about AI hype is the way it’s made Greg shut up about blockchain. The worst thing about AI hype is that it’s precisely one step shy of Greg thinking he can sit down at ChatGPT and type, “in this session you will play the role of a visionary tech industry genius destined to disrupt the world. Here’s where to send the money.” His whole ilk sees generative AI as “fake it til you make it” as a service, and blissful liberation from the tyranny of nerds who think “years of experience” and “knowing what you’re talking about” can justify the mortal sin of “negativity.”
posted by gelfin at 7:55 AM on June 20 [7 favorites]


MetaFilter: “fake it til you make it” as a service
posted by ArgentCorvid at 8:01 AM on June 20 [3 favorites]


Being the reasonable one in the face of unrelenting idiocy gets exhausting.

Indeed. Eventually people begin to suspect both that they are not the smartest people in the room and that the world involves complexities beyond their (or possibly any human’s) understanding.

I mean, not in this thread, but in general.
posted by Tell Me No Lies at 8:02 AM on June 20


“in this session you will play the role of a visionary tech industry genius destined to disrupt the world. Here’s where to send the money.”

A YouTube ad came up for me the other day with Tony Robbins and some other guy talking about their amazing new session, where they had been able to come up with all the material in just a couple of days instead of months because of AI, and they were going to show you how to do it, too.

Exciting!
posted by clawsoon at 8:10 AM on June 20 [3 favorites]


Gah.

I made it maybe a fifth of the way through TFA when I encountered
Whenever there is a ransomware attack, it is revealed with clockwork precision that no one has tested the backups for six months and half the legacy systems cannot be resuscitated - something that I have personally seen twice in four fucking years.
At which point I found myself thinking "Where the fuck do you work? Stop working for outfits like that."

TFA is a blast of stupid violent invective that will be read by nobody who could conceivably benefit, and which models the opposite of right speech for the unfortunates who actually will read it.
posted by Aardvark Cheeselog at 9:52 AM on June 20


a blast of stupid violent invective that will be read by nobody who could conceivably benefit

Okay, let's presume for a moment that instead of swearing and invective, the piece made all the same points in LinkedIn-friendly jargon. Would the people who might benefit from it actually read it? [Touches earpiece] I'm hearing that they would not.

Let's say somebody took the long, LinkedIn-friendly version and then wrote an executive summary. Would those people benefit from rea- [Touches earpiece] My people are telling me they would not read or benefit from that either. [An assistant rushes up and whispers in my ear. I pause and start to turn my head. The assistant whispers more] I have no further remarks at this time.
posted by fedward at 10:15 AM on June 20 [25 favorites]


a blast of stupid violent invective that will be read by nobody who could conceivably benefit

if you're that put off by a few swear words being thrown around (perhaps heavy-handedly) to make an emotional point ...

A. fair enough, but
B. I don't think you're the target audience, which
C. makes your blanket dismissal of the rant not exactly persuasive, by which I mean ...
D. I read it and I feel I benefited.
posted by philip-random at 11:44 AM on June 20 [8 favorites]


Ethan Marcotte posted: As ever: from a labor perspective, the real danger of these “AI” platforms isn’t that they’re truly capable of replacing you, or matching the quality of your work. Rather, it’s that your bosses think these platforms can do just that.

He also posted this incredibly depressing article:
"We're adding the human touch, but that often requires a deep, developmental edit on a piece of writing," says Catrina Cowart, a copywriter based in Lexington, Kentucky, US, who's done work editing AI text."The grammar and word choice just sound weird. You're always cutting out flowery words like 'therefore' and 'nevertheless' that don't fit in casual writing. Plus, you have to fact-check the whole thing because AI just makes things up, which takes forever because it's not just big ideas. AI hallucinates these flippant little things in throwaway lines that you'd never notice."

Cowart says the AI-humanising often takes longer than writing a piece from scratch, but the pay is worse. "On the job platforms where you find this work, it usually maxes out around 10 cents (£0.08) a word. But that's when you're writing. This is considered an editing job, so typically you're only getting one to five cents (£0.008-£0.04) a word," she says.

"It's tedious, horrible work, and they pay you next to nothing for it," Cowart says.
posted by fedward at 3:09 PM on June 20 [13 favorites]


(I think my only note on the ad wars is that I believe that if a video creator doesn’t monetize their videos, there are no ads. This isn’t a one-way street, and the people making content do have some agency here themselves.)
posted by Going To Maine at 8:24 PM on June 20 [2 favorites]


Mod note: [btw, this grifty nifty post has been added to the sidebar and Best Of blog! 🤖]
posted by taz (staff) at 2:16 AM on June 23


I work in a Customer Care department that has paid for an AI Chatbot that has not quite been released for use by customers.

The prompts and flows have been written by employees in the department who have not meaningfully taken a customer contact in YEARS.

I was a member of the team to "test the bot" after flows were written, less than a week before it is expected to be released for use by customers.

The first day we tested, it was completely unusable. Any question that was asked had the exact same, completely unhelpful response. The second day we tested, it was slightly better.

I can't imagine releasing this as a customer-facing solution within the next week, but that is above my pay grade.

It will be a new entry in the series of embarrassments the company I work for has produced over the last two years.
posted by toddforbid at 7:17 PM on June 23 [11 favorites]


At least our calls will still be important to you.
posted by flabdablet at 12:48 AM on June 24 [6 favorites]


Coming late to the love-in for this article, but it hit some points for me. I’ve commented before about my burnout from management consulting, which seemed more spiritual than a matter of sheer workload.

This articulated it perfectly:

Grifters, on the other hand, wield the omnitool that they self-aggrandizingly call 'politics'. That is to say, it turns out that the core competency of smiling and promising people things that you can't actually deliver is highly transferable.

I wanted to be a practitioner in… something useful. I fell for the same guff they sell to the clients. “Come work for the Big 4, be an expert that big companies call when they need help.” And then they turned me into a grifter.

Unfortunately I can feel the same thing happening in my current job (data manager at a large financial institution). I started here six weeks ago. What I want is to help resolve the decade of technical debt causing issues left, right, and centre, and to find more resources to do that. The execs want to build Tableau dashboards (on top of a rotten infrastructure) so our users can self-serve and they can make the rest of us redundant.

I’ll be fine though - I made up a bunch of stuff in PowerPoint and it seems to have gotten the right kind of attention. Thank goodness I learned the skills you need to survive in this industrial era :/
posted by Probabilitics at 3:20 AM on June 25 [5 favorites]


I can't imagine releasing this as a customer-facing solution within the next week, but that is above my pay grade.

They got a chatbot at my old job; it was pretty much intended to answer questions like "How do I sign up for summer school?" (literally so many emails JUST saying this) by sending them a link. I tested the thing by writing totally illiterate phrases, because that is what people do, and it worked about how you'd figure. I was told by students that "people cuss out the chatbot a lot."
posted by jenfullmoon at 8:10 AM on June 26 [1 favorite]


I'm not sure "chatbots as a worse interface to FAQs" is a good enough use case to justify wasting the energy of a medium-sized country.
posted by signal at 12:43 PM on June 26 [4 favorites]


What totally cracks me up is how poorly existing information is integrated or set up in a way that allows a user to get the answer they need.

A simple thing like - why is the structure of your phone tree completely different to the structure of your website? If you divide your customers into individual and business, why not replicate that with your website and phone tree?
posted by Barbara Spitzer at 6:26 PM on June 26 [3 favorites]


how poorly existing information is integrated or set up in a way that allows a user to get the answer they need

As TFA puts it, several times: fix your shit!
posted by flabdablet at 12:41 AM on June 27


flabdablet - Preach!
posted by Barbara Spitzer at 9:40 PM on June 27

