Podcast all the things
September 27, 2024 12:48 PM

NotebookLM is a deceptively simple tool from Google that at first glance looks like a fairly straightforward demo of their Gemini AI platform. Upload pasted text, a link (including YouTube), audio files, or up to 50 documents/500k words (which aren't used for training) and after a brief analysis it will produce various text interpretations -- summaries, tables of contents, timelines, study guides. It even has a chat window so you can pose suggested questions about the source material or ask your own. Useful, if a bit dull. 🥱 ...until you open the "Notebook Guide" panel and see the unassuming "Audio Overview" feature. Hit the "Generate" button and (after a few minutes of processing) the results astonish: an utterly lifelike, minutes-long "deep dive" conversation about your documents between two nameless podcast hosts. Examples [transcribed non-Google versions inside]: Harris-Trump debate transcript - Folding Ideas "Line Goes Up" video essay - Jabberwocky - MetaFilter - The text of this FPP itself (how meta)

NotebookLM is currently a free "experiment" (with occasionally glitchy audio and minor flubs), but using it or listening to shared audio requires a Google account. Here are alternative copies of the above "podcasts" if you don't have one: Harris-Trump - Line Goes Up - Jabberwocky - MetaFilter - This post. You can share a public link to generated audio by clicking the share button above the player (next to the thumbs up/down) once it's done.

Protip: Since you can upload and summarize multiple documents at once, try including a typed list of "listener" questions targeted at the podcast and the "hosts" just might answer them "on-air". (YMMV)

One Singularity commenter highlights an unexpected benefit:
This is.. just extraordinary. I really can't quite process it. But I'm going to try. I uploaded the first 17000 words of a story I've been writing for years. I am terrible at finishing stories. I just lose motivation and forget about it and move on with other projects far too easily. But every now and then I remember one of them and go back to it and think (perhaps egotistically) 'damn this actually had a lot of promise' and I add a bit more. Maybe by the time I die I'll suddenly have a publishable collection.

And now here I am listening to two pretty convincing 'people', taking my work seriously. At the back of my mind I know it's not real but still it feels incredibly validating somehow. I actually felt very emotional listening to them, and I'm sure that speaks to all kinds of suppressed mental issues, but still... I never would have considered, until now, that one of AI's great potential uses could be to provide artistic encouragement.

I'm tempted to use AI to generate some kind of terrible story and then upload that to hear it pontificate in the same way and put myself back in my place. But I'm kind of reluctant to do that to myself. I guess the ultimate measure of the value of this is going to be whether it makes me finish that damn story.
Can confirm: I uploaded an 18-year-old Halo fanfic (don't ask) and it is both weirdly gratifying and a bit surreal to hear two seemingly professional podcasters dissect its plot and themes like it was the latest entry on the NYT bestseller list. If Google follows through on the promised ability to interact with these hosts in real time, this could prove to be a powerful learning and creative brainstorming tool.
posted by Rhaomi (41 comments total) 10 users marked this as a favorite
 
I really hope that more comedians use this than, say... proud boys, giving an NPR sheen to misinformation. Thanks for the heads-up, OP!
posted by drowsy at 1:00 PM on September 27


This is kinda insane. I sent it a link to the wikipedia page for the 1980s classic BMX film Rad and it gave me this. Not too shabby.
posted by downtohisturtles at 1:06 PM on September 27 [1 favorite]


Now I don't feel bad about not listening to podcasts.
posted by grumpybear69 at 1:07 PM on September 27 [2 favorites]


And now here I am listening to two pretty convincing 'people', taking my work seriously. At the back of my mind I know it's not real but still it feels incredibly validating somehow.

People fell for ELIZA even though, when you read the chat transcripts, they're incredibly superficial. I can't believe anyone ever fell for it then or now. Is everyone gulping down lead paint? It's not convincing AT ALL.

It reminds me of the 1928 New Yorker cartoon where a man keeps talking to a woman as she sits there, silently looking at him. He finally ends by saying, "You're a very intelligent little woman, my dear." But I guess this will be loved by the people who listen to 5 hour podcasts where dude bros just talk about whatever.

The main purpose of AI is generating undetectable spam.
posted by AlSweigart at 1:28 PM on September 27 [8 favorites]


All this synthetic slop stuff - and the future that it portends, that is unfolding now - just makes my soul ache.
posted by lalochezia at 1:31 PM on September 27 [9 favorites]


Aaaaaaaah - I just confirmed that this technology was the source of a low-quality YT video I noped out on earlier this week. (I'm not linking to it because that just feeds them views, but the "host" voices are exactly the same as in the Line Goes Up one.)

Someone found a "10 best D&D modules" listicle, fed it into this AI, and then added some slides to try and monetize the other content they stole.

Garbage stacked on garbage all the way down. Ain't the modern internet grand?
posted by FallibleHuman at 1:35 PM on September 27 [8 favorites]


Thanks for the heads-up! I've never been able to click with podcasts before, so there's a good chance I wouldn't necessarily have heard about this as an impending source of imitation content before it proliferated.
posted by CrystalDave at 1:42 PM on September 27 [1 favorite]


All this synthetic slop stuff - and the future that it portends, that is unfolding now - just makes my soul ache.

to be fair recycling wikipedia pages into podcasts is a well-worn technique
posted by BungaDunga at 1:43 PM on September 27 [1 favorite]


Mod note: A few comments deleted for violating, well, several guidelines. Let's avoid turning this thread into a fight with other members. If you see something you dislike, flag it and move on.
posted by loup (staff) at 1:49 PM on September 27 [2 favorites]


This is amazing. I gave it my mum's self-published book and she literally cried a little hearing the hosts talk about it as if it were the latest bestseller. She feels seen!
posted by bakerybob at 1:50 PM on September 27 [2 favorites]


I never would have considered, until now, that one of AI's great potential uses could be to provide artistic encouragement.

fantastic, we've automated encouragement, which means you can whip up an infinite supply of yes-people who will encourage your worst impulses. like having a community of cultists who love you on-tap. heavenbanning as predicted two years ago
posted by BungaDunga at 1:51 PM on September 27 [3 favorites]


My mind is too blown to even have an opinion. I need an ai model to help me have a take!
posted by jeoc at 1:58 PM on September 27


Human Feedback Makes AI Better at Deceiving Humans, Study Shows

it turns out it's easier to make text more convincing than it is to make it more correct, so LLMs being reviewed by humans learn to be more convincing but just as incorrect - they're accidentally training convincing sophistry into these models
posted by BungaDunga at 2:02 PM on September 27


A friend just created one of these, based on a totally forgotten band we were in way back in the mid-'90s. (I wrote up a bio for the band at the time.) The results were astonishing - it sounded like an NPR feature on our band, with the host queuing up all these questions for the expert to go deeper on. I can totally see how this could be useful in finding the deeper meaning inside long pieces of text.
posted by saintjoe at 2:04 PM on September 27 [2 favorites]


I uploaded one of my stories. It's very uncanny valley. A part of you is psyched that these people are talking about your story, another part is saying "that is so obviously computer-generated".
I made it about 20% through and had to close it.
posted by signal at 2:05 PM on September 27 [1 favorite]


I was torn between just posting, "Thanks, I hate it," and something that acknowledges that "Thanks, I hate it" is exactly the way I feel about this, that it's a quote that's rapidly gone into common use since it was first posted on twitter, and that my usage of the phrase is not that much different than the way LLMs deploy existing language.

But one difference is that I hate this, even as I recognize that it's ridiculously advanced. (These voices are also way more realistic than the generated ones I was listening to earlier for my various annual trainings, so even though I'm not a fan of where we're going with this in general, I hope that maybe next year these trainings will be slightly less grating.)
posted by thecaddy at 2:09 PM on September 27


I gave it my mum's self-published book and she literally cried a little hearing the hosts talk about it as if it were the latest bestseller. She feels seen!

That's what this is all about, isn't it? Who cares if the critique or compliments have actual substance. This feeds our worst social instincts. You might as well have bots write fake glowing reviews of the book as well (Amazon doesn't care as long as they're positive: it sells more books. It's the genuine bad reviews that I've seen them take down.)

I always thought we'd need a holodeck level of simulation to make us throw away our real lives, but it turns out our standards are so much lower.
posted by AlSweigart at 2:14 PM on September 27 [4 favorites]


It's all a giant bubble. These things cost much more to generate than any minor benefits they give to humanity (if any).

For example, OpenAI expects to *lose* $5 billion this year. They are desperately looking for investment cash to keep the power bill funded. It's insane. This will all go away as soon as MS or Google realizes it's a money sinkhole and steps back. The rest of the ecosystem will collapse.
posted by Rhomboid at 2:15 PM on September 27 [1 favorite]


I'm interested in this despite myself.
posted by signsofrain at 2:16 PM on September 27 [1 favorite]


Hey everyone. Now that I've raised $500 in venture capital, I'm offering my AI service (a small Python script) that will give all of your Metafilter comments and posts 100 Favorites. You, too, can be part of the elite Mefi intelligentsia and have the most liked comments on this site.
posted by AlSweigart at 2:19 PM on September 27


"What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
-- Joseph Weizenbaum (creator of ELIZA)
posted by chavenet at 2:19 PM on September 27 [6 favorites]


I can see how this could be validating and motivating, but like, it also seems like using a sockpuppet or bot army to talk about or praise your own work, only with fewer steps and just as hollow. My main artistic endeavor is in the immensely low-stakes realm of writing and posting fanfiction, and sure, I could in theory set up some kind of botnet to mass-kudos my fics and comment positively on them, but what the hell would be the point of that? Same if I was a pro author: I could flood a Goodreads or Amazon page with positive reviews by sockpuppets or whatever. But I'd know it's fake! Any sense of pleasure or satisfaction at the praise would be fleeting and basically delusional, since it'd all be fake! Same with this. I'd feel so creepy and sad doing it. Even a single emoji-only comment on AO3 or a like on tumblr or whatever is more meaningful and valuable than this.

I guess something is better than nothing in the lonely toil that is writing, but oof. Seems grim.
posted by yasaman at 2:34 PM on September 27 [2 favorites]


AlSweigart: "People fell for ELIZA even though when you read the chat transcripts, they're incredibly superficial. I can't believe anyone ever fell for it then or now. Is everyone gulping down lead paint? It's not convincing AT ALL."

Dr. Sbaitso Was My Only Friend

Never underestimate the human drive to anthropomorphize inanimate objects, let alone something that appears to respond intelligently and engagingly. Even googly eyes will do the trick. And if tarot cards and inkblots and self-help manuals can facilitate creativity and introspection and self-esteem, why not this? It might be abusable, but that doesn't make the concept inherently bad.
posted by Rhaomi at 2:41 PM on September 27


As a result of becoming old and cranky, I have no patience for podcasts; please give me a transcript that I can skim so I don't feel another hour of life slide away listening to people (or fake people) bloviate, no matter how insightful they are supposed to be.

As an aside, I cannot imagine being a teacher of English literature or composition in this new world; who would bother to put in the work of writing an essay when machines can plausibly do it for you? Why think, when the thinking has been done, and all that remains is to make a collage of already existing texts? I honestly find myself at sea with all this, and I have been wondering lately if indeed there comes a time when you really are just Too Old to adapt, and that the frameworks that you have learned or built about the world and how it works are just as archaic, and about as much use, as the worldview of someone born a hundred years earlier, which we cannot really access and just have to speculate upon. I'll see myself out to the rocking chair on the porch now.
posted by jokeefe at 2:49 PM on September 27 [3 favorites]


I used to have discussions about what constituted a "robot", and we often found that googly eyes resolved edge cases neatly. Most people have trouble accepting their coffee maker as a robot, but it turns out if you give it eyes, they come around.

I've kind of loathed podcasts for a while, and honestly this doesn't impress me so much as deepen my loathing for podcasts. It's just taking the simple tricks that podcasts use to stretch out sixty seconds of information into half an hour and making them obvious. Of course, what I really should try to use this for is exactly that -- boiling podcasts back down into the sixty seconds of information they often contain. Which means... if I plug the output of tube A into input funnel B... and flip this switch... [Brazil ensues]
posted by phooky at 2:53 PM on September 27


Plugged the audio overview of this thread back into NotebookLM:
The source material is a discussion thread from a website about a new AI tool called Notebook LM. The thread explores how this tool can be used to create audio summaries and conversations based on uploaded documents, such as research papers, essays, and even fiction. Participants discuss how this technology can be used to improve learning, enhance creativity, and make information more accessible to a wider audience. However, they also acknowledge the potential downsides of relying too heavily on AI and emphasize the importance of critical thinking and a balanced approach to using such tools.
posted by phooky at 2:57 PM on September 27


As with all "ai" products my first reaction was one of astonishment.
I fed it a story ("The Cone") that I'd written to be deliberately hostile to the reader (and some readers loudly let me know I'd succeeded) and the generated podcast actually disentangled the plot and picked up on the themes to a surprising extent.
After feeding the thing with some other stories and poems, though, it began to feel pretty shallow and repetitive - granted, my writing tends to deal with repeated themes (huge scale shifts, the nature of duty) and it saw those easily but there was no real engagement with the writing beyond latching onto the most obvious of my tricks.
The results increasingly felt hollow and superficial. Incredible technology though.
posted by thatwhichfalls at 3:08 PM on September 27


I am now looking nervously at AISweigart and wondering about AI functions masquerading as Metafilter posters. And the idea of "heaven banning", which I had not heard of before, sounds so plausible, and somehow so tempting to use... what would the crisis of discovering that one had been heaven banned look like? O Brave New World.
posted by jokeefe at 3:08 PM on September 27 [1 favorite]


Thanks for posting this. I'd heard of NotebookLM but never heard it in action. Unlike some other folks here, I can see great potential for this kind of thing. The trillion-dollar question (and I selected that figure intentionally) is how the fuck do we keep people from abusing the shit out of it?

And no, "make the companies making this shit put in guardrails" isn't the answer. For one, it's nearly impossible to do well if the bad actor is even slightly clever. More importantly, a few grand will buy you a computer good enough to run existing models for inference reasonably quickly, and even if you need to train from scratch, that capability is well within the budgets of fairly small political campaigns using cloud compute.

The saving grace currently is that if you're at all familiar with the large commercial LLMs it's pretty easy to spot their house style when you run across it in the wild. They just have a certain way of writing that manages to be very distinctive. However, over the next couple of years I think we're going to end up seeing more and more one-off models created by various people for various reasons that won't read the same because they're trained and tuned differently. Hell, it's probably already possible to use the OpenAI and Gemini APIs to fiddle with things enough to work around the "house style" look without bothering with the trouble of a bespoke model.

What I can say for certain is that the genie ain't going back in the bottle. I can also say that being curmudgeonly about it isn't helpful either. Like it or not, there are serious productivity gains to be had in many kinds of work for humans who are willing to use these tools. Banning them outright is functionally impossible. Stop grousing and think about how we can ameliorate the negative effects while still being able to reap the rewards.
posted by wierdo at 3:10 PM on September 27




"However, they also acknowledge the potential downsides of relying too heavily on AI and emphasize the importance of critical thinking and a balanced approach to using such tools."

Ok, but this literally did not happen in this thread??? It's an awful summary, like all AI summaries are awful. It didn't summarize THIS thread, because LLMs can't truly summarize!
posted by muddgirl at 3:19 PM on September 27 [1 favorite]


I can also say that being curmudgeonly about it isn't helpful either.

Bingo.

...if being curmudgeonly about a topic changed any mind, ever, nobody on this site would still be religious, or even "spiritual".
posted by aramaic at 3:20 PM on September 27


As far as I can tell AI summaries are no better than like a psychic cold reading or a horoscope.
posted by muddgirl at 3:22 PM on September 27


I particularly hate the "just two dudes, chatting, on a specific topic". Perhaps it's because I grew up in a house with BBC Radio 4 playing constantly, where every subject is discussed by experts, and then edited, but I have high standards for audio - like, the audio actually has to do something, and two-dudes-chatting often devolves into "one dude chatting, the other asking the stupidest questions". I also particularly hate AI, so this is like two things I hate, stuck together.
posted by The River Ivel at 3:41 PM on September 27


AlSweigart: "[snip]"

Yes, as I said, it's possible to abuse this tech (for example, by piqueishly spamming a wall of blather). But that doesn't change the fact that there are plenty of positive uses.

phooky: "However, they also acknowledge the potential downsides of relying too heavily on AI and emphasize the importance of critical thinking and a balanced approach to using such tools."

muddgirl: "Ok, but this literally did not happen in this thread??? It's an awful summary, like all AI summaries are awful. It didn't summarize THIS thread, because LLMs can't truly summarize!"

It's not summarizing the thread, it's summarizing the transcript of the faux podcast, and those are points the "hosts" made. Here is a take on the thread itself up to this point, minus the long spam comment [alt].
posted by Rhaomi at 3:47 PM on September 27 [2 favorites]


Mod note: One comment deleted. Please refer to the AI Generated Content section of the Content Policy and contact us if you have any questions.
posted by loup (staff) at 3:54 PM on September 27


I'm sure there are people who claim that there are plenty of positive uses for "Crossing Over with John Edward" too.

> Here is a take on the thread itself up to this point, minus the long spam comment.

I'll listen to that when you thoughtfully respond to my long spam comment. You go first.

Or we could both not waste our time consuming AI slop. Seriously, let's not. :P

EDIT: Ah it was deleted. It's not quite why Metafilter has a "no AI comments" policy, but it does make my point.
posted by AlSweigart at 3:54 PM on September 27 [1 favorite]


WOW. I uploaded text from a news article, and within a few minutes, it created a 4:15 "podcast" featuring a man and a woman - and it sounded very legit and natural. I know there are many arguments against (and certainly also for) this kind of thing, but from a purely "check this out" POV - my mind is sorta blown.
posted by davidmsc at 3:59 PM on September 27


It's not summarizing the thread, it's summarizing the transcript of the faux podcast, and those are points the "hosts" made.

Ah I see, then it's even worse than my original thinking.

>The source material is a discussion thread from a website about a new AI tool called Notebook LM.

The source was not a discussion thread; the thread did not exist when the podcast was created.

>Participants discuss...

The summary implies it's talking about the participants of the discussion thread. There's no discussion thread. There is the original post, then there is the fake podcast. So if this is a summary of what the fake podcast hosts said, they're not participants in a discussion thread.
posted by muddgirl at 4:04 PM on September 27


Like many others, I try not to waste my time on pointless podcasts, so I grant that maybe the fake hosts of the fake podcast called themselves a discussion thread or something. That really buttresses my point.
posted by muddgirl at 4:06 PM on September 27


For the sake of argument, I will grant the existence of 'positive uses' of LLMs.

I have read and continue to keep up with the effects of LLMs, especially the externalities caused by their use.

For each unit of LLM Utility generated through LLM use, I would say that you are also generating two orders of magnitude more harm to the world than that unit of utility.

Those of you making excuses for their use are doing real and grievous harm to the world and to people.

That's something that's often implied in these threads, but I think it needs to be said explicitly. 100 times more harm than the utility you get out of each use. And that's assuming there is utility at all and not just a perception of utility.
posted by ursus_comiter at 4:27 PM on September 27

