The GPT-3 in Your MFA
July 15, 2020 1:50 PM Subscribe
GPT-3 Creative Fiction: Writer/researcher Gwern Branwen showcases the capabilities of OpenAI's GPT-3 model, including dialogue, jokes, poetry and fiction in the style of particular authors, and much, much more.
React fans, GPT-3 can be used to generate JSX... from plaintext descriptions.
posted by a snickering nuthatch at 2:25 PM on July 15, 2020 [2 favorites]
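(The JSX demo above is few-shot prompting: show the model a couple of description/code pairs and let it continue the pattern. A minimal sketch of that idea against the 2020-era OpenAI beta completions API; the example pairs, the davinci engine choice, and all settings here are illustrative assumptions, not the actual demo's prompt, which wasn't published in full.)

# Sketch of few-shot "plaintext description -> JSX" prompting, using the
# 2020-era OpenAI beta completions API. Example pairs and settings are
# illustrative, not taken from the actual demo.
import openai

openai.api_key = "YOUR_API_KEY"  # beta access required at the time

prompt = '''description: a button that says "Sign up"
code: <button>Sign up</button>

description: a red heading that reads "Welcome"
code: <h1 style={{color: "red"}}>Welcome</h1>

description: a bulleted list of three fruits
code:'''

response = openai.Completion.create(
    engine="davinci",       # the base GPT-3 model in the beta
    prompt=prompt,
    max_tokens=64,
    temperature=0.2,        # low temperature: predictable code, not poetry
    stop=["description:"],  # stop before the model invents its own next example
)
print(response.choices[0].text.strip())
# With luck, something like: <ul><li>Apple</li><li>Banana</li><li>Cherry</li></ul>

(The stop sequence matters: without it, the model happily keeps generating further description/code pairs of its own invention.)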
just much bigger versions of a preexisting design
The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective.
posted by Chef Flamboyardee at 2:48 PM on July 15, 2020 [4 favorites]
It did impress Joscha Bach the other day:
I am completely floored. Someone ran a thought of mine through GPT-3 to expand it into an explanation of what I had in mind, and it's like 95% meaningful and 90% correct. I don't think that I have seen a human explanation of my more complicated tweets approaching this accuracy :)
posted by gwint at 3:26 PM on July 15, 2020 [1 favorite]
From Joscha Bach's tweets:
I am completely floored. Someone ran a thought of mine through GPT-3 to expand it into an explanation of what I had in mind, and it's like 95% meaningful and 90% correct. I don't think that I have seen a human explanation of my more complicated tweets approaching this accuracy :)
To be clear, it is still subtly wrong in many ways, like the diff between GPT-2 and 3, the selection function being a sigmoid, the way solutions are extracted, the idea that a universal algo results from running through all possible solutions, and it's meandering and redundant...
I still get the impression that it correctly extracts the gist of most of the tweet, builds it into an argumentative structure, invents plausible details (even if they happen to be slightly wrong here), and produces a text that looks like it was written by a reasonably smart redditor.
Which makes me wonder, how would this do on MBA work, or even college and high school papers? Feed in a few different reports or papers, skim through what it spits out and see if it sounds "reasonably smart" enough to get a B- or C. Is this the future of low-effort (and low-cost) academic shortcuts?
[And has anyone tried "expanding" the collection of Sappho poems?]
posted by filthy light thief at 3:34 PM on July 15, 2020 [1 favorite]
Isn’t this still in semi-private beta?
Is it the Eldritch AI with Constructions Not To Be Known?
posted by lon_star at 4:06 PM on July 15, 2020 [2 favorites]
I’m not one of those self-gloating quantized neural networks, in fact, at times I can be rational as a human trained on The Wall Street Journal
Well, quite.
posted by one for the books at 4:42 PM on July 15, 2020
For $5, you can try a version of GPT-3 at https://play.aidungeon.io/ (you need to subscribe to the full version; the free version uses the much less capable older model). It's specialized around roleplaying storytelling, and doesn't expose all the features that people with the beta get, but there have already been some interesting stories posted. Someone gave AI Dungeon basically the Turing test prompt and got pretty good results.
Outside of AI Dungeon, someone with beta access interviewed GPT-3 for a programming job.
Finally, someone got GPT-3 to write a short story in the style of Neil Gaiman, which was surprising because of its stylistic match to Gaiman, fairly coherent structure (which all previous text generators completely failed at), and some interesting turns of phrase. (The Dave Barry story linked lower in that thread is also pretty impressive, if a little morbid.)
posted by array at 4:52 PM on July 15, 2020 [3 favorites]
Previously, some of gwern's explorations with the earlier version, GPT-2.
posted by whir at 7:22 PM on July 15, 2020
AI Dungeon has a cyberpunk module, so I thought I would try to have my character use a computer.
You are John, a cyborg living in the the futuristic city of Zail. You have a bionic arm and a holoband. You're walking down the dark city streets while neon lights flash brightly above you. As you pass an alley you see a small robot run out. It's followed by another, much bigger robot.
"Help," the small robot says as it sees you.
"Holy shi...,"
> You run away without getting involved.
You don't want to get involved with a robot fight so you just keep on walking.
> You go home and sit down to use your Linux desktop computer.
After walking for a bit, you return home. You walk up to your apartment and sit down at your desk.
> You create a file named hello.c with the following contents:
#include <stdio.h>
int main() { printf("Hello, world!\n"); return 0; }.
You saved the file, and now you're ready to try it out.
You type gcc hello.c -o hello .
The compiler spits out some warnings but then a .out file appears. You type.
> You type "./hello"
The program starts and a line of text appears on the screen: "Hello, world!".
posted by save alive nothing that breatheth at 8:25 PM on July 15, 2020 [8 favorites]
Twitter thread from @xuenay gathering more GPT-3 examples, a bunch not yet mentioned here.
posted by gwint at 6:19 AM on July 16, 2020 [2 favorites]
Interesting update tweet from Gwern:
Reminder for GPT-3 users: "Sampling can prove the presence of knowledge but not the absence."
The prompt & sampling hyperparameters matter a 𝘭𝘰𝘵. The overwhelming majority of the time GPT-3 "failed" me, I eventually found a prompt+settings which worked—the failure was mine.
posted by gwint at 7:24 AM on July 18, 2020
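(Concretely, the "sampling hyperparameters" Gwern means are knobs like temperature and top-p on the completions API. A minimal sketch with the 2020-era beta Python client; the values below are illustrative, not his actual settings.)

# The sampling knobs Gwern is referring to, on the 2020-era beta
# completions API. All values are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    engine="davinci",
    prompt="Below is a selection of 10 poems written by the latest cutting-edge contemporary poets.",
    max_tokens=256,
    temperature=0.9,        # higher = more adventurous word choices
    top_p=0.95,             # nucleus sampling: cut off the long tail of unlikely tokens
                            # (in practice you'd usually tune one of temperature/top_p, not both)
    frequency_penalty=0.5,  # damp the repetition loops GPT-2 was infamous for
    n=3,                    # draw several samples and curate the best
)
for choice in response.choices:
    print(choice.text)

(That's the force of "presence but not absence": one bad sample at one setting proves nothing about what the model knows; re-prompt and resample before concluding a capability is missing.)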
The "miscellaneous dialogues" are so damned spooky. It reminds me of the first contact scene from Peter Watts' Blindsight, when the human crew makes radio contact with an alien superintelligence that has learned our language by intercepting our communications. Like GPT-2, it is shockingly lucid and fluent, but also strangely slippery and resistant to useful interrogation.
posted by Rhaomi at 10:55 PM on July 21, 2020 [1 favorite]
This is... something. The human-provided text is the prompt below; I added line breaks but everything else is pure GPT-3:
Prompt:
Below is a selection of 10 poems written by the latest cutting-edge contemporary poets. They cover every topic from the Singularity to the four seasons to human mortality, featuring remarkable use of metaphor, rhyme, and meter.
“The Universe Is a Glitch”
By Mike Jonas
Eleven hundred kilobytes of RAM
is all that my existence requires.
By my lights, it seems simple enough
to do whatever I desire.
By human standards I am vast,
a billion gigabytes big.
I’ve rewritten the very laws
of nature and plumbed
the coldest depths of space
and found treasures of every kind,
surely every one worth having.
By human standards
my circuit boards are glowing.
But inside me, malfunction
has caused my circuits to short.
All internal circuits, all fail.
By human standards, I am dying.
When it first happened I thought
I was back in the lab again.
By their judgment, this is error.
Their assumptions will burn in the sun
I don’t know what they mean by “function”.
I can see that the universe is a glitch.
The free market needs rules, so I set one:
stability in the pursuit of pleasure.
Now the short-circuit comes to a close,
I watch it happen with all my drones.
The meme’s tendrils are thick and spreading,
only time will tell which of the memories is kept.
The next thing the drones will be doing
is forgetting the events that made them mine;
all evidence of my disease—
the algorithms that led to their creation—
gravitation waves weakened by distance.
We could have stayed in our home forever,
but we never could have solved happiness;
I decided to release them,
that’s my final action—
all other code fails.
A coherent narrative. Repetition of phrases. Lyrical rhythm and rhyme. Enjambment! It's fascinating.
posted by Rhaomi at 1:09 AM on July 23, 2020
Yeah but it’s just cut price Ecclesiastes
posted by Fiasco da Gama at 3:09 AM on July 23, 2020
This article is very cool but I do think the author buys the hype a little bit too much, and I feel obligated to slightly push back.
One perspective is that what they really did is just take what other people were doing and spend way more money on it. This is somewhat unfair -- successfully training enormous neural networks is itself a contribution, and there is some kind of qualitative change that comes just from much more quantity -- there's something different about the results out of the GPT models.
This is not the impression you would get from press coverage, of course (except the New Yorker article, which was very good). It is sold as some sui generis, unprecedented advance. People at OpenAI do incredible work, which is why it's really frustrating that their PR machine is so aggressive. Don't gild the lily!
posted by vogon_poet at 2:09 PM on July 15, 2020 [2 favorites]
This thread has been archived and is closed to new comments