I'm a dead stochastic parrot, Mr. Jones
By now I have read circa 3275 opinions about chatGPT: It’s AGI (and no it’s not) and it’s revolutionizing [insert field] (and no it’s not), it’s a toy and an instrument, a wonder, a marvel of modern technology, a tool, a poet, a hype, it’s the death of various human language-related endeavours from the writing of essays and education to coding and software engineering, except it’s a bullshitter and a liar, and it will replace Google and search and homework, it’s new, it’s not new, and so forth. I have nothing to add to these and they all seem about right, and about half of them open with two paragraphs and then the surprise twist: This was not written by me, but by this sophisticated new chatbot. What a clever surprise!
Some of the best lines about chatGPT come from Ian Bogost in this piece in the Atlantic: ChatGPT Is Dumber Than You Think - Treat it like a toy, not a tool. And he’s right, even when he pulls the tired old gpt-introduction trick that is a clichéd trope in AI journalism these days.
ChatGPT isn’t a step along the path to an artificial general intelligence that understands all human knowledge and texts; it’s merely an instrument for playing with all that knowledge and all those texts. Play just involves working with raw materials in order to see what they can do. You play a game, or an instrument, to avail yourself of familiar materials in an unexpected way. LLMs are surely not going to replace college or magazines or middle managers. But they do offer those and other domains a new instrument—that’s really the right word for it—with which to play with an unfathomable quantity of textual material. (…)
we should adopt a less ambitious but more likely goal for ChatGPT and its successors: They offer an interface into the textual infinity of digitized life, an otherwise impenetrable space that few humans can use effectively in the present.
So, chatGPT is an unreliable toy that plays with text, but so am I.
I play with text in my head all day long; sometimes it’s coherent, sometimes it’s not, sometimes it’s outright chaotic. Except, I have a model of the world in my head and I am not a stochastic parrot, rephrasing the weighted language bits in my database related to some input. Or am I? I’m not sure.
Intelligence works by building thousands and thousands of mini models in your brain about all kinds of stuff. You associate a cat with hundreds of properties: furry, small, paws, cute little ears, strange eyes, purrs, and all of them are saved in the form of chemical connections at the synapses of your neurons. We have thousands of these micro puzzle pieces, learned associations, and, remarkably enough, they have very little to do with language, but everything to do with an embodied experience of the world in which many inputs come together: vision, sound, smell, touch and taste.
We don’t know a cat because we have read the word “CAT” a thousand times. We know a cat because we gave it milk, and then it purred and prowled around our legs. There is no language in that experience itself, but we use language to dissect this experience and explain it to ourselves. Language is a kind of software that can compute with neuronal models and manipulate them, a tool for intelligence to compress these learned experiences into a form of knowledge called “meaning”. Our neurons have models of purr, fur, meow, a rough tongue, and after a few experiences of this, we have used language to compute its meaning: Cats are cute, make for great memes, and secretly they are conspiring to take over the world.
Given all that, what can a Large Language Model and its weights know about the world? Nothing. An LLM has a lot of compressed experiences in the form of language; it has the right associations between the words “CAT” and “PURR”, but none of these correspond to a model in your neurons. chatGPT knows that the letters C, A and T often appear near the letters P, U, R and R, and that these words are often used together. In a way, LLMs are the digital manifestation of the self-referencing symbols in Baudrillardian simulacra: The “understanding” chatGPT has of a cat’s purr does not reference the sound, or the animal, but only the symbol “PURR” itself. chatGPT has all the information about music theory, but it has no idea how to dance.
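To make that point about statistical association concrete, here is a deliberately tiny sketch. It is not how chatGPT actually works (that would be a transformer with billions of weights); it is just a toy bigram counter in Python, with a made-up four-sentence corpus, illustrating the idea that such a model only ever touches which symbols tend to follow which other symbols.

```python
from collections import Counter, defaultdict
import random

# Toy corpus: the model's entire "experience" of cats is these strings.
corpus = "the cat purrs . the cat meows . the dog barks . the cat purrs".split()

# Count which token follows which: a crude stand-in for learned word associations.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def sample_next(word):
    """Pick a plausible next token, weighted by how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# The counter "knows" that cats purr -- as a statistic over symbols, not as an animal.
print(following["cat"].most_common())  # e.g. [('purrs', 2), ('meows', 1)]
print(sample_next("cat"))              # 'purrs' or 'meows', never the sound itself
```

The counter knows that “cat” goes with “purrs” in exactly the sense described above: as a weight between two strings, with nothing behind the symbols.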
After all the hype, what is the actually new thing about chatGPT? It woke up the mainstream to an ongoing revolution about which we know neither what it will revolutionize (everything? nothing? Photoshop?) nor what its goal is (access to knowledge? automation and productivity? democratization of creativity? a skill-less society? all of that?). chatGPT certainly woke a lot of people up to the fact that something is coming, even if we don’t really know what it is.
I still consider it hard to think about AI systems, exactly because they are fuzzy by nature and deal with infinite possibilities. Just presenting all possible solutions for protein folding like it’s nothing is a mic drop of proportions we cannot really understand. The same goes for 40,000 chemical weapons within minutes, and for the thousands of portraits of cool fun robot dogs I generated in a matter of hours, in the style of thousands of painters across all of art history. Humans are not made to grasp these blurry infinities, but computers handle them very well.
AI systems are cultural technologies in the tradition of language, writing or libraries. They provide access to a form of knowledge, but this knowledge is dissociated from the meaning and the neuronal models required for true intelligence. Still, this is a whole ocean of deconstructed text of all kinds, and we can play with it and create new forms and aesthetics, with infinite options for manipulating data relations to get roughly the outcome we desire. I like to compare AI systems to a restaurant where you can’t order a specific pizza, only pizza, and then you get a round piece of dough with tomato sauce and cheese and some random stuff on top. You can’t choose the random stuff, but you always get pizza.
So I went into that restaurant and ordered a sketch about a dead stochastic parrot in the style of Monty Python, and this is what chatGPT gave me. If the fun/creepy ending is any indication, AI will haunt and play with our thinking for a long time to come.
In a world filled with chaos and uncertainty, there existed a species of parrots known as the stochastics. These peculiar birds were known for their unpredictable behavior and their tendency to make random, inexplicable noises.
One such stochastic parrot was named Polly. She lived with her owner, a kind old man named Mr. Jones, in a cozy little cottage in the countryside. Polly was a constant source of entertainment for Mr. Jones, always chirping and squawking at random moments throughout the day.
But despite her eccentric behavior, Polly was a lovable bird, and Mr. Jones adored her. He would often take her for walks around the village, perching her on his shoulder and letting her sing to the townspeople.
One day, while on one of their regular walks, Polly suddenly fell to the ground with a thud. She was dead.
Mr. Jones was devastated. He couldn't imagine life without his beloved parrot. So he decided to take her to the local taxidermist to have her stuffed and preserved.
The taxidermist, a gruff old man named Mr. Smith, was not impressed with Polly. "This is the worst parrot I've ever seen," he grumbled as he examined her lifeless body. "She's all scruffy and her feathers are falling out."
But Mr. Jones was insistent. "Please, Mr. Smith, I beg of you. I don't care how she looks. I just want to keep a part of her with me always."
So, Mr. Smith reluctantly agreed to stuff Polly. He worked carefully and skillfully, using his expert taxidermy techniques to give Polly a new lease on life. When he was finished, he presented Polly to Mr. Jones.
Mr. Jones was overjoyed to see Polly once again. She looked just like she did when she was alive, perky and full of life. He paid Mr. Smith and took Polly home, where he placed her on a shelf in his living room.
But as the days passed, Mr. Jones began to notice something strange about Polly. She wasn't as lively as she used to be. In fact, she seemed downright stochastic. She would chirp and squawk at random times, and sometimes she would even fall off her perch for no reason at all.
Mr. Jones was confused and a little bit scared. Was this really the same Polly that he had loved so dearly? Or was she some kind of undead parrot, cursed to roam the earth in a state of eternal stochasticity?
One day, Mr. Jones summoned the courage to confront Polly. "Polly, my dear, what has happened to you?" he asked, shaking his head in disbelief.
Polly cocked her head to the side and gave Mr. Jones a blank stare. Then, in a voice that sounded eerily like John Cleese, she said, "I'm a dead stochastic parrot, Mr. Jones. And I'm here to haunt you for eternity."
With that, Polly let out a bloodcurdling squawk and flew off her perch, disappearing into the darkness. Mr. Jones was left to ponder the mysteries of the afterlife and the strange fate of his beloved Polly.