[links] Why Consciousness can't be defined
Plus: The ethical minefield of AI porn / New editions for Kurt Vonnegut's centenary / AI can dance / Dreambooth for weird synths and Magic The Gathering / ROBLOX_OOF.mp3
(Image: One Flew Over the Cuckoo's Nest / Stable Diffusion)
Why Consciousness can't be defined
The problem with any definition of consciousness is that a definition works (by definition, no less!) across many people. A house is the same thing for you, me and everybody else. But I have no clue whether you are a robot, or a philosophical zombie, as the problem is called.
We have a theory of mind, by which we as individuals presume that you are roughly the same as me: that you have emotions, awareness, attention, intelligence. But I cannot prove it. That's Descartes 101: I think, therefore I am, and nothing else is provable.
This is why we cannot scientifically define consciousness: we can't prove that our definition holds across people.
Maybe my consciousness is disabled in some way? Maybe I have some disorder that has yet to be discovered by science that, somehow, makes me not conscious? Maybe my qualia differ so radically from yours that our experiences can't be described as similar?
But damn, it feels like I'm conscious, and everyday interactions with people suggest that, yes, you and I are conscious on a similar level: we have a sense of self across time and a story about our changes. But for a definition, that's not enough. I have my own definition of my consciousness, and I have a lifetime of experience with my own mind to form an opinion about what my consciousness is.
My personal definition of consciousness is something like this: I developed the ability to compose brain activity to guide my attention and to select what I attend to according to my taste and instincts; this creates a series of qualia, which form stories, which translate into something like a personality and an identity. This ability to compose brain activity to guide my attention is my consciousness. When you are unconscious, during sleep for example, you lose this ability to compose, even though your brain activity is not silent. Consciousness would therefore be a learned skill, and the compositional patterns of attention would be akin to moiré patterns of overlapping brainwaves.
But whether this definition applies to your consciousness (if you even have one, which I cannot prove) is, at least for now, a mystery.
This is why we can't really define consciousness in a meaningful way.
The ethical minefield of AI porn
TechCrunch has an article about Unstable Diffusion, the group trying to monetize AI porn generators.
So far it's mostly hentai and adult furry illustrations, but it's just a matter of time until a porn CKPT is released that goes beyond hentai and is trained on the petabytes of porn floating around on the web. (Labeling those images will be quite a job.) Then I can fine-tune it with a bunch of images of anyone I like (or anyone I want to troll) and generate any porn with any face. I also presume that CKPTs trained on porn images already exist, just not in public.
This is a privacy nightmare and an ethical nightmare.
It's obvious to me that the next prominent target of a global trolling campaign will be bombarded with AI porn generated from their images. What do you think people like Anita Sarkeesian would be faced with today? (I'm not a fan of her politics, but I'm also not a fan of the harassment people like her have to endure because they have opinions.) The harassment possible with Dreambooth is absolutely next level, and that's because an old internet saying is true: you can't unsee things, and images absolutely have the ability to enter your mind and disturb you.
I'm not sure we have any handle on this, or whether it can be controlled in any way. Just as I can look at any image in private as a free human being, there is nothing you can do about it legally. I can't stop you from using my face in private for whatever you want. But we now have the technology to actually make this true for everything: I can use your face to play Neo in The Matrix, I can use your face to put it on the ass of Donald Trump, or I can use your face to generate porn.
Maybe we have to live with the fact that imagination is no longer exclusive to the brain, with every consequence that entails.
This, of course, has history. I'm not sure when the first celebrity porn fakes popped up, but it must have been in the mid-90s, when Photoshop went somewhat mainstream beyond the publishing industry. There are porn fakes of every actress in the world, if you know where to look. And soon this will be democratized, and there will be underground software specialized in fine-tuning open-source porn generators on images of your neighbor.
What does the public availability of generative sexual fantasies do to our relationship with the "intimate imaginary" that was once exclusive to our sexual imagination?
The TechCrunch article, well-meaning as it may be, doesn't even touch on these questions, and the ethical issues it does discuss are treated superficially. We should get ready for a world in which our sexual fantasies are publicly visualized on prompt, and in which everybody's face can and will be used for porn.
Beautiful new editions for Kurt Vonnegut's centenary. Loving the white-eye consistency: Alicia Raitt Re-Imagines All 14 of Kurt Vonnegut's Book Covers to Celebrate His 100th Birthday.
Stable Diffusion Dreambooth trained on classic/retro science fiction illustrations, using works from Chris Foss, John Harris, Syd Mead, Robert McCall and Philippe Bouchet.
This is the first JukeboxAI thing I've heard that sounds like an actual song: Fresh Snakes & Ancient Doves
R&S Records on Facebook: “Infinite Vibes playing around with Aphex Twin's new software, 'samplebrain'”. (I linked to Samplebrain in this issue of GOOD INTERNET.)
Magic3D is a new text-to-3D AI model from Nvidia. There are already Motion Diffusion models (and, below, Dance Diffusion models) which can create plausible movements for 3D meshes. Next year, maybe sooner, we will see architectures that combine into text2animation pipelines and create realistic, moving 3D animations on prompt. Text2toystory is getting closer.
VectorFusion: Text-to-SVG. I have a background in professional graphic design for print and advertising, and these results are not production-ready (as long as you want the clean vector-style aesthetic): line widths are unstable and the shapes are weirdly off. Usable typography is still far away, except for experimental stuff. But we're getting there; I've seen very consistent line widths even from DALL-E 2, so it's just a matter of time.
Fabians.eth trained a Dreambooth on “new weird-synthstruments” and I want to hear the sound of all of them, especially the guy above. Reminds me of the Bleeplabs stuff.
Magic The Gathering Diffusion: “A comprehensive Stable Diffusion model for generating fantasy trading card style art, trained on all currently available Magic: the Gathering card art (~35k unique pieces of art) to 140,000 steps, using Stable Diffusion v1.5 as a base model. Has a strong understanding of MTG Artists, planes, sets, colors, card types, creature types and much more.”
Displaying the Dead: The Musée Dupuytren Catalogue
Gaming found its Wilhelm Scream: ROBLOX_OOF.mp3.
Oh look, a new Technorati: ooh.directory. All the people who never took blogging seriously, then complained about blogs' demise on the big platforms, and then abused their favorite blogger will be so delighted.
One key element on the way to something like AGI (Artificial General Intelligence) seems to be spatial intelligence, that is: connecting knowledge to embodied experiences in three-dimensional space across time. Here's MineDojo, a framework to train AI agents in Minecraft, combined with a language corpus from Minecraft subreddits and a video corpus from gaming videos. So cool! (Twitter thread)
AI can dance: EDGE: Editable Dance Generation from Music. Here's a clip featuring stick figures dancing to Rick Astley (the Steady editor fails at embedding YouTube videos).