16. Much of the output is biased, harmful, or unsafe

You’ll sound like a racist granddad who has learned how to write generic LinkedIn posts.

Summary

These AI systems are trained on the web. And, yes, that includes the porn, abuse, and violent imagery.

Many of these tools will randomly output extremely distasteful content, some of which might even expose you to legal risk. The safeguards vendors put in place to prevent this are often inadequate and easily bypassed. Removing unsafe data from the training set can also reduce the quality of a model's safe output, putting vendors in a position where they may have a higher tolerance for unsafe output than you do.
