8. Prefer internal tools over externally-facing chatbots
You might end up adding novel and innovative security holes to your own products, and that's a kind of innovation nobody wants.
![A cyborg’s glowing eye, surrounded by the cyborg herself, what we can see of her in the dark.](https://assets.steadyhq.com/production/post/36a70917-c2c4-4afb-8872-2e41b777dcb8/uploads/images/atdiixvrxe/happy-cyborg.jpg?auto=compress&w=800&fit=max&dpr=2&fm=webp)
Summary
The safeguards built into current AI software are insufficient. Most of them are too easily bypassed by determined users, and many of them semi-regularly generate output that is unnerving or even offensive. AI companies also have a poor track record on software security.
Date: June 25, 2024
Topic: Intelligence Illusion