8. Prefer internal tools over externally-facing chatbots
You might end up adding novel and innovative security holes to your own products, and that's a kind of innovation nobody wants.
Summary
The safeguards built into current AI software are insufficient. Most of them are too easily bypassed by users intent on doing so, and many of them semi-regularly generate output that is unnerving or even offensive. AI companies have a poor track record for software security.
Date: June 25, 2024
Topic: Intelligence Illusion