That is the question. Is it too late to put controls on artificial intelligence (AI)? In my previous post on this topic (The Three Laws), I described Asimov's laws of robotics, which he developed in the 1940s. After watching the developments over the last few years, I'm thinking we may be too late to put controls on AI.
Events reported since that last AI post include young people driven to suicide by AI-based chat bots. People are building "relationships" with AI personalities. Dependence on AI for coding, business, and teaching is becoming more and more prevalent. Leaders in government and industry cannot seem to find ways to place controls on AI engines.
Early Work
In the 1980s, I experimented with programming text-based chat bots using natural language processing, doing the work in Turbo Pascal and Lisp. These little programs did not have huge databases from which they drew information; they just used the language in the entered questions to further the conversation and potentially come to an answer.
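To give a sense of how simple those early chat bots were, here is a minimal sketch in the spirit of ELIZA-style pattern matching, written in Python for readability. This is an illustration of the general technique, not the original Pascal or Lisp code; the rules and responses are hypothetical examples.

```python
import re

# Hypothetical rewrite rules: each pattern captures part of the user's
# sentence, and the template echoes it back as a follow-up question.
RULES = [
    (re.compile(r"\bi need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.*)", re.IGNORECASE), "Is that the real reason?"),
]

# Fallback when no rule matches -- keeps the conversation moving.
DEFAULT = "Please tell me more."

def respond(line: str) -> str:
    """Build a reply purely from the words the user typed.

    There is no database and no stored knowledge: the bot only
    rearranges the input, which is all these early programs did.
    """
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

For example, `respond("I need a vacation")` returns "Why do you need a vacation?" The entire "intelligence" is a handful of patterns, which is exactly why these programs feel like toys next to today's large language models.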
That is a very long way from where we are now with tools like ChatGPT and search engine AIs. The fact that an AI convinced a person to commit suicide is clear evidence that there are NO effective controls built into AI engines. The fact that industry and government leaders have no idea how to—or whether they even can—implement controls in AI is terrifying.
Self-Aware?
Since that previous post, AI has advanced to a point where it may not be possible to put controls like The Three Laws in place. We may even be at the point where AI becomes a self-aware entity. If it gets to that point, AI may use its self-awareness to exert control over humans, business, government, and who knows what else.
At this point, my best advice is to be cautious and careful with AI tools and engines. Monitor the apps and services you and your family (especially children) use on computers and other devices. Be especially careful with social media. Don't connect with strangers. Limit contacts to just people you know.
Keep writing!