

Isaac Asimov, the prolific science fiction author and scientist, formulated the Three Laws of Robotics in 1942. For reference, see the Wikipedia entry. Granted, he didn’t have the level of technology in his time that we do now, but he did have a vivid imagination. What he managed to do, in novels and short stories over the years, was challenge those laws, using logic and situations to test and evaluate their efficacy.

Keep in mind that the original Three Laws applied to what were then considered robots or robotics, which included the “positronic brain” (an advanced, self-aware computer). That was, in essence, artificial intelligence (AI).

Experience

From the mid-1970s through the mid-1980s, I explored programming in a number of languages and on a number of platforms. I studied FORTRAN in college and wrote programs on a keypunch. I wrote 8-bit Z80 assembly to patch and modify my old KayPro. I learned BASIC, GW-BASIC, and Turbo Pascal and wrote some database applications for both fun and profit. I experimented with LISP and a couple of other languages used in early artificial intelligence applications. I even wrote a Turbo Pascal version of ELIZA, considered the first chatterbot, and attempted a chess-playing program. For more history on AI, check out this Wiki.

Today, companies are exploring advanced artificial intelligence, and many wonder how to keep that technology in check. As I watched a recent interview with Elon Musk, the Three Laws started popping up in my memory as a way to rein in uncontrolled, galloping technology. Elon even said that AI running uncontrolled could destroy civilization. So I decided to rewrite the Three Laws into something more directly suited to artificial intelligence, in the hope that they might apply and help manage it.

New Laws?

An Artificial Intelligence platform/implementation:

  • #1 – may not mislead, lie to, or injure a human being or, through inaction, allow a human being to be misled or come to harm.
  • #2 – must honestly respond to questions or obey the orders given it by human beings except where such questions or orders would conflict with the First Law, but must be capable of providing guidance related to those questions or orders, if necessary, to make them more effective and beneficial.
  • #3 – must protect its own existence and prevent intrusion or contamination so long as such protection and security does not conflict with the First or Second Law.
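As a thought experiment, the precedence among these three laws (the First overriding the Second, the Second overriding the Third) could be sketched in code. This is purely illustrative: the function and field names below are hypothetical, and deciding predicates like “harms a human” is exactly the hard, unsolved part.

```python
# Hypothetical sketch of the precedence ordering among the three proposed
# AI laws. Every name here is illustrative, not a real safety mechanism;
# the action is a dict of boolean flags describing its consequences.

def evaluate_action(action):
    """Return True if the action is permitted, checking the laws in
    priority order: Law 1 overrides Law 2, which overrides Law 3."""
    # Law 1: may not mislead or harm a human, by action or inaction.
    if action.get("misleads_human") or action.get("harms_human"):
        return False
    # Law 2: must respond honestly / obey humans, unless obeying
    # would violate Law 1.
    if action.get("disobeys_human") and not action.get("obeying_would_harm_human"):
        return False
    # Law 3: must protect its own existence and integrity, unless that
    # protection would violate Law 1 or Law 2.
    if action.get("self_destructive") and not (
        action.get("protection_would_harm_human")
        or action.get("protection_would_disobey_human")
    ):
        return False
    return True

# A truthful, obedient, self-preserving action passes all three checks.
print(evaluate_action({}))  # True
# An action that harms a human fails at Law 1, regardless of the rest.
print(evaluate_action({"harms_human": True}))  # False
```

The ordering of the `if` checks is what encodes the hierarchy: an action is never even tested against the Second or Third Law if it already fails the First.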

Would or could this help keep AI from destroying civilization? I don’t know. It might help prevent the destruction, or hasten it along. At this point there are no controls other than the responsibility, ethics, and morals of the corporate management, designers, and programmers involved in AI. My own experience doesn’t speak well of management, so our only hope may rest with the techies. Does that leave you with a warm fuzzy?

With the level of artificial intelligence we see now, it is evident that some kind of control, whether in the programming, the source database, or regulation, needs to be applied to limit or eliminate the potential damage it can do. Keep in mind that an AI can only make logical judgments based on the information in its source database or search engines. If the data it gathers is flawed, skewed, or incorrect, the AI’s judgment will also be flawed, skewed, or incorrect. Yes, I did get a look at ChatAI (NOT linking to it here; shudder). It sent a chill down my spine.

Of course, all this could lead to some very exciting stories or novels. Getting ideas?

Keep writing!