Rules for Robots

Dear Dr. Joanne,

At the end of your article entitled "Playing God: Human-Robot Boundaries in Ex Machina," you used the term 'Human Operating System' (HOS), and I would love to hear more about your thoughts on this idea. I envision the HOS as a sort of rule set for robots (similar to Asimov's Three Laws) to operate under when interacting with humans. Is this what you meant?

Best Regards,

JR

Dear JR,

Your question reminds me of how deeply ingrained the idea of "rules for robots" already is in our society, especially in terms of interactions with humans. We can probably thank Dr. Asimov for sparking the notion, and all of the concerned humanists (who wonder what will become of them in a world infiltrated by robots) for spreading it like wildfire.

My thoughts about the Human Operating System (HOS) actually fall in the opposite camp: how will humans treat robots, and who should be given the reins to create or modify intelligent machines in the first place?

When I sold industrial robots in the 1990s and early 2000s, I found it amazing that the engineers were interested in the robot's operating system, while I was interested in what I called the "human's operating system," or HOS, specifically that of the humans to whom I was selling the robots.

Every human has a different HOS, which is why people are so fascinating to me, whereas a robot or machine has only one operating system (not to imply that developing or modifying one is easy; it's simply a note on the difference in our interests).

When I meet a human, I like to know what motivates them, how they think, and how they act, while most engineers ask those same questions of different robot systems.

Because every human runs a different operating system, unpredictable people, rather than the machines themselves, are the more likely root cause of any discord or unfortunate accidents involving robots. In the 2015 science-fiction film Ex Machina (spoiler alert), did Ava leave Caleb locked up because she was mimicking the behavior of her somewhat villainous creator?

We might make the argument that when humans create, they perhaps unconsciously (and sometimes consciously) inject parts of themselves into the final product. We, as responsible citizens and scientists, may have the “best” or most “unbiased” intentions, but as the anonymous saying goes, “We don’t see things as they are, we see them as we are.”

As we move forward into uncharted territory, I envision the HOS as a way for humans to examine human behaviors, motivations, and values before applying any social rules to robots.

What constitutes a foundational human model, and which accepted universal morals and values should be present in a human being before they're given the go-ahead to bring new AI into the world? Should the HOS be based on Kohlberg's stages of moral development and/or Maslow's sixth level of human needs, self-transcendence?

Nathan, Ava’s machine’s creator in Ex Machina, is a good example of the potential ramifications of a misused, unstable HOS. Nathan’s HOS (or ‘HOSed’ up in his case) was not considered for his entry into AI development, and while Ava is undoubtedly more complicated than any humanoid or social robot on the market today, I suggest we give more credence to the idea of who has the power to wheel and deal in this soon-to-be-booming arena.

It may sound cliché, but the maxim "with great power comes great responsibility" applies here, and turning some of our attention back toward understanding and tuning our own human moral operations before we spin off machine versions of ourselves seems a wise course indeed.

Hope this helps,
-Dr. Joanne