[The previous essays in this series are here, here, here, here, here, here, here, here, here, here, here, here, here, and here.]

Three decades after his death, Isaac Asimov’s robot-related fiction remains scarily relevant, particularly for law students and lawyers.

     I, Robot’s nine short stories, written between 1940 and 1950, prefigure problems related to self-driving vehicles, artificial intelligence (AI), and industrial, medical, military, and law enforcement robots.

     Although “robot” was first used in Karel Čapek’s 1920 play, R.U.R. (for Rossum’s Universal Robots), Asimov coined the term “robotics” in his 1942 story, “Runaround.”

      That story also introduced his Three Laws of Robotics, although in a memoir Asimov acknowledged that “I heard the Three Laws first from [Astounding Science Fiction magazine editor] John Campbell and I am always embarrassed to hear myself given the credit.”

     Those rules, which continue to influence the legal, technology, and science fiction communities (the memoir notes that “other writers began to use them[, although t]hey couldn’t quote them directly, of course”), are:

             - A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

             - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

             - A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

     Today’s readers might find ten aspects of his vision especially timely.

     ● First, like many of the policies governing social media platforms and AI applications, the three “laws” are neither legislative nor judicial, but are privately created and implemented by an owner of proprietary technology.  Asimov’s rules are “built most deeply into a robot’s positronic brains” by U.S. Robots and Mechanical Men Corporation, a monopoly that leases, but never sells, its products. 

     ● Second, just as advanced stock-trading algorithms might evade human explanation, conflicts among the three rules are “ironed out by the different positronic potentials in the [robot’s] brain,” even though no one “knows what a positronic field is or how it works.”

     In the 1941 short story, “Liar!” a robot’s brain breaks down when its every option will lead it to hurt humans.  That situation is revisited in the last novel of Asimov’s robot trilogy, The Robots of Dawn (1983), which describes roboticists’ attempts to prevent “robot block,” or “roblock,” caused by “two roughly equal and contradictory orders.”

     In February, the American Bar Association’s House of Delegates adopted a resolution calling in part for developers to make AI “subject to human authority, oversight, and control,” and to “ensure [its] transparency and traceability. . . by documenting key decisions [concerning its] design and risk.”

     ● Third, Asimov’s rules aren’t absolute: U.S. Robots can modify their strengths and relative priorities.  In one story, at the risk of destabilizing the brains of a special model, the company obeys government orders to abbreviate the First Law to “No robot may harm a human being,” so that these robots will not consider themselves compelled to rescue humans taking calculated risks.

     ● Fourth, as that story—and one in Asimov’s comprehensive collection, The Complete Robot (1982)—suggest, in some circumstances preserving technology might be seen as more valuable than completely protecting humans.

     ● Fifth, the company’s troubleshooters—most notably, robopsychologist Susan Calvin—trace much of the robots’ seemingly rogue behavior to ambiguities in the three laws.  For instance, robots can logically construe preventing “harm” to include protecting humans not just from physical injury, but also from hurt feelings.  

     (Or even specifically sexual disappointment.  A human character in The Robots of Dawn recalls of a former lover: “He was a finely tuned robot who followed the Three Laws carefully.  To have failed to give joy when he could would have been to disappoint.  Disappointment could be reckoned as harm and he could not harm a human being.”)

     In fact, in a 1957 story, a proofreading robot’s attempt to protect a Northeastern University professor’s academic reputation leads to a memorable trial. 

     During his concluding conversation with Calvin, the professor predicts (sixty-five years before the rise of ChatGPT) that robots will soon “take over the original writing; the searching of the sources, the checking and cross-checking of passages, perhaps even the deduction of conclusions.  What would that leave the scholar?  One thing only—the barren decisions concerning what orders to give the robot next!”

     Separately, Asimov’s novel Robots and Empire (1985) raises the key question of how the term “human” is defined for purposes of the First Law.

     ● Sixth, context counts.  Robots can’t always detect when an order (such as “Get lost”) is meant literally, and they may well be misled by the tone in which instructions are issued.  Asimov’s second robot novel, The Naked Sun (1957), quotes an unnamed roboticist to the effect that “A robot. . . is logical but not reasonable.”

     ● Seventh, in both collections of Asimov’s short stories the company disregards “strict laws” that prohibit the use of robots on Earth.  Its personnel deride as a “Frankenstein Complex” the widespread fear that robots will harm humans.  (Nonetheless, the author’s first robot novel, The Caves of Steel (1954), partners a human policeman of a later-era Earth with a human-appearing robot who notes that “A particularly strong drive has been inserted into my motivation banks; a desire for justice.”)

     ● Eighth, I, Robot’s final story and The Complete Robot’s penultimate one predict a possibly bloodless, but nonetheless existential, danger to humanity.

     Decades before 2001: A Space Odyssey’s HAL (whose motivations in both the screenplay and the contemporaneously written novel could have been better explained), Martha Wells’ Murderbot Diaries, and real-world concerns about “the singularity,” Asimov envisioned that positronic brains would, consistent with their own interpretation of the First Law, quietly supplant humans as the controllers of civilization.

    Susan Calvin, by far the most insightful of U.S. Robots’ humans (and once described by Asimov as “rationality personified”), concludes that “Only the Machines know [the future], and they are going there and taking us with them.”

     ● Ninth, as Calvin ultimately reminds the world’s leader (a probable android), humans will remain essential for activities, like precisely assessing grades of cotton, whose complexities can’t be completely captured by computer programs.  She also insists that, “The task of the human brain remains what it always has been: that of discovering new data to be analyzed, and of devising new concepts to be tested.”

     Similarly, Kevin Roose’s 2021 analysis, Futureproof, finds that “humans are much better than AI at handling surprises, filling in gaps, or operating in environments with poorly defined rules or incomplete information.”  Roose foresees that as such technology becomes a tool (and rival) in their careers, “Good lawyers will become more like legal therapists—creating trust with clients and helping solve their problems, rather than simply writing briefs and doing research.”

     ● Tenth, the law can itself be seen as a way of “programming” individuals, groups, and societies.  In the second novel of Asimov’s trilogy, the human protagonist admits to his robot partner, “We need Three Laws of our own, . . . but I’m glad we don’t have them.”  The following book amplifies this theme, linking it to the “psychohistory” of the author’s Foundation novels. 

     By the time of Robots and Empire (1985), which is set decades after the trilogy, and which connects Asimov’s robot stories and novels to those of both his Foundation series and his Galactic Empire series (both begun in 1951), the human has passed away peacefully.

     But his surviving robot partner concludes that the Three Laws themselves are incomplete, because “‘humanity as a whole is more important than a single human being’”:

     “‘There is a law that is greater than the First Law: “A robot may not injure humanity, or through inaction, allow humanity to come to harm.”  I think of it now as the Zeroth Law of Robotics.  The First Law should then be stated: “A robot may not injure a human being, or through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law of Robotics.”’”

     If Asimov’s three series of novels reflected increasing scale, scope, depth, and humanity, his robot short stories have aged relatively poorly.  Though prone to distraction, resentment, and pranks, the robots are often more likeable than most of the cartoonishly characterized humans.

    The exceptional Susan Calvin’s perception and professionalism defy pervasive sexism.  (The same 1969 story in which a former colleague acknowledges that “she always had more brains than any of us” features her reproving another executive: “Faced with a woman reaching a correct conclusion and unable to accept the fact that she is your equal or superior in intelligence, you invent something called feminine intuition.”)

     However, Calvin sees robots as more “essentially decent” than people, and is herself regularly compared by her colleagues to a robot.  Like most of Asimov’s short story characters, Calvin seems neither fulfilled nor particularly happy as she unravels positronic puzzles and paradoxes.

    (By contrast, many readers might agree with Asimov’s assessment, in The Complete Robot, that the moving 1976 story, “The Bicentennial Man,” is “my favorite and, I think, the best” of these works.)

     Yet, over the eight decades since “Robbie,” the earliest of I, Robot’s stories, was published in the pulp Super Science Stories, science fiction magazines have often forecast, and even helped to lead, not just technological but also social and cultural changes.

    For almost half a century, one of the most popular and thought-provoking of those periodicals has been named for Asimov himself.