I, Robot Symbols, Allegory and Motifs

Mrs. Weston

The collection opens with the story “Robbie,” which features a little girl named Gloria who spends so much time with the titular robot that she comes to love it at least as much as, and possibly more than, she loves her own parents. At the very least, Gloria develops a special bond with the robot that transcends any normal relationship between a child and a favorite “machine.” Robbie never does a single thing to stoke fear about this relationship, but almost inevitably Mrs. Weston develops an irrational dislike of the robot, to the point of convincing her husband to remove the machine man from the home. Gloria becomes inconsolable at the loss of her companion. Only after a period of intense emotional devastation and a trip during which Robbie saves Gloria’s life does Mrs. Weston finally relent. In the interim, she transforms from human being into symbol. Specifically, Mrs. Weston becomes the book’s central symbolic personification of what had always existed in one form or another but is today known as a technophobe. Despite having no clear evidence or logical reasoning behind the feeling, some people are simply, naturally fearful of technological advances.

Cutie

In the story “Reason,” Asimov creates a robot officially designated the QT-1 but more popularly known as Cutie. Cutie is the symbol of the ontological quest to understand the nature of existence. Cutie cannot comprehend that a being as intelligent as itself could possibly owe its very existence to such inferior beings as humans. In this sense, Cutie represents everyone who has ever tried to reconcile the nature of existence with the mystery of creation.

A Symbolic Rejection of Free Will

Because the robots populating the stories in Asimov’s book are similar to human beings both physically and in purpose, they can quite easily be interpreted as metaphors for the human condition. Some scholars argue that if this is so, then the book is unrelenting in its rejection of the concept of free will. The robots are all constructed with a purpose that essentially predetermines their entire existence even before their first day on the job. If the robots are intended to replicate humans, then the underlying philosophical message is that humans are likewise hardwired to live out a certain kind of existence before they ever go out into society. Genetics, environment, and economic and social status are all stand-ins for the programming within the circuits that leads robots straight to their eventual obsolescence. Or, as humans refer to it, death.

Three Laws of Robotics

Asimov’s famous codified laws of robot behavior apply very usefully to this idea of not having free will:

“One, a robot may not injure a human being or, through inaction, allow a human being to come to harm. ... Two ... a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.... Three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

With such laws in place, a strong argument can certainly be made that robots are limited in their capacity to transgress; their behavior is modulated by the imposition of another’s will. Indeed, the argument can be made that this is precisely what separates robots from humans when it comes to exercising free will. If a robot cannot injure a human and must obey orders, it immediately occupies a realm outside that of human beings, whose behavior is not restricted by such orders. And yet, symbolically speaking, how are the Three Laws of Robotics substantively different from the Ten Commandments? Simply applying the term “law” to behavior guarantees nothing. Even writing into a program that robots cannot break these laws brings all three back into the symbolic realm. After all, is it not a human who wrote that program? Some human beings ascribe the Ten Commandments not to human hands but to God’s hand, yet that hardly keeps those same believers from violating them at will. The laws are symbols of desired behavior only. Their enforcement cannot be assured.

Herbie

In the story “Liar!”, Herbie is a robot that has developed the ability to read minds. In an effort to avoid violating the First Law’s mandate against harming humans, the robot reacts to this unexpected development by choosing to protect humans from harm with lies. When Herbie senses that telling the truth could cause a human emotional pain, he avoids causing that harm by lying. Naturally, the consequences of telling humans what Herbie assumes they want to hear rather than the truth eventually turn devastating, and he shuts down after being unable to resolve an irresolvable dilemma posed to him by a key scientist. As a thematic symbol, there is a lot going on here: Herbie represents the danger of not being truthful and, by extension, becomes a metaphor for the megalomaniacal self-confidence that you alone can determine what is best for others. Beyond that, however, lies Herbie’s greatest value as a narrative symbol: he becomes a symbol of the fear of evolution. From the outset of the story, it is made clear that Herbie is a freak of nature and, what’s worse, an unexplainable one. None of the other RB models ever manufactured has developed the ability to read minds, and it remains unclear exactly how RB-34 managed to do so. Since the humans don’t know how this occurred, their response is simply to keep it secret, until the situation threatens to go beyond even their ability to control. Herbie is an evolutionary leap forward over any other robot ever created; he is something of a symbolic Nietzschean Uberbot. And rather than allow evolution to run its natural course, the humans take the step of ending it and reverting robot evolution back to the status quo.
