With the death of Isaac Asimov on April 6, 1992, the world lost a prodigious imagination. Unlike fiction writers before him, who regarded robotics as something to be feared, Asimov saw a promising technological innovation to be exploited and managed. Indeed, Asimov's stories are experiments with the enormous potential of information technology.

This article examines Asimov's stories not as literature but as a gedankenexperiment - an exercise in thinking through the ramifications of a design. Asimov's intent was to devise a set of rules that would provide reliable control over semi-autonomous machines. My goal is to determine whether such an achievement is likely or even possible in the real world. In the process, I focus on practical, legal, and ethical matters that may have short- or medium-term implications for practicing information technologists.

Part 1, in this issue, reviews the origins of the robot notion and explains the laws for controlling robotic behaviour, as espoused by Asimov in 1940 and presented and refined in his writings over the following 45 years. Next month, Part 2 examines the implications of Asimov's fiction not only for real roboticists but also for information technologists in general.

Origins of robotics

Robotics, a branch of engineering, is also a popular source of inspiration in science fiction literature; indeed, the term originated in that field. Many authors have written about robot behaviour and their interaction with humans, but in this company Isaac Asimov stands supreme. He entered the field early, and from 1940 to 1990 he dominated it. Most subsequent science fiction literature expressly or implicitly recognizes his Laws of Robotics.

Asimov described how, at the age of 20, he came to write robot stories: "In the 1920's science fiction was becoming a popular art form for the first time ... and one of the stock plots ... was that of the invention of a robot ....
Under the influence of the well-known deeds and ultimate fate of Frankenstein and Rossum, there seemed only one change to be rung on this plot - robots were created and destroyed their creator ... I quickly grew tired of this dull hundred-times-told tale ... Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? ... I began in 1940 to write robot stories of my own - but robot stories of a new variety ... My robots were machines designed by engineers, not pseudo-men created by blasphemers."1,2

Asimov was not the first to conceive of well-engineered, non-threatening robots, but he pursued the theme with such enormous imagination and persistence that most of the ideas that have emerged in this branch of science fiction are identifiable with his stories.

To cope with the potential for robots to harm people, Asimov, in 1940, in conjunction with science fiction author and editor John W. Campbell, formulated the Laws of Robotics.3,4 He subjected all of his fictional robots to these laws by having them incorporated within the architecture of their (fictional) "platinum-iridium positronic brains". The laws (see below) first appeared publicly in his fourth robot short story, "Runaround".5

The 1940 Laws of Robotics

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The laws quickly attracted - and have since retained - the attention of readers and other science fiction writers.
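The strict ordering of the laws can be read as a veto hierarchy: candidate actions are filtered through the laws in sequence, and a lower law is consulted only among the actions the higher laws permit. A minimal sketch of that reading follows; the `Action` fields and the example scenario are invented for illustration, not drawn from Asimov's text:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would this action injure a human?
    neglects_human: bool    # would inaction-style harm result (First Law, second clause)?
    obeys_order: bool       # does it comply with a standing human order?
    self_destructive: bool  # does it endanger the robot itself?

def choose(actions):
    """Apply the three laws as successive filters, in strict priority order."""
    # First Law: discard anything that injures a human or lets one come to harm.
    safe = [a for a in actions if not (a.harms_human or a.neglects_human)]
    # Second Law: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if a.obeys_order] or safe
    # Third Law: among the remainder, prefer self-preservation.
    surviving = [a for a in obedient if not a.self_destructive] or obedient
    return surviving[0] if surviving else None  # None models total deadlock

options = [
    Action("push human clear", harms_human=False, neglects_human=False,
           obeys_order=False, self_destructive=True),
    Action("stand by", harms_human=False, neglects_human=True,
           obeys_order=True, self_destructive=False),
]
print(choose(options).name)  # "push human clear": the First Law trumps obedience
```

Note that the fallbacks (`or safe`, `or obedient`) encode the hierarchy itself: a lower law can never veto every option the higher laws allow, only break ties among them.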
Only two years later, another established writer, Lester Del Rey, referred to "the mandatory form that would force built-in unquestioning obedience from the robot".6 As Asimov later wrote (with his characteristic clarity and lack of modesty), "Many writers of robot stories, without actually quoting the three laws, take them for granted, and expect the readers to do the same".

Asimov's fiction even influenced the origins of robotic engineering. "Engelberger, who built the first industrial robot, called Unimate, in 1958, attributes his long-standing fascination with robots to his reading of [Asimov's] 'I, Robot' when he was a teenager", and Engelberger later invited Asimov to write the foreword to his robotics manual.

The laws are simple and straightforward, and they embrace "the essential guiding principles of a good many of the world's ethical systems".7 They also appear to ensure the continued dominion of humans over robots, and to preclude the use of robots for evil purposes. In practice, however - meaning in Asimov's numerous and highly imaginative stories - a variety of difficulties arise.

My purpose here is to determine whether or not Asimov's fiction vindicates the laws he expounded. Does he successfully demonstrate that robotic technology can be applied in a responsible manner to potentially powerful, semi-autonomous and, in some sense, intelligent machines? To reach a conclusion, we must examine many issues emerging from Asimov's fiction.

History

The robot notion derives from two strands of thought, humanoids and automata. The notion of a humanoid (or human-like nonhuman) dates back to Pandora in The Iliad, 2,500 years ago and even further. Egyptian, Babylonian, and ultimately Sumerian legends fully 5,000 years old reflect the widespread image of the creation, with god-men breathing life into clay models. One variation on the theme is the idea of the golem, associated with the Prague ghetto of the sixteenth century.
This clay model, when breathed into life, became a useful but destructive ally. The golem was an important precursor to Mary Shelley's Frankenstein: The Modern Prometheus (1818). This story combined the notion of the humanoid with the dangers of science (as suggested by the myth of Prometheus, who stole fire from the gods to give it to mortals). In addition to establishing a literary tradition and the genre of horror stories, Frankenstein also imbued humanoids with an aura of ill fate.

Automata, the second strand of thought, are literally "self-moving things" and have long interested mankind. Early models depended on levers and wheels, or on hydraulics. Clockwork technology enabled significant advances after the thirteenth century, and later steam and electro-mechanics were also applied. The primary purpose of automata was entertainment rather than employment as useful artifacts. Although many patterns were used, the human form always excited the greatest fascination. During the twentieth century, several new technologies moved automata into the utilitarian realm. Geduld and Gottesman8 and Frude2 review the chronology of clay model, water clock, golem, homunculus, android, and cyborg that culminated in the contemporary concept of the robot.

The term robot derives from the Czech word robota, meaning forced work or compulsory service, or robotnik, meaning serf. It was first used by the Czech playwright Karel Čapek in 1918 in a short story and again in his 1921 play R.U.R., which stood for Rossum's Universal Robots. Rossum, a fictional Englishman, used biological methods to invent and mass-produce "men" to serve humans. Eventually they rebelled, became the dominant race, and wiped out humanity. The play was soon well known in English-speaking countries.

Definition

Undeterred by its somewhat chilling origins (or perhaps ignorant of them), technologists of the 1950s appropriated the term robot to refer to machines controlled by programs.
A robot is "a reprogrammable multifunctional device designed to manipulate and/or transport material through variable programmed motions for the performance of a variety of tasks".9 The term robotics, which Asimov claims he coined in 1942,10 refers to "a science or art involving both artificial intelligence (to reason) and mechanical engineering (to perform physical acts suggested by reason)".11

As currently defined, robots exhibit three key elements:

- programmability, implying computational or symbol-manipulative capabilities that a designer can combine as desired (a robot is a computer);
- mechanical capability, enabling it to act on its environment rather than merely function as a data processing or computational device (a robot is a machine); and
- flexibility, in that it can operate using a range of programs and manipulate and transport materials in a variety of ways.

We can conceive of a robot, therefore, as either a computer-enhanced machine or as a computer with sophisticated input/output devices. Its computing capabilities enable it to use its motor devices to respond to external stimuli, which it detects with its sensory devices. The responses are more complex than would be possible using mechanical, electromechanical, and/or electronic components alone.

With the merging of computers, telecommunications networks, robotics, and distributed systems software, and the multiorganizational application of the hybrid technology, the distinction between computers and robots may become increasingly arbitrary. In some cases it would be more convenient to conceive of a principal intelligence with dispersed sensors and effectors, each with subsidiary intelligence (a robotics-enhanced computer system). In others, it would be more realistic to think in terms of multiple devices, each with appropriate sensory, processing, and motor capabilities, all subjected to some form of coordination (an integrated multi-robot system).
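The three defining elements can be restated as a minimal interface: the same hardware is a computer (its behaviour is replaceable data), a machine (it acts on its environment), and flexible (programs can be swapped). The toy class below illustrates this reading; every name in it is invented for the sketch:

```python
class Robot:
    """A toy rendering of the three defining elements of a robot."""

    def __init__(self):
        self.program = None   # programmability: behaviour is replaceable data
        self.position = 0     # a one-dimensional stand-in for its environment

    def load(self, program):
        """Flexibility: the same device runs any of a range of programs."""
        self.program = program

    def sense(self):
        """Sensory input from the environment (here, just its own position)."""
        return self.position

    def act(self, step):
        """Mechanical capability: the robot changes its environment/state."""
        self.position += step

    def run(self):
        if self.program:
            self.program(self)

r = Robot()
r.load(lambda robot: robot.act(+3))   # one task...
r.run()
r.load(lambda robot: robot.act(-1))   # ...reprogrammed for another
r.run()
print(r.sense())  # 2
```

Strip away any one of the three methods and the artifact falls back into an older category: without `load` it is a fixed automaton, without `act` a mere data processor.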
The key difference robotics brings is the complexity and persistence that artifact behaviour achieves, independent of human involvement.

Many industrial robots resemble humans in some ways. In science fiction, the tendency has been even more pronounced, and readers encounter humanoid robots, humaniform robots, and androids. In fiction, as in life, it appears that a robot needs to exhibit only a few human-like characteristics to be treated as if it were human. For example, the relationships between humans and robots in many of Asimov's stories seem almost intimate, and audiences worldwide reacted warmly to the "personality" of the computer HAL in 2001: A Space Odyssey, and to the gibbering rubbish-bin R2-D2 in the Star Wars series.

The tendency to conceive of robots in humankind's own image may gradually yield to utilitarian considerations, since artifacts can be readily designed to transcend humans' puny sensory and motor capabilities. Frequently the disadvantages and risks involved in incorporating sensory, processing, and motor apparatus within a single housing clearly outweigh the advantages. Many robots will therefore be anything but humanoid in form. They may increasingly comprise powerful processing capabilities and associated memories in a safe and stable location, communicating with one or more sensory and motor devices (supported by limited computing capabilities and memory) at or near the location(s) where the robot performs its functions. Science fiction literature describes such architectures.12,13

Impact

Robotics offers benefits such as high reliability, accuracy, and speed of operation. Low long-term costs of computerized machines may result in significantly higher productivity, particularly in work involving variability within a general pattern. Humans can be relieved of mundane work and exposure to dangerous workplaces.
Their capabilities can be extended into hostile environments involving high pressure (deep water), low pressure (space), high temperatures (furnaces), low temperatures (ice caps and cryogenics), and high-radiation areas (near nuclear materials or occurring naturally in space).

On the other hand, deleterious consequences are possible. Robots might directly or indirectly harm humans or their property; or the damage may be economic or incorporeal (for example, to a person's reputation). The harm could be accidental or result from human instructions. Indirect harm may occur to workers, since the application of robots generally results in job redefinition and sometimes in outright job displacement. Moreover, the replacement of humans by machines may undermine the self-respect of those affected, and perhaps of people generally.

During the 1980s, the scope of information technology applications and their impact on people increased dramatically. Control systems for chemical processes and air conditioning are examples of systems that already act directly and powerfully on their environments. And consider computer-integrated manufacturing, just-in-time logistics, and automated warehousing systems. Even data processing systems have become integrated into organizations' operations and constrain the ability of operations-level staff to query a machine's decisions and conclusions. In short, many modern computer systems are arguably robotic in nature already; their impact must be managed - now.

Asimov's original laws (see above) provide that robots are to be slaves to humans (the second law). However, this role is overridden by the higher-order first law, which precludes robots from injuring a human, either by their own autonomous action or by following a human's instructions. This precludes their continuing with a programmed activity when doing so would result in human injury.
It also prevents their being used as a tool or accomplice in battery, murder, self-mutilation, or suicide.

The third and lowest level law creates a robotic survival instinct. This ensures that, in the absence of conflict with a higher order law, a robot will seek to avoid its own destruction through natural causes or accident; defend itself against attack by another robot or robots; and defend itself against attack by any human or humans. Being neither omniscient nor omnipotent, it may of course fail in its endeavors. Moreover, the first law ensures that the robotic survival instinct fails if self-defense would necessarily involve injury to any human. For robots to successfully defend themselves against humans, they would have to be provided with sufficient speed and dexterity so as not to impose injurious force on a human.

Under the second law, a robot appears to be required to comply with a human order to (1) not resist being destroyed or dismantled, (2) cause itself to be destroyed, or (3) (within the limits of paradox) dismantle itself.1,2 In various stories, Asimov notes that the order to self-destruct does not have to be obeyed if obedience would result in harm to a human. In addition, a robot would generally not be precluded from seeking clarification of the order. In his last full-length novel, Asimov appears to go further by envisaging that court procedures would be generally necessary before a robot could be destroyed: "I believe you should be dismantled without delay. The case is too dangerous to await the slow majesty of the law. . . .
If there are legal repercussions hereafter, I shall deal with them."14

Such apparent inconsistencies attest to the laws' primary role as a literary device intended to support a series of stories about robot behavior. In this, they were very successful: "There was just enough ambiguity in the Three Laws to provide the conflicts and uncertainties required for new stories, and, to my great relief, it seemed always to be possible to think up a new angle out of the 61 words of the Three Laws."1

As Frude says, "The Laws have an interesting status. They . . . may easily be broken, just as the laws of a country may be transgressed. But Asimov's provision for building a representation of the Laws into the positronic-brain circuitry ensures that robots are physically prevented from contravening them."2 Because the laws are intrinsic to the machine's design, it should "never even enter into a robot's mind" to break them.

Subjecting the laws to analysis may seem unfair to Asimov. However, they have attained such a currency not only among sci-fi fans but also among practicing roboticists and software developers that they influence, if only subconsciously, the course of robotics.

Asimov's experiments with the 1940 laws

Asimov's early stories are examined here not in chronological sequence or on the basis of literary devices, but by looking at clusters of related ideas.

*The ambiguity and cultural dependence of terms

Any set of "machine values" provides enormous scope for linguistic ambiguity. A robot must be able to distinguish robots from humans. It must be able to recognize an order and distinguish it from a casual request. It must "understand" the concept of its own existence, a capability that arguably has eluded mankind, although it may be simpler for robots. In one short story, for example, the vagueness of the word firmly in the order "Pull [the bar] towards you firmly" jeopardizes a vital hyperspace experiment.
Because robot strength is much greater than that of humans, it pulls the bar more powerfully than the human had intended, bends it, and thereby ruins the control mechanism.15

Defining injury and harm is particularly problematic, as are the distinctions between death, mortal danger, and injury or harm that is not life-threatening. Beyond this there is psychological harm. Any robot given, or developing, an awareness of human feelings would have to evaluate injury and harm in psychological as well as physical terms: "The insurmountable First Law of Robotics states: 'A robot may not injure a human being....' and to repel a friendly gesture would do injury"16 (emphasis added). Asimov investigated this in an early short story and later in a novel: A mind-reading robot interprets the first law as requiring him to give people not the correct answers to their questions but the answers that he knows they want to hear.14,16,17

Another critical question is how a robot is to interpret the term human. A robot could be given any number of subtly different descriptions of a human being, based for example on skin color, height range, and/or voice characteristics such as accent. It is therefore possible for robot behaviour to be manipulated: "the Laws, even the First Law, might not be absolute then, but might be whatever those who design robots define them to be".14 Faced with this difficulty, the robots in this story conclude that "... if different robots are subject to narrow definitions of one sort or another, there can only be measureless destruction. We define human beings as all members of the species, Homo sapiens."14

In an early story, Asimov has a humanoid robot represent itself as a human and stand for public office. It must prevent the public from realizing that it is a robot, since public reaction would not only result in its losing the election but also in tighter constraints on other robots.
A political opponent, seeking to expose the robot, discovers that it is impossible to prove it is a robot solely on the basis of its behavior, because the Laws of Robotics force any robot to perform in essentially the same manner as a good human being.7

In a later novel, a roboticist says, "If a robot is human enough, he would be accepted as a human. Do you demand proof that I am a robot? The fact that I seem human is enough."16 In another scene, a humaniform robot is sufficiently similar to a human to confuse a normal robot and slow down its reaction time.14 Ultimately, two advanced robots recognize each other as "human", at least for the purposes of the laws.14,18

Defining human beings becomes more difficult with the emergence of cyborgs, which may be seen as either machine-enhanced humans or biologically enhanced machines. When a human is augmented by prostheses (artificial limbs, heart pacemakers, renal dialysis machines, artificial lungs, and someday perhaps many other devices), does the notion of a human gradually blur with that of a robot? And does a robot that attains increasingly human characteristics (for example, a knowledge-based system provided with the "know-that" and "know-how" of a human expert and the ability to learn more about a domain) gradually become confused with a human? How would a robot interpret the first and second laws once the Turing test criteria can be routinely satisfied? The key outcome of the most important of Asimov's robot novellas12 is the tenability of the argument that the prosthetization of humans leads inevitably to the humanization of robots.

The cultural dependence of meaning reflects human differences in such matters as religion, nationality, and social status. As robots become more capable, however, cultural differences between humans and robots might also be a factor. For example, in one story19 a human suggests that some laws may be bad and their enforcement unjust, but the robot replies that an unjust law is a contradiction in terms.
When the human refers to something higher than justice, for example, mercy and forgiveness, the robot merely responds, "I am not acquainted with those words".

*The role of judgment in decision making

The assumption that there is a literal meaning for any given series of signals is currently considered naive. Typically, the meaning of a term is seen to depend not only on the context in which it was originally expressed but also on the context in which it is read (see, for example, Winograd and Flores20). If this is so, then robots must exercise judgment to interpret the meanings of words and hence of orders and of new data.

A robot must even determine whether and to what extent the laws apply to a particular situation. Often in the robot stories a robot action of any kind is impossible without some degree of risk to a human. To be at all useful to its human masters, a robot must therefore be able to judge how much the laws can be breached to maintain a tolerable level of risk. For example, in Asimov's very first robot short story, "Robbie [the robot] snatched up Gloria [his young human owner], slackening his speed not one iota, and, consequently, knocking every breath of air out of her."21 Robbie judged that it was less harmful for Gloria to be momentarily breathless than to be mown down by a tractor.

Similarly, conflicting orders may have to be prioritized, for example, when two humans give inconsistent instructions. Whether the conflict is overt, unintentional, or even unwitting, it nonetheless requires a resolution. Even in the absence of conflicting orders, a robot may need to recognize foolish or illegal orders and decline to implement them, or at least question them.
One story asks, "Must a robot follow the orders of a child; or of an idiot; or of a criminal; or of a perfectly decent intelligent man who happens to be inexpert and therefore ignorant of the undesirable consequences of his order?"18

Numerous problems surround the valuation of individual humans. First, do all humans have equal standing in a robot's evaluation? On the one hand they do: "A robot may not judge whether a human being deserves death. It is not for him to decide. He may not harm a human - variety skunk or variety angel."7 On the other hand they might not, as when a robot tells a human, "In conflict between your safety and that of another, I must guard yours."22 In another short story, robots agree that they "must obey a human being who is fit by mind, character, and knowledge to give me that order." Ultimately, this leads the robot to "disregard shape and form in judging between human beings" and to recognize his companion robot not merely as human but as a human "more fit than the others."18

Many subtle problems can be constructed. For example, a person might try forcing a robot to comply with an instruction to harm a human (and thereby violate the first law) by threatening to kill himself unless the robot obeys. How is a robot to judge the trade-off between a high probability of lesser harm to one person versus a low probability of more serious harm to another? Asimov's stories refer to this issue but are somewhat inconsistent with each other and with the strict wording of the first law.

More serious difficulties arise in relation to the valuation of multiple humans. The first law does not even contemplate the simple case of a single terrorist threatening many lives. In a variety of stories, however, Asimov interprets the law to recognize circumstances in which a robot may have to injure or even kill one or more humans to protect one or more others: "The Machine cannot harm a human being more than minimally, and that only to save a greater number"23 (emphasis added).
And again: "The First Law is not absolute. What if harming a human being saves the lives of two others, or three others, or even three billion others? The robot may have thought that saving the Federation took precedence over the saving of one life."24

These passages value humans exclusively on the basis of numbers. A later story includes this justification: "To expect robots to make judgments of fine points such as talent, intelligence, the general usefulness to society, has always seemed impractical. That would delay decision to the point where the robot is effectively immobilized. So we go by numbers."18

A robot's cognitive powers might be sufficient for distinguishing between attacker and attackee, but the first law alone does not provide a robot with the means to distinguish between a "good" person and a "bad" one. Hence, a robot may have to constrain a "good" attackee's self-defense to protect the "bad" attacker from harm. Similarly, disciplining children and prisoners may be difficult under the laws, which would limit robots' usefulness for supervision within nurseries and penal institutions.22 Only after many generations of self-development does a humanoid robot learn to reason that "what seemed like cruelty [to a human] might, in the long run, be kindness."12

The more subtle life-and-death cases, such as assistance in the voluntary euthanasia of a fatally ill or injured person to gain immediate access to organs that would save several other lives, might fall well outside a robot's appreciation.
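The "go by numbers" policy, and the probability trade-off raised earlier, amount to an expected-harm comparison. A minimal sketch follows; the function, its weights, and the scenario figures are all invented for illustration:

```python
def expected_harm(prob: float, people: int, severity: float = 1.0) -> float:
    """Expected harm of a course of action: probability x people x severity.

    'So we go by numbers': how many people are at risk counts;
    talent, character, and usefulness to society do not.
    """
    return prob * people * severity

# A robot weighing certain minor harm to one person against a small
# chance of grave harm to many (all figures are invented):
restrain_one = expected_harm(prob=1.0, people=1, severity=0.1)
ignore_threat = expected_harm(prob=0.05, people=100, severity=1.0)
# On this crude calculus the robot intervenes, since 0.1 < 5.0.
print(restrain_one < ignore_threat)  # True
```

The sketch also exposes the difficulty the article goes on to note: the `prob` and `severity` inputs are not objective quantities but judgments, so the arithmetic is only as defensible as the estimates fed into it.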
Thus, the first law would require a robot to protect the threatened human, unless it was able to judge the steps taken to be the least harmful strategy. The practical solution to such difficult moral questions would be to keep robots out of the operating theater.22

The problem underlying all of these issues is that most probabilities used as input to normative decision models are not objective; rather, they are estimates of probability based on human (or robot) judgment. The extent to which judgment is central to robotic behavior is summed up in the cynical rephrasing of the first law by the major (human) character in the four novels: "A robot must not hurt a human being, unless he can think of a way to prove it is for the human being's ultimate good after all."19

*The sheer complexity

To cope with the judgmental element in robot decision making, Asimov's later novels introduced a further complication: "On ... [worlds other than Earth], . . . the Third Law is distinctly stronger in comparison to the Second Law. . . . An order for self-destruction would be questioned and there would have to be a truly legitimate reason for it to be carried through - a clear and present danger."16 And again, "Harm through an active deed outweighs, in general, harm through passivity - all things being reasonably equal. . . . [A robot is] always to choose truth over nontruth, if the harm is roughly equal in both directions. In general, that is."16

The laws are not absolutes, and their force varies with the individual machine's programming, the circumstances, the robot's previous instructions, and its experience. To cope with the inevitable logical complexities, a human would require not only a predisposition to rigorous reasoning and a considerable education, but also a great deal of concentration and composure. (Alternatively, of course, the human may find it easier to defer to a robot suitably equipped for fuzzy-reasoning-based judgment.)

The strategies as well as the environmental variables involve complexity.
"You must not think . . . that robotic response is a simple yes or no, up or down, in or out. ... There is the matter of speed of response."16 In some cases (for example, when a human must be physically restrained), the degree of strength to be applied must also be chosen.

*The scope for dilemma and deadlock

A deadlock problem was the key feature of the short story in which Asimov first introduced the laws. He constructed the type of stand-off commonly referred to as the "Buridan's ass" problem. It involved a balance between a strong third-law self-protection tendency, causing the robot to try to avoid a source of danger, and a weak second-law order to approach that danger. "The conflict between the various rules is [meant to be] ironed out by the different positronic potentials in the brain," but in this case the robot "follows a circle around [the source of danger], staying on the locus of all points of ... equilibrium."5

Deadlock is also possible within a single law. An example under the first law would be two humans threatened with equal danger and the robot unable to contrive a strategy to protect one without sacrificing the other. Under the second law, two humans might give contradictory orders of equivalent force. The later novels address this question with greater sophistication:

What was troubling the robot was what roboticists called an equipotential of contradiction on the second level. Obedience was the Second Law and [the robot] was suffering from two roughly equal and contradictory orders. Robot-block was what the general population called it or, more frequently, roblock for short . . . [or] 'mental freeze-out.'

No matter how subtle and intricate a brain might be, there is always some way of setting up a contradiction.
This is a fundamental truth of mathematics.16

Clearly, robots subject to such laws need to be programmed to recognize deadlock and either choose arbitrarily among the alternative strategies or arbitrarily modify an arbitrarily chosen strategy variable (say, move a short distance in any direction) and reevaluate the situation: "If A and not-A are precisely equal misery-producers according to his judgment, he chooses one or the other in a completely unpredictable way and then follows that unquestioningly. He does not go into mental freeze-out."16

The finite time that even robot decision making requires could cause another type of deadlock. Should a robot act immediately, by "instinct," to protect a human in danger? Or should it pause long enough to more carefully analyze available data - or collect more data - perhaps thereby discovering a better solution, or detecting that other humans are in even greater danger? Such situations can be approached using the techniques of information economics, but there is inherent scope for ineffectiveness and deadlock, colloquially referred to as "paralysis by analysis."

Asimov suggested one class of deadlock that would not occur: If in a given situation a robot knew that it was powerless to prevent harm to a human, then the first law would be inoperative; the third law would become relevant, and it would not self-immolate in a vain attempt to save the human.25 It does seem, however, that the deadlock is not avoided by the laws themselves, but rather by the presumed sophistication of the robot's decision-analytical capabilities.

A special case of deadlock arises when a robot is ordered to wait. For example, "[Robot] you will not move nor speak nor hear us until I say your name again." There was no answer. The robot sat as though it were cast out of one piece of metal, and it would stay so until it heard its name again.26 As written, the passage raises the intriguing question of whether passive hearing is possible without active listening.
What if the robot's name is next used in the third person rather than the second?

In interpreting a command such as "Do absolutely nothing until I call you!" a human would use common sense and, for example, attend to bodily functions in the meantime. A human would do nothing about the relevant matter until the event occurred. In addition, a human would recognize additional terminating events, such as a change in circumstances that makes it impossible for the event to ever occur. A robot is likely to be constrained to a more literal interpretation, and unless it can infer a scope delimitation to the command, it would need to place the majority of its functions in abeyance. The faculties that would need to remain in operation are:

- the sensory-perceptive subsystem needed to detect the condition;
- the recommencement triggering function;
- one or more daemons to provide a time-out mechanism (presumably the scope of the command is at least restricted to the expected remaining lifetime of the person who gave the command); and
- the ability to play back the audit trail so that an overseer can discover the condition on which the robot's resuscitation depends.

Asimov does not appear to have investigated whether the behavior of a robot in wait-mode is affected by the laws. If it isn't, then it will not only fail to protect its own existence and to obey an order, but will also stand by and allow a human to be harmed. A robotic security guard could therefore be nullified by an attacker's simply putting it into a wait-state.

*Audit of robot compliance

For a fiction writer, it is sufficient to have the laws embedded in robots' positronic pathways (whatever they may be).
To actually apply such a set of laws in robot design, however, it would be necessary to ensure that every robot:

- had the laws imposed in precisely the manner intended; and
- was at all times subject to them - that is, they could not be overridden or modified.

It is important to know how malprogramming and modification of the laws' implementation in a robot (whether intentional or unintentional) can be prevented, detected, and dealt with.

In an early short story, robots were "rescuing" humans whose work required short periods of relatively harmless exposure to gamma radiation. Officials obtained robots with the first law modified so that they were incapable of injuring a human but under no compulsion to prevent one from coming to harm. This clearly undermined the remaining part of the first law, since, for example, a robot could drop a heavy weight toward a human, knowing that it would be fast enough and strong enough to catch it before it harmed the person. However, once gravity had taken over, the robot would be free to ignore the danger.25 Thus, a partial implementation was shown to be risky, and the importance of robot audit underlined. Other risks include trapdoors, Trojan horses, and similar devices in the robot's programming.

A further imponderable is the effect of hostile environments and stress on the reliability and robustness of robots' performance in accordance with the laws. In one short story, it transpires that "The Machine That Won the War" had been receiving only limited and poor-quality data as a result of enemy action against its receptors and had been processing it unreliably because of a shortage of experienced maintenance staff. Each of the responsible managers had, in the interests of national morale, suppressed that information, even from one another, and had separately and independently "introduced a number of necessary biases" and "adjusted" the processing parameters in accordance with intuition.
The executive director, even though unaware of the adjustments, had placed little reliance on the machine's output, preferring to carry out his responsibility to mankind by exercising his own judgment.27

A major issue in military applications generally28 is the impossibility of contriving effective compliance tests for complex systems subject to hostile and competitive environments. Asimov points out that the difficulties of assuring compliance will be compounded by the design and manufacture of robots by other robots.22

*Robot autonomy

Sometimes humans may delegate control to a robot and find themselves unable to regain it, at least in a particular context. One reason is that to avoid deadlock, a robot must be capable of making arbitrary decisions. Another is that the laws embody an explicit ability for a robot to disobey an instruction, by virtue of the overriding first law.

In an early Asimov short story, a robot "knows he can keep [the energy beam] more stable than we [humans] can, since he insists he's the superior being, so he must keep us out of the control room [in accordance with the first law]."29 The same scenario forms the basis of one of the most vivid episodes in science fiction, HAL's attempt to wrest control of the spacecraft from Bowman in 2001: A Space Odyssey. Robot autonomy is also reflected in a lighter moment in one of Asimov's later novels, when a character says to his companion, "For now I must leave you. The ship is coasting in for a landing, and I must stare intelligently at the computer that controls it, or no one will believe I am the captain."14

In extreme cases, robot behavior will involve subterfuge, as the machine determines that the human, for his or her own protection, must be tricked.
In another early short story, the machines that manage Earth's economy implement a form of "artificial stupidity" by making intentional errors, thereby encouraging humans to believe that the robots are fallible and that humans still have a role to play.23

*Scope for adaptation

The normal pattern of any technology is that successive generations show increased sophistication, and it seems inconceivable that robotic technology would quickly reach a plateau and require little further development. Thus there will always be many old models in existence, models that may have inherent technical weaknesses resulting in occasional malfunctions and hence infringement on the Laws of Robotics. Asimov's short stories emphasize that robots are leased from the manufacturer, never sold, so that old models can be withdrawn after a maximum of 25 years.

Looking at the first 50 years of software maintenance, it seems clear that successive modification of existing software to perform new or enhanced functions is one or more orders of magnitude harder than creating a new artifact to perform the same function. Doubts must exist about the ability of humans (or robots) to reliably adapt existing robots. The alternative - destruction of existing robots - will be resisted in accordance with the third law, robot self-preservation.

At a more abstract level, the laws are arguably incomplete because the frame of reference is explicitly human. No recognition is given to plants, animals, or as-yet-undiscovered (for example, extraterrestrial) intelligent life forms. Moreover, some future human cultures may place great value on inanimate creation, or on holism. If, however, late twentieth-century values have meanwhile been embedded in robots, that future culture may have difficulty wresting the right to change the values of the robots it has inherited. If machines are to have value sets, there must be a mechanism for adaptation, at least through human-imposed change.
The difficulty is that most such value sets will be implicit rather than explicit; their effects will be scattered across a system rather than implemented in a modular and therefore replaceable manner.

At first sight, Asimov's laws are intuitively appealing, but their application encounters difficulties. Asimov, in his fiction, detected and investigated the laws' weaknesses, which this article (Part 1 of 2) has analyzed and classified. Part 2, in the next issue of Computer, will take the analysis further by considering the effects of Asimov's 1985 revision to the laws. It will then examine the extent to which the weaknesses in these laws may in fact be endemic to any set of laws regulating robotic behavior.

Part 2 (IEEE Computer, January 1994)

Recapitulation

Isaac Asimov's Laws of Robotics, first formulated in 1940, were primarily a literary device intended to support a series of stories about robot behavior. Over time, he found that the three laws included enough apparent inconsistencies, ambiguity, and uncertainty to provide the conflicts required for a great many stories. In examining the ramifications of these laws, Asimov revealed problems that might later confront real roboticists and information technologists attempting to establish rules for the behavior of intelligent machines.

With their fictional "positronic" brains imprinted with the mandate to (in order of priority) prevent harm to humans, obey their human masters, and protect themselves, Asimov's robots had to deal with great complexity. In a given situation, a robot might be unable to satisfy the demands of two equally powerful mandates and go into "mental freeze-out." Semantics is also a problem. As demonstrated in Part 1 of this article (Computer, December 1993, pp. 53-61), language is much more than a set of literal meanings, and Asimov showed us that a machine trying to distinguish, for example, who or what is human may encounter many difficulties that humans themselves handle easily and intuitively.
Thus, robots must have sufficient capabilities for judgment - capabilities that can cause them to frustrate the intentions of their masters when, in a robot's judgment, a higher order law applies.

As information technology evolves and machines begin to design and build other machines, the issue of human control gains greater significance. In time, human values tend to change; the rules reflecting these values, and embedded in existing robotic devices, may need to be modified. But if they are implicit rather than explicit, with their effects scattered widely across a system, they may not be easily replaceable. Asimov himself discovered many contradictions and eventually revised the Laws of Robotics.

Asimov's 1985 revised Laws of Robotics

The Zeroth law

After introducing the original three laws, Asimov detected, as early as 1950, a need to extend the first law, which protected individual humans, so that it would protect humanity as a whole. Thus, his calculating machines "have the good of humanity at heart through the overwhelming force of the First Law of Robotics"1 (emphasis added). In 1985 he developed this idea further by postulating a "zeroth" law that placed humanity's interests above those of any individual while retaining a high value on individual human life.2 The revised set of laws is shown in the sidebar.

Asimov pointed out that under a strict interpretation of the first law, a robot would protect a person even if the survival of humanity as a whole was placed at risk. Possible threats include annihilation by an alien or mutant human race, or by a deadly virus. Even when a robot's own powers of reasoning led it to conclude that mankind as a whole was doomed if it refused to act, it was nevertheless constrained: "I sense the oncoming of catastrophe . . . [but] I can only follow the Laws."2

In Asimov's fiction the robots are tested by circumstances and must seriously consider whether they can harm a human to save humanity.
The turning point comes when the robots appreciate that the laws are indirectly modifiable by roboticists through the definitions programmed into each robot: "If the Laws of Robotics, even the First Law, are not absolutes, and if human beings can modify them, might it not be that perhaps, under proper conditions, we ourselves might mod - "2 Although the robots are prevented by imminent "roblock" (robot block, or deadlock) from even completing the sentence, the groundwork has been laid.

Later, when a robot perceives a clear and urgent threat to mankind, it concludes, "Humanity as a whole is more important than a single human being. There is a law that is greater than the First Law: `A robot may not injure humanity, or through inaction, allow humanity to come to harm.'"2

Defining "humanity"

Modification of the laws, however, leads to additional considerations. Robots are increasingly required to deal with abstractions and philosophical issues. For example, the concept of humanity may be interpreted in different ways. It may refer to the set of individual human beings (a collective), or it may be a distinct concept (a generality, as in the notion of "the State"). Asimov invokes both ideas by referring to a tapestry (a generality) made up of individual contributions (a collective): "An individual life is one thread in the tapestry, and what is one thread compared to the whole? . . . Keep your mind fixed firmly on the tapestry and do not let the trailing off of a single thread affect you."2

A human roboticist raised a difficulty with the zeroth law immediately after the robot formulated it: "What is your `humanity' but an abstraction? Can you point to humanity? You can injure or fail to injure a specific human being and understand the injury or lack of injury that has taken place. Can you see the injury to humanity? Can you understand it? Can you point to it?"2

The robot later responds by positing an ability to "detect the hum of the mental activity of Earth's human population, overall. . . .
And, extending that, can one not imagine that in the Galaxy generally there is the hum of the mental activity of all of humanity? How, then, is humanity an abstraction? It is something you can point to." Perhaps as Asimov's robots learn to reason with abstract concepts, they will inevitably become adept at sophistry and polemic.

The increased difficulty of judgment

One of Asimov's robot characters also points out the increasing complexity of the laws: "The First Law deals with specific individuals and certainties. Your Zeroth Law deals with vague groups and probabilities."2 At this point, as he often does, Asimov resorts to poetic license and for the moment pretends that coping with harm to individuals does not involve probabilities. However, the key point is not affected: Estimating probabilities in relation to groups of humans is far more difficult than with individual humans.

It is difficult enough, when one must choose quickly . . . , to decide which individual may suffer, or inflict, the greater harm. To choose between an individual and humanity, when you are not sure of what aspect of humanity you are dealing with, is so difficult that the very validity of Robotic Laws comes to be suspect. As soon as humanity in the abstract is introduced, the Laws of Robotics begin to merge with the Laws of Humanics which may not even exist.2

Robot paternalism

Despite these difficulties, the robots agree to implement the zeroth law, since they judge themselves more capable than anyone else of dealing with the problems. The original laws produced robots with considerable autonomy, albeit a qualified autonomy allowed by humans.
But under the 1985 laws, robots were more likely to adopt a superordinate, paternalistic attitude toward humans.

Asimov's Revised Laws of Robotics (1985)

Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law of Robotics.

Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the Zeroth or First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law.

Asimov suggested this when he first hinted at the zeroth law, because he had his chief robot psychologist say that "...we can no longer understand our own creations. . . . [Robots] have progressed beyond the possibility of detailed human control."1 In a more recent novella, a robot proposes to treat his form "as a canvas on which I intend to draw a man," but is told by the roboticist, "It's a puny ambition. ... You're better than a man. You've gone downhill from the moment you opted for organicism."3

In the later novels, a robot with telepathic powers manipulates humans to act in a way that will solve problems,4 although its powers are constrained by the psychological dangers of mind manipulation. Naturally, humans would be alarmed by the very idea of a mind-reading robot; therefore, under the zeroth and first laws, such a robot would be permitted to manipulate the minds of humans who learned of its abilities, making them forget the knowledge, so that they could not be harmed by it.
This is reminiscent of an Asimov story in which mankind is an experimental laboratory for higher beings5 and Adams' altogether more flippant Hitchhiker's Guide to the Galaxy, in which the Earth is revealed as a large experiment in which humans are being used as laboratory animals by, of all things, white mice.6 Someday those manipulators of humans might be robots.

Asimov's The Robots of Dawn is essentially about humans, with robots as important players. In the sequel Robots and Empire, however, the story is dominated by the two robots, and the humans seem more like their playthings. It comes as little surprise, then, that the robots eventually conclude that "it is not sufficient to be able to choose [among alternative humans or classes of human] . . . ; we must be able to shape."2 Clearly, any subsequent novels in the series would have been about robots, with humans playing "bit" parts.

Robot dominance has a corollary that pervades the novels: History "grew less interesting as it went along; it became almost soporific."4 With life's challenges removed, humanity naturally regresses into peace and quietude, becoming "placid, comfortable, and unmoving" - and stagnant.

So who's in charge?

As we have seen, the term human can be variously defined, thus significantly affecting the first law. The term humanity did not appear in the original laws, only in the zeroth law, which Asimov had a robot formulate and enunciate.2 Thus, the robots define human and humanity to refer to themselves as well as to humans, and ultimately to themselves alone. Another of the great science fiction stories, Clarke's Rendezvous with Rama,7 also assumes that an alien civilization, much older than mankind, would consist of robots alone (although in this case Clarke envisioned biological robots).
Asimov's vision of a robot takeover differs from those of previous authors only in that force would be unnecessary. Asimov does not propose that the zeroth law must inevitably result in the ceding of species dominance by humans to robots. However, some concepts may be so central to humanness that any attempt to embody them in computer processing might undermine the ability of humanity to control its own fate. Weizenbaum argues this point more fully.8

The issues discussed here, and in Part 1, have grown increasingly speculative, and some are more readily associated with metaphysics than with contemporary applications of information technology. However, they demonstrate that even an intuitively attractive extension to the original laws could have very significant ramifications. Some of the weaknesses are probably inherent in any set of laws and hence in any robotic control regime.

Asimov's laws extended

The behavior of robots in Asimov's stories is not satisfactorily explained by the laws he enunciated. This section examines the design requirements necessary to effectively subject robotic behavior to the laws. In so doing, it becomes necessary to postulate several additional laws implicit in Asimov's fiction.

Perceptual and cognitive apparatus

Clearly, robot design must include sophisticated sensory capabilities. However, more than signal reception is needed. Many of the difficulties Asimov dramatized arose because robots were less than omniscient. Would humans, knowing that robots' cognitive capabilities are limited, be prepared to trust their judgment on life-and-death matters? For example, the fact that any single robot cannot harm a human does not protect humans from being injured or killed by robotic actions. In one story, a human tells a robot to add a chemical to a glass of milk and then tells another robot to serve the milk to a human. The result is murder by poisoning. Similarly, a robot untrained in first aid might move an accident victim and break the person's spinal cord.
A human character in The Naked Sun is so incensed by these shortcomings that he accuses roboticists of perpetrating a fraud on mankind by omitting key words from the first law. In effect, it really means "A robot may do nothing that to its knowledge would injure a human being, and may not, through inaction, knowingly allow a human being to come to harm."9

Robotic architecture must be designed so that the laws can effectively control a robot's behavior. A robot requires a basic grammar and vocabulary to "understand" the laws and converse with humans. In one short story, a production accident results in a "mentally retarded" robot. This robot, defending itself against a feigned attack by a human, breaks its assailant's arm. This was not a breach of the first law, because it did not knowingly injure the human: "In brushing aside the threatening arm . . . it could not know the bone would break. In human terms, no moral blame can be attached to an individual who honestly cannot differentiate good and evil."10 In Asimov's stories, instructions sometimes must be phrased carefully to be interpreted as mandatory. Thus, some authors have considered extensions to the apparatus of robots, for example, a "button labeled `Implement Order' on the robot's chest,"11 analogous to the Enter key on a computer's keyboard.

A set of laws for robotics cannot be independent but must be conceived as part of a system. A robot must also be endowed with data collection, decision-analytical, and action processes by which it can apply the laws. Inadequate sensory, perceptual, or cognitive faculties would undermine the laws' effectiveness.

Additional implicit laws

In his first robot short story, Asimov stated that "long before enough can go wrong to alter that First Law, a robot would be completely inoperable. It's a mathematical impossibility [for Robbie the Robot to harm a human]."12 For this to be true, robot design would have to incorporate a high-order controller (a "conscience"?)
that would cause a robot to detect any potential for noncompliance with the laws and report the problem or immobilize itself. The implementation of such a meta-law ("A robot may not act unless its actions are subject to the Laws of Robotics") might well strain both the technology and the underlying science. (Given the meta-language problem in twentieth-century philosophy, perhaps logic itself would be strained.) This difficulty highlights the simple fact that robotic behavior cannot be entirely automated; it is dependent on design and maintenance by an external agent.

Another of Asimov's requirements is that all robots must be subject to the laws at all times. Thus, it would have to be illegal for human manufacturers to create a robot that was not subject to the laws. In a future world that makes significant use of robots, their design and manufacture would naturally be undertaken by other robots. Therefore, the Laws of Robotics must include the stipulation that no robot may commit an act that could result in any robot's not being subject to the same laws.

The words "protect its own existence" raise a semantic difficulty. In The Bicentennial Man, Asimov has a robot achieve humanness by taking its own life. Van Vogt, however, wrote that "indoctrination against suicide" was considered a fundamental requirement.13 The solution might be to interpret the word protect as applying to all threats, or to amend the wording to explicitly preclude self-inflicted harm.

Having to continually instruct robot slaves would be both inefficient and tiresome. Asimov hints at a further, deep-nested law that would compel robots to perform the tasks they were trained for: "Quite aside from the Three Laws, there isn't a pathway in those brains that isn't carefully designed and fixed.
We have robots planned for specific tasks, implanted with specific capabilities."14 (Emphasis added.)

So perhaps we can extrapolate an additional, lower priority law: "A robot must perform the duties for which it has been programmed, except where that would conflict with a higher order law."

Asimov's laws revolve around robots' transactions with humans and thus apply where robots have relatively little to do with one another or where there is only one robot. However, the laws fail to address the management of large numbers of robots. In several stories, a robot is assigned to oversee other robots. This would be possible only if each of the lesser robots were instructed by a human to obey the orders of its robot overseer. That would create a number of logical and practical difficulties, such as the scope of the human's order. It would seem more effective to incorporate in all subordinate robots an additional law, for example, "A robot must obey the orders given it by superordinate robots except where such orders would conflict with a higher order law." Such a law would fall between the second and third laws.

Furthermore, subordinate robots should protect their superordinate robot. This could be implemented as an extension or corollary to the third law; that is, to protect itself, a robot would have to protect another robot on which it depends. Indeed, a subordinate robot may need to be capable of sacrificing itself to protect its robot overseer. Thus, an additional law superior to the third law but inferior to orders from either a human or a robot overseer seems appropriate: "A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher order law."

The wording of such laws should allow for nesting, since robot overseers may report to higher level robots.
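The priority ordering these additional laws rely on can be sketched as a simple ordered rule check, in which a proposed action is vetoed by the first (highest-priority) law it violates. This is only an illustration of the idea, not a specification: the law names and violation predicates below are hypothetical.

```python
# A minimal sketch of priority-ordered law checking, assuming an extra
# "overseer" law sits between the second and third laws as suggested above.
# The predicates are hypothetical illustrations, not an actual design.

def first_violation(action, laws):
    """Return the name of the highest-priority law the proposed action
    violates, or None if the action is permissible under all of them."""
    for name, violates in laws:
        if violates(action):
            return name
    return None

# Each predicate inspects a simple dict describing the proposed action.
laws = [
    ("First Law", lambda a: a.get("harms_human", False)),
    ("Second Law", lambda a: a.get("disobeys_human_order", False)),
    ("Overseer Law", lambda a: a.get("disobeys_overseer", False)),
    ("Third Law", lambda a: a.get("endangers_self", False)),
]

# An action that both disobeys an overseer and endangers the robot itself
# is vetoed by the higher-priority overseer law.
verdict = first_violation({"disobeys_overseer": True, "endangers_self": True}, laws)
```

The same loop extends naturally to a zeroth law or a fourth: priority is simply position in the list, which is why the relative placement of each added law matters so much.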
It would also be necessary to determine the form of the superordinate relationships:

- a tree, in which each robot has precisely one immediate overseer, whether robot or human;
- a constrained network, in which each robot may have several overseers but restrictions determine who may act as an overseer; or
- an unconstrained network, in which each robot may have any number of other robots or persons as overseers.

This issue of a command structure is far from trivial, since it is central to democratic processes that no single entity shall have ultimate authority. Rather, the most senior entity in any decision-making hierarchy must be subject to review and override by some other entity, exemplified by the balance of power in the three branches of government and the authority of the ballot box. Successful, long-lived systems involve checks and balances in a lattice rather than a mere tree structure. Of course, the structures and processes of human organizations may prove inappropriate for robotic organization. In any case, additional laws of some kind would be essential to regulate relationships among robots.

The sidebar shows an extended set of laws, one that incorporates the additional laws postulated in this section. Even this set would not always ensure appropriate robotic behavior. However, it does reflect the implicit laws that emerge in Asimov's fiction while demonstrating that any realistic set of design principles would have to be considerably more complex than Asimov's 1940 or 1985 laws. This additional complexity would inevitably exacerbate the problems identified earlier in this article and create new ones.
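The distinction between a tree and a network of overseers can be made mechanical: a tree allows each robot at most one immediate overseer. The sketch below is a minimal illustration under that definition; the robot names and the overseer mapping are hypothetical, and the eligibility rules that would make a network "constrained" are not modeled.

```python
# A sketch of distinguishing superordinate-relationship forms. Each robot
# is mapped to the set of its immediate overseers (robots or humans).

def classify_structure(overseers):
    """Return 'tree' if every robot has at most one immediate overseer,
    otherwise 'network'. (Whether a network is constrained or
    unconstrained would depend on eligibility rules not modeled here.)"""
    if all(len(bosses) <= 1 for bosses in overseers.values()):
        return "tree"
    return "network"

# One robot foreman overseeing two workers: a tree.
tree = {"worker1": {"foreman"}, "worker2": {"foreman"}, "foreman": set()}

# A worker answering to both a robot foreman and a human: a network.
net = {"worker1": {"foreman", "human_boss"}, "foreman": set()}
```

A lattice of checks and balances, as the paragraph above advocates, corresponds to the network cases: no single node sits above all others without itself being subject to review.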
An Extended Set of the Laws of Robotics

The Meta-Law: A robot may not act unless its actions are subject to the Laws of Robotics.

Law Zero: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

Law One: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order Law.

Law Two: A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order Law. A robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law.

Law Three: A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher-order Law. A robot must protect its own existence as long as such protection does not conflict with a higher-order Law.

Law Four: A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order Law.

The Procreation Law: A robot may not take any part in the design or manufacture of a robot unless the new robot's actions are subject to the Laws of Robotics.

While additional laws may be trivially simple to extract and formulate, the need for them serves as a warning. The 1940 laws' intuitive attractiveness and simplicity were progressively lost in complexity, legalisms, and semantic richness. Clearly then, formulating an actual set of laws as a basis for engineering design would result in similar difficulties and require a much more formal approach. Such laws would have to be based in ethics and human morality, not just in mathematics and engineering. Such a political process would probably result in a document couched in fuzzy generalities rather than constituting an operational-level, programmable specification.

Implications for information technologists

Many facets of Asimov's fiction are clearly inapplicable to real information technology or too far in the future to be relevant to contemporary applications.
Some matters, however, deserve our consideration. For example, Asimov's fiction could help us assess the practicability of embedding some appropriate set of general laws into robotic designs. Alternatively, the substantive content of the laws could be used as a set of guidelines to be applied during the conception, design, development, testing, implementation, use, and maintenance of robotic systems. This section explores the second approach.

Recognition of stakeholder interests

The Laws of Robotics designate no particular class of humans (not even a robot's owner) as more deserving of protection or obedience than another. A human might establish such a relationship by command, but the laws give such a command no special status: another human could therefore countermand it. In short, the laws reflect the humanistic and egalitarian principles that theoretically underlie most democratic nations.

The laws therefore stand in stark contrast to our conventional notions about an information technology artifact, whose owner is implicitly assumed to be its primary beneficiary. An organization shapes an application's design and use for its own benefit. Admittedly, during the last decade users have been given greater consideration in terms of both the human-machine interface and participation in system development. But that trend has been justified by the better returns the organization can get from its information technology investment rather than by any recognition that users are stakeholders with a legitimate voice in decision making.
The interests of other affected parties are even less likely to be reflected.

In this era of powerful information technology, professional bodies of information technologists need to consider:

- identification of stakeholders and how they are affected;
- prior consultation with stakeholders;
- quality assurance standards for design, manufacture, use, and maintenance;
- liability for harm resulting from either malfunction or use in conformance with the designer's intentions; and
- complaint-handling and dispute-resolution procedures.

Once any resulting standards reach a degree of maturity, legislatures in the many hundreds of legal jurisdictions throughout the world would probably have to devise enforcement procedures.

The interests of people affected by modern information technology applications have been gaining recognition. For example, consumer representatives are now being involved in the statement of user requirements and the establishment of the regulatory environment for consumer electronic-funds-transfer systems. This participation may extend to the logical design of such systems. Other examples are trade-union negotiations with employers regarding technology-enforced change, and the publication of software quality-assurance standards.

For large-scale applications of information technology, governments have been called upon to apply procedures like those commonly used in major industrial and social projects. Thus, commitment might have to be deferred pending dissemination and public discussion of independent environmental or social impact statements. Although organizations that use information technology might see this as interventionism, decision making and approval for major information technology applications may nevertheless become more widely representative.
Closed-system versus open-system thinking

Computer-based systems no longer comprise independent machines each serving a single location. The marriage of computing with telecommunications has produced multicomponent systems designed to support all elements of a widely dispersed organization. Integration hasn't been simply geographic, however. The practice of information systems has matured since the early years when existing manual systems were automated largely without procedural change. Developers now seek payback via the rationalization of existing systems and varying degrees of integration among previously separate functions. With the advent of strategic and interorganizational systems, economies are being sought at the level of industry sectors, and functional integration increasingly occurs across corporate boundaries.

Although programmers can no longer regard the machine as an almost entirely closed system with tightly circumscribed sensory and motor capabilities, many habits of closed-system thinking remain. When systems have multiple components, linkages to other systems, and sophisticated sensory and motor capabilities, the scope needed for understanding and resolving problems is much broader than for a mere hardware/software machine. Human activities in particular must be perceived as part of the system. This applies to manual procedures within systems (such as reading dials on control panels), human activities on the fringes of systems (such as decision making based on computer-collated and -displayed information), and the security of the user's environment (automated teller machines, for example). The focus must broaden from mere technology to technology in use.

General systems thinking leads information technologists to recognize that relativity and change must be accommodated. Today, an artifact may be applied in multiple cultures where language, religion, laws, and customs differ. Over time, the original context may change.
For example, models for a criminal justice system - one based on punishment and another based on redemption - may alternately dominate social thinking. Therefore, complex systems must be capable of adaptation.

Blind acceptance of technological and other imperatives

Contemporary utilitarian society seldom challenges the presumption that what can be done should be done. Although this technological imperative is less pervasive than people generally think, societies nevertheless tend to follow where their technological capabilities lead. Related tendencies include the economic imperative (what can be done more efficiently should be) and the marketing imperative (any effective demand should be met). An additional tendency might be called the "information imperative": the dominance of administrative efficiency, information richness, and rational decision making. However, the collection of personal data has become so pervasive that citizens and employees have begun to object.

The greater a technology's potential to promote change, the more carefully a society should consider the desirability of each application. Complementary measures that may be needed to ameliorate its negative effects should also be considered. This is a major theme of Asimov's stories, as he explores the hidden effects of technology. The potential impact of information technology is so great that it would be inexcusable for professionals to succumb blindly to the economic, marketing, information, technological, and other imperatives. Application software professionals can no longer treat the implications of information technology as someone else's problem but must consider them as part of the project.15

Human acceptance of robots

In Asimov's stories, humans develop affection for robots, particularly humaniform robots.
In his very first short story, a little girl is too closely attached to Robbie the Robot for her parents' liking.12 In another early story, a woman starved for affection from her husband, and sensitively assisted by a humanoid robot to increase her self-confidence, entertains thoughts approaching love toward it/him.16 Nonhumaniforms, such as conventional industrial robots and large, highly dispersed robotic systems (such as warehouse managers, ATMs, and EFT/POS systems), seem less likely to elicit such warmth. Yet several studies have found a surprising degree of identification by humans with computers.17,18 Thus, some hitherto exclusively human characteristics are being associated with computer systems that don't even exhibit typical robotic capabilities.

Users must be continually reminded that the capabilities of hardware/software components are limited:

- they contain many inherent assumptions;
- they are not flexible enough to cope with all of the manifold exceptions that inevitably arise;
- they do not adapt to changes in their environment; and
- authority is not vested in hardware/software components but rather in the individuals who use them.

Educational institutions and staff training programs must identify these limitations; yet even this is not sufficient: The human-machine interface must reflect them. Systems must be designed so that users are required to continually exercise their own expertise, and system output should not be phrased in a way that implies unwarranted authority. These objectives challenge the conventional outlook of system designers.

Human opposition to robots

Robots are agents of change and therefore potentially upsetting to those with vested interests. Of all the machines so far invented or conceived of, robots represent the most direct challenge to humans. Vociferous and even violent campaigns against robotics should not be surprising.
Beyond concerns of self-interest is the possibility that some humans could be repelled by robots, particularly those with humanoid characteristics. Some opponents may be mollified as robotic behavior becomes more tactful. Another tenable argument is that by creating and deploying artifacts that are in some ways superior, humans degrade themselves.

System designers must anticipate a variety of negative reactions against their creations from different groups of stakeholders. Much will depend on the number and power of the people who feel threatened - and on the scope of the change they anticipate. If, as Asimov speculates,9 a robot-based economy develops without equitable adjustments, the backlash could be considerable.

Such a rejection could involve powerful institutions as well as individuals. In one Asimov story, the US Department of Defense suppresses a project intended to produce the perfect robot soldier. It reasons that the degree of discretion and autonomy needed for battlefield performance would tend to make robots rebellious in other circumstances (particularly during peacetime) and unprepared to suffer their commanders' foolish decisions.19 At a more basic level, product lines and markets might be threatened, and hence the profits and even the survival of corporations. Although even very powerful cartels might not be able to impede robotics for very long, its development could nevertheless be delayed or altered. Information technologists need to recognize the negative perceptions of various stakeholders and manage both system design and project politics accordingly.

The structuredness of decision making

For five decades there has been little doubt that computers hold significant computational advantages over humans. However, the merits of machine decision making remain in dispute. Some decision processes are highly structured and can be resolved using known algorithms operating on defined data items with defined interrelationships.
Most structured decisions are candidates for automation, subject, of course, to economic constraints. The advantages of machines must also be balanced against risks. The choice to automate must be made carefully, because the automated decision process (algorithm, problem description, problem-domain description, or analysis of empirical data) may later prove to be inappropriate for a particular type of decision. Also, humans involved as data providers, data communicators, or decision implementers may not perform rationally because of poor training, poor performance under pressure, or willfulness.

Unstructured decision making remains the preserve of humans for one or more of the following reasons:

- humans have not yet worked out a suitable way to program (or teach) a machine how to make that class of decision;
- some relevant data cannot be communicated to the machine;
- "fuzzy" or "open-textured" concepts or constructs are involved; or
- such decisions involve judgments that system participants feel should not be made by machines on behalf of humans.

One important type of unstructured decision is problem diagnosis. As Asimov described the problem, "How ... can we send a robot to find a flaw in a mechanism when we cannot possibly give precise orders, since we know nothing about the flaw ourselves? 'Find out what's wrong' is not an order you can give to a robot; only to a man."20 Knowledge-based technology has since been applied to problem diagnosis, but Asimov's insight retains its validity: A problem may be linguistic rather than technical, requiring common sense, not domain knowledge. Elsewhere, Asimov calls robots "logical but not reasonable" and tells of household robots removing important evidence from a murder scene because a human did not think to order them to preserve it.9

The literature of decision support systems recognizes an intermediate case, semistructured decision making.
Humans are assigned the decision task, and systems are designed to provide support for gathering and structuring potentially relevant data and for modeling and experimenting with alternative strategies. Through continual progress in science and technology, previously unstructured decisions are reduced to semistructured or structured decisions. The choice of which decisions to automate is therefore provisional, pending further advances in the relevant area of knowledge. Conversely, because of environmental or cultural change, structured decisions may not remain so. For example, a family of viruses might mutate so rapidly that the reference data within diagnostic support systems is outstripped and even the logic becomes dangerously inadequate.

Delegating to a machine any kind of decision that is less than fully structured invites errors and mishaps. Of course, human decision-makers routinely make mistakes too. One reason for humans' retaining responsibility for unstructured decision making is rational: Appropriately educated and trained humans may make more right decisions and/or fewer seriously wrong decisions than a machine. Using common sense, humans can recognize when conventional approaches and criteria do not apply, and they can introduce conscious value judgments. Perhaps a more important reason is the arational preference of humans to submit to the judgments of their peers rather than of machines: If someone is going to make a mistake costly to me, better for it to be an understandably incompetent human like myself than a mysteriously incompetent machine.8

Because robot and human capabilities differ, for the foreseeable future at least, each will have specific comparative advantages. Information technologists must delineate the relationship between robots and people by applying the concept of decision structuredness to blend computer-based and human elements advantageously.
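The routing of decisions by structuredness described above can be sketched in code. This is a minimal illustration, not drawn from the article; the names (`Structuredness`, `route_decision`) and the credit-decision scenario are invented for the example, and real classifications would be provisional rather than hard-coded:

```python
from enum import Enum, auto

class Structuredness(Enum):
    STRUCTURED = auto()      # known algorithm over defined data items
    SEMISTRUCTURED = auto()  # machine gathers and models, human decides
    UNSTRUCTURED = auto()    # human decides unaided

def route_decision(kind, automate, support, defer):
    """Dispatch a decision task according to its structuredness."""
    if kind is Structuredness.STRUCTURED:
        return automate()        # machine decides outright
    if kind is Structuredness.SEMISTRUCTURED:
        return defer(support())  # decision support: human has the last word
    return defer(None)           # unstructured: no machine pre-processing

# Hypothetical usage: a credit decision referred to a human officer
# together with machine-collated evidence.
verdict = route_decision(
    Structuredness.SEMISTRUCTURED,
    automate=lambda: "approved by rule",
    support=lambda: {"score": 640, "file": "thin"},
    defer=lambda evidence: ("officer", evidence),
)
```

Because structuredness changes with knowledge and context, the classification assigned to each decision type would itself need periodic review.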
The goal should be to achieve complementary intelligence rather than to continue pursuing the chimera of unneeded artificial intelligence. As Wyndham put it in 1932: "Surely man and machine are natural complements: They assist one another."21

Risk management

Whether or not subjected to intrinsic laws or design guidelines, robotics embodies risks to property as well as to humans. These risks must be managed; appropriate forms of risk avoidance and diminution need to be applied, and regimes for fallback, recovery, and retribution must be established.

Controls are needed to ensure that intrinsic laws, if any, are operational at all times and that guidelines for design, development, testing, use, and maintenance are applied. Second-order control mechanisms are needed to audit first-order control mechanisms. Furthermore, those bearing legal responsibility for harm arising from the use of robotics must be clearly identified. Courtroom litigation may determine the actual amount of liability, but assigning legal responsibilities in advance will ensure that participants take due care.

In most of Asimov's robot stories, robots are owned by the manufacturer even while in the possession of individual humans or corporations. Hence legal responsibility for harm arising from robot noncompliance with the laws can be assigned with relative ease. In most real-world jurisdictions, however, there are enormous uncertainties, substantial gaps in protective coverage, high costs, and long delays.

Each jurisdiction, consistent with its own product liability philosophy, needs to determine who should bear the various risks. The law must be sufficiently clear so that debilitating legal battles do not leave injured parties without recourse or sap the industry of its energy. Information technologists need to communicate to legislators the importance of revising and extending the laws that assign liability for harm arising from the use of information technology.
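The pairing of first-order controls with second-order audit described above can be illustrated with a small sketch. It is not taken from the article; the class and function names are invented, and the safety check stands in for whatever intrinsic laws or design guidelines apply:

```python
class ControlledActuator:
    """First-order control: every action must pass a safety check,
    and both checks and actions are written to an audit log."""

    def __init__(self, safety_check):
        self.safety_check = safety_check
        self.log = []  # append-only audit trail

    def act(self, command):
        ok = self.safety_check(command)
        self.log.append(("check", command, ok))
        if not ok:
            self.log.append(("refused", command))
            return False
        self.log.append(("acted", command))
        return True

def audit(log):
    """Second-order control: verify that no action appears in the log
    without an immediately preceding, passing safety check."""
    for i, entry in enumerate(log):
        if entry[0] == "acted":
            prev = log[i - 1] if i else None
            if not (prev and prev[0] == "check" and prev[2]):
                return False
    return True
```

The point of the second function is that it inspects only the trail left by the first, so a failure of the first-order mechanism (or tampering with it) is detectable by an independent party.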
Enhancements to codes of ethics

Associations of information technology professionals, such as the IEEE Computer Society, the Association for Computing Machinery, the British Computer Society, and the Australian Computer Society, are concerned with professional standards, and these standards almost always include a code of ethics. Such codes aren't intended so much to establish standards as to express standards that already exist informally. Nonetheless, they provide guidance concerning how professionals should perform their work, and there is a significant literature in the area.

The issues raised in this article suggest that existing codes of ethics need to be reexamined in the light of developing technology. Codes generally fail to reflect the potential effects of computer-enhanced machines and the inadequacy of existing managerial, institutional, and legal processes for coping with inherent risks. Information technology professionals need to stimulate and inform debate on the issues. Along with robotics, many other technologies deserve consideration. Such an endeavor would mean reassessing professionalism in the light of fundamental works on ethical aspects of technology.

Asimov's Laws of Robotics have been a very successful literary device. Perhaps ironically, or perhaps because it was artistically appropriate, the sum of Asimov's stories disproves the contention with which he began: they show that it is not possible to reliably constrain the behavior of robots by devising and applying a set of rules.

The freedom of fiction enabled Asimov to project the laws into many future scenarios; in so doing, he uncovered issues that will probably arise someday in real-world situations. Many aspects of the laws discussed in this article are likely to be weaknesses in any robotic code of conduct. Contemporary applications of information technology such as CAD/CAM, EFT/POS, warehousing systems, and traffic control are already exhibiting robotic characteristics.
The difficulties identified are therefore directly and immediately relevant to information technology professionals.

Increased complexity means new sources of risk, since each activity depends directly on the effective interaction of many artifacts. Complex systems are prone to component failures and malfunctions, and to intermodule inconsistencies and misunderstandings. Thus, new forms of backup, problem diagnosis, interim operation, and recovery are needed. Tolerance and flexibility in design must replace the primacy of short-term objectives such as programming productivity. If information technologists do not respond to the challenges posed by robotic systems, as investigated in Asimov's stories, information technology artifacts will be poorly suited for real-world applications. They may be used in ways not intended by their designers, or simply be rejected as incompatible with the individuals and organizations they were meant to serve.

Isaac Asimov, 1920-1992

Born near Smolensk in Russia in 1920, Isaac Asimov came to the United States with his parents three years later. He grew up in Brooklyn, becoming a US citizen at the age of eight. He earned bachelor's, master's, and doctoral degrees in chemistry from Columbia University and qualified as an instructor in biochemistry at Boston University School of Medicine, where he taught for many years and performed research in nucleic acids.

As a child, Asimov had begun reading the science fiction stories on the racks in his family's candy store, and those early years of vicarious visits to strange worlds had filled him with an undying desire to write his own adventure tales. He sold his first short story in 1938, and after wartime service as a chemist and a short hitch in the Army, he focused increasingly on his writing.

Asimov was among the most prolific of authors, publishing hundreds of books on various subjects and dozens of short stories. His Laws of Robotics underlie four of his full-length novels as well as many of his short stories.
The World Science Fiction Convention bestowed Hugo Awards on Asimov in nearly every category of science fiction, and his short story "Nightfall" is often referred to as the best science fiction story ever written. The scientific authority behind his writing gave his stories a feeling of authenticity, and his work undoubtedly did much to popularize science for the reading public.

References to Part 1

1. I. Asimov, The Rest of the Robots (a collection of short stories originally published between 1941 and 1957), Grafton Books, London, 1968.
2. N. Frude, The Robot Heritage, Century Publishing, London, 1984.
3. I. Asimov, I, Robot (a collection of short stories originally published between 1940 and 1950), Grafton Books, London, 1968.
4. I. Asimov, P.S. Warrick, and M.H. Greenberg, eds., Machines That Think, Holt, Rinehart, and Winston, London, 1983.
5. I. Asimov, "Runaround" (originally published in 1942), reprinted in Reference 3, pp. 33-51.
6. L. Del Rey, "Though Dreamers Die" (originally published in 1944), reprinted in Reference 4, pp. 153-174.
7. I. Asimov, "Evidence" (originally published in 1946), reprinted in Reference 3, pp. 159-182.
8. H.M. Geduld and R. Gottesman, eds., Robots, Robots, Robots, New York Graphic Soc., Boston, 1978.
9. P.B. Scott, The Robotics Revolution: The Complete Guide, Blackwell, Oxford, 1984.
10. I. Asimov, Robot Dreams (a collection of short stories originally published between 1947 and 1986), Victor Gollancz, London, 1989.
11. A. Chandor, ed., The Penguin Dictionary of Computers, 3rd ed., Penguin, London, 1985.
12. I. Asimov, "The Bicentennial Man" (originally published in 1976), reprinted in Reference 4, pp. 519-561. Expanded into I. Asimov and R. Silverberg, The Positronic Man, Victor Gollancz, London, 1992.
13. A.C. Clarke and S. Kubrick, 2001: A Space Odyssey, Grafton Books, London, 1968.
14. I. Asimov, Robots and Empire, Grafton Books, London, 1985.
15. I. Asimov, "Risk" (originally published in 1955), reprinted in Reference 1, pp. 122-155.
16. I.
Asimov, The Robots of Dawn, Grafton Books, London, 1983.
17. I. Asimov, "Liar!" (originally published in 1941), reprinted in Reference 3, pp. 92-109.
18. I. Asimov, "That Thou Art Mindful of Him" (originally published in 1974), reprinted in The Bicentennial Man, Panther Books, London, 1978, pp. 79-107.
19. I. Asimov, The Caves of Steel (originally published in 1954), Grafton Books, London, 1958.
20. T. Winograd and F. Flores, Understanding Computers and Cognition, Ablex, Norwood, N.J., 1986.
21. I. Asimov, "Robbie" (originally published as "Strange Playfellow" in 1940), reprinted in Reference 3, pp. 13-32.
22. I. Asimov, The Naked Sun (originally published in 1957), Grafton Books, London, 1960.
23. I. Asimov, "The Evitable Conflict" (originally published in 1950), reprinted in Reference 3, pp. 183-206.
24. I. Asimov, "The Tercentenary Incident" (originally published in 1976), reprinted in The Bicentennial Man, Panther Books, London, 1978, pp. 229-247.
25. I. Asimov, "Little Lost Robot" (originally published in 1947), reprinted in Reference 3, pp. 110-136.
26. I. Asimov, "Robot Dreams," first published in Reference 10, pp. 51-58.
27. I. Asimov, "The Machine That Won the War" (originally published in 1961), reprinted in Reference 10, pp. 191-197.
28. D. Bellin and G. Chapman, eds., Computers in Battle: Will They Work? Harcourt Brace Jovanovich, Boston, 1987.
29. I. Asimov, "Reason" (originally published in 1941), reprinted in Reference 3, pp. 52-70.

References to Part 2

1. I. Asimov, "The Evitable Conflict" (originally published in 1950), reprinted in I. Asimov, I, Robot, Grafton Books, London, 1968, pp. 183-206.
2. I. Asimov, Robots and Empire, Grafton Books, London, 1985.
3. I. Asimov, "The Bicentennial Man" (originally published in 1976), reprinted in I. Asimov, P.S. Warrick, and M.H. Greenberg, eds., Machines That Think, Holt, Rinehart, and Winston, 1983, pp. 519-561.
4. I. Asimov, The Robots of Dawn, Grafton Books, London, 1983.
5. I.
Asimov, "Jokester" (originally published in 1956), reprinted in I. Asimov, Robot Dreams, Victor Gollancz, London, 1989, pp. 278-294.
6. D. Adams, The Hitchhiker's Guide to the Galaxy, Harmony Books, New York, 1979.
7. A.C. Clarke, Rendezvous with Rama, Victor Gollancz, London, 1973.
8. J. Weizenbaum, Computer Power and Human Reason, W.H. Freeman, San Francisco, 1976.
9. I. Asimov, The Naked Sun (originally published in 1957), Grafton Books, London, 1960.
10. I. Asimov, "Lenny" (originally published in 1958), reprinted in I. Asimov, The Rest of the Robots, Grafton Books, London, 1968, pp. 158-177.
11. H. Harrison, "War With the Robots" (originally published in 1962), reprinted in I. Asimov, P.S. Warrick, and M.H. Greenberg, eds., Machines That Think, Holt, Rinehart, and Winston, 1983, pp. 357-379.
12. I. Asimov, "Robbie" (originally published as "Strange Playfellow" in 1940), reprinted in I. Asimov, I, Robot, Grafton Books, London, 1968, pp. 13-32.
13. A.E. Van Vogt, "Fulfillment" (originally published in 1951), reprinted in I. Asimov, P.S. Warrick, and M.H. Greenberg, eds., Machines That Think, Holt, Rinehart, and Winston, 1983, pp. 175-205.
14. I. Asimov, "Feminine Intuition" (originally published in 1969), reprinted in I. Asimov, The Bicentennial Man, Panther Books, London, 1978, pp. 15-41.
15. R.A. Clarke, "Economic, Legal, and Social Implications of Information Technology," MIS Quarterly, Vol. 17, No. 4, Dec. 1988, pp. 517-519.
16. I. Asimov, "Satisfaction Guaranteed" (originally published in 1951), reprinted in I. Asimov, The Rest of the Robots, Grafton Books, London, 1968, pp. 102-120.
17. J. Weizenbaum, "Eliza," Comm. ACM, Vol. 9, No. 1, Jan. 1966, pp. 36-45.
18. S. Turkle, The Second Self: Computers and the Human Spirit, Simon & Schuster, New York, 1984.
19. A. Budrys, "First to Serve" (originally published in 1954), reprinted in I. Asimov, M.H. Greenberg, and C.G. Waugh, eds., Robots, Signet, New York, 1989, pp. 227-244.
20. I.
Asimov, "Risk" (originally published in 1955), reprinted in I. Asimov, The Rest of the Robots, Grafton Books, London, 1968, pp. 122-155.
21. J. Wyndham, "The Lost Machine" (originally published in 1932), reprinted in A. Wells, ed., The Best of John Wyndham, Sphere Books, London, 1973, pp. 13-36, and in I. Asimov, P.S. Warrick, and M.H. Greenberg, eds., Machines That Think, Holt, Rinehart, and Winston, 1983, pp. 29-49.

Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in the Cyberspace Law & Policy Centre at the University of N.S.W., a Visiting Professor in the E-Commerce Programme at the University of Hong Kong, and a Visiting Professor in the Department of Computer Science at the Australian National University.