Preface (2026)
This paper was written in partnership with my friend and collaborator Ian Peters as part of an independent study in Sociology and Anthropology at Olivet College (now the University of Olivet) in the Fall of 2011. We presented it together at the 2011 Michigan Sociological Conference, hosted by the Michigan Sociological Association. I am sharing it here not because it reflects my current conclusions in full, but because it captures how I was already thinking about technology, ethics, and society at a time when these questions were far less present in public conversation.
The language and examples reflect their time. Some claims are more speculative than I would frame them today, and a great deal of work has been published since in artificial intelligence, governance, and ethics. If I were writing this now, I would be more precise about technical capabilities, more explicit about uncertainty, and more attentive to institutions, incentives, and power alongside philosophical questions.
What I still stand by is the central concern. Technological capability often advances faster than our ethical, cultural, and institutional frameworks can adapt. This paper represents an early attempt to think seriously about that gap, drawing from biology, sociology, and moral philosophy rather than treating technology as an isolated or purely technical problem.
I am publishing it as a snapshot of my intellectual development. It is not offered as a final answer, but as an example of how I approach complex systems, the kinds of questions I tend to ask first, and how I reason when the implications are uncertain.
What follows is the paper as it was originally written. Minor bracketed corrections have been added where necessary to clarify obvious typographical errors.
Machine Ethics
Introduction
Society as it exists in the present day has been shaped in every manner by technological development and progress. Written human history stands as a testament to human technological innovation spanning many millennia. One can survey the manner in which technology progressed and subsequently affected human society through the ages. History illustrates that progress in technology marks societal change.
For the vast majority of human existence, technological development progressed at such a sluggish pace that changes in society would hardly have been noticed within a single lifespan. A person living in the Middle Ages would have lived a life that was strikingly similar to that of their parents, as would their children. New technologies would have been introduced gradually, so that both technological and social changes occurred over a prolonged course of time. Mores developed and were integrated over generational time spans. (Effland, 1998)
The interesting and exciting nature of technology is that invention begets further innovation: as new tools were developed, those tools were used to invent newer tools in an even more prompt and efficient fashion. The growth of technology that began with the Scientific Revolution in the 16th century ultimately shaped modern society as it stands today. Within the last century, technological innovation became exponential. The innovations of the 20th century were profound, rapid, and had a massive impact on global society. (Saper, 1997)
The progress of technology within the last decade resembles that of the previous century, only more abrupt. The technological advancements made within the first decade of the 21st century have surpassed the rate at which society can acclimatize to them. Many of the moral and ethical issues that have arisen out of the last decade have only recently come to the attention of mainstream society. Technologies such as personal computing devices; the Internet; social networks and media; cellular information networks; as well as biomedical and pharmaceutical advancements, are altering social mores and challenging traditional individual and societal constraints. (Keyes, 2006) That is not to say that these advancements in technology are detrimental to society; however, their integration is becoming commonplace before they can be properly evaluated and their societal acceptance and proper usage determined. Truly, society is experiencing ethical "growing pains" on the brink of technological advancement.
The rate at which technological progress is made will only continue to expand, even more abruptly than any technological gains made previously. The research conducted by the brilliant minds of current-day engineers, scientists, and innovators will pave the way for faster leaps of technological development than ever previously imagined. An individual living a century ago could have hardly fathomed the world as it stands today, much as individuals of today can hardly fathom how the world will exist a few decades from now. (Keyes, 2006)
The problem then arises from the ever increasing rate of technological progress combined with the inability of society to keep pace ethically and morally. With great technological development comes responsibility in its use, and such responsibility has not had time to properly develop socially. Mores are developed slowly over time, often through trial and error, and typically to protect the society as a whole. With powerful new high technologies coming into existence so quickly, societies face a massive risk. (Tomasello, 1999)
By examining past historical examples, technological progress, trends, and current research, hypotheses can be derived about the speed and nature of future developments. The nature of technological advancement can be applied to current ethical understandings in order to evaluate the impact that it would pose on society. Many of the technologies examined may lead to the obsolescence of man through the prevalence of high technological progress. Obviously, in the light of such grave consequences, it behooves society to properly and promptly evaluate the integration of every technology before its public availability. (Moor, 2006)
High technologies on the development ramp, such as sophisticated artificial intelligence and lifespan extension technologies, pose a massive threat to societal morals, ethics, and cohesion. Ethical understandings as they currently stand are not able to adequately weigh the impact that these technologies pose to society. The consequences of technological progress to society could possibly be averted if properly examined prior to the point of proliferation. If the ethical concerns of technology were integrated into and addressed during the development process, future ethical disasters could potentially be obviated.
Eras of Technological Development
The nature of such assumptions about the future of society and technology may seem absurd unless properly examined from a well-grounded perspective. To properly analyze the forecast of technological development, and the esoteric nature of future ethical concerns, one must closely examine both the present and the past. The past can give insight into the development of technology through the ages, as well as the ethical concerns of previous generations in dealing with new technology. The present can give important information on the current research and developments that will ultimately shape the future of humanity.
Biological Development
Ever since the first organic molecules formed and began to replicate themselves over 4 billion years ago, life has been cultivating greater complexity in order to increase survivability, a self-advancing biological technology known as evolution. Evolution allowed for increasing specialization and intricacy: cellular membranes, advanced chemistries, and metabolisms that adapted to the changing environment. (Davies, 2005)
Organismal Culture
Organisms continued to develop complexity over the span of billions of years until the development of truly advanced organisms with a discernible sense of culture. Organismal culture, which is typically thought of as existing only among humans and other primates, has recently been observed among other species. Voegelin (1951) mentions that if culture were to be defined as learned behavior, then all advanced animal species have some form of culture. Humans have a culture far surpassing that of other primates while having remarkably similar biology. Therefore, there existed a time period in which biological development necessitated the construct of culture in order to increase survivability, as humans and other primates shared a common ancestor as recently as 6 million years ago. Advanced biology gave way to primitive culture. Culture exists because evolution favored it.
McGrew (1998) describes culture as a process of passing learned behavior onward to successive generations. The McGrew process involves six steps: a new pattern of behavior is invented, or an existing behavior is modified; the innovator transmits this pattern to others, or others observe this pattern and adapt to it; the pattern is consistent among others, and has recognizable features to distinguish it; those who have adopted the pattern are able to [perform] it long afterwards; the pattern then spreads across social groups within a population; and the pattern then passes through generations and onward. While these steps are stringent in terms of determining culture among nonhuman species, it is necessary to have a discernible idea of how biology influences culture. (McGrew, 1998)
This process began with the biological advancement of social animals that existed within groups (packs, bands, herds, clans, or families). Biology could only adapt across long-term generational spans. Culture facilitated adaptation that could be transmitted among members of a social group within a generation. This adaptation increased the survivability of the group at large. While biology favored culture, culture influenced biological evolution. Those organisms that developed culture were able to adapt and advance at a faster rate than those without. This shared biological-cultural relationship influenced cognitive development, categorization ability, and creative problem solving, as these traits are necessary for social units to function. (Voegelin, 1951)
Technological Development and Societal Constructs
Culture influenced another type of advancement in animals: the usage and adaptation of tools and the environment in order to increase survivability, which was the beginning of the development of technology. Biology favored culture, and culture favored technology, as technology introduced a method of further increasing the survivability of an animal species. Although it was once thought that only humans possessed the intelligence for the usage of tools, it is now known that many species of animals use tools: other primates, dolphins, birds, and otters. (Emery, 2006)
As early as 200,000 years ago, modern Homo sapiens evolved from their closest biological ancestor, and for the first time cultural adaptation began to outpace biological evolution. The proper biological traits were present in Homo sapiens for this to occur. Modern humans possessed abstract reasoning, creative problem solving, introspection, and communication through language. Humans rapidly developed culture and advanced social constructs. The development of technology further facilitated the advancement of culture. (Goodman, 1990)
Agricultural technology allowed for the development of civilization, which further pushed forward technological progress and development. Human society and culture progressed in tandem with technological advancement, allowing for an integration of the two: ethical considerations of technology were integrated into society. Civilization allowed for exponential growth in human culture, as civilization allowed for the increased specialization of individuals. With this increased specialization came a more rapid growth in technology. While cultural advancement allowed for the development of technology, technology pushed the bounds of culture through the development of systems of writing and mathematics. (Tomasello, 1999)
Biological evolution advanced organisms with increasing levels of complexity and social order, which facilitated the formulation of organismal culture. Cultural invention, that is, the innovative constructs of social groups outside of technology, seems to have necessitated the creation of advanced technology. Advanced technology exponentially pushed the bounds of cultural invention, and ultimately advanced beyond the pace at which culture could keep up. (King, 1975)
The progress of technology surpassed the development of culture in the first decade of the 21st century. For the first time, technologies [were] introduced, proliferated, and reached mass acceptance before societal approval could be determined. If the rate and scope of technological progress within the millennial decade is an indication of the development to come, the societal consequences of inaction, in the absence of societal assessment and valuation, could be dire. The trends in technological progress indicate that the research and development path most likely to influence culture is the advancement of artificial and synthetic intelligences. (Keyes, 2006)
The development of high technologies that would result in the creation of synthetic human-level intelligence, and eventually synthetic intelligence that would rival the collective intelligence of all humans, poses an immense risk to ethical and societal systems as they currently stand. While these ethical considerations reflect human valuation, will machines, too, develop an organismal culture of their own? Would machine organismal culture resemble or mimic the development of culture by advanced animal life? The adaptation of human society, ethics, and culture to the advancement of high technologies should be examined prior to the exponential climb of synthetic intelligence to computational superiority. Also, the limitation or regulation of synthetic intelligences should be evaluated and deemed either acceptable or unjust. These issues could prove to be among the most difficult and most important questions that human society as a whole has ever had to answer.
Machine Ethics
As we hurtle into the second decade of the twenty-first century, we're faced with many new, daunting challenges, especially in the field of technology. For more than sixty years we've dreamed of building robots that could perform our bidding, analyze and calculate information at blinding speeds, and carry out tasks we're unable to do ourselves. But as we look at the developing technologies of today, we begin to realize that this reality is not far away, and it is a reality we may not be ready for yet. Many of the issues that will arise out of such technological creations can and need to be addressed today, issues like robots' rights, duties, and powers. This paper aims to break down the issue of machine ethics, and the idea that one day machines will possess free will and be able to make the conscious choices we're able to make.
Implicit Ethics vs. Explicit Ethics
James Moor, in his article The Nature, Importance, and Difficulty of Machine Ethics (2006), addressed the division between implicit and explicit choices of machines in their ethical systems. Implicit ethics dictates that the ethical agent (in this case, the machine) is constrained to follow ethical choices or actions, and could not even consider doing otherwise. This could be accomplished in several ways, from writing code that encodes ethical behavior to simply programming the machine so that it can only behave in an ethical manner. We may not realize it, but many of the machines we interact with on a daily basis are implicit ethical agents. Moor states, "Computers are implicit ethical agents when the machine's construction addresses safety or critical reliability concerns".
Moor explains that we rely on machines like automated tellers or global positioning systems to perform their duties accurately and ethically. For example, an ATM will process transactions and dispense the correct amount of money during a withdrawal, and not keep the money for itself. This may seem obvious to us, but the machine is following its embedded programming to perform the correct ethical action. Likewise, a GPS will navigate you to a desired destination to the best of its abilities. Of course, as many of us know, they do not always fulfill their duties in the most accurate or time-effective manner, but their intentions are set to complete their task in the most accurate fashion, not to mislead or misguide you. Moor offers an even weightier example, in which the pilot of a commercial airliner relies on an automated pilot program to fly the airplane, making adjustments to the plane's speed, heading, and altitude while constantly monitoring various aspects of the plane's condition, such as fuel. We trust the autopilot to correctly fly the plane to our destination while avoiding crashes, midair collisions, or any other possible accidents.
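To make the idea of an implicit ethical agent concrete, the minimal sketch below shows how a withdrawal routine might embed its "ethics" directly in its construction: the machine dispenses exactly what was requested and cannot overdraw an account, not because it weighs the choice, but because no other behavior is reachable in the code. The class, account identifiers, and amounts are invented for illustration and are not drawn from Moor's article.

```python
# A hypothetical sketch of an "implicit ethical agent": the ethical behavior
# (dispense exactly what was requested, never more than the balance allows)
# is built into the construction of the code, so the machine cannot even
# "consider" acting otherwise.

class ATM:
    def __init__(self, account_balances):
        # account_balances maps an account ID to its current balance
        self.balances = dict(account_balances)

    def withdraw(self, account_id, amount):
        """Dispense exactly `amount` if the account can cover it."""
        if amount <= 0:
            raise ValueError("Withdrawal amount must be positive.")
        balance = self.balances.get(account_id, 0)
        if amount > balance:
            # The constraint is structural: overdrawing is simply not a
            # reachable state, so no "choice" to act unethically exists.
            raise ValueError("Insufficient funds.")
        self.balances[account_id] = balance - amount
        return amount  # the machine dispenses exactly what was requested


atm = ATM({"acct-001": 500})
print(atm.withdraw("acct-001", 120))  # -> 120; the balance is now 380
```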
These examples are very basic ones; in the future we will face more complicated, in-depth scenarios in which machines will be put in positions of making more serious ethical decisions. In the future, robotic systems designed by the military could perform reconnaissance missions, or scout dangerous or hostile environments. Regarding these possibilities in technology, Chris Carroll writes,
Future military robots endowed with ethical programs might be able to decide on their own when, and at whom, to shoot. In the tumult of battle, robots wouldn't be affected by volatile emotions. Consequently, they'd be less likely to make mistakes under fire, [Ronald Arkin, author of Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture (2005)] believes, and less likely to strike at noncombatants. In short they might make better ethical decisions than people.
In Arkin's system a robot trying to determine whether or not to fire would be controlled by an "ethical governor" built into its software. When a robot locks onto a target, the governor would check a set of preprogrammed constraints based on the rules of engagement and the laws of war. An enemy tank in a large field, for instance, would quite likely get the go-ahead; a funeral at a cemetery attended by armed enemy combatants would be off-limits as a violation of the rules of engagement. (Carroll, 2011, pp. 82-84)
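As a rough illustration of the kind of pre-engagement check Carroll describes, the sketch below encodes a handful of constraints and suppresses engagement if any of them is violated. This is not Arkin's actual architecture; the constraint names and target attributes are hypothetical stand-ins for rules of engagement and laws of war.

```python
# A hypothetical sketch of an "ethical governor" style constraint check.
# Each rule returns True when engagement would be forbidden; the governor
# permits engagement only if no rule fires. The field names are invented.

FORBIDDEN_IF = [
    lambda t: not t["positively_identified_combatant"],   # unknown identity
    lambda t: t["protected_site"],                         # cemetery, hospital, etc.
    lambda t: t["estimated_noncombatants_nearby"] > 0,     # collateral risk
]

def ethical_governor(target):
    """Return True only if no preprogrammed constraint is violated."""
    return not any(rule(target) for rule in FORBIDDEN_IF)

tank_in_field = {
    "positively_identified_combatant": True,
    "protected_site": False,
    "estimated_noncombatants_nearby": 0,
}
funeral_at_cemetery = {
    "positively_identified_combatant": True,
    "protected_site": True,
    "estimated_noncombatants_nearby": 40,
}

print(ethical_governor(tank_in_field))        # True  -> engagement permitted
print(ethical_governor(funeral_at_cemetery))  # False -> engagement suppressed
```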
Of course these machines do not have a choice to make an unethical decision, so they are not free ethical agents. This is the aim of explicit ethics: to create a machine that analyzes a scenario and is free to make decisions based on its own standard of ethics, which has not been specifically programmed into it. Currently, this is a purely hypothetical situation, as we do not yet have artificial intelligence sophisticated enough to mimic the depth and complexity of the human brain. However, there is some debate about whether machines will ever be considered full ethical agents. Moor addresses one argument,
[T]hat only full ethical agents can be ethical agents. To argue this is to regard the other senses of machine ethics as not really ethics involving agents. However, although these other senses are weaker, they can be useful in identifying more limited ethical agents. To ignore the ethical component of ethical-impact agents, implicit ethical agents, and explicit ethical agents is to ignore an important aspect of machine [ethics]. What might bother some is that the ethics of the lesser ethical agents is derived from their human developers. However, this doesn't mean that you can't evaluate machines as ethical agents. Chess programs receive their chess knowledge and abilities from humans. Still, we regard them as chess players. (Moor, 2006)
What this argument illustrates is that machines, such as the ATM, GPS, or autopilot mentioned earlier, are not truly ethical agents because they are not full ethical agents. An explicit ethical machine would break through such boundaries, capable of not only deciding its own ethical choices, but also validating them against a legitimate ethical system. But how, or from where, does an advanced intelligence develop a system of ethics?
Morality of Machines
One of the earliest structures of machine ethics we can find is Isaac Asimov's Three Laws of Robotics. The Laws are as follows,
- A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (Asimov, 1942)
Asimov lays out a very general set of laws that, for the most part, are free of loopholes or contradictions, while still allowing for robots to exist independently or subserviently, fulfilling whatever their means or aspirations may be. They may allow for some level of independent ethical values to begin to form, but there will always be the Laws of Robotics that the robots must adhere to. In many ways, this is similar to a human ethical system known as Divine Command Ethics, a Christian theological ethics system that dictates that the agent follow the biblical laws and God's commands. Spohn (1995) explains, "To the basic moral question 'What ought I do?' there is a direct answer: You should listen to God's command and obey it without question or reservation" (p. 21). While it should be stated that these systems are only similar at the most basic level, another distinction should also be made clear.
Although a robot may follow all of the Laws of Robotics, is it really a full ethical agent? It certainly did not choose to abide by the laws, at least as Asimov writes of it. The laws are simply hardwired into the robots' programming, not the conscious decisions of a sentient being. While the nature of Divine Command Ethics requires practitioners to follow laws and commands without reservation, they have still elected to follow that ethics system; they have the choice to obey or not to obey. Asimov's robots do not have this choice. So if a robot programmed with ethical laws that it must obey is not a full ethical being, are there other ways of achieving this goal?
Michael Anderson and Susan Leigh Anderson, in their article Machine Ethics: Creating an Ethical Intelligent Agent (2011), suggest that robots could potentially compute ethics using an algorithm. They propose a "utilitarian" system, a teleological ethics theory where "rightness and wrongness of actions is determined entirely by the consequences of the actions" (Anderson and Anderson, 2011). Utilitarian ethicists answer the basic moral question "What ought I do?" by deciding which outcome does the most good for the most people.
Anderson and Anderson (2011) lay out a very detailed algorithm, and approach the obstacle of understanding how to calculate "good" versus "bad" by examining a subset of utilitarianism called "hedonistic act utilitarianism," which "would have us consider the pleasure and displeasure that those affected by each possible action are likely to receive" (p. 9). Already we begin to see that this proposed algorithm is not a simple one, but rather would take the form of a very long, complex body of code and composite databases. A machine would not only need to run calculations to assess the "pleasure and displeasure" of those affected by its choices, but also run the outcomes against a database of understood reactions and consequences. Anderson and Anderson go on to explain how this would be conducted,
With the requisite information, a machine could be developed that is just as able to follow the theory as a human being.
Hedonistic act utilitarianism can be implemented in a straightforward manner. The algorithm is to compute the best action, that which derives the greatest net pleasure, from all alternative actions. It requires as input the number of people affected and, for each person, the intensity of the pleasure or displeasure (for example, on a scale of 2 to -2), the duration of the pleasure or displeasure (for example, in days), and the probability that this pleasure or displeasure will occur, for each possible action. For each person, the algorithm computes the product of the intensity, the duration, and the probability, to obtain the net pleasure for that person. It then adds the individual net pleasures to obtain the total net pleasure:
Total net pleasure = Σ (intensity × duration × probability) for each affected individual.
This computation would be performed for each alternative action. The action with the highest total net pleasure is the right action. (Anderson and Anderson, pp. 10)
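A minimal sketch of the computation described in the passage above might look like the following. The action names, intensities, durations, and probabilities are invented for illustration; in practice, producing those estimates reliably is the hard part, not summing them.

```python
# A minimal sketch of hedonistic act utilitarian calculation as described by
# Anderson and Anderson: for each alternative action, sum
# intensity x duration x probability over every affected person, then pick
# the action with the highest total net pleasure.

def total_net_pleasure(affected):
    """Sum intensity x duration x probability over every affected person."""
    return sum(p["intensity"] * p["duration"] * p["probability"] for p in affected)

def best_action(alternatives):
    """Return the action with the highest total net pleasure."""
    return max(alternatives, key=lambda name: total_net_pleasure(alternatives[name]))

alternatives = {
    # intensity on a scale of 2 to -2, duration in days, probability in [0, 1]
    "keep a promise":  [{"intensity": 1,  "duration": 3, "probability": 0.9},
                        {"intensity": 1,  "duration": 1, "probability": 0.8}],
    "break a promise": [{"intensity": 2,  "duration": 1, "probability": 0.5},
                        {"intensity": -2, "duration": 5, "probability": 0.9}],
}

print(best_action(alternatives))  # -> "keep a promise" (3.5 vs. -8.0)
```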
We begin crossing into a grey area with this approach: a robot would be able to assess and calculate scenarios at much faster speeds than even our own instincts, and perform the action that it decides is most ethically right. This is not any different from how we act in a scenario where we have a complex set of choices: we decide which outcome would be best, and try to conform to our ethical principles. And, just as we are apt to make mistakes and misperceptions of outcomes, the machines will also be prone to this, although significantly less often.
We must be hesitant about jumping to give such machines full ethical agent credentials, though, as it will depend entirely on the manner in which the algorithm is used. Even though a machine can make the most ethical choice from many options, if its programming requires it to perform the most ethical choice, the machine is not truly a full ethical agent. A full ethical agent must possess the choice of inaction, or of not pursuing the best ethical option, even if it knows what the right and wrong choices are. However, if the programmed algorithm is simply given to machines as a tool for them to assess ethical choices, and they retain the ability to choose inaction or the unethical choice, then they could be considered full ethical agents.
Another way to approach creating a machine that is a full ethical agent is to create an intelligence that is capable of learning, one that is constructed with no preconceived notions of right or wrong, ethics or behavior, like a newborn child. This intelligence would observe the world that we live in and create its own sense of self and mind from this. But creating intelligence that can learn in the way humans can has proved to be a daunting task, as Michio Kaku writes in his book Physics of the Future (2011), recounting a visit he took to New York University to observe the LAGR (learning applied to ground robots) system,
LAGR is an example of the bottom up approach: it has to learn everything from scratch, by bumping into things. It is the size of a small golf cart and has two stereo color cameras that scan the landscape, identifying objects in its path. It then moves among these objects, carefully avoiding them, and learns with each pass. It is equipped with GPS and has two infrared sensors that can detect objects in front of it. It contains three high-power Pentium chips and is connected to a gigabit Ethernet network. We went to a nearby park, where [the] LAGR robot could roam around various obstacles in its path. Every time it went over the course, it got better.
[…] Every time LAGR bumps into something, it moves around the object and learns to avoid that object the next time. […] LAGR has hardly any images in its memory but instead creates a mental map of all the obstacles it meets, and constantly refines that map with each pass. Unlike the driverless car, which is programmed and follows a route set previously by GPS, LAGR moves all by itself, without any instructions from a human. (Kaku, pp. 76)
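As a rough illustration of the "learn by bumping" loop described in the passage above, the sketch below keeps no images, only a growing record of places where a collision occurred, which later passes consult. This is not LAGR's actual software; the grid-cell representation and names are hypothetical.

```python
# A hypothetical illustration of map refinement through collisions: the robot
# remembers each grid cell where it has bumped into something and treats only
# those cells as blocked on subsequent passes.

class ObstacleMemory:
    def __init__(self):
        self.blocked = set()  # grid cells where a collision has occurred

    def record_bump(self, cell):
        """Remember that this cell contains an obstacle."""
        self.blocked.add(cell)

    def is_safe(self, cell):
        """A cell is presumed open until a bump has been recorded there."""
        return cell not in self.blocked


memory = ObstacleMemory()
memory.record_bump((4, 7))     # first pass: collide with something at (4, 7)
print(memory.is_safe((4, 7)))  # False -> route around it on the next pass
print(memory.is_safe((4, 8)))  # True  -> still presumed open
```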
Even though this is an incredible start down the path towards artificial intelligence learning the way humans can, Kaku reminds us that, "Even cockroaches can identify objects and learn to go around them." But even if robotic intelligence were capable of learning to the extent of human capacity, and rivaled our complexity and depth of thought, what lessons would influence their ethical basis? From where would they derive their ethical systems?
Our ethics come from many places: our families, our friends, our communities, our religions, our traditions and our history. But a machine would not have these things, or at least not feel as connected to them as we do. Would machines serve to reinforce our ethical norms, or would they help us to refine them? Or, would they branch away from our ethical systems, creating robot specific morals of their own?
These questions are important to understand for several reasons. First, machine ethics could prove incredibly useful in understanding our own ethical structures, how we arrive at them and how we utilize them. They could help us to better understand the differences and similarities between our ethical theories, and potentially let us observe the creation of a new ethical system. Second, these distinctions can help us to define what it means to be human. As technology advances, in the fields of artificial intelligence, biotechnology and nanotechnology, the lines between man and machine will begin to blur, until one day it may be unclear what separates the two.
Works Cited
Anderson, Michael, & Anderson, Susan Leigh (2011). Machine Ethics: Creating an Ethical Intelligent Agent. AI Magazine.
Asimov, Isaac (1942). Runaround. New York, NY: Street & Smith.
Carroll, Chris (2011, August). Us. And Them. National Geographic, 220(2), 66-85.
Davies, P. (2005). A Quantum Recipe for Life. Nature, 437(7067), 819.
Effland, R. (1998). The Cultural Evolution of Civilizations. Accessed on 1 December 2011 from http://www.mesacc.edu/dept/d10/asb/anthro2003/glues/model_complex.html
Emery, Nathan J. (2006). Cognitive Ornithology: The Evolution of Avian Intelligence. Philosophical Transactions of the Royal Society B, 361, 23-43.
Goodman, M., Tagle, D., Fitch, D., Czelusniak, J., Koop, B., Benson, P., & Slightom, J. (1990). Primate Evolution at the DNA Level and a Classification of Hominoids. Journal of Molecular Evolution, 30(3), 260-266.
Kaku, Michio (2011). Physics of the Future. New York, NY: Random House.
Keyes, R. (2006, September). The Impact of Moore's Law. IEEE Solid-State Circuits Society Newsletter. Accessed on 1 December 2011 from http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?reload=true&arnumber=4785857
King, M., & Wilson, A. (1975). Evolution at Two Levels in Humans and Chimpanzees. Science, 188, 107-116.
McGrew, W.C. (1998). Culture in Nonhuman Primates? Annual Review of Anthropology, 27, 323.
Moor, James H. (2006). The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems, IEEE Computer Society.
Saper, Craig J. (1997). Artificial Mythologies: A Guide to Cultural Invention. Minneapolis, MN: University of Minnesota Press.
Spohn, William C. (1995). What Are They Saying About Scripture and Ethics? Mahwah, NJ: Paulist Press.
Tomasello, Michael (1999). The Human Adaptation for Culture. Annual Review of Anthropology, 28, 511.
Voegelin, C.F. (1951). Culture, Language, and the Human Organism. Southwestern Journal of Anthropology, 7, 370.
Afterword (2026)
Revisiting this paper more than a decade later, what stands out to me is not any specific prediction, but the shape of the questions being asked. I was less interested in what technology would do next than in how societies respond when change arrives faster than our norms, institutions, and shared understanding can adjust.
The technical landscape has evolved since this was written. Artificial intelligence has advanced in ways that are more uneven and constrained than I imagined, shaped as much by markets, incentives, and governance as by research alone. At the same time, the ethical tension identified here has become more concrete. Systems are now deployed widely before their social consequences are fully understood, and questions once framed as theoretical are now practical and immediate.
If I were writing this today, I would place greater emphasis on institutions and power. Ethics does not emerge in isolation, and technology does not shape society on its own. It is mediated through organizations, economic pressure, regulation, and human behavior, all of which influence outcomes as much as philosophical models of agency or morality.
What has not changed is my belief that careful thinking matters. Questions of responsibility, alignment, and long term consequences are not obstacles to progress. They are prerequisites for progress that lasts. This early work continues to inform how I approach complex systems, governance, and decision making in my professional and civic life today.