By Carson Weitnauer
Prominent atheist Richard Dawkins says emphatically, “What are all of us but self-reproducing robots? We have been put together by our genes and what we do is roam the world looking for a way to sustain ourselves and ultimately produce another robot child.”
Sam Harris, a leading atheist public intellectual, has provided the argument for this understanding. Here's what Sam Harris wrote on his blog: "You seem to be an agent acting of your own free will. The problem, however, is that this point of view cannot be reconciled with what we know about the human brain. All of our behaviour can be traced to biological events about which we have no conscious knowledge: this has always suggested that free will is an illusion."
What's important here is the argument; this isn't an appeal to authority, as if "what Sam Harris says about atheism must be true." Instead, we're looking to understand his logic and reasoning.
Harris is saying that the evidence from neuroscience is that our brains control us, which eliminates the idea that "we" control our brains.
So, if atheism is true, then we need to re-imagine how we understand human beings.
Atheism requires us to stop thinking of humans as independent, free actors who make their own choices, who decide to love people, and who take courageous stands for noble causes. Instead, we need to think about human beings in pretty much the same way that we think about robots.
How do we understand robots? Here's a definition from Merriam-Webster: "A machine that looks like a human being and performs various complex acts (as walking or talking) of a human being."
From Wikipedia: "A robot is a mechanical contraption which can perform tasks on its own, or with guidance. In practice a robot is usually an electro-mechanical machine which is guided by computer and electronic programming."
Why, if atheism is true, should we think that humans are like robots? Think through this:
For humans, all of our behaviour is caused by biological events of which we have no knowledge and which we have no ability to change.
For robots, all of their behaviour is caused by software and hardware events of which they have no knowledge and which they have no ability to change.
Sam Harris is quite clear on this point. For instance, he considers the argument that quantum mechanics implies that there is indeterminacy in our brains and this means there might be some kind of free will. Here's his devastating response:
"The indeterminacy specific to quantum mechanics offers no foothold. Even if our brains were quantum computers, the brains of chimps, dogs, and mice would be quantum computers as well. (I don't know of anyone who believes that these animals have free will.)"
When it comes to free will, what's the difference between chimps, dogs, mice, humans, and robots? The only difference is the kind of physical cause. For organisms, their choices are determined by a biological cause like DNA; for robots, their choices are determined by a digital cause like software. But the "thoughts" and actions of both humans and robots are determined by causes outside of their control. Therefore, when it comes to what determines our thoughts, desires, and actions, we are very similar to robots.
What are the implications of this? Let's consider what we know about robots.
Does a robot have free will? No. Robots act in strict accordance with the software programs embedded in their hardware and within the limits of their mechanical capacity.
Can a robot act morally? No. Robots just do what they are programmed to do. A robot might use a gun to kill an innocent person, but we wouldn't blame the robot. We might destroy the robot to keep other people safe, but the blame belongs to the person who designed the robot and its software.
Does a robot have hope? No. Any individual robot's existence is trivial, temporary, and insignificant in the grand scheme of things. It is hard to even know what it would mean for a robot to have hope.
Can a robot love? No. A robot has no soul. If a robot acts in a beneficial way towards a human, this was not the robot's choice, and wasn't prompted by "love", but was predetermined by the robot's software. We can't credit the robot for the good action, since the robot didn't choose to do it and couldn't have done otherwise.
Can a robot reason? No. It can run through various algorithms to arrive at the right answer to a question. This may enable it to play chess or Jeopardy better than humans can. But no thinking originates from the robot itself; the robot is only running the software program that its creators developed. Nor does the robot come to its conclusions by carefully weighing various reasons and choosing the most rational idea; instead, it deterministically acts upon whichever algorithm its program is designed to select in that particular circumstance.
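The deterministic selection described above can be pictured with a short, purely hypothetical sketch (the function name and rules are invented for illustration): a rule-based "robot" whose action is a fixed function of its input, so that given the same situation it always produces the same output and could not have done otherwise.

```python
# Hypothetical, illustrative sketch: a "robot" whose every action is fixed
# in advance by its program. It never deliberates or chooses; it only maps
# inputs to outputs according to rules it did not write and cannot revise.

def robot_policy(situation):
    """Deterministically map a situation to an action via hard-coded rules."""
    rules = {
        "obstacle ahead": "turn left",
        "battery low": "return to charger",
        "task pending": "execute task",
    }
    # Unrecognized situations fall through to a fixed default action.
    return rules.get(situation, "idle")

# The same input always yields the same action -- no deliberation occurs.
print(robot_policy("battery low"))   # return to charger
print(robot_policy("battery low"))   # return to charger (identical, always)
```

However sophisticated the rule table becomes, nothing in the program's execution corresponds to weighing reasons: the output is settled entirely by the input and the stored rules.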
Does a robot have purpose? At first glance, it might seem like they do. After all, the designer of a robot creates the robot for a certain purpose, and people make and buy robots in order to accomplish certain goals. There is a reason and a goal to their existence.
But would humans, understood as robots, have purpose? No. We are an accidental byproduct of the cosmos. It just so happened that our earth was conducive to life, that life began on this planet, and that random mutation and natural selection led to our existence. But there is no plan behind this and no rationale to our existence; there is simply no purpose for our lives. There is no goal for our lives, besides perhaps the propagation of our genes, and our genes don't intend even that. In addition, whatever purpose we choose to adopt for our lives is entirely arbitrary. There is no transcendent standard by which to measure various "purposes" for human existence as more or less worthy.
Therefore, the causal chain leading up to the creation of robots is itself a purposeless one. The DNA script automates the human actions, which cause the design and assembly of the robots, and then the software automates the robots' actions. But at no point in the chain did a person choose to build a robot; the forces of nature compelled a human organism to build one. And all of the purposes for which humans build robots are themselves arbitrary. So our lack of purpose undermines even the apparent purpose of robots.
So, to conclude: robots have no free will, no moral ability, no hope, no love, no rational capacity, and no purpose. Since, if atheism is true, we are like robots in the relevant way (our thoughts, desires and actions are just determined by DNA instead of by software), humans also lack free will, moral ability, hope, love, the ability to reason, and have no purpose.
Therefore, if atheism is true, then the rational conclusion is that humans are very much like robots. This is a profoundly cheerless and dreary perspective on humans.
To the degree, then, that you have reason to think that humans are more than loveless, hopeless, purposeless robots, you have a reason to believe that atheism is false.
Courtesy Reasons for God, www.ReasonsForGod.org