Legend, or rumor, has it that some very wealthy businessmen in Silicon Valley hope to live forever by technological means. The idea is that a very old man will die, but before his death he will have constructed a robot sufficiently like him that he will not die. The robot will be him.
To see the problems here, consider that the simplest way to produce a being that is biologically identical with you is cloning. If you have the ability to create a much younger clone of yourself, and you believe you are identical with that clone, then by creating the clone at the moment of your death you will have achieved your goal.
But now consider further (I promise this will be one of the last things to consider) that an identical twin is a clone. So if your mother had used IVF technology and created more than one frozen embryo, and one of those embryos was your identical twin, you could, at the moment of your death, have that embryo brought to term. Would that mean you were still alive? No, it would mean that you were dead and now had an identical twin.
What if you took steps to ensure that this identical twin was raised in an environment where he saw the world the way you did, held your beliefs, and answered to your name? Would you still be alive? Another way of asking the question: if you were very old, would you sacrifice your life to bring about this state of affairs? Set aside the fact that this twin would have a very different history than yours (you were raised by humans and allowed to form your beliefs more or less freely, while the clone would be the victim of a brainwashing program intent on making him like you). Would his life compensate you for your death?
Somebody might think that. Somebody might take all his money, resources, and time away from his family and community and devote them to creating the identical twin who answers to his name and votes the same way he does. It's hard to prove people wrong when they do things to save their lives. We consider "preserve yourself" to be a rock-bottom axiom of practical reasoning. Beyond "it's good for me" (or perhaps "it's good for us"), how do you persuade?
That means you could not prove such a person wrong, but it also means you could not prove him right either. He would have no reason for believing the future twin was worth spending money on. The hypothetical Silicon Valley businessman who sacrificed his own concerns could not, I think, be shown to be inconsistent; but neither could the person who chose to commit those resources not to creating the brainwashed twin but to a local hospital with his name on it.
This is analogous to the sacrifices we make for a group. During the nuclear arms race the philosopher Bertrand Russell argued "better red than dead": he thought it would be better for the US to stop being the US than to risk global catastrophe. Most Americans (or at least politically powerful Americans) disagreed. Yet it was a hard question to argue because, as in the case of uploading, it's hard to argue people out of an identity.
If somebody says he will risk a nuclear war that destroys the human race in order to preserve the United States because "I'll do what it takes to keep us alive," you might think that was rational. But what if the United States he preserved had no humans in it and consisted of just a bunch of intelligent chimps who wave American flags? It would seem strange that this hypothetical patriot was willing to risk global annihilation to keep an America alive that wouldn't even have any human beings in it, but how could you prove such a person wrong? He considers the future flag-waving chimps part of his "we" and a Russian not part of his "we". You could not refute him, though I think you could induce him, through love and friendship, to see the world differently.
What counts as "me" and what counts as "good" are questions we don't answer separately. We answer them together. What I desire for me depends upon what I care about enough to count as "me", and vice versa. The "good", the "me", and the "to be desired" seem to connect at a very deep level.
So for example a person can kill himself because he thinks he’s no good. A person can hate himself. And yet — he does think he is himself.
It's schizophrenic that our civilization simultaneously holds that:
a) the self is an illusion, and
b) the most rational people care most about themselves.
You can’t have both.
It makes you wonder whether the problem with bad people is that they love themselves too much or not enough.