
What Does it Mean to “Upload My Intelligence into a Computer”?

The legend or rumor says that some very wealthy businessmen in Silicon Valley hope to live forever by means of technology.  The idea is that a very old man will die, but before his death he will have constructed a robot that is sufficiently like him that he will not die.  The robot will be him.

To see the problems here, consider that the simplest way to produce a being that is biologically identical with you is cloning.  If you believe you are identical with a clone, and you have the ability to create a much younger clone of yourself at the moment of your death, then you have achieved your goal.

But now consider further (I promise this will be one of the last things to consider) that an identical twin is a clone.  So if your mother used IVF technology and created more than one frozen embryo, and one of them was your identical twin, you could at the moment of your death cause that embryo to be brought to term.  Would that mean you were still alive?  No, it would mean that you were dead and now had an identical twin.

What if you took steps to ensure that that identical twin was raised in an environment where it saw the world the way you did, had your beliefs, and answered to your name?  Would you still be alive?  Another way of asking the question: if you were very old, would you sacrifice your life to bring about this state of affairs?  Set aside the fact that this twin would have a very different history than you did (you were raised by humans and allowed to form your beliefs more or less freely, while the clone would be the victim of a brainwashing program intent on making him like you).  Would its life compensate you for your death?

Somebody might think that.  Somebody might take all his money and resources and time away from his family and community and devote them to creating the identical twin who answers to his name and votes the same way he does.  It’s hard to prove people wrong when they do things to save their lives.  We consider “preserve yourself” to be a rock-bottom axiom of practical reasoning.  Beyond “it’s good for me” (or perhaps “it’s good for us”), how do you persuade?

That means you could not prove such a person wrong, but it also means you could not prove such a person right either.  He would have no reason for believing the future twin was worth spending money on.  The hypothetical Silicon Valley businessman who sacrificed his own concerns could not, I think, be shown to be inconsistent, but neither could the person who chose to commit the resources not to creating the brainwashed twin but to a local hospital with his name on it.

This is analogous to those of us who make sacrifices for a group.  During the nuclear arms race the philosopher Bertrand Russell argued “better red than dead.”  In other words, he thought it would be better for the US to stop being the US than to risk global catastrophe.  Most Americans (or at least politically powerful Americans) disagreed.  Yet it was a hard question to argue because, as in the case of uploading, it’s hard to argue people out of an identity.

If somebody says he will risk a nuclear war that destroys the human race in order to preserve the United States because “I’ll do what it takes to keep us alive,” you might think that was rational.  What if the United States he preserved had no humans in it and consisted of just a bunch of intelligent chimps who wave American flags?  It would seem pretty strange that this hypothetical patriot was willing to risk global annihilation to keep America alive if America wouldn’t even have any human beings in it, but how could you prove such a person wrong?  He considers the future flag-waving chimps part of his “we” and a Russian not part of his “we.”  I think you could induce him through love and friendship to see the world differently.

What counts as “me” and what counts as “good” are questions that we don’t answer separately.  We answer them together. What I desire for me depends upon what I care about enough to consider as “me”.  And vice versa.  The “good”, “me” and “what is to be desired” seem to connect on a very deep level.

So for example a person can kill himself because he thinks he’s no good.  A person can hate himself.  And yet — he does think he is himself.

It’s schizophrenic that our civilization simultaneously holds that

a) the self is an illusion, and

b) the most rational people care most about themselves.

You can’t have both.

It makes you wonder whether the problem with bad people is that they love themselves too much or not enough.


16 thoughts on “What Does it Mean to “Upload My Intelligence into a Computer”?”

  1. N.S. Palmer says:

    A lot depends on what consciousness is. If it’s just an epiphenomenon of certain configurations of matter and energy, then an appropriately designed and programmed computer could “be” any one of us. Likewise, a clone with the same memories and learned skills as the original person would be the same person, and they’d have to fight a duel to avoid falsifying the principle of identity of indiscernibles. 🙂

    No sane person wants nuclear war or any comparable disaster, but there’s more to human life than simply having a pulse. The adage that it’s better to die on your feet than to live on your knees resonates with something in human nature. Mine, anyway.

    Arguments for psychological egoism can become definitional: if you do something to benefit someone else, then somehow, it was really for your own benefit. Some people are like that, but it seems to me that most of us vacillate between egoism and altruism. One of the funniest quirks of Ayn Rand’s novels is that before her protagonists do anything to benefit others, they have to give a five-page speech about how it’s really in their own self-interest. She doesn’t want you to think they’re doing anything altruistic.

    “The most rational people care most about themselves?” I really don’t know. Depends on how you define rational, and then the rest becomes an empirical question to which I don’t have the answer.

      • N.S. Palmer says:

        If we specify a very limited definition of consciousness, we can investigate that, such as B.F. Skinner’s well-argued but ultimately risible contention that consciousness was just behavior and thought was sub-vocalization. I believe there are limits to what we can say about consciousness, but I don’t deny its existence or try to reduce it to behavior or brain states. The motivation for such reductionism often seems to stem from a desire for something we can study, measure, and quantify, even if we can’t.

        As for investigating consciousness per se, it would take someone a lot smarter than me to figure out how to do that. You recently mentioned something about Kabbalah and reshimu. I read that article and couldn’t make any sense out of it beyond that it seemed like impressionistic poetry about ineffable and transcendent realities.

        At the same time, I saw an article in the current “Communications of the ACM” about “created computed universes” (CCUs), and plan to write something comparing the two approaches. You know much more about Kabbalah than I do, so please correct me if I’m wrong. But I think the Kabbalists are closer to the truth because they *know* they’re doing poetry, while the computer scientists who talk about CCUs think they’re doing some kind of science. If you’re interested in it, since you seem to cover a wide range, it’s June 2015, “Viewpoint: Created Computed Universe” by omg what a name, Yannis Papakonstantinou. Try pronouncing *that* when you’re drunk.

    • “One of the funniest quirks of Ayn Rand’s novels is that before her protagonists do anything to benefit others, they have to give a five-page speech about how it’s really in their own self-interest. She doesn’t want you to think they’re doing anything altruistic.”

      I dunno, don’t we say “Give a man a fish, he eats for a day. Teach a man to fish and he eats for the rest of his life”?

      Isn’t that one more sentence than you need in order to be altruistic? And yet…doesn’t the second sentence make sense?

  2. Depends – some people go on about consciousness and ‘self’ and…you dig long enough, you find they are essentially hitting the supernatural. It’s the last place people can talk about ghosts and be taken seriously.

    In that regard, the ‘self’ being an illusion isn’t a contradiction.

    And now, a story! Though since Eric gave the link for this blog (and that’s how I got here to bother you – blame him!), maybe you’ve already read it: http://schwitzsplinters.blogspot.com.au/2013/08/last-janitor-of-divine.html

  3. Like Wittgenstein and Ryle, the latter’s name I add reluctantly, I concur with the body being the picture. And very much with the face being the soul of the body.

    I just wanted to comment on Noah’s statement here: “Likewise, a clone with the same memories and learned skills as the original person would be the same person”

    I think the grammar of the word ‘same’ here has the propensity to mislead. Compare ‘Jeff has the *same* pencil case as me’ with ‘That’s the *same* man who jogged past my window yesterday’. The clone is more analogous with the former than the latter.

  4. The “same old story, a fight for love and glory …” rings so true. Yes, I agree that the “physical” and “emotional” experiences we have are an expression of our innate and individual consciousness … each consciousness being a subset of the consciousness of this physical universe and its pre-physical archetypes (and more).

    Now being human includes having a conscious mind which gives us the gift of CHOICE. And the choice (as well as the occasional circumstance) can produce an “evolution” of that person’s consciousness, which results in a permanent change in that person’s instinctive reactions to life stimuli. And, I expect that one person’s evolution of consciousness will have impact on others … and on the universe’s consciousness.

    The only hitch in “choosing” to evolve consciousness is that such a change requires going into the mystery and the feeling of the ‘stuck emotion’ – which holds the design of consciousness algorithms in place. Dark-Night-Of-The-Soul experiences can sometimes produce transformation of consciousness — but they are not much fun. But still very doable. Just trying to “change your thinking” might produce different focal points of attention, but it does not transform. It helps one to cope (better?) with one’s innate beingness.

    So — to make a change in the “same old story” — evolution of personal consciousness (which expresses itself in the physical and emotional) will make that possible. And it IS happening.

    • I agree — joy, being, and awareness are 3 sides of the same coin! If you’re not feeling joy it’s cause you’re not really there. If you’re not aware of what’s going on, use joy as your guide. Grab one end of the string and the other two follow!

  5. Mikey says:

    This seems like an important thing to me. I heard once that the Dalai Lama said “I am a human first, a Buddhist second and a Tibetan third.” Except I can’t remember the exact quote and I’m pretty sure it wasn’t ever said by him. I think that’s a good line for a creed from any religion. “I’m a human first”. I’ve just checked the Humanist Manifesto Wiki page and it doesn’t seem to mention identity, but they’d be a good first group to adopt “I am a human” as a credal statement.
