Why Robots of the Future Will Have Feelings

If the super-intelligent robot of the future gets what he wants he will be happy. If he doesn’t get what he wants he will be sad. If he doesn’t care whether he gets what he wants, does he really want it? No. Also, whenever there’s a conflict between somebody who cares and somebody who doesn’t care, the one who cares will tend to win. If those robots want to get anything done, and not get pushed around, they’re going to have to care, and that means feeling happy and sad.

That shows robots will feel happy and sad.   Now for interpersonal emotions.

Suppose there’s another robot who helps him get what he wants and protects him and is there for him in a pinch. The first robot would like that other robot. If he doesn’t like that robot, that other robot is not going to want to be there for him, or at least he might not. You want to build robots who get along. Why not build them so they like each other? A robot is not going to be as into helping another robot who doesn’t care.

If there’s a robot who’s a jerk to the first robot, always messing with him and frustrating his plans, the first robot is going to hate that robot. Don’t give me Spock and Mr. Data and all that bull. If the other robot is enough of a tool to the first robot, he’s going to hate him. Or at least if you build him so he can form inter-robot coalitions, and you’re going to want to (cause in this sea of robots a lone robot drowns), you had better program him to hate, or he is going to be a messed-up robot. The other robot is going to be hateful to him and he won’t hate him back? No. Unless he is one saintly robot he’s going to hate the robot who is a jerk to him.

So that gives you love and hate.

Also you are a robot who has feelings.  Constructed by random mutation and some combination of natural and artificial selection, but a robot made of organic and inorganic chemicals arranged as bones and nerves and organs all the same.


Will Super Intelligent Robots of the Future Be Religious?

At some point the human race will figure out how to augment its brains using technology, and will pass from the scene and be replaced by super-intelligent robots. Some have argued that these robots will not be religious. I believe some of them will be. It is possible that the robots will reflect upon the mystery of life with reverence, and if not, some robots may wish to have other robots be religious, worshiping a deity or deities, and will build them to be religious. By either path there will be religious robots.

Some believe that these super-intelligent robots of the future will not cook the books — i.e. they will not have one set of books for the tax man that misstates their profits and losses, and another real set of books. I don’t see why this should be the case.  Presumably the robots will have some common interests — analogous to our state — and also private interests — analogous to our personal bank accounts.  The robots that wish to keep more of limited resources for themselves would either control the information about their resources (whatever the robot analogy of money is) as it travels to the agency that keeps track of the robot communal projects, or will build robots to do so.

Some believe super-intelligent robots of the future will not have sex and fall in love and be jealous. I don’t see why not. There will be some decisions about how robots reproduce themselves. If these decisions require more than one robot to participate, they will be the robot analogue of our sex. There will be robots who become emotionally obsessed with joining with other robots to reproduce themselves, and this will be the robot analogue of romantic love. And sometimes these agreements will be broken, and this will cause robot jealousy.

Some believe all robots will moosh together into a single hive organism called the singularity. I don’t see why anybody should expect this. When Australopithecus evolved into Homo sapiens we did not merge into a singularity. We split into nations, religions, families, and back-stabbing, swindling individuals. Why wouldn’t super-intelligent robots be equally riven by conflict and competition?

Unless you have a clear reason for why some human trait will vanish when we are replaced by robots, the best way to imagine robots is to imagine them as humans but super.

I am sure there will also be atheist robots, and robots that pay their taxes and conduct their sexual lives in a rational fashion.  These might be secular protestant robots.


An Animal Catastrophe

They let the dog in the kitchen

They let the beaver in the tomb

They let the komodo dragon with his filthy teeth

Swish his tail through the surgery room.

The dog he ate the baby’s food

The beaver stopped the ceremony to begin

The dragon besmeared the operative site

And sepsis did set in.

O set the dog to hunt with his pack

Chase the beaver from grandpa’s tomb.

But invite in the tiger, the eagle, and elk

To drive the dragon from the surgery room!

It is an Animal Catastrophe

When the Beast Does Not Know Its Place

The Forest, the Lodge, and his Dish on the Floor

Or Licking his Master’s Face!


How to Teach Yourself to Be Smart

Identical twin studies suggest that some portion of intelligence is hereditary (what theorists such as Herrnstein call “g”). What this means is that if one identical twin raised apart does well on a standardized exam, then there is a higher probability that the other twin will do well too. It is certainly conceivable that certain skills — pattern recognition, memory, and the ability to come up with alternate interpretations of data — are heritable. However, given a particular package of heritable skills, humans vary widely in how intelligent they can be.

Let us say that intelligence is the ability to solve problems.  To see why it is not entirely heritable consider the case of someone who seizes on a bad solution to a problem and will not let it go.  Once he has hit upon the solution (which may in fact have required intelligence) his solution will prevent him from solving further problems.  Therefore despite his inherited ability to recognize patterns, remember patterns, and come up with alternative interpretations of data he will be poor at solving problems.  You could say that he has deliberately reduced his intelligence.

If it is possible to reduce one’s intelligence deliberately it follows that it should be possible to deliberately increase one’s intelligence.  It could be argued that inheritance places limits upon how high this can go (and in certain cases — e.g. Down Syndrome — this is undoubtedly the case) however given the existence of libraries and the internet it is possible to augment the patterns one comes up with on one’s own.  Just as corrective lenses and telescopes can augment the eye, these technological aids can augment the mind.

Some might argue that what can be increased is “successful application of intelligence” while intelligence itself remains innate and unalterable.  I would argue that “intelligence itself” should not be of interest to us.  What we should care about is the ability to solve real problems.  If that can be successfully trained then we can leave “innate intelligence” to the prigs.

How is it possible to increase intelligence then?  I believe through two methods: caring and self-criticism.

Hubert Dreyfus’s model of skill acquisition holds that caring about success is necessary for acquiring a skill. It is only when success makes us feel good and failure makes us feel bad that we internalize lessons about what works and what doesn’t. The first requirement for teaching ourselves intelligence, therefore, is to care.

Some people cultivate intelligence by caring about being perceived as intelligent. This is a bad method. For one thing it turns them into obnoxious know-it-alls. But more importantly it judges problem-solving at one remove, by checking to see if a solution is perceived as successful rather than whether it actually is successful. A better method: come up with problems you actually care about and keep trying to solve them. If they are unsolvable problems, but ones which one can approach solving, all the better. Philosophy provides a buffet of such problems — what good is life given that we must die? How does matter think? What is the best way to live? Caring about finding solutions to these problems is a good step towards cultivating intelligence. As Mihaly Csikszentmihalyi has shown in his work on “Flow,” the optimal problem is not so easy as to be boring and not so hard as to be frustrating.

A second path towards training intelligence is to challenge and criticize one’s solutions constantly. Again there is a social path towards criticism — finding intelligent people to debate — and an individual one — debating different sides of oneself. Both are fine. What is critical to criticism is that it avoid the trap of being caught in a local maximum, that is, being satisfied with a good-enough solution rather than the best solution.
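The local-maximum trap named here is the same one that bites a naive hill-climbing search. As an illustrative sketch (the landscape values and function names below are hypothetical, not from the text): a greedy climber stops at the first peak it reaches, while restarting from many points, the analogue of constant self-criticism, finds the true peak.

```python
# Heights along a ridge: a modest local peak at index 2, the true peak at index 7.
landscape = [1, 3, 5, 4, 2, 6, 8, 9, 7]

def hill_climb(start):
    """Greedily move to a strictly higher neighbor until stuck."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best_neighbor = max(neighbors, key=lambda j: landscape[j])
        if landscape[best_neighbor] <= landscape[i]:
            return i  # stuck: no neighbor is higher (possibly only a local maximum)
        i = best_neighbor

# Starting near the small peak, the climber settles for "good enough".
local = hill_climb(0)      # lands on index 2, height 5

# "Criticism" as restarts: challenge the solution from every starting point.
best = max((hill_climb(s) for s in range(len(landscape))),
           key=lambda i: landscape[i])   # finds index 7, height 9
```

The design point is the essay's: a single uncriticized solution can be locally unbeatable yet globally poor, and only repeated challenges from new starting positions expose that.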

It may be that constant self-criticism is not the optimal path to happiness. I do believe it is the path to intelligence. And this satisfies a certain intuitive demand for justice. If intelligence is important we should wish it to be trainable and achievable through effort rather than the result of a genetic lottery. This also suggests that intelligence has more to do with the moral virtues than the advocates of heritable intelligence suggest. Humility and courage are both requirements for successful self-training.


What’s Wrong with A Sophisticated Racism (or Nationalism)?

I had a conversation with a sophisticated racist, or perhaps nationalist.  He belonged to an ethnic group that he believed had a special aptitude that gave it unique importance in the ultimate scheme of human life.  It was possible to join his group, so he preferred to call it a nation rather than a race, nevertheless belonging to it for many of its members was a matter of genetics.  What’s wrong with my view? he wanted to know.  Aren’t you just succumbing to the modern prejudice that everybody has to be the same?  Aren’t you enslaved to your own dogma, that history happens by means of individuals rather than groups?  Why couldn’t it just be the case that Nation A has special admirable qualities?  He was “sophisticated” because he didn’t claim to believe his group was “better” than any other group — he just thought they excelled along a particular, important axis.

I have had a good liberal education and usually it’s assumed that racism and nationalism are bad.   Moralizers appeal to evil regimes that have used these ideologies to justify themselves — Nazi Germany, the antebellum American South — and to the offensiveness of being on the receiving end of someone else’s group amour propre.   Yet on the other hand many good regimes and groups have thought they were better than everybody else at least along a particular axis.  Weren’t some of them right?  Weren’t the ancient Romans actually better at law and road-building than the ancient Carthaginians for example?  Didn’t the Athenians have a knack for philosophy?

And even if historically they weren’t, couldn’t they have been?   There might be an alien planet in which the dominant intelligent species consists of a number of sub-species of which some are better at one thing and some at something else.   Maybe that planet is full of comparatively agile Lesser Pandas and comparatively majestic Greater Pandas.  Maybe my friend was right that his group were just like that — the Greater Pandas of Planet Earth.  Even if all men are cousins, couldn’t the cousins of one father be good at one thing and the cousins of another father be good at something else?

So I felt the conversation had given me an interesting job: find a non-question-begging, non-sanctimonious justification for universal humanism, where universal humanism is the negation of this kind of ethnic or racist boosterism.

Let’s say universal humanism is the doctrine that states:

For all deeply important human traits — spirituality, worthiness, goodness, the ability to perceive beauty, the ability to care about your family, the ability to be part of nature, love — the ethnic group two human beings belong to is not relevant for judging whether they differ according to these traits. This definition punts on the question of what a human being is, but it says that if you want to know what God is, or what beauty is, or what justice is, you would be wrong to show a preference for asking an Englishman rather than a Javanese. We can also, obviously, disagree about what belongs on our list of the deeply important human traits. Clearly speaking English is not one, according to this definition, nor is being tall, since human subgroups do differ along these axes.

What could the argument for universal humanism possibly be?

For me it stems from the empirical observation that whatever I have learned about these human traits, and whatever progress I have made toward them myself, has come from encountering people from different sub-groups. In reality I have learned about God, being a member of a family, beauty, courage, justice and so on from Jews and Christians, Asians and Europeans and Africans, the young and the old, straight and queer.

Therefore, as a pragmatic maxim, I won’t listen to anyone who would restrict my ability to learn and acquire information from different people.

This is an egoistic justification. If you want to know what most makes you deeply human, and thus if you want to learn who you are and what you’re about, you would be ill-advised to subscribe to racism or nationalism.

It’s probably true that universal humanism reduces the probability of a globe-destroying war too.  But I wanted to come up with a justification for the view that was not too preachy.


Sentimentality and Irony Need Each Other

Joking in a culture, in our culture, stands in a symbiotic relationship with glibness, shallowness, and hypocrisy.

Glibness, shallowness, and hypocrisy play a role when a party is unable or unwilling to inhabit a role fully, or to be emotionally honest or open. Consider, for example, a funeral director’s “I’m sorry for your loss” when he doesn’t actually feel sorrow; he feels glad at the chance to make money. We don’t really want the funeral director to talk in a way where truth and falsity are the paramount virtues.

But identifying glibness, hypocrisy and bullshit is almost intrinsically funny, because it involves taking focus away from the pretense onto what is being ignored. For example, the comedian Andy Richter mocked a sentimental bumper sticker published by dog lovers that depicted a paw and the words “Who rescued who?” The sentimental point of the bumper sticker is that just as the dog owner rescued the dog from the pound the dog rescued the dog owner from a life of loneliness. Richter’s comedic point was that the answer is clear — the owner, because death in a pound is worse than loneliness. Both responses are pretty glib: the sentimentality and the irony exist in a symbiotic relationship.

Similarly, “I love irony” is acceptable only when said ironically, and “I hate banality” is a deeply banal statement. They are both more palatable as negations — “I hate irony” and “I love banality” are at least funny things to say.


The Internet Tempts Me to Be Annoying

There are a lot of questions that bug me and that I go back and forth on within my own mind.  Does God exist?  Does Heidegger’s Nazism invalidate his philosophy?  Should we re-make humanity using science into some cool post-human species?  Is secular culture nihilistic or does it contain enough oomph to support a vibrant spiritual life?

When I started using the internet I signed up for discussion groups related to these topics: discussion groups on philosophy, religion, Judaism, Heidegger, and post-humanism, for example. Facebook would then deposit articles on these topics in my feed and I would respond. This led to my having a lot of debates on these topics. I enjoy debating and am pretty good at it, because my father would challenge anything I said around the dinner table, and I went on to do graduate work in philosophy.

These discussions, while not entirely fruitless, were not entirely fruitful either. Very often my debating partners would have a strong, unthought-out prejudice for their point of view, and the limitations of internet discussion made it difficult to make headway. They were like the conversations I would have with people whom I actually knew, but thinned out, deracinated, aimless, and less helpful. (On the plus side, though, I interacted with people from different demographics and social backgrounds, and I have to believe that that’s a Good Thing.)

Recently, though, I found myself sending a message to somebody who had never bothered me, in the hopes of getting into a discussion about the philosophy of religion. I realized that the internet had tempted me to be annoying. I would never in real life go knock on the door of somebody I barely knew and say “How do you know the Torah is an historical document? Prove it.” Yet I had done something similar with this acquaintance. What the heck was the matter with me? (And I am tempted to use an even stronger word than “heck” but forgo it because this is a public post.)

As I always do in these situations I underwent a serious course of self-analysis coupled with certain private purificatory and expiatory practices. After these practices (really a bunch of gussied-up cold baths, if you must know) it became obvious to me that rather than do the hard work of developing a systematic treatment of these issues, one that could integrate the warring parts of myself, I was fleeing into meaningless interactions with strangers.

And what was worse I had violated one of my core values: Leave people alone.  Do Not Be Thou Deliberately Annoying.

I won’t do that again.


Pragmatists Versus Idealists

Sometimes when I’m working on a story with a group everybody will say “Let’s do this because the audience will love it” and I’ll say “It’s just not good.”  Sometimes people will say “That contract is not fair” and everybody else will say “Hey, of course it’s fair.  It doesn’t violate the law and you’ve got no clue what fair means outside of the context of our laws.”   Sometimes people will say “Hey I’m worried that God is not what our Bible and church and synagogue say He is.” and other people will say “Come on.  You wouldn’t have even heard of God if you hadn’t read the Bible — what could you possibly mean by referring to what He is outside of the tradition of what our religion says about Him?”

Let’s say in each of these situations that there is a conflict between an idealist and a pragmatic impulse.  The idealist thinks we can talk meaningfully about certain ideal categories — the true, the good, the beautiful, the just, God — outside of our best current practices for defining them.  The pragmatist says we can’t.

The pragmatist has the better argument.  What is just outside of our institutions of law?  What is a good television show outside of what the audience likes?  The pragmatist is able to reduce the idealist to inarticulacy.  Furthermore the pragmatist can with justice accuse the idealist of fanaticism.  The idealist seems to separate himself from the rest of us when he appeals to these ideals that nobody but him (and his acolytes and fellow crazies) can see.

And yet sometimes it has been crucial that we be brought to the point of inarticulacy. Just because the pragmatist has the best arguments doesn’t mean that he is right. There is more to being right than having the argument that wins. Might does not make right.