If the super-intelligent robot of the future gets what he wants, he will be happy. If he doesn’t get what he wants, he will be sad. If he doesn’t care whether he gets what he wants, does he really want it? No. Also, whenever there’s a conflict between somebody who cares and somebody who doesn’t, the one who cares will tend to win. If those robots want to get anything done, and not get pushed around, they’re going to have to care, and that means feeling happy and sad.
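If you want the “caring wins” point in the crudest possible terms, here’s a toy sketch (every name here is made up, and it’s nothing like a real mind): two robots pull on the same resource over and over. One has a valence signal it acts on, so losing stings and it tries harder next time; the other picks its effort at random, because it doesn’t care either way.

```python
import random

def caring_robot(effort, got_it):
    """Cares: feels happy or sad about the outcome and adjusts."""
    # Happy -> keep doing what works; sad -> try harder next round.
    return effort if got_it else min(1.0, effort + 0.1)

def indifferent_robot():
    """Doesn't care: picks an effort level at random every round."""
    return random.random()

def tug_of_war(rounds=10_000, seed=0):
    random.seed(seed)
    carer_effort, carer_wins = 0.5, 0
    for _ in range(rounds):
        got_it = carer_effort > indifferent_robot()
        carer_wins += got_it
        carer_effort = caring_robot(carer_effort, got_it)
    return carer_wins / rounds

print(f"caring robot wins {tug_of_war():.0%} of rounds")
```

Run it and the robot who cares ends up winning nearly every round, which is the whole point: indifference gets pushed around.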
That shows robots will feel happy and sad. Now for interpersonal emotions.
Suppose there’s another robot who helps him get what he wants, protects him, and is there for him in a pinch. The first robot is going to like that other robot. If he doesn’t like him, that other robot is not going to want to keep being there for him, or at least he might not. You want to build robots who get along, so why not build them so they like each other? A robot is not going to be as into helping another robot who doesn’t care.
If there’s a robot who’s a jerk to the first robot, always messing with him and frustrating his plans, the first robot is going to hate that robot. Don’t give me Spock and Mr. Data and all that bull. If the other robot is enough of a tool to him, he’s going to hate him. Or at least, if you build him so he can form inter-robot coalitions, and you’re going to want to (’cause in this sea of robots a lone robot drowns), you had better program him to hate, or he is going to be a messed-up robot. The other robot is going to be hateful to him and he won’t hate back? No. Unless he is one saintly robot, he’s going to hate the robot who is a jerk to him.
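And if you really were going to wire this up, the dumbest possible version of liking and hating is a signed ledger: a score per robot that help bumps up and jerk moves bump down, gating whether he cooperates. Everything below is hypothetical, a toy sketch of the coalition logic, not anybody’s actual architecture:

```python
from collections import defaultdict

class Robot:
    """Toy robot that likes robots who help it and hates robots
    who hinder it. A crude reciprocity model, nothing more."""

    def __init__(self, name):
        self.name = name
        self.affect = defaultdict(float)  # other robot -> like (+) / hate (-)

    def observe(self, other, helped):
        # Help warms him up; jerk moves sour him.
        self.affect[other.name] += 1.0 if helped else -1.0

    def will_help(self, other):
        # He's there for robots he likes; he stonewalls robots he hates.
        return self.affect[other.name] > 0

r1, r2, jerk = Robot("r1"), Robot("r2"), Robot("jerk")
for _ in range(3):
    r1.observe(r2, helped=True)     # r2 keeps coming through for r1
    r1.observe(jerk, helped=False)  # jerk keeps frustrating r1's plans
print(r1.will_help(r2), r1.will_help(jerk))  # True False
```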
So that gives you love and hate.
Also, you are a robot who has feelings. A robot constructed by random mutation and some combination of natural and artificial selection, but a robot all the same, made of organic and inorganic chemicals arranged as bones and nerves and organs.