Now, that does not mean that, given a choice between robbing the next store a person comes upon and not robbing it, they choose the former because the money will make them happy. Obviously people do not act like that - they have foresight, and they take future repercussions into account. Choices are weighed based on the short- and long-term benefit or harm, on the quality as well as the duration of that benefit or harm, and on other factors. Thus, the claim is not that humans simply choose instant gratification in any situation, but that they choose whatever they perceive will give them the most overall benefit (or least overall harm).
On first hearing it, most people cringe and are quick to point out their favorite altruistic counter-example. They might cite the humble monk who dedicates his life to feeding the poor, himself living only on alms with no possessions; or the man who, walking alone on the roadside, spies a wild dog drowning in a swamp off the road and takes the time to get dirty and risk a bite or two to save it; or the woman who heroically risks her own life to save her child.
However, the psychological egoist can generally account for even these seemingly selfless examples. The egoist might claim that the monk is spending his life helping the poor in order to receive happiness or avoid pain in the afterlife, or to become respected by his fellow monks and perhaps be granted the sainthood he dreams of. The egoist might explain that the man who helps the drowning wild dog does so out of empathy, to avoid the pain he would feel at knowing the helpless animal had drowned. The egoist might say that the woman risked her life to help her child because life without the child would have held so much pain for her that it would not have been worth living; thus it was better for her to risk losing her own life than to be stuck in a life without her beloved child.
In fact, just about any situation a person can come up with can be explained away - sometimes quite plausibly, sometimes less so - by the psychological egoist. The more extreme or convoluted the example, the harder it becomes for the egoist to explain the behavior, but it is hard to imagine any situation that would count definitively as a counter-example (and thus as a disproof, since the egoist claims that everyone always acts this way). Opponents take the fact that the egoist has a hard time explaining certain situations as evidence that the theory is probably wrong; but it also seems possible that the theory is right, and that human weightings of benefit versus harm are more complex than people on either side expect.
Thus, an act that seems explainable only as genuine altruism may in fact be based on selfish motivation, with outside observers (and perhaps even the person acting, who may not recognize the subconscious influences on their own behavior) simply failing to see all the factors at play. At any rate, the theory seems hard to falsify one way or the other. Perhaps we can never know exactly what lies at the heart of human motivation; and I think it is extremely likely that it is much more complex than most people think.
The Experience Machine
One particularly interesting attempt has been made to provide a solid counter-example to the psychological egoists' claim, and it is generally known as the experience machine thought experiment (I believe the idea was first stated in this particular form by Robert Nozick, in his 1974 book Anarchy, State, and Utopia). Imagine a perfect virtual reality machine is created which can be programmed so that when a person hooks up to it, they can live out any life they want: have any career they choose, be as rich as they want, have the perfect mate, live in a world where there is no poverty, starvation or cancer, and so forth - whatever they would consider the best possible life.
The world the machine creates is so flawless that it is indistinguishable from reality while the person is in it (for that matter, every one of us might be in such a machine right this moment, thinking we are in 'reality'). To preserve the illusion, the person has no memory of their life before getting in the machine (including the memory of getting in, since that would spoil it). The machine is so perfectly programmed as to create a world that gives the user the most possible happiness (and if the user thinks some pain is required in order to really experience and enjoy happiness, there will be just enough to maximize the happiness and no more).
The experiences that a person has in the machine would seem just as real as the world we occupy now, so there is no reason to believe a person would get any less happiness out of the experiences inside it. Once hooked up to the experience machine, a person never unhooks - they live out their perfect life (or lives, since they could program it so that they get to experience all sorts of different lives and careers and loves, and each time think it is completely real) until the moment their body dies.
The people who put forth this thought experiment say that if psychological egoism is the case, and all people always seek to maximize their own happiness, then every person should be willing to hook up to this flawless machine. Sure, there might be a few moments of pain between making the decision and pushing the button to start the machine (when a person misses their loved ones and such); but moments later, when the machine starts up, they lose all memories of such pain and from then on are as happy as possible - in the long run, the net pleasure versus pain would come out far better if a person went in the machine.
Of course, explain the situation to some friends or a classroom of students and you will likely find at least one person who would choose not to go into the experience machine. Such people are claimed to be counter-examples to the psychological egoists' claim, and thus proof that the egoist is wrong. The egoist might counter that those who refuse do so because they do not understand the situation and fail to see that it would result in more net happiness for them; or because they weigh the moment of pain between the decision and pushing the button more heavily than even a lifetime of happiness in the machine; or perhaps because, while they are still outside the machine, the happiness inside does not feel real to them, so they never weigh it against the pain of entering and missing out on the 'real' happiness of normal life.
Strangely enough, while I do tend toward a view of human motivation similar to psychological egoism, I do not personally agree with the reasons offered above for not going into the machine. Suppose the machine is as flawless as described (whether such a machine is even theoretically possible is an entirely different question): the experiences inside are indistinguishable from the experiences in my current life, and the programming is likewise flawless - in other words, all the logistics are handled perfectly, with no error where the world is a false utopia that ends up making me unhappy by being 'too perfect', and so forth. In that case, I see no reason to consider the experiences inside any less real than the experiences in my life right now.
After all, as stated before, it is possible we are all in such experience machines right now, and we simply do not realize that all our experiences are flawlessly simulated yet not 'real'. In light of such possibilities, the line between which experiences are real and which are not quickly begins to blur. So I think such a machine (if possible) would in fact offer a life of more happiness - more real happiness - than my current life (assuming that is possible, which gets into the nature of happiness and other tangential subjects). Yet I still would not get in it - for a very different reason.
Memory, Experience and Happiness
I struggle with what exactly the 'I' is, what the 'self' is, and how these things relate to body, mind, consciousness, memory and other such things. But one thing I cannot shake is the feeling that without my memories - the memories that have accumulated in me up until this point - I would not be the same person. For that reason I would decline, say, the chance to go back in time (granted by a wish, for example) to act differently in a situation that turned out terribly, or to take advantage of a great opportunity I had passed up. Were I to go back and act differently, everything that happened between that event and my current state would be altered; my memories would change, including the books I had read, the conversations I had had, the thoughts I had thought - the things that seem so closely connected to my 'self'.
The person leading this new, altered life from that earlier point onward would be a lot like me, but their life would turn out so differently - leading down so many different paths, thoughts and memories - that we would not be the same 'self', or so it seems to me. For this same reason, I would not get in the experience machine as proposed: I would lose all of my current memories when I stepped into it. I would cease to exist, and some other person - perhaps similar to me in some ways, but certainly not the same 'self' - would start to live out some great life. So I would decline the experience machine not because I felt the happiness inside was not real happiness, but because I would not be the one experiencing it. Thus, I do not think my refusal to get in is in any way a disproof of psychological egoism; I find the situation perfectly compatible with it.
Someone might argue that when I sleep I could have dreams of living out some other life or experience without any attachment to my memories in waking life, yet I do not consider going to sleep suicide, so I must have other reasons for not getting in the machine. (I do not disagree, of course, that there might be other reasons; in fact I think it is likely, though I have trouble pinpointing them.) However, the difference I see with sleep is that I wake up from it, and when I do, my memories return. If I recall my dreams, they are integrated into my memories, and thus I consider them to be experiences that I had. If I do not recall any such dreams, however, I have no reason to think the experiences happened to me.
An analogy to the experience machine would be dying in my sleep without waking up, in which case I have trouble believing that it was I who had whatever dreams took place in the hours before death. It really comes down to what the 'I' or 'self' consists of, which is a very complex question; but as I said before, I cannot bring myself to separate my memory from such things, and so in the case of both the experience machine and dying in my sleep, I do not think the experiences would be happening to me, because they would not be integrated into the memories of my current life.
Of course, perhaps they are integrated at some unconscious or subconscious level, in which case I would change my opinion; but in the case of the experience machine, where by definition I would permanently lose all memories of my current life, there is nothing for the new experiences to be integrated into. I see no way around this simple fact. Therefore, I see the experience machine, as defined, as a form of suicide. If, on the other hand, I could get in the experience machine for a short time and then get out of it later with my previous memories intact alongside the new ones, I see no reason why I would not gladly hook up. Who would not want to experience new things, free of the restrictions of everyday 'real' life, if they could still remain themselves and be the one who creates and keeps those memories?