Smart: An Outline of a System of Utilitarian Ethics.
“…act utilitarianism is the view that the rightness or wrongness of an action depends only on the total goodness or badness of its consequences, i.e., on the effect of the action on the welfare of all human beings (or perhaps all sentient beings).” Smart and Williams, p. 4
(These issues, e.g., whether only consequences matter and whose welfare counts, become important later on.)
Smart: The reason we disapprove of the sadist’s pleasure is that sadists normally cause pain, and pain is bad. But there are no intrinsically bad pleasures.
According to Smart, the choice between average and total happiness makes no practical difference.
In principle, though, he argues total is preferable: suppose two universes have equal average happiness, but universe B has 2 million inhabitants while universe A has 1 million, so B’s total happiness is higher. Smart says B is preferable to A.
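A minimal worked version of the arithmetic, assuming (notation mine, not Smart’s) a common average happiness level h per inhabitant:

\[
\text{Total}_A = h \times 1{,}000{,}000, \qquad \text{Total}_B = h \times 2{,}000{,}000 = 2 \times \text{Total}_A
\]

The averages are identical, but B’s total is twice A’s: a total view ranks B above A, while an average view is indifferent between them.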
What’s the best way to minimize suffering?
Mere totals of pains/pleasures in a group don’t account for how those pains and pleasures are distributed among different people.
Smart says there are reasons for accepting fairness as a ‘rule of thumb.’ [Meaning: a rule we should generally follow because doing so usually leads to good consequences, but which may be broken when it does not.]
The deontological objection: “it is my doctrine which is the humane one…it is these very rules which you regard as so cold and inhuman which safeguard mankind from the most awful atrocities…In the interests of future generations are we to allow millions to starve…” etc. (62) The objector adds that a “consequentialist mentality” lies “at the root of vast injustices…today.” (63)
Smart suggests that if we were really sure we would thereby save hundreds of millions in the future, letting tens of millions die now would be the right thing to do.
But the utopian dictators, etc., aren’t in fact right about the future, so such calculations don’t actually justify their atrocities.