1. Characterizing Moral Anti-realism
Traditionally, to hold a realist position with respect to X is to hold that X exists in a mind-independent manner (in the relevant sense of “mind-independence”). On this view, moral anti-realism is the denial of the thesis that moral properties—or facts, objects, relations, events, etc. (whatever categories one is willing to countenance)—exist mind-independently. This could involve either (1) the denial that moral properties exist at all, or (2) the acceptance that they do exist but that existence is (in the relevant sense) mind-dependent. Barring various complications to be discussed below, there are broadly two ways of endorsing (1): moral noncognitivism and moral error theory. Proponents of (2) may be variously thought of as moral non-objectivists, or idealists, or constructivists. Using such labels is not a precise science, nor an uncontroversial matter; here they are employed just to situate ourselves roughly. In this spirit of preliminary imprecision, these views can be initially characterized as follows:
Moral noncognitivism holds that our moral judgments are not in the business of aiming at truth. So, for example, A.J. Ayer declared that when we say “Stealing money is wrong” we do not express a proposition that can be true or false, but rather it is as if we say “Stealing money!!” with the tone of voice indicating that a special feeling of disapproval is being expressed (Ayer 1971: 110). Note how the predicate “…is wrong” has disappeared in Ayer's translation schema; thus the issues of whether the property of wrongness exists, and whether that existence is mind-dependent, also disappear.
The moral error theorist thinks that although our moral judgments aim at the truth, they systematically fail to secure it. The moral error theorist stands to morality as the atheist stands to religion. Noncognitivism regarding theistic discourse is not very plausible (though see Lovin 2005); rather, it would seem that when a theist says “God exists” (for example) she is expressing something that purports to be true. According to the atheist, however, the claim is untrue; indeed, according to her, theistic discourse in general is infected with error. The moral error theorist claims that when we say “Stealing is wrong” we are asserting that the act of stealing instantiates the property of wrongness, but in fact nothing instantiates this property (or there is no such property at all), and thus the utterance is untrue. (Why say “untrue” rather than “false”? See section 4 below.) Indeed, according to her, moral discourse in general is infected with error.
Non-objectivism (as it will be called here) allows that moral facts exist but holds that they are, in some manner to be specified, constituted by mental activity. The slogan version comes from Hamlet: “there is nothing either good or bad, but thinking makes it so.” Of course, the notion of “mind-independence” is problematically indeterminate: Something may be mind-independent in one sense and mind-dependent in another. Cars, for example, are designed and constructed by creatures with minds, and yet in another sense cars are clearly concrete, non-subjective entities. Much careful disambiguation is needed before we know how to circumscribe non-objectivism, and different philosophers disambiguate differently. Many philosophers question whether the “non-objectivism clause” is a useful component of moral anti-realism at all. Many advocate views according to which moral properties are significantly mind-dependent but which they are loath to characterize as versions of moral anti-realism. There is a concern that including the non-objectivism clause threatens to make moral anti-realism trivially true, since there is little room for doubting that the moral status of actions usually (if not always) depends in some manner on mental phenomena such as the intentions with which the action was performed or the episodes of pleasure and pain that ensue from it. The issue will be discussed below, with no pretense made of settling the matter one way or the other.
[The present discussion uses the label “non-objectivism” instead of the simple “subjectivism” since there is an entrenched usage in metaethics for using the latter to denote the thesis that in making a moral judgment one is reporting (as opposed to expressing) one's own mental attitudes (e.g., “Stealing is wrong” means “I disapprove of stealing”). So understood, subjectivism is a kind of non-objectivist theory, but, as we shall see below, there are many other kinds of non-objectivist theory, too.]
Non-objectivism must not be confused with relativism. See:
Supplement: Moral Objectivity and Moral Relativism
As a first approximation, then, moral anti-realism can be identified as the disjunction of three theses:
- (i) moral noncognitivism
- (ii) moral error theory
- (iii) moral non-objectivism
One question that has exercised certain philosophers is whether realism (and thus anti-realism) should be understood as a metaphysical or as a linguistic thesis. (See Devitt 1991 and Dummett 1978 for advocacy of the respective viewpoints.) The “traditional view,” as initially expressed above, makes the matter solidly metaphysical: It concerns existence and the ontological status of that existence. But when the traditional terms of the debate were drawn up, philosophers did not have in mind 20th-century complications such as noncognitivism, which is usually defined as a thesis about moral language. Thus, most contemporary ways of drawing the distinction between moral realism and moral anti-realism begin with linguistic distinctions: It is first asked “Is moral discourse assertoric?” or “Are moral judgments truth apt?” It is not clear that starting with linguistic matters is substantively at odds with seeing the realism/anti-realism distinction as a metaphysical division. After all, if one endorses a noncognitivist view of moral language, it becomes hard to motivate the metaphysical view that moral properties (facts, etc.) exist. The resulting combination of theses, even if consistent, would be pretty eccentric. It may even be argued that noncognitivism implies that moral properties do not exist: The noncognitivist may hold that even to wonder “Does moral wrongness exist?” is to betray conceptual confusion—that the very idea of there being such a property is corrupt.
Another general debate that the above characterization prompts is whether the “non-objectivism clause” deserves to be there. Geoffrey Sayre-McCord, for example, thinks that moral realism consists of endorsing just two claims: that moral judgments are truth apt (cognitivism) and that they are often true (success theory). (See Sayre-McCord 1986; also his entry for “moral realism” in this encyclopedia.) His motivation for this is that to make “mind-independence” a requirement of realism in general would lead to counter-intuitive implications. “Independence from the mental may be a plausible requirement for realism when we're talking about macro-physical objects but it's a non-starter when it comes to realism in psychology (psychological facts won't be independent of the mental)” (1986: 3). Sayre-McCord is motivated by the desire for a realism/anti-realism “template,” which can be applied with equal coherence to any domain.
Two comments may be made against Sayre-McCord's proposal. First, note that we don't expect a univocal account of “realism” across all uses. Consider, for example, the 19th-century French realist art movement: what does it have in common with Platonic realism about universals? We don't expect there to be a common ground of commitments made by Courbet and Plato (say), yet we hardly court confusion by calling them both “realists”. Perhaps the same holds within the discipline of philosophy. There may be little that David Brink's moral realism and R. W. Sellars' perceptual critical realism have in common, yet perhaps we may nonetheless legitimately call them both “realists.” The costs of occasional confusion when moral philosophers engage with other kinds of philosopher on the issue of “realism” may be modest in comparison to the disorder that would ensue within many disciplines if the traditional independence clause were dropped entirely.
Second, it is not clear that maintaining the “mind-independence” clause as a defining feature of the realism/anti-realism division really does make psychological realism a “non-starter.” Perhaps all that is needed is a more careful understanding of the type of independence relation in question. Certainly there is a trivial sense in which the truth or falsity of a psychological claim like “Mary believes that p” depends on a mental fact: whether Mary does believe that p. On the other hand, there is also a sense in which whether Mary has this belief is a mind-independent affair: The fact of Mary's believing that p is not constituted or determined by any of our practices of judging that she does so believe. We could all judge that Mary believes that p and be mistaken. Most people would accept that even Mary might be mistaken about this—erroneously judging herself to believe that p. In the same way, although the moral claim “Mary's action was morally wrong” may be true only in virtue of the pain that Mary's action caused (or because of Mary's wicked intentions), this may not be the right kind of mind-dependence to satisfy the non-objectivist clause.
In deference to the influence that Sayre-McCord's views have had on recent metaethics, perhaps the judicious terminological decision is to distinguish minimal moral realism—which denies the first two theses above (noncognitivism and the error theory)—from robust moral realism—which in addition denies the third (non-objectivism). (See Rosen 1994 for this distinction.) In what follows, however, “moral realism” will continue to be used to denote the robust version.
2. Who Bears the Burden of Proof?
It is widely assumed that moral realism enjoys some sort of presumption in its favor that the anti-realist has to work to overcome. Jonathan Dancy writes that “we take moral value to be part of the fabric of the world; ... and we should take it in the absence of contrary considerations that actions and agents do have the sorts of moral properties we experience in them” (1986: 172). In a similar vein, David McNaughton claims “The realist's contention is that he has only to rebut the arguments designed to persuade us that moral realism is philosophically untenable in order to have made out his case” (1988: 40–41). David Brink concurs: “We begin as (tacit) cognitivists and realists about ethics. ... Moral Realism should be our metaethical starting point, and we should give it up only if it does involve unacceptable metaphysical and epistemological commitments” (1989: 23–24). Of course, anyone can issue a burden-of-proof challenge; philosophical opponents often trade blows in such terms, each trying to shift the burden onto the other. But on occasion such challenges are accepted; both parties acknowledge that one theory faces a special challenge, that it has extra work to do. Here we are interested in whether either moral realism or moral anti-realism bears a burden of proof in this latter sense—that is, whether either is widely acknowledged by both proponents and opponents to have a presumption in its favor.
There are certainly instances of participants in this debate accepting such prima facie burdens (and then attempting to discharge them). John Mackie, for instance, acknowledges that since his moral error theory “goes against assumptions ingrained in our thought and built into some of the ways in which language is used, since it conflicts with what is sometimes called common sense, it needs very solid support” (1977: 35). He seems to be saying that the very fact that it clashes with common sense represents a methodological handicap for his brand of moral skepticism, and thus that the arguments in its favor need to be even more convincing than do those of the opponent if they are to command assent. It is not clear, however, that Mackie was required to shoulder this burden. It appears that for any such charge that one party bears the burden of proof, there is plenty of argumentative space for denying the allegation.
We should delineate two ways that a philosophical position might bear a “burden of proof.” First, there may be a consensus of folk opinion (or “intuition”) that favors the opposing view. Second, there may be a phenomenon, or range of phenomena, for which the position in question appears to suffer a clear disadvantage when it comes to offering an explanation. That these two are distinct is brought out by considering that theory X might do a much better job than theory Y of explaining phenomenon P, even though X is more counter-intuitive than Y. Perhaps Newtonian physics is more intuitive than Einsteinian, but there are observable data—e.g., those gathered during the famous solar eclipse experiments of 1919—that the latter theory is much better equipped to explain.
Supplement: Moral Anti-realism vs. Realism: Intuitions
Supplement: Moral Anti-realism vs. Realism: Explanatory Power
In short, attempts to establish the burden of proof are as slippery and indecisive in the debate between the moral realist and the moral anti-realist as they tend to be generally in philosophy. The matter is complicated by the fact that there are two kinds of burden-of-proof case that can be pressed, and here they tend to pull against each other. On the one hand, moral realists face a cluster of explanatory challenges concerning the nature of moral facts (how they relate to naturalistic facts, how we have access to them, why they have practical importance)—challenges that simply don't arise for either the noncognitivist or the error theorist. On the other hand, it is widely assumed that intuitions strongly favor the moral realist. This tension between what is considered to be the intuitive position and what is considered to be the empirically, metaphysically, and epistemologically defensible position, motivates and animates much of the debate between the moral realist and moral anti-realist.
Let us now discuss in turn the three specific forms of moral anti-realism in more detail.
3. Noncognitivism
On the face of it, when we make a public moral judgment, like “That act of stealing was wrong,” what we are doing is asserting that the act of stealing in question instantiates a certain property: wrongness. This raises a number of extremely thorny metaethical questions: What kind of property is wrongness? How does it relate to the natural properties instantiated by the action? How do we have epistemic access to the property? How do we confirm whether something does or does not instantiate the property? (And so on.) The difficulty of answering such questions may lead one to reject the presupposition that prompted them: One might deny that in making a moral judgment we are engaging in the assignment of properties at all. Such a rejection, roughly speaking, is the noncognitivist proposal. Not only does the noncognitivist sidestep these nasty puzzles, but she may also claim the advantages of doing a better job of explaining the apparent motivational efficacy of moral judgment (see Stevenson 1937; Blackburn 1984; Smith 1994a: chapters 1–2), of more readily accounting for certain aspects of moral disagreement (e.g., its vehemence and intractability) (see Stevenson 1944; 1963: essays 1 and 2), or of accommodating our unwillingness to defer to moral experts (see McGrath 2008).
It is impossible to characterize noncognitivism in a way that will please everyone. Etymologically speaking, moral noncognitivism is the view that there is no such thing as moral knowledge. But it is rarely considered in these terms. Traditionally, it is presented as the view that moral judgments are neither true nor false. This characterization is indeterminate and problematic in several ways. First, it leaves it unclear what category of thing a “moral judgment” is; in particular, is it a mental state or a linguistic entity? If moral judgments are considered to be mental states, then noncognitivism is the view that they are a type of mental state that is neither true nor false, which is equivalent (most assume) to the denial that moral judgments are beliefs. There are at least two ways of treating a moral judgment as a type of “linguistic entity”: We could think of it as a type of sentence (generally, one that involves a moral predicate, such as “…is morally good” or “…is evil”) or we could think of it as a type of speech act. On the former disambiguation, noncognitivism is the semantic view that moral judgments are a type of sentence that is neither true nor false, which is equivalent (most assume) to saying that the underlying grammar of the sentence—its logical form—is such that it fails to express a proposition (in the same way as, say, “Is the cat brown?” and “Shut the door!” are sentences that fail to express propositions). On the latter disambiguation, noncognitivism is the pragmatic view that moral judgments are a type of speech act that is neither true nor false, which is equivalent (most assume) to the denial that moral judgments are assertions (i.e., the denial that moral judgments express belief states). (For discussion of the semantic/pragmatic distinction, see the entry on pragmatics, section 4.) 
In all cases, note, noncognitivism is principally a view of what moral judgments are not—thus leaving open space for many different forms of noncognitivism claiming what moral judgments are.
There are also problems inherent in characterizing noncognitivism in terms of truth value—if for no other reason than that there is much deep and ongoing philosophical debate about the nature of truth and the nature of truth value. There are a number of reasons for thinking that the category of “being neither true nor false” does not align as neatly as often assumed with the categories of “being something other than a belief” (when applied to mental states) or “being something that does not express a proposition” (when applied to sentence types) or “being something other than an assertion” (when applied to speech acts). For example, according to Strawson (1956), if someone were today to utter “The present king of France is wise,” she would have failed to say anything true or false, due to the referential failure of the subject term of the sentence. Yet surely the utterance is not barred from counting as an assertion, and surely the speaker, if she falsely believes that there exists a present king of France, can believe that he is wise. Similarly, it has frequently been argued (though also frequently denied) that sentences manifesting forms of sortal incorrectness (e.g., “The color of copper is forgetful”) are neither true nor false; yet these too are, arguably, assertible. It has also been claimed that vague predicates, when applied to gray-area objects, result in sentences neither true nor false; yet, again, such sentences seem assertible and believable. None of these is an unproblematic position to adopt, but together they at least indicate that it may be preferable to characterize noncognitivism in a manner that does not make essential reference to truth value gaps. There is also pressure in favor of this decision coming from the other direction.
It is not unusual for modern versions of noncognitivism to acknowledge the possibility of moral truth and moral falsity via an embrace of a minimalist theory of truth (see Blackburn 1984, 1993a; Smith 1994b), according to which if one is licensed in uttering a sentence “S” with surface indicative grammar, then so too is one licensed in uttering “‘S’ is true.” Thus, regardless of whether the underlying grammar of the sentence “Stealing is wrong” expresses a proposition, regardless of whether the utterance of this sentence is typically used to express a belief, so long as someone is licensed in uttering the sentence then the appending of the truth predicate will not be inappropriate.
But if we cease to characterize noncognitivism by reference to truth value, how shall we do so? The above three characterizations can each be revised so as to drop mention of truth values, as follows:
- If moral judgments are considered to be mental states, then noncognitivism is the denial that moral judgments are beliefs.
- If moral judgments are considered to be sentence types, then noncognitivism is the denial that moral judgments have an underlying grammar that expresses a proposition.
- If moral judgments are considered to be speech acts, then noncognitivism is the denial that moral judgments are assertions.
How much progress this avoidance buys us remains to be seen. It would not be unreasonable to characterize noncognitivism as the conjunction of these three denials, though there would be something stipulative about insisting upon this. In fact, generally these different strands of noncognitivism simply aren’t sufficiently teased apart.
What, then, are the noncognitivist's options regarding positive views?
- If moral judgments are taken to be mental states, but not beliefs, then the likely contenders for being moral judgments are: desires, emotions, attitudes, and, in general, some specifiable kind of conative state. The noncognitivist may want to present something more specific, such as (dis)approval, or desire that the action in question (not) be performed, or subscription to a normative framework [to be specified], or desire that transgressors be punished, etc. The range of options is open-ended.
- If moral judgments are taken to be sentences, but ones whose underlying grammar is not proposition-expressing, then the noncognitivist must provide an account of the “true” logical structure of the moral sentence which reflects this. One traditionally dominant such form of noncognitivism once went by the name “the Boo/Hurrah” theory; it is now known as “emotivism.” According to this theory, the real meaning of a sentence like “Stealing is wrong” is something like the interjection “Stealing: Boo!” (It is important to distinguish this view—according to which moral sentences express one's feelings—from a view according to which moral sentences report one's feelings. Expressing one's disapproval toward X through saying “X: yuk!” is different from asserting “I feel disapproval of X.”) Another influential kind of noncognitivism called “prescriptivism” claims that this sentence is really a veiled command whose true meaning should be captured using the imperative mood: “Don't steal!” (see Carnap 1935: 24–25). R.M. Hare (1952, 1963) restricted this to commands that one is willing to universalize. Since there are many kinds of non-proposition-expressing sentence, there are many such possibilities for a noncognitivist. A certain kind of fictionalist might claim that the real meaning of “Stealing is wrong” should be rendered in the cohortative mood (which in English is not grammatically distinguished from imperative): “Let's pretend that stealing is wrong.” One might claim that the sentence really articulates a wish: “Would that no one would steal!” (optative-subjunctive mood). The thing to notice is that in all the translation schemata offered (but one) the predicate “…is wrong” gets translated away, thus obviating the philosophical puzzles surrounding the need to explain the nature of moral properties. This evasion of a cluster of thorny philosophical problems represents noncognitivism's greatest theoretical attraction. 
(The one view in which the predicate does not disappear is the fictionalist offering, but here the predicate is embedded in a “Let's pretend that…” context, thus removing any ontological commitment to the instantiation or even existence of the property. This fictionalist does, however, owe us some kind of account of what this property would be like, in order that the content of the fiction can be understood.)
- If moral judgments are taken to be speech acts, but not assertions, then the likely contenders for being moral judgments appear very similar to those described under (ii): Moral judgments may be used to express emotion, or to voice commands, or to initiate an act of make-believe, or to express a wish, etc. The difference is that this kind of noncognitivist sees these possibilities in terms of what moral language is used for, not as a matter of the meaning or grammar of moral language, and thus has no need to offer a translation schema into a different grammatical mood. (Whether one uses the sentence “The frog was green” to make an assertion or utters it with assertoric force withheld in the course of telling a fairy tale, the meaning and grammar of the sentence remain the same.) The critical (and often overlooked) point is that assertion is not a grammatical or semantic category. It makes no sense to ask whether the sentence “The frog was green” is an assertion. It can certainly be used to make an assertion, but it might also be uttered as a line of a play, or dripping with tones of sarcasm, or as an example of a four-word English sentence—and in none of these cases would it be asserted. The match between grammatical categories and speech acts is a rough one. One can assert something not only using the indicative mood, but also with the interrogative mood (“Is the pope Catholic?” meaning Yes) or the imperative mood (“Get outta here!” meaning No); one can command something not only with the imperative mood, but also with the interrogative mood (“Will you come here right now, young man?”) or the optative-subjunctive mood (“Would that you would come here!”); and so on.
The noncognitivist making a claim about the use of moral sentences (as opposed to a claim about their meaning) can allow that the meaning of the sentence “Stealing is wrong” is just what it appears to be (here she can accept whatever the moral cognitivist says on the matter); but this noncognitivist maintains that the primary usage of this sentence is not to make an assertion, despite its being formed in the indicative mood. Since there are a great many kinds of speech act other than assertion (admonishing, commanding, exclaiming, promising, requesting, pretending, warning, undertaking, etc., etc.)—and since no one has yet proposed an exhaustive list—the noncognitivist has many positive options. (For more on speech act theory, see Austin 1962; Searle 1969.)
In short, the range of possible positive moral noncognitivist theories is large, though the level of plausibility among the members will vary greatly. (For further discussion of noncognitivism, see the entry on moral cognitivism vs. non-cognitivism.) Modern noncognitivism is widely associated with the work of Blackburn, who also uses the terms “projectivism” and “quasi-realism” for the position he advocates. These three labels, however, can all be teased apart.
Supplement: Projectivism and Quasi-realism
Occasionally (though less so these days) one sees noncognitivism characterized as the view that moral judgments are meaningless. This is an inaccurate description, but it is instructive to recount why someone might be led to assert it. One of the first clear statements of moral noncognitivism came from Ayer in 1936. According to Ayer's influential brand of logical positivism, all meaningful statements are either analytic or empirically verifiable. Since moral utterances appear to be neither, Ayer concluded that they were not meaningful statements. But it does not follow that moral judgments are meaningless. Ayer's preferred conclusion is that they are not statements, but are, rather, ways of evincing one's emotions and issuing commands. (Ayer did claim that the moral predicates are not really predicates at all, that they do not pick out properties, and thus that they cannot logically be nominalized. Since wrongness, for Ayer, is a pseudo-concept, it may reasonably be claimed that Ayer took the word “wrongness,” and all other moral nouns, to be meaningless.)
[Historical aside: though Ayer is often credited with the first clear formulation of emotivism, it had been suggested to him earlier by Austin Duncan-Jones. (Duncan-Jones did not publish anything on the topic until his review of C.L. Stevenson's Ethics and Language in Mind 54 (1945); however, his views were described in C.D. Broad's article “Is goodness the name of a non-natural quality?” Proceedings of the Aristotelian Society 34 (1933–34).) Ayer admits his debt to Duncan-Jones in his autobiography. Emotivism had also been clearly presented in C.K. Ogden and I.A. Richards' 1923 book The Meaning of Meaning. Ogden and Richards write of a use of the word “good” which is...
...a purely emotive use. When so used the word stands for nothing whatsoever, and has no symbolic function. Thus, when we so use it in the sentence, ‘This is good’, we merely refer to this, and the addition of ‘is good’ makes no difference whatever to our reference … it serves only as an emotive sign expressing our attitude to this, and perhaps evoking similar attitudes in other persons, or inciting them to actions of one kind or another. (125)
Ayer later wrote: “I must confess that I had read The Meaning of Meaning some years before I wrote Language, Truth and Logic, but I believe that my plagiarism was unconscious” (1984: 28). Ogden and Richards were in turn picking up on a distinction between the denoting and emotive qualities of language that can be traced back at least to Frege's 1897 essay “Logic,” and even to J.S. Mill's 1843 System of Logic (book 6). Stephen Satris (1987) tracks the Continental origins of emotivism back to the work of Hermann Lotze in the 19th Century.]
Noncognitivism is generally presented as a descriptive characterization of moral thought or language, though occasionally it is presented in a prescriptive spirit: It may be held that moral cognitivism is as a matter of fact true, but that (for various reasons) it would be a good idea if we changed our attitudes and/or language in such a way that noncognitivism became true. (See Joyce 2001; West 2010.)
If noncognitivism is defined as the negation of cognitivism—as a theory about what moral judgments are not—then the two theories are not just contraries but contradictories. However, a degree of benign relaxation of criteria allows for the possibility of “mixed” theories. If we consider noncognitivism not as a purely negative thesis, but as a range of positive proposals, then it becomes possible that the nature of moral judgments combines both cognitivist and noncognitivist elements. For example, moral judgments (as speech acts) may be two things: They may be assertions and ways of issuing commands. (By analogy: To call someone a “kraut” is both to assert that he is German and to express a derogatory attitude toward people of this nationality.) C.L. Stevenson held such a mixed view; for modern versions, see Copp 2001; Schroeder 2010 chapter 10; Svoboda 2011.
4. Error Theory
Understanding the nature of an error theory is best done initially by example: It is the attitude that sensible people take toward phlogiston, that levelheaded people take toward astrology, that reasonable people take toward the Loch Ness monster, and that atheists take toward the existence of gods. An error theorist doesn't believe in such things; she takes talk of such things to be a load of bunk. The moral error theorist doesn't believe in such things as moral obligation, moral value, moral desert, moral virtue, and moral permission; she takes talk of such things to be bunk. This much allows one to get a fairly good intuitive grasp on the error theoretic position, though the details of how the stance should best be made precise are unresolved.
One might be tempted to express the error theory in negative existential terms: as the view that X doesn't exist. Some qualifications may be necessary depending on whether X is taken to be an object or a property. If it is an object, the error theorist simply denies its existence; but if it is a property it is somewhat less clear how to articulate the error theorist's denial. Does she deny that the property exists, or deny that it is instantiated at the actual world? It is a task for metaphysicians to decide the best way that we should speak of the status of the property of being phlogiston, say. One might allow that the property exists—even that it exists at the actual world—but deny that it is instantiated.
The problem with characterizing the error theory in negative existential terms is that it doesn't distinguish the position from noncognitivism, for the noncognitivist also denies that moral qualities exist (discounting the linguistic permissions that may be achieved via the quasi-realist program—see the supplementary document Projectivism and Quasi-realism). The difference between the noncognitivist and the error theorist is that the latter takes moral judgment as a mental phenomenon to be a matter of belief, and moral judgment as a linguistic phenomenon to be assertoric. Nobody thinks that when a 17th-century chemist said “Phlogiston resides in combustible materials” he was doing anything other than making an assertion; i.e., nobody is a noncognitivist about 17th-century phlogiston discourse. But we think that such assertions were systematically untrue, since there is no phlogiston. Similarly, the moral error theorist thinks that moral utterances are typically assertions (i.e., the error theorist is a cognitivist) but they are systematically untrue, since there are no moral properties to make them true. Strictly speaking, then, the object of an error theoretic stance is a discourse: We are error theorists about phlogiston discourse, not about phlogiston. In practice, however, philosophers often describe the error theory in the latter ontological manner, and this causes no obvious confusion. The common phrase “an error theory about morality” fudges this distinction.
Just as we obviously don't think that every sentence containing the word “phlogiston” is untrue (consider “Phlogiston doesn't exist” and “17th-century chemists believed in phlogiston”), nor does the moral error theorist hold that every sentence containing a moral term is untrue; indeed, the use of such terms is surely essential to articulating and advocating the error theory. Rather, the error theorist focuses on a proper subset of sentences containing the problematic terms: those that imply or presuppose the instantiation of a moral property. “Stealing pears is morally wrong” will be such a sentence; “Augustine believed that stealing pears is wrong” will not be. Let us call such sentences “atomic moral sentences.” The error theorist is typically characterized as holding that all atomic moral sentences are false. (See Pigden 2010.) As a quick characterization this is probably adequate, but speaking more carefully there may be grounds for revision. Consider, say, discourse about Babylonian gods, and consider in particular those sentences that imply or presuppose the existence of these gods (e.g., “Ishtar traveled to the underworld” but not “The Babylonians believed that Ishtar traveled to the underworld”). We rightly do not believe in Ishtar and all the rest of the Babylonian pantheon, and this should make us error theorists about this discourse. However, it is not obvious that a sentence like “Ishtar traveled to the underworld” comes out as false. As mentioned earlier, Strawson (1956) argued that such a sentence—where the subject term suffers from referential failure—is best considered neither true nor false. Were we to adopt this Strawsonian view, we should not be forced to accept noncognitivism about this erroneous discourse, for we saw in section 3 several reasons for rejecting the popular characterization of noncognitivism as the claim that moral judgments are neither true nor false. 
We can both maintain the distinction between the error theoretic position and noncognitivism, and accommodate the Strawsonian complication, if the error theoretic position is defined as the view that the relevant sentences of the discourse in question are, though typically asserted, untrue.
Not only is endorsing a moral error theory consistent with the continued use of moral terms (as in “Nothing is morally wrong”), it is even consistent with the continued use of atomic moral claims (such as “Stealing pears is wrong”). It is typically assumed that the moral error theorist must be a moral eliminativist: advocating the abolition of all atomic moral sentences. But in fact what the error theorist decides to do with the erroneous moral language is a matter logically independent of the truth of the moral error theory. Perhaps the moral error theorist will carry on asserting moral judgments although she believes none of them—in which case she will be lying to her audience (assuming her audience consists of moral believers). If lying is a fault only in a moral sense, the error theorist may remain unperturbed by this accusation. Or perhaps the moral error theorist carries on uttering moral sentences but finds some way of removing assertoric force from these utterances, in which case she is not lying, and need not be committing a moral or epistemological sin any more than does an actor reciting the lines of a play. (The error theorist who advocates maintaining moral language in this way is a kind of fictionalist. See Joyce 2001; Kalderon 2005; West 2010. See the entry on fictionalism.) Such possibilities suffice to show that the moral error theorist need not be an eliminativist about moral language, and counter the popular assumption that if we catch a professed moral error theorist employing moral talk then we can triumphantly cry “Aha!” Furthermore, even if it were true that by employing moral language the moral error theorist opens herself to accusations of hypocrisy, disingenuousness, bad faith, or vacillating between belief and disbelief, all such charges amount to criticisms of her—and to suppose that this somehow undermines the possibility of the moral error theory being true is to commit an ad hominem fallacy.
Although one could be a moral error theorist by implication—either because one endorses a radical global error theory (thus being skeptical of morality along with modality, colors, other minds, cats and dogs, etc.), or because one endorses an error theory about all normative phenomena—typically the moral error theorist thinks that there is something especially problematic about morality, and does not harbor the same doubts about normativity in general. The moral error theorist usually allows that we can still deliberate about how to act, she thinks that we can still make sense of actions harming or advancing our own welfare (and others' welfare), and thus she thinks that we can continue to make sense of various kinds of non-moral “ought”s, such as prudential ones (see Joyce 2007). Thus the moral error theorist can without embarrassment assert a claim like “One ought not harm others,” so long as it is clear that it is not a moral “ought” that is being employed. (In the same way, an atheist can assert that one ought not covet one's neighbor's wife, so long as it is clear that this isn't an “…according to God” prescription.)
Holding a moral error theoretic position does not imply any degree of tolerance for those actions we generally abhor on moral grounds. Although the moral error theorist will deny (when pressed in all seriousness) that the Nazis' actions were morally wrong, she also denies that they were morally right or morally permissible; she denies that they were morally anything. This does not prevent her from despising and opposing the Nazis' actions as vehemently as anyone else. (See Joyce 2001, 2007; Garner 2010.) Thinking that the moral error theorist must be “soft on crime” is like thinking that the atheist must be.
Mackie, who coined the term “error theory” and advocated the view most clearly (1977), described it as a form of “moral skepticism.” Whether this label is acceptable depends on how broad or specific a definition of “skepticism” is being employed. If one thinks of skepticism as the state of being unsure, then Mackie is no skeptic: his position is not one of epistemic agnosticism with respect to moral claims, but rather of positive disbelief. (He is an “atheist” about morality, not an “agnostic.”) However, if one thinks of skepticism as the claim that there is no moral knowledge, and, moreover, thinks that a proposition must be true to be known, then Mackie's denial of moral truth can properly be called “skepticism.” (See the entry on skepticism.) Even so, the moral error theorist may still dislike the term “skeptic” for the connotations it brings that her position is somehow to be defined in opposition to a mainstream, and that she thus starts off shouldering a burden of proof. (Even the term “anti-realist” may be disliked for these reasons.) After all, if being “skeptical” is used in one of its vernacular modes to denote being in a state of disbelief, then the moral error theorist is no more deserving of the label than the moral realist, for the realist is a skeptic regarding the non-existence of moral properties. (Cf. definition of “theist”: “One who denies that God does not exist.”)
There are many possible routes to a moral error theory, and one mustn't assume that the metaethical position is refuted if one argumentative strategy in its favor falters. Perhaps the error theorist thinks that for something to be morally bad (for example) would imply or presuppose that human actions enjoy a kind of unrestricted autonomy, while thinking that in fact the universe supplies no such autonomy (see Caruso 2013; Blackmore 2013). Perhaps she thinks that for something to be morally bad would imply or presuppose a kind of inescapable, authoritative imperative against pursuing that thing, while thinking that in fact the universe supplies no such imperatives (Mackie 1977; Joyce 2001; Olson 2011, 2014). Perhaps she thinks that for something to be morally bad would imply or presuppose that human moral attitudes manifest a kind of uniformity, while thinking that in fact attitudes do not converge (Burgess 2007; see also Smith 1994a: 187–189, 2006, 2010). Perhaps she thinks that there exists no phenomenon whose explanation requires that the property of moral badness be instantiated, while thinking that explanatory redundancy is good ground for disbelief (Hinckfuss 1987). Perhaps she thinks that tracing the history of the concept moral badness back to its origins reveals a basis in supernatural and magical forces and bonds—a defective metaphysical framework outside which the concept makes no sense (Anscombe 1958; Hägerström 1953; see Petersson 2011). Perhaps she is both a Divine Command Theorist and an atheist. Perhaps she thinks all these things and more besides. Perhaps she is impressed by a number of little or medium-sized considerations against morality—none of which by itself would ground an error theory, but all of which together constitute sufficient grounds for skepticism.
Most opposition to the moral error theoretic position targets particular arguments in its favor, and since the range of such arguments is open-ended, so too is the opposition. Discussion has focused heavily on Mackie's 1977 presentation, and in particular on his two arguments in favor of the error theory: the Argument from Relativity and the Argument from Queerness.
Supplement: Mackie's Arguments for the Moral Error Theory
For discussion of Mackie's position, see papers in Honderich 1985 and in Joyce & Kirchin 2010. See also Brink 1984; Garner 1990; Daly & Liggins 2010; Miller 2013, ch. 6; Olson 2011, 2014. It is important to remember, however, that Mackie's are not the only, nor necessarily the strongest, considerations in favor of the moral error theory.
The typical argument for the error theory has two steps: the conceptual and the ontological. First the error theorist tries to establish that moral discourse is centrally committed to some thesis X. The phrase “centrally committed” is supposed to indicate that to deny X would be to cease to participate competently in that discourse. Imagine a phlogiston theorist who, upon hearing of the success of oxygen theory, claims that his theory has been vindicated; he asserts that he has been talking about oxygen all along but just by a different name. When the important differences between the two substances are pointed out to him (that phlogiston is stored in flammable materials and released during combustion, whereas oxygen from the atmosphere combines with flammable materials during combustion), he admits that he's had some false beliefs about the nature of the substance, but remains adamant that he was still talking about oxygen all along. This seems unacceptable, roughly because the thesis about being stored and released is a “central commitment” of phlogiston talk; to deny this thesis with respect to some substance is to cease to talk about phlogiston.
The ontological step of the error theorist's argument is to establish that thesis X (whatever it may be) is false. This may be achieved either through a priori means (demonstrating X to be incoherent, say) or through a posteriori methods (investigating the world and coming to the conclusion that nothing satisfies X). Which method is appropriate depends on the nature of the error that has been attributed to moral discourse. Sometimes the moral error theorist will hold that there is something impossible or incoherent about moral properties, such that the error theory is necessarily true. But it suffices for being an error theorist to hold that the non-instantiation of moral properties is a merely contingent affair. (Mackie, for example, though often interpreted in the former way, seems to prefer the latter. He concedes that if theism were true, then “a kind of objective ethical prescriptivity could be introduced” (1977: 48), and, though an avowed atheist, Mackie did not, apparently, maintain that theism is necessarily false. Thus on the basis of this passage we must conclude that he took the moral error theory to be only contingently true.)
The error theorist pressing this form of argument thus faces two kinds of opponent. The challenger may acknowledge that the putatively problematic attribute that the error theorist assigns to morality really is problematic, but deny that this attribute is an essential component of morality; a normative framework stripped of the troublesome element will still count as a morality. Alternatively, the opponent may accept that the putatively problematic attribute is a non-negotiable component of anything deserving the name “morality,” but deny that it really is problematic. So, for example, if the error theorist claims that moral properties require a kind of pure autonomy which the universe does not supply, then one type of opponent will insist that morality requires nothing of the sort, while another will insist that the universe does indeed contain such autonomy.
The error theorist must be prepared to defend herself on both fronts. This job is made difficult by the fact that it may be hard to articulate precisely what it is that is so troubling about morality. This failure need not be due to a lack of clear thinking or imagination on the error theorist's part, for the thing that is troubling her may be that there is something deeply mysterious about morality. The moral error theorist may, for example, perceive that moral imperatives are imbued with a kind of mystical practical authority—a quality that, being mysterious, of course cannot be articulated in terms satisfactory to an analytic philosopher. Such an error theorist is forced to fall back on vague metaphors in presenting her case: Moral properties have a “to-be-pursuedness” to them (Mackie 1977: 40), moral facts would require that “the universe takes sides” (Burgess 2007), moral believers are committed to “demands as real as trees and as authoritative as orders from headquarters” (Garner 1994: 61), and so on. Indeed, it may be the vague, equivocal, quasi-mystical, and/or ineliminably metaphorical imponderabilia of moral discourse that so troubles the error theorist. (See Hussain 2004.)
Even if the error theorist can articulate a clear and determinate (putatively) problematic feature of morality, the dispute over whether this quality should count as a “non-negotiable component” of morality has a tendency to lead quickly to impasse, for there is no accepted methodology for deciding when a discourse is “centrally committed” to a given thesis. What is needed is a workable model of the identity criteria for concepts (allowing us confidently to either affirm or deny such claims as “The concept of moral obligation is the concept of an institution-transcendent requirement”)—but we have no such model, and there is no consensus even on what approximate shape such a model would take. It is also possible that the most reasonable account of conceptual content will leave many concepts with significantly indistinct borders. There may simply be no fact of the matter about whether the concept of moral obligation is, or is not, the concept of an institution-transcendent requirement (for example). Thinking along these lines, David Lewis makes use of the distinction between speaking strictly and speaking loosely: “Strictly speaking, Mackie is right: genuine values would have to meet an impossible condition, so it is an error to think there are any. Loosely speaking, the name may go to a claimant that deserves it imperfectly … What to make of the situation is mainly a matter of temperament” (Lewis 2000: 93).
Lewis's own temperament leads him to want to vindicate moral discourse, and he thinks that this can be done by supporting a kind of dispositional theory of value. He argues that certain dispositional properties, properly understood, are adequate contenders for being identified with values, and he applies this account to the moral realm (Lewis 2005: 320), thus defending the existence of moral facts (though not mind-independent moral facts). But he admits that this works only if one is willing to “speak loosely” about morality. If, on the other hand, one insists on speaking strictly, then (Lewis admits) one is forced to acknowledge that there are desiderata of moral values (such as the authoritative practical oomph that Mackie goes to such efforts to articulate) that these dispositions do not satisfy. And what is wrong with insisting on speaking strictly, or wrong with antecedently preferring to support theories that disrupt and challenge rather than vindicate ordinary belief systems? Nothing, according to Lewis. If this is correct, then the dispute between the moral error theorist and her many detractors may in fact be fundamentally undecidable—there may simply be no fact of the matter about who is correct. (See Joyce 2012.)
5. Non-objectivism
To deny both noncognitivism and the moral error theory suffices to make one a minimal moral realist. Traditionally, however, moral realism has required the denial of a further thesis: the mind-dependence of morality. There is no generally accepted label for theories that deny both noncognitivism and the moral error theory but maintain that moral facts are mind-dependent; here I shall use the term “non-objectivism.” Thus, “moral non-objectivism” denotes the view that moral facts exist and are mind-dependent (in the relevant sense), while “moral objectivism” holds that they exist and are mind-independent. (Note that this nomenclature makes the two contraries rather than contradictories; the error theorist and the noncognitivist count as neither objectivists nor non-objectivists. The error theorist may, however, be an objectivist in a different sense: in holding that moral facts are conceptually objective facts.) Let us say that if one is a moral cognitivist and a moral success theorist and a moral objectivist, then one is a robust moral realist. In this section, the third condition will be discussed.
Yet this third condition, even more than the first two, introduces a great deal of messiness into the dialectic, and the line between the realist and the anti-realist becomes obscure (and, one might think, less interesting). The basic problem is that there are many non-equivalent ways of understanding the relation of mind-(in)dependence, and thus one philosopher's realism becomes another philosopher's anti-realism. At least one philosopher, Gideon Rosen, is pessimistic that the relevant notion of objectivity can be sharpened to a useful philosophical point:
To be sure, we do have “intuitions” of a sort about when the rhetoric of objectivity is appropriate and when it isn't. But these intuitions are fragile, and every effort I know to find the principle that underlies them collapses. We sense that there is a heady metaphysical thesis at stake in these debates over realism … [b]ut after a point, when every attempt to say just what the issue is has come up empty, we have no real choice but to conclude that despite all the wonderful, suggestive imagery, there is ultimately nothing in the neighborhood to discuss. (1994: 279. See also Dworkin 1996.)
As Rosen says, metaphors marking off subjectivism from objectivism are easy to come by and easy to motivate in the uninitiated. The objectivist about X likens our X-oriented activity to astronomy, geography, or exploration; the subjectivist likens it to sculpture or imaginative writing. (These are Michael Dummett's metaphors (1978: xxv).) The objectivist sees the goal of our inquiries as being to “carve the beast of reality at the joints” (as the popular paraphrase of Plato's Phaedrus puts it); the subjectivist sees our inquiries as the application of a “cookie cutter”: imposing a noncompulsory conceptual framework onto an undifferentiated reality (to use Hilary Putnam's equally memorable image (1987: 19)). The objectivist sees inquiry as a process of detection, our judgments aiming to reflect the extension of the truth predicate with respect to a certain subject; the subjectivist sees inquiry as a process of projection, our judgments determining the extension of the truth predicate regarding that subject.
The claim “X is mind-(in)dependent” is certainly too coarse-grained to do serious work in capturing these powerful metaphors; it is, perhaps, better thought of as a slogan or as a piece of shorthand. There are two conspicuous points at which the phrase requires precisification. First, we need to decide what exactly the word “mind” stands for. It can be construed strictly and literally, to mean mental activity, or it can be understood in a more liberal manner, to include such things as conceptual schemes, theories, methods of proof, linguistic practices, conventions, sentences, institutions, culture, means of epistemic access, etc. Were the moral facts to depend on any of these anthropocentric things, the anti-realist imagery of moral judges qua inventors may seem more apt than that of moral judges qua discoverers. Second, we need to decide what kind of relation is denoted by “(in)dependent.” Consider the following possibilities, concerning any of which it might be claimed that it makes goodness depend on mental activity (in this case, for simplicity, John's attitude of approval):
X is good iff John approves of X
X is good iff John would approve of X (in such-and-such circumstances)
X is good iff X merits John's approval
The catalog can be made longer, depending on whether the “iff” is construed as necessary or contingent, conceptual, a priori, or a posteriori.
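To illustrate how much turns on the construal of the “iff,” here is a rough formalization (my own gloss, not drawn from the text above) of three non-equivalent readings of the first biconditional, writing Gx for “x is good” and Ax for “John approves of x”:

```latex
% (i) Merely material reading: holds at the actual world,
%     perhaps only coincidentally
\forall x\,(Gx \leftrightarrow Ax)

% (ii) Necessitated reading: holds at every possible world
\Box\,\forall x\,(Gx \leftrightarrow Ax)

% (iii) Dispositional/subjunctive reading: John would approve of x
%       were conditions C to obtain (a Lewis-style counterfactual)
\forall x\,\bigl(Gx \leftrightarrow (C \mathrel{\Box\!\!\to} Ax)\bigr)
```

Only readings in the vicinity of (ii) and (iii) make goodness track John's attitudes across counterfactual circumstances; reading (i) could hold by sheer accident, which is one reason the contingent/necessary contrast matters when assessing claims of mind-dependence.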
To illustrate further the ubiquity of and variation among mind-dependence relations on the menu of moral theories, consider the following:
- According to classic utilitarianism, one is obligated to act so as to maximize moral goodness, and moral goodness is identical to happiness. Happiness is a mental phenomenon.
- According to Kant, one's moral obligations are determined by which maxims can be consistently willed as universal laws; moreover, the only thing that is good in itself is a good will. Willing is a mental activity, and the will is a mental faculty.
- According to John Rawls (1971), fairness is determined by the results of an imaginary collective decision, wherein self-interested agents negotiate principles of distribution behind a veil of ignorance. Decision-making, negotiation, and agency all require mental activity.
- According to Michael Smith (1994a), the morally right action for a person to perform is determined by what advice would be given to that person by her epistemically and rationally idealized counterpart. (See also Railton 1986.) Epistemic improvement and rational improvement are mental phenomena.
- According to Richard Boyd (1988), moral goodness is identical to a cluster of properties conducive to the satisfaction of human needs, which tend to occur together and promote each other. Human needs may not all be mental, but the needs that depend in no way on the existence of mental activity are surely few.
- According to Frank Jackson (1998), ethical terms pick out properties that play a certain role in the conceptual network determined by mature folk morality. “The folk” necessarily have minds, and the relevant process of “maturing” is presumably one that implicates a variety of psychological events.
Indeed, it is difficult to think of a serious version of moral success theory for which the moral facts depend in no way on mental activity. Yet to conclude that the distinction between minimal and robust realism cannot be upheld would be hasty. Many metaethicists who reject noncognitivism and the error theory, and thus count as minimal realists, continue to define their position (often under the label “constructivism”) in contrast to a realist view. (See Bagnoli 2002; Ronzoni 2010; Street 2010, 2012. See also the entry on constructivism in metaethics.) The challenge is to pick among the various mind-(in)dependence relations in the hope of drawing a distinction that is philosophically interesting and meshes satisfactorily with our preexisting philosophical taxonomy, such that some success theorists count as realists and some do not. Whether this aspiration can be satisfied remains to be seen, and thus Rosen's challenge is a real one. Answering this challenge is certainly not something that is aspired to here, though some preliminary thoughts will be offered.
There are unquestionably forms of mind-dependence that need to be excluded. Consider 21st-century global warming, and assume, as the scientific consensus declares, that this phenomenon is caused largely by human activity. The activities in question—driving vehicles, heavy industry, etc.—are largely intentional behaviors, hence had our minds been different—had humanity been inclined to lead a pastoral existence involving solar electricity and lots of bicycles—there would be no global warming. Thus the sentence “Global warming is occurring” is true thanks in part to human minds. And, indeed, to the extent that our actions might yet reverse the phenomenon of global warming, by changing our minds we can render the sentence false. Yet, for all this, there certainly would seem to be something wrongheaded in claiming that global warming is “just subjective.” The straightforward kind of causal connection between mental activity and global warming (or, for that matter, airplanes, books, computers, drycleaners, etc.) is evidently not the right kind of mind-dependence that determines the objectivism/non-objectivism divide.
Compare a different case. Suppose I have a nugget of gold in one hand and a thousand dollar bill in the other. Let us say that it is a fact that (here and now) the nugget of gold is worth the same as the rectangular flat object, just as it is a fact that the thing in my left hand is made of metal and the thing in my right hand is made of paper. Yet the status of these facts seems different. The former fact, concerning the comparative value of the held objects, is not merely causally dependent on human mental activity, but seems somehow sustained and perhaps even constituted by that activity. Were the relevant authorities to decide that the nugget is worth twice the piece of paper, then it would cease to be true that they are worth the same—and it would, plausibly, cease to be true immediately, not via this decision having set into motion various worldly events that will eventually cause the value to change. By comparison, were we all to come to believe that the nugget is not made of gold, or that the rectangular flat object is not made of paper, this would have no effect on the material constitution of the items. Were we all to die tomorrow, the nugget would carry on being made of gold, the flat rectangular object would carry on being made out of paper, but it would cease to be true that the nugget of gold is worth the same as the thousand dollar bill.
But this is all more suggestive than edifying. The exact nature of the mind-dependence relation exemplified by the value-of-gold example is obscure, and it remains to be seen whether this relation would be a reasonable explication of the one invoked by moral non-objectivists. Rosen would doubt that the example illustrates a useful notion of mind-dependence at all. His argument might be reconstructed as follows. First, we need to avoid being distracted by the indexical elements, so let us consider the sentence “Nugget of gold X is worth the same as this piece of paper Y, at noon, January 1, 2014, in the USA.” Rosen would argue that investigating whether this sentence is true should be a perfectly straightforward empirical pursuit, that in no sense have we abrogated “the Realist's rhetoric of objectivity, already-thereness, discovery and detection” (293). An anthropologist from another world who wanted to know whether the sentence is true would set about investigating a set of sociological facts; from the anthropologist's perspective, facts about the monetary value of gold are mind-dependent “only in the sense that they supervene directly on facts about our minds … [but] this has no tendency to undermine their objectivity” (302).
It is, of course, a truism that whenever one can talk of something's being invented, talk of discovery comes along for free, for it is always possible for someone else to make discoveries about any act of invention. (One could discover that the pavlova dessert was invented in Australia rather than New Zealand.) But it would be a mistake to allow this platitude to lead us to doubt that a distinction can be upheld, for although invention-talk entails the possibility of discovery-talk, the reverse is not true. (The chemical constitution of Jupiter is something we discover and is in no sense invented by anyone.) One might, therefore, still contrast non-objective facts—for which the imageries of both invention and discovery are available—with objective facts—for which only the imagery of discovery is appropriate. It might be complained that this distinction is somehow metaphysically uninteresting, but, even if this is true, it might nevertheless be a distinction that (for whatever reason) metaethicists choose to employ as a guiding piece of basic taxonomy. And indeed it would seem that by and large they do. The conviction that there is a distinction between objectivist and non-objectivist accounts of moral facts motivates a great deal of metaethical debate.
One popular way of clarifying the mind-dependence relation is to see certain properties and/or concepts as response-dependent. (In the interests of brevity and of bringing some varying theories into conformity, in what follows I reluctantly fudge the distinction between whether the issue concerns concepts or properties.) Roderick Firth's version of ideal observer theory (1952) is a good example of such a theory, but in more recent times the idea has been discussed at length by Mark Johnston (1989, 1991, 1992, 1993), David Lewis (2000), Crispin Wright (1988b), Philip Pettit (1991), and Ralph Wedgwood (1997). There are different formulations, but Johnston's can be considered canonical.
For Johnston, a property is response-dependent if it can be “adequately represented only by concepts whose conditions of application essentially involve conditions of human response” (Johnston 1991: 143). Response-dependent concepts are understood as follows:
Concept F = the concept of the disposition to produce R in S under C
where R is some response that essentially involves mental activity, S is some subject or group of subjects, and C are some specified conditions under which R is produced in S. (Further, it is stipulated that this identity may not hold trivially in virtue of R or S or C being given a “whatever it takes” specification (e.g., the concept cat = the concept of the disposition to produce the response “It's a cat” in perfect cat-spotters in optimal cat-spotting conditions).)
Johnston denies that our moral concepts are in fact response-dependent. He thinks that we should adopt an error theory about our actual defective response-independent moral discourse. However, he believes that we have available to us an array of response-dependent “surrogate” moral concepts regarding which we may hold a success theory. Echoing Lewis on speaking strictly versus loosely, Johnston claims that “ever so inclusively speaking” the moral error theorist is correct; but “more or less inclusively speaking” moral values exist. (These phrases are from his 1992 paper concerning color, but he makes it quite clear in his 1989 and 1993 articles that the same pattern is supposed to hold for moral value.) I will not discuss the details of Johnston's version here, except to note the general point that the acceptability of the surrogates must depend on how “close” they are to the original response-independent concepts. One may grant that nothing satisfies all of our desiderata regarding moral concepts, but the question remains whether any response-dependent concepts will satisfy enough of those desiderata to count as worthy and practicable surrogates.
Response-dependent concepts may or may not be relativistic: “S” may be replaced by an indexical (e.g., “us”) or by a non-indexical referring term (e.g., “Julius Caesar”). An example of a relativistic response-dependent moral theory is Jesse Prinz's (2007), while an example of a non-relativistic response-dependent moral theory is Firth's (1952) ideal observer theory. Here I will focus on the latter. Put in Johnston's terms, Firth's analysis of moral goodness is as follows:
The concept moral goodness = the concept of the disposition to produce approval in the ideal observer (in adequate viewing conditions)
The ideal observer is defined as having the following characteristics: He is omniscient with respect to the non-ethical facts, omnipercipient, disinterested, dispassionate, consistent, and in all other respects normal. See Firth 1952 for discussion of these qualities; see also Brandt 1954 and Firth 1954. (Note that Firth doesn't actually mention “viewing conditions,” since all the necessary properties are attributed to the observer himself. Often it doesn't make any difference whether a quality is predicated of the subject or the viewing conditions—e.g., the two descriptions “the approval felt by a fully-informed agent” and “the approval felt by an agent in circumstances that provide him with full information” are co-referential. I have here harmlessly included the parenthetical reference to “adequate viewing conditions” just to bring Firth's analysis more explicitly into line with Johnston's format.)
Not only is Firth's analysis non-relativistic (since it contains no ineliminable indexical element), but it is also, he declares, objectivist. He claims this on the grounds that it construes ethical statements in such a way that it is not the case “that they would all be false by definition if there existed no experiencing subjects (past, present, or future)” (1952: 322). In other words, Firth draws the objectivist/non-objectivist line according to an existential dependence that holds “by definition.” This claim to a certain kind of objectivity is a feature of all response-dependence theories. Response-dependent properties do not depend for their instantiation on the existence of a single conscious entity in the whole universe; what they depend upon is the presence of a disposition. Just as a vase may remain fragile in virtue of having a disposition to break (in C) even if it never has been, and never will be, broken, so too the disposition to produce R in S in C may be instantiated even if no token of R ever occurs (past, present or future), no token of S ever exists (past, present or future), and no token of C ever obtains (past, present or future). Thus Firth's theory at no point implies that any character with the idealized qualities exists.
Advocates of response-dependent theories for moral properties/concepts are eager to make much of this claim to objectivity. Pettit (1991) sets out to reassure realists that embracing response-dependency will upset few of their traditional desiderata, while Johnston claims that response-dependency promises to be a good candidate “for an appropriately qualified realism” (1993: 106). Nevertheless, although analyzing morality in a response-dependent manner without doubt makes morality existentially mind-independent, it with equal certainty renders it conceptually mind-dependent. And this may be enough to leave those with realist leanings uneasy. Although it may be true that the non-objectivist has traditionally expressed her commitments by reference to an existential relation, this may simply be due to the paucity of well-formed alternatives having been articulated in that tradition. Once conceptual mind-dependence is elucidated, the realist may find herself equally opposed. After all, in a sense all that has been altered is a modal variable: Instead of
X is good iff the ideal observer approves of X,
we have
X is good iff the ideal observer would approve of X.
If one's opposition to the former was based on an intuitive hostility to the mind-dependence relation it embodies, it seems unlikely that the tweaking of that relation in the manner of the latter will make one less inclined to balk.
Firth's and Johnston's versions of a response-dependent morality may be categorized as non-normative, in contrast with a rather different way of understanding the response-dependent relation. Normative response-dependent theories of morality (also known as “fitting attitude accounts”) claim something like the following:
The concept moral goodness = the concept of something's warranting R in S under C
The key change is the presence of the normative notion of warranting (or meriting or justifying or some such similar notion). The principal challenge for such theories is to explicate this normative notion in a non-circular way that does not undermine the need for a response-dependent theory in the first place. Normative response-dependent theories are advocated by McDowell 1985, Wiggins 1987, and McNaughton 1988.
Critics of response-dependent theories of morality include Wright 1988b; Blackburn 1993b, 1998, ch. 4; Cuneo 2001; Zangwill 2003; Koons 2003; LeBar 2005; Miller 2013, ch. 7. See also papers in Casati and Tappolet 1998.
A quite different way of drawing the objective/non-objective distinction comes from Crispin Wright (1992). He discriminates between phenomena that play a wide cosmological role and those that play only a narrow role. A subject matter has wide cosmological role if the kinds of things with which it deals figure in a variety of explanatory contexts—specifically, if they explain things other than (or other than via) our judgments concerning them. So, for example, the rectangular shape of my door can explain many things: my tendency to think “The door is rectangular,” the shape of the shadow it casts, the absence of drafts in the office when the door is shut, etc. By comparison, something with narrow cosmological role fails to figure in explanations except concerning our judgments. Perhaps funniness is such a property. It need not be denied that there are facts about which things have this property, but it is hard to imagine that the funniness of something can explain the occurrence of any other phenomenon in the world without our tendencies to think it funny playing an intermediary role.
Wright doubts that moral facts have wide cosmological role, and thus—in this respect at least—comes down on the side of the moral anti-realist (1992: 197-8). It must be noted, however, that Wright's broader project is to establish a certain complex pluralism regarding realism and objectivity, and thus he allows that there are other equally valid objective/non-objective partitions (which won't be discussed here) that may tilt matters back in the moral realist's favor. (For another pluralistic approach to realism, see Pettit 1991.)
There are other ways one might try to cash out the objective/non-objective distinction in a manner that makes interesting sense of the traditional realist/anti-realist division in the moral realm. One might, for example, understand moral objectivity using the template provided by Michael Dummett (1978 and 1993): Atomic moral sentences are such that, though we think of them as determinately true or false, we nevertheless know of no method that represents either a proof or a disproof of such sentences; they are potentially “recognition transcendent.” The robust moral realist, accordingly, would think that a sentence like “Stealing is morally wrong” is either true or false, but that its truth value potentially outruns any means we know of for ascertaining it. There are several objections to this way of understanding realism (see Devitt 1991), but perhaps the most salient in the present context is that many philosophers who think of themselves as robust moral realists—and, indeed, are categorized so by such a consensus of their fellows that they must be considered almost canonical examples of the view—would reject Dummett's semantic construal.
One important thing to notice about these ways of drawing the objective/non-objective distinction is that they promise to defuse Sayre-McCord's contention that “mind-dependence” has no place as a criterion of anti-realism since it would make psychological realism a non-starter (see section 1). Suppose what is under contention is a mental state like pain. Consider, first, non-objectivism as response-dependence. Perhaps a response-dependent account of pain could be advocated, but it certainly doesn't seem mandatory (to say the least). Even if it were true that for any x, x is in pain if and only if x believes/judges/etc. herself to be in pain, it would not follow that the concept pain is response-dependent. Therefore, under these terms the debate between the pain objectivist and the pain non-objectivist (and, more broadly, the psychological realist and the psychological anti-realist) can be a substantive one. Consider, second, non-objectivism as narrow cosmological role. It suffices here to note that pain may or may not have wide cosmological role—the question requires delicate discussion—therefore, again, psychological anti-realism is by no means trivially excluded just in virtue of the subject matter in question concerning a psychological phenomenon. Consider, last, non-objectivism in Dummettian terms. It is not trivially false that sentences of the form “X is in pain” are determinately true or false but potentially “recognition transcendent,” therefore it is not trivially the case that such sentences fail Dummett's test of objectivity, therefore the non-objectivism clause, so understood, does not render psychological realism a “non-starter,” as Sayre-McCord fears.
So many debates in philosophy revolve around the issue of objectivity versus non-objectivity that one may be forgiven for assuming that someone somewhere understands this distinction. There certainly exists a widespread intuitive imagery associated with the duality that is sufficiently vivid to motivate heartfelt philosophical commitments, but, once approached directly, the distinction nevertheless proves extremely difficult to nail down. It is likely that part of what is causing confusion is that there are a number of non-equivalent ways of drawing the distinction, some of which are better suited to certain subject areas than others. Expecting a monolithic theory that applies to all cases is probably an unreasonable aspiration. Perhaps, in the end, Rosen's pessimism will be borne out, in which case we will face a choice about how to confront the realism/anti-realism debate: Either we can go Sayre-McCord's route—dropping the muddled non-objectivism clause from our understanding of anti-realism (thus insisting that minimal realism is the only realism there is)—or we can accept that the weight of tradition makes non-objectivism an essential component of anti-realism, thus acknowledging that the realism/anti-realism debate is itself muddled (thus, presumably, adopting Rosen's quietist attitude). But either conclusion, at present, seems premature, since there is enough interesting work on the topic underway to provide hope that sensible and viable versions of the objectivism/non-objectivism distinction may yet be drawn up.
One further comment that should be made is to voice the suspicion that much of the knee-jerk opposition to non-objectivism is based on an impoverished understanding of the kind of resources available to a sophisticated non-objectivist. It is often assumed that “moral non-objectivism” must denote a kind of lumpish relativism according to which whatever sentiments an individual happens to have determine the moral truth for that person; it is often assumed that moral non-objectivism would therefore render incoherent the ideas of moral improvement, moral criticism, and moral disagreement. It is feared that such a stance would force upon us a kind of tolerance of all manner of undesirable behaviors, from acts of rudeness to Nazi atrocities.
There is much that is confused in such apprehensions. Moral non-objectivism is not the view that the wrongness of genocide, say, is just a matter of opinion (in the way that preferring chocolate ice cream over vanilla is a matter of opinion), and an undue focus on that sort of silly subjectivism—whether explicitly or tacitly and unthinkingly—has injected a fair degree of straw-mannishness into proceedings. Moral non-objectivism need not be relativistic (see the supplementary document Moral Objectivity and Moral Relativism), and even when it is so, it need not be tied to the whims of the individual. There are sophisticated versions of moral relativism that make sense of moral improvement, moral criticism, and moral disagreement (see Harman 1975, 1996; Wong 2006; Prinz 2007). Furthermore (as has been noted on numerous occasions), there is no obvious route from relativism—no matter how rampant—to an attitude of tolerance. If relativism is true, then the value of tolerance is no more absolute than any other. Consider a kind of tolerance we think desirable: say, allowing other adults to decide what clothes they will wear. If I happen to find myself with sentiments in favor of this kind of tolerance, then, according to naive individualistic moral relativism, it is true (relative to me) that choosing one's own clothes is permissible. Were I, however, to find myself with vehemently intolerant attitudes toward other people's clothing autonomy, an individualistic moral relativism would be no less supportive of my values.
(Indeed, if someone were to say to me “You should be more tolerant of people's choice of dress; don't you know that moral relativism is true?”, relativism would provide me with the resources to counter “But my perspective happens to be an intolerant one, and there is no perspective-transcendent viewpoint from which this point of view may be legitimately criticized.”) Alternatively, consider a kind of tolerance we think undesirable: say, that of feeling no compulsion to take action against Nazi genocide. If I happen to find myself with sentiments opposed to this kind of tolerance—if I think that Nazi savagery is a crime that must be prevented by extreme intervention—then, according to naive individualistic moral relativism, it is true (relative to me) that an indifferent attitude toward Nazism is unacceptable. Were I, however, to find myself unresponsive when confronted with Nazi genocidal programs—were I, indeed, to find myself with sympathetic leanings—an individualistic moral relativism would be no less supportive of my values. In short, whether we are drawn to relativism in the hope that it will encourage desirable kinds of tolerance, or we are repelled by relativism for fear that it will promote undesirable kinds of tolerance, both the hope and the fear are misplaced.
This entry has not attempted to adjudicate the rich and noisy debate between the moral realist and moral anti-realist, but rather has attempted to clarify just what their debate is about. But even this much more modest task is doomed to lead to unsatisfactory results, for there is much confusion—perhaps a hopeless confusion—about how the terms of the debate should be drawn up. It is entirely possible that when subjected to acute critical examination, the traditional dialectic between the moral realist and the moral anti-realist will crumble into a bunch of evocative metaphors from which well-formed philosophical theses cannot be extracted. If this is true, it would not follow that metaethics is bankrupt; far from it—it may be more accurate to think that modern metaethics has prospered to such an extent that the old terms no longer fit its sophisticated landscape.
But for the present, at least, the terms “moral realist” and “moral anti-realist” seem firmly entrenched. With so much ill-defined, however, it would seem close to pointless to conduct metaethical debate under these terms. If someone tells us that she is a moral cognitivist then we comprehend, roughly, what she means. If someone presents an argument designed to support a moral error theory then we know what to expect. If someone articulates an objection to the ideal observer theory then we understand what we are dealing with. But if someone purports to be a moral anti-realist, or a moral realist, then although we can immediately exclude certain possibilities, a great deal of indeterminacy remains. This latitude means that the terms “moral realist” and “moral anti-realist” are free to be bandied with rhetorical force—more as badges of honor or terms of abuse (as the case may be) than as useful descriptive labels. Rather like tiresome arguments over whether some avant-garde gallery installation does or does not count as “art,” taxonomic bickering over whether a given philosopher is or is not a “moral realist” is an activity as unsightly as it is fruitless.
Just as important as gaining a clear and distinct understanding of these labels is gaining an appreciation of what of real consequence turns on the debate. This seems particularly pressing here because a natural suspicion is that much of the opposition to moral anti-realism develops from a nebulous but nagging practical concern about what might happen—to individuals, to the community, to social order—if moral anti-realism, in one guise or another, were widely adopted. The embrace of moral anti-realism, it is assumed, will have a pernicious influence. This concern presupposes that most of the folk are already pretheoretically inclined toward moral realism—an assumption that was queried in the supplementary document Moral Anti-realism vs. Realism: Intuitions. But even if it is true that most people are naive moral realists, the question of what would happen if they ceased to be so is an empirical matter, concerning which neither optimism nor pessimism seems prima facie more warranted than the other. As with the opposition to moral non-objectivism, the opposition to moral anti-realism is frequently based on an under-estimation of the resources available to the anti-realist—on an unexamined assumption that the silliest, crudest, and/or most insidious version will stand as a good representative of a whole range of extremely varied and often sophisticated theories.
- Anscombe, G.E.M., 1958. “Modern moral philosophy,” Philosophy, 33: 1–19.
- Asay, J., 2013. “Truthmaking, metaethics, and creeping minimalism,” Philosophical Studies, 163: 213–232.
- Austin, J.L., 1962. How to do Things with Words, Cambridge, MA: Harvard University Press.
- Ayer, A.J., 1971. Language, Truth and Logic, Harmondsworth: Penguin.
- –––, 1984. Freedom and Morality and Other Essays, Oxford: Clarendon.
- Baeten, E., 2012. “Another defense of naturalized ethics,” Metaphilosophy, 43: 533–550.
- Bagnoli, C., 2002. “Moral constructivism: A phenomenological argument,” Topoi, 21: 125–138.
- Bedke, M., 2010. “Might all normativity be queer?” Australasian Journal of Philosophy, 88: 41–58.
- Blackburn, S., 1984. Spreading the Word, Oxford: Clarendon.
- –––, 1993a. Essays in Quasi-Realism, Oxford: Oxford University Press.
- –––, 1993b. “Circles, finks, smells and biconditionals,” Philosophical Perspectives, 7: 259–279.
- –––, 1995. “The flight to reality,” in R. Hursthouse et al. (eds.), Virtues and Reasons, Oxford: Clarendon Press: 35–56.
- –––, 1996. “Commentary on Ronald Dworkin's ‘Objectivity and truth: You'd better believe it,’” Brown Electronic Article Review Service in Moral and Political Philosophy, Volume 2 [available online].
- –––, 1998. Ruling Passions, Oxford: Oxford University Press.
- –––, 2005. “Quasi-realism no fictionalism,” in M. Kalderon (ed.), Fictionalism in Metaphysics, Oxford: Clarendon Press: 322–338.
- Blackmore, S., 2013. “Living without free will,” in G. Caruso (ed.), Exploring the Illusion of Free Will and Moral Responsibility, Lanham, MD: Lexington Books: 161–176.
- Boyd, R., 1988. “How to be a moral realist,” in G. Sayre-McCord (ed.), Essays in Moral Realism, Ithaca: Cornell University Press: 181–228.
- Brandt, R., 1954. “The definition of an ‘ideal observer’ in ethics,” Philosophy and Phenomenological Research, 15: 407–13.
- Brink, D., 1984. “Moral realism and the skeptical arguments from disagreement and queerness,” Australasian Journal of Philosophy, 62: 111–125.
- –––, 1989. Moral Realism and the Foundations of Ethics, Cambridge: Cambridge University Press.
John Stuart Mill: Ethics
The ethical theory of John Stuart Mill (1806-1873) is most extensively articulated in his classical text Utilitarianism (1861). Its goal is to justify the utilitarian principle as the foundation of morals. This principle says actions are right in proportion as they tend to promote overall human happiness. So, Mill focuses on the consequences of actions, not on rights or ethical sentiments.
This article primarily examines the central ideas of his text Utilitarianism, but the article's last two sections are devoted to Mill's views on the freedom of the will and the justification of punishment, which are found in System of Logic (1843) and Examination of Sir William Hamilton’s Philosophy (1865), respectively.
Educated by his father James Mill who was a close friend to Jeremy Bentham, John Stuart Mill came in contact with utilitarian thought at a very early stage of his life. In his Autobiography he claims to have introduced the word “utilitarian” into the English language when he was sixteen. Mill remained a utilitarian throughout his life. Beginning in the 1830s he became increasingly critical of what he calls Bentham’s “theory of human nature”. The two articles “Remarks on Bentham’s Philosophy” (1833) and “Bentham” (1838) are his first important contributions to the development of utilitarian thought. Mill rejects Bentham’s view that humans are unrelentingly driven by narrow self-interest. He believed that a “desire of perfection” and sympathy for fellow human beings belong to human nature. One of the central tenets of Mill’s political outlook is that, not only the rules of society, but also people themselves are capable of improvement.
Table of Contents
- Introductory Remarks
- Mill’s Theory of Value and the Principle of Utility
- Morality as a System of Social Rules
- The Role of Moral Rules (Secondary Principles)
- Rule or Act Utilitarianism?
- Applying the Standard of Morality
- The Meaning of the First Formula
- Right in Proportion and Tendencies
- Utility and Justice
- The Proof of Utilitarianism
- Evaluating Consequences
- Freedom of Will
- Responsibility and Punishment
- References and Further Readings
  - Primary Sources
  - Secondary Sources
1. Introductory Remarks
Mill tells us in his Autobiography that the “little work with the name” Utilitarianism arose from unpublished material, the greater part of which he completed in the final years of his marriage to Harriet Taylor, that is, before 1858. For its publication he brought old manuscripts into form and added some new material.
The work first appeared in 1861 as a series of three articles for Fraser’s Magazine, a journal that, though directed at an educated audience, was by no means a philosophical organ. Mill planned from the beginning a separate book publication, which came to light in 1863. Even if the circumstances of the genesis of this work suggest an occasional piece with a popular goal, on closer examination Utilitarianism turns out to be a carefully conceived work, rich in thought. One must not forget that since his first reading of Bentham in the winter of 1821-22, the time to which Mill dates his conversion to utilitarianism, forty years had passed. Taken this way, Utilitarianism was anything but a philosophical accessory, and instead the programmatic text of a thinker who for decades had understood himself as a utilitarian and who was profoundly familiar with popular objections to the principle of utility in moral theory. Almost ten years earlier (1852) Mill had defended utilitarianism against the intuitionistic philosopher William Whewell (“Whewell on Moral Philosophy”).
The primary aim of the text was to popularize the fundamental thoughts of utilitarianism within influential circles. This goal explains the composition of the work. After some general introductory comments, the text defends utilitarianism from common criticisms (“What Utilitarianism Is”). After this Mill turns to the question concerning moral motivation (“Of the Ultimate Sanction of the Principle of Utility”). This is followed by the notorious proof of the principle of utility (“Of What Sort of Proof the Principle of Utility is Susceptible”) and the long concluding chapter on the relation of utility and justice (“On the Connection Between Justice and Utility”). The last chapter is often neglected – and wrongly so, for it contains a central statement of Mill’s understanding of morals; it creates the foundation for the philosopher’s theory of moral rights that plays a preeminent role in the context of his political thought.
According to his early essay “Bentham” (1838), all reasonable moral theories assume that “the morality of actions depends on the consequences which they tend to produce” (CW 10, 111); thus, the difference between moral theories lies on an axiological plane. His own theory of morality, writes Mill in Utilitarianism, is grounded in a particular “theory of life…–namely, that pleasure, and freedom from pain, are the only things desirable as ends.” (CW 10, 210) Such a theory of life is commonly called hedonistic, and it seems appropriate to say that Mill conceives his own position as hedonistic, even if he never uses the word “hedonism” or its cognates. What makes utilitarianism peculiar, according to Mill, is its hedonistic theory of the good (CW 10, 111). Utilitarians are, by definition, hedonists. For this reason, Mill sees no need to differentiate between the utilitarian and the hedonistic aspect of his moral theory.
Modern readers are often confused by the way in which Mill uses the term ‘utilitarianism’. Today we routinely differentiate between hedonism as a theory of the good and utilitarianism as a consequentialist theory of the right. Mill, however, considered both doctrines to be so closely intertwined that he used the term ‘utilitarianism’ to signify both theories. On the one hand, he says that the “utilitarian doctrine is, that happiness is desirable, and the only thing desirable, as an end.” (CW 10, 234) On the other hand, he defines utilitarianism as a moral theory according to which “actions are right in proportion as they tend to promote happiness…” (CW 10, 210).
Utilitarians are, for him, consequentialists who believe that pleasure is the only intrinsic value.
Mill counts as one of the great classics of utilitarian thought; but his moral theory deviates from what many contemporary philosophers consider core features of utilitarianism. This explains why the question whether Mill is a utilitarian is more serious than it may appear on first inspection (see Coope 1998). One may respond that this problem results from an anachronistic understanding of utilitarianism, and that it disappears if one abstains from imputing modern philosophical concepts onto a philosopher of the nineteenth century. However, this response would oversimplify matters. For it is not clear whether Mill’s value theory was indeed hedonistic (see Brink 1992). As mentioned before, Mill maintains that hedonism is the differentia specifica of utilitarianism; if he were not a hedonist, he would be no utilitarian by his own definition. In view of the fact that Mill’s value theory constitutes the center of his ethics (Donner 1991, 2009), the problem of determining its precise nature and adequate naming has attracted considerable attention over the last 150 years.
2. Mill’s Theory of Value and the Principle of Utility
Mill defines “utilitarianism” as the creed that considers a particular “theory of life” as the “foundation of morals” (CW 10, 210). His theory of life was monistic: There is one thing, and one thing only, that is intrinsically desirable, namely pleasure. In contrast to a form of hedonism that conceives pleasure as a homogeneous matter, Mill was convinced that some types of pleasure are more valuable than others in virtue of their inherent qualities. For this reason, his position is often called “qualitative hedonism”. Many philosophers hold that qualitative hedonism is not a consistent position. Hedonism asserts that pleasure is the only intrinsic value. Under this assumption, the critics argue, there can be no evaluative basis for the distinction between higher and lower pleasures. Probably the first to raise this common objection were the British idealists F. H. Bradley (1876/1988) and T. H. Green (1883/2003).
Which inherent qualities make one kind of pleasure better than another, according to Mill? He declares that the more valuable pleasures are those which employ “higher faculties” (CW 10, 211). The list of such better enjoyments includes “the pleasures of intellect, of the feelings and imagination, and of the moral sentiments” (CW 10, 211). These enjoyments make use of highly developed capacities, like judgment and empathy. In one of his most famous sentences, Mill affirms that it “is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied” (CW 10, 212). This seems to be a surprising thing to say for a hedonist. However, Mill thought that we have a solid empirical basis for this view. According to him, the best obtainable evidence for value claims consists in what all or almost all people judge as valuable across a vast variety of cases and cultures. He makes the empirical assertion that all or almost all people prefer a “manner of existence” (CW 10, 211) that employs higher faculties to a manner of existence which does not. The fact that “all or almost all” who are acquainted with pleasures that employ higher faculties agree that they are preferable to the lower ones is empirical evidence for the claim that they are indeed of higher value. Accordingly, the best human life (“manner of existence”) is one in which the higher faculties play an adequate part. This partly explains why he put such great emphasis on education.
3. Morality as a System of Social Rules
The fifth and final chapter of Utilitarianism is of unusual importance for Mill’s theory of moral obligation. Until the 1970s, the significance of the chapter had been largely overlooked. It then became one of the bridgeheads of a revisionist interpretation of Mill, which is associated with the work of David Lyons, John Skorupski and others.
Mill worked very hard to hammer the fifth chapter into shape, and its success meant a great deal to him. Towards the end of the book he maintains that the “considerations which have now been adduced resolve, I conceive, the only real difficulty in the utilitarian theory of morals” (CW 10, 259).
At the beginning of Utilitarianism, Mill postulates that moral judgments presume rules (CW 10, 206). In contrast to Kant, who grounds his ethical theory on self-imposed rules, so-called maxims, Mill thinks that morality builds on social rules. But what makes social rules moral rules? Mill’s answer is based on a thesis about how competent speakers use the phrases “morally right” and “morally wrong”. He maintains that we call a type of action morally wrong if we think that it should be sanctioned, either through formal punishment or public disapproval (external sanctions) or through a bad conscience (internal sanctions). This is the critical difference between “morality and simple expediency” (CW 10, 246). Merely inexpedient actions are those that we cannot recommend to a person, like harming oneself; but in contrast to immoral actions, they are not worthy of being sanctioned.
Mill differentiates various spheres of action. In his System of Logic he names morality, prudence and aesthetics as the three departments of the “Art of Life” (CW 8, 949). The principle of utility governs not only morality, but also prudence and taste (CW 8, 951). It is not a moral principle but a meta-principle of practical reason (Skorupski 1989, 310-313).
There is a field of action in which moral rules obtain, and a “person may rightfully be compelled to fulfill” them (CW 10, 246). But there are also fields of action, in which sanctions for wrong behavior would be inappropriate. One of them is the sphere of self-regarding acts with which Mill deals in On Liberty. In this private sphere we can act at our convenience and indulge in inexpedient and utterly useless behavior as long as we do not harm others.
It is fundamental to keep in mind that Mill regards morality as a social practice and not, like Kant, as autonomous self-determination by reason. For Kantians, moral deliberation determines those actions which we have the most reason to perform. Mill disagrees; for him, it makes sense to say that “A is the right thing to do for Jeremy, but Jeremy is not morally obliged to do A.” For instance, even if Jeremy is capable of writing a brilliant book that would improve the lives of millions (and worsen no one’s), he is not morally obliged to do so. According to Mill, our moral obligations result from the justified part of the moral code of our society; and the task of moral philosophy consists in bringing the moral code of a society into better accordance with the principle of utility.
4. The Role of Moral Rules (Secondary Principles)
In Utilitarianism, Mill designs the following model of moral deliberation. In the first step the agent should examine which of the rules (secondary principles) in the moral code of his or her society are pertinent in the given situation. If in a given situation moral rules (secondary principles) conflict, then (and only then) can the second step invoke the formula of utility (CW 10, 226) as a first principle. Pointedly one could say: the principle of utility is for Mill not a component of morality, but its basis. It serves to validate our moral system and allows – as a meta-rule – the adjudication of conflicting norms. In the introductory chapter of Utilitarianism, Mill maintains that it would be “easy to show that whatever steadiness or consistency these moral beliefs have attained, has been mainly due to the tacit influence of a standard not recognized” (CW 10, 207), namely the principle of utility. The tacit influence of the principle of utility has made sure that a considerable part of the moral code of our society is justified (promotes general well-being). But other parts are clearly unjustified. One case that worried Mill deeply was the role of women in Victorian Britain. In “The Subjection of Women” (1869) he criticizes the “legal subordination of one sex to the other” (CW 21, 261) as incompatible with “all the principles involved in modern society” (CW 21, 280).
Moral rules are also critical for Mill because he takes human action to be, in essence, guided by dispositions. A virtuous person has the disposition to follow moral rules. In his early essay “Remarks on Bentham’s Philosophy” (1833) he asserts that a “man is not really virtuous” (CW 10, 12) unless the mere thought of committing certain acts is so painful that he does not even consider the possibility that they may have good consequences. He repeats this point in his System of Logic (1843) and in Utilitarianism:
[T]he mind is not in a right state, not in a state conformable to Utility, not in the state most conducive to the general happiness, unless it does love virtue in this manner - as a thing desirable in itself, even although, in the individual instance, it should not produce those other desirable consequences which it tends to produce, and on account of which it is held to be virtue. (CW 10, 235 and 8, 952).
It is one thing to say that it could have optimal consequences (and thus be objectively better) to break a moral rule in a concrete singular case. It is another to ask whether it would facilitate happiness to educate humans such that they have the disposition to maximize situational utility. Mill answers the latter question in the negative. Again, the upshot is that education matters. Humans are guided by acquired dispositions. This makes moral degeneration, but also moral progress, possible.
5. Rule or Act Utilitarianism?
There is considerable disagreement as to whether Mill should be read as a rule utilitarian or an (indirect) act utilitarian. Many philosophers look upon rule utilitarianism as an untenable position and favor an act utilitarian reading of Mill (Crisp 1997). Under the pressure of many contradicting passages, however, a straightforward act utilitarian interpretation is difficult to sustain. Recent studies emphasize Mill’s rule utilitarian leanings (Miller 2010, 2011) or find elements of both theories in Mill (West 2004).
In Utilitarianism he seems to give two different formulations of the utilitarian standard. The first points in an act utilitarian, the second in a rule utilitarian direction. Since act and rule utilitarianism are incompatible claims about what makes actions morally right, the formulations open up the fundamental question concerning what style of utilitarianism Mill wants to advocate and whether his moral theory forms a consistent whole. It is important to note that the distinction between rule and act utilitarianism had not yet been introduced in Mill’s days. Thus Mill is not to blame for failing to make explicit which of the two approaches he advocates.
In the first and more famous formulation of the utilitarian standard (First Formula) Mill states:
The creed which accepts as the foundation of morals, Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure, and the absence of pain; by unhappiness, pain, and the privation of pleasure. To give a clear view of the moral standard set up by the theory, much more requires to be said (…). But these supplementary explanations do not affect the theory of life on which this theory of morality is grounded…. (CW 10, 210, emphasis mine)
Just a few pages later, following his presentation of qualitative hedonism, Mill gives his second formulation (Second Formula):
According to the Greatest Happiness Principle (…) the ultimate end (…) is an existence exempt as far as possible from pain, and as rich as possible in enjoyments, both in point of quantity and quality; (…). This, being, according to the utilitarian opinion, the end of human action, is necessarily also the standard of morality; which may accordingly be defined, the rules and precepts for human conduct, by the observance of which an existence such as has been described might be, to the greatest extent possible, secured to all mankind; and not to them only, but, so far as the nature of things admits, to the whole sentient creation. (CW 10, 214, emphasis mine)
The Second Formula relates the principle of utility to rules and precepts, not to actions. It seems to say that an act is right when it corresponds to rules whose observance increases the mass of happiness in the world. And this appears to be a rule utilitarian conception.
In the light of these passages, it is not surprising that the question whether Mill is an act- or a rule-utilitarian has been intensely debated. In order to understand his position it is important to differentiate between two ways of defining act and rule utilitarianism. (i) One can conceive of them as competing theories about objective rightness. An action is objectively right if it is the thing which the agent has most reason to do. Act utilitarianism would say that an action is objectively right, if it actually promotes happiness. For rule utilitarianism, in contrast, an action would be objectively right, if it actually corresponds to rules that promote happiness.
(ii) One can also conceive of act- and rule utilitarianism as theories about moral obligation. Act utilitarianism requires us to aim for the maximization of happiness; rule utilitarianism, in contrast, requires us to observe rules that facilitate happiness. Understood as a theory about moral obligation, act utilitarianism postulates: Act in a way that promotes happiness the most. Rule utilitarianism claims, on the other hand: Follow a rule whose general observance promotes happiness the most.
With regard to (i), Mill is an act utilitarian; with regard to (ii), he is a rule utilitarian. In this way the seeming contradiction between the First and the Second Formula can be resolved. The First Formula states what is right and what an agent has most reason to do. It points to the “foundation of morals”. The Second Formula, in contrast, tells us what our moral obligations are: we are morally obliged to follow those social rules and precepts whose observance promotes happiness to the greatest extent possible.
6. Applying the Standard of Morality
In “Whewell on Moral Philosophy” (1852), Mill rejects an objection raised by one of his most competent philosophical adversaries. Whewell claimed that utilitarianism permits murder and other crimes in particular circumstances and is therefore incompatible with our considered moral judgments. Mill’s discussion of Whewell’s criticism is exceedingly helpful in clarifying his ethical approach:
Take, for example, the case of murder. There are many persons to kill whom would be to remove men who are a cause of no good to any human being, of cruel physical and moral suffering to several, and whose whole influence tends to increase the mass of unhappiness and vice. Were such a man to be assassinated, the balance of traceable consequences would be greatly in favour of the act. (CW 10, 181)
Mill gives no concrete case. Since he wrote – together with his wife Harriet Taylor – a couple of articles on horrible cases of domestic violence in the early 1850s, he might have had the likes of Robert Curtis Bird in mind, a man who tortured his servant Mary Ann Parsons to death [see CW 25 (The Case of Mary Ann Parsons), 1151-1153]. Does utilitarianism require us to kill such people, who are the “cause of no good to any human being, of cruel physical and moral suffering to several”? Mill answers in the negative. His main point is that nobody’s life would be safe if people were allowed to kill others whom they believe to be a source of unhappiness (CW 10, 182). Thus, a general rule that allowed one to “remove men who are a cause of no good” would be worse than a general rule that prohibits such acts. People should follow the rule not to kill other humans because the general observance of this rule tends to promote the happiness of all.
This argument can be interpreted in a rule utilitarian or an indirect act utilitarian fashion. Along indirect act utilitarian lines, one could maintain that we would be cognitively overwhelmed by the task of calculating the consequences of every action. We therefore need rules as touchstones that point us to the path of action which tends to promote the greatest general happiness. In a central passage, Mill compares the core principles of our established morality (which he also calls “secondary principles”) with the Nautical Almanack, a companion for navigating a voyage (CW 10, 225). Just as the Nautical Almanack is not first calculated at sea, but exists as already calculated, the agent need not calculate expected utility in each individual case. In his moral deliberation the agent can appeal to secondary principles, such as the prohibition of homicide, as an approximate solution to the problem at hand.
At first glance, the act utilitarian interpretation seems to find further support in a letter Mill wrote to John Venn in 1872. He states:
I agree with you that the right way of testing actions by their consequences, is to test them by their natural consequences of the particular actions, and not by those which would follow if everyone did the same. But, for the most part, considerations of what would happen if everyone did the same, is the only means we have of discovering the tendency of the act in the particular case. (CW 17, 1881)
Mill argues that in many cases we can assess the actual, expected consequences of an action only if we hypothetically assume that all would act in the same manner. We thereby recognize that the consequences of this particular action would be damaging if everyone acted that way. A similar consideration is found in the Whewell essay. Here Mill argues: if a hundred breaches of a rule (homicides, in this case) lead to a particular harm (murderous chaos), then a single breach of the rule is responsible for a hundredth of the harm. This hundredth of harm offsets the expected utility of the particular breach (CW 10, 182). Mill believes that the breach of the rule is wrong because it is actually harmful. The argument is questionable because Mill overturns the presumption he introduced: that the actual consequences of the considered action would be beneficial. If the breach of the rule is actually harmful, then it is to be rejected in every conceivable version of utilitarianism. The result is then trivial and misses the criticism that act utilitarianism has counter-intuitive implications in particular circumstances.
There is one crucial difficulty with the interpretation of Mill as an indirect act utilitarian regarding moral obligation. If the function of rules were in fact only epistemic, as suggested by indirect act utilitarianism, one would expect that the principle of utility – when the epistemic conditions are satisfactory – can and should be directly applied. But Mill is quite explicit here. The utilitarian principle should only be applied when moral rules conflict: “We must remember that only in these cases of conflict between secondary principles is it requisite that first principles should be appealed to.” (CW 10, 226). From an act utilitarian view regarding moral obligation, this is implausible. Why should one be morally obliged to follow a rule of which one positively knows that its observance in a particular case will not promote general utility?
Coming back to the example, it is important to remember that “the balance of traceable consequences would be greatly in favour of the act [of homicide].” (CW 10, 181) Thus, according to an act utilitarian approach regarding moral obligation it would be morally allowed, if not required, to kill the man.
As mentioned, Mill arrives at a different conclusion. His position can best be understood with recourse to the distinction between the theory of objective rightness and the theory of moral obligation introduced in the last section. Seen from the perspective of an all-knowing and impartial observer, it is – in regard to the given description – objectively right to perpetrate the homicide. However, moral laws, permissions, and prohibitions are not made for omniscient and impartial observers, but for cognitively limited and partial beings like humans, whose actions are mainly guided by acquired dispositions. Their capacity to recognize what would be objectively right is imperfect; and their ability to motivate themselves to do the right thing is limited. As quoted before from his “Remarks on Bentham’s Philosophy” (1833), some violations of the established moral code are simply unthinkable for the members of society: people recoil “from the very thought of committing” (CW 10, 12) particular acts. Because humans cannot reliably recognize objective rightness and, in critical cases, cannot bring themselves to act objectively right, they are not obliged to maximize happiness. For ought implies can. In regard to the given description, the fact that the assassination of a human would be objectively right does not imply that the assassination of this human would be morally imperative or allowed. In other words: Mill differentiates between the objectively right act and the morally right act. With this he can argue that the assassination would be forbidden (theory of moral obligation). To enact a forbidden action is morally wrong. Mill’s theory thus allows for the possibility that an action is objectively right, but morally wrong (prohibited); conversely, an action can be objectively wrong (producing unhappiness) and nonetheless morally right (Lyons 1978/1994, 70).
Thus, Mill’s considered position should be interpreted in the following way: First, the objective rightness of an act depends upon actual consequences; second, in order to know what we are morally obliged to do we have to draw on justified rules of the established moral code.
7. The Meaning of the First Formula
What has been said about Mill’s conception of morality as a system of social rules is relevant for the interpretation of Mill’s First Formula of utilitarianism. The Formula says that actions are right “in proportion as they tend to promote happiness” and wrong “as they tend to produce the reverse of happiness” (CW 10, 210). Roughly said, actions are right insofar as they facilitate happiness, and wrong insofar as they result in suffering. Mill does not write “morally right” or “morally wrong”, but simply “right” and “wrong”. This is important. Mill emphasizes in many places that virtuous actions can exhibit a negative balance of happiness in a singular case. If the word “moral” occurred in the First Formula, then the noted virtuous actions would be, for Mill, morally wrong. But as we have seen, this is not his view. Virtuous actions are morally right, even if they are objectively wrong under particular circumstances.
Accordingly, the First Formula is not to be interpreted as drafting a moral duty. It is a general statement about what makes actions right (reasonable, expedient) or wrong. The First Formula gives a general characterization of practical reason. It says that the promotion of happiness makes an action objectively right (but not necessarily morally right); or, as Mill says in his System of Logic, “the promotion of happiness is the ultimate principle of Teleology” (CW 8, 951). An action is objectively right if it maximizes happiness; an action is morally right if it is in accordance with social rules which are protected by internal and external sanctions and which tend to promote general utility. Morally right actions are a subset of objectively right actions; morally wrong actions are a subset of objectively wrong ones.
Mill’s differentiation between a moral and a non-moral sphere of action is not far from our everyday understanding. We generally believe that not all actions must be judged from a moral point of view. This does not prevent us from evaluating actions outside the moral realm in terms of prudence. Less evident is how one should take Mill’s claim that the promotion of happiness can be understood as a general principle of rightness even with respect to artistic production. Many artists would presumably not be comfortable with the thesis that good art arises from the goal of facilitating the happiness of humankind. This, however, is not what Mill means. Apart from cases of conflict between secondary principles, the First Formula does not guide action. Just as Mill speaks in a moral context about how noble characters will not strive to maximize general happiness (CW 8, 952), he could argue in an aesthetic context that artists should work from a purely aesthetic point of view. The rules of artistic judgment, nonetheless, are justified through their contribution to the flourishing of human life.
To summarize the essential points: Mill can be characterized as an act utilitarian in regard to the theory of objective rightness, but as a rule utilitarian in regard to the theory of moral obligation. He defines morality as a system of rules that is protected by sanctions. The principle of utility is not a part of this system, but its fundamental justification (the “foundation of morality” (CW 10, 205)).
8. Right in Proportion and Tendencies
(i) For contemporary readers it is striking that Mill’s First Formula does not explicitly refer to maximization. Mill does not write, as one might expect, that only the action which leads to the best consequences is right. In other places in the text we hear of the “promotion” or “multiplication” of happiness, not of “maximization”. Only the “Greatest Happiness Principle” explicitly refers to maximization. The actual formula, in contrast, deals with gradual differences (right in proportion). Actions which add to the sum of happiness in the world but fail to maximize happiness can thus be right, even if to a lesser degree.
This is confusing insofar as it would be unreasonable to prefer that which is worse to that which is better. For every good there is a better that one should reasonably choose, until one arrives at the best. If the First Formula expresses the ideal of practical reason, then one should expect it to require maximization. Maybe Mill’s point is that the search for a globally best option would exceed the cognitive capacities of humans. He probably does not want to suggest that an agent should not choose the best local option. But the locally best option need not be the objectively (globally) best. This may be the reason why Mill does not refer to maximization in the formula of utility.
(ii) A further complication arises with the word “tend”. According to the formula of utility, actions are more or less right insofar as they facilitate happiness (CW 10, 210). It is doubtlessly not the same to say that an action is right if it actually facilitates happiness, and to say that it is right if it tends to facilitate happiness. The model seems to be roughly this: at the neutral point of the preference scale, actions have the tendency – in regard to the status quo – to neither increase nor decrease the mass of utility in the world. All actions that tend to facilitate happiness are right, all actions that tend to be harmful are wrong, but not all to the same degree. An action has a high positive value on the scale of preference if its tendency to facilitate happiness is high; it has a highly negative value if its tendency to evoke unhappiness is high. But what does the concept “tendency” mean precisely?
In everyday language, we often use the word “tend” in the sense of “will probably lead to”. That an action tends to produce a particular consequence means that this consequence has a high probability. Mill could have wanted to say that an action is right in proportion to the probability with which it promotes happiness. This makes sense when we compare options that produce the same amount of happiness. But what about cases in which two actions produce different amounts of pleasure? One plausible answer is that both dimensions must be regarded: the amount of happiness and the probability of its occurrence. Action A is better than action B if the expected happiness for A is greater than the expected happiness for B. If one reads Mill this way, then “in proportion” relates both to “promote” and to “tend”. The best action is the one that maximizes the amount of expected happiness.
9. Utility and Justice
In the final chapter of Utilitarianism, Mill turns to the sentiment of justice. Actions that are perceived as unjust provoke outrage. The spontaneity of this feeling and its intensity make it impossible for the theory of morals to ignore it. Mill considers two possible interpretations of the source of the sentiment of justice: first, that we are equipped with a sense of justice which is an independent source of moral judgment; second, that there is a general and independent principle of justice. Both interpretations are irreconcilable with Mill’s position, and thus it is no wonder that he takes this issue to be of exceptional importance. He names the integration of justice the only real difficulty for utilitarian theory (CW 10, 259).
Mill splits this problem of integration into three tasks: the first consists in explaining the intensity and spontaneity of the sentiment of justice. The second is to make plausible that the various types of judgments about justice can be traced back to a systematic core; and the third consists in showing that the principle of utility constitutes this core.
In a nutshell, Mill explains the sentiment of justice as the sublimation of the impulse to take revenge for perceived mortifications of all kinds. Mill sees vengeance as “an animal desire” (CW 10, 250) that operates in the service of self-preservation. If it is known that one will not accept interventions in spheres of influence and interest, the probability of such interventions dwindles. The preparedness to take revenge tends to deter aggression in the first place. Thus, a reputation for vindictiveness – at first glance an irrational trait – arguably has survival value. This helps to explain why the sentiment is so widespread and vehement.
Our sentiment of justice, for Mill, is based on a refinement and sublimation of this animal desire. Humans are capable of empathizing such that the pleasure of others can give rise to one’s own pleasure, and the mere sight of suffering can cause one’s own suffering. The hurting of another person, or even of an animal, may therefore produce a very similar affect to the hurting of one’s own person. Mill considers the extension of the animal impulse of vengeance to those with whom we have sympathy as “natural” (CW 10, 248), because the social feelings are for him natural. This natural extension of the impulse of revenge with the help of the social feelings represents a step in the direction of cultivating and refining human motivation. People begin to feel outrage when the interests of the members of their tribe are being violated or when shared social rules are being disregarded.
Gradually, sympathy becomes more inclusive. Humans discover that co-operation with people outside the tribe is advantageous. The “human capacity of enlarged sympathy” follows suit (CW 10, 248).
As soon as humans begin to think about which parts of the moral code of a society are justified and which are not, they inevitably begin to consider consequences. This often occurs in non-systematic, prejudiced or distorted ways. Across historical periods, however, the correct ideas of intrinsic good and moral rightness gradually gain more influence. Judgments about justice progressively approximate the requirements of utilitarianism: the rules upon which the judgments about justice rest will be assessed in light of their tendency to promote happiness. To summarize: our sentiment of justice receives its intensity from the “animal desire to repel or retaliate a hurt or damage to oneself”, and its morality from the “human capacity of enlarged sympathy” and intelligent self-interest (CW 10, 250).
According to Mill, when we see a social practice or a type of action as unjust, we see that the moral rights of persons have been harmed. The thought of moral rights is the systematic core of our judgments of justice. Rights breed perfect obligations, says Mill. Moral rights are concerned with the basic conditions of a good life. They protect an “extraordinarily important and impressive kind of utility” (CW 10, 250-251). Mill subsumes this important and impressive kind of utility under the term security, “the most vital of all interests” (CW 10, 251). It comprises such things as protection from aggression or starvation, the possibility to shape one’s own life unmolested by others, and the enforcement of contracts. Thus, the requirements of justice “stand higher in the scale of social utility” (CW 10, 259). To have a moral right means to have something that society is morally required to guard, whether through the compulsion of law, education or the pressure of public opinion (CW 10, 250). Because everyone has an interest in the security of these conditions, it is desirable that the members of society reciprocally guarantee each other “to join in making safe for us the very groundwork of our existence” (CW 10, 251). Insofar as moral rights secure the basis of our existence, they serve our natural interest in self-preservation – this is the reason why their violation calls forth such intense emotional reactions. The interplay of social feelings and moral education explains, in turn, why we are upset by injustices not only when we personally suffer, but also when the elemental rights of others are harmed. This motivates us to condemn harms done to others as unjust. Moral rights thus form the “most sacred and binding part of all morality” (CW 10, 255). But they do not exhaust the moral realm. There are imperfect obligations which have no correlative right (CW 10, 247).
The thesis that moral rights form the systematic core of our judgments of justice is by no means unique to utilitarianism. Many people take it to be evident that individuals have absolute, inalienable rights; but they doubt that these rights can be grounded in the principle of utility. Intuitionists may claim that we recognize moral rights spontaneously, that we have intuitive knowledge of them. In order to reject such a view, Mill points out that our judgments of justice do not form a systematic order. If we had a sense of justice that allowed us to recognize what is just, similar to how touch reveals forms or sight reveals color, then we would expect our corresponding judgments to exhibit a high degree of reliability, definiteness and unanimity. But experience teaches us that our judgments regarding just punishments, just tax laws or just remuneration for waged labor are anything but unanimous. The intuitionists must therefore mobilize a first principle, independent of experience, that secures the unity and consistency of our theory of justice. So far they have not succeeded: Mill sees no proposal that is plausible or that has met with general acceptance.
10. The Proof of Utilitarianism
What Mill names the “proof” of utilitarianism is presumably among the most frequently attacked text passages in the history of philosophy. Geoffrey Sayre-McCord once remarked that Mill seems to answer by example the question of how many serious mistakes a brilliant philosopher can make within a brief paragraph (Sayre-McCord 2001, 330). Meanwhile the secondary literature has made it clear that Mill’s proof contains no logical fallacies and is less foolish than often portrayed.
The proof is found in the fourth chapter of Utilitarianism, “Of What Sort of Proof the Principle of Utility is Susceptible”. For the assessment of the proof, two introductory comments are helpful. Already at the beginning of Utilitarianism, Mill points out that “questions of ultimate ends are not amenable to direct proof” (CW 10, 207). Notwithstanding, it is possible to give reasons for theories about the good, and these considerations are “equivalent to proof” (CW 10, 208). These reasons are empirical and rest upon the careful observation of oneself and others. More cannot be done, and should not be expected, in a proof regarding ultimate ends.
A further introductory comment concerns the basis of observation through which Mill seeks to support utilitarianism. In moral philosophy the appeal to intuitions plays a prominent role. They are used to justify moral claims and to check the plausibility of moral theories. The task of thought-experiments in testing ethical theories is analogous to the observation of facts in testing empirical theories. This suggests that intuitions are the right observational basis for the justification of first moral principles. Mill, however, was a fervent critic of intuitionism throughout his philosophical work. In his Autobiography he calls intuitionism “the great intellectual support of false doctrines and bad institutions.” (CW 1, 232). Mill considered the idea that truths can be known a priori, independently of observation and experience, to be a stronghold of conservatism.
His argument against intuitionistic approaches to moral philosophy has two parts. The first part points out that intuitionists have not been able to bring our intuitive moral judgments into a system. There is neither a complete list of intuitive moral precepts nor a basic principle of morality which would found such a list (CW 10, 206).
The second part of the Millian argument consists in an explanation of this result: what some call moral intuition is actually the product of our education and of present social discourse. Society inculcates in us our moral views, and we come to believe strongly in their unquestionable truth. The moral views of the Victorian era, though, exhibit no system and no basic principle. In The Subjection of Women, Mill caustically criticizes the moral intuitions of his contemporaries regarding the role of women, finding them incompatible with the basic principles of the modern world, such as equality and liberty. Because the first principle of morality is missing, intuitionist ethics is in many regards just a decoration of the moral prejudices with which one is brought up – “(…) not so much a guide as a consecration of men’s actual sentiments” (CW 10, 207).
What we need, Mill contends, is a basis of observation that verifies a first principle, a principle that is capable of bringing our practice of moral judgments into order. This elemental observational basis – and this is the core idea in Mill’s proof – is human aspiration.
His argument for the utilitarian principle – if not a deductive argument, an argument all the same – involves three steps. First, Mill argues that it is reasonable for each of us to aspire to our own well-being; second, that it is reasonable to support the well-being of all persons (instead of only one’s own); and third, that well-being represents the only ultimate goal, so that the rightness of our actions is to be measured exclusively by the balance of happiness to which they lead (CW 10, 234).
Let us turn to the first step of the argument. On a first reading it seems quite unpromising. Mill argues that one’s own well-being is worthy of striving for because each of us does strive for his or her own well-being. Here he leans on a questionable analogy: “The only proof capable of being given that an object is visible, is that people actually see it. […] In like manner, I apprehend, the sole evidence it is possible to produce that anything is desirable, is that people do actually desire it.” (CW 10, 234).
Could there be a more obvious logical fallacy than the claim that something is worthy of being striven for because it is in fact striven for? But Mill in no way believes that the relation between desirable and desired is a matter of definition. He is not saying that desirable objects are by definition objects which people desire; he writes instead that what people desire is the only evidence for what is desirable. If we want to know what is ultimately desirable for humans, we have to acquire observational knowledge about what humans ultimately strive for.
Mill’s argument is simple: we know by observation that people desire their own happiness. By an inference that Mill calls “inductive”, and to which he ascribes a central role in our acquisition of knowledge, we arrive at the general thesis that all humans ultimately aspire to their own happiness. This inductive conclusion serves as evidence for the claim that one’s own happiness is not only desired, but desirable, worthy of aspiration. Mill thus supports the thesis that one’s own happiness is an ultimate good to oneself with the observation that every human ultimately strives for his or her own well-being.
On this basis, Mill concludes in the second step of his proof that the happiness of all is also a good: “…each person’s happiness is a good to that person, and the general happiness, therefore, a good to the aggregate of all persons.” (CW 10, 234).
The “therefore” in the cited sentence has raised many an eyebrow. Does Mill claim here that each person tries to promote the happiness of all? This seems patently wrong. In a famous letter to one Henry Jones, he clarifies that he did not mean that every person in fact strives for the general good. “I merely meant in this particular sentence to argue that since A's happiness is a good, B's a good, C's a good, &c., the sum of all these goods must be a good.” (CW 16, 1414, Letter 1257).
Indeed, in the “particular sentence” he just concludes that general happiness is a “good to the aggregate of all persons.” Nonetheless, one may doubt that Mill adequately responds to Jones’ reservations. It is unclear what it means that general happiness is the good of the aggregate of all persons. Neither each person, nor the aggregate of all persons seem to strive for the happiness of all. But Mill’s point in the second step of the argument is arguably a more modest one.
He simply wanted to vindicate the claim that if each person’s happiness is a good to that person, then we are entitled to conclude that general happiness is also a good. As he says in the letter to Jones: “the sum of all these goods must be a good.” As in the first step of the argument, we have here an epistemic relationship: the fact that each person strives for his or her own happiness is evidence that happiness as such (regardless of whose) is valuable. If happiness as such is valuable, it is not unreasonable to promote the well-being of all sentient beings. With this, the second step of the argument is complete. The result may seem meager at first. That it is not unreasonable to promote the happiness of all appears to be no particularly controversial claim. On closer inspection, however, Mill’s conclusion is quite interesting, since it puts pressure on self-interest theories of practical rationality. The “notion that self-interest possesses a special, underived rationality (…) seems suddenly to require justification” (Skorupski 1989, 311). What Mill fails to show is that each person has most reason to promote the general good. One should note, however, that the aim of the proof is not to answer the question why one should be moral. Mill does not want to demonstrate that we have reason to prefer general happiness to personal happiness.
Hedonism states not only that happiness is intrinsically good, but also that it is the only good and thus the only measure for our action. To show this is the goal of the third step of the proof. Mill’s reflections in this step are based on psychological hedonism and the principle of association. According to Mill, humans cannot desire anything except that which is either an instrument to or a component of happiness. He concedes that people seem to strive for all sorts of things as ultimate ends. Philosophers may pursue knowledge as their ultimate goal; others value virtue, fame or wealth. Corresponding to his basic thesis that “the sole evidence it is possible to produce that anything is desirable, is that people do actually desire it” (CW 10, 234), Mill must consider the possibility that knowledge, fame or wealth have intrinsic value.
He blocks this inference with the thesis that humans do not “naturally and originally” (CW 10, 235) desire goods other than happiness. That knowledge, virtue, wealth or fame is seen as intrinsically valuable is due to the operation of the principle of association. In the course of our socialization, goods like knowledge, virtue, wealth or fame acquire value through their association with pleasure. A philosopher comes to experience knowledge as pleasurable, and this is why he desires it. Humans strive for virtue and other goods only if they are associated with the natural and original tendency to seek pleasure and avoid pain. Virtue, knowledge or wealth can thus become parts of happiness. At this point, Mill declares the proof completed.
11. Evaluating Consequences
According to Mill’s Second Formula of the utilitarian standard, a good human life must be rich in enjoyments, in both quantitative and qualitative respects. A manner of existence without access to the higher pleasures is not desirable: “It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied.” (CW 10, 212).
The life of Socrates is better, Mill suggests, because no person who is familiar with the higher pleasures will trade the joy of philosophizing for even an infinite amount of lower pleasures. This does not amount to a modern version of Aristotle’s view that only a life completely devoted to theoretical activity is desirable. One must not forget that Mill is a hedonist after all. What kind of life is joyful, and therefore good, for a particular person depends upon many factors, such as tastes, talents and character. There is a great variety of lifestyles that are equally good. But Mill insists that a human life completely deprived of higher pleasures is not as good as it could be. It is not a desirable “mode of existence”, not something a “competent judge” would choose.
Utilitarianism demands that we establish and observe a system of social, legal and moral rules that enables all mankind to have the best life possible, a life that is “as rich as possible in enjoyments, both in point of quantity and quality” (CW 10, 214). Mill’s statement that every human has an equal claim “to all the means of happiness” (CW 10, 258) belongs in this context. Society must make sure that the socio-economic preconditions of a non-impoverished life prevail. In one passage, Mill even includes the happiness of animals: they, too, should have the best possible life, “so far as the nature of things admits” (CW 10, 214).
The Second Formula maintains that a set of social rules A is better than a set B if, under A, fewer humans suffer from an impoverished, unhappy life and more enjoy a fulfilled, rich life than under B.
More difficult is the question of how to evaluate scenarios that involve unequal population sizes. Mill does not explicitly unpack this problem, but his advocacy of the regulation of birth gives at least an indication of the direction his considerations would take. Consider the following example: which world would be better – world X, in which 1000 humans have a fulfilled life and 100 a bad one, or world Y, in which 10000 humans have a fulfilled life and 800 an impoverished one? The answer depends on whether we focus on minimizing the number of bad lives or on maximizing the number of good lives, and on whether we measure this absolutely or relative to the total population.
(i) One possible answer concerns the minimization of the number of bad lives. This can mean the absolute number of humans with joyless or impoverished lives. On this reading, world X would be better than world Y, because the absolute number of humans with bad lives is smaller there. But it is also possible to read the Second Formula as a statement about the relative number of humans with bad lives; in that case world Y would be preferable.
(ii) Another possible answer emphasizes the maximization of fulfilled lives. If one follows this interpretation, then world Y is better than world X because in this world absolute and relative measurements suggest that more humans have fulfilled lives.
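The contrast between the absolute and the relative readings can be made concrete with a quick calculation. The following sketch is for illustration only; the world names and figures are taken from the example above, and the comparison functions are our own labels, not Mill's:

```python
# Worlds from the example above: (fulfilled lives, impoverished lives)
world_x = (1000, 100)
world_y = (10000, 800)

def bad_share(world):
    """Relative share of impoverished lives in the total population."""
    good, bad = world
    return bad / (good + bad)

# Absolute reading of (i): minimize the number of bad lives.
# World X wins, since 100 < 800.
assert world_x[1] < world_y[1]

# Relative reading of (i): minimize the share of bad lives.
# World Y wins, since 800/10800 (about 7.4%) < 100/1100 (about 9.1%).
assert bad_share(world_y) < bad_share(world_x)
```

The point of the arithmetic is simply that the two readings rank the same pair of worlds in opposite ways, which is why the choice between them matters for interpreting the Second Formula.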
Under the influence of Malthus, Mill insisted throughout his work that the problem of poverty can be resolved only through a reduction in population – as noted, he encouraged the regulation of birth. This proposal is reconcilable with all three interpretations, but it does not tell us which of them he would have preferred. One can speculate how Mill would answer, but there is no clear textual basis.
A further theme that Mill does not address concerns the problem of measurement and the interpersonal comparison of quantities of happiness. From a utilitarian point of view, other things being equal, it makes no moral difference whether A or B experiences an equal quantity of happiness (CW 10, 258). A quantity of happiness for A has precisely as much value as the same quantity of happiness for B. But this answers neither the question of measurement nor the question of interpersonal comparisons of utility. Can quantities of happiness be measured like temperatures? The philosopher and economist Francis Edgeworth spoke in his 1881 Mathematical Psychics of a fictitious measuring instrument, a hedonimeter, with whose help quantities of pleasure and pain could be determined with scientific accuracy.
Or do amounts of happiness have to be assessed approximately, so that Harriet Taylor, for example, can say that she is happier today than she was yesterday? Interpersonal comparisons of utility raise the related question of whether, and under which conditions, one can say that, for instance, Harriet Taylor and John Stuart Mill experience an equal amount of happiness.
Mill gave both themes little attention. But he was probably convinced that precise measurement and interpersonal comparison of utility are not needed, and maybe not even possible. One often does not need a thermometer to discern whether one object is warmer than another. Similarly, in many cases we do not need something like a hedonimeter to judge whether the condition of world A is better than that of world B. We need only a reasonable degree of experience and the capacity to empathize. Often, though, we may be unsure what to say. Which of two systems of income tax, for instance, promotes general happiness more? Mill’s position here seems to be that we have to decide questions like these by means of public debate, not by means of a hedonimeter.
Regarding moral rights, “the most sacred and binding part of all morality” (CW 10, 255), all competent judges seem to agree that they promote general happiness. Our capacity to estimate quantities and qualities of happiness is thus sufficiently good in order to conclude that a society that does protect “the most vital of all interests” (CW 10, 251) is better than a society that does not.
12. Freedom of Will
In various places in his work, John Stuart Mill occupied himself with the question of the freedom of the human will. He later claimed that the respective chapter of the System of Logic was the best part of the entire book. Here Mill presents his solution to a problem with which he wrestled not only intellectually. In his Autobiography he calls it a “heavy burden” and reports: “I pondered painfully on the subject.” (CW 1, 177)
Freedom of the will is a traditional philosophical problem whose roots stretch back to antiquity. The problem results from the conflict of two positions: On the one hand, that all events – and thus also all actions – have causes from which they necessarily follow; on the other hand, that humans are free. Both claims cannot be reconciled, or so it seems, and this is the problem.
Mill is a determinist and assumes that human actions follow necessarily from antecedent conditions and psychological laws. This apparently commits him to the claim that humans are not free; for if their actions occurred necessarily and inevitably, then they could not have acted otherwise. With perfect knowledge of the antecedent conditions and psychological laws, we could predict human behavior with perfect accuracy.
But Mill is convinced that humans are free in a relevant sense. In modern terminology, this makes him a compatibilist, someone who believes in the reconcilability of determinism and free will. Part of his solution to the problem of compatibility is based on the discovery of a “misleading association”, which accompanies the word “necessity”. We have to differentiate between the following two statements: On the one hand, that actions occur necessarily; on the other hand, that they are predetermined and agents have no influence on them. Corresponding to this is the differentiation of the doctrine of necessity (determinism) and the doctrine of fatalism. Fatalism is indeed not compatible with human freedom, says Mill, but determinism is.
He grounds his thesis that determinism is reconcilable with a sense of human freedom, first, (i) with a repudiation of common misunderstandings regarding the content of determinism and, second, (ii) with a presentation of what he takes to be the appropriate concept of human freedom.
(i) With regard to human action, the “doctrine of necessity” claims that actions are determined by the external circumstances and the effective motives of the person at a given point in time. Causal necessity means that causes are not merely in fact followed without exception by their effects, but would be so followed under counterfactual circumstances as well. Given the antecedent conditions and the laws, it is necessary that a person acts in a certain way, and a well-informed observer would have predicted precisely this. As things were, this had to happen.
Fatalism advocates a completely different thesis. It claims that all essential events in life are fixed, regardless of antecedent conditions or psychological laws. Nothing could change their occurrence. If it is someone’s fate to die on a particular day, there is no way of changing it. One finds this kind of fatalism in Sophocles’ Oedipus: Oedipus is destined to kill his father and marry his mother, and his desperate attempts to avoid his foretold fate are in vain. The determinists of his day, Mill suggests, were “more or less obscurely” also fatalists – and this, he thought, explains the predominance of the belief that the human will can be free only if determinism is false.
(ii) Mill now turns to the question of whether determinism – correctly understood – is indeed incompatible with the doctrine of free will. His central idea is, firstly, that determinism in no way excludes the possibility that a person can influence his or her character; and secondly, that the ability to have influence on one’s own character is what we mean by free will.
(1) Actions are determined by one’s character and the prevailing external circumstances. The character of a person is constituted by his or her motives, habits, convictions and so forth. All these are governed by psychological laws. A person’s character is not given at birth. It is being formed through education; the goals that we pursue, the motives and convictions that we have depend to a large degree on our socialization. But if it is possible to form someone’s character by means of education, then it is also possible to form one’s own character through self-education: “We are exactly as capable of making our own character, if we will, as others are of making it for us.”
If we have the wish to change ourselves, then we can. Experience teaches us that we are capable of having influence on our habits and attitudes. The desire to change oneself resides, for Mill, in the individual, thus in our selves. Discontent with oneself and one’s own life, or the admiration for another lifestyle may be reasons why one wants to change (CW 8, 841).
(2) The ability to influence the formation of one’s own character is, for Mill, the substance of the doctrine of free will: “(…) that what is really inspiring and ennobling in the doctrine of freewill, is the conviction that we have real power over the formation of our own character; that our will, by influencing some of our circumstances, can modify our future habits or capabilities of willing. All this was entirely consistent with the doctrine of circumstances, or rather, was that doctrine itself, properly understood.” (CW 1, 177). Nothing more is intended by the doctrine of free will: we are capable of acting in a way that corresponds to our own desires; and we are, if we want, capable of shaping our desires. More precisely, Mill advocates the idea that we are free in a measure, insofar as we can become those who we want to be.
One may object here that Mill’s theory presupposes the desire to change. But what about those who do not want to change? If one does not want to change, then one cannot change, and with this, not all humans would be free. Such an objection presumes, however, that those who lack the desire to change themselves are missing something (namely, that desire) and are, because of this lack, less free. But Mill contends that persons in a certain way “are their desires”. If someone lacks the desire to change, he or she is no less free than a person who has it. It is not as if one were simply missing an entry on a list of choices. The “I” does not choose between various desires and options; rather, one’s self is identified with one’s desires: “…it is obvious that ‘I’ am both parties in the contest; the conflict is between me and myself; between (for instance) me desiring a pleasure, and me dreading self-reproach. What causes Me, or, if you please, my Will, to be identified with one side rather than with the other, is that one of the Me’s represents a more permanent state of feelings than the other one does.” (CW 9, 452).
The thought that there is no such “I” is also the reason why Mill rejects the idea that freedom presupposes the capacity to refrain from an act in a given situation (“I could have done otherwise”). Mill finds utterly curious the idea that someone’s will is free only if he could have acted differently. For what does it mean to say that one could have acted differently? Is it supposed to mean that one could have chosen what one did not want to choose (CW 9, 450)? According to Mill’s analysis, what we mean by the phrase is this: if the circumstances, or my character, or my mood, or my knowledge and so forth had been different, I would have acted differently. Without such variations, the thought that one could have acted differently seems strange to Mill: “I dispute therefore altogether that we are conscious of being able to act in opposition to the strongest present desire or aversion.” (CW 9, 453). Because a person cannot counteract an effective desire, he is necessarily determined by it – just as things are.
13. Responsibility and Punishment
Mill variously examines the thesis that punishment is justified only if the perpetrator could have acted differently. A contemporary of Mill’s, the social reformer Robert Owen, claimed that punishing the breaking of social norms is unjustified, because the character of a person is the result of social influences. No one is the author of himself. Because actions follow from character, and one is not responsible for one’s character, it is not just to punish people for the violation of norms which they could not help violating. It was not within their power to act differently, and it is unjust to punish someone for something he could do nothing to prevent (CW 9, 453).
Mill responds to Owen’s criticism by pointing out that persons can very well influence their characters, if they want to. But does this satisfy us as a defence of punishment for the breaking of norms? It might be right that someone who does not want to change will not become depressed about his inability to change (CW 8, 841). Probably the thought will not even occur to him. But the point here is not whether one’s inability is a source of depression. The point is whether it is fair to punish people for actions which they could not control. If one lacks the respective desire, then one cannot change one’s character. It seems unfair to blame a person for her rotten character if there is no “I” that we can accuse of failing to have the desire to change.
Mill’s solution to this problem is somewhat surprising. We have to be clear as to what it means to say that a person “could not have acted differently”. Certainly, it does not mean that a person would have performed a particular act under all conceivable circumstances. This would be the case, if humans were programmed like robots to act in certain ways, regardless of the external conditions. In actual fact, one can in almost all cases imagine variations in circumstances that would effectively hold a person back from acting how he or she acted. Someone with criminal tendencies might not be able to keep himself from acting criminally, because he does not consider the possibility that he will be severely punished if caught. “If, on the contrary, the impression is strong in his mind that a heavy punishment will follow, he can, and in most cases does, help it.” (CW 9, 458)
It is the purpose of punishment to reduce anti-social behavior, in particular the violation of moral rights, “the most vital of all interests” (CW 10, 251). The justification of punishment consists in the fact that it serves this justified goal (CW 9, 459-460). If someone cannot be restrained from breaking a norm by the threat of punishment, then the threat was ineffective with regard to this individual: it was not enough – seen in the light of his character and his perception of the situation – to discourage him from violating the norm. But the fact that an individual’s criminal inclinations are stronger than average, so that a stronger incentive would have been needed to bring him to respect the norm, makes neither the punishment nor the threat of punishment unjust or illegitimate.
According to Mill, conceiving of oneself as a morally responsible agent does not mean seeing oneself as an “I” who could have acted differently. It means considering oneself a member of a moral community entitled to sanction the violation of justified social norms. This idea of moral responsibility does not seem far-fetched. A person may well agree that it is appropriate to punish him for the violation of moral rights, even if he “could not have done otherwise” under the given circumstances.
14. References and Further Readings
a. Primary Sources
- Mill, John Stuart, The Collected Works of John Stuart Mill. Gen. Ed. John M. Robson. 33 vols. Toronto: University of Toronto Press, 1963-91.
- The standard scholarly edition including Mill’s published works, letters, and notes; cited in the text as CW volume, page.
b. Secondary Sources
- Bain, Alexander, 1882, John Stuart Mill. A Criticism: With Personal Recollections, London: Longmans, Green, and Co.
- Berger, Fred R., 1984, Happiness, Justice, and Freedom: The Moral and Political Philosophy of John Stuart Mill, Berkeley & Los Angeles: U. of California Press.
- Bradley, Francis H., 1876/1988, Ethical Studies, reprint of the second edition, Oxford: OUP.
- Brink, David, 1992, “Mill's Deliberative Utilitarianism”, in: Philosophy & Public Affairs 21, pp. 67-103.
- Brown, D. G., 1973, “What is Mill’s Principle of Utility?”, in: Canadian Journal of Philosophy 3 (1), pp. 1-12.
- Coope, Christopher M., 1998, “Was Mill a Utilitarian?”, Utilitas 10 (1), pp. 33-67.
- Crisp, Roger, 1997, Routledge Philosophy Guidebook to Mill on Utilitarianism, London: Routledge.
- Donner, Wendy, 1991, The Liberal Self. John Stuart Mill’s Moral and Political Philosophy, Ithaca & London: Cornell UP.
- Donner, Wendy & Richard Fumerton, 2009, Mill, Chichester: Wiley-Blackwell (Blackwell Great Minds).
- Eggleston, Ben/Dale E. Miller/David Weinstein (eds.), 2011, John Stuart Mill and the Art of Life, Oxford: OUP.
- Fuchs, Alan, 2006, “Mill’s Theory of Morally Correct Action”, in: The Blackwell Guide to Mill’s Utilitarianism, edited by Henry R. West, Oxford: Blackwell, 139-158.
- Green, Thomas H., 1883/2003, Prolegomena to Ethics, edited by David O. Brink, Oxford: Clarendon Press.
- Grote, John, 1870, An Examination of the Utilitarian Philosophy, edited by Joseph Bickersteth, Cambridge: Deighton, Bell, and Co.
- Lyons, David, 1978/1994, “Mill’s Theory of Justice”, in: Rights, Welfare, and Mill's Moral Theory, Oxford: OUP, 67-88.
- Lyons, David, 1994, Rights, Welfare, and Mill's Moral Theory, Oxford: OUP.
- Miller, Dale E., 2010, J. S. Mill. Moral, Social and Political Thought, Cambridge: CUP.
- Miller, Dale E., 2011, “Mill, Rule Utilitarianism, and the Incoherence Objection”, in: Eggleston, Ben/Dale E. Miller/David Weinstein (eds.), 2011, John Stuart Mill and the Art of Life, Oxford: OUP, 94-116.
- Rawls, John, 1971/1999, A Theory of Justice. Revised Edition, Cambridge/Mass.: Belknap Press of HUP.
- Rawls, John, 2008, Lectures on the History of Political Philosophy, edited by Samuel Freeman, Cambridge/Mass.: Belknap Press of HUP.
- Reeves, Richard, 2007, John Stuart Mill. Victorian Firebrand, London: Atlantic Books.
- Riley, Jonathan, 1988, Liberal Utilitarianism. Social Choice Theory and J. S. Mill's Philosophy, Cambridge: CUP.
- Sayre-McCord, Geoffrey, 2001, “Mill's 'Proof' of the Principle of Utility: A More than Half-Hearted Defense”, in: Social Philosophy & Policy 18, 330-360.
- Skorupski, John, 1989, John Stuart Mill. The Arguments of the Philosophers, London & New York: Routledge.
- Skorupski, John, 2006, Why Read Mill Today? London & New York: Routledge.
- Urmson, James O., 1953, “The Interpretation of the Moral Philosophy of J. S. Mill”, in: Philosophical Quarterly 3, pp. 33-39.
- West, Henry R., 2004, An Introduction to Mill’s Ethics, Cambridge: CUP.
Karlsruhe Institut für Technologie