Summary
This post lays out some problems/uncertainties I have with various stances in the philosophy of mind, almost all of which implicitly or explicitly privilege ‘common sense’. The main point is that our intuitions aren’t shaped to be useful at finding out whether something actually experiences anything, or experiences things in a particular way. Instead, those intuitions are strongly influenced by (1) the evolutionary need to navigate ancestral social environments and (2) what one’s culture believes - neither of which, I argue, necessarily has much to do with the truth of the matter. The problems noted here may have drastic implications for our credences about what’s capable of suffering, which in turn may lead to ethical implications. This post may serve as one rationale for Brian Tomasik’s framework for thinking about sentience, through its exploration of the causes of our beliefs.
Highlights
- The problem of other minds means that we can’t theorize about sentience (definition in footnote)1 by analysis of scientific evidence without additional philosophical assumptions.
- However, these assumptions are often dubious, as they are based on intuitions that don’t necessarily ‘track’ sentience. This point is explored from both evolutionary and cultural perspectives.
- There are also general doubts around common sense. We should remember that common sense has been successfully scientifically challenged throughout history. Additionally, differences in common sense can lead to inconsistent conclusions on the topic of sentience, so we might have to reject some common sense beliefs.
- The (probabilistic) beliefs we have about the sentience of various systems rest on unstable foundations. It seems that we shouldn’t be overconfident in our models of sentience. This may mean that systems not typically thought of as sentient, or thought to have a ‘lower degree’ of ethically-relevant aspects of sentience (such as microscopic invertebrates and conscious subsystems), could be assigned a credence of having such properties comparable to the credence we already assign to systems more typically thought of as definitively possessing them. Upon further investigation, this may lead to various ethical implications, or even ethical wagers on how to best reduce suffering.
Physical systems aren’t labeled sentient/not-sentient, a priori
It’s impossible to directly confirm the existence and experiences of other minds. Another mind’s experiences occur from a first-person perspective – that mind’s own perspective, not ours.2
In a strict sense, this ‘problem of other minds’ suggests that, without additional assumptions, we can’t falsify, or more generally update, or even formulate, our theories about other minds. Without additional assumptions, physical systems can’t be labeled either sentient or not-sentient. Physical data is just physical data. It isn’t evidence for sentience if our criteria for relevant data are uninformed. To reach conclusions, some philosophical assumptions3 must be established prior to the analysis of empirical, scientific findings (e.g. about the brain).4
So we need to figure out which assumptions are justified and why. The assumptions we usually accept are common-sensical ones. If we reject common-sense ideas about sentience, many of which have to be decided upon prior to science, then we might be forced to accept some counterintuitive ideas about how sentience works. To list a few that challenge our intuitions: it may be that nervous systems (or functional alternatives) are not prerequisites for entities to suffer, it may be that nested minds exist within a system we traditionally think of as a single mind, or it may be that the suffering of simpler entities isn’t less intense, contrary to what some might intuitively think.
What about introspection?
But what about access to our own minds? Can introspection contribute to our understanding of sentience? It can, to some extent. With very few pre-theoretic, relatively palatable assumptions, we may be able to reason by analogy about the sentience of human or human-like systems, and potentially about the sentience of less-human-like systems as well. However, the converse doesn’t necessarily follow: it seems a leap to think that a system’s being unlike humans strongly suggests that it is not sentient. For instance, if pain is, contrary to our expectations, relatively simple to implement, then there might be a lot more of it in the world.
Compared to most people, I’m personally more skeptical about using common sense when thinking about sentience, which leads me to counterintuitive ideas. Yet it may be unjustified to dismiss counterintuitive ideas simply because they aren’t intuitive. There are several reasons to doubt common sense.
Common sense may not track truth about sentience
Evolutionary debunking
Human intuition about the mental states of other entities is called “theory of mind”. This capacity was shaped by evolutionary pressures: the brain processes by which we instinctively attribute mental states to others evolved in response to the demands of interacting with those whose responses had significant consequences for the evolutionary fitness (survival & reproduction) of our ancestors.
However, this means that our ability to attribute mental states to others evolved for practical purposes, rather than as a means of acquiring accurate knowledge of the existence, non-existence, or content of the possible mental states associated with any particular physical system. As long as evolutionary fitness isn’t significantly affected, a different set of mental facts about other systems could be compatible with the same physical outcomes, and so being incorrect about sentience can be consistent with functioning well in an ancestral environment.5
It would be a striking coincidence if the development of our theory of mind, guided by evolutionary pressures to improve fitness, also resulted in abilities that provide us with accurate insights regarding inaccessible, subjective experiences.
Perhaps most importantly, it would be especially striking if it resulted in insights specifically about systems that are very different from us. A critic might claim that fitness benefits require accurate mind-tracking among similar peers, but even they must confess that this might not generalize “out-of-distribution”. For instance, in the environment of evolutionary adaptiveness, our ancestors didn’t interact directly with microscopic lifeforms or computers, so we have little reason to expect accurate intuitions about sentience for these groups even if we have good reason to expect accurate intuitions about sentience for other groups.
Additionally, some systems may be simple enough to be modeled directly, eliminating the evolutionary need to attribute mental states to them, whether or not they are sentient (in some way). To illustrate with an extreme example: we may instinctively regard it as obvious that a rock isn’t sentient simply because a rock’s simplicity allows it to be modeled directly, making it unnecessary to evolve such intuitions (independent of the fact of the matter).
For me personally, this evolutionary debunking argument is one of the most convincing reasons to doubt the completeness and accuracy of our common sense notions of mind. When we realize that the basis of our thinking about the (lack of) sentience of most physical systems might not have anything to do with whether they are in fact (not) sentient, there seems to be much more reason to doubt our initial ideas.
Cultural effects
Culture and the prevailing beliefs of one’s society also influence what counts as “common sense” ideas about sentience. Different cultures treat sentience-adjacent concepts in different ways. For instance, Western cultures have traditionally placed strong emphasis on the human soul. Many Asian cultures and religions hold that it’s possible to reincarnate as non-human animals. Some forms of animism among various indigenous peoples extend the idea of a “spiritual essence” to plants, rocks, and rivers. In present-day academic communities, arguments about the presence or absence of consciousness and sentience are often made with reference to human-like cognitive capabilities. “Common sense” about what counts as sentient is highly influenced by culture at large. In Einstein’s cynical words, “Common sense is nothing more than a deposit of prejudices laid down in the mind before you reach eighteen.”6
The ideas of a “soul”, a “spiritual essence”, or a “living essence” are conceptually distinct from “sentience” and “consciousness”, but it seems we shouldn’t take those distinctions too strictly. For many, such concepts are related to and do overlap with sentience. The distinctions were likely blurred for many individuals in such cultures. Indeed, even today, non-specialists discussing these ideas often mix them up.
Perhaps our cultural views lead us to all the right conclusions?7 It’s hard to see why that must be the case. While religious and cultural claims about physical reality can be disproved scientifically (including claims about non-physical entities physically interacting with physical entities), it is, again, impossible to reach the ‘ground truth’ of the matter, and hence impossible to use that information to support or critique theorizing about other minds, without first justifying our foundational assumptions – which are precisely what’s in doubt.
General doubts around common sense
There are also general reasons to doubt common sense. That said, there are also general reasons in favor of common sense, and so these non-specific reasons may carry less weight than the sentience-specific ones above.
Intuitions perform poorly in other domains
Another reason might be that our intuitions have a poor track record in other domains of knowledge, which could generalize to this specific case. It’s intuitive to think that the Sun revolves around the Earth, but the heliocentric model showed otherwise. The idea that as living organisms we have a special vital force, separating us from non-living matter, is intuitive, but a physicalist understanding of the universe dispelled that notion (indeed, there are parallels between an eliminativist approach to consciousness and eliminating the notion of élan vital). The belief in absolute space and time is intuitive, but this was seriously challenged by the theory of relativity. The possibility of a multiverse or being in a simulation strikes many as ridiculous, but there are good arguments in favor of those possibilities.
There are domains of knowledge where our intuitions perform especially poorly. A domain in which we want to classify physical systems but cannot directly label them as positive or negative examples (i.e. sentient or not) seems to be one where our intuitions do not hold.8
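To spell out this classification framing (compare footnote 2), here is a rough sketch. The systems, the features, and the heuristic “intuition” are all invented for the example and aren’t meant as claims about how our intuitions actually work; the point is only structural.

```python
# Toy sketch: "intuition" as a heuristic classifier over physical systems.
# There is no ground-truth label column, so the heuristic's accuracy on the
# question we actually care about cannot be checked.
# (Systems, features, and the heuristic are invented purely for illustration.)

systems = {
    # features: (has_nervous_system, behaves_humanlike)
    "adult human":  (True, True),
    "demodex mite": (True, False),
    "rock":         (False, False),
}

def intuitive_classifier(has_nervous_system, behaves_humanlike):
    """A crude stand-in for common-sense attribution of sentience."""
    return has_nervous_system and behaves_humanlike

predictions = {name: intuitive_classifier(*features) for name, features in systems.items()}
print(predictions)  # {'adult human': True, 'demodex mite': False, 'rock': False}

# Unlike an ordinary supervised-learning problem, there is no `labels` dict to
# compare `predictions` against: absent further philosophical assumptions, no
# physical observation supplies the ground truth, so nothing like an accuracy
# score can be computed for the heuristic.
```

Nothing hinges on the specific heuristic; the point is only that no independent check of it is available.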
Inconsistency of common sense
Common sense can lead to counterintuitive, or even contradictory, conclusions. In that case, we might end up rejecting parts of our common sense in favor of other parts. An example is when one rejects property dualism about mind in favor of forms of physicalism, which may in turn motivate belief in the possibility of digital sentience, which some find counterintuitive. Hence, there may be problems of consistency within one’s set of common sense ideas about sentience.
Different forms of common sense may also exist between individuals - a point similar to the argument about cultural effects. For example, individuals on the autistic spectrum may have a different understanding of theory of mind compared to more neurotypical individuals. This might suggest that communities composed of different proportions of individuals with varying levels of autistic characteristics would have different ideas about sentience. Indeed, philosophers who disagree with each other might be starting from different premises (such as those relating to structure, function, higher-order abilities, similarity, levels of confidence; see footnote 3) that result from their different forms of common sense. One person’s modus ponens is another person’s modus tollens. If this makes us more doubtful of “common sense”, perhaps that should make us less likely to reject conclusions we find counterintuitive, and it may suggest placing less faith in conclusions that seem obvious to us.
Probabilities
In light of all this, an ethically relevant question is what this means for how we should think about probabilities relevant to the topic of sentience. The answer is a complicated one, and necessarily a subjective one. But at first glance it seems to me that:
- The assumptions underlying our thinking about sentience are dubious. With such unstable foundations, we ought to penalize large differences between the probabilities we assign to different claims relevant to sentience, and avoid overconfidence in our models of what may or may not be sentient and of how those systems experience things.9 (A toy sketch of what such ‘penalizing’ might look like follows this list.)
- As a corollary, if we grant that systems commonly claimed by most humans to be sentient are in fact sentient, we can’t rule out with a high degree of certainty the sentience of entities such as demodex mites, microbes, plants, countries, and conscious subsystems (including disconnected subsets of the universe). This might mean that there exist ethical implications, or even wagers, worth investigating.
- Statements about the probability of sentience are almost always conditional, and we should try to communicate our assumptions and what we already accept when discussing sentience. E.g., given X, there is a y% probability that z. It may be useful to communicate whether one takes particular empirical findings or philosophical positions as cruxes to some conclusion.
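To make the ‘penalize large differences’ idea more concrete, here is a minimal toy sketch. It assumes, purely for illustration, a single ‘model’ bundling together one’s common-sense assumptions, a flat ignorance prior of 0.5 for the case where that model is wrong, and made-up credences throughout; none of these choices is a substantive claim about the right numbers.

```python
# Toy model (not anyone's actual method): how low confidence in one's
# foundational assumptions compresses the gap between credences.
# All numbers below are invented purely for illustration.

def overall_credence(p_given_model, credence_in_model, ignorance_prior=0.5):
    # Law of total probability:
    # P(sentient) = P(sentient | model) * P(model)
    #             + P(sentient | not model) * (1 - P(model)),
    # with P(sentient | not model) crudely set to a flat ignorance prior.
    return p_given_model * credence_in_model + ignorance_prior * (1 - credence_in_model)

# Sharp credences conditional on some common-sense model of sentience (made up).
conditional = {"adult human": 0.999, "demodex mite": 0.01, "plant": 0.001}

for weight_on_model in (0.95, 0.5):  # credence in the model's assumptions
    blended = {name: round(overall_credence(p, weight_on_model), 3)
               for name, p in conditional.items()}
    print(weight_on_model, blended)

# With credence 0.95 in the model, the outputs stay far apart (~0.97 vs ~0.03);
# with credence 0.5 they are pulled toward 0.5 (~0.75 vs ~0.25): large
# differences between claims are penalized without being erased.
```

The structural point is just that lower credence in one’s foundational assumptions pulls our overall credences for very different systems toward each other, which is one way to cash out ‘not being overconfident’.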
More work is needed to clarify what this means for the probabilities we assign to the presence of sentience or features of sentience in various physical systems. More investigation into the resulting ethical implications has the potential to suggest effective interventions.
Further resources
Dissolving Confusion about Consciousness
The Crazyist Metaphysics of Mind
Appendix of The Possibility of Microorganism Suffering
Acknowledgements
Anthony DiGiovanni, Eric Chen, Magnus Vinding, Miranda Zhang, and Sean Richardson provided helpful comments. Commenting does not imply that they endorse my views.
Notes
1. What do I mean by sentience? The arguments presented here seem to apply to ‘sentience’ in a broad sense that means consciousness or phenomenality of any kind, but they can (of course) also apply, more specifically, to ‘sentience’ in a narrow sense that relates to the ability to feel pain or suffering. While the former definition may be intriguing from a philosophical perspective, the latter definition is more relevant from an altruistic perspective. It’s likely that most readers will focus on the narrow latter definition, but one should keep in mind that the arguments in this piece may apply more generally. ↩︎
2. Using a machine learning analogy, there is data (physical systems) but the data is unlabeled (can’t determine whether the systems are really sentient). ↩︎
3. Which assumptions? In principle, one could make any of an infinite number of non-contradictory assumptions. With the exception of the more fundamental assumptions needed to establish the scientific worldview upon which we can construct arguments to challenge the reliability of our intuitions (e.g., assuming the existence of an external world, the existence of the past, the validity of science, etc.), the doubts discussed in this article appear to me to be applicable to most, if not all, subsequent assumptions required for a theory of sentience. A non-exhaustive list of assumptions vulnerable to criticism includes: those that attribute sentience based on physical structure or function, those that propose a requirement of higher-order abilities, and those that determine how similar is “similar enough” for other systems to be sentient, as well as our level of confidence in such determinations. ↩︎
4. This isn’t to say that we can’t make some assumptions after looking at scientific facts, and add those assumptions to our theories. We can do that, but we still have to start with some assumptions a priori. In addition, assumptions that rely on science and ‘common sense’ could still fall apart if common sense doesn’t hold. ↩︎
5. It might be evolutionarily beneficial in some cases to be unable to attribute mental states (e.g., when hunting prey lacking sophisticated defenses). ↩︎
6. Lincoln Barnett, The Universe and Dr. Einstein (1950 ed.). ↩︎
7. Also, the “sentience-relevant features” that we choose to use to map to “is-sentient” might be influenced by current topics and trends. For instance, it might very well be that we think along the lines of: A lot of progress happens in neuroscience and psychology, and so that seems essential for sentience. A lot of progress happens in computer science and artificial intelligence, so digital sentience is possible. ↩︎
8. An additional complicating factor is that facts about minds may be metaphysical and are ultimately not susceptible to empirical investigation. ↩︎
9. Eric Schwitzgebel makes a similar point in The Crazyist Metaphysics of Mind: “Thus I suggest: Major metaphysical issues of mind are resistant enough to empirical resolution that none, at a moderate grain of specificity, empirically warrants a degree of credence exceeding that of all competitors; and this situation is unlikely to change in the foreseeable future.” ↩︎