The Sorites Paradox, the Ship of Theseus, and the smoke that broke the Planet's back.

Introduction

The Sorites Paradox, often called the paradox of the heap, is a classic philosophical riddle that arises from the vagueness of our language. It asks: at what point do small changes make a big difference? If removing a single grain of sand from a heap leaves it still a heap, and repeating this seems always to leave a heap, how is it that one can end up with no heap at all? Conversely, if one grain is not a heap, adding one grain shouldn’t suddenly create a heap – yet by adding grain after grain, surely a heap will eventually form. This simple puzzle – first posed in antiquity – conceals deep issues about gradual change, borderline cases, and how we draw distinctions in the world. Over the centuries, the Sorites Paradox has intrigued thinkers not only in logic and philosophy of language, but also in fields as diverse as art, science, cultural lore, and ethics. In everyday life we often face “heap-like” questions: When do many small actions or increments become significant? In this essay, we will explore the origins and nature of the Sorites Paradox, illustrate its significance across various domains – from artistic perception and scientific thresholds to myths and common sayings – and then delve into its philosophical implications. In particular, we will examine how the paradox illuminates problems in ethics, focusing on utilitarianism. We will explain different types of utilitarian ethical theory and see how some of their most challenging problems echo the Sorites Paradox.

Background: An Ancient Puzzle of the Heap

The Sorites Paradox traces back to ancient Greece. It is traditionally attributed to Eubulides of Miletus, a 4th-century BC Megarian philosopher. Eubulides allegedly proposed a series of paradoxes, including this heap puzzle, as challenges to common notions of truth and knowledge. The very name “Sorites” comes from the Greek word sōros meaning “heap”. We don’t know Eubulides’ exact motivation, but later Greek thinkers – especially the Skeptics – seized upon the puzzle as a dialectical weapon against the dogmatism of the Stoics. If the Stoics claimed to have clear knowledge, the Sorites could trip them up by showing how a concept like “heap” (or “tall” or “old”) has no sharp boundary, undermining claims to know precisely where truth lies.

In its classical form, the paradox is usually illustrated with a heap of sand. Imagine a large heap containing, say, one million grains of sand. Certainly we’d all call that a heap. Now remove one grain – intuitively, you’d still have a heap. Remove another grain; it seems you still have a heap. If taking away a single grain never makes the difference between “heap” and “not a heap”, then by iteration one can remove grain after grain without ever losing heap status. Carried to the extreme, this reasoning would imply that even a single grain remaining must still count as a “heap” – an absurd conclusion, since one grain is clearly not a heap. On the flip side, one can start with a single grain (obviously not a heap) and add grains one by one, insisting that no single grain can create a heap. Yet by the time you’ve added up to a million grains, you surely have a heap. Thus, soritical reasoning seems to prove the contradictory conclusions that no collection of grains can form a heap, and that even one grain can form a heap. Something has clearly gone wrong, but it’s devilishly hard to say exactly where the fallacy lies.

The structure of the argument can be formalised. We have a vague predicate “is a heap” that lacks a sharp boundary of application. The argument uses repeated modus ponens steps: If n grains don’t make a heap, then n+1 grains don’t either; starting from 1 grain (not a heap), iterating this inference leads to the conclusion that even 1,000,000 grains don’t make a heap. Alternatively, starting from 1,000,000 grains (a heap) and applying the backward reasoning yields that even 1 grain does make a heap. In both cases, apparently sound reasoning from plausible premises yields a patently false conclusion – the hallmark of a paradox. The puzzle here is tightly linked to the vagueness of terms like “heap” (and also “bald”, “tall”, “rich”, “old”, etc.), which have blurry edges. There is no clear point at which “heap” suddenly stops or starts applying. This lack of a clear boundary is what generates the Sorites Paradox.
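For readers who want the schema spelled out, here is one standard textbook way of writing the two directions (our notation, not a quotation from any source), where H(n) abbreviates "n grains make a heap":

    \neg H(1), \qquad \forall n\,\bigl(\neg H(n) \rightarrow \neg H(n+1)\bigr), \qquad \therefore\; \neg H(1{,}000{,}000)

    H(1{,}000{,}000), \qquad \forall n\,\bigl(H(n+1) \rightarrow H(n)\bigr), \qquad \therefore\; H(1)

Each conclusion follows from its premises by nothing more exotic than repeated modus ponens; all the trouble lives in the innocuous-looking universal premise.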

Historically, after the Greek period, the Sorites Paradox received relatively little attention for many centuries. Medieval logicians discussed many puzzles, but the heap paradox wasn't prominent in surviving works. Interest was rekindled in the late 19th and early 20th centuries. Notably, Marxist dialecticians such as G. Plekhanov cited the Sorites to argue that traditional ("customary") formal logic fails for concepts with continuous change, claiming this showed the need for a dialectical logic that can handle contradictions in gradual processes. In one Marxist interpretation, the paradox illustrated how quantitative change can eventually lead to qualitative change – a core dialectical idea. Meanwhile, in early analytic philosophy, vagueness was often dismissed as an imperfection of natural language, something to be cleaned up or ignored in favour of precise logical languages. Bertrand Russell, however, had already argued in the early twentieth century that vagueness is intrinsic to natural language ("all language is vague", he observed, at least to some degree). From the 1970s onwards, as analytic philosophy turned its attention back to ordinary language, the Sorites Paradox became a central problem in the philosophy of language and logic, spawning a proliferation of proposed solutions. We will touch on those attempts later. First, though, let us explore how the Sorites pattern – the logic of small, negligible changes adding up – appears outside of abstract logic, in everyday thinking and across different fields of human culture.

Significance in Arts, Sciences, and Culture

Vague Gradations in Art and Perception: The Sorites Paradox highlights phenomena of continuous gradation, which appear in the arts and in sensory perception. For example, consider a colour gradient that shifts imperceptibly from, say, pure green to pure red. To the human eye, adjacent shades can be so similar that one cannot tell them apart. If we label one end "green" and the other "red", we face a sorites-like puzzle: no single step in the gradient visibly turns green to red, yet eventually the accumulation of tiny changes yields a definite change in colour. In fact, a well-known variant of the paradox involves a series of coloured chips, each differing only minutely in colour from the next; if one cannot distinguish chip n from chip n+1, one might argue that one can never distinguish green from red at the ends of the series. This is clearly false – green and red are distinct – so something is wrong with the assumption that imperceptible differences never add up to perceptible ones. Artists and designers work with colour gradients and shading, effectively relying on the fact that small differences can accumulate. A painting that transitions gradually from one hue to another has no sharp border between colours, yet our categorisation of sections as "reddish" or "greenish" is inevitably fuzzy. The Sorites exposes the continuity of colour perception and the challenge of drawing lines in a continuum. Some conceptual or avant-garde artworks have even thematised these borderline cases. Thus, in art and perception, the Sorites Paradox underscores the ambiguity of gradual transitions, forcing us to confront how our concepts (like named colours) often blur at the edges.
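As a toy illustration of how sub-threshold differences can add up to a visible one, here is a minimal sketch in Python, assuming an invented "just-noticeable difference" of 3 units on a 0–100 green-to-red scale (all the numbers are purely illustrative):

    # 51 colour chips on a 0-100 hue scale from "green" (0) to "red" (100).
    # Adjacent chips differ by 2 units; assume a just-noticeable difference (JND) of 3 units.
    chips = [i * 2 for i in range(51)]      # hues 0, 2, 4, ..., 100
    JND = 3                                 # smallest difference the eye can detect (assumed)

    adjacent_visible = [abs(b - a) >= JND for a, b in zip(chips, chips[1:])]
    print(any(adjacent_visible))            # False: no neighbouring pair looks different
    print(chips[-1] - chips[0] >= JND)      # True: the two end chips are plainly different

No single step is detectable, yet fifty such steps take us all the way from green to red – the soritical structure in miniature.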

Myth, Folklore, and Idioms of Gradual Change: The intuition that big effects come from many small causes has long been captured in myths and sayings. One famous idiom is “the straw that broke the camel’s back.” A camel can carry a great load of straw; adding a single straw doesn’t normally make a difference – until that one last straw does topple the load. The message is that an accumulation of negligible burdens can eventually exceed a threshold. The straw that causes collapse is no different in kind from the straws before, yet at some point the cumulative weight becomes too much. This perfectly illustrates a Sorites-type phenomenon: if one assumes “one more straw won’t matter,” and keeps adding straws, one eventually reaches catastrophe. Another well-known story is the boiling frog fable. In this apocryphal tale, a frog dropped into boiling water will jump out immediately, but a frog placed in cool water that is heated very gradually will fail to notice the incremental temperature changes and eventually boil to death. The story (though not literally true of real frogs) is meant as a metaphor for people failing to notice slow, creeping threats. In philosophy, the boiling frog scenario is explicitly linked to the Sorites Paradox: it asks whether there is a precise moment at which the frog should react, just as the heap paradox asks for a precise grain count where a heap ceases to be a heap. The moral of the fable is that gradual change can be dangerous when each step is too small to trigger a response – a direct parallel to the logic of the Sorites. These cultural snippets show that long before formal philosophers analyzed it, people were well aware of the paradox of imperceptible differences accumulating into big outcomes.

We also find the one-grain reasoning in an ancient anecdote sometimes dubbed the "one coin" paradox. This tale (traced to classical Greek discussions and often retold by modern writers) asks: Can one coin make a person rich? Obviously not; a single coin is far from a fortune. But if one coin won't make you rich, what about two coins? Three? No single coin ever seems to turn poverty into wealth. Yet if you keep adding coins, at some point the person surely becomes rich. As one summary puts it: "If ten coins are not enough to make a man rich, what if you add one coin? What if you add another? Finally, you will have to say that no one can be rich unless one coin can make him so." In other words, unless you admit there is some decisive coin, you get the absurd conclusion that no amount of money can qualify as "rich." This argument of the growing heap (as it was known in Erasmus's Renaissance retelling) has been used to illustrate how habits and changes work in life. Modern authors have applied it to habits like exercise: no single workout at the gym makes you fit and healthy – missing one session won't noticeably hurt. Yet a healthy body is the result of many workouts; somewhere, those individual sessions make the difference. Likewise, dieters often reason that eating one chocolate bar will not ruin the diet – echoing what philosopher Dorothy Edgington calls the "dieter's paradox": I don't care about the difference one chocolate makes. But if you indulge that thought regularly, the extra treats accumulate on the waistline. The "mañana paradox" similarly refers to the logic of procrastination: any single day's delay in doing an unpleasant task seems inconsequential (why not put it off until tomorrow?), yet perpetual deferral means the task never gets done. These everyday examples show the Sorites pattern ingrained in human behaviour and language: we exploit the apparent insignificance of each single step to excuse our actions, even while understanding that in aggregate they matter. The puzzle forces a confrontation with how we think about thresholds – be it the threshold of wealth, of health, or of a heap. Often, large outcomes are the sum of many small steps, and the Sorites Paradox encapsulates the challenge of deciding when those steps cross a line.

Scientific Thresholds and Emergent Phenomena: In the sciences, one might not talk of heaps of sand, but the principle of many tiny increments leading to a qualitative change is common. Phase transitions in physics are one illustration: cooling water by one degree from 100°C to 99°C leaves it a liquid; another tiny drop in temperature still leaves liquid; yet at 0°C water becomes solid ice. Was there a single molecule’s change that caused liquidity to vanish? Not exactly – it’s an emergent result of countless molecular slowdowns reaching a critical point. Many natural processes have tipping points or non-linear accumulations that resemble a Sorites series, where no obvious “last cause” is solely responsible. In biology, consider the emergence of intelligence from neurons: removing one neuron from a functioning human brain won’t noticeably change the person’s mind, and removing another and another may still leave the mind intact – yet obviously if you keep removing neurons, intelligence and consciousness will disappear at some stage. Is there an exact neuron whose loss marks the difference between sentience and non-sentience? This feels analogous to the Sorites Paradox, and indeed some have noted that emergent properties (like consciousness arising from brain cells, or life arising from non-living components) confront us with sorites-like problems of vague boundaries. The concept of species in evolutionary biology provides another example: every individual organism is very nearly the same as its parents and offspring, yet over many generations these tiny changes accumulate to big differences. There is no clear dividing line where an ancestral species “became” a new species – the transition is gradual. Paleontologist Stephen Jay Gould famously discussed this as the problem of “extreme gradualism” in evolution: nature does not jump, and yet new forms arise. This is essentially a Sorites paradox in natural history – one could say each generation is almost indistinguishable from the last (no single generation gives birth to a new species), but given enough generations, the descendants can belong to a very different species. Science deals with such issues by defining thresholds pragmatically (e.g. in geology, one might arbitrarily say sand becomes “sandstone” when grains are cemented, etc.), or by acknowledging that some categories are fuzzy sets rather than strict logical classes. In computer science and mathematics, the notion of a threshold in algorithms or the concept of a continuum in calculus similarly recognise that summing imperceptible increments can yield a perceptible change – the integral of infinitesimal parts produces a finite whole. So while scientific measurement often introduces precise cut-offs for convenience, the underlying phenomena frequently lack sharp borders. The Sorites Paradox, in highlighting the absence of sharp boundaries, resonates with modern scientific understanding that many categories (from “tall mountain” to “healthy ecosystem”) are somewhat arbitrary or conventional. It challenges us to consider whether our scientific terms are like convenient labels imposed on continua.

Identity and Myth – The Ship of Theseus: No discussion of gradual change would be complete without mentioning the Ship of Theseus paradox, a classic puzzle from Greek lore about identity over time. While not a Sorites Paradox in the strict logical sense (it’s more about identity than vague predicates), it bears a strong family resemblance. The myth asks: if Theseus’s ship, preserved in a museum, has its planks replaced one by one, at what point does it cease to be the “same” ship? If one plank is swapped out, it’s still Theseus’s ship. Another plank replaced – surely still the same ship. Yet if eventually every single piece of wood is replaced, can we still call it Theseus’s original ship? Some say no – at some point it became a different ship – but when? There is no obvious precise replacement that suddenly converts it into another vessel. We have a sorites-style dilemma about the continuity of identity: a series of individually negligible changes (one plank at a time) leads to a drastic difference (a completely new set of planks) with no clear breaking point. This paradox from myth emphasises vague identity – the idea that even the existence of a “same object” over time might be a matter of convention when changes are gradual. In Eastern philosophy, related ideas appear in Buddhism: the Buddha taught that what we call the “self” is really a collection of components (the five skandhas, meaning “aggregates” or “heaps” of existence) that are impermanent and ever-changing. There is no single moment one becomes a different self; rather, we are a process like a flame or a river. The Sorites-like view here is that personal identity is not a fixed essence but a heap of elements – and if one tries to pinpoint a core “I”, one finds only parts that themselves change. The “heap” paradox thus interestingly aligns with doctrines of no-self and emptiness in Buddhism, which deny sharp boundaries in the constitution of persons or things. While the Ship of Theseus and the Buddhist “heap of aggregates” are not typically framed as the Sorites Paradox formally, they underscore a common insight: when something changes little by little, our standard definitions and identities begin to blur. This is a theme that has inspired literature and storytelling as well – from Kafka’s Metamorphosis (where a man gradually, then suddenly, is no longer himself) to modern science-fiction scenarios of incremental cyborg replacement. All these cultural, artistic, and mythical references drive home the intuitive power of the Sorites Paradox: it exposes how deeply uncomfortable we are with drawing lines in a continuum, and how that discomfort pervades many aspects of human thought.

Philosophical Implications: Vagueness, Logic, and Truth

At its heart, the Sorites Paradox is a problem about vague language and categories. Philosophically, it raises the question: How can we reason rigorously with concepts that have no clear boundaries? Terms like “heap,” “bald,” “tall,” “rich,” or “young” are ubiquitous in natural language – we use them constantly and usually effectively. Yet, as the paradox shows, if we treat them like precise predicates in logical reasoning, we get contradictions. Most philosophers agree that something in the Sorites argument must be wrong – either one of the premises is false, or the reasoning is invalid in a subtle way – because the conclusions are absurd. But pinpointing the culprit is tricky. The paradox forces a reassessment of classical logic and semantics: perhaps classical two-valued logic (where every statement is either true or false, with no middle ground) is too rigid for vague terms. Indeed, the Sorites has driven a great deal of work in the philosophy of logic on many-valued logics, fuzzy logic, supervaluationism, and other approaches to vagueness.

One possible diagnosis is that the seemingly harmless premise – “Removing one grain cannot turn a heap into a non-heap” – is not in fact universally true. Maybe there is some grain in the heap whose removal is the tipping point, even if we cannot know which grain it is. This is the Epistemic theory of vagueness: it holds that vague predicates do have sharp boundaries, but those boundaries are unknowable to us. According to this view (defended by philosophers like Timothy Williamson), every person has a precise height at which they go from “not tall” to “tall,” every heap has an exact grain where it ceases to be a heap – we just can’t tell where that boundary lies because of our limited knowledge or observational precision. This view preserves classical logic (so the law of excluded middle – either you are tall or not tall – remains true for every individual) at the cost of accepting a kind of semantic ignorance: there are truths out there (e.g. that 1,000 grains is not a heap but 1,001 grains is a heap) which we cannot know. Critics argue this is somewhat implausible – it posits an invisible sharp line in what seems to be a continuum – but it does offer one way out: the Sorites argument would fail because one of those inductive steps (“if n grains don’t make a heap then n+1 grains don’t”) would be false at the unknown cutoff point.

Another popular approach is many-valued or fuzzy logic. Instead of insisting every proposition is simply true or false, fuzzy logic allows for degrees of truth. Perhaps the statement “X is a heap” is not strictly 0% true or 100% true, but can have some value in between. For example, one grain might make “heap” only 0% true, 500 grains might make it, say, 20% true, 100,000 grains 99% true, etc., with a smooth gradation. In a fuzzy logic setting, the Sorites argument loses its force because modus ponens doesn’t straightforwardly apply with partial truths – a premise that is “almost true” can lead to a conclusion that is slightly less “almost true,” and so on, without contradiction. The appeal of degree theories is that they respect the intuition of continuity: as you remove grains one by one, the truth of “heap” can gently fade out from true to false rather than dropping off a cliff. However, this approach must contend with the problem of higher-order vagueness – if we have degrees of truth, we then have to ask where the cutoff between “true” and “somewhat true” lies, potentially creating a new hierarchy of vagueness about vagueness. Still, fuzzy logic remains an elegant, mathematically precise way to model vagueness (it’s used in fields like control systems and AI to handle graded phenomena). It essentially tells the Sorites: the premises gradually lose truth value as n grows, so we never actually have a string of fully true premises leading to a false conclusion – thus no paradox.
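To make the degree-theoretic picture concrete, here is a minimal sketch, assuming a simple linear membership function (the cut-offs of 100 and 10,000 grains are arbitrary choices; the illustrative percentages quoted above would correspond to a different, non-linear curve):

    def heap_degree(grains: int) -> float:
        """Degree (0.0 to 1.0) to which 'is a heap' is true, on an assumed linear scale."""
        LOWER, UPPER = 100, 10_000   # below LOWER: clearly no heap; above UPPER: clearly a heap (assumed)
        if grains <= LOWER:
            return 0.0
        if grains >= UPPER:
            return 1.0
        return (grains - LOWER) / (UPPER - LOWER)

    # Removing one grain never shifts the truth degree by more than about 0.0001,
    # yet across the whole series the degree slides smoothly from 1.0 down to 0.0.
    print(heap_degree(1_000_000), heap_degree(5_000), heap_degree(50))   # 1.0  ~0.495  0.0

On this picture there is no cliff-edge for the paradox to exploit: each conditional premise is very nearly, but not perfectly, true.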

A different strategy, supervaluationism, says that vague statements can be viewed as neither true nor false in borderline cases, but that logic can nonetheless be preserved. We imagine all the ways we might sharpen the vague term ("precisifications") and say a statement is super-true if it's true on all precise ways of drawing a boundary, and super-false if false on all of them. Anything in between is indeterminate. Under supervaluationism, the statement "heap (500 grains)" might be indeterminate (since some precise definitions of "heap" might set a minimum at 600 grains, others at 400, etc.). The Sorites series then fails because at some point in the induction you go from definitely heap (for large numbers) to an indeterminate zone, breaking the chain of fully true implications. Supervaluationism has the attractive feature that it upholds classical laws like the law of excluded middle at the super-truth level: even if P itself is indeterminate, the disjunction "Either P or not P" comes out super-true, because it is true on every admissible way of drawing the boundary. But it has been critiqued for treating indeterminacy as a kind of meta-level epiphenomenon rather than a property of the statements themselves.
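A minimal sketch of the super-truth machinery, with an invented range of admissible cut-offs (400 to 600 grains) standing in for the precisifications:

    # Each admissible "precisification" is modelled as a sharp cut-off for 'heap'.
    precisifications = range(400, 601)   # assumed admissible cut-offs: 400, 401, ..., 600 grains

    def status(grains: int) -> str:
        verdicts = {grains >= cutoff for cutoff in precisifications}
        if verdicts == {True}:
            return "super-true"      # counts as a heap on every admissible sharpening
        if verdicts == {False}:
            return "super-false"     # counts as a heap on no admissible sharpening
        return "indeterminate"       # borderline: the sharpenings disagree

    print(status(1000), status(500), status(10))   # super-true  indeterminate  super-false

Note that "either it is a heap or it isn't" holds under every single precisification, which is why the law of excluded middle survives at the super-truth level even for borderline numbers of grains.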

Other approaches include contextualism, where the meaning of “heap” can shift depending on contextual standards or the comparison class. On a beach full of massive dunes, perhaps a small pile of sand wouldn’t count as a “heap” in that context, whereas on a tabletop even a few dozen grains might count as a “heap” in conversational context. Contextualism suggests that the paradox arises from a sneaky shifting of context – as we iterate the heap removal, our implicit standard shifts without us noticing, giving the illusion that no boundary exists. By being sensitive to context, one could say each conditional (“if n grains is a heap, n-1 is a heap”) holds in a fixed context, but as n shrinks, context might tighten the usage of “heap.” Eventually the contextually determined threshold is reached and the premise fails. This can be consistent but requires a fairly complex account of how our conversational context shifts in a Sorites scenario.

Finally, a radical proposal is to “embrace the paradox” and allow that some contradictions can be true in cases of vagueness. This is the dialetheist or paraconsistent logic approach championed by philosophers like Graham Priest. On this view, borderline cases might make the statement “X is a heap” both true and false. Rather than exploding into logical anarchy, one uses a paraconsistent logic that can contain contradiction safely. While intriguing, this approach is less popular, as it conflicts with the deeply held principle of non-contradiction. Yet it resonates in spirit with certain mystical or dialectical philosophies (as the Marxist dialecticians earlier hinted, wanting a “logic of contradiction” for gradual change).

In summary, the Sorites Paradox has profound implications for logic and semantics. It teaches us that the neat bivalent (true/false) categories we often assume can falter when applied to everyday concepts. The paradox “leads to absurdity” if we insist on classical reasoning, so either our reasoning principles or our understanding of truth must be adjusted. Many theorists conclude that vagueness is a real feature of the world (or at least our representation of it) that requires special treatment – the world is not neatly chopped at the joints of our words. This has encouraged a view of meaning where usage and practical success are more important than rigid definitions. As some philosophers note, we use vague terms successfully in daily life precisely because in practice we don’t push them to the Sorites extreme. The paradox only arises in artificial, philosophy-seminar scenarios where one insists on applying the term all the way down or up a series. In ordinary life, we simply tolerate a bit of indeterminacy. This leads to a broader lesson: not every question has a precise yes-or-no answer. Sometimes “Is it a heap?” genuinely has no definite answer – and that’s okay. In such cases, trying to force a binary choice (heap or not) is what causes the mischief. The Sorites thus opens up explorations of philosophy of language (what is the meaning of a vague word? what is the truth-value of a borderline statement?) and even metaphysical questions about whether reality itself has sharp borders or only our language does.

Having surveyed these general philosophical angles, we now turn to the domain of ethics, where problems analogous to the Sorites Paradox appear. Ethics deals with vague predicates too (what counts as “honest”, “harmful”, “fair” can have borderline cases), and it faces aggregation problems: many small impacts might sum to something significant morally. We will see that the Sorites Paradox not only illustrates the difficulty of line-drawing in moral concepts but also emerges in ethical theories such as utilitarianism when considering large numbers of small benefits or harms.

Sorites Paradoxes in Ethics: Gradual Harm and Moral Gray Areas

Vagueness in Moral Concepts: Moral language, much like ordinary language, is rife with vagueness. Consider the predicate “is old enough to be morally responsible” – there is no universally agreed precise age at which a child becomes fully responsible for their actions. A child of 5 clearly isn’t; a young adult of 25 clearly is. If we try to apply Sorites reasoning to responsibility with age as the continuum, we’d say: if a person is not responsible at age 5, adding one day to their age won’t suddenly make them responsible at 5 years and 1 day; and if not at that age, then not at 5 years and 2 days, and so on… this line of thought would “prove” that even a 25-year-old cannot be responsible, which is absurd. We thus see the same logical structure: a vague predicate (“morally responsible person”) combined with tolerance of small changes (one day’s growth doesn’t seem to change anything) can yield a false conclusion. The solution in practice is that societies draw lines – perhaps at 18 years for legal responsibility, or have gradual gradations of responsibility – but any line drawn is somewhat arbitrary. Another example: “lying” – at what point does a misleading statement become a full-blown immoral lie? A tiny fib or exaggeration might not seem to count as a lie, but there’s no single word or added falsehood that clearly flips a statement from acceptable to dishonest; yet a large accumulation of false details is definitely a lie. So ethical concepts like honesty, integrity, generosity, harm, etc., all have fuzzy edges. Slippery slope arguments in ethics often leverage this vagueness: for instance, in debates on abortion, one hears sorites-like reasoning – “If it’s permissible at 4 weeks of gestation, what about 5 weeks? 6? 7? … then why not at 39 weeks?” or conversely “If it’s wrong at birth minus one day, is it wrong at birth minus two days?” etc. A slippery slope is essentially a Sorites argument presented normatively. As philosopher Roy Sorensen noted, a slippery slope argument tries to show that allowing a single step (e.g. a small exception to a rule) inevitably commits one to allowing even extreme steps, by parity of reasoning – unless one can point to a principled stopping point. The Sorites Paradox shows that with vague predicates, finding that principled stopping point is challenging. In ethical discourse, this can lead to what some call the “line-drawing fallacy” or continuum fallacy: dismissing a distinction because one cannot identify a precise boundary. For instance, someone might argue “There’s no exact number of drinks after which a person suddenly becomes drunk, so there’s no meaningful difference between drunk and sober.” But that’s fallacious – differences in degree do matter even if they’re gradual. Ethicists respond by acknowledging spectrum conditions: sometimes moral categories have borderline cases and one must adopt either a precautionary principle (e.g. treat uncertain cases as the more dangerous side) or a conventional threshold (like a legal limit on blood alcohol content) for practical purposes. Underneath those responses, the Sorites is lurking, reminding us that moral reality may not provide sharp lines for our convenience.

Many Small Harms: The Dilemma of Imperceptible Damage: A striking application of Sorites-like reasoning in ethics involves cases of collective harm or what Derek Parfit called “imperceptible harms.” Suppose a large number of people each perform some action that has a tiny negative effect on a victim or on the world – so tiny that no single action noticeably harms anyone. Is it possible for the combined effect to be seriously harmful, and if so, are the individuals responsible? This is sometimes illustrated by Parfit’s thought experiment known as “The Harmless Torturers.” In Parfit’s scenario, imagine 1,000 torturers and 1,000 victims. In one version, each torturer has a dial with 1,000 settings controlling pain for a single victim. They increment the dial gradually; each single click increases the victim’s pain by an imperceptible amount (too small to notice), but if the torturer turns it all the way up through 1,000 imperceptible increments, the victim ends in agony. Here, each torturer individually inflicts great pain by the end – clearly immoral. But Parfit then imagines a second version: the torturers are arranged so that each of the 1,000 torturers controls a network of devices in such a way that each torturer’s single button press raises the pain for all 1,000 victims by one step. Thus, each victim’s pain is the sum of 1,000 tiny increments (one from each torturer). After each torturer presses the button once, every victim suffers the same severe agony as before. Yet now, no individual torturer can be said to have made any victim feel perceptibly worse – each torturer’s contribution was only 1/1000 of the total, too small to notice. In this setup, from the narrow perspective of each torturer: “My action didn’t really hurt anyone, since it made no noticeable difference.” And strictly speaking, it’s true that after any single torturer’s act, each victim would be almost as well-off as before (999 increments vs 1,000 increments of pain might feel the same). Nevertheless, collectively they have tortured people. This presents a moral Sorites: zero pain is fine, extreme pain is awful, but if no single button press makes a noticeable difference, how do we assign blame? Parfit’s example dramatizes a real-world problem: consider climate change. No single person’s carbon emissions measurably change the global climate – releasing one ton of CO₂ more into the atmosphere will not be perceptibly noticeable in global temperatures. So an individual might reason, “If I drive my petrol car or fly in a plane, the extra emissions I cause won’t themselves cause any noticeable harm; therefore my action is harmless.” This is analogous to removing one grain from a heap – “one grain won’t matter.” But of course, if everyone thinks that way, the result is catastrophic harm. A stable climate is like the heap; we remove one “grain” of CO₂ capacity at a time. Ethically, we know that the aggregate of millions of trivial contributions can wreck the climate, yet our moral intuitions and traditional theories struggle to assign accountability to any single contributor who can claim their piece was imperceptible. Philosopher Shelly Kagan called this the problem of “making a difference”: morality often assumes individual actions either harm or benefit others in discernible ways, but what about actions that only harm in combination with thousands of others? 
Some have even cast this in Sorites terms as the “threshold problem” – there may be a threshold at which harm becomes perceptible or significant, but each individual is below that threshold. If we say “only actions that make a perceptible difference are wrong,” then we’d absurdly conclude that no single fossil fuel burning, no matter how many, is wrong – until the last one that tips over into disaster, akin to saying no straw is the one that breaks the camel’s back except perhaps the very last straw.
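To see the arithmetic of the second version laid bare, here is a sketch with invented units and an invented perception threshold (Parfit's example is qualitative and fixes no such numbers):

    TORTURERS = 1000
    VICTIMS = 1000
    INCREMENT = 0.001             # pain one button press adds to each victim (assumed units)
    PERCEPTION_THRESHOLD = 0.01   # smallest change in pain a victim can notice (assumed)

    per_press_change = INCREMENT                     # what any one torturer adds to any one victim
    pain_per_victim = TORTURERS * INCREMENT          # what all 1,000 presses add up to per victim
    total_suffering = VICTIMS * pain_per_victim      # aggregate agony across all victims

    print(per_press_change < PERCEPTION_THRESHOLD)   # True: no single press is noticeable
    print(pain_per_victim, total_suffering)          # 1.0 per victim, 1000.0 overall

Each torturer's contribution is far below anything a victim could detect, yet the summed result is unambiguous agony – which is exactly the gap an act-by-act moral accounting struggles to close.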

This puzzle has pushed ethicists to consider new principles of collective responsibility. One idea is to reject the premise that causing an imperceptible harm is morally neutral. After all, even if a victim can’t notice the incremental pain from one torturer, that torturer still participated in causing actual agony. One cannot wash one’s hands and say “I didn’t really hurt him” if the harm did occur in the end. Some philosophers argue for a share of the harm approach: each torturer did 1/1000 of the harm, so each is accountable for that fraction of wrongdoing. The difficulty is that in scenarios like climate change, harm is not linearly divisible; the effects are diffuse and collective. Others, like philosopher Christopher Kutz, suggest a complicity principle: even if your individual act didn’t itself cause harm, by intentionally participating in a collectively harmful activity, you incur responsibility. Judith Lichtenberg talks about “new harms” in an interconnected world – like buying a product that, through a chain, finances exploitation – where our normal direct cause-and-effect view falters. The key is recognising emergent harm: harm that emerges only from the combination of many acts. Ethically, this may require a shift from an atomistic view of morality (“Did my act cause harm or not?”) to a collectivist or rule-based view (“If such acts are widely performed, do they cause harm? If yes, then each act is wrong to do.”). This is essentially adopting a form of rule utilitarian reasoning (as we will discuss in the utilitarianism section): judge acts by what would happen if everyone did likewise. Indeed, during the COVID-19 pandemic, as one article noted, people learned to internalise this logic: one person going to a party might not by themselves spread the virus widely, but if everyone does it, the pandemic rages – so we conclude each person has a responsibility to refrain, even though individually one might reason “my outing won’t make a difference”. The Sorites paradox of small harms thus highlights a crucial philosophical point in ethics: moral responsibility can exist even for actions that by themselves seem too small to matter. We often have to consider the aggregate effect of many people behaving the same way. In everyday terms, this is the insight behind slogans like “Every vote counts” or “If you think you’re too small to make a difference, try sleeping with a mosquito in the room.” Each individual contribution is tiny, but collectively they add up, and so we project the importance of the collective outcome back onto each contributor’s choice.

This understanding also helps counter a dangerous form of rationalisation that Gretchen Rubin (citing the one-coin paradox) called the “loophole of one coin” – we excuse ourselves by focusing on the triviality of one instance rather than the significance of the pattern. Ethically, overcoming the Sorites paradox means choosing principles that break the chain of complacency. Instead of saying “this one act doesn’t matter,” we realise that somewhere a line must be drawn – otherwise we end up endorsing great evil or harm through a million small steps. Parfit’s torturer example forces us to admit that even imperceptible increments matter morally, precisely because enough of them together matter. There is an inherent tension here: it asks individuals to sometimes act in ways that seem over-scrupulous or supererogatory (like avoiding a luxury that by itself would hardly hurt anyone) for the sake of preventing a collective heap of harm. This leads us into questions of ethical theory: how do different moral frameworks handle this problem of aggregation? Are some better equipped than others to resolve Sorites-like issues in ethics? To explore that, we now turn to utilitarianism, the family of ethical theories arguably most concerned with aggregating harms and benefits. We will outline the main types of utilitarianism and examine how each might view paradoxes of small increments – such as Parfit’s scenarios or the “repugnant” implications of adding many small goods.

Utilitarianism: Types and Sorites-like Challenges

Utilitarianism in a Nutshell: Utilitarianism is a broad ethical doctrine holding, in essence, that the right action is the one that maximises overall well-being or happiness. It falls under consequentialism: only the outcomes (consequences) of actions matter for their moral evaluation. Classical utilitarians like Jeremy Bentham and John Stuart Mill argued that pleasure and the absence of pain are the intrinsic goods to be promoted. The famous utilitarian slogan is “the greatest happiness of the greatest number.” In utilitarian calculation, each person’s well-being counts impartially, and one sums up the positive and negative effects on all affected beings to determine the best action. There are, however, many variants of utilitarianism, which differ on what counts as “well-being,” how to aggregate it, and how to apply the theory in practice. We will briefly explain several key types:

  • Act Utilitarianism vs. Rule Utilitarianism: This distinction concerns how utilitarian reasoning is applied. Act utilitarianism (the classic form) says to evaluate each individual act by its consequences: an act is right if it produces at least as much net good (utility) as any alternative act would. Rule utilitarianism, in contrast, suggests we should follow rules that, in general, tend to produce the best consequences if widely adopted. An action is right if it conforms to a set of rules that maximise happiness overall when generally practiced. Rule utilitarianism is an “indirect” form of consequentialism – it doesn’t judge each act’s outcome in isolation, but by the outcomes of adhering to rules. Act utilitarianism is very flexible but has to calculate afresh each time; rule utilitarianism can accommodate moral rules and intuitions more easily (like “don’t lie” or “don’t steal” usually leading to better results).
  • Hedonistic (Classical) vs. Preference vs. Ideal Utilitarianism: These refer to what is counted as the utility to maximise. Hedonistic utilitarianism (Benthamite) equates well-being with pleasurable experiences minus painful experiences. Preference utilitarianism (associated with thinkers like R. M. Hare and Peter Singer) defines well-being in terms of individuals getting their preferences or desires satisfied, not just raw pleasure. It acknowledges that people value things (like autonomy, knowledge, relationships) that might not reduce simply to pleasure. Ideal utilitarianism (e.g. G. E. Moore’s version) considers certain things like artistic beauty or friendship as intrinsic goods, even if they don’t produce pleasure. All these are still utilitarian in that they aggregate these goods across people impartially, but they differ on what they count in the utility calculus. Welfarism, in general, is the principle that only individuals’ well-being determines an outcome’s value – all utilitarians accept that, though they define well-being differently (hedonic vs. preference, etc.).
  • Total vs. Average Utilitarianism (Population Ethics): When considering populations of different sizes, total utilitarianism says to maximise the total sum of happiness, even if that comes by increasing the number of people with positive welfare. Average utilitarianism says to maximise the average welfare per person. This becomes crucial in what’s called population ethics – should we, for example, prefer a larger population with overall more total happiness but perhaps lower average happiness, or a smaller population with higher average well-being? Total utilitarianism, if taken to its logical conclusion, leads to what Parfit dubbed the Repugnant Conclusion: the idea that a very large population of lives barely worth living could be considered better than a smaller population of very happy lives, simply because the total happiness is greater when you have enough people, even if individually they are not very happy. (We will examine this paradox shortly.) Average utilitarianism tries to avoid that by focusing on quality rather than sheer quantity of lives, but it has its own issues and paradoxes (for instance, it might imply it’s bad to add an extra happy person if they are below the current average happiness, even though their life is positive).
  • Negative Utilitarianism: This less common variant prioritises the minimisation of suffering over the maximisation of happiness. In a strong form, negative utilitarianism might say that our duty is first and foremost to reduce pain; happiness is of secondary importance or even treated as neutral until pain is eliminated. Philosopher Karl Popper advocated a version of this, suggesting it’s more urgent to alleviate misery than to create extra happiness. Extreme negative utilitarianism could even conclude that it’s better to have no life at all (zero suffering, zero happiness) than some life with any suffering, which leads to radical implications like considering whether humanity should cease to exist to eliminate suffering. Few utilitarians go that far, but negative utilitarian perspectives have influenced discussions in, say, global health ethics (focus on eradicating diseases and suffering as priority).
  • Two-level (Multi-level) Utilitarianism: This is a pragmatic reconciliation between act and rule utilitarianism proposed by R. M. Hare and others. Two-level utilitarianism suggests that in our day-to-day life, we should generally follow intuitive moral rules (because constant calculation is impractical and prone to bias), but at a “critical level” one can step back and do direct utilitarian calculation for especially important or novel situations. In effect, one uses something like rule-utilitarian “heuristics” normally, but one recognises that those rules derive from an underlying act-utilitarian logic and can be overridden when necessary by that logic. This tries to combine the consistency of utilitarianism with the common-sense guidance of virtue and rule-based ethics.
  • Scalar and Satisficing Utilitarianism: Traditional utilitarianism is maximising – it demands the best outcome. Scalar utilitarianism suggests we can think of actions being morally better or worse by degrees of utility, without a sharp right/wrong division. Satisficing utilitarianism says it might be acceptable to aim for an outcome that is “good enough” (above some threshold) rather than strictly the maximum. These are refinements that handle the fact that sometimes requiring absolute maximisation is too demanding or unrealistic.
  • Implications of Impartiality and Aggregation: All utilitarianisms share two features: impartiality (everyone’s well-being counts equally) and aggregation (you can add up or aggregate benefits/harm across people). Aggregation is particularly relevant to Sorites-like issues. Utilitarians explicitly hold that enough minor benefits can outweigh a major benefit to one person. For example, saving one person’s life versus curing many people’s headaches: an anti-aggregationist might say “no number of headaches relieved equals the importance of saving one life” – but utilitarians generally do say “if there are enough people with headaches, at some very high number their total suffering can exceed the badness of one death”. That is effectively a Sorites stance: one headache is negligible compared to a death, but a million headaches might not be. Utilitarians would argue that there is some number at which the total mild suffering outweighs the severe suffering of one person. In other words, they are willing to aggregate even very small harms across many individuals to claim that collectively, quantity can override quality. This commitment is intuitive in some cases (e.g. saving a million people from slight injuries vs one person from death – many would feel death is worse, but what about saving a billion people from a week of pain each vs one death? At some point the scales tip). However, it can also lead to repugnant or paradoxical conclusions, which we explore next.
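A back-of-the-envelope version of the aggregationist arithmetic in that last point, with utility numbers invented purely to show the structure of the claim:

    DEATH_DISUTILITY = 1_000_000    # assumed badness of one premature death (arbitrary units)
    HEADACHE_DISUTILITY = 0.5       # assumed badness of one mild headache (arbitrary units)

    def total_headache_harm(n_people: int) -> float:
        return n_people * HEADACHE_DISUTILITY

    print(total_headache_harm(1) < DEATH_DISUTILITY)       # True: one headache is trivial next to a death
    crossover = int(DEATH_DISUTILITY / HEADACHE_DISUTILITY) + 1
    print(crossover, total_headache_harm(crossover) > DEATH_DISUTILITY)   # 2000001 True

The anti-aggregationist denies that any such crossover number exists; the utilitarian is committed to there being one, even if nobody can say exactly where it falls – a thoroughly soritical predicament.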

Sorites and Utilitarian Challenges: Utilitarianism, by its nature, is comfortable with calculation along continua – it assigns numerical values to well-being and says “more is better.” In a sense, it offers a straightforward way to handle vagueness: instead of a sharp true/false boundary (heap or not heap), utilitarians have a continuous scale of utility. Removing one grain of sand might correspond to a minute drop in utility (perhaps because a slightly smaller heap gives a bit less satisfaction to whoever likes heaps). There is no paradox as long as one acknowledges each grain does make some tiny difference in utility – the “imperceptible” difference is still a difference in the sum. Indeed, one might argue that a rational utilitarian rejects the premise that one grain never matters: strictly, even if the change is too small for a person to notice, it’s a small loss of a potential pleasure or aesthetic value. So classical utilitarianism could say: 1,000,000 grains gives, say, X amount of happiness to someone looking at the heap; 999,999 grains gives X – δ happiness (a tiny bit less). It might be a very, very small δ, but not zero. Thus the soritical premise “if 1,000,000 grains is happy-heap, then 999,999 is also happy-heap” would be false in utilitarian terms, because the person is ever so slightly less satisfied – not enough to care, but in principle a decrement in utility has occurred. This reasoning effectively solves the paradox by denying the tolerance principle and replacing it with a gradual trade-off principle: each grain removed trades off a minuscule amount of utility.

However, while utilitarianism conceptually has this fine-grained approach, in practice it faces its own Sorites-like dilemmas especially in ethical choices about large numbers. We have already encountered one: the Harmless Torturers scenario. A pure act-utilitarian analysis of that scenario would say: Each torturer pressing the button adds a very small amount of pain to 1,000 people. Summing over all victims, each torturer’s act causes a total of perhaps 1,000 * δ pain (if δ is the imperceptible increment per victim). So each act actually has a measurable bad consequence in total (just spread out). If δ pain per person is truly imperceptible, one might argue whether “pain that no one feels” counts as pain – but a utilitarian could count it as objective damage even if unfelt. If we consider only felt pain, then act-utilitarianism has a challenge: by hypothesis, no victim feels the difference made by one torturer. If no one feels any worse, can we say utility was reduced? This touches on the idea of just-noticeable differences: perhaps the utility function of felt pain has a threshold, a discontinuity where anything below it registers as zero. In that case, a naive act-utilitarian might wrongly conclude “no felt pain increase, so no harm.” This would lead act-utilitarianism to say each torturer’s act is morally neutral, which is unacceptable because collectively they cause agony. Here is where rule utilitarianism would shine: a rule utilitarian would say, “Imagine if everyone presses their button – the resulting state (1000 people in agony) is terribly worse for overall utility. Therefore, we have a rule: do not press the button (i.e. do not participate in imperceptible harm acts). Each torturer following the rule will prevent that collective disaster.” Rule utilitarianism captures our moral intuition that each torturer’s action is wrong by evaluating the pattern of behaviour in general. This addresses the Sorites problem of small harms by effectively inserting a threshold by convention: treat any contribution to a harmful collective act as wrong, even if by itself it’s negligible, because we care about the outcome of the collective. Act utilitarianism can account for it too, but only if it recognises that the imperceptible difference still has moral weight when aggregated – which means considering the counterfactual: “If I don’t press the button, others might still, but if enough of us refrain…” etc. Standard act utilitarianism struggles when an individual action’s marginal utility is basically zero in isolation but non-zero in combinations that one action alone doesn’t control. This is sometimes called the problem of overdetermination or distributive harm – many acts together cause harm, but no single act is individually necessary or sufficient for the harm. Utilitarians have tried to refine act-utilitarian decision-making in such cases by appealing to expected utility or by considering one’s act as part of a probability of tipping the outcome. If harm only materialises when a threshold is passed (like a 1000th straw breaking the camel’s back), one might say before that each straw adds risk or brings the system closer to breaking. Expected utility calculations could then assign each straw a small fraction of the bad outcome’s disutility proportional to the increased risk of collapse it causes. This is a technical fix to ensure act utilitarianism doesn’t ignore tiny contributions. 
It basically smuggles in the idea that reality might have a threshold, but we often don’t know if our act might be the one to cross it – so prudence (and probability) dictate we treat it as potentially harmful.
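Here is one way to sketch that expected-utility fix, with the camel-and-straw numbers invented for illustration (the point is only the structure of the calculation):

    CAMEL_THRESHOLD = 1000          # assumed: the camel's back gives way at the 1,000th straw
    HARM_OF_COLLAPSE = -10_000.0    # assumed disutility if the load collapses

    def expected_harm_of_one_more_straw(count_low: int, count_high: int) -> float:
        """Expected harm of adding one straw when the current straw count is only known
        to lie somewhere in [count_low, count_high] (uniform ignorance assumed)."""
        possible_counts = range(count_low, count_high + 1)
        # My straw is the breaking straw exactly when the current count is THRESHOLD - 1.
        p_tip = sum(1 for c in possible_counts if c == CAMEL_THRESHOLD - 1) / len(possible_counts)
        return p_tip * HARM_OF_COLLAPSE

    # If the count could be anywhere from 900 to 999, each extra straw carries a 1-in-100
    # chance of being the one that tips the load:
    print(expected_harm_of_one_more_straw(900, 999))   # -100.0

Under uncertainty about where the threshold sits, no individual straw gets to count as harmless: each carries a small but non-zero slice of the expected disaster.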

Moving to another Sorites-like utilitarian issue: the Repugnant Conclusion. This arises in total utilitarianism regarding populations. Parfit's formulation was: for any possible world with a large number of very happy people, there seems to be a possible world with far more people who are only barely happy (lives just worth living), yet that world could have a greater total sum of happiness – thus, by utilitarian lights, it's "better". For example, imagine World A: 1 billion people, all living wonderful lives with 100 units of happiness each. World Z: 100 billion people, each living a life just barely positive in value (say 1 unit of happiness each). World Z contains 100 billion happiness units vs. World A's 100 billion units – they might be equal in that simplistic arithmetic, but we can make Z even larger, say 200 billion people at 1. Total happiness then is 200 billion, surpassing A, although each person's life in Z is meagre and perhaps filled with lots of mild suffering but overall just positive. Many people find it repugnant to say that Z is a better outcome than A – after all, Z might look like a dystopia of huge crowded slums with just slightly more joy than sorrow in each life, whereas A is a prosperous utopia. Yet pure total utilitarian logic seems to commit one to saying Z (the huge population with low average welfare) is better, because it has more total welfare. How is this akin to the Sorites? Because we can imagine a sorites series of hypothetical populations between A and Z. Parfit actually constructs such a series in his "Mere Addition Paradox": step by step, you go from A to Z by adding people in a way that makes no step seem worse. Perhaps first you add some people with slightly lower happiness; the average goes down a bit but the total goes up (and perhaps no one already existing is made worse off – just new people added – so there is "no harm" in adding a just-happy person). Then you might redistribute some happiness, and so on, in a series such that at each step no one can strongly say one world is worse than the previous. By a sequence of apparently innocuous changes, you reach the "Z" world. Each addition or change seems acceptable (who could object to adding people who will have lives worth living? It seems cruel to deny existence to someone who would have a life worth living, and it doesn't harm those already existing). Yet at the end, Z strikes almost everyone as worse than A. This is like removing one grain at a time: each addition of a mediocre life seems fine (or at least not worse), but the end result is counter-intuitive. So the Repugnant Conclusion is a kind of Sorites in population ethics – a series of locally acceptable steps leads to a globally disturbing conclusion. Total utilitarians, as Parfit notes, essentially have to accept the Repugnant Conclusion unless they find a flaw in one of the steps. Average utilitarianism was seen as an escape: average utilitarians would not prefer Z over A, because Z has lower average happiness. However, average utilitarianism runs into paradoxes of its own – most notoriously what Gustaf Arrhenius dubbed the "Sadistic Conclusion": on averagist reasoning, adding a few lives of outright suffering can count as "better" than adding a much larger number of lives that are happy but below the existing average, because the former drags the average down less.
Plus, one can make a Sorites of a different kind for average utilitarianism: adding a person with slightly below-average happiness can very slightly drop the average, which the average-utility criterion counts as making the world worse, even though that person's life is happy and seems no tragedy to include. If you keep adding such people, each addition barely lowers the average, but at the end you might have a large population with a moderately lower average – which average utilitarianism would deem worse than the initial, smaller, higher-average one, contrary to the intuition that "more happy lives, even if somewhat less happy, could be a good trade." So neither total nor average utilitarianism cleanly avoids paradoxes of continuum trade-offs – this is an active area in ethical theory (sometimes involving hybrid theories, or critical-level utilitarianism, etc., beyond our scope).
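The arithmetic driving the A-versus-Z comparison, using the illustrative figures from above:

    # World A: 1 billion people at 100 units of welfare each.
    # World Z: 200 billion people at 1 unit each (lives barely worth living).
    pop_a, welfare_a = 1_000_000_000, 100
    pop_z, welfare_z = 200_000_000_000, 1

    total_a, total_z = pop_a * welfare_a, pop_z * welfare_z
    avg_a, avg_z = total_a / pop_a, total_z / pop_z

    print(total_z > total_a)   # True: total utilitarianism ranks Z above A (200bn vs 100bn units)
    print(avg_z > avg_a)       # False: average utilitarianism ranks A above Z (1 vs 100 per person)

The two criteria simply disagree about which world is better – which is why each escape route from the Repugnant Conclusion tends to open a new paradox somewhere else.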

Connecting back to the Sorites: the paradox teaches the difficulty of drawing a line, and in ethics utilitarianism often says "don't draw a line – just keep counting and aggregating." This sometimes works elegantly (as in defusing the heap by assigning each grain an epsilon of utility), but at other times it means utilitarianism is willing to endorse conclusions in which no line ever gets drawn – conclusions that offend common moral intuition (like endorsing extremely large trade-offs of quality for quantity). Utilitarian philosophers are very aware of these challenges. Some respond by modifying the value theory (e.g. perhaps very low-quality lives have a different weight, or there is a threshold built into the concept of a "life worth living"). Others might reject transitivity in welfare comparisons (very controversially) to block the chain of "each addition is not worse" in the population paradox. In effect, to solve these, one almost has to introduce a non-linear element – a bit like saying, "past a certain point, adding more lives or more small benefits doesn't produce a better outcome" – which is analogous to admitting a threshold or denying full tolerance.

Finally, negative utilitarianism and the Sorites: negative utilitarians could be seen as drawing a hard line on suffering: no amount of small pleasures can compensate for significant suffering. In a way, they reject a part of the typical utilitarian Sorites stance by saying some bads cannot be outweighed by heaps of minor goods. For example, a strict negative utilitarian might say no number of happy people feeling mild joy is worth the intense suffering of even one individual. This avoids certain repugnant trade-offs (like the headaches vs. life case – a negative utilitarian would definitely choose to prevent the one person's great suffering rather than the many people's headaches, effectively siding with the anti-aggregationist intuition). But negative utilitarianism can lead to an opposite extreme: by emphasising suffering only, it might say that if a life has any suffering, that is a moral strike against it – thus it can spiral into advocating the elimination of life (to eliminate suffering) even if doing so also eliminates much pleasure. It's as if negative utilitarianism draws a soritical line in the other direction: one pain is too much. While not exactly a Sorites, it is a response to the Sorites-like additivity of classical utilitarianism that gives lexical priority to avoiding pain: even a "heap" of pleasures cannot cancel out a "heap" of pain in one person, some might argue. There are intermediate positions too – e.g. prioritarianism (which isn't exactly utilitarian, but related) gives extra weight to improving the lot of the worst-off. That means that, in aggregation, benefits to those who are very badly off count more than equal benefits to those who are well-off. This tilts away from the pure additive Sorites logic by making a "grain" of utility for a suffering person worth more than a "grain" for a happy person. As a result, it would not allow arbitrarily large numbers of minor gains to privileged people to outweigh serious harms to the poor or suffering. It's like acknowledging that not all utils are equal in every context – at least not if we want to avoid certain paradoxical trade-offs.

In summary, utilitarianism grapples with Sorites-like problems whenever it deals with continuums of harm or benefit and large numbers. Different versions handle it differently:

  • Act utilitarianism in pure form can misjudge cases of imperceptible impact, but by recognising incremental utility (however small) or using expected utility, it can theoretically handle them – yet it tends to treat everything as continuous, so it can sanction surprising large-scale trade-offs.
  • Rule utilitarianism can better capture our instinct to avoid each small step that leads to disaster (by effectively inserting a collective viewpoint – “if everyone did this, it’s disastrous, so don’t do it” – thereby blocking the Sorites progression at some point). For example, rule utilitarianism would support a rule like “do not pollute even a little” because if everyone does, the outcome is bad; this prevents the scenario where each says “just a little won’t hurt.” This aligns with general moral common sense for collective action problems.
  • Total utilitarianism embraces aggregation so fully that it runs into the repugnant conclusion – it is too willing to accept heaps of small goods in lieu of big goods or heaps of people with small goods instead of fewer with big goods.
  • Average utilitarianism tries to correct that but implicitly places a sort of threshold on acceptable life quality, which then has its own paradoxes.
  • Negative utilitarianism draws a line prioritising suffering elimination, which can avoid some Sorites problems by effectively saying some thresholds (like any non-zero suffering) have infinite weight, but that stance can yield extreme conclusions itself.

The interplay between Sorites and utilitarianism highlights a broader theme: ethics often must balance incremental thinking with threshold thinking. Utilitarianism is fundamentally incremental/aggregative: everything counts a little, and enough of a little counts a lot. This is very much in Sorites spirit – no bright lines, just accumulation. This can be a strength (consistency and flexibility) but also a weakness (counter-intuitive implications). Other ethical approaches (like rights-based or deontological ethics) sometimes impose hard lines – e.g. a right not to be killed that cannot be outweighed by any number of small benefits to others. That avoids a Sorites in one sense (one cannot just add up trivial gains to override a right) but creates a different kind of inflexibility (what if enough is truly at stake? etc.).

In conclusion, the Sorites Paradox serves as a reminder in ethics that we need to be mindful of “the heap of small decisions.” It urges us to ask: when do quantitative changes become qualitative changes in morality? Utilitarianism as a framework is one of the most straightforward in dealing with quantitative changes – it literally adds them up. Yet it must confront the paradoxes of that approach, refining its principles to ensure that our moral common sense (e.g. that one should not contribute even a tiny part to a terrible outcome, or that we shouldn’t prefer a world of tiny grinning joys over rich fulfilled lives) is respected or at least addressed. By correlating the Sorites Paradox with utilitarian thought, we gain insight into ethical dilemmas like climate action, public health measures, and population policy: we see why people struggle with galvanising action against imperceptible individual effects, and how ethical theory can guide us to overcome that by taking a broader view. We also see how a relentless focus on aggregation (the utilitarian hallmark) can lead to morally puzzling results akin to the heap paradox – challenging us to refine our approach to well-being, perhaps by incorporating considerations of distribution, quality thresholds, or rights that act as firebreaks against sliding all the way down a slippery slope.

In wrapping up this extensive exploration, we find that the Sorites Paradox is far more than an amusing puzzle about heaps of sand. It is a profound illustration of continuity in a world that demands distinctions. Its legacy in philosophy is a richer understanding of vagueness and a humbling recognition of the limits of binary thinking. Its echoes in arts, culture, and myth show that human beings have long grasped the paradox of gradual change in intuitive ways. And in ethics, it challenges us to think carefully about the accumulation of our seemingly insignificant choices – a timely lesson in an era where global problems are often the result of billions of small actions. The Sorites Paradox ultimately asks us to consider when and how we draw lines – be they lines between a heap and a non-heap, or lines in the moral sand that we vow not to cross. By studying its implications across disciplines, we gain not only theoretical knowledge but also practical wisdom: an awareness that big problems start small, and that clarity, where it exists at all, often lies at the ends of a long gradient of gray.

