What should a utilitarian do to produce the greatest possible happiness? Differing ethical perspectives, uncertainty about the future, and sociological naivete all make this question tricky to answer. But Nick Bostrom's 2003 paper 'Astronomical Waste' presents one logical, if unorthodox, solution: utilitarians should devote their lives to scientific development, because doing so will maximise the number of worthwhile future lives[1]. This argument depends on the striking calculation that our nearest supercluster in space, the Virgo Supercluster, could harbour 10²³ biological people. For comparison, approximately one hundred billion (10¹¹) people have ever lived on Earth. Taken together, these figures suggest that, in the event of successful space colonisation, the overwhelming majority of human life lies in the future.

To a total utilitarian, who concerns herself with maximising expected total utility, the dramatic disparity between the potential population and the current one dictates that she should apply herself almost exclusively to creating the technological advances that ensure these future persons come to exist. One caveat which I underline in this analysis is that the future lives must be 'worthwhile' for the endeavour to be productive in terms of utility. The paper also introduces an opposing school of utilitarianism, the person-affecting view, which stipulates that moral consideration should be given only to the presently existing population, thus ignoring the wellbeing of future people. Bostrom's thesis can read as contentious, for its demand that we devote ourselves to supporting a not-yet-existent future population is a colossal one. But the lack of a definite conclusion, uncharacteristic of an author who is expressly concerned with the application of philosophy (and has established an institute researching humanity's long-term prospects for this purpose), suggests to me that one of the paper's aims is to expose, implicitly, the failure of utilitarian theories to react to the immensity of our universe.

Technological development, the paper argues, is urgent because of the opportunity costs of delay. Total utilitarians and person-affecting utilitarians think about these costs in different ways, so some explanation and discussion of them is warranted. The creation of value-structures in space (Bostrom uses the example of 'worthwhile lives', as I have hitherto done) requires the use of space's resources, principally its energy (the variable R). Assuming our existence is not infinite, the time that elapses before space colonisation is an issue for the total utilitarian, because potential 'worthwhile' lives could have been lived during that period had colonisation occurred sooner -- the sum total of utility decreases with any loss of potential lives. Moreover, delay also reduces R itself, as black holes absorb energy and stars fruitlessly light the darkness (et cetera), so when we eventually do colonise space, fewer resources will be available for conversion into value. Within the Virgo Supercluster alone, Bostrom equates the loss of R to a loss of 'over 10¹³ potential human lives' per second, and notes that this rate 'boggles the mind'[2].
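A back-of-envelope calculation (my own arithmetic, not Bostrom's, using only the order-of-magnitude figures quoted above) makes the scale of the claimed waste concrete:

```python
# Back-of-envelope arithmetic combining the essay's headline figures.
# All inputs are order-of-magnitude estimates taken from Bostrom's text.

POTENTIAL_LIVES = 10**23       # people the Virgo Supercluster could harbour
LIVES_EVER_LIVED = 10**11      # rough count of all humans who have ever lived
LOSS_RATE_PER_SECOND = 10**13  # potential lives forgone per second of delay

SECONDS_PER_YEAR = 365.25 * 24 * 3600

# A single year of delayed colonisation forgoes over 10**20 potential lives:
lost_per_year = LOSS_RATE_PER_SECOND * SECONDS_PER_YEAR
print(f"{lost_per_year:.1e}")  # 3.2e+20

# Each second of delay forgoes 100 times as many lives as have ever been lived:
print(LOSS_RATE_PER_SECOND // LIVES_EVER_LIVED)  # 100
```

Nothing in the argument hangs on the precise values; the point of the exercise is only that any plausible inputs of these magnitudes yield a per-second loss dwarfing all of human history.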

The numbers may be large, but their magnitude is not strictly important; what matters is that energy which could have been used to sustain valuable life has been lost. It follows that a totalist seeking to maximise utility should focus on reducing this loss by making the technological developments that bring humanity closer to a cosmic diaspora and population boom. 'The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity'[3]. This aspect of the paper is entirely consistent with the core tenet of total utilitarianism -- 'greatest happiness for the greatest number', where 'number' means 'population size' -- but still leaves a sour aftertaste[4]. What totalist theory promotes is an undertaking not only inconvenient to us, the current population, but also dependent on a fleet of unknown variables about future technologies and lifestyles.

We might hope that the person-affecting approach is more pragmatic and actionable. Utilitarians of this breed include only current persons in their utility functions; they care about the 'greatest happiness for the greatest number', where 'number' means 'proportion'. On this view, energy waste in space is not a concern because it constitutes a loss of potential lives; it is a problem because the energy, which could have been used to improve existing lives, is lost. The 'expected utility to current individuals', could we use this energy for purposes of well-being, 'is astronomically great', argues Bostrom[5]. He does not mention that, given Earth's native resources are far from exhausted, it would be more appropriate to harvest these before venturing out of our orbit for more[6]. Additionally, as humans are already on average happy[7], it is likely that Earth's resources alone are sufficient to make all terrestrial lives worthwhile, making trips to space for more resources unnecessary. My point is that a person-affecting utilitarian with an average utility function does not consider energy loss a waste of potential human lives. Certainly this is the attitude we tacitly take at present.

Bostrom neglects to define 'worthwhile lives'. In another thought experiment the omission might not matter, but the endeavour proposed in this one involves extraordinary population sizes and a commitment to centuries of work. Furthermore, he assumes that 'any civilisation advanced enough to colonize the local supercluster would likely also have the ability to establish at least the minimally favorable conditions required for future lives to be worth living'. The endeavour to create new lives with energy in space must be constrained by the condition that the new lives will be good, or else it would be counterproductive in terms of utility. But it is a step beyond this basis to assume that future lives will inevitably be worthwhile because of progress made elsewhere. Technological and sociological developments can be linked, but they can also exist independently of one another. For evidence of this, we need only compare the marvel of rocket science to the lack of water infrastructure in developing countries, where one child dies from a water-related disease every twenty seconds[8]. To rephrase the point: our migration into space does not require 'minimally favorable conditions' to have been met anywhere, so it seems a mistake on Bostrom's part to assume that it does, and an unfortunate omission from the paper that no definition is given to the phrase 'worthwhile lives'.

To a totalist, the quality of future lives is more important than their number, as those lives must at least be non-negative in value to increase total utility. But one hardly finds 'non-negative' an adequate meaning of 'worthwhile'. It may be that, as of writing, there is a positive trend in global wellbeing, with healthcare expenditure, philanthropy, and the like increasing yearly. But neither this nor any scientific progress explicitly guarantees that future lives will be worth living. It appears, then, that more research must be done to understand what 'worthwhile' means, and whether we can sustain happiness throughout the endeavour of space colonisation. To refine Bostrom's instructions, we should insert another task of caution (point 2):

  1. Minimise existential risk[9].
  2. Ensure that future lives in space will be worthwhile throughout.
  3. Build the infrastructure to create and sustain future lives in space.

That space colonisation (let alone the task of creating 10²³ valuable lives) is still decades or centuries, if not millennia, away is a relevant yet overlooked aspect of the argument. A totalist does not in principle view the length of this interim as a case against the endeavour, but in reality certain discoveries could be made, before or after colonisation, which render it futile. For example, it is not implausible that we find human happiness to be negatively affected by population size. Such a discovery would set the elements of the utilitarian algorithm against each other (the 'greatest happiness' could only be won by a 'smaller number'), and our utilitarian attempt to lodge trillions of people in space would be self-defeating. Following the Three Mile Island nuclear accident, Daniel Dennett observed a paradox in how the unfortunate disaster led to positive consequences, such as regulation that may have prevented further disasters. In response, he coined the phrase 'Three Mile Island Effect' to describe the moral uncertainty of a future action (it is difficult to predict whether an outcome will be net positive or net negative)[10]. Similarly, he claims that precise utility values cannot exist, because not every consequence of an action is obvious or measurable. The 'Three Mile Island Effect', I believe, is especially applicable to the endeavour of space colonisation as Bostrom describes it, because the first birth in space is likely centuries away, and because the goal requires enormous scientific advances, some of which may change the nature of the endeavour completely.

A convincing case for space colonisation is made under both the totalist and person-affecting frameworks -- space's resources are useful to us either way. The conclusion of the paper offers no judgement on which framework to practise, but I find that neither has a satisfying resolution to the problems of population ethics. Were the totalist approach adopted, everybody would be morally compelled to devote their lives to scientific progress so that a population explosion might happen sooner. This does not seem feasible. And would we ever be satisfied with the amount of generated value, or would we create worthwhile lives indefinitely, filling the endless universe? Total utilitarianism and its monotonous quest for utility seem brutish to me. On the other hand, the person-affecting approach is not much more useful, being wholly oblivious to future peoples. If we subscribed to it, Earth would surely become a junkyard as all current individuals enjoyed themselves, climate change would be a concern only insofar as it prevented us from maximising our happiness, and children would be born into a world inferior to the one in which their parents grew up. This attitude towards the future does not seem feasible either.

A variation of the person-affecting view, one we can be said to practise already, is my preferred school of thought, particularly with respect to the issues raised in Bostrom's paper and this analysis. Yes, our primary moral responsibility should be the existing population, but potential lives should be a factor in our utility functions as well. I suggest that the proximity of a future life to today decides how much consideration we should give it, so that closer generations are regarded as important while those further away do not occupy a disproportionate amount of our efforts. In other words, our responsibility for the future should decrease exponentially with time, otherwise we could find ourselves acting on behalf of an infinite number of future people[11]. We should seek to access space's resources as fast as possible (especially once Earth's own are depleted), primarily in order to improve the lives of those currently alive and their offspring, and existential risk should be a moderate worry of ours. By contrast, traditional person-affecting utilitarians would not consider existential risk in a meaningful way, as shown in the paper[12].
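The discounting I have in mind can be sketched in a few lines; the 1% annual rate below is an illustrative choice of my own, not something Bostrom proposes:

```python
import math

def generation_weight(years_from_now, discount_rate=0.01):
    """Moral weight of a life beginning `years_from_now` years in the future.

    Exponential decay: the present generation has weight 1, and the weight
    halves roughly every ln(2)/discount_rate years (about 69 years at 1%).
    """
    return math.exp(-discount_rate * years_from_now)

print(round(generation_weight(0), 3))     # 1.0
print(round(generation_weight(100), 3))   # 0.368
print(round(generation_weight(1000), 3))  # 0.0

# The total weight over an infinite future is finite (the integral of
# e^(-rt) from 0 to infinity is 1/r), so the scheme never asks us to act
# on behalf of an unbounded number of future people.
```

The convergence of the total weight is the point of the exercise: near generations count almost fully, distant ones fade smoothly to nothing, and the sum of our obligations remains bounded.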

Philosophical conundrums that envisage unfathomable numbers of people usually come from playful, hypothetical thought experiments. Yet Nick Bostrom has found a convincing challenge for utilitarian theories that is rooted in real phenomena (astronomical waste) and feasible scientific progress. Perhaps this is simultaneously the most outlandish, the most numerically extreme, and, according to some utilitarians, the most pressing question there is.

--

[1] Bostrom, 'Astronomical Waste: The Opportunity Cost of Delayed Technological Development', p.311. [2] p.308. This is the waste to which the pun in the title refers. [3] p.311. [4] Essentially, Bostrom's argument in this form supports Derek Parfit's 'repugnant conclusion', which has long been used as a criticism of the 'total happiness' utility function: a large group of slightly happy people, according to total utilitarians, is preferable to a small group of blissful people. This is clearly not desirable according to current standards. Parfit, Reasons and Persons, pp.149-186. [5] p.313. [6] It is obviously easier to use Earth's natural resources and to improve our solar capture technologies than to use space itself as an energy source. [7] World Values Survey (Longitudinal Multiple-Wave file, version 2015). [8] Here is an echo of Christopher Lovell's regret that there remains a 'high proportion of those currently living in poverty, and yet we have been firing a selected few in to space for decades'. [9] Existential risk, the paper concludes, is an uppermost priority for total utilitarians as it threatens future populations, but a minor concern for person-affecting utilitarians, who gamble their lives in the hope that it won't occur before their deaths. Although these ideas are interesting, I have not discussed them in this essay. [10] Dennett, 'Information, Technology, and the Virtues of Ignorance', p.151. [11] This is a form of time-discounting incongruous with utilitarianism's common doctrine, but necessary all the same to deal with the potential of an infinite population. [12] Bostrom, 'Astronomical Waste: The Opportunity Cost of Delayed Technological Development', p.312.