A Scary Thought Experiment About Suffering

Could Be Wrong
7 min read · Nov 10, 2018


Conscious AI? Critical analysis of ethical frameworks? Me admitting that I’m selfish? If these topics interest you, read on!

A while back I imagined a situation where humans discover a new species of creature living deep beneath the Earth’s surface, and through scanning its neural circuitry we learn that it is capable of experiencing emotions at several billion times the range of the average human (the species consists of very large creatures who somehow managed to go about their lives undisturbed until now). Upon meeting humans, these creatures (which know English for some reason) make unrealistic demands on human labour which, if not satisfied, would place them in a state of suffering that outweighs the total possible suffering of all humans on Earth. Any sufficiently bizarre demand will do, like requiring all humans to dance simultaneously 8 hours a day (no thanks). When I originally thought this up, it didn’t matter to me how infinitesimally unlikely the situation was, only that it was technically possible.

To digress slightly, at the core of my ethical beliefs is utilitarianism: the doctrine that the best action a person or a society can take is the one which maximises ‘utility’, typically defined as the wellbeing of sentient beings. So basically, if you’re a utilitarian you want to minimise net (i.e. total) suffering. Utilitarianism makes sense in nearly all situations. If you have the choice to steal somebody’s purse, it’s unlikely that the increase in your wellbeing from having the purse in your possession outweighs the decrease in your victim’s wellbeing from having lost it. So utilitarianism says don’t steal the purse. Utilitarianism does not care about things like who deserves what, or who is to blame for something, or what the history of a conflict is, or what a person’s intentions are. All it cares about is how an action influences the total wellbeing in the universe.
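For the programmatically inclined, the decision rule can be phrased in a few lines: sum each action’s effect on everyone’s wellbeing and pick the maximum. Here’s a minimal sketch of the purse example, with utility numbers invented purely for illustration (utilitarianism itself doesn’t tell you how to measure wellbeing):

```python
# A toy version of the utilitarian decision rule described above.
# The utility numbers are invented for illustration only.

def net_utility(effects):
    """Sum the wellbeing change over everyone the action touches."""
    return sum(effects.values())

def utilitarian_choice(actions):
    """Pick the action that maximises total wellbeing."""
    return max(actions, key=lambda name: net_utility(actions[name]))

actions = {
    "steal the purse": {"thief": +10, "victim": -50},  # net -40
    "leave the purse": {"thief": 0, "victim": 0},      # net 0
}

print(utilitarian_choice(actions))  # -> leave the purse
```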

There are a couple of places where utilitarianism yields unintuitive results. For example, it would claim that it is better for a person to die than for every person in the world to get an annoying speck of dust in their eyes. At first glance it might seem like a worthy tradeoff for everybody to suffer a small discomfort for the sake of sparing a human life, but when you multiply the suffering caused by a speck of dust in the eye by 7 billion, that’s a lot of suffering. If we had the superpower of sparing lives with a world-cooperation of momentary eye discomfort, it might seem like a noble pursuit to use that superpower to prevent all of the world’s deaths, but given that roughly 105 people die every minute of every day, that’s a speck in everyone’s eye nearly twice a second; we’d be condemning the world to a life of constant suffering.
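To make the aggregation concrete, here’s the back-of-the-envelope version with made-up suffering units (the specific numbers are assumptions, not measurements; only the relative magnitudes matter):

```python
# Invented units of suffering, for illustration only.
DUST_SPECK = 0.001     # assumed: tiny momentary discomfort
ONE_DEATH = 1_000_000  # assumed: suffering from one death
POPULATION = 7_000_000_000

# One speck for everyone vs one death:
print(DUST_SPECK * POPULATION > ONE_DEATH)  # True: 7,000,000 > 1,000,000

# And the superpower would fire once per death, ~105 times a minute:
specks_per_second = 105 / 60
print(specks_per_second)  # ~1.75 specks per second, for everyone, forever
```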

Another place where utilitarianism promotes decisions that at face value seem unethical is in sacrifice. If I’m a surgeon and I’ve got 5 patients, each requiring a separate vital organ to save their lives, and a healthy man, presumably housing each of those organs under his skin, walks past my ward, a naive utilitarian would just kill that healthy man and use his organs to save the lives of the five patients. But this is a situation where the principle of utilitarianism is not enough to guide you to the right answer. You need to do a little more thinking to realise that if you lived in a society where every healthy organ-rich individual knew there was a chance they’d be snatched by a desperate surgeon for the sake of saving random strangers, the latent psychological harm that would introduce would actually create more net suffering than would be removed by occasionally saving the 5 patients.
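The same toy arithmetic shows why the naive answer flips once the second-order effects are counted. Again, every number here is an assumption chosen for illustration:

```python
# Toy numbers for the surgeon case (all assumed for illustration).
LIFE_SAVED = 1_000_000
LIFE_LOST = -1_000_000
FEAR_PER_PERSON = -0.001  # latent dread of being harvested, per person
POPULATION = 7_000_000_000

naive = 5 * LIFE_SAVED + LIFE_LOST  # +4,000,000: looks like a clear win
with_fear = naive + FEAR_PER_PERSON * POPULATION
print(with_fear)  # -3,000,000: net negative once the fear is counted
```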

So utilitarianism is fairly rock-solid, and so long as you make accurate predictions about how the world’s suffering would be changed by a given decision, it has no competitor in the ethics game. Except, of course, nihilism.

Okay, back to my thought experiment. We’ve come across these creatures and they tell us that all humans need to start dancing eight hours a day, and we can verify with brain scans that if we don’t comply, these creatures, like children denied Yowies in the supermarket checkout line, enter a state of tantrum characterised by more internal suffering than can possibly exist in all humans everywhere. Utilitarianism has an easy fix: we kill them! (Take note, parents in supermarket checkout lines.) Now the risk of these creatures ever suffering again is zero, so we’ve reduced future net suffering, and humans can resume going about their daily lives.

Okay so that thought experiment is a little bizarre and doesn’t really do much to poke holes in utilitarianism. What about a stronger one:

The year is 2050 (same canon as my other post about weighted voting systems) and a rogue software engineer, Mr. X, who used to work at SpaceX, has created a supercomputer containing hundreds of billions of emulated human brains, faithfully emulated down to the neuron so that their conscious experiences are just as real as those of organic humans like ourselves. If you don’t think it’s possible to verify that emulated brains are conscious, we can instead say he is a biologist who has managed to grow human brains using human DNA, and instead of storing them in a supercomputer, he’s stored them in a very, very big shipping container, all hooked up to a machine that chooses what sensations the brains receive. Mr. X straps the brains’ container to a rocket, which he sends into outer space when nobody is looking, and with some clever cloaking devices he makes it impossible to track the rocket down. He then uses some clever quantum entanglement engineering to tell the rocket’s computer what state to put the brains in.

There are two states: maximum suffering, or no sensation. By ‘maximum suffering’ I mean all the negative experiences you can imagine, but at the same time. If the signal is severed, the computer defaults to maximum suffering. Mr. X now tells the world about this and demands that everybody, every single day, upload videos of themselves punching themselves in the face for 8 hours straight to every possible source on the internet they can think of, or the brains will be put into their suffering state, which will outweigh the total possible suffering of all humans on Earth, face-punching or not. Mr. X also states that the computer communicating with the rocket is packed with sensors and AI that can detect when other people are trying to interfere with it, and if it detects this, it will immediately sever the connection, meaning maximum suffering for the brains. Mr. X also launched some satellites earlier that will detect when any other rocket or spaceship leaves Earth, and in that event they will signal for the connection to the rocket to be severed. That means no reconnaissance missions. Then Mr. X kills himself, and his body is never found.

So there is no feasible way out of the ethical dilemma by just going and destroying the brains: we can’t find the rocket, and we risk making things far worse by trying to circumvent the system Mr. X has set up.

It may be that individuals won’t care about the brains and will refuse to participate in the face-punching, but that’s an issue of human behaviour, not morality. Would a moral person engage in the required face-punching? Would a moral government create laws that enforce the face-punching? If they were strictly utilitarian, yes.

I think this thought experiment shows that utilitarianism does have a breaking point. Would I be punching myself in the face repeatedly in this situation, or requiring that others do the same? No. But that’s not because I think utilitarianism is wrong here; it’s that my compassion only extends so far, and I’m not willing to sacrifice my livelihood for the sake of reducing suffering in faraway minds, even if those minds are just like my own. Acknowledging that fact to myself is not a pleasant experience, because nobody likes drawing a line in the sand signifying where their morality ends and their selfishness begins. Given that I know I’m not going to adhere to utilitarianism to the bitter end, should I take it seriously at all? Should I revert to nihilism and just admire the absurdity of being in a universe where suffering can exist at such large magnitudes?

Well, Newtonian physics breaks down around the speed of light, but it still gets the job done in all other situations. Why should we treat our ethical frameworks any differently? Utilitarianism, though not as air-tight as nihilism, is still by far the most powerful and reasonable framework for solving nearly all of the world’s ethical problems. Nihilism doesn’t even have much problem-solving power; it’s just an ethical bedrock where you can safely hang out without fear of being called a hypocrite. I’ll wait until this thought experiment comes true before I join that hang-out.

I’m hoping that if somebody does try to pull this Black Mirror-esque trick, we can indeed track down the rocket and destroy it (or, better yet, give those brains a good life). But regardless, we’re probably going to come up against some very sticky ethical dilemmas involving conscious AI in the coming years, and hopefully our moral frameworks will be robust enough to adapt to whatever those sticky situations are.


Written by Could Be Wrong

Less and less certain of my opinions with every passing day
