Welcome, robot overlords

Originally published in the Daily Maverick

On a flight back from somewhere, earlier this year, the pilot announced to us that we’d just been treated to a fully automated landing. While nobody expressed any concern, there were a few thoughtful or confused looks around the cabin, of people not quite sure how to respond to this news.

My first thought was regarding the timing of the announcement. Just in case anyone would be concerned at being landed by an algorithm, the SAA (I think it was) management (yes, I know) presumably decided to only let us know once the deed had successfully been done. But I also wondered how many others were, like me, thinking something along the lines of “it’s about time”.

It’s about time, I mean, that we acknowledge that humans are inferior to computers at making some decisions, and that we should therefore remove humans from the equation. And not just some – in areas where decisions are made by reference to a multitude of factors, and the intended outcome (such as landing a plane safely) is unambiguous, I’d be tempted to up that to “most”.

Pilots are of course well trained, and no doubt need to pass regular checks for things that might impair judgement, like drugs, alcohol or sleep-deprivation. That’s one of the reasons that far fewer people die from accidents involving planes than die from accidents involving cars. But another reason is that we think far too highly of ourselves, and our own competence at performing routine tasks in adverse circumstances – like driving home after one too many drinks.

We’re reluctant to understand ourselves as a simple statistical data-point, far more likely to conform to the mean than not. Anecdotes trump data every time for most of us, which is why we can think that we’re superb drivers while under the influence of something, until that day when we’re just a drunk driver, like all the other drunk drivers who have caused accidents since booze first got behind the wheel of a large metal object.

But despite our occasional incompetence in this regard – and note, also an incompetence that we can to some extent control – is it time to hand everyday driving over to computers also? I’d say it might well be, for those of us who can afford to. Because even if you’re as alert as you could possibly be, you’re still not able to simultaneously engage with as many variables as a computer can, and nor are you able to react to the outputs of that engagement as quickly.

Computers – or robots – pose questions beyond whether they or a human would be superior at performing a given task, like getting you to the church on time or destroying an enemy installation during war. In war, proportionality of response is a key issue for determining whether a drone attack is legal or not, and as soon as a drone is fully autonomous, we’d need to be able to trust that its software got those judgements right.

Or would we? The standards that we set for human beings allow for mistakes, so it would be inconsistent to refuse the possibility of error for robots, even if they were unable to express contrition, or to make amends. As with many encroachments of technology into our existence, robotics is an area where we need to be careful of privileging the way that humans have always done things, just because we are human.

Cloned or genetically modified food, in vitro fertilisation, surrogate motherhood, stem-cell research (to list but a few examples) are all areas where either a sort of naturalistic fallacy (thinking something morally superior or inferior depending on whether it’s natural or not) or some sort of emotive revulsion (the “yuk factor”) get in the way of a clear assessment of costs versus benefits. When speaking of robots driving our children home from school, a similarly emotive reaction can also cloud our thinking.

Just as with any data point in a set, you and I and everyone we know feel superior in a given skill set more often than we are actually better at that skill than the mean. The mean describes something: in this case, it describes the level of performance of the average person. And if we were all better than average, the average would be higher. For driving, it's not, or we wouldn't have an average of over 700 road fatalities every month.

So the question to ask is: when can we be confident that – on average – fewer people will die on the roads if cars are robotic than if the drivers are human? If we’ve reached the point of being confident about that, then the moral calculus shifts against human drivers. If you have the means and opportunity, you’d be acting less morally to drive your kids to school than have a computer do so – regardless of how this feels.

In the New Yorker, Gary Marcus recently invited us to consider this scenario:

Your car is speeding along a bridge at 50mph when [an] errant school bus carrying 40 innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all 40 kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

As Sarah Wild and others have pointed out in response, this sort of scenario does raise questions about which rules we would like the car to follow, who makes those rules, and who is to blame when some unfortunate accident or death occurs. But where I think most comment on this issue gets it wrong is in labelling the moral dilemmas “tricky”, as Wild does.

If we can, on average, save many more lives by using robotic cars instead of human-controlled ones, the greater good would at some point certainly be maximised. Yes, there will be circumstances where the “wrong” person dies, because a maximise-life-saving algorithm will not be adaptable to very idiosyncratic circumstances, like the one described by Marcus.
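The “maximise-life-saving algorithm” mentioned above can be sketched, purely for illustration, as an expected-fatalities minimiser. Everything here – the option names, the probabilities, the numbers – is my own hypothetical framing of Marcus’s bridge scenario, not anything from an actual vehicle:

```python
# Toy sketch (hypothetical, not a real system): collision avoidance
# framed as "choose the action with the lowest expected death toll".

def choose_action(options):
    """Pick the action with the lowest expected number of deaths.

    `options` maps an action name to a list of (probability, deaths)
    outcomes for that action.
    """
    def expected_deaths(outcomes):
        return sum(p * deaths for p, deaths in outcomes)
    return min(options, key=lambda action: expected_deaths(options[action]))

# Marcus's bridge scenario, with made-up probabilities:
options = {
    "swerve":     [(0.5, 1), (0.5, 0)],   # 50% chance the owner dies
    "keep_going": [(0.2, 40), (0.8, 0)],  # 20% chance all 40 children die
}
print(choose_action(options))  # expected deaths: swerve 0.5 vs keep_going 8.0
```

The point of the toy example is that, on aggregate, such a rule minimises deaths even though in a particular unlucky case the “wrong” person may die, which is exactly the trade-off the column describes.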

In general, though, the robotic car will not speed, will never run a red light, and will never exceed the threshold for maintaining traction around a corner. It will never drive drunk, and will be far better at anticipating the movements of other vehicles (even if those vehicles aren’t themselves robots on the same information grid), thanks to a larger data-set than ours and an objective assessment of that data.

So it’s not that there is a tricky moral dilemma here. What’s tricky is that we aren’t able to view it – and ourselves – as a simple economic problem, where the outcome that would be best for all of us would be to set things up in a way that maximises life, on aggregate.

Any solution that prioritises human agency, or builds in mechanisms for knowing whom to blame when things go wrong, is understandable. But, once the driverless car is sophisticated enough, it would also be a solution that operates contrary to a clear moral good.

The ethics of eating meat

As submitted to the Daily Maverick

It goes without saying that industrial meat production entails harm to animals. While much of this harm might be avoidable, the costs of producing meat ethically (if such a thing is possible) in sufficient quantities to satisfy a hungry market could end up making meat unaffordable to most. But lay debates on whether it’s ethically permissible to eat meat often don’t touch on these economic factors, preferring the extreme positions of either asserting human dominion over other animals, or the essential wrongness of killing animals for food (where, for most of us, this food source is fully replaceable).

The argument for human dominion is of course largely a matter of cultural habit, and as Peter Singer has pointed out in talking about speciesism, it is by itself difficult to distinguish from racism or sexism. In his paper “Eating animals the nice way”, Jeff McMahan succinctly highlights our prejudices on this matter by pointing out that “human intuitions about the moral status of animals are so contaminated by self-interest and irrational religious belief as to be almost wholly unreliable”.

The position that it’s always wrong to kill for food also seems extreme, in that it’s surely possible to imagine meat without suffering, even if only on a small scale. So, the harm caused and whether it’s needless or not, compared to the benefits of meat production and consumption, need to be part of the debate for our vegetarianism or our meat-eating to be ethical.

I’m of course already taking a stand in the paragraph above in not giving significant attention to some vegan positions, where the treatment of animals as a commodity is universally regarded as ethically wrong. This is simply because I’m not persuaded that we have a duty to maximise the lifespans of (some) other animals, because “being alive” is not something they can value. For those animals that do have a theory of mind, I’d think we would have that duty. As for chickens and cows, though, the issue of suffering seems to be what would determine the ethical status of meat-eating.

The position I’m taking is that, for a practice to be ethically wrong, someone or something’s interests need to be harmed. The thing being harmed needs to be capable of experiencing harm (whether emotional, financial, or other) – this is what it means to be a moral patient. However, even harm is in itself not sufficient for ethical wrongness, as some cases could justify causing a certain amount of harm in order to prevent a larger amount of harm.

When considering whether it’s ethical to eat meat, utilitarian arguments are most commonly marshalled in defence of the view that meat-eating is ethically wrong, due to the harms caused to animals. But the harms cited are not convincing, or are at least not an argument against meat-eating per se, but rather against the particular conditions under which most animal meat is produced.

Non-human animals can suffer pain. If we assume that causing them unnecessary pain is wrong – as I do – what follows is that we need to produce and consume meat in ways that don’t cause unnecessary pain. It would only be ethically wrong to eat meat, in principle, if meat production necessarily caused harm to animals. But meat can be produced under non-harmful conditions, and animals can be slaughtered without distress to themselves or other animals in their immediate environment.

As I say above, farming of this sort would be more costly than factory-farming, and this approach would mean a significant increase in the price of animal meat. But here again, there is no necessary harm – those who cannot afford to eat meat will not suffer significant harms through being forced to eat less or no meat, and farmers who cannot compete under these conditions would have to develop alternative ways of making a living.

There would certainly be some harms resulting here – to the farmers – but these harms would on balance be less than animal suffering under factory-farming. Again, though, the key point is that meat-eating per se would not be ethically wrong, even if certain market-orientations in the production and consumption of meat could result in more harms than others.

Regarding the more general economic arguments around the production and consumption of meat, if meat were a significantly more wasteful sort of foodstuff to produce than alternatives, the argument could be made that eating meat is ethically wrong. But only if a) we grant that we have moral obligations to others and/or the environment; and b) we have good reason to believe that meat production is indeed more resource-intensive.

Point (a) does stand in need of support, but I’d imagine that most of us would accept its truth. But even if true, it remains possible that a significant price-premium on meat – putting it only in the hands (or rather, mouths) of the wealthy – could result in a net benefit to those less fortunate. The farmers and others involved in meat production could receive greater profits, and potential taxation revenue could be directed explicitly at poverty-relief, or feeding programmes (whether involving meat or not). Much could go awry with this sort of scheme, of course, but the practical problems of allocating this revenue are again not an argument from principle.

On point (b), the argument is far less settled than many seem to believe. It’s often cited as a truism that meat-production is vastly more resource-intensive, but evidence cited in books such as Simon Fairlie’s Meat: A Benign Extravagance offers good reason to be suspicious of this truism (as is often the case with dogmatic utterances of any sort).

It might well be the case that 50 years from now, we’ll look back in disbelief at our current dietary practices, perhaps considering them a form of savagery and exploitation. Our cultural practices don’t always overlap with what’s ethically right, and it can take time for us to realise this. And I can’t deny that part of the reason I eat meat is simply because I assume that it’s okay to do so, and refrain from (potentially) burdening my conscience by thinking about it too much.

But if it’s true that ethical wrongness entails actual, necessary, harms rather than potential harms, then arguments against meat-eating that appeal to potential harms (under existing rather than immutable conditions) aren’t persuasive. The precautionary principle is a poor justification for restricting liberty – if harms cannot be demonstrated, we should be free to eat whatever we like.

The burden of proof should always fall on those who want to restrict liberty, and as things stand, it seems to me that the only justified restrictions on what we eat relate to some ways of producing and eating meat – but not meat-eating in general.

This column was prompted by The New York Times, and their call for submissions to the Put Your Ethics Where Your Mouth Is contest.