On robots, AI, and the future of humanity

A few weeks back, Sarah Wild asked if I’d be interested in offering a comment or two on artificial intelligence for a piece she was working on (the article in question appears in this week’s Mail & Guardian).

While I knew that only a sentence or two would make it into the article, I ended up writing quite a few more than that, and offer them below for those interested in what I had to say.


What role do humans have to play in a world in which computers can do everything better than they can?

In the most extreme scenario, humans might have no role to play – but we should be wary of thinking that we’re somehow deserving of playing one in any event. While it’s common for people to think of themselves, and the species, as both special and deserving of special attention, there’s no real ground for that except our high regard for ourselves, which I think unfounded. We don’t “deserve” to exist, or to thrive as a species, no matter how much we might like to. If the planet as a whole, including all sentient beings, would be better off with us taking a back seat or not existing at all, those of a Utilitarian persuasion might not think that a bad thing at all.

In a less pessimistic (for some) scenario, we’re still a very long way away from a world in which humans are redundant. Computers are capable of impressive feats of recall, but are significantly inferior to us at adapting to unpredictable situations. They’re currently more of a tool for implementing our wishes than something that can initiate and carry out projects independently, so humans will – for the foreseeable future – still be necessary for telling computers what to do, and also for building computers that are able to do what we’d like them to do more efficiently.

Elon Musk has said that AI offers humankind’s “greatest existential crisis”. What do you make of this statement?

This strikes me as bizarrely technophobic. We’re already at a point – and have been for decades – where the average human has no idea how the technology around them operates, and where we routinely place our faith in incomprehensible processes, machines and technologies. (Cf. Arthur C. Clarke’s comment that sufficiently advanced technology is “indistinguishable from magic”.) If it’s a level of alienation from the world we live and work in that triggers this crisis, I’d think we’d be in crisis already.

There seems no reason to prefer this moral panic or fear-mongering to what seems an equally plausible alternative, namely that the sort of alienation Marx was concerned about might be alleviated through AI. If machines can perform all of our routine tasks far more quickly, efficiently and cheaply than we currently can, perhaps we can spend more time having conversations, walks and dinners, rediscovering play over work, or generating art.

It’s probably true that there will be an interregnum in which class divides are accentuated, in that wealthier people and nations will be the first to have access to the means for enjoying these advances, but as with all technologies, these will become cheaper and more accessible as research advances. The technophobia Musk displays here runs contrary to that, in that the last thing we want to do is to disincentivise people from engaging with these technologies by making them fearful of progress.

A recent Financial Times article paints an apocalyptic AI future. What do you think a future world – with self-driving cars, care-giver robots, Watson-driven healthcare, etc – looks like?

The key fears around an AI future tend to be driven by the concept of the singularity, popularised by Ray Kurzweil. One possibility sketched by those who take the singularity seriously is that if we invent a super-intelligent computer, it would be able to immediately create even more intelligent versions of itself – and then this concept, applied recursively, means that we’d soon end up with something unfathomably intelligent, that might or might not think us worth keeping around.

Again, I think this pessimistic. We’d be building in safeguards along the way (perhaps akin to Asimov’s laws of robotics), and we’d likely see frighteningly smart computers coming years or decades in advance, allowing us to anticipate, to some extent at least, what safeguards would be necessary. Given the current state of AI, we’re so far away from this possibility that I don’t think it worth panicking about now (despite Kurzweil’s claim that the singularity will occur 30 or so years from now).

(Incidentally, Nick Bostrom is well worth reading on these things.)

A more general reason not to be as concerned as folk like Kurzweil are is that I’d think malice against humans (or other beings) requires not only intelligence, but also sentience, and more specifically the ability to perceive pains and pleasures. Even the most intelligent AI might not be a person in the sense of being sentient and having those feelings, which seems to me to make it vanishingly unlikely that it would perceive us as a threat, seeing as it would not perceive itself to be something under threat from us. (A dissenting view is here.)

But to address the question more directly: such a world could be far superior to the world we currently live in. We make many mistakes – in healthcare, and certainly when driving – and it’s often simply ego that stands in the way of handing these tasks over to more reliable agents. Confirmation bias is at play here, along with mistaking anecdotes for data: when you react instinctively to avoid driving over a squirrel, the agency you feel so acutely seems exceptional, and validates fears that a robot driver might make the wrong choice (perhaps sacrificing the life of its passenger to save other lives). On aggregate, though, the decisions that a sufficiently advanced AI would make would save more lives, and each of us is typically in the position of the aggregate, not the exception. I would therefore think it immoral not to opt for robot drivers, once the data show that they do a better job than we do.

(An older column about driverless cars, for more on this.)

What do you think is the most interesting piece of AI research underway at the moment?

On a broad interpretation of AI, I’d vote for transhumanism, without a doubt. We’ve been artificially enhancing ourselves for some time, whether through spectacles, doping in sport, Ritalin and so forth. But AI and better technology in general open up the possibility of memory enhancement (one could perhaps even rewind one’s memories), or of modulating mood, strength and so forth. Perhaps these modifications will occur with the help of an AI implant that modulates some of your characteristics in real time, in response to your situation.

This would fundamentally change the nature of humans, in that we’d no longer be able to define ourselves as persons in the same way. Who you are – the philosophical conception of the person – has always been a topic of much debate, but this would detach those conversations from many of the factors we take for granted, namely that you are your attributes, such as the attribute of being a non-French speaker (with the right implant, everyone is a French speaker in the future).

It would also likely change the nature of trust and relationships. Charlie Brooker’s “Black Mirror” TV series had a great episode (“The Entire History of You”) on this topic, suggesting that it would be catastrophic for human relationships – nobody would be able to lie about anything. It is this area (of human enhancement via AI/tech), rather than autonomous AI, that I think is potentially far more worrisome.

But to answer your question more directly – neural network design is going to open up very exciting possibilities for problem-solving and planning. In everyday applications, we’re talking about Google Voice or Siri becoming the most effective PA imaginable. But in more important contexts, we might be fortunate enough to consult with robot physicians who save far more lives than is currently the case, perhaps with the help of nanobots that repair cell damage from inside the body.

While many AI applications, such as driverless cars or Watson, offer societal benefits, robot caregivers arguably could damage ideas of collective responsibility for vulnerable people or erode filial responsibilities and make people less caring. Do you think that’s a valid concern? That as we outsource more of the jobs we don’t like, we lose our humanity?

Part – I’d say most – of what we currently value about human interaction has been driven by the ways in which we’ve been forced, by circumstance, ability, and environment, to engage with people. In other words, I don’t think those relationships, or feelings of commonality, are necessarily tied to the particular ways in which we currently care for people. We need to avoid reifying these ideas into very particular forms. Speaking for myself, if I were living with a terminally ill loved one, I can imagine my relationship with that person being enhanced by someone else performing various unpleasant tasks, which would mean that the time I spent with that person could be of a higher quality.

More generally, we’ve always outsourced jobs we don’t like to machines (or to poor people, of course) – I don’t see how this is a qualitatively different situation from the one we’re already in, rather than just another step on a continuum. Those who argue that these AI applications will cost us some humanity need to accept the burden of proof, and demonstrate that the new situations are incomparable to the old.

Joseph Conrad wrote, in Heart of Darkness, “I don’t like work — no man does — but I like what is in the work — the chance to find yourself. Your own reality — for yourself, not for others — what no other man can ever know. They can only see the mere show, and never can tell what it really means.” Do we impoverish our experience or fundamentally alter who we are by outsourcing less enjoyable work?

Much of what I said in response to the question above applies here also. We can’t restrict ourselves to one model of work, or certain sorts of activity, to find meaning – and never have. We’ve always adapted to different situations, and found whatever meaning we can in what it is that we’re engaged with. And optimistically, when we’re freed from running on various hamster-wheels, we might find forms of meaning that we never imagined existed.

By Jacques Rousseau

Jacques Rousseau teaches critical thinking and ethics at the University of Cape Town, South Africa, and is the founder and director of the Free Society Institute, a non-profit organisation promoting secular humanism and scientific reasoning.