Computational propaganda, clickbait, and personal responsibility

The proliferation of misinformation on social media – or even just partisan or sensationalistic treatments of politics, science and human relations – could reasonably be considered a threat to democracy itself.

When you add computational propaganda to the mix – bots deployed to manipulate public opinion – filter bubbles form even more readily, and you can now find a closed, self-reinforcing community for just about any view you can imagine.

Fake news isn’t the same as lying

First published on GroundUp.

Objectivity is impossible to achieve. We all have our biases, and on top of that, we all have brains that work to confirm those biases, and to undermine the impact of information that could change our minds.

We are of course not helpless in the face of misinformation – we can make a point of reading and thinking about dissenting views, we can debate issues with friends from different parts of the political spectrum, and, perhaps most importantly, we can remind ourselves that discovering our own errors is an essential part of triangulating on the truth.

Free speech versus fake news

Let’s assume that we – as a species – are not as smart as some of us think we are. I think this is true (even if sometimes overstated), and that recognising it allows us to accept that sometimes, we don’t know what’s best for us.

Recognising that we are irrational choosers doesn’t tell us how to solve the problem. My post, linked above, makes the case for accepting “nudges”, or “benign paternalism”. But you could object that even if we don’t know what’s best for us, we still know our own wants and desires better than anyone else could.