Predicting the Future
I've spoken before about Tristan Harris of the Center for Humane Technology. He was behind 'The Social Dilemma', which remains one of the most-watched documentaries on Netflix. A former Design Ethicist at Google, he left after becoming increasingly disturbed at the impact the 'attention-at-all-costs' social media business model was having on society: addiction, radicalisation, doomscrolling, distraction, political polarisation, social unrest, mental health impacts, and more. He's given many lectures, interviews and talks, which are widely available, but this is, I think, one of the most important I've heard. He covers a lot of ground in it, particularly with respect to AI. And it isn't all doom and gloom. As he says, very pertinently, whenever mankind has faced existential threats - the development of nuclear weapons, the hole in the ozone layer - we've been able to come together and solve them. Even though the climate change issue feels like it's being kicked into the long grass all the time, we're still working together through global organisations and initiatives in an attempt to at least address the threats. So why can't we do the same with AI? The problem, as with social media, is what's driving it.
This is a long interview - almost two hours - but it's well worth it if you're as concerned about these issues as I am. As he points out, there's a deterministic mindset in much of our approach to this tech: it's going to happen whether we want it or not, so what's the point in trying to stop it? But it's not about stopping it. It's about changing our approach to its development. Forewarned, I always think, is forearmed.



