Last week, business “magnate” (he finds the qualifier weird too) Elon Musk went on comedian Joe Rogan’s podcast and smoked pot. A couple of things happened after that:
1) Some people were shocked that a “magnate” would do such a reckless thing, because smoking pot, as everyone knows, is the gateway drug to ruling the world.
2) More people found out who Joe Rogan was, even though he’s a very established stand-up comedian (and martial arts practitioner, another good reason to take him seriously) and his podcast happens to be one of the most influential in the genre. Even though Rogan doesn’t quite have a PhD in biochemistry (treading carefully here…), he is a very curious guy who is somewhat knowledgeable in various fields, including hard science. In other words, not your typical comedian podcaster.
3) The show ran for over two and a half hours, so I couldn’t bring myself to listen to the whole thing (I still have a few other activities besides checking YouTube). I also don’t exactly see how a guy as busy as Musk would have the time to do it either, but that’s beside the point. The point is - they talked about AI, a key tech theme of late and one of Musk’s pet peeves for years.
This is where it got interesting for me - and where I actually stopped listening to the podcast: Elon Musk’s position on AI is a fairly strange and divisive one (arguably, like many of his other positions). Having co-founded a non-profit corporation in the field, OpenAI, he’s been very publicly involved and vocal about it, with pretty alarming messages over the years, essentially implying that the technology is a potential threat to our very existence (there seems to be a thread here…) and that it is therefore not being taken seriously enough.
On the podcast, he went a little further than that: according to him, it’s already too late, AI has already grown beyond anything mankind will ever be able to do to contain it if/when it becomes dangerous to us humans. Admittedly, I am not an AI expert, although I have met and talked with a few of them over the years. And the field is, in essence, still full of major question marks: we are witnessing the very beginning of that technological revolution, and we will have to see what happens next.
In the meantime, I have very serious reasons to think that Musk’s gloomy statements may be slightly overblown. AI, and more specifically deep learning, is a (potentially) very powerful tool that is slowly being integrated into many of our processes because it can help us improve said processes and, ultimately, get rid of many of those “bullshit jobs” sociologists and anthropologists keep talking about these days. Just as machines took over mechanical jobs, AI is now taking over intellectual jobs that do not require complex and/or subjective and/or creative skills (see AI expert Kai-Fu Lee’s video for more on how safe your job is with AI now being in the room). And that’s a good thing.
Now, to the risk that AI may eventually take over the world, thus ironically becoming the new “magnates”: I find that highly doubtful. And here is why:
1) First, we have to come back to the fact that the human brain has limits that are still, to this day, far from understood. We only use portions of our theoretical brain power and we don’t fully know how our mind works just yet. Moreover, we also know that, from a philosophical and scientific perspective, we do not know anything fully, and therefore that everything we “know” is only the best theory on the topic to date (see philosopher of science Karl Popper’s falsification principle in scientific research for more).
2) Consequently, anything that we create with our minds is bound to be limited, flawed, wrong to some extent. Any AI software is therefore limited by the capabilities - and flaws - of its creators, however many - and clever - they may be. If you have imperfect individuals with a limited understanding of their own abilities, how can you logically expect their creation to be scientifically superior to them? This is, in my opinion, the key argument that keeps getting ignored in AI/singularity debates: AI is not brute force, insofar as it cannot compensate for its inherent flaws with pure computing power. AI software may have greater abilities than humans in speed or factual knowledge, but there is no way it is or will be able to make the connections humans do between different notions. We don’t even know how we make those connections ourselves, so how exactly would we be able to program anything that replicates them?
Therefore, and with no disrespect to the ever so cool Elon Musk, the rumors of AI effectively taking over mankind, or of robots ruling over men as has been feared for decades, have been greatly exaggerated in my very humble opinion. Instead, we should be focusing more on the incredible benefits of such a revolutionary technology in improving our jobs and creating new ones, because that is truly exciting and promising for the future of mankind. But I guess scary movies are sexier…