Have a break, have a KitKat
Or maybe just take a break from AI and avoid consuming industrial chocolate bars that make you fat. However, the focus of this article is not food or the keto diet but artificial intelligence and the recent explosive advancements that have prompted many influential leaders to advocate for a pause. In fact, just a few months after the release of ChatGPT and other generative AI tools, renowned figures like Elon Musk and Bill Gates called for a halt to the 'out of control' AI race.
Does this stance really make sense? What is the purpose of blindly investing in AI development and scaling, only to call for a break when it is already widespread? Are these individuals even qualified to comment on the effects of AI on society? How much do they really understand about the real world, given that they are tech billionaires who rarely step outside the Silicon Valley bubble? I don't intend to discredit them in the first paragraph, but it seems pertinent to raise these questions as their savior-like position appears peculiar and hypocritical to me: "Let me build a cutting-edge technology, or finance it, or encourage its development, and then call for a break once everyone is using it."
Renowned tech leaders are all voicing the same message, but it might be for completely different reasons, possibly contradictory to each other. Indeed, while some like Elon Musk easily imagine a future where machines would control humans and eventually destroy them, others may be more concerned about the usage of such technologies by malicious human actors, such as dictatorships and criminal groups. In this sense, both fears are legitimate, but one represents a long-term view of society based on an AI vs. humans paradigm, while the other takes a more traditional and simplistic stance where immoral humans and regimes try to make the worst use of technology built by democracies. Somehow, it could be compared to the fear over nuclear power, as it's more about the use of the technology than the technology itself.
AI vs. Humans
AI anxiety, in my opinion, runs much deeper than a simple fear of being replaced by machines. Please note that I'm not suggesting people aren't afraid of losing their jobs; they are, and that fear is valid to some extent. My point is that the fear may be less about utility and economics. Machines have been disrupting the workplace for nearly a century now, consistently provoking opposition and fear whenever they are introduced. And it's understandable why: nobody wants to lose their job because an autonomous robot can perform it tirelessly 24/7 without complaint, not to mention tenfold faster and a hundredfold cheaper.
When it comes to generative algorithms like ChatGPT, the fear grows bigger, and it relates more to what makes us human: our social nature and the way we create and share information and knowledge among ourselves. Artificial intelligence algorithms resemble an unavoidable intermediary, a noisy and invasive entity with whom you must constantly interact to become more efficient.
Just a few years ago, you would need either an editor or a very nice friend to line-edit and correct your writing, sharing your drafts and waiting for feedback from a fellow member of our own species. As that buddy is replaced by ChatGPT, you no longer have a human intermediary when writing your book or your articles. Writing is a core component of knowledge creation, and the human-led feedback loop is severely disrupted by the mere existence of AI, as we no longer play a necessary role in it. Indeed, an editor would not only correct your grammar but also provide food for thought on your ideas or the way you express them.
What happens when you only interact with machines, and when humans no longer build and share knowledge with each other, but only through an AI intermediary that will process that data and train on it at scale to regurgitate it in the future?
This shift raises important questions about creativity, human expression, and our interactions. As AI takes over tasks that were once done only by humans, we risk losing the depth and richness that comes from human connection and working together. The process of refining ideas through discussion, challenging different perspectives, and embracing the subtleties of human communication might become less common. As we rely more on AI intermediaries, there is a growing concern that our knowledge, creativity, and collective wisdom could become less unique and valuable. It's important to navigate the integration of AI technology carefully, making sure it enhances human abilities instead of completely replacing them.
This topic is extensive, and studying all its implications goes beyond the scope of this humble blog article. However, I believe that knowledge creation and the interactions that make it uniquely human occupy a paradoxical position. On one hand, they face a threat as AI replaces humans across the feedback loop. On the other hand, they also become more valuable and crucial, because AI can flatten the diversity of human knowledge, making specific knowledge and originality scarcer than ever before. It's a complex dynamic that deserves further exploration and analysis.
Humans vs. Humans
Sorry, fellow humans, but I do think you still pose the biggest threat to the human species itself. No AI decided to drop nuclear bombs on Hiroshima and Nagasaki in the 1940s, nor did one forget to close the door at 5 PM in some random Wuhan lab a few years ago, leading to a terrible pandemic. Well, I don't know whether some intern left the door open, but I know it was a human mistake either way. AI is not currently powerful enough to be a threat on its own. It cannot self-improve by modifying its own source code to alter its own incentives, nor can it act autonomously and maliciously enough to infect machines or social networks and manipulate the human mind at scale.
However, it's important not to be too complacent or relaxed, as there are some humans who are sufficiently misguided and unethical to misuse generative algorithms and cause significant harm to our society.
As of today, the biggest AI-related threat is not a Matrix-like dystopia, but rather immoral actors willing to enter a worldwide dick-measuring contest and ready to go all-in on it. What happens when AI companies start deploying their models for dictatorships and connect their algorithms to the Twitter and WhatsApp APIs (Application Programming Interfaces) to monitor, surveil, and manipulate their populations? We might actually already know, as it has likely already started, at least on a small scale.
Imagine a ChatGPT-like agent capable of connecting to WhatsApp, Messenger, WeChat, and Twitter. What could happen if you removed all the guardrails of the OpenAI version and unleashed it in the digital universe? You could construct an automated propaganda machine on Twitter and assess its effectiveness in real time. You could develop autonomous conversational agents on WhatsApp or WeChat that detect harmful, immoral, or censurable content and flag political opponents for arrest and detention. You could deceive your population into believing fake news with even greater precision, now that text, videos, and images can be generated from specific prompts.
As dictatorships confine their people within digital prisons, why not inundate them with AI-generated content showcasing their country's power and triumph? Just imagine: if Ukraine were to finally expel Russian forces from its territory, why wouldn't Putin fabricate fake parades, articles, interviews, and whatever else works to persuade Russians that Ukraine is now a province of their country?
The real short-term threat is evident, as democracies and open social networks face the challenge of ensuring the authenticity of users. With platforms like TikTok, Instagram, and Twitter serving as primary sources of news for many individuals, how can they ensure that their users are genuinely human? Is there an effective solution? It's uncertain, especially as generative AI becomes a widely adopted virtual assistant. Merely banning its use on major communication platforms is not a viable approach, as people will simply use it elsewhere. While leaders are becoming concerned about the latest AI developments, they lack the authority to prevent dictatorships and authoritarian leaders from constructing their own weapons of mass destruction in the form of information warfare and unleashing them on Western social networks.
Humans now possess a powerful tool, and some individuals are already utilizing it in harmful ways. Rather than being naive or relying solely on hope, it is crucial to prepare ourselves and develop methods to detect and counteract such actions.
Good luck with that
Please, dear reader, don't call me Dr. Doom, as I am not a pessimist trying to sell you some impending apocalypse. In fact, we, as a species, have already been manipulated by social networks promoting booty pictures, 30-day bicep transformations, and get-rich-quick schemes. How can we expect not to be deceived by AI when we are already influenced by dumb-as-hell influencers? Just imagine what could happen when local crypto scammers utilize a custom-made ChatGPT to generate highly convincing prompts to persuade you into buying their NFTs. Or when you realize that for the past six months, you've been liking AI-generated beach pictures. Even your loved ones may become jealous of someone who doesn't even exist.
Imagine when McKinsey consultants generate their (useless anyway) advice using AI... well, they already do.
Imagine when bureaucrats can generate laws and regulations with just one click.
Imagine when AI starts asking you questions before entering a bar to ensure you bring only positive vibes.
Imagine when your mom sends you messages using ChatGPT because it frees up more time for herself.
Imagine when people stop interacting with each other, even online, and only converse with their AI companions.
That certainly sounds bleak.
I'll be here, with my books and my boxing buddies, cooking real food and avoiding the allure of taking pictures.
Love.
Voss.