Algorithmic feedback loops: are influencers nothing but pieces of code?
Are those people selling their souls to the algorithm devil? Do they even have a soul?
Who feeds what? What feeds whom?
That is a recurring theme in my publications, which is logical given that the average smartphone user spends between three and four hours a day on their screen. For some people, this number is even higher. As screen time increases, we feed algorithms with personal data such as food habits, favorite places, sexual fetishes, and business dreams. Don't pretend you never look those topics up online; it is the most human thing to do, now that we have easier access to information than ever before. As we increasingly become our digital selves, algorithms embedded in websites and social network applications guide us through a tunnel we have built with the bricks of our own biases. Don't feel ashamed; you too have biases and preferences that drive your fingers to frenetically scroll on this screen.
Let's say, unfortunately, that you are an Instagram foodie who posts oat-milk lattes and plastic-looking falafel online for a living. Your Instagram and Twitter feeds are probably full of content posted by the most popular influencers in this niche. The pipeline is quite simple: you post content based on your preferences, social network algorithms find the underlying patterns in what you like, and they suggest similar content that you might enjoy. At this stage, nothing very innovative is happening, except that recommendation engines are suggesting content at scale and in real time. However, as we give away an increasingly large share of our deeper selves to the machine, the cost of processing information keeps falling, and machine learning models get better at suggesting, and even generating, content that fits your preferences.
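The pipeline above can be caricatured in a few lines of code. This is a deliberately toy sketch, not any platform's actual system: every name and data point below is invented, and the "algorithm" is just a topic histogram built from likes.

```python
from collections import Counter

# Invented toy catalog: each post is tagged with topics.
CATALOG = [
    {"id": 1, "topics": ["latte", "foodie"]},
    {"id": 2, "topics": ["falafel", "foodie"]},
    {"id": 3, "topics": ["yoga"]},
    {"id": 4, "topics": ["history"]},
]

def update_profile(profile: Counter, liked_post: dict) -> None:
    """Each like reinforces the topics of the liked post."""
    profile.update(liked_post["topics"])

def recommend(profile: Counter, catalog: list, k: int = 2) -> list:
    """Score every post by topic overlap with the profile and keep the top k."""
    score = lambda post: sum(profile[t] for t in post["topics"])
    return sorted(catalog, key=score, reverse=True)[:k]

# One turn of the feedback loop: like a foodie post, get more foodie posts,
# which in turn reinforce the profile. The tunnel narrows on its own.
profile = Counter()
update_profile(profile, CATALOG[0])
suggestions = recommend(profile, CATALOG)
```

Run the loop a few more turns and the profile's top topics dominate every recommendation, which is the whole point: the mechanism requires no intelligence, only repetition.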
Every social network user enters a continuous learning loop, going back and forth with their applications and locking themselves into narrow online communities. Interestingly, the better the algorithmic feedback loop, the worse things get for the user. As more people are exposed only to niche content and grow interested in narrow, shallow pieces of information, their critical mind turns fluffy and their own learning feedback loop breaks: the better the algorithm, the more broken the human's critical mind.
Recommendation versus generation
The recent and explosive fame of generative algorithms has reminded us that artificial intelligence is only as good as the use we make of it. The OpenAI platform can help non-native English-speaking developers sell their excellent programming skills to US-based companies. It can also make LinkedIn's compulsive writers even more prolific by generating shallow, cringe-worthy content from a very simple prompt. Social networks are not yet plugged into generative algorithms; the core feature they offer users is infinite scrolling through content recommended based on their activity. The algorithm converges toward a near-perfect understanding of which type of content gives you the dopamine shots you crave, and your time spent in these applications keeps growing. At this stage, if your job is to spend time online and post content within a narrow scope, some questions are worth asking:
How is your creativity affected when you are stuck in a tunnel flooded with monochromatic content?
Is your content truly original when your only inspiration is a stream of very similar creations suggested by algorithms?
As AI can generate content based on recommendations and mix different topics endlessly, what is your edge, as a close-minded, incurious influencer only interested in posting booty pics, over an artificial intelligence? Spoiler: none.
Social networks' artificial intelligence algorithms, coupled with a lack of intellectual curiosity, have locked the least critical minds into tunnel vision and overloaded them with pointless content. The result is a dulled appetite for originality and a slow destruction of their creativity. Social metrics have shifted from multidimensional, complex, real-life qualitative assessments of happiness to absurdly simplified rankings based on followers, reposts, and views.
An unexpected twist
The latest advancements in artificial intelligence, particularly generative algorithms, have raised concerns. While AI is still far from artificial general intelligence, social networks have enabled the deployment of algorithms at scale. Because AI applications on social networks have focused on recommendation engines, they have flattened the range of content people are exposed to, leaving them in intellectual deep water where they believe it is great to make a living out of salad bowls and pictures of yoga poses.
Meanwhile, algorithms have become capable not only of generating content but also of handling creative queries as well as more analytical ones, at scale, using every piece of information available online. OpenAI has been at the forefront of these advancements, ensuring that its models can generate content from a wide range of information. At the same time, people have unconsciously been working to become the simplest version of their digital selves, which have become, in some ways, more efficient, diverse, and curious than their human counterparts. This overnight obsolescence is a strange final twist: the more plugged in you were to imperfect AI algorithms, the more vulnerable you are to the latest developments.
Influencers, at least the shallow type, are indeed nothing but pieces of code for a large part of the day: their behavior is easily predicted from whatever is famous and hyped online, and it can be written in lines of code. The sad and paradoxical part is that the more hooked people were on recommendation engines, the more easily generative algorithms will simulate, and potentially replace, them, since their behavior became a mere simplified version of current algorithms. This raises serious questions, as learning feedback loops may no longer involve humans at all, only algorithms. For instance, history students might stop asking their teachers questions and refer only to ChatGPT when in doubt, getting overfed with bland, uninterpreted, naturally biased, and potentially erroneous content. Let's not laugh too early at shallow social network addicts: most of us are feeding algorithms in real time and slowly cutting the human part out of the feedback loop.
Influencers are pieces of code, but not the best ones, just the dummy functions you write during a programming 101 session.
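To push the joke to its logical end, here is what such a programming 101 dummy function might look like. Everything in it is invented for illustration; the point is only that a pure function of whatever is trending is fully predictable.

```python
def influencer(trending_topic: str) -> str:
    """A caricature of the thesis above: the output depends only on
    whatever the recommendation engine says is hot right now."""
    return f"Obsessed with {trending_topic} right now! Link in bio."

# Fully deterministic: same trend in, same post out, no human in the loop.
post = influencer("oat-milk lattes")
```

A function with no internal state and no curiosity is trivially simulated, which is exactly the argument made above.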
Keep building unique knowledge, and be creative.
If you feel replaceable by AI, you are failing the Turing test the other way around.
Mental models can't be learned by AI, at least for now.
AGI is not arriving tomorrow.
In the end, we all become cosmic dust, so post your booty pics if you fancy it.