On AI scaling side-effects
This is it: we have collectively entered the algorithm era, where our behavior is no longer shaped by offline events but by a steady stream of virtual data manipulating our minds and opinions in real time. You are no longer even sure of what you like or why you like it, nor do you know what you want or whether you really want it.
Your recent impulse to buy yourself a ticket to Bali might be slightly influenced by the herd of retar* nomads compulsively posting the same pictures over and over, just as you started studying spiritualism because it sounded like the right thing to do based on social media suggestions, not because you really felt it.
Don't tell me you are different, at least not yet: if you spend time online, then you must be influenced by algorithms and what they suggest you do, buy, eat, experience, love, and hate. At a time when a third of our productive time is lost on screens, we spend a solid few hours daily binge-consuming random—or not so random—content, polluting our minds with a crazy amount of noise.
Yes, noise, not signal, just pure noise. Now that recommendation algorithms run on every social media platform people use, and generative artificial intelligence is cheaper than ever, what's left of our human originality? Well, quite a lot if you ask me. In fact, I think purity of taste, originality, and the ability not to give a single f*** about what the herd thinks have never been so important for making a difference.
In another article, I argued that individuals who rely solely on social media for their existence are the first to be replaced by artificial intelligence. These individuals exhibit predictable behavior, as they blindly follow algorithmic suggestions and conform to average patterns.
Artificial intelligence algorithms utilize online information to train and generate synthetic data and content. Consequently, social media addicts inadvertently feed these algorithms, reinforcing the idea that the generated output is similar to human behavior, although it often appears algorithmic in nature.
The more time one spends interacting with algorithms, the less human and authentic they become. This paves the way for artificial intelligence to effortlessly imitate and replace individuals, reducing them to mere lines of code. If your whole existence relies on a data-processing platform disguised as a social media site, then it's only a matter of time before you sound and look like a machine.
Junk food for everyone
There we go again. However, a new issue arises: generative artificial intelligence fails to differentiate between shallow characters and deep thoughts. Additionally, it lacks the capability to operate on top of a mental model that can generate data while considering the social context, being contrarian, and avoiding shallowness and superficiality. Generative algorithms function without considering social context; they are probabilistic machines that rely on highly biased online data. When users ask something of ChatGPT, they may encounter extraordinary difficulties in explaining the social context in which the request should be addressed.
Every so often, I come across a new LinkedIn user posting AI-generated content without any shame. Achieving originality with AI-generated content is still nearly impossible unless you infuse specific knowledge and contrarian views into it. Asking an algorithm to sound different while accounting for a social parameter is quite challenging. Anyway, nobody is even asking for it, because humans tend to prefer following the crowd to sounding different. But even if they wanted to, they couldn't.
Algorithms like GPTs prioritize generating content that is likely to sound correct to the average human interacting online. This content is then used to train the algorithm further. Mechanically, this results in the AI primarily training on and generating content that represents the average, mediocre mean, lacking the ability to produce more specific or exceptional outputs.
Indeed, the average human tends not to possess the most rebellious and contrarian viewpoints. Naturally, the most active people online, those who inundate social media and websites with content copied from elsewhere, will have a significant impact on algorithm behavior. Consequently, generative AI often finds itself trapped in the middle ground, unable to generate specific content that embodies purity of taste and originality.
Ideas vs. execution
How to be different and original, after all? A common mistake made by people betting too heavily on artificial intelligence to replace humans is that they see the creative process as a single block, with no distinct steps or feedback loops. Ideas without execution are worth nothing[1], and shallow content not backed by a quest for genuineness and sincerity, without any skin in the game, is also worth nothing.
However, most of the content that is generated online using generative algorithms is about ideas, not about execution. And most people generating content are doing so because it is the quickest path to gaining new followers, not because they are genuinely interested in what they write. Again, AI has no incentives and generates what you tell it to generate. Shallow influencers and creators do have incentives though.
Algorithms are fueled by content created by individuals who embrace the emptiness in their own lives and ensure that everyone else is inundated with the pervasive shallowness of their thoughts. This results in billions of people's feeds being polluted with posts like "10 AI tools you missed last week" and "The thirty reasons why NFTs are the future."
You can typically identify these individuals by examining their post history—they were crypto evangelists in 2021, marketing specialists in 2022, and AI enthusiasts in 2023[2].
There is a lack of genuineness and personal investment, resulting in shallowness for everyone. Yet, these profiles manage to gain an audience because easily accessible, quantity-focused shortcuts tend to fare better than scarce, profound content.
Let's say you want to open a restaurant and decide to check LinkedIn for some tips: I bet a lot of the posts you come across will be standardized copy-pastes of what ChatGPT has to say about opening an original restaurant. Yet few of those digital creatures will have much idea of how to actually execute what they preach.
This is where humans can make a significant difference when using artificial intelligence. Algorithms are capable of automating all tasks in your restaurant venture except for the ones that will make people recognize your brand as something different.
And this is why purity of taste has never been so important: we all became equally skilled at the 99% almost overnight, and we are now striving to be different through the remaining 1%.
Love.
Voss.
Don’t post on LinkedIn anyway; it’s useless.
If you are attractive and want to date a LinkedIn influencer, contact me first.
Be a weirdo, be a contrarian.
If you think the contrary, then you must be a bureaucrat or a McKinsey consultant.
I won’t say names, unless the plebs ask me.