
It’s a well-known issue in machine learning that a computer given a defined reward function will keep optimizing, through continued reinforcement learning, to improve its performance and increase that reward. The trouble is that this optimization path often carries AI systems to very different outcomes than humans exercising their own judgment would reach.
To introduce a corrective force, AI developers frequently use what is known as reinforcement learning from human feedback (RLHF). In essence, they put a human thumb on the scale as the machine converges on its model, by training it on data that reflects real people’s actual preferences. But where does that human preference data come from, and how much of it is needed to be reliable? This has been RLHF’s sticking point so far: if it requires hiring human reviewers and annotators to supply feedback, it’s an expensive process.
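To make the mechanics concrete, here is a minimal sketch in Python (using PyTorch) of the reward-modeling step at the heart of RLHF: fitting a model that scores a human-preferred response above a rejected one. The toy feature vectors, the tiny network, and all names are illustrative assumptions, not any lab’s actual pipeline.

```python
# Minimal reward-modeling sketch (the core of RLHF): learn a scalar score
# that ranks the response a human preferred above the one they rejected,
# using a Bradley-Terry pairwise loss. Toy vectors stand in for real
# text embeddings; everything here is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)  # one scalar "reward" per item

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake preference data: each row pairs a human-"chosen" item with a "rejected" one.
chosen, rejected = torch.randn(256, 16), torch.randn(256, 16)

for _ in range(100):
    margin = model(chosen) - model(rejected)
    loss = -F.logsigmoid(margin).mean()  # low when chosen outscores rejected
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The expensive part is not this training loop but producing the chosen-versus-rejected pairs in the first place, which is precisely the labeling cost that a vast archive of likes could, in principle, offset.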
And this is the problem that Levchin thinks the like button could solve. In his view, the mountain of preference data Facebook has accumulated, which could be used to train intelligent agents on what people actually want, is a godsend. How big a deal is that? ”I would argue that one of the most valuable things Facebook owns is that mountain of liking data,” Levchin told us. Access to “what content is liked by humans, to use for training of AI models, is probably one of the singularly most valuable things on the internet at this pivotal point in the development of artificial intelligence.”
Levchin’s point is that AI can learn human preferences from presses of the like button, but AI has already begun to alter how those preferences are shaped. In fact, social media platforms are actively using AI not just to analyze likes but to predict them, potentially rendering the button itself obsolete.
When we asked people for predictions, what struck us was that most came from the other direction, describing not how the like button would affect AI’s performance but how AI would change the world around the button. We’ve all heard that AI is being used to enhance social media algorithms. Early in 2024, for example, Facebook experimented with using AI to redesign the algorithm that recommends Reels videos to users. Could it do a better job of determining which video a user would want to watch next? According to the results of that early test, applying AI to the task paid off in longer watch times, the very performance metric Facebook was hoping to increase.
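In spirit, a test like that reduces to a ranking problem: predict how long a user will watch each candidate video and serve the highest-scoring one, no like required. The sketch below is a deliberately simplified illustration of that idea; the features, the linear model, and every name are assumptions for the example, not Facebook’s actual Reels algorithm.

```python
# Illustrative sketch of watch-time-driven recommendation (not Facebook's
# system): train a regressor on behavioral features, then serve whichever
# candidate video it predicts will be watched longest.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy history: per-video features (say, past completion rate, creator
# affinity, recency) and the watch time in seconds each video earned.
features = np.array([
    [0.9, 0.7, 0.1],
    [0.4, 0.9, 0.8],
    [0.2, 0.1, 0.9],
    [0.8, 0.3, 0.5],
])
watch_seconds = np.array([42.0, 31.0, 8.0, 27.0])

model = LinearRegression().fit(features, watch_seconds)

# At serving time, rank candidates by predicted watch time rather than
# waiting for an explicit like.
candidates = np.array([[0.7, 0.8, 0.2], [0.3, 0.2, 0.9]])
best = int(np.argmax(model.predict(candidates)))
print(f"Recommend candidate {best}")
```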
When we asked YouTube cofounder Steve Chen what the future holds for the like button, he said, ”I sometimes wonder whether the like button will be needed when AI is sophisticated enough to tell the algorithm with 100 percent accuracy what you want to watch next based on the viewing and sharing patterns themselves. The like button has been the simplest way for content platforms to do that up until now, but the end goal is to make it as simple and accurate as possible with any data that is available.”
He went on to point out, however, that one reason the like button might still be needed is to capture sudden or unexpected shifts in viewing preferences brought on by life events or circumstances. ”There are days when I wanna be watching content that’s a little bit more relevant to, say, my kids,” he said. Chen added that the button may also have staying power because of its value to advertisers, the other important group alongside viewers and creators; the like button serves as the simplest possible hinge connecting those three. A viewer’s tap delivers both appreciation and feedback to the content creator and evidence of interest and preference to the advertiser.
Another major impact of AI will be its growing use to generate the very content that people react to. Already, an increasing share of the text and image content liked by social media users is created by AI. One wonders whether the like button’s original purpose, which was to encourage more users to create content, will even remain relevant. Would the platforms be just as successful, on their own terms, if their human users stopped making posts altogether?
Of course, this question raises the issue of authenticity. During the 2024 Super Bowl halftime show, Alicia Keys hit a sour note that attentive listeners could hardly fail to notice. Yet when the recording of her performance was uploaded to YouTube shortly afterward, the flub had been seamlessly corrected, with no notice that the video had been altered. The fix was minor (and good for Keys for performing live in the first place), but the stealth of the correction raised some eyebrows. Ironically, she was singing ”If I Ain’t Got You,” and her fans ended up liking something slightly different from what she had actually sung.
If AI can subtly refine entertainment content, it can also be weaponized for more deceptive purposes. The same technology that can fix a musical note can also be used to clone a voice, leading to far more serious consequences.
Even more chilling is the use of AI to ”clone” a person’s voice and effectively put words in their mouth, a practice the US Federal Communications Commission (FCC) and its counterparts elsewhere have recently cracked down on. The cloned voice sounds like the person speaking, but it may not be them; it could be an impostor trying to trick that person’s grandfather into paying a ransom, or trying to conduct a financial transaction in their name. After a rash of robocalls spoofing President Joe Biden’s voice in January 2024, the FCC made clear that such impersonation violates the Telephone Consumer Protection Act and warned consumers to be on their guard.
As FCC chair Jessica Rosenworcel put it, ”AI-generated voice cloning and images are already creating confusion by deceiving consumers into believing scams and frauds are legitimate. No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”
Beyond deceptive fakery of that kind, an AI-filled future of social media could be one populated by phony, computer-generated users. Such virtual creations are already dipping into online influencers’ pockets and amassing large followings on social media platforms. ”Aitana Lopez,” for example, regularly posts glimpses of her enviable life as a beautiful Spanish musician and fashionista. Her Instagram account had 310,000 followers as of the last time we checked, and she was earning about $1,000 per post from hair-care and clothing brands, including Victoria’s Secret. But Aitana doesn’t actually need clothing, food, or a place to live, so someone else must be spending her hard-earned money. She is the programmed creation of an ad agency, one that started out connecting brands with real human influencers but found that the humans were not always so easy to manage.
The very foundation of online engagement may be shifting as AI-driven influencers and bots interact with one another at an unprecedented rate. What does it mean for the future of the like economy if likes no longer come from real people and content is no longer created by them?
In a scenario that not only echoes but goes beyond the premise of the 2013 film Her, you can now buy a subscription that lets you chat to your heart’s content with an on-screen “girlfriend.” Caryn Marjorie, a real-life online influencer, already had more than a million followers when she teamed up with an AI company to create a chatbot. Users pay for one-on-one conversation with the virtual Caryn, which is powered by OpenAI’s GPT-4 software and trained on an archive of content Marjorie had previously posted to YouTube.
We can imagine a scenario in which a large proportion of likes are not awarded to human-created content, and are not granted by actual people, either. Picture a world of synthesized creators and consumers exchanging signals with one another instantly online. If this comes to pass, even in part, new problems will surely need to be solved, especially when it comes to our need to know whether a seemingly popular post is actually worth reading.
Do we want a future in which our true likes (and everyone else’s) are more transparent and unconcealable? Or do we want to retain the ability to dissemble (for ourselves but also for others)? It seems likely that new tools will be developed to make it easier to tell whether a like comes from a real person or merely from a convincing bot. Different platforms might apply such tools to different degrees.
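What might such a tool look like? In its crudest form, perhaps something like the hypothetical score below, which combines a few behavioral signals into a bot-likelihood estimate. Every signal, threshold, and name here is invented for illustration; any real system would be far more sophisticated, and would have to withstand adversaries actively trying to game it.

```python
# Hypothetical sketch of a "real person or bot?" check for likes.
# Signals, weights, and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class LikeEvent:
    likes_per_minute: float      # burstiness of the account's liking
    account_age_days: int        # brand-new accounts are riskier
    follows_to_followers: float  # lopsided ratios are a weak bot signal
    passed_human_check: bool     # e.g., a liveness or identity check

def bot_likelihood(e: LikeEvent) -> float:
    """Return a 0-to-1 score; higher means more bot-like."""
    score = 0.0
    if e.likes_per_minute > 30:
        score += 0.4
    if e.account_age_days < 7:
        score += 0.3
    if e.follows_to_followers > 50:
        score += 0.2
    if not e.passed_human_check:
        score += 0.1
    return min(score, 1.0)

print(bot_likelihood(LikeEvent(55.0, 2, 80.0, False)))  # 1.0: very bot-like
```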