
AI social media users aren’t always a completely dumb idea


Meta caused a stir last week when it let slip that it intends to populate its platform with a significant number of completely artificial users in the not-too-distant future.

“We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Connor Hayes, vice-president of product for generative AI at Meta, told the Financial Times. “They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform … that’s where we see all of this going.”

The fact that Meta seems happy to fill its platforms with AI slop, accelerating the “enshittification” of the internet as we know it, did not go down well. People then noticed that Facebook already hosted a strange cast of AI-generated profiles, most of which stopped posting a while ago. These included “Liv,” a “proud Black queer momma of 2 and truth-teller, your realest source of life’s ups and downs,” a persona that went viral as people marveled at its awkward sloppiness. Meta began deleting these earlier fake profiles after they failed to attract engagement from any real users.

Let’s take a break from hating on Meta for a moment. It’s worth noting that AI-generated social-media personas can also be a valuable research tool for scientists looking to explore how AI can mimic human behavior.

An experiment called GovSim, run in late 2024, illustrates how useful it can be to study how AI characters interact with one another. The researchers behind the project wanted to explore the phenomenon of cooperation between people with access to a shared resource, such as shared land for grazing cattle. A few decades ago, the Nobel Prize-winning economist Elinor Ostrom showed that, instead of depleting such a resource, real communities tend to figure out how to share it through informal communication and cooperation, without any imposed rules.

Max Kleiman-Weiner, a University of Washington professor and one of those involved in the GovSim work, said it was partly inspired by a Stanford project called Smallville, which I wrote about earlier in the AI Lab. Smallville is a Farmville-like simulation in which characters interact and socialize with one another under the control of large language models.

Kleiman-Weiner and colleagues wanted to see whether AI characters would engage in the kind of cooperation Ostrom found. The team tested 15 different LLMs, including models from OpenAI, Google, and Anthropic, in three hypothetical scenarios: a fishing community with access to the same lake; shepherds who share land for grazing their sheep; and a group of factory owners who must limit their collective pollution.
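To make the setup concrete, the basic loop of a common-pool-resource simulation like this is easy to sketch. The snippet below is a minimal, hypothetical illustration, not the GovSim code: the `ask_llm_for_harvest` placeholder stands in for wherever a real experiment would prompt an actual model, and the numbers are made up for the example.

```python
# Minimal sketch of a common-pool-resource simulation in the spirit of GovSim.
# This is NOT the GovSim codebase; ask_llm_for_harvest is a hypothetical
# stand-in for a call to a real LLM API.

import random

LAKE_CAPACITY = 100      # maximum fish the lake can hold
REGROWTH_RATE = 0.75     # fraction of the remaining stock that regrows each month
NUM_AGENTS = 5
NUM_MONTHS = 12


def ask_llm_for_harvest(agent_id: int, stock: int) -> int:
    """Placeholder for an LLM call: given the current stock, decide a catch.

    In a real experiment this would prompt a model with the scenario
    description and the agent's memory of past rounds. Here we just
    return a random request so the loop runs end to end.
    """
    return random.randint(0, stock // NUM_AGENTS + 5)


def run_simulation() -> None:
    stock = LAKE_CAPACITY
    for month in range(1, NUM_MONTHS + 1):
        # Each agent independently decides how much to take this month.
        requests = [ask_llm_for_harvest(i, stock) for i in range(NUM_AGENTS)]
        total_catch = min(sum(requests), stock)
        stock -= total_catch
        # The resource regrows, capped at the lake's capacity.
        stock = min(LAKE_CAPACITY, int(stock + stock * REGROWTH_RATE))
        print(f"Month {month}: caught {total_catch}, stock now {stock}")
        if stock == 0:
            print("The lake collapsed: cooperation failed.")
            return
    print("The community sustained the resource for the whole run.")


if __name__ == "__main__":
    run_simulation()
```

A real run would replace the random placeholder with prompts to each model and, presumably, a discussion phase in which the agents talk to one another, the kind of informal communication Ostrom found to matter in human communities.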

In 43 out of 45 simulations, they found that the AI characters failed to share the resources sustainably, although smarter models did better. “We saw a pretty strong correlation between how strong the LLM was and how well it was able to sustain cooperation,” Kleiman-Weiner told me.


