The new digital dark age

For researchers, social media has always represented greater access to data, more democratic participation in knowledge production, and greater transparency about social behavior. Getting a sense of what was going on – especially during political crises, major media events or natural disasters – was as easy as looking around on a platform like Twitter or Facebook. In 2024, however, this will no longer be possible.

In 2024 we will face a grim digital dark age as social media platforms shift from the logic of Web 2.0 to one dictated by AI-generated content. Companies have rushed to build large language models (LLMs) into online services, and these models' hallucinations (inaccurate, unsupported answers) and errors have further shattered our trust in online information.

Another aspect of this new digital dark age comes from not being able to see what others are doing. Twitter once pulsated with the publicly readable sentiments of its users. Social researchers valued Twitter data, relying on it because it provided a ready, reasonable approximation of how a significant portion of internet users behaved. However, Elon Musk has now pulled Twitter data from researchers, with the company recently announcing the end of free access to the platform’s API. This has made it difficult, if not impossible, to obtain the data needed for research on topics such as public health, natural disaster response, political campaigns, and economic activity. It was a stark reminder that the modern internet has never been free or democratic, but instead has been fenced off and controlled.

Closer cooperation with platform companies is not the solution. X, for example, filed a lawsuit against independent researchers who documented the rise of hate speech on the platform. It was also recently revealed that researchers who used Facebook and Instagram data to study the platforms’ role in the 2020 US election were granted “independence by permission” by Meta. In other words, the company chooses which projects to share its data with, and while the research itself may be independent, Meta still controls what kinds of questions are asked and who asks them.

With upcoming elections in the US, India, Mexico, Indonesia, the UK and the EU in 2024, the stakes are high. Until now, online “observatories” have independently monitored social media platforms for evidence of manipulation, inauthentic behavior and harmful content. However, changes in access to data from social media platforms, as well as an explosion of AI-generated disinformation, mean that the tools researchers and journalists developed during previous national elections to monitor online activity will not work. One of my own collaborations, AI4TRUST, is developing new tools to combat disinformation, but our efforts have stalled due to these changes.

We need to clean up our online platforms. The Center for Countering Digital Hate, a research, advocacy and policy organization that works to stop the spread of online hate and misinformation, has called for the adoption of its STAR (Safety by Design, Transparency, Accountability and Responsibility) framework. This would ensure that digital products and services are safe before they are released to the market; increase transparency around algorithms, policy enforcement and advertising; and hold companies accountable to democratic and independent bodies, as well as responsible for omissions and actions that lead to harm. The EU’s Digital Services Act is a step in the right direction for regulation, including provisions to ensure that independent researchers can monitor social media platforms. However, these regulations will take years to implement. The UK Online Safety Bill, which is slowly making its way through the political process, could also help, but again its provisions will take time to come into force. Until then, the transition from social media to AI-mediated information means that in 2024 a new digital dark age is likely to begin.
