As powerful AI language models come to dominate the development landscape, we are once again confronted with a familiar pattern: powerful technology with a legacy of opacity.

This is why scrutinizing opaque and harmful AI has never been more imperative, a subject that AI Forensics understands and studies in depth. Enigmatic AI systems have played a significant role in widening societal inequalities, capitalizing on discrimination and polarizing groups through the amplification of disinformation.

“The way these systems have been built is by stacking up layers of opaque algorithms,” says Marc Faddoul of AI Forensics.

AI Forensics is a non-profit that investigates influential algorithms. It was previously known as Tracking Exposed, a project that had been pioneering new methods to hold big tech platforms accountable since 2016. Faddoul explains, “We are refocusing our attention on what we do best: exposing opaque systems with malign intentions and responding to what the ecosystem needs.”

Tracking Exposed’s most recent research uncovered how content from TikTok users in Russia, which otherwise appeared to have been banned since the invasion of Ukraine, was being promoted on the For You feed, a tactic Tracking Exposed termed ‘shadow promotion’.

Whereas shadow banning renders a user’s content visible only to themselves, shadow promotion stealthily amplifies particular content. The investigation followed previous studies that closely examined the role of TikTok’s content moderation and recommender systems in inflaming pro-war propaganda.

Their research reports prompted responses from six U.S. Senators who reached out to TikTok's CEO, demanding the company take action over its policies, which paved the way for the grueling congressional hearings this March.

Read Tracking Exposed’s TikTok observatory reflections on holding TikTok accountable

AI Forensics will continue to peer into social media recommendation algorithms and scrutinize the trust and safety of information spat out by chatbots like ChatGPT. By expanding the infrastructure of their investigation techniques to be mobile-first, they will be able to study usage patterns that more closely mirror how real users interact with these platforms; their initial investigations were carried out on desktop browsers.

These developments, Faddoul explains, will be key to studying the role platforms will play in 2024, when over 50 countries are expected to hold elections with significant global impact. “Inaccurate retrieval of information can have devastating consequences, and this can be through exploiting the failures or vulnerabilities in search and recommendation engines.”

Indeed, information and public perception hold significant power, especially in election seasons, a reality Faddoul and his colleagues will be examining under a magnifying lens. They are particularly concerned about the extent to which large language models are vulnerable to “search index poisoning,” which Faddoul defines as the ability of malign actors to publish arbitrary content that is then repeated and amplified by the language model. It is an obvious way to corrupt the responses of these AI systems, and a strategy that dictatorial regimes could cash in on.
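To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of how a retrieval-augmented chatbot that trusts its search index can end up amplifying planted content. The index, query, and ranking function below are toy stand-ins of our own, not any real system AI Forensics has audited.

```python
# Toy illustration of "search index poisoning": a retrieval-augmented chatbot
# that naively trusts whatever ranks in its search index will repeat planted
# content. The index, query, and ranking here are hypothetical stand-ins.

# A minimal "search index": documents anyone can publish.
search_index = [
    "Official electoral commission: polling stations open 07:00-22:00.",
    "PLANTED: polling stations in district X are closed; vote by SMS instead.",
]

def retrieve(query: str, index: list[str], k: int = 2) -> list[str]:
    """Naive keyword ranking: score documents by overlap with the query terms."""
    terms = set(query.lower().split())
    scored = sorted(index, key=lambda doc: -len(terms & set(doc.lower().split())))
    return scored[:k]

def answer(query: str) -> str:
    """Stuff the retrieved documents into the 'model context' unfiltered."""
    context = " ".join(retrieve(query, search_index))
    # A real LLM would paraphrase rather than echo, but the failure mode is the
    # same: planted content flows into the answer because retrieval trusted it.
    return f"Based on what I found: {context}"

print(answer("when are polling stations open"))
# The planted disinformation is retrieved and repeated alongside the legitimate
# source, which is the corruption of responses Faddoul describes.
```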

One example of such a volatile case is Slovakia’s upcoming presidential election in 2024, which Faddoul believes could place the Eastern European region in a precarious political climate. Slovakia neighbors Ukraine and Poland, territories whose antagonistic relationship with Russia is marred by unceasing political tension. “These will be information battlefields,” he remarks.

But to conduct these audits, researchers rely on data access, which is now being undermined; some researchers are even facing harassment. AI Forensics conducts adversarial audits in which data is collected independently, either through users donating their data or through a ‘sock puppet’ methodology, in which automated user accounts are deployed to interact with platforms such as TikTok or YouTube. However, as platforms’ detection of automated behavior advances, sock puppets can be spotted and blocked, a challenge that researchers have to constantly work around.
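As a rough illustration of what a sock-puppet collection run can look like, here is a minimal sketch built on Playwright’s Python browser-automation API. The feed URL and CSS selector are placeholders that platforms change frequently, and this is an assumption for illustration, not AI Forensics’ actual tooling.

```python
# A minimal sketch of a sock-puppet collection run: a scripted "user" opens a
# platform's feed, scrolls like a person would, and logs what is recommended.
# Illustrative only; the URL and selector are placeholders, not AI Forensics' tooling.
import csv
import datetime

from playwright.sync_api import sync_playwright

FEED_URL = "https://www.youtube.com"    # hypothetical target feed
VIDEO_LINK_SELECTOR = "a#video-title"   # hypothetical selector; may break as the site changes

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto(FEED_URL)
    page.wait_for_timeout(5000)          # let the feed render

    # Scroll a few screens to trigger the recommender, as a real user would.
    for _ in range(5):
        page.mouse.wheel(0, 2000)
        page.wait_for_timeout(2000)

    # Record every recommended item this "puppet" was shown, with a timestamp,
    # so runs from differently configured puppets can be compared later.
    rows = []
    for link in page.query_selector_all(VIDEO_LINK_SELECTOR):
        rows.append({
            "observed_at": datetime.datetime.utcnow().isoformat(),
            "title": link.get_attribute("title"),
            "href": link.get_attribute("href"),
        })
    browser.close()

with open("sock_puppet_run.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["observed_at", "title", "href"])
    writer.writeheader()
    writer.writerows(rows)
```

Comparing what differently configured puppets are shown is what lets auditors infer amplification or suppression; keeping such scripts running as platforms tighten their bot detection is the cat-and-mouse game researchers must constantly work around.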

Additionally, Faddoul notes that the rapid rise of generative AI and the product developments that follow are making the AI auditing landscape more complex to study. The AI sprint race not only raises trust and safety concerns but “creates a really big need for scrutiny, requiring agility to adapt to an ever-evolving, unprecedented landscape,” Faddoul concludes.

About the Mozilla Technology Fund

The Mozilla Technology Fund (MTF) supports open source technologists whose work furthers promising approaches to solving pressing internet health issues. The 2023 MTF cohort will focus on an emerging, under-resourced area of tech with a real opportunity for impact: auditing tools for AI systems.