
[Image: Smartphone with “Fake News” headline]

The Mathematics of Misinformation

In new book, professor explains how digital media and AI algorithms helped propagate “fake news”

Molly Mastantuono

During an October 2019 press conference, Donald Trump claimed credit for inventing the term “fake news.” While it’s true the former president helped popularize the phrase, the practice of intentionally spreading false or misleading information has been around for centuries. From the sensationalist “yellow journalism” embraced by American media tycoon William Randolph Hearst in the 1890s to racist Nazi propaganda circulated before and during World War II, fake news has long been used to sway public opinion, perpetuate injustice, and even instigate violence.

[Image: Professor Noah Giansiracusa]

But as Noah Giansiracusa, assistant professor of Mathematical Sciences, makes clear in his new book, “How Algorithms Create and Prevent Fake News: Exploring the Impacts of Social Media, Deepfakes, GPT-3, and More,” the rise of digital media and advances in machine learning techniques have raised the stakes of the information game to critical levels. By eroding the line between fact and fiction, he says, these platforms have inadvertently created “a technological arms race,” increasing both the speed at which fake news spreads and the magnitude of its influence. 

In his book, Giansiracusa explains how artificial intelligence (AI), a branch of computer science that creates machines capable of thinking and acting like humans, has contributed to our “current morass of media mendacity.” Specifically, he focuses on deep learning algorithms, the specialized programs that enable computers to identify patterns among diverse and voluminous data sets. 

Every time we visit social media sites like Facebook and YouTube, Giansiracusa says, we leave behind a “trail of digital crumbs.” This information, based on what we like, share, read or watch during our visit, is fed into AI algorithms and used to predict and influence our future behavior; these programs determine which ads or posts we see in our Facebook news feeds and which videos appear in our YouTube recommendation queues. From a business perspective, he notes, the ultimate goal for these companies is maximizing user engagement — and, in turn, maximizing their own profits. 
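
To make that feedback loop concrete, here is a minimal sketch, in Python, of an engagement-driven feed ranker. The features, training data, and model are all invented for illustration; this is the general pattern Giansiracusa describes, not Facebook’s or YouTube’s actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical "digital crumbs": each row is one item previously shown to a
# user. Features (all invented): [topic matches recent watches, user has
# shared similar items before, minutes on the platform that day].
X = np.array([
    [1, 1, 30],
    [1, 0, 5],
    [0, 1, 45],
    [0, 0, 2],
    [1, 1, 60],
    [0, 0, 10],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = the user engaged (clicked, watched, shared)

# Train a toy engagement predictor on the interaction log.
model = LogisticRegression().fit(X, y)

# Candidate posts for the next feed refresh, described by the same features.
candidates = {
    "post_a": [1, 1, 20],
    "post_b": [0, 0, 20],
    "post_c": [1, 0, 20],
}

# Rank candidates by predicted probability of engagement. Note what is
# absent: nothing here scores a post's accuracy, only its clickability.
ranked = sorted(candidates,
                key=lambda p: model.predict_proba([candidates[p]])[0][1],
                reverse=True)
print(ranked)
```

The objective is the point: a ranker like this is rewarded for engagement alone, which is why sensational or false content can rise to the top of a feed without anyone intending it to.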

This is important, Giansiracusa stresses, because we tend to view Google, Facebook and YouTube as benign search engines and social media platforms instead of what they are: digital advertising companies. Google, for example, earned nearly $150 billion in ad dollars in 2020, an amount reflecting 80% of the company’s total revenue. And Facebook and Google together account for nearly 20% of the global advertising industry.  

In addition to offering advertisers space on its own platform, Google acts as a “virtual realtor” by placing ads, for a fee, on third-party sites. And it’s through this latter mechanism, Giansiracusa explains, that fake news has flourished. Given the immense scale of Google’s network — the company delivers more than 30 billion ad impressions every day — these transactions are algorithmically driven, prioritizing quantity over quality. As a result, Google has unwittingly placed billions of ads both on behalf of fake news peddlers and alongside their content, providing them with revenue streams in the process.
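
The “quantity over quality” dynamic can be sketched in a few lines. The toy auction below fills a slot on a third-party page purely by bid; the bids, URL, and revenue split are all hypothetical, and real programmatic exchanges are far more elaborate, but the basic loop likewise contains no check on the hosting page’s truthfulness.

```python
# Hypothetical programmatic placement: the slot goes to the highest bidder,
# and the publisher takes a cut, whatever the page publishes.
ad_bids = {"shoe_ad": 0.40, "crypto_ad": 0.90, "news_sub_ad": 0.55}

def fill_slot(page_url, bids, publisher_share=0.7):  # assumed revenue split
    winner = max(bids, key=bids.get)
    publisher_revenue = publisher_share * bids[winner]
    return winner, round(publisher_revenue, 2)

# The same code runs whether the page is a newspaper or a fake news mill.
print(fill_slot("totally-real-news.example", ad_bids))
```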

Even the best AI algorithms today don’t form abstract conceptualizations or common sense the way a human brain does — they just find patterns when hoovering up reams of data. They are fantastic tools, but they need human guidance.
Noah Giansiracusa
Assistant Professor, Mathematical Sciences

In a similar fashion, YouTube — which is owned by Google — spawned an “accidental synergy” between its recommendation algorithm and fake news videos, specifically those promoting conspiracy theories. Giansiracusa notes that YouTube users watch 1 billion hours of videos each day, and that 500 hours of new content are uploaded to the site every minute. YouTube’s algorithm, which keeps viewers engaged by recommending a steady stream of alternative videos, is responsible for 70% of users’ total watch time.   

[Image: Giansiracusa’s new book]

From flat earthers and climate change deniers to alt-right extremists and Bigfoot believers, conspiracy theorists have a long history of posting to YouTube. When users spend time watching their videos, the algorithm propels them further down the rabbit hole, simultaneously reaching a larger audience and conferring an air of legitimacy. As Giansiracusa explains, “Even if a particular conspiracy theory seems blatantly implausible, the viewer tends to feel that all signs are pointing to the same hidden truth” when YouTube recommends a sequence of similarly themed videos from multiple creators.  

Although his book demonstrates how AI algorithms have played “an alarming role in the spread of fake news and disinformation,” Giansiracusa wants readers to refrain from rising against the machines. Indeed, he says, we can use these programs to achieve positive outcomes — for example, identifying when hate speech is shared via social media platforms, or assessing the validity of questionable content and promoting only verified news sources in search rankings. But this will require supervision. “Even the best AI algorithms today don’t form abstract conceptualizations or common sense the way a human brain does — they just find patterns when hoovering up reams of data,” he explains. “They are fantastic tools, but they need human guidance.”   
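
One way to picture that human guidance is a triage loop: a model scores content, but a person makes the final call. The sketch below, with invented posts, labels, and threshold, illustrates the pattern rather than any platform’s actual moderation pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: a handful of posts hand-labeled as hate speech (1) or not (0).
train_posts = ["you people are subhuman garbage",
               "had a lovely walk in the park",
               "go back where you came from",
               "great game last night"]
labels = [1, 0, 1, 0]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_posts), labels)

def triage(post, threshold=0.5):
    """Score a post; flag it for a human moderator rather than auto-removing it."""
    score = clf.predict_proba(vec.transform([post]))[0][1]
    decision = "queue for human review" if score >= threshold else "leave up"
    return round(score, 2), decision

print(triage("you people are garbage"))
```

The algorithm here only prioritizes the moderators’ queue; the judgment call that requires common sense, exactly what Giansiracusa says the models lack, stays with a person.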

In Giansiracusa’s view, the blame lies with the companies that continue to deploy these powerful programs despite the discord they sow. He maintains that federal oversight is necessary to ensure that Google, Facebook, YouTube, and other digital advertisers become more transparent about their algorithms and accountable for their actions. And he’s already found some powerful allies who echo this call to action: Nobel Prize-winning economist Paul Romer reached out to Giansiracusa after reading his book (of which Romer says, “There is no better guide to the strategies and stakes of this battle for the future”) to collaborate on ways to keep Big Tech in check.

Through this promising new partnership, Giansiracusa hopes to raise greater awareness and support for government regulation. After all, he says, we can’t rely on the companies to fix the problems themselves: “There is no real financial incentive for being truthful.” 
