
Facebook Accused of Inflating Advertising Bills After Disabling 1.3 Billion Fake Accounts

Facebook is under fire yet again this week after admitting it took down 1.3 billion fake accounts late last year. Advertisers are now wondering: Did they pay to advertise to fake people?

It’s not just businesses that may have been cheated. U.S. taxpayer money is often used by federal agencies to pay for Facebook ads. For example, the U.S. Census Bureau reportedly paid more than $51 million to Facebook for digital advertising during the first six months of 2020.

Purging 1.3 billion accounts is unprecedented. With a figure that large, the odds that legitimate ads were served to at least some of these fake users are high, and advertisers are understandably unhappy about it. Here’s what we know about Facebook’s latest scandal.

Why Did Facebook Purge 1.3 Billion Accounts?

Last year, the U.S. House Committee on Energy and Commerce opened an inquiry into how tech platforms handle the spread of misinformation. In preparation, Facebook did some spring cleaning.

In addition to purging 1.3 billion accounts, Facebook’s cleanup operation also removed 12 million pieces of content about COVID-19 and vaccines that were flagged as misinformation by health experts. The company outlined its actions in a recent blog post.

Advertisers are questioning how much of their ad spend went toward serving ads to those fake accounts. After all, Facebook controls 25% of the global digital advertising market and is projected to earn nearly $100 billion in ad revenue in 2021. It may be only a matter of time before advertisers band together to file a class-action lawsuit accusing Facebook of displaying ads to the fake accounts.

“How much money was spent advertising to these fake accounts?” asked Angelo Carusone, CEO of progressive watchdog group Media Matters for America. “When you’re an advertiser, the guarantee that Facebook is inferring at minimum is, ‘Hey we will not allow our platform to be consumed by artificial accounts.’”

This isn’t the first time Facebook has been scrutinized for its advertising products. In 2019, the tech titan paid $40 million to settle a lawsuit accusing it of inflating its video-view metrics by up to 900%. It is currently fighting another class-action lawsuit accusing it of knowingly misleading advertisers with its “Potential Reach” measure.

Facebook is also facing backlash for its dominance in the social media industry. The company was sued in late 2020 by the U.S. Federal Trade Commission (FTC) and 46 states in an antitrust action. The FTC claims Facebook has gobbled up its competitors in an attempt to build a social media monopoly. The antitrust suit asks the court to require the company to:

  • Seek prior approval for all future acquisitions

  • Loosen its grip on the social media landscape by giving up WhatsApp and Instagram

  • Stop its anticompetitive actions against smaller companies

Facebook isn’t the only tech giant to land in hot water with the federal government. Several companies, including Twitter, TikTok, Google, and Instagram, have come under fire in recent years over user privacy violations, anti-competitive tactics, and the spread of misinformation.

Tech Companies Blamed for Spreading Misinformation

The House Committee on Energy and Commerce held a hearing this month to discuss how to stop the spread of false claims and misinformation on technology platforms.

“Whether it be falsehoods about the COVID-19 vaccine or debunked claims of election fraud, these online platforms have allowed misinformation to spread, intensifying national crises with real-life, grim consequences for public health and safety,” the committee chairs said in a statement. “This hearing will continue the Committee’s work of holding online platforms accountable for the growing rise of misinformation and disinformation. For far too long, big tech has failed to acknowledge the role they’ve played in fomenting and elevating blatantly false information to its online audiences.”

Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai all testified at the March 25 hearing. Congress wanted answers on the role the platforms played in the Capitol attack and the spread of COVID-19 misinformation. This was the first time the three CEOs had appeared before Congress since the January 6 attack on the Capitol, which has been blamed on false claims, spread through these platforms, that the 2020 election was rigged against Trump.

Lawmakers from both sides of the aisle were unable to come to a consensus on what comes next. Democrats want to hold the companies to a higher standard for the spread of bigotry and misinformation. Republicans want them to cut back on moderation, arguing that the current policies infringe on free speech.

Both sides agreed it’s time to make changes to Section 230 of the Communications Decency Act, which shields tech platforms from liability for content their users publish. Zuckerberg even shared his own ideas for changing the law, which Dorsey and Pichai generally agreed with. Whatever happens next, it is clear that tech platforms have no choice but to evolve and take action to stop the spread of lies and misinformation.