X Refutes NewsGuard Report That Claims It's Displaying Ads Alongside Harmful Content
X continues to dig its heels in amid more reports indicating that the platform's brand safety measures are not functioning as intended.

Today, ad monitoring organization NewsGuard published a new report detailing how its researchers discovered more than 200 ads on viral posts in the app that contained misinformation about the Israel-Hamas war.

As explained by NewsGuard:

From Nov. 13 to Nov. 22, 2023, NewsGuard analysts reviewed programmatic ads that appeared in the feeds below 30 viral tweets that contained false or egregiously misleading information about the war […] In total, NewsGuard analysts cumulatively identified 200 ads from 86 major brands, nonprofits, educational institutions, and governments that appeared in the feeds below 24 of the 30 tweets containing false or egregiously misleading claims about the Israel-Hamas war.

NewsGuard has also shared a full list of the posts in question (NewsGuard refers to them as “tweets,” but “posts” is now what X calls them), so you can check for yourself whether ads are being shown in the reply streams of each. Ads in replies could qualify the post creators to claim a share of ad revenue from X as a result of such content, if they meet the other requirements of X’s creator ad revenue share program.

NewsGuard also notes that the posts in question were displayed to over 92 million people in the app, according to X’s view counts.

“The 30 tweets advanced some of the most egregious false or misleading claims about the war, which NewsGuard had previously debunked. These include that the Oct. 7, 2023, Hamas attack against Israel was a ‘false flag’ and that CNN staged footage of an October 2023 rocket attack on a news crew in Israel.”

One positive note for X’s new moderation process is that 15 of the 30 posts identified by NewsGuard were also marked with a fact-check via Community Notes, which X is increasingly relying on to moderate content. According to X’s guidelines, that should also have excluded these posts from monetization, but NewsGuard says that such restrictions were seemingly not applied.

“Ads for major brands, such as Pizza Hut, Airbnb, Microsoft, Paramount, and Oracle, were found by NewsGuard on posts with and without a Community Note.”

So there are some significant flaws here. First, the fact that X is seemingly amplifying misinformation is a concern. Second, X is also displaying ads alongside such content, so it’s technically profiting from misleading claims. Finally, the Community Notes system is seemingly not working as intended with regard to blocking monetization of “noted” posts.

In response, or actually, in advance of the new report, X got on the front foot:

As noted, NewsGuard has also shared details and links to the posts in question, so the claims can be verified to a large degree. The actual ads shown to each user will vary, however, so technically there’s no way for NewsGuard to definitively provide all of the evidence, other than via screenshots. But then, X owner Elon Musk has also accused other reports of “manufacturing” screenshots to support their hypothesis, so X would likely dismiss that as well.

Essentially, X is now the subject of at least four reports from major news monitoring organizations, all of which have provided evidence suggesting that X’s programmatic ad system is displaying ads alongside posts that include overt racism, anti-Semitism, misinformation, and references to dangerous conspiracy theories.

In almost every case, X has refuted the claims out of hand, made reactive defamatory remarks (via Musk), and threatened legal action as a result:

And now, NewsGuard is the latest organization in Musk’s firing line, for publishing research that essentially outlines something counter to what he wants to believe is the case.

And while it could be that some of the findings across these reports are flawed, and that some of the data is not 100% correct, the fact that so many issues are being highlighted, consistently, by various organizations suggests that there are likely some genuine concerns here for X to investigate.

That’s especially true given that you can re-create many of these placements yourself by loading the posts in question and scrolling through the replies.

The logical, corporate approach, then, would be for X to work with these groups to address these problems and improve its ad placement systems. But evidently, that’s not how Musk is planning to operate, which is likely exacerbating concerns among ad partners, who are boycotting the app due to these reports.

Though, of course, it’s not only these reports that are giving advertisers pause at this stage.

Musk himself regularly amplifies conspiracy theories and other potentially harmful opinions and reports, which, given that he’s the most followed user in the app, is significant. That’s seemingly the main cause of the advertiser re-think, but Musk is also attempting to persuade them that everything is fine, that these reports are lying, and that these groups are somehow working in concert to attack him, even though he himself is a key distributor of the content in question.

It’s a confusing, illogical state of affairs, yet, somehow, the idea that there’s a massive collusion of media entities and organizations working to oppose Musk for exposing “the truth” is more believable to some than the evidence presented right before them, which is offered as a means to prompt further action from X to address such concerns.

Which is what these organizations are actually pushing for: to get X to update its systems in order to counter the spread of harmful content, while also halting the monetization of such claims.

This is technically what X is now opposing, as it tries to enable more freedom in what people can post in the app, offensive or not, under the banner of “free speech.” In that sense, X doesn’t want to police such claims; it wants users to do it, in the form of Community Notes, which it believes is a more accurate, accountable approach to moderation.

But clearly, it’s not working. Musk and Co. don’t want to hear that, so they’re using the threat of legal action as a means to silence opposition, in the hope that these organizations will simply give up reporting on such issues.

But that’s unlikely. And with X losing millions in ad dollars every day, it’s almost like a game of chicken, where one side will eventually have to cut its losses first.

Which, overall, looks like a flawed strategy for X. But Musk has repeatedly planted his flag on “Free Speech” hill, and he seems determined to stick with that, even if it means burning down the platform formerly known as Twitter in the process.

Which is looking more likely by the day, as Musk continues to restate his defiance, and the reports continue to show that X is indeed amplifying and monetizing harmful content.

Perhaps, as Musk says, it’s all worth it to make a stand. But it’s looking like a costly exercise to prove a flawed point.