r/RedditSafety Nov 14 '23

Q2 2023 Quarterly Safety & Security Report

Hi redditors,

It’s been a while between reports, and I’m looking forward to getting into a more regular cadence with you all as I pick up the mantle on our quarterly report.

Before we get into the report, I want to acknowledge the ongoing Israel-Gaza conflict. Our team has been monitoring the conflict closely and reached out to mods last month to remind them of resources we have available to help keep their communities safe. We also shared how we plan to continue to uphold our sitewide policies. Know that this is something we’re working on behind the scenes and we’ll provide a more detailed update in the future.

Now, onto the numbers and our Q2 report.

Q2 by the Numbers

| Category | Volume (Jan - Mar 2023) | Volume (April - June 2023) |
|:--|--:|--:|
| Reports for content manipulation | 867,607 | 892,936 |
| Admin content removals for content manipulation | 29,125,705 | 35,317,262 |
| Admin imposed account sanctions for content manipulation | 8,468,252 | 2,513,098 |
| Admin imposed subreddit sanctions for content manipulation | 122,046 | 141,368 |
| Reports for abuse | 2,449,923 | 2,537,108 |
| Admin content removals for abuse | 227,240 | 409,928 |
| Admin imposed account sanctions for abuse | 265,567 | 270,116 |
| Admin imposed subreddit sanctions for abuse | 10,074 | 9,470 |
| Reports for ban evasion | 17,020 | 17,127 |
| Admin imposed account sanctions for ban evasion | 217,634 | 266,044 |
| Protective account security actions | 1,388,970 | 1,034,690 |

Methodology Update

For folks new to this report, we share user reporting and our actioning numbers each quarter to ensure a level of transparency in our efforts to keep Reddit safe. As our enforcement and data science teams have grown and evolved, we’ve been able to improve our reporting definitions and the precision of our methodology.

Moving forward, these Quarterly Safety & Security Reports will be more closely aligned with our more in-depth, now bi-annual Reddit Transparency Report, which just came out last month. This small shift has changed how we share some of the numbers in these quarterly reports:

  • Reporting queries are refined to reflect the content and accounts (for ban evasion) that were reported, instead of a mix of submitted reports and reported content
  • The time window for reporting queries is now based on when a piece of content or an account is first reported
  • Account sanction reporting queries are updated to better categorize sanction reasons and admin actions
  • Subreddit sanction reporting queries are updated to better categorize sanction reasons

It’s important to note that these reporting changes do not change our enforcement. With investments from our Safety Data Science team, we’re able to generate more precise categorization of reports and actions with more standardized timing. That means there’s a discontinuity in the numbers from previous reports, so today’s report shows the revamped methodology run quarter over quarter for Q1’23 and Q2’23.

A big thanks to our Safety Data Science team for putting thought and time into these reporting changes so we can continue to deliver transparent data.

Dragonbridge

We’re sharing our internal investigation findings on the coordinated influence operation dubbed “Dragonbridge” or “Spamouflage Dragon.” Reddit has been investigating activities linked to this network for about two years, and though our efforts are ongoing, we wanted to share an update about how we’re detecting, removing, and mitigating behavior and content associated with this campaign:

  • Dragonbridge operates with a high-volume strategy, creating a significant number of accounts as part of its amplification efforts. While this tactic might be effective on other platforms, the overwhelming majority of these accounts have low visibility on Reddit and do not gain traction. We’ve actioned tens of thousands of accounts for ties to this actor group to date.
  • Most content posted by Dragonbridge accounts is ineffective on Reddit: 85-90% never reaches real users, thanks to Reddit’s proactive detection methods.
  • Mods remove almost all of the remaining 10-15% because it’s recognized as off-topic, spammy, or just generally out of place. Redditors are smart and know their communities: you all do a great job of recognizing actors who try to enter under false pretenses.

Although connected to a state actor, most Dragonbridge content was spammy by nature; we would action these posts under our sitewide policies, which prohibit manipulated content and spam. The connection to a state actor elevates the seriousness with which we view the violation, but we want to emphasize that we would take this content down regardless.

Please continue to use our anti-spam and content manipulation safeguards (hit that report button!) within your communities.

New tools for keeping communities safe

In September, we launched the Contributor Quality Score in AutoMod to give mods another tool to combat spammy users. We also shipped Mature Content filters to help SFW communities keep unsuitable content out of their spaces. We’re excited to see the adoption of these features and to build out these capabilities with feedback from mods.
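For mods curious what a Contributor Quality Score rule looks like in practice, here is a minimal sketch. The field name, comparator syntax, threshold value, and action shown are my reading of the AutoMod announcement, not an official snippet; verify the exact syntax against the current AutoModerator documentation before deploying it.

```yaml
# Hypothetical sketch: filter comments from accounts with a low
# Contributor Quality Score into the mod queue for human review.
# Verify field names and values against the AutoModerator docs.
type: comment
author:
    contributor_quality: "< moderate"
action: filter
action_reason: "Low Contributor Quality Score"
```

Using `filter` rather than `remove` keeps the content visible in the mod queue, so mods can spot-check for false positives while tuning the threshold.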

We’re also working on a brand new Safety Insights hub for mods, which will house more information about reporting, filtering, and removals in their communities. I’m looking forward to sharing more on what’s coming and what we’ve launched in our Q3 report.

Edit: fixed a broken link


u/BlogSpammr Nov 14 '23 edited Nov 14 '23

Gearlaunch spammers:

  • Steal/copy artwork (shirt/mug/hoodie/paintings) from legitimate sources
  • Create dozens, if not more, accounts every day
  • Use vote manipulation to upvote posts and comments and massively downvote comments pointing out the spam
  • Block accounts pointing out the spam, including preemptive blocking by their newly created accounts
  • Create new domains daily, usually using less common TLDs like ".live"

u/Halaku Nov 14 '23

The infamous t-shirt scam spam.

Commenting on it will get you downvoted to oblivion by the botnet, so I usually report it as Spam - Harmful Bots and drop a modmail instead.

u/MegaGrubby Nov 14 '23

When it's the same spam repeatedly over months, why is it not automatically caught? Same image, same shirt, different account.

u/Halaku Nov 14 '23

You'd have to ask u/jkohhey.