Bluesky has published its inaugural transparency report this week, detailing the work of its Trust & Safety team, including age-assurance compliance, monitoring of influence operations, and automated labeling.
As a burgeoning social media platform competing with X and Threads, Bluesky grew nearly 60% in 2025, expanding its user base from 25.9 million to 41.2 million. That figure covers accounts hosted on Bluesky’s own infrastructure as well as those on independently managed infrastructure within its decentralized network, powered by the AT Protocol.
In the past year, users generated 1.41 billion posts, which constitutes 61% of all posts made on Bluesky to date. Of these, 235 million posts featured media, accounting for 62% of all media content shared on the platform.
Additionally, the report indicated a sharp rise in legal requests from law enforcement, government regulators, and other legal entities: 1,470 in 2025, up more than sixfold from just 238 in 2024.
While the company had previously released moderation reports covering 2023 and 2024, this marks its first comprehensive transparency report, expanding beyond moderation to cover regulatory compliance and account verification, among other areas.
54% Increase in User Reports of Moderation Issues
After a 17-fold surge in 2024, user reports of moderation issues grew at a gentler pace this year, rising 54% from 6.48 million reports in 2024 to 9.97 million in 2025.
Bluesky noted, however, that this growth closely tracked the 57% rise in its user base over the same period.
Approximately 3% of the user base, or 1.24 million users, submitted reports in 2025. The predominant categories were “misleading” (including spam) at 43.73%, “harassment” at 19.93%, and sexual content at 13.54%.
A broad “other” category accounted for 22.14% of reports that did not fit these classifications, including violence, child safety concerns, violations of site rules, and self-harm, each representing a smaller share.
Within the “misleading” category, there were 4.36 million reports, of which spam accounted for 2.49 million.
Hate speech was the largest named subcategory among the 1.99 million “harassment” reports, at around 55,400, followed by targeted harassment (approximately 42,520 reports), trolling (29,500 reports), and doxxing (about 3,170 reports).
However, Bluesky clarified that many “harassment” reports involved behavior that fell into a gray area of antisocial conduct, which might include inappropriate remarks that did not qualify as hate speech.
Most reports related to sexual content (1.52 million) concerned mislabeling: adult content that was not tagged with the metadata Bluesky’s tools rely on to let users manage their own moderation experience.
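For context on how that metadata works, adult-content tagging on Bluesky takes the form of self-labels embedded in the post record itself, which clients then use to filter or blur content according to each user’s settings. A minimal sketch using the @atproto/api TypeScript client (the handle, app password, and choice of label value here are illustrative):

```ts
import { AtpAgent } from '@atproto/api'

// Sketch: publish a post that carries a self-applied adult-content label.
// Clients read this metadata to filter or blur the post per user settings.
const agent = new AtpAgent({ service: 'https://bsky.social' })

async function postWithSelfLabel() {
  // Hypothetical credentials, for illustration only.
  await agent.login({ identifier: 'alice.example.com', password: 'app-password' })

  await agent.post({
    text: 'Example post carrying adult-content metadata',
    labels: {
      $type: 'com.atproto.label.defs#selfLabels',
      values: [{ val: 'porn' }], // other values include 'sexual' and 'nudity'
    },
  })
}

postWithSelfLabel().catch(console.error)
```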
A smaller number of reports focused on non-consensual intimate imagery (approximately 7,520), abusive content (around 6,120), and deepfakes (over 2,000).
Reports regarding violence (totaling 24,670) encompassed subcategories such as threats or incitement (around 10,170 reports), glorification of violence (6,630 reports), and extremist content (3,230 reports).
In addition to user submissions, Bluesky’s automated system flagged 2.54 million potential violations.
Notably, Bluesky reported that daily reports of antisocial behavior on the platform fell by 79% after it implemented a system that identifies toxic replies and reduces their visibility, a strategy similar to one employed by X.
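Bluesky has not published how that system works, so the sketch below is purely hypothetical: it illustrates the general technique of scoring replies and down-ranking, rather than deleting, those that look toxic. The keyword heuristic stands in for whatever real classifier such a system would use:

```ts
// Hypothetical illustration only; scoreToxicity() is a crude stand-in
// for a real toxicity classifier.
interface Reply {
  uri: string
  text: string
}

const TOXIC_MARKERS = ['idiot', 'moron'] // placeholder vocabulary

// Return a rough 0..1 toxicity score based on marker hits.
function scoreToxicity(text: string): number {
  const lower = text.toLowerCase()
  const hits = TOXIC_MARKERS.filter((w) => lower.includes(w)).length
  return Math.min(1, hits / 2)
}

// Down-rank (rather than remove) replies scoring above the threshold,
// so they stay accessible but become less visible in the thread.
function rankReplies(replies: Reply[], threshold = 0.5): Reply[] {
  return [...replies].sort(
    (a, b) =>
      Number(scoreToxicity(a.text) >= threshold) -
      Number(scoreToxicity(b.text) >= threshold)
  )
}
```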
User reports also fell month over month, with reports per 1,000 active users declining by 50.9% between January and December.
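For clarity, that per-1,000-users figure normalizes raw report counts against audience size, so the rate can fall even as absolute report volume grows. A quick illustration (the monthly inputs are invented; only the 50.9% decline comes from the report):

```ts
// Normalize a raw report count into a per-1,000-active-users rate.
function reportsPer1k(reports: number, activeUsers: number): number {
  return reports / (activeUsers / 1_000)
}

// Hypothetical months: absolute reports stay roughly flat while the
// user base grows, so the normalized rate drops.
const january = reportsPer1k(900_000, 20_000_000) // 45.0 per 1k
const december = reportsPer1k(910_000, 41_200_000) // ~22.1 per 1k

console.log(`${((1 - december / january) * 100).toFixed(1)}% decline`) // 50.9% decline
```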
Beyond moderation, Bluesky said it removed 3,619 accounts suspected of involvement in influence operations, most of them likely originating from Russia.
Surge in Takedowns and Legal Requests
Last autumn, the company announced its intention to intensify its moderation and enforcement efforts, a promise it appears to have fulfilled.
In 2025, Bluesky removed a total of 2.44 million items, spanning both accounts and various types of content. That contrasts sharply with the previous year, when moderators removed 66,308 accounts and automated tools accounted for another 35,842 account removals.
Additionally, moderators actively removed 6,334 records (individual pieces of content, such as posts), while automated systems accounted for another 282 removals.

In 2025, Bluesky also issued 3,192 temporary suspensions and 14,659 permanent removals for ban evasion, primarily targeting accounts involved in inauthentic behavior, spam networks, and impersonation.
Nonetheless, the report suggests a preference for content labeling over outright suspensions. In the past year, Bluesky applied 16.49 million labels to content, most of them for adult content, suggestive content, or nudity, representing a 200% increase year-over-year. Account takedowns, meanwhile, rose 104%, from 1.02 million to 2.08 million.
