
TL;DR Summary

This study audits Twitter's shadowbanning, in which a user or their content is temporarily hidden, to analyze how algorithms direct online attention. Repeated tests of 25,000 U.S. Twitter accounts show that shadowbanning is rare; bot-like accounts are more affected, verified ones less so, and accounts posting offensive or political content (both left and right) are more likely to have their replies demoted.

Abstract

Algorithms play a key role in directing online attention on social media, and many accuse them of entrenching bias. This study audits the phenomenon of "shadowbanning" on Twitter, in which users or their content are temporarily hidden by the platform. We repeatedly tested a stratified random sample of U.S. Twitter accounts (n = 25,000) for different forms of shadowbanning, then identified the user types and tweet characteristics that predict it. Overall, shadowbanning is rare. We find that accounts with bot-like behavioral signatures are more likely to be shadowbanned, while verified accounts are rarely shadowbanned. Accounts posting offensive content and political tweets (both left- and right-leaning) are more likely to have their replies demoted. These findings have important implications for algorithmic accountability and for the design of future social media audit research.

Keywords: platforms, Twitter, censorship, audit, shadowban, text analysis

Social media is often praised for upending the traditional gatekeeping roles of media institutions and governments (Shirky, 2008). It ostensibly provides unrestricted access to news within a space for democratic deliberation, safeguarding individuals' right to free expression (Diamond, 2015). Yet the reality of social media feeds may not be as laissez-faire as it appears. What surfaces in a feed is the product of a complex set of rules and norms crafted by coordinated and often competing stakeholders (Gillespie, 2010; Puschmann & Burgess, 2013). These rules and norms govern the strategic silencing and strategic amplification of content (Donovan & Boyd, 2021; Duan et al., 2022). Algorithms are "snippets of code" that "increasingly intervene in our behavior" through our phones and the internet…

In-depth Reading

English Analysis

1. Bibliographic Information

1.1. Title

Auditing Twitter's Shadowbanning: Uncovering Algorithmic Control and Its Social Implications

1.2. Authors

The authors are not explicitly mentioned in the provided document snippet.

1.3. Journal/Conference

The publication venue is not explicitly mentioned in the provided document snippet.

1.4. Publication Year

The publication year is not explicitly mentioned in the provided document snippet. The data collection period spans 2020 and 2021, suggesting publication in 2021 or later.

1.5. Abstract

Algorithms play a crucial role in directing online attention on social media, with many accusing them of solidifying biases. This study audits the phenomenon of "shadowbanning" on Twitter—where a user or their content is temporarily hidden by the platform. The researchers repeatedly tested a stratified random sample of 25,000 U.S. Twitter accounts to determine if they experienced various forms of shadowbanning. Subsequently, they identified user types and tweet characteristics that predict shadowbanning. Overall, shadowbanning was found to be relatively rare. The study revealed that accounts exhibiting bot-like behavior were more susceptible to shadowbanning, while verified accounts were less likely to be affected. Accounts posting offensive content and political tweets (from both left and right ideologies) were more likely to have their replies demoted. These findings hold significant implications for algorithmic accountability and the design of future social media platform audit research.


2. Executive Summary

2.1. Background & Motivation

Social media platforms, often lauded for democratizing information access and free speech, are increasingly governed by complex algorithms that control content visibility. These algorithms are "snippets of code" that "increasingly intervene in our behavior" and "actively make decisions that affect our lives" (Kearns & Roth, 2019). This algorithmic control can lead to strategic silencing or amplification of content, potentially solidifying biases.

A particularly controversial form of algorithmic control is shadowbanning, where users or their content are temporarily hidden without notification. Despite initial denials, platforms like Twitter have acknowledged implementing such measures to foster civil discourse and curb misinformation. However, the lack of transparency surrounding shadowbanning has fueled public anxiety and accusations of ideological bias, with surveys showing a significant portion of users believing they have been shadowbanned. This perceived lack of fairness erodes user trust and poses long-term challenges for platforms.

The core problem this paper aims to address is the opaque nature of shadowbanning mechanisms. There's a critical need for empirical evidence to understand its prevalence, criteria, and impact on social media ecosystems. Specifically, the research aims to uncover whether shadowbanning is widespread, arbitrary, or ideologically biased, and how it influences information gatekeeping and social divisions. Understanding these mechanisms is crucial for improving algorithmic accountability and developing appropriate policy or legal frameworks for platform governance.

2.2. Main Contributions / Findings

This study makes several key contributions:

  • Systematic Audit of Shadowbanning: It provides a large-scale, systematic, and reproducible audit of shadowbanning on Twitter within the U.S. context, using a stratified random sample of 25,000 accounts. This addresses a significant gap in research, which often relies on qualitative methods or limited samples.
  • Identification of Predictive Factors: The research identifies specific user types and tweet characteristics that predict susceptibility to shadowbanning.
    • Rarity of Shadowbanning: Overall, shadowbanning is found to be relatively rare.
    • Bot-like Behavior: Accounts exhibiting bot-like behavior (e.g., new accounts, high friend count relative to follower count, high tweet frequency) are more prone to shadowbanning.
    • Verified Status: Verified accounts are significantly less likely to be shadowbanned, suggesting a layered governance model.
    • Content Characteristics: Accounts posting offensive content and political content (from both left-wing and right-wing perspectives) are more likely to experience reply demotion.
  • Insights into Algorithmic Fluidity: The study demonstrates that shadowbanning of political and social issues exhibits temporal instability. Algorithms are fluid and adapt to emerging user behavior trends, rather than being static.
  • Evidence for Platform-Mediated Gatekeeping: The findings provide empirical evidence for platform-mediated gatekeeping and how technology can reinforce social divisions, particularly through systemic bias against new or low-social-influence users.
  • Implications for Algorithmic Accountability: These insights are crucial for designing algorithmic accountability frameworks and future research on auditing social media platforms, highlighting the need for better transparency and ethical considerations in algorithmic design.

3. Prerequisite Knowledge & Related Work

3.1. Foundational Concepts

To fully understand this paper, a reader should be familiar with the following concepts:

  • Social Media Algorithms: In the context of platforms like Twitter, algorithms are computational processes that manage and personalize the content users see. They determine the visibility, ranking, and distribution of posts in users' feeds, search results, and recommendations. Their primary goal is often to maximize user engagement and platform revenue, but they also enforce terms of service and manage content moderation.
  • Shadowbanning: Also known as soft censorship or ghost banning, this is a practice where a user's content or activity is made invisible or less visible to others on a platform without the user being explicitly notified. The user might still be able to post, but their posts will not reach their intended audience. It's considered a "soft" form of punishment, distinct from outright account suspension or content deletion.
    • Types of Shadowbanning mentioned in the paper:
      • Search Ban: A user's tweets are hidden from search results.
      • Search Suggestion Ban: A user's account does not appear in search suggestions when others try to find them.
      • Ghost Ban: A user's replies to others' tweets are completely invisible to all other users.
      • Reply Demotion: A user's replies are hidden in a collapsed section (e.g., behind a "Show more replies" button) and only load when actively triggered.
  • Algorithmic Accountability: This refers to the concept that algorithms, particularly those with significant societal impact, should be transparent, fair, unbiased, and subject to scrutiny and correction. It involves holding creators and deployers of algorithms responsible for their impacts, especially concerning issues like privacy, equality, and fairness.
  • Platform Governance: This describes how social media platforms (as private entities) manage online content and user interactions. It involves setting rules (terms of service), implementing moderation practices (human and algorithmic), and making decisions about what content is allowed, amplified, or suppressed. These decisions are often influenced by a complex interplay of stakeholder interests, profit motives, and regulatory pressures.
  • Stratified Random Sampling: A statistical sampling method where the population is divided into distinct subgroups (strata) based on shared characteristics. Then, a random sample is drawn from each stratum. This ensures that specific subgroups are adequately represented in the total sample, which can improve the precision of estimates, especially when there are significant differences between strata. In this paper, geographic location (U.S. counties) serves as a stratification factor.
  • Ridge Regression: A technique for analyzing multicollinear regression data. When predictor variables are highly correlated, ordinary least squares (OLS) estimates can be unstable. Ridge regression addresses this by adding a small amount of bias to the regression estimates, shrinking the regression coefficients towards zero, which can lead to more stable and reliable predictions for data with many correlated predictors. It's particularly useful when dealing with a large number of potentially interdependent features, as in this study.
  • Botometer: A tool that checks Twitter accounts and gives them a score indicating how likely they are to be bots. It analyzes various features like follower count, friend count, tweet frequency, and content to make this determination.
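For readers who want a feel for how such scores are obtained, below is a minimal sketch using the open-source botometer Python client. The credentials are placeholders, and the response field used (the "cap" score) is an assumption based on the client's documented v4 output, not a detail from the paper.

```python
import botometer

# Placeholder credentials; a RapidAPI key and Twitter app credentials are required.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

# Score a single account; the CAP ("complete automation probability") field
# is one commonly used summary of how bot-like the account appears.
result = bom.check_account("@example_user")
bot_score = result["cap"]["english"]
```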

3.2. Previous Works

The paper contextualizes its research by referencing several prior studies and concepts:

  • Social Media as a Disruptor of Traditional Gatekeeping: Shirky (2008) is cited for the idea that social media disrupts traditional media and government gatekeeping roles, offering unrestricted access to information and free speech (Diamond, 2015). However, the paper argues that this isn't always the reality, as content visibility is shaped by complex rules and norms (Gillespie, 2010; Puschmann & Burgess, 2013).
  • Strategic Silencing and Amplification: Donovan & Boyd (2021) and Duan et al. (2022) are referenced regarding how rules and norms control the strategic silencing and strategic amplification of content. This highlights that algorithmic intervention is not neutral but purposive.
  • Algorithmic Society: Balkin (2017) and Just & Latzer (2017) are mentioned in the context of algorithmic society, where algorithms increasingly mediate human behavior and decision-making, leading to new forms of surveillance, manipulation, and discrimination.
  • Platform Governance Frameworks: Gillespie (2017), Gorwa (2019), Poell et al. (2014), and Sinclair (2019) inform the understanding of platform governance – how platforms manage content and interactions to serve stakeholder interests, often driven by profit (Caplan & Gillespie, 2020; Cohen, 2019; Diakopoulos, 2015; Popiel, 2021).
  • Algorithmic Auditing: Martini et al. (2021) and Rauchfleisch & Kaiser (2020) are cited as examples of emerging research in algorithmic auditing that examines the effectiveness and social impact of algorithms and language models. This paper aims to bridge platform governance and algorithmic auditing.
  • Layered Governance: Caplan & Gillespie (2020) describe a layered governance model where platforms offer differential resources to different users and apply varying procedures for rule violations. This concept is crucial for understanding how shadowbanning might affect users differently based on their status or influence. LeMerrer et al. (2020) also found evidence of layered governance in an early study of Twitter shadowbanning, suggesting that influential users (e.g., politicians) received preferential treatment.
  • Previous Shadowban Research: LeMerrer et al. (2021) conducted an audit on European users, noting French political representatives were less likely to be shadowbanned. Other studies (e.g., Tanash et al., 2015; King et al., 2014; Majo-Vazquez et al., 2021) have explored social media censorship at national levels or linked account suspension to divisive topics and systematic promotion of political figures. This paper extends this by looking at content dimensions and the U.S. context.
  • Bot Detection Research: Davis et al. (2016) and Yang et al. (2020) are referenced for characteristics of bot-like behavior (new accounts, low friend count, high tweet frequency), which are then incorporated into the Botometer tool. Shao et al. (2018) also studied Twitter's efforts to reduce bot accounts.

3.3. Technological Evolution

The evolution of content moderation on social media has moved from primarily human-driven gatekeeping to increasingly sophisticated algorithmic systems. Initially, social media platforms were celebrated for their open, decentralized nature. However, as they scaled, the sheer volume of content necessitated automated solutions. This led to the deployment of algorithms not just for personalization but also for content moderation and rule enforcement.

Early moderation efforts often focused on explicit violations (e.g., hate speech, graphic violence) with "hard" measures like account bans or content deletion. The introduction of "soft" censorship like shadowbanning represents a more subtle, less transparent form of control. This evolution is driven by the need for efficiency, rapid response, and the complex challenge of managing diverse content while balancing free speech with platform integrity and advertiser-friendly environments. This paper fits into this timeline by auditing one of the less transparent, algorithmically driven soft censorship mechanisms, revealing its characteristics and implications.

3.4. Differentiation Analysis

Compared to existing shadowban and content moderation research, this paper offers several core differentiations and innovations:

  • Systematic and Reproducible Audit: Most prior studies on shadowbanning are qualitative, based on anecdotal evidence, or focus on subjective experiences of a few elite users or influencers. This study, in contrast, conducts a large-scale, systematic, and reproducible audit using a stratified random sample of 25,000 U.S. Twitter accounts. This provides robust empirical evidence rather than anecdotal observations.
  • Focus on User Attributes and Content Features: While some previous audits examined account suspension in relation to political events or topics, this study comprehensively analyzes a wide range of user profile characteristics (e.g., verified status, account age, bot-like behavior), content features (e.g., offensiveness, political hashtags), and social features (e.g., friend count, follower count, engagement) to predict shadowbanning. This offers a more granular understanding of the predictive factors.
  • Multi-Wave Data Collection: The iterative, multi-wave data collection (six rounds over two years) allows the researchers to observe the temporal dynamics of shadowbanning, revealing the fluidity and adaptiveness of Twitter's algorithms, a nuance often missed by single-snapshot studies.
  • Bridging Research Fields: The study explicitly positions itself at the intersection of platform governance and algorithmic auditing research, aiming to foster dialogue between these domains concerning the societal impacts of algorithms.
  • U.S. Context: While shadowbanning has been observed in other regions (e.g., Europe, Turkey, China), this study provides a dedicated and in-depth examination within the U.S. context, addressing specific concerns about ideological bias prevalent in U.S. public discourse.

4. Methodology

4.1. Principles

The core principle of this study is to systematically audit Twitter's shadowbanning algorithms to understand their operation, criteria, and impact. This involves iteratively testing a representative sample of Twitter accounts using an external auditing service, extracting a comprehensive set of user profile, content, and social features, and then employing regression analysis to identify which features predict the occurrence of different types of shadowbans. The goal is to "reverse-engineer" Twitter's algorithmic mechanisms, especially concerning platform-mediated gatekeeping and social division reinforcement.

4.2. Core Methodology In-depth (Layer by Layer)

The methodology can be broken down into several sequential steps:

4.2.1. Sample Selection and Stratification

The study begins by creating a stratified random sample of U.S. Twitter accounts.

  1. Initial Data Source: The researchers leveraged 5.72 million geo-tagged tweets from the 1% data stream of Twitter's firehose from January 2019, as part of the CountyLexicalBank project (Giorgi et al., 2018). These tweets were annotated with FIPS codes (Federal Information Processing Standards codes for geographical areas) based on both GPS coordinates and self-declared user locations.
  2. Account Identification: From this data, 2.02 million Twitter accounts with geo-location data were identified, ensuring that at least 50 unique accounts were present in 1,607 counties.
  3. Super-Set Creation: A super-set of 50,000 user accounts was created using stratified random sampling, averaging 30 Twitter accounts per FIPS code (a sampling sketch follows this list).
  4. Sampling for Audits:
    • For the first four shadowban checks (May-June 2020), a smaller sub-sample of 10,107 Twitter accounts was used (averaging 6 accounts per FIPS code).
    • The fifth and sixth checks (June 2020 and July 2021) were performed on the larger super-set.
    • Out of the 50,000 accounts in the super-set, 38,291 were neither suspended nor deleted at the start of data collection. In the 10,107 account sub-set, 7,734 accounts were active.
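To make the two-stage sampling concrete, here is a minimal sketch of stratified random sampling by FIPS code using pandas. The toy accounts frame and column names are illustrative assumptions, not the authors' data or code.

```python
import pandas as pd

# Toy stand-in for the 2.02M geo-located accounts (columns are illustrative).
accounts = pd.DataFrame({
    "user_id": range(12),
    "fips": ["06037"] * 6 + ["36061"] * 6,  # two example county FIPS codes
})

def stratified_sample(df: pd.DataFrame, per_stratum: int, seed: int = 42) -> pd.DataFrame:
    """Draw up to `per_stratum` accounts uniformly at random from each county."""
    return (df.groupby("fips", group_keys=False)
              .apply(lambda g: g.sample(min(per_stratum, len(g)), random_state=seed)))

# The study averaged ~30 accounts per FIPS code for the super-set and ~6 for
# the sub-sample; the toy numbers here are scaled down accordingly.
super_set = stratified_sample(accounts, per_stratum=3)
sub_sample = stratified_sample(super_set, per_stratum=1)
```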

4.2.2. Shadowban Detection

To detect shadowbans, the Shadowban.EU service (Fosse, 2020; LeMerrer et al., 2021) was utilized.

  1. Service Query: For each username in the sample, the Shadowban.EU web service was queried (a query-loop sketch follows this list).
  2. Shadowban Types Checked: The service checks for four specific types of shadowbans:
    • Search Suggestion Ban: The account does not appear in search recommendations when users search for it.
    • Search Ban: Tweets from the account are completely absent from search results, regardless of whether a quality filter is enabled.
    • Ghost Ban: Replies posted by the user are invisible to all other users on Twitter.
    • Reply Demotion: Replies posted by the user are collapsed behind a separator and only load when "Show more replies" is clicked (or by tapping on mobile).
  3. Data Output: This process yielded over 100,000 data points indicating whether an account was active, suspended, or subjected to a shadowban at a specific time.
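The snippet below sketches what such an audit loop could look like. The endpoint URL, JSON schema, and rate limit are hypothetical stand-ins, since the paper does not document the service's API; only the four ban types are taken from the text.

```python
import time
import requests

API_URL = "https://shadowban.example/api/{}"  # hypothetical endpoint, not the real service
BAN_TYPES = ("search_ban", "search_suggestion_ban", "ghost_ban", "reply_demotion")

def check_account(username: str) -> dict:
    """Query the checker once and flatten the result into one row per account."""
    resp = requests.get(API_URL.format(username), timeout=30)
    resp.raise_for_status()
    data = resp.json()  # assumed schema: {"suspended": bool, "tests": {ban_type: bool}}
    row = {"username": username, "suspended": data.get("suspended", False)}
    for ban in BAN_TYPES:
        row[ban] = bool(data.get("tests", {}).get(ban, False))
    return row

def run_audit(usernames):
    """One detection round: one data point per account, as in the paper's checks."""
    rows = []
    for name in usernames:
        rows.append(check_account(name))
        time.sleep(1)  # assumed politeness delay
    return rows
```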

4.2.3. Data Collection Timing

The shadowban checks were conducted periodically:

  • First Series: May-June 2020 (four checks on the sub-sample).
  • Fifth Check: June 2020 (on the super-set).
  • Sixth Check: July 2021 (on the super-set). This multi-wave approach allows for the observation of temporal dynamics in shadowbanning.

4.2.4. Feature Extraction

Features were extracted using the academic Twitter API, Botometer API, and computational linguistics methods. Features are categorized into personal profile, content, and social characteristics.

4.2.4.1. Personal Profile Features

These features describe the fundamental characteristics of the user account itself.

  • Account Age: Collected via the Twitter API. The raw age (in days) was log-transformed for analysis.
  • Verified Status: Collected via the Twitter API. This is a binary value (either verified or not).
  • Botometer Score: Estimated using the Botometer API (Yang et al., 2020). This score represents the probability that a given Twitter account is a bot.

4.2.4.2. Content Features

These features characterize the content posted by the user. Data for these features was collected from tweets published during six 10-day periods before each shadowban check. This 10-day window is chosen based on prior research suggesting it's optimal for predicting Twitter account suspensions (Seyler et al., 2021) and Twitter's own stated penalty durations (12 hours to 7 days) (Twitter, 2021b). A total of 4.48 million tweets were collected.

  • Tweet Frequency: Calculated as the log-transformed number of tweets posted per day. Accounts that did not post any tweets in the 10 days preceding a check (inactive accounts) were excluded from that specific round's analysis (approximately 8% of accounts per round).
  • Offensiveness Score: For each user, this is the mean of the predicted offensiveness scores of all their tweets within the 10-day window. The offensiveness score for individual tweets was predicted using a machine learning classifier trained on human-annotated data (Davidson et al., 2017).
  • Hashtag Classification: This involves categorizing tweets based on the hashtags they use, following common practice in social media research (Bessi & Ferrara, 2016; Bruns et al., 2016; Gallagher et al., 2018).
    1. Extraction: 2,340 English hashtags were extracted using Python's Natural Language Toolkit.
    2. Filtering: Only hashtags used by at least 30 accounts (0.001% of the total) across the first 1-5 detection rounds were retained, resulting in 154 hashtags.
    3. Categorization: These hashtags were then grouped by semantic similarity or topic. Examples include political tags (#Biden, #DonaldTrump, #Blacklivesmatter, #Defundthepolice), news tags (#BreakingNews), social tags (#Pride, #FathersDay), and leisure tags (#Baseball, #AnimalCrossing, #COVID19, #NewProfilePic).
    4. Feature Value: Each user's posting history related to specific topics was transformed into a standardized frequency distribution relative to their total word count. These topic-specific hashtag frequencies were then used as content features in the regression analysis.
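Steps 1–4 can be illustrated with a short pandas/NLTK sketch. The toy tweets frame, the lowered frequency threshold, and the helper names are assumptions for illustration; the authors' actual pipeline is not published in the snippet.

```python
from collections import Counter

import pandas as pd
from nltk.tokenize import TweetTokenizer

tokenizer = TweetTokenizer()

# Illustrative frame: one row per tweet.
tweets = pd.DataFrame({
    "user_id": [1, 1, 2],
    "text": ["#Biden leads today", "Game night #AnimalCrossing", "#Biden #COVID19 update"],
})

def hashtags(text):
    return [t.lower() for t in tokenizer.tokenize(text) if t.startswith("#")]

tweets["tags"] = tweets["text"].apply(hashtags)
tweets["n_words"] = tweets["text"].str.split().str.len()

# Step 2: keep hashtags used by at least MIN_ACCOUNTS accounts (30 in the paper).
MIN_ACCOUNTS = 2  # lowered for the toy data
by_account = tweets.explode("tags").dropna(subset=["tags"])
keep = by_account.groupby("tags")["user_id"].nunique() >= MIN_ACCOUNTS
kept_tags = set(keep[keep].index)

# Step 4: per-user tag frequency relative to total word count
# (further standardization across users, e.g., z-scoring, could follow).
def tag_features(group):
    counts = Counter(t for tags in group["tags"] for t in tags if t in kept_tags)
    total_words = group["n_words"].sum()
    return pd.Series({tag: n / total_words for tag, n in counts.items()})

features = tweets.groupby("user_id").apply(tag_features).fillna(0)
```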

4.2.4.3. Social Features

These features measure an account's connectivity and importance within the Twitter network.

  • Static Social Features:
    • Follower Count: The number of accounts following the user.
    • Friend Count (or following count): The number of accounts the user follows (representing their out-degree or external network). These values were collected via the Twitter API and were assumed to be stable throughout the analysis period. Both were log-transformed.
  • Dynamic Social Features:
    • Likes: Average number of likes received per tweet.
    • Retweets: Average number of retweets received per tweet.
    • Quoted Tweets: Average number of quoted tweets received per tweet.
    • Replies: Average number of replies received per tweet. These dynamic metrics were collected for tweets published or retweeted by 350 users within the 10-day window before each shadowban check. All were log-transformed.
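A compact sketch of how the log-transformed engagement features could be built with pandas follows; the frame, column names, and the use of log1p (so zero-engagement accounts stay defined) are assumptions, since the paper only states that the metrics were log-transformed.

```python
import numpy as np
import pandas as pd

# One row per tweet in the 10-day window (illustrative columns).
tweets = pd.DataFrame({
    "user_id":  [1, 1, 2],
    "likes":    [3, 0, 120],
    "retweets": [1, 0, 40],
    "quotes":   [0, 0, 5],
    "replies":  [2, 1, 30],
})

# Average engagement per tweet for each user, then log-transform.
per_user = tweets.groupby("user_id")[["likes", "retweets", "quotes", "replies"]].mean()
social_features = np.log1p(per_user)
```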

4.2.5. Regression Analysis

To identify predictors of shadowbanning, multivariate regression was employed.

  • Model: Ridge regression was chosen for its ability to handle multicollinearity and shrink coefficients towards zero, effectively selecting the most significant predictors. The sklearn package in Python was used.
  • Regularization Parameter: The regularization strength for the ridge regression was set to α = 10⁻⁵ to correct for the influence of the large number of covariates (a minimal sketch follows this list).
  • Robustness Check: The results were validated for robustness by fitting a quasi-binomial generalized linear model to the dataset. The supplementary materials indicate high consistency in the patterns of conclusions from both methods.
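Below is a minimal sklearn sketch of the fit described above, with the stated regularization strength. The simulated feature matrix and the ~6% base rate are stand-ins for the real data; this is an illustration, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))        # user, content, and social features (stand-ins)
y = rng.binomial(1, 0.06, size=1000)   # 1 = shadowbanned in this round (~6% base rate)

# Linear probability model with the paper's alpha; because the outcome is
# binary, the authors also re-fit a quasi-binomial GLM as a robustness check.
model = Ridge(alpha=1e-5).fit(X, y)
print(model.coef_[:5])  # signed coefficients, readable as shifts in probability
```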

4.2.6. Hypotheses

The study tested eight main hypotheses regarding the relationship between user characteristics and shadowbanning:

  • H1: Verified Twitter accounts are less likely to be subjected to shadowbanning.

  • H2: Older Twitter accounts (registered earlier) are less likely to be subjected to shadowbanning.

  • H3: Twitter accounts exhibiting more bot-like behavior (e.g., higher Botometer scores, high tweet frequency, low follower-to-friend ratio) are more likely to be subjected to shadowbanning.

  • H4: Mentioning offensive content is positively correlated with shadowbanning.

  • H5: Mentioning political issues is positively correlated with shadowbanning.

  • H6: Mentioning social issues is positively correlated with shadowbanning.

  • H7: Accounts with higher social influence (e.g., many followers) are less likely to be shadowbanned.

  • H8: Accounts with high tweet engagement (e.g., many retweets, likes) are less likely to be shadowbanned.

    The study considered a hypothesis supported if any of the four shadowban types showed a statistically significant effect.
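In code form, this decision rule is simply the following (the p-values are hypothetical):

```python
# Hypothetical p-values for one hypothesis across the four shadowban types.
p_values = {
    "search_ban": 0.03,
    "search_suggestion_ban": 0.21,
    "ghost_ban": 0.48,
    "reply_demotion": 0.001,
}
hypothesis_supported = any(p < 0.05 for p in p_values.values())  # True here
```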

5. Experimental Setup

5.1. Datasets

The experimental setup leveraged Twitter data collected from a large sample of U.S. accounts.

  • Source Data: The foundation for the sample was 5.72 million geo-tagged tweets from a 1% sample of Twitter's firehose data stream in January 2019. This data was part of the CountyLexicalBank project (Giorgi et al., 2018). The geo-tagged tweets were crucial for identifying accounts with a U.S. geographic location.
  • User Sample:
    • 2.02 million Twitter accounts with geo-location data were initially identified, ensuring at least 50 unique accounts per 1,607 U.S. counties.
    • A super-set of 50,000 user accounts was created through stratified random sampling (approximately 30 accounts per FIPS code).
    • For the initial shadowban checks (first four runs), a sub-sample of 10,107 Twitter accounts (approximately 6 accounts per FIPS code) was used.
    • The fifth and sixth checks targeted the super-set.
    • Active Accounts: Out of the 50,000 accounts, 38,291 were not suspended or deleted at the start of data collection. In the 10,107 sub-set, 7,734 accounts were active (i.e., not suspended/deleted).
    • Active for Shadowban Checks: In total, 27,718 accounts were "active" (posted a tweet within the 10 days prior to a shadowban check) across all runs, representing the core sample for the shadowban detection.
  • Tweet Corpus for Content Analysis: Over 4.48 million tweets were collected during six 10-day periods preceding the shadowban checks to extract content features.
  • Dataset Characteristics: The data covers U.S. Twitter users and their activities between 2019 (for initial identification) and 2021 (for later shadowban checks). It captures user profile metadata, tweet content, and social interaction metrics. The geo-located nature of the initial sampling aimed to ensure a geographically representative sample across the U.S.

5.2. Evaluation Metrics

The study's "evaluation metrics" are the dependent variables in the regression analysis, representing the occurrence of different types of shadowbans. The effectiveness of the regression model is assessed by its ability to predict these occurrences based on the independent variables (user profile, content, and social features). The paper does not provide mathematical formulas for these outcomes as they are observational states rather than calculated metrics in the traditional sense of model performance.

  • Search Suggestion Ban:
    • Conceptual Definition: This metric quantifies whether an account's visibility in Twitter's search suggestions is suppressed. Its design goal is to detect if the platform is making it harder for other users to find a specific account when actively searching for it, without outright banning the account from posting.
  • Search Ban:
    • Conceptual Definition: This metric quantifies whether an account's tweets are entirely removed from Twitter's search results. Its design goal is to identify instances where the content posted by a user is made undiscoverable through search, effectively hiding it from broader exposure even if the user can still post.
  • Ghost Ban:
    • Conceptual Definition: This metric quantifies whether a user's replies to other tweets are rendered completely invisible to all other users. Its design goal is to detect a severe form of shadowbanning where a user's direct participation in conversations is nullified, making their contributions appear to vanish.
  • Reply Demotion:
    • Conceptual Definition: This metric quantifies whether a user's replies are hidden behind a collapsible section (e.g., "Show more replies") rather than appearing in the main reply thread. Its design goal is to identify a milder form of shadowbanning that reduces the immediate visibility and discoverability of a user's conversational contributions, requiring explicit action from other users to view them.

5.3. Baselines

This study is an auditing and predictive modeling research, not a comparative study of different algorithmic models for a specific task. Therefore, it does not compare its proposed method against traditional baseline models in the same way a machine learning paper might. Instead, the "baselines" for comparison are implicitly:

  • Null Hypothesis: That no specific user or content characteristics predict shadowbanning.
  • Different User Groups/Content Types: The study implicitly compares the likelihood of shadowbanning across different categories of users (e.g., verified vs. unverified, old vs. new, bot-like vs. human-like) and different types of content (e.g., offensive vs. non-offensive, political vs. non-political). The regression analysis identifies features that significantly deviate from a baseline (or average) likelihood of being shadowbanned.

6. Results & Analysis

6.1. Core Results Analysis

The study's results provide empirical insights into the prevalence and predictive factors of shadowbanning on Twitter.

6.1.1. Overall Shadowban Prevalence

  • Rarity: Shadowbanning was found to be generally rare. Across the first five checks, 1,731 unique Twitter accounts experienced shadowbanning at least once, totaling 2,476 instances. This represents 6.2% of the 27,718 active accounts (those posting within 10 days of a check).

  • Frequency by Type: Reply demotion was the most common form of shadowban (5.33% of active accounts, 1,479 accounts with 1,900 demoted replies). Search ban affected 0.91% of accounts (252 accounts with 293 search bans). Search suggestion ban (0.57%) and ghost ban (0.13%) were considerably rarer.

    The following are the results from [Table 1] of the original paper:

Runs 1–5 (June–July 2020). Cells show ridge coefficients with standard errors in parentheses.

| Feature | Search Ban (1) | Search Suggestion Ban (2) | Ghost Ban (3) | Reply Demotion (4) |
| --- | --- | --- | --- | --- |
| **Profile features** | | | | |
| Account age | -0.937** (0.032) | -0.884** (0.027) | 0.061* (0.011) | -0.586** (0.069) |
| Verified status | -0.987 (0.207) | -1.163* (0.172) | -0.263*** (0.071) | -1.056* (0.445) |
| Botometer score | 3.749*** (0.179) | 2.271** (0.149) | 0.661* (0.096) | -0.607 |
| **Content features** | | | | |
| Tweet frequency | 0.105* (0.028) | 0.304* (0.023) | -0.004 (0.009) | 0.801 (0.059) |
| Offensiveness | 0.366 (0.279) | 0.446 (0.232) | -0.057 (0.096) | 7.041 (0.6) |
| #Biden | 0.265 (0.393) | 0.089 (0.326) | 0.487** (0.134) | 4.501* (0.843) |
| #DonaldTrump | 1.207** (0.462) | -0.14 (0.384) | 0.128 (0.158) | 4.6** (0.991) |
| #Blacklivesmatter | 0.347* (0.149) | 0.085 (0.124) | 0.196** (0.051) | -0.043 (0.32) |
| #Defundthepolice | 0.076 (0.919) | -0.74 (0.764) | 0.566 (0.314) | 0.803 (1.973) |
| #Pride | 0.697 (0.402) | 0.01 (0.335) | 0.832** (0.138) | 0.014 (0.864) |
| #BreakingNews | -0.144 (1.755) | -0.341 (1.459) | -0.034 (0.6) | -0.757 (3.766) |
| #COVID19 | 0.136 (0.142) | -0.147 (0.118) | 0.171*** (0.049) | -0.327 (0.306) |
| #Baseball | -0.175 (0.333) | -0.131 (0.277) | -0.019 (0.114) | -0.201 (0.715) |
| #AnimalCrossing | 0.025 (0.096) | 0.057 (0.08) | 0.055 (0.033) | 0.35 (0.205) |
| #FathersDay | -0.022 (0.261) | 0.001 (0.217) | -0.021 (0.089) | -0.014 (0.56) |
| #NewProfilePic | -0.003 (0.099) | 0.028 (0.082) | 0.003 (0.034) | 0.025 (0.212) |
| **Social influence features** | | | | |
| Friend count | 0.047* (0.022) | 0.028 (0.018) | 0.014 (0.007) | -0.549** (0.047) |
| Likes | 0.108* (0.028) | 0.127** (0.023) | -0.019** (0.01) | 0.246** (0.061) |
| Retweets | 0.032 (0.103) | 0.037 (0.086) | -0.034 (0.035) | 0.727 (0.221) |
| Quoted tweets | -0.028** (0.01) | -0.028** (0.008) | -0.007* (0.003) | -0.139 (0.02) |
| Replies | 0.276 (0.113) | -0.076 (0.094) | 0.055 (0.039) | 0.603 (0.242) |

Note: \*p < 0.05; \*\*p < 0.01; \*\*\*p < 0.001.

6.1.2. Personal Profile Features (H1, H2, H3 Supported)

The analysis strongly supported the hypotheses regarding personal profile features:

  • Verified Accounts (H1): Verified accounts were significantly less likely to be shadowbanned. Specifically, they faced a ~0.9% lower probability of search ban compared to unverified users (p < 0.001, 95% CI: 0.19% to 1.3%).

  • Account Age (H2): Older accounts were less susceptible to shadowbanning. Accounts older than five years had a ~3% lower probability of search ban than accounts 30 days old or younger (p < 0.001, 95% CI for older accounts: 7.25% to 6.80%; for younger: 3.28% to 3.08%).

  • Bot-like Behavior (H3): The Botometer score was positively correlated with shadowbanning. Accounts highly likely to be bots faced a 1.03 times higher probability of search ban (an increase of 3.5% to 3.9%, p < 0.001). This suggests Twitter's algorithms do target bot-like behavior.

    The following figure (Figure 3 from the original paper) shows the effect size of different user types and tweet characteristics on the likelihood of accounts experiencing shadow bans:

Figure 3 (from the original paper): Effect sizes, estimated via ridge regression, of the adjusted independent variables on whether an account was shadowbanned. Results are shown as percentages (×10²); yellow and black markers distinguish effects with p < 0.05 from those with p > 0.05.

The results are presented as percentages (×10²). Black circles/triangles indicate effects stable across years, while black circles with light yellow triangles indicate effects significant in 2020 but not in 2021.

6.1.3. Content Features (H4, H5, H6 Supported)

  • Offensive Content (H4): Posting offensive tweets was the strongest predictor for reply demotion. A one-unit increase in offensiveness led to a 7.3% increase in the probability of a reply being demoted.

  • Political Content (H5, H6): Both pro-Democrat hashtags (#Biden, #TrumpVirus) and pro-Republican hashtags (#DonaldTrump) significantly predicted reply demotion. Each one-unit increase in the use of these tags resulted in a 4.6% and 4.7% increase, respectively.

  • Other Social/Political Tags: Other tags like #Blacklivesmatter and #Pride also showed positive associations with ghost ban or search ban, but these were found to be more sensitive to model specification (as discussed in specification curve analysis).

    The following figure (Figure 4 from the original paper) shows the specification curve analysis results for predicting demoted replies based on political tags:

Figure 4 (from the original paper): Data distributions for the hashtags #Biden and #DonaldTrump. The upper panels show means with error bars; the lower panels list the user-level features included in each specification, conveying how engagement and characteristics differ across users for these two topics.

This figure illustrates how demoted replies correlate with political labels, showing the robustness of these relationships across different model specifications. Blue points and error bars indicate significant positive effects, while red indicates significant negative effects.

6.1.4. Social Features (H7 Supported, H8 Partially Supported)

  • Friend Count (H7): Accounts following a large number of other accounts (friend count) were more likely to face search ban, search suggestion ban, and reply demotion (0.11%, 0.13%, and 0.25% increase per log-transformed unit, respectively). This suggests that high out-degree behavior, potentially indicative of bot-like activity (e.g., mass following), is targeted.

  • Tweet Engagement (H8 - partially): Accounts with higher tweet engagement (measured by retweets) were less likely to be shadowbanned across all four types (0.01% to 0.14% reduction per log-transformed retweet).

  • Follower Count (H7 - continued): Accounts with a high number of followers (in-degree social influence) were significantly less likely to experience reply demotion (0.5% reduction per log-transformed unit). This finding was robust across different model specifications.

    The following figure (Figure 5 from the original paper) shows the specification curve analysis results for predicting the impact of social status on replies:

Figure 5 (from the original paper): The upper panels show the distributions of friend count (a) and follower count (b); red and blue points mark data for different user types, with detailed annotations below the panels.

This figure shows the specification curve analysis for shadowban prediction based on friend count (a) and follower count (b). It reveals that follower count robustly predicts a reduction in reply demotion, while the predictive effect of friend count on reply demotion varies with model settings.

6.2. Data Presentation (Tables)

The table provided in the paper was transcribed in the "Core Results Analysis" section.

6.3. Ablation Studies / Parameter Analysis

The paper conducts a form of robustness and sensitivity analysis rather than traditional ablation studies for model components.

  • Robustness Check: The primary ridge regression results were confirmed using a quasi-binomial generalized linear model, and the supplementary materials show high consistency in the conclusions. This ensures that the findings are not solely dependent on the specific regression technique chosen.
  • Specification Curve Analysis: Figures 4 and 5 illustrate specification curve analysis, which examines the sensitivity of the results to different model specifications (e.g., inclusion/exclusion of specific covariates). This analysis confirms the robustness of key findings, such as the negative association between follower count and reply demotion, and highlights the sensitivity of others, like the predictive power of certain political/social tags for shadowbanning. This is crucial for understanding the reliability and context-dependency of different predictive relationships.
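To make the idea concrete, the following is a bare-bones specification curve loop on simulated data: the same ridge model is re-fit under every subset of optional controls, and the focal coefficient is collected across specifications. All names and data here are illustrative assumptions, not the authors' code.

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n = 2000
focal = rng.normal(size=n)                                  # e.g., log follower count
optional = {f"c{i}": rng.normal(size=n) for i in range(4)}  # optional controls
y = (0.3 * focal + rng.normal(size=n) > 1.5).astype(float)  # toy binary outcome

curve = []
names = list(optional)
for k in range(len(names) + 1):
    for subset in combinations(names, k):
        X = np.column_stack([focal] + [optional[c] for c in subset])
        coef = Ridge(alpha=1e-5).fit(X, y).coef_[0]  # focal effect in this spec
        curve.append((subset, coef))

# Sorting the focal coefficients yields the specification curve; a robust
# effect keeps its sign (and significance) across most specifications.
curve.sort(key=lambda t: t[1])
```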

7. Conclusion & Reflections

7.1. Conclusion Summary

This study provides a comprehensive, large-scale audit of Twitter's shadowbanning algorithms in the U.S., shedding light on this opaque content moderation practice. While shadowbanning is relatively rare, the research identifies consistent patterns: bot-like accounts, newer accounts, and unverified accounts are disproportionately affected. Conversely, verified accounts and those with high social influence (many followers) are less likely to be shadowbanned. Content-wise, offensive content and political tweets (from both left and right ideologies) are more prone to reply demotion. A critical finding is the fluidity of these algorithms; shadowbanning criteria can shift over time, reacting to emerging trends and events. These results underscore the presence of platform-mediated gatekeeping and a layered governance system that reinforces social hierarchies, with significant implications for algorithmic accountability and the future of online discourse.

7.2. Limitations & Future Work

The authors acknowledge several limitations and suggest future research directions:

  • Correlation vs. Causation: The study identifies correlations but acknowledges the difficulty in establishing causal directionality. For example, shadowbanning might affect user engagement, or a lack of engagement might contribute to shadowbanning.
  • User Impact and Awareness: The research did not directly assess whether users perceive shadowbanning or its actual impact on their lives and their followers. Given its rarity, detectability by users is unclear. The impact might also depend on interactions with other algorithms (e.g., recommendation systems).
  • Data and Feature Limitations:
    • Geographic Sampling: Reliance on geo-located tweets for sampling, which constitutes a small percentage (~5.65%) of the overall firehose data.
    • Classifier Accuracy: High dependence on pre-validated machine learning classifiers for offensiveness (Davidson et al., 2017) and bot detection (Botometer by Yang et al., 2020). Previous research (Martini et al., 2021; Rauchfleisch & Kaiser, 2020) has highlighted precision issues and potential false positives/negatives with Botometer, although the study argues its findings reflect Twitter's own assessments.
    • Missing Features: The model did not include all possible factors, such as cross-post similarity.
  • Recommendations for Future Work:
    • Temporal Dynamics of Platform Governance: Future theoretical work should focus on the temporal dynamics of platform governance and how behavioral norms and topic discussions evolve on social media, especially given the observed algorithmic fluidity.
    • User Adaptation Mechanisms: Investigate how users react to and adapt to shadowbanning or censorship (e.g., circumvention strategies, neologisms to evade detection, online resistance).
    • Algorithmic Snapshots: Technology companies and legislators should consider methods for preserving and auditing "algorithmic snapshots" to track dynamic algorithm changes over time.
    • Multi-task Learning for Norms: New training methods like auxiliary multi-task learning should be adopted, incorporating social norms (e.g., privacy) as auxiliary variables or constraints in algorithmic training, rather than relying solely on post-hoc audits.
    • New Benchmarks for Fairness: Establish new benchmarks for influence, engagement, or controversy to audit and strengthen algorithmic fairness standards.
    • Improved Documentation and User Rights: Enhance documentation of data usage, data rights, and user status to enable users to understand their rights and appeal algorithmic decisions. Empower marginalized groups through more negotiation opportunities.
    • Digital Literacy Education: Promote digital literacy education to help users identify and understand biases in information flows.

7.3. Personal Insights & Critique

This paper offers a valuable empirical contribution to the increasingly critical field of algorithmic accountability and platform governance.

  • Strength of Empirical Approach: The use of a stratified random sample and iterative auditing over time significantly strengthens the findings compared to previous qualitative or anecdotal studies. This rigor is essential for challenging opaque algorithmic practices.
  • Insight into Algorithmic Fluidity: The observation that shadowbanning criteria are temporally unstable and responsive to trends is a crucial insight. It highlights that content moderation is not a static policy but a dynamic, adaptive system, making auditing a continuous challenge. This fluidity implies that platforms can strategically adjust their moderation during sensitive periods (e.g., elections) and then relax them.
  • Evidence for Layered Governance: The findings regarding verified accounts and those with high follower counts being protected from shadowbanning provide strong empirical evidence for layered governance. This reinforces concerns that platforms are not neutral public squares but rather operate with internal hierarchies, potentially favoring elite or influential users, which can stifle emergent voices and reinforce existing power structures.
  • Nuance in Political Content Moderation: While the paper found both left- and right-wing political content susceptible to reply demotion, it cautiously notes that political tags alone might not fully capture ideological affiliation. This suggests that while outright ideological bias against a specific political wing might not be evident in this particular form of shadowbanning, further granular analysis is needed, potentially using user-centric ideological measures as explored in supplementary materials.
  • The "Harm Principle" and Unintended Consequences: The discussion on the harm principle and algorithmic nuisance is pertinent. Even if shadowbanning aims to curb offensive content, its systemic bias against low-influence users or bot-like behavior (which might include legitimate new users or those trying to organically grow their presence) can cause unintended harm, such as loss of social capital or reduced visibility. The paper correctly emphasizes that unintended consequences should not be an excuse for lack of accountability.
  • Practical Recommendations: The recommendations for algorithmic snapshots, multi-task learning to embed social norms, and new auditing benchmarks are practical and forward-looking, offering concrete steps towards more accountable and fair algorithmic design.
  • Critique on Botometer: The acknowledged limitations regarding Botometer accuracy are important. While the study argues Botometer approximates Twitter's assessment, potential false positives (classifying humans as bots) could mean that Twitter's algorithms might also be inaccurately shadowbanning legitimate users based on bot-like behavior heuristics, further exacerbating the unintended harm to low-influence users.
  • Transferability: The methods and conclusions regarding algorithmic fluidity, layered governance, and the challenges of auditing opaque systems are highly transferable to other large social media platforms and content moderation contexts, emphasizing the broader societal implications of algorithmic decision-making.
