X Opens Algorithm After €120 Million EU Fine as Unverified Censorship Claims Spread
Elon Musk releases X's recommendation algorithm code following EU enforcement action. Viral claims that criticism of Israel is throttled by 90% could not be independently verified, though documented concerns about pro-Palestinian content suppression persist.
Elon Musk's social media platform X has released its recommendation algorithm code to the public, following a €120 million fine from the European Union. The move has sparked renewed debate about platform transparency and content moderation, with viral claims circulating about alleged censorship of Israel-critical content that could not be independently verified.
EU Forces Musk's Hand
The algorithm release came after months of regulatory pressure from Brussels. In December 2025, the European Commission fined X for violating transparency obligations under the Digital Services Act. The violations included selling verified blue checkmarks without proper identity verification and blocking researcher access to advertising data.
The EU has extended a data retention order on X through the end of 2026, demanding preservation of internal documents related to algorithms and illegal content. French prosecutors separately launched an investigation into suspected algorithmic bias in July 2025.
Musk announced on January 10 that X would make its algorithm open source within seven days. The code was published on GitHub on January 20, revealing the recommendation system that determines what hundreds of millions of users see in their feeds.
This marks the second such release since Musk acquired the platform in 2022. The previous release in March 2023 was widely criticised as incomplete and was never updated.
Viral Claims About Israel Censorship
Following the algorithm release, posts circulated on social media claiming that users had found evidence in the code that critics of Israel had their reach cut by 90 percent. Some posts went further, asserting that the algorithm was fundamentally built around whether users criticise Israel.
These specific claims could not be independently verified. The open-source code does not contain explicit Israel-related content filters, according to researchers who reviewed the release. Critics have noted that key elements like model weights and training data remain hidden, making comprehensive analysis impossible.
What Research Actually Shows
Academic research has documented that content withholding on X can sharply reduce account growth. A peer-reviewed study found that accounts subject to geopolitical censorship saw roughly a 90 percent drop in daily follower growth after withholding actions took effect.
This research focused primarily on Russian accounts censored in the EU during the Ukraine conflict and Turkish accounts blocked for militant propaganda. It did not specifically examine Israel-critical content.
Digital rights organisations have separately raised concerns about pro-Palestinian content suppression across social media platforms. The organisation 7amleh has documented what it describes as systematic censorship of Palestinian content. Amnesty International has warned of a pattern involving both dehumanising content advocating violence against Palestinians and over-broad censorship of Palestinian accounts.
X has reportedly suspended hundreds of Palestinian accounts since October 2023, according to the Business and Human Rights Resource Centre. A separate report found X failed to remove 96 percent of hate speech posts targeting Muslims, Palestinians and Jewish people during the Gaza conflict.
Transparency Theatre
Observers have characterised the algorithm release as transparency theatre. The published code reveals the ranking system but excludes crucial operational elements.
The release shows that X's algorithm scores posts on weighted engagement signals: likes, retweets (weighted 20 times), replies (13.5 times), and profile clicks (12 times). The system uses Grok-based transformer technology for relevance predictions.
What remains hidden: the training data that shapes the model's behaviour, the specific model weights, and the decision-making processes that determine what content gets suppressed.
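The disclosed multipliers imply a simple linear scoring rule for engagement. The sketch below is an illustrative assumption, not the actual formula from the repository: the published code does not reveal how these signals are normalised, decayed, or combined with the model's predictions, and the baseline weight of 1 for likes is assumed here rather than stated in the source.

```python
def engagement_score(likes: int, retweets: int, replies: int,
                     profile_clicks: int) -> float:
    """Combine raw engagement counts using the disclosed multipliers.

    Assumes a plain weighted sum with likes at a baseline weight of 1;
    only the retweet, reply, and profile-click multipliers are reported.
    """
    return (
        likes * 1.0              # baseline weight (assumed)
        + retweets * 20.0        # retweets weighted 20x
        + replies * 13.5         # replies weighted 13.5x
        + profile_clicks * 12.0  # profile clicks weighted 12x
    )

# Example: 100 likes, 10 retweets, 4 replies, 5 profile clicks
score = engagement_score(100, 10, 4, 5)
# 100 + 200 + 54 + 60 = 414.0
```

The multipliers alone illustrate why the raw code reveals little: with the model weights and training data withheld, observers can see that replies count more than likes but not what the relevance model ultimately promotes or suppresses.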
The EU's Digital Watchdog Role
The European Union's enforcement action demonstrates the bloc's willingness to hold major technology platforms accountable. The Digital Services Act, which came into force in 2024, requires large platforms to provide transparency about their algorithms and content moderation practices.
As European officials have noted in the context of disinformation battles, strict enforcement of the DSA is essential to preventing harmful content from spreading. Brussels has emerged as the primary regulator capable of compelling change from American technology giants. The €120 million fine, while modest compared to X's valuation, established precedent for future enforcement.
The EU's approach contrasts with the largely hands-off regulatory environment in the United States, where platforms face fewer transparency requirements. This regulatory gap means European users may receive different protections than their American counterparts.
What Comes Next
Musk has pledged to update the algorithm code every four weeks with developer notes. Whether this commitment will be maintained remains to be seen. The 2023 release was never updated despite similar promises.
The viral claims about Israel-related censorship reflect broader anxieties about platform power and content moderation. While the specific 90 percent figure for Israel critics could not be verified, documented concerns about pro-Palestinian content suppression suggest the broader debate about algorithmic bias is far from settled.
For users concerned about platform transparency, the EU's regulatory framework offers the most robust protections currently available. The bloc's continued scrutiny of X's practices will likely yield more disclosures in the months ahead.
January 22, 2026