A firestorm is brewing inside Meta over a secretive program that scrapes employee keystrokes, mouse movements, and screen captures. The program, known as the Model Capability Initiative (MCI), was quietly rolled out to teach Meta's artificial intelligence systems how humans perform everyday computer tasks. But instead of generating useful training data, it has ignited a full-blown rebellion among the workforce.
Meta employees, already rattled by mass layoffs and intensifying pressure to adopt AI tools, are now openly protesting what they see as a blatant invasion of privacy. An internal post from a Meta engineer, seen by nearly 20,000 coworkers, declared: “I don’t want my screen scraped because it feels like an invasion of my privacy. But zooming out, I don’t want to live in a world where humans — employees or otherwise — are exploited for their training data.” The post has become a rallying cry for those who feel that Meta leadership has crossed a red line.
What the MCI Program Actually Does
According to internal documents reviewed by Wired, the Model Capability Initiative uses software that records keystrokes, tracks cursor movements, and periodically captures screenshots from employees' computers while they use selected applications. Meta's chief technology officer, Andrew Bosworth, has described the data as “tightly controlled” and insists it will only be used to train AI models to perform agentic tasks — for example, AI that can book flights, send emails, or fill out forms on behalf of a user. But many employees remain unconvinced, arguing that even anonymized data can be deanonymized and that the collection itself sets a dangerous precedent.
The program is not voluntary. Employees who use certain tools — including internal coding platforms and productivity suites — are automatically enrolled. Opting out is reportedly difficult, requiring manager approval. Critics warn that the program blurs the line between legitimate AI training and mass surveillance, and that it could easily be repurposed to monitor employee performance, attendance, or even political leanings.
A Workforce at a Breaking Point
The upheaval surrounding MCI comes at a time when morale at Meta is at a historic low. Earlier this year, the company announced it would lay off 10% of its workforce — roughly 8,000 people — as part of Mark Zuckerberg’s “year of efficiency.” Survivors face an increasingly demanding environment: they are pushed to use AI coding assistants and to produce more with fewer resources, and their adoption of AI tools is now factored into performance reviews.
This “efficiency” push has created a culture of fear and resentment. One employee, speaking anonymously to Wired, described the atmosphere as “gloomy and anxious,” adding that the AI tracking initiative feels like “the final straw.” Another engineer wrote that “layoffs, budget cuts, years of efficiency and intensity — all of it contributed to a growing sense of dread. MCI is a microcosm for the AI movement. Yes, it’s just a small turn of the temperature knob, but it’s representative of the types of systems that people will be compelled to build.”
Open Acts of Defiance
The backlash has moved from private grumbling to public defiance. A petition calling for the immediate halt of the MCI program has been circulating since last week, garnering thousands of signatures. The petition states that “it should not be the norm that companies of any size are permitted to exploit their employees by nonconsensually extracting their data for the purposes of AI training.” Employees have even taken to posting flyers in cafeterias, bathrooms, and other common areas of Meta offices to advertise the petition.
This level of organized resistance is rare inside Big Tech, where employees typically fear retaliation. Yet MCI appears to have struck a collective nerve. Some workers have expressed solidarity with colleagues who may be more vulnerable — such as contractors and low-level staff who have even less power to object. The internal discourse suggests that a significant portion of Meta’s workforce views the initiative as a betrayal of the company’s stated values of transparency and respect.
The Ironic Blowback
Observers have noted the irony of Meta employees protesting surveillance. The company has been embroiled in privacy scandals for more than a decade, most famously the Cambridge Analytica incident, in which the data of 87 million Facebook users was harvested without consent for political advertising. Meta has paid billions in fines over privacy violations and continues to face lawsuits over its data-gathering practices. That the very people who helped build surveillance systems aimed at users now recoil when the same tools are turned on them is a stark illustration of the “surveillance boomerang” effect.
But the employee rebellion also reflects a deeper tension within Silicon Valley’s AI gold rush. While companies like Meta, Google, and Microsoft race to build ever-more-capable AI, they increasingly rely on vast troves of human-generated data. Whether that data comes from public websites, private messages, or employee terminals, the fundamental question remains: who owns the data, and who gets to benefit from it?
Broader Implications for Workplace AI
Meta is not alone in its appetite for employee data. A growing number of companies use productivity-tracking software that monitors keystrokes, takes screenshots, and analyzes email activity. Amazon has been criticized for its intense monitoring of warehouse workers. Microsoft tracks Office 365 usage patterns. And several startups now offer tools that claim to predict which employees are about to quit based on their digital behavior.
The difference at Meta is that the data is explicitly being used to train AI — which would then be deployed to automate tasks, potentially making some employees redundant. This creates a particularly bitter irony: employees are being asked to help build the very tools that could eventually replace them. As one Meta engineer put it, “We are basically training our own replacements.”
The MCI controversy also raises legal questions. Privacy laws in many jurisdictions — including Europe’s GDPR and California’s CCPA — generally require companies to inform employees about data collection and obtain consent. However, the laws are often ambiguous about “workplace monitoring,” especially when the stated purpose is R&D rather than performance evaluation. Meta has likely consulted its legal team, but any class-action lawsuit would test the boundaries of those laws.
Meanwhile, the internal turmoil could have external consequences. Investor confidence in Meta’s AI strategy is high, but if the company cannot maintain a stable and motivated workforce, its ambitious AI goals may stall. Mark Zuckerberg has bet the company on artificial intelligence — from the metaverse to AI agents — but internal sabotage, low morale, and high turnover could seriously hamper those projects.
For now, the petition continues to circulate, and the flyers remain on office walls. The battle over MCI is far from over, and it may well become a landmark case in the fight for worker privacy in the age of AI. What happens at Meta could set a precedent for how other tech giants treat their own employees when the line between innovation and exploitation grows thin.
Source: Futurism News