As wearable AI devices go mainstream, a hidden layer of human moderation—stretching as far as Kenya—raises urgent questions about privacy, consent, and the true cost of “smart” technology.

The promise of smart glasses has always been seductive: hands-free photos, real-time assistance, seamless integration between the physical and digital worlds. But as adoption accelerates, a quieter, more unsettling reality is coming into focus—one that challenges the very idea of personal privacy in the age of artificial intelligence.
Recent reporting has revealed that thousands of workers, many based in Kenya, are involved in reviewing content captured by users of Meta’s smart glasses. These aren’t just harmless snapshots of landscapes or meals. They can include deeply personal, unfiltered glimpses into people’s private lives—moments recorded casually, often without a second thought, and sometimes without the awareness of those being filmed.
At the heart of the issue is how AI systems are trained and moderated. While companies market their devices as powered by advanced machine learning, these systems still rely heavily on human input. Every time a user asks their smart glasses to interpret an image or respond to a visual query, that data may pass through layers of review—both automated and human.
Most users have little idea this happens.
A Hidden Human Layer Behind AI
The workers reviewing this content are part of a growing global labor force that underpins the AI economy. Often employed through third-party contractors, they are tasked with labeling, categorizing, and sometimes manually assessing visual data to improve system accuracy.
In Kenya, this workforce has expanded rapidly in recent years, fueled by demand from major tech firms. For workers, the jobs offer income and digital employment opportunities. But they also come with psychological and ethical challenges.
Reviewers may be exposed to sensitive, disturbing, or intimate material—content that users never imagined would be seen by strangers halfway across the world. The emotional toll of such work has been documented in similar roles across content moderation, raising concerns about mental health support and labor conditions.
At the same time, their role reveals a fundamental truth: today’s AI is far from autonomous. It is deeply dependent on human eyes.
The Illusion of Private Technology
Smart glasses blur the boundary between public and private recording. Unlike a smartphone, which must be raised and aimed in plain view, these devices are built for frictionless capture. A voice command or subtle gesture can start recording, often in social settings where others have no idea they are being filmed.
This creates a dual layer of exposure. First, individuals in the vicinity may be recorded unknowingly. Second, that footage may be processed, stored, and potentially reviewed by external human workers.
The result is a chain of visibility that extends far beyond the original moment.
Privacy advocates argue that current consent norms are not equipped to handle this shift. Traditional expectations—like noticing a phone camera—are being replaced by more discreet, always-on devices. The social contract around recording is being rewritten in real time.
Corporate Responsibility Under Scrutiny
Meta, like other tech giants, maintains that human review is essential for improving AI systems and ensuring safety. The company has stated that it uses a combination of automated filtering and limited human oversight to refine its services.
However, critics question the transparency of these practices. How much data is reviewed? Under what circumstances? And are users adequately informed?
There is also the issue of data minimization—whether companies are collecting more information than necessary. As smart devices become more integrated into daily life, the volume of captured data grows rapidly, increasing the likelihood of unintended exposure.
Regulators in multiple regions have begun to take notice. The intersection of wearable technology, AI training, and cross-border labor raises complex legal questions, particularly around data protection and jurisdiction.
A Global System with Local Consequences
The outsourcing of AI-related work to countries like Kenya highlights broader economic dynamics. Tech companies benefit from lower labor costs, while workers gain access to digital jobs. Yet the imbalance of power remains stark.
Workers often operate under strict confidentiality agreements, limiting their ability to speak openly about their experiences. Meanwhile, users in other parts of the world remain largely unaware that their data may be passing through human hands in distant locations.
This disconnect raises fundamental questions about accountability in a globalized digital economy.
Rethinking Trust in the Age of Wearables
As smart glasses and similar devices become more common, the conversation around privacy is shifting from abstract concerns to tangible realities. The idea that “the device sees what you see” now carries a deeper implication: others might see it too.
For consumers, this moment calls for greater awareness. Understanding how devices handle data is no longer optional—it is essential.
For companies, it is a test of trust. Transparency, clear consent mechanisms, and robust safeguards will be critical in maintaining user confidence.
And for regulators, it represents a new frontier. Existing frameworks may need to evolve to address the unique challenges posed by always-on, AI-powered wearables.
The future of smart technology will not be defined solely by what it can do, but by how responsibly it does it.
