Big Brother Goes Corporate: How AI Monitors Employee Messages


Major Corporations Employ AI for Employee Message Monitoring

Several prominent companies, including Walmart, Delta, Chevron, and Starbucks, are using artificial intelligence (AI) to monitor employee communications. Aware, an AI firm specializing in analyzing such messages, revealed that these corporations have engaged its services to gauge sentiment and identify toxic behavior.

According to Aware, its data repository encompasses approximately 20 billion individual interactions from more than 3 million employees. The AI technology is deployed to gauge employee sentiment in real time, offering insights that go beyond traditional annual or biannual surveys.

Jeff Schumann, co-founder and CEO of Aware, explains that their AI models enable companies to assess how employees from different demographics or geographical locations respond to various corporate initiatives or marketing strategies. Moreover, the AI can identify instances of bullying, harassment, discrimination, and other concerning behaviors within the messages.

Schumann clarifies that Aware’s analytics tool doesn’t flag individual employee names, ensuring privacy. However, in extreme cases predetermined by the client, their separate eDiscovery tool can identify specific employees.

Several companies remained tight-lipped when asked about their use of Aware's services. AstraZeneca acknowledged using the eDiscovery product but said it does not use the analytics tool to monitor sentiment or toxicity. Delta, by contrast, confirmed using both Aware's analytics and eDiscovery tools to monitor trends and sentiment, citing their usefulness for gathering employee feedback and for legal records retention.

Critics, including Jutta Williams, co-founder of Humane Intelligence, express concerns about the potential implications of AI-driven employee surveillance. Williams warns against the risk of turning normal thoughts into punishable offenses, likening the approach to treating employees as mere commodities.

The use of AI in employee surveillance represents a burgeoning niche within the broader AI market, which has witnessed exponential growth in recent years. Aware, despite its relatively lean operation compared to larger players like OpenAI and Anthropic, has experienced a significant increase in revenue.

Schumann, reflecting on the company's trajectory, notes the extensive volume of messages processed daily by its AI technology, emphasizing its ability to track real-time sentiment and toxicity among employees. Aware's machine-learning models are continually trained on data from its enterprise clients, enabling them to identify abnormal patterns of behavior.

However, concerns regarding privacy and fairness persist. Critics argue that even aggregated or anonymized data can pose privacy risks, and AI’s capability to infer personal identifiers based on language and context raises further concerns.

Despite assurances from companies like Aware, questions remain regarding employee recourse in cases where AI-generated insights lead to disciplinary actions. The lack of transparency and explainability in AI decision-making processes complicates the issue, leaving employees in a vulnerable position.

In response, Schumann maintains that Aware’s AI models do not make decisions regarding employee discipline. Instead, they provide contextual information to investigation teams, allowing them to determine appropriate actions in accordance with company policies and legal requirements.

Source: CNBC

