AI Note-Takers at Work: The Silent Threat to Privacy and Compliance

    AI-powered transcription tools are increasingly embedded in workplace routines. These products can join video conferences on platforms such as Zoom, Microsoft Teams, or Google Meet, record and transcribe conversations in real time, and synchronise with calendars and other applications. Although marketed as productivity enhancers, these tools raise significant data protection and AI governance risks for both employees and employers.

    Consider Otter.ai as a case study. According to Otter.ai’s documentation, the service can join online meetings as a participant and provide live transcription. It synchronises automatically with Microsoft Outlook and Google calendars and can begin recording without any action from the user. Crucially, Otter.ai places the responsibility for obtaining permission from the other participants on the account holder. This means that one individual may trigger the recording or transcription of a meeting without the knowledge or consent of the others. All recorded data is transferred to, stored, and processed on servers in the United States.

    The legal minefield beneath the convenience

    Several provisions of the General Data Protection Regulation (GDPR) are directly engaged, starting with the legal basis. Otter.ai’s operating model relies on one participant securing permission for all others. Under Articles 6 and 7 of the GDPR, this would likely fail to constitute valid consent: consent must be informed, specific, and freely given, requirements that cannot be satisfied by delegation to a single meeting participant. Guidance from supervisory authorities further stresses that, in the employment context, the imbalance of power generally renders employee consent invalid.

    Processing special categories of data presents a second concern. Meetings often involve trade union matters, HR issues, or health information, yet processing such data is prohibited under Article 9 of the GDPR unless a narrow exemption applies. Third, there is the question of transparency: Articles 13 and 14 of the GDPR require that data subjects be informed about the processing of their personal data. A “silent” transcription bot makes this impossible in practice.

    Fourth, international transfers pose substantial difficulties. All data is transmitted to the United States. Following the Court of Justice’s ruling in Schrems II (C-311/18), such transfers are lawful only where the recipient is certified under the EU–US Data Privacy Framework or where a transfer mechanism, such as standard contractual clauses, is combined with supplementary safeguards. Given the sensitivity of workplace discussions, reliance on standard contractual clauses alone may prove insufficient.

    Fifth, security requirements under Article 32 of the GDPR demand appropriate technical and organisational measures. Automatic synchronisation with calendars and meeting software grants Otter.ai broad access to organisational systems, access of which the IT department may be entirely unaware when individual users install such tools themselves. Compliance cannot be demonstrated where third-party AI tools access internal infrastructure without proper controls. Sixth, if meetings are recorded or transcribed without participants’ knowledge, this may constitute a personal data breach under Articles 33 and 34 of the GDPR, triggering obligations to notify the supervisory authority and, in some cases, the data subjects themselves.

    Litigation risks extend beyond Europe. In August 2025, a class action complaint was filed in the US District Court for the Northern District of California (Brewer v. Otter.ai, Inc., Case No. 5:25-cv-06911). The plaintiff alleges that Otter.ai records and transcribes conversations of non-users without their knowledge or consent, and uses this data to train its machine learning models. The court consolidated the claims on 22 October 2025, and the case remains in the early case-management phase. In the next stage, Otter.ai will need to respond to the consolidated complaint.

    The complaint states: “Otter does not obtain prior consent, express or otherwise, of persons who attend meetings where the Otter Notetaker is enabled, prior to Otter recording, accessing, reading, and learning the contents of conversations.” Brewer further alleges that, as a non-Otter user, he had no reason to suspect his conversational data would be retained and processed by the company. Computerworld framed the lawsuit as part of a “wider reckoning” for enterprise AI note-taking applications. The legal claims include violations of the Electronic Communications Privacy Act, the California Invasion of Privacy Act, and the Computer Fraud and Abuse Act, as well as common law privacy torts. Although these statutes differ from the GDPR, the factual allegations mirror the same concerns: lack of valid legal basis, improper reliance on third-party consent, and opaque use of data for AI training.

    For EU workplaces, this case illustrates the litigation exposure that arises when consumer-grade AI tools are deployed without robust governance. Under Article 82 of the GDPR, any data subject who suffers material or non-material damage has the right to compensation. Silent transcription of workplace meetings could easily generate such claims.

    The EU AI Act adds another layer of compliance obligations. Under Annex III, AI systems used in employment and worker management, including those that monitor and evaluate performance and behaviour, are classified as high-risk. Otter.ai advertises “sentiment analytics” and other productivity features; deployed in a workplace setting, these would presumably fall within the high-risk category. Under Articles 9–15 of the AI Act, such systems will be subject to strict risk-management, transparency, and human-oversight obligations. Organisations that deploy them will bear compliance responsibilities even when the provider is established outside the EU.

    Beyond the legal analysis, several practical risks are apparent. Automatic transcription creates a record of every utterance; for employees, this is indistinguishable from constant monitoring. Research on workplace surveillance has demonstrated the chilling effects of such monitoring on trust, autonomy, and freedom of expression.

    Accuracy and bias present further concerns. Errors in AI transcription can distort meaning, particularly for non-native speakers or people with speech impairments. Studies by Sponholz et al. (2025) and Eftekhari et al. (2024) show how mistranscription introduces bias into research, an effect that can be equally damaging in workplace decision-making. Security vulnerabilities compound these problems: transcripts and recordings may be stored in multiple locations and, in some cases, be accessible to third parties. Once produced, such records may be repurposed or misused, including in litigation. Finally, accountability remains unresolved. Managing, editing, and validating transcripts requires additional resources and raises fundamental questions about who is responsible for the accuracy of the record, and who answers for the consequences when errors occur.

    What organisations must do now

    For EU organisations, a robust governance approach is essential. Organisations should rely on built-in enterprise tools only where a Data Protection Impact Assessment supports their use. They should adopt an internal policy defining which recording and transcription services may be used and under what conditions. Recording should proceed only with prior notice and explicit consent from all participants. This policy should extend to external meetings and seminars, where participants must be informed and given the opportunity to opt out.

    Consumer-grade transcription tools such as Otter.ai should be blocked from connecting to internal systems. Where transcription tools are authorised, access should be restricted, retention periods limited, and effective deletion ensured. There should be a clear prohibition on the use of external transcription services without prior approval from the Data Protection Officer and IT services. Organisations should consult worker representatives and trade unions before introducing such technologies, in line with data protection by design under Article 25 of the GDPR and the AI Act’s emphasis on human oversight and worker information.

    Otter.ai demonstrates how easily consumer-grade AI tools can enter workplaces unchecked. Its features promise efficiency, but deploying them may breach the GDPR, attract high-risk classification under the AI Act, and create significant organisational risks. The Brewer v. Otter.ai litigation shows that these risks are not speculative but are already materialising in court.

    As the European Data Protection Supervisor noted in its Orientations for Generative AI (2024), public and private entities must “place compliance and fundamental rights at the centre of digital innovation.” Transcription and note-taking tools are no exception.
