
Is Consent Enough to Make Audio Recordings Safe for Human Processing?


Recently, a number of high-profile tech firms have been found to be allowing human employees to access private conversations that consumers believed were processed only by AI.

Google Assistant, Siri, Cortana, and Amazon’s Alexa have all been placed in the limelight, and now Facebook has also come under fire for letting human employees access sensitive personal conversations for transcription purposes.

In the case of AI assistants, private conversations are primarily harvested from consumers who own and use the devices directly. However, there is growing evidence that these technologies are also capturing the conversations of bystanders, entirely without those individuals' knowledge.

Why is it a problem?

When audio is processed only by AI, there is little inherent risk that its contents will be used for unauthorized purposes, unless the developer has deliberately designed the system to monitor conversations covertly.

When a human employee listens to recordings, on the other hand, a different class of risk emerges. Unlike an algorithm, a human can understand and judge everything they hear, and could decide to use consumer information for their own ends.

In recent months, employees have come forward to disclose that they have been systematically exposed to consumer data considered sensitive under privacy legislation such as GDPR. This includes personally identifiable data such as names, email and home addresses, and telephone numbers, and potentially even special category data such as political affiliations or sexual orientation.

In the wrong hands, these details are dangerous because they can be used for hacking and other malicious activity. Spear phishing campaigns, for example, are known to leverage previously harvested information about victims to trick them into handing over even more data. This can lead to fraud and, in the worst cases, identity theft.

Microsoft, Apple, Amazon, Google, and Facebook have all temporarily suspended human review of audio recordings, an acknowledgment, it would seem, that they have been putting consumers at risk. That is a risk they likely understood all along but kept quiet about.

Those firms are now updating their policies to acknowledge human involvement in improving their AI. And, in the case of Apple and Amazon at least, consumers will be given the opportunity to opt out of having their conversations listened to by humans.

This seems like a weak remedy, because even if consumers had to opt in (a much stronger consent model), the raw material for malicious activity would still be systematically exposed to human employees with the agency to exploit it.

No knowledge, no consent

Voice assistants frequently wake and begin recording by accident. According to reports, around 15% of recordings made by Google Assistant are triggered accidentally. These recordings often capture nearby people who are completely unaware that they are being recorded.

This is problematic because, in many instances, it appears to violate the US federal Wiretap Act. That legislation requires at least one participant in a conversation to consent before the exchange is recorded.

In cases where a device owner engages in a conversation with a house guest, and the owner knows their voice assistant is in the room and might become activated, this one-party consent appears to have been granted. But when a device activates by accident and captures a conversation its owner is not a party to, no participant has consented at all.

Mutual consent necessary

Human employees have expressed concerns that their work listening to recordings is unethical. And it is becoming apparent that the practice may be creating a privacy risk for citizens who have no idea they are involved.

So far, private recordings have been accessed by humans without users' knowledge or consent. That purposeful concealment on the part of Big Tech would seem to warrant fines or other penalties. But even if consent were granted, would it be enough to protect everyone affected?

Privacy legislation to the rescue?

EU legislators and the UK's Information Commissioner's Office have already begun investigating whether allowing humans to access conversations is in breach of GDPR. Under GDPR, firms must have a legal basis, such as explicit consent, for collecting any personal data, and that data may be collected and processed only for specific, stated purposes.

Facebook's use of manipulative consent flows should also be at the forefront of the discussion. Questions remain over whether consumers have genuinely been given the information they need, and a real ability to refuse consent for data processing that could put them in harm's way.

In the US, where privacy rights are lagging, new regulations such as CCPA appear ill-prepared to curb this kind of invasive data processing. Unlike GDPR, CCPA merely gives consumers the right to opt out of having their data collected for commercial purposes.

As a result, prior consent is not required for Facebook and other firms to begin collecting and processing data in the first place. This leaves US citizens at considerable risk and is something that US legislators need to look at much more closely.

What’s more, CCPA allows firms to collect data for research and development purposes, meaning that the practice of allowing humans to access conversations will likely remain legal. This seems problematic, considering that the contents of private conversations could allow employees to engage in illegal activities such as hacking.

The situation is even more troubling where devices record conversations without the knowledge of the individuals affected. Legislators should consider requiring all recording devices and assistants to provide a noticeable warning when they activate, alerting everyone within earshot that a recording is about to begin.

Ray Walsh is a digital privacy advocate at ProPrivacy.
