Guest post: AI surveillance in prisons is a terrible idea, both technologically and ethically


SOURCE: GEEKWIRE.COM
OCT 09, 2021

University of Washington professor Emily M. Bender and UW graduate Rachael Tatman. (UW Photos)

Editor’s note: This is a guest post written by University of Washington professor Emily M. Bender and Rachael Tatman, a UW graduate, on the use of AI in prison settings.

The Thomson Reuters Foundation reported on Aug. 9 that a panel in the U.S. House of Representatives has asked the Department of Justice to explore using so-called “artificial intelligence” (AI) technology to monitor phone communications from incarcerated people, with the ostensible purpose of preventing violent crime and suicide.

This is not a hypothetical exercise: LEO Technologies, a company “built for cops by cops,” already offers automated surveillance of incarcerated persons’ phone calls with their loved ones as a service.

As linguists who study the development and application of speech recognition and other language technologies, including the ways in which they work (or don’t) with different varieties of language, we would like to state clearly and strongly that this is a terrible idea both technologically and ethically.

We are opposed to large-scale surveillance by any means, especially when used against vulnerable populations without their consent or ability to opt out. Even if such surveillance could be shown to be in the best interests of the incarcerated people and the communities to which they belong — which we do not believe that it can be — attempting to automate this process scales up the potential harms.

The primary supposed benefit of the technology to incarcerated people, suicide prevention, is not feasible with an approach “based on keywords and phrases” (as LEO Technologies describes its product). Even Facebook’s suicide prevention program, which has itself faced scrutiny from legal and ethics scholars, found keyword matching to be ineffective because it does not take context into account. Furthermore, humans frequently take the output of computer programs as “objective” and therefore make decisions based on faulty information without any sense that the information is faulty.
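To make the context problem concrete, consider a minimal sketch of a keyword matcher (the word list and matching logic here are our own hypothetical illustration, not LEO Technologies’ actual product): it flags a harmless idiom while missing a genuinely worrying message that happens to avoid the listed words.

```python
# Hypothetical keyword-based flagging, for illustration only;
# this is not LEO Technologies' actual word list or logic.
KEYWORDS = {"die", "kill", "suicide"}

def flag(message: str) -> bool:
    """Flag a message if any watch-list keyword appears in it."""
    words = {w.strip(".,!?'\"").lower() for w in message.split()}
    return bool(words & KEYWORDS)

# False positive: a harmless idiom trips the filter.
print(flag("That joke was so funny I could die laughing"))   # True

# False negative: a worrying message containing none of the listed
# words sails through.
print(flag("I don't see the point in being here anymore"))   # False
```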

And even if the ability to prevent suicide were concrete and demonstrable, which it is not, this approach would still come with massive potential for harm.

Automated transcription is a key part of these product offerings. The effectiveness of a speech recognition system depends on a close match between its training data and the input it receives in its deployment context. For most modern systems, this means that the further a person’s speech is from newscaster standard, the less reliably the system will transcribe their words.

Not only will such systems output unreliable information (while seeming highly objective), but they will also fail most often for the very people the U.S. justice system most often fails.

A 2020 study that included the Amazon service LEO Technologies uses for speech transcription corroborated earlier findings that the word error rate for speakers of African American English was roughly twice that for white speakers. Given that African Americans are imprisoned at roughly five times the rate of white Americans, these tools are deeply unsuited to this application and have the potential to widen already unacceptable racial disparities.
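For readers unfamiliar with the metric: word error rate is the number of word-level substitutions, insertions, and deletions needed to turn a system’s transcript into what was actually said, divided by the length of the reference transcript. Here is a minimal sketch of the standard computation (our own illustrative code, not the cited study’s evaluation pipeline):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (substitutions, insertions,
    deletions) divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table of edit distances between prefixes.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of five gives a WER of 0.2; two wrong words
# double it to 0.4, a transcript twice as far from what was said.
print(word_error_rate("we will call you tomorrow",
                      "we will fall you tomorrow"))  # 0.2
print(word_error_rate("we will call you tomorrow",
                      "he will fall you tomorrow"))  # 0.4
```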

This surveillance, which covers not just the incarcerated but also those they are speaking with, is an unnecessary violation of privacy. Adding so-called “AI” will only make it worse: The machines are incapable of even accurately transcribing the warm, comforting language of home and at the same time will give a false sheen of “objectivity” to inaccurate transcripts. Should those with incarcerated loved ones have to bear the burden of defending against accusations based on faulty transcripts of what they said? This invasion of privacy is especially galling given that incarcerated people and their families often have to pay exorbitant rates for the phone calls in the first place.

We urge Congress and the DOJ to abandon this path and to avoid incorporating automated prediction into our legal system. LEO Technologies claims to “shift the paradigm of law enforcement from reactive to predictive,” a paradigm that seems out of keeping with a justice system where guilt must be proved.

And, finally, we urge everyone concerned to remain highly skeptical of “AI” applications. This is especially true when they have real impacts on people’s lives, and even more so when those people are, like incarcerated persons, especially vulnerable.
