Some states are experimenting with algorithmic tools to determine which individuals accused of a crime should be kept in pretrial detention.  Big Tech thinks that’s a bad idea, according to a new report released by the Partnership on AI (PAI).

Big Tech companies like Amazon, Facebook, Google and T-Mobile are part of PAI, as is the Silicon Valley-funded think tank the Center for Democracy & Technology (CDT).

The report discusses what it describes as “serious shortcomings” with pretrial risk assessment tools (also known as public safety assessment tools, or PSAs), which use data points like prior felony or violent convictions and prior failures to appear in court to determine whether an individual should be detained pretrial.
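For readers unfamiliar with how such tools are typically structured, the sketch below shows the general shape of a points-based risk score in Python. It is a toy: the factors, weights and cut-offs are made up for illustration and do not reproduce the formula of any real tool.

```python
# Toy illustration only: NOT the formula of any real pretrial risk tool.
# Shows the general shape of a points-based assessment, where a handful of
# weighted factors are summed into a score and mapped to a recommendation.
from dataclasses import dataclass

@dataclass
class Defendant:
    prior_violent_convictions: int
    prior_failures_to_appear: int
    has_pending_charge: bool

def toy_risk_score(d: Defendant) -> int:
    """Hypothetical weights and caps, chosen purely for illustration."""
    score = 2 * min(d.prior_violent_convictions, 3)   # capped contribution
    score += min(d.prior_failures_to_appear, 2)       # capped contribution
    score += 1 if d.has_pending_charge else 0
    return score

def toy_recommendation(score: int) -> str:
    """Map the score to a coarse recommendation band."""
    if score <= 2:
        return "release"
    if score <= 5:
        return "release with supervision"
    return "refer to judge for possible detention"

print(toy_recommendation(toy_risk_score(
    Defendant(prior_violent_convictions=1,
              prior_failures_to_appear=0,
              has_pending_charge=True))))
```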

Arnold Ventures, a philanthropy dedicated to a variety of social justice causes — including criminal justice reform — launched its own PSA last fall, and 40 jurisdictions across the U.S. are using it with mixed results. While some criminal justice reform activists see PSAs as a way to fix the economic injustice of a money bail system, civil rights groups worry that flawed criminal justice data negatively impacts PSAs.

Flawed data is a big theme in PAI’s report. For example, African Americans are more likely than whites to be incarcerated or wrongly convicted, which skews the criminal justice data these tools are trained on. As a result, PSAs could exacerbate racial bias and injustice.

“Since accuracy is focused narrowly on how the tool performs on data reserved from the original data set, it does not address issues that might undermine the reasonableness of the dataset itself,” PAI states in the report.

Furthermore, PAI argues, algorithms often fail to reflect real-world nuance. It’s one thing to use an algorithm to rank search results; it’s quite another to rely on one to decide who should be kept in jail.

“If risk assessments purport to measure how likely an individual is to fail to appear or to be the subject of a future arrest, then it should be the case that the scores produced in fact reflect the relevant likelihoods,” PAI explains. “Validity takes into consideration the broader context around how the data was collected and what kind of inference is being drawn. A tool might not be valid because the data that was used to develop it does not properly reflect what is happening in the real world (due to measurement error, sampling error, improper proxy variables, failure to calibrate probabilities or other issues).”
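PAI’s point about calibration can be made concrete with a short sketch. The Python below is purely illustrative, not code from the report: the data frame and its score, outcome and group columns are hypothetical, and the score column is assumed to be a predicted probability of failure to appear. It bins predicted risk and compares it with observed outcome rates within each group, which is one way measurement or sampling problems surface in practice.

```python
# Illustrative sketch only: checks whether predicted risk scores are calibrated,
# i.e., whether predicted probabilities roughly match observed outcome rates,
# broken out by group. Column names (score, outcome, group) are hypothetical.
import pandas as pd

def calibration_by_group(df: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    """Mean predicted risk vs. observed outcome rate per score bin, per group."""
    df = df.copy()
    df["bin"] = pd.qcut(df["score"], q=n_bins, duplicates="drop")
    return (
        df.groupby(["group", "bin"], observed=True)
          .agg(mean_predicted=("score", "mean"),
               observed_rate=("outcome", "mean"),
               count=("outcome", "size"))
          .reset_index()
    )

# Usage with hypothetical data:
# report = calibration_by_group(pd.DataFrame({"score": scores,
#                                             "outcome": outcomes,
#                                             "group": groups}))
# print(report)
```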

PAI concludes that there isn’t enough research and good data to determine whether PSAs perpetuate injustice and racism or remedy current harms in the system.

But the report misses the mark, in the opinion of some AI experts and PSA researchers. Will Rinehart, director of technology and innovation policy at the American Action Forum, told InsideSources that PAI’s report can be misleading because it doesn’t define what it means by “bias” and “fairness.”

“One professor noted there are 21 different versions of fairness when we talk about statistical inferences,” he said.
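To give a sense of why the definition matters (this example comes from the broader fairness literature, not from the report or the interview), the sketch below computes two widely cited metrics: a demographic-parity gap and a true-positive-rate gap. A tool can look fair on one and unfair on the other, which is why an undefined claim of “fairness” is hard to evaluate.

```python
# Illustrative sketch only: two common, often-conflicting fairness metrics.
# y_pred = the tool's binary recommendation (1 = flag as high risk),
# y_true = the observed outcome, group = a protected attribute.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference between groups in the rate of high-risk flags."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def true_positive_rate_gap(y_true, y_pred, group):
    """Largest difference between groups in flag rates among actual positives."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Hypothetical usage:
# print(demographic_parity_gap(y_pred, group))
# print(true_positive_rate_gap(y_true, y_pred, group))
```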

Furthermore, judges, not the tools, still make the final decision on whether an individual is detained pretrial, so the report’s level of concern over PSAs may be somewhat exaggerated.

“We really don’t know how much they budge judges,” Rinehart said of PSAs. “We have a sense that judges have various versions of bias. That’s the key thing I find is missing in these algorithms. An algorithm could change your thoughts or sway you to choose one thing or another, but how long does that last? I think the biggest question for me, is, what’s the time impact? Is this something that affects judges now but not in six months?”

According to a December 2018 study of a PSA in action in Broward County, Florida, conducted by Bo Cowgill, a research professor at Columbia Business School, “The algorithmic guidance does affect pretrial bail decisions. The effects of crossing a threshold differ by race. What this implies about judicial bias is not clear. The race-related effects in this paper may plausibly be driven by a variety of taste-based or statistical mechanisms of discrimination. Future work should shed more light on this question.”

An August 2017 study conducted by Megan Stevenson, an assistant professor of law at George Mason University, found “there is virtually no evidence on how use of this ‘evidence-based’ tool affects key outcomes such as incarceration rates, crime, or racial disparities,” and that “the research discussing what ‘should’ happen as a result of risk assessment is hypothetical and largely ignores the complexities of implementation.”

Stevenson found “neither the dramatic efficiency gains predicted by risk assessment’s champions, nor the increase in racial disparities predicted by its critics” in Kentucky, and that PSAs did not dramatically alter judges’ decision-making over time.

Despite what he considers hyped arguments in the PAI report, Rinehart said the report brought up good points about the need for more transparency, accountability and data in the PSA space.

“They’re making valid points and the recommendations are pretty solid,” he said. “It’s really really hard to create a good algorithm, but at a practical level, creating an algorithm is always going to be involved in tradeoffs, so the question becomes, what tradeoffs are you willing to make? And I don’t think they spend enough time talking about that. There is this big question about how effective these algorithms are, and I wish this group would do more on that and actually look at the data on this. There’s a lot more analysis that needs to be done here.”
