As part of efforts to make government smarter, data and computer models are becoming increasingly important in policy decisions. In the criminal justice system, for example, computer models and analytic methods inform decisions ranging from how police officers patrol to whether individual defendants are kept in jail and for how long.

Predictive policing tools use data about past crimes and other factors to forecast where crime is more likely to happen, in some cases identifying individual city blocks for enhanced police patrol.
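
Approaches differ from vendor to vendor, but the basic idea can be illustrated with a deliberately simple, hypothetical sketch: count recent incidents per city block and flag the highest-count blocks for extra patrol. The block names and data below are invented for illustration and do not reflect any actual product.

```python
from collections import Counter

# Hypothetical incident log: (city block, reported offense) pairs.
# Real predictive policing systems use far richer data and methods;
# this naive count-and-rank sketch is only an illustration.
incidents = [
    ("Block-12", "burglary"), ("Block-12", "theft"), ("Block-07", "assault"),
    ("Block-12", "theft"), ("Block-03", "vandalism"), ("Block-07", "theft"),
]

# Rank blocks by recent incident counts and flag the top k for patrol.
counts = Counter(block for block, _ in incidents)
top_k = 2
hotspots = [block for block, _ in counts.most_common(top_k)]
print(hotspots)  # ['Block-12', 'Block-07']
```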

In the courts and corrections system, risk and needs assessment tools use variables about defendants, such as their ties to the community and their past interactions with police or the courts, to inform judgments about bail amounts, the sentence imposed after conviction, how they are held in prison, and their release into the community after they have served their time.
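
As a rough illustration of what such a tool might compute, consider a minimal points-style score: a weighted sum of defendant attributes compared against a cutoff. All factor names, weights, and thresholds below are invented for illustration and are not drawn from any real assessment instrument.

```python
# Hypothetical risk-score sketch: a weighted sum of factors with a cutoff.
# Every factor, weight, and threshold here is invented for illustration.
WEIGHTS = {
    "prior_arrests": 2.0,                  # more arrests -> higher score
    "age_at_first_offense_under_21": 1.5,  # earlier onset -> higher score
    "stable_employment": -1.0,             # protective factor
    "community_ties": -0.5,                # protective factor
}
HIGH_RISK_CUTOFF = 4.0

def risk_score(defendant: dict) -> float:
    """Sum the weighted factors recorded for this defendant."""
    return sum(WEIGHTS[f] * v for f, v in defendant.items() if f in WEIGHTS)

defendant = {"prior_arrests": 3, "stable_employment": 1, "community_ties": 1}
score = risk_score(defendant)
print(score, "->", "high risk" if score >= HIGH_RISK_CUTOFF else "lower risk")
```

With a transparent points model like this, “checking the math” is straightforward: the factors, weights, and cutoff can be published, examined, and cross-examined. The trouble described below arises when they cannot.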

Inside police departments, early intervention tools are used to identify officers who are likely to engage in misconduct, helping to maintain the integrity of the criminal justice system itself.

The use of decision models like this has not been without controversy. From one perspective, Americans live in an era of data, and that data should be applied to best effect to make government organizations as effective as possible. In the absence of decision tools and models like these, the same decisions would still need to be made, but they would be made by people, and human decisionmaking is not perfect: individual biases can skew the choices that are made.

From that perspective, the use of models and tools could help make criminal justice decisions both better and fairer. On the other hand, questions have been raised about using models that are essentially averages across groups of people to make decisions about individuals, and about whether the data going into these models “bakes in bias,” so that rather than increasing fairness they make unfairness more pervasive.

Given the complexities of the modern world, government does need tools like these to do its job well. But the concerns about how they work are legitimate.

Currently, such concerns are addressed through challenge in the courts or in policy debate, where affected citizens can question the decisions made using such tools and agencies can defend their use. That sort of challenge, putting the analytic tool “on the witness stand” so the prosecution and defense can cross-examine it, is the heart of the adversarial process that establishes truth in legal proceedings.

But in a recent workshop we held with court and other experts, they pointed out a problem: because these tools and models are often made by private companies, how they work is considered a trade secret that, if revealed, could hurt the company’s ability to make a profit. As a result, a key input to some justice system decisions becomes a “black box”: data goes in one side and an answer comes out the other, and it is difficult to challenge, or even understand, where that answer came from.

For some analytic approaches, such as those that rely on machine learning or certain types of data mining, there may not even be an explicit and readily understandable model to assess, whether or not the company providing it views it as proprietary.
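
To make that contrast concrete, here is a hedged sketch on synthetic data: a logistic regression exposes a small set of coefficients that an expert witness could walk a court through, while a random forest of hundreds of trees offers no comparably compact formula to examine. The data is random noise; nothing here models any real justice system tool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: 4 features, one binary outcome. Purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# An interpretable model: one weight per feature, inspectable in full.
linear = LogisticRegression().fit(X, y)
print("coefficients:", linear.coef_)  # the entire "formula" fits on one line

# A black-box model: hundreds of trees with no single human-readable equation.
forest = RandomForestClassifier(n_estimators=300).fit(X, y)
print("trees in ensemble:", len(forest.estimators_))
```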

Although the focus of our work has been on the criminal justice system and how such analyses are used to make decisions about the citizens and officers inside it, this “black box problem” is not limited to justice decisions. In a recent discussion about property taxes, a municipal official reported that his town, like many others, relies on proprietary commercial models to assess how much homes and other properties are worth for tax purposes, so there was no way for citizens to get more information about the math behind their property tax bills. If you are a homeowner, a black box model may thus help determine how much your mortgage payment is each month.
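
For illustration only, a toy version of such a valuation model might estimate a home’s value from the price per square foot of recent comparable sales. The sales figures and adjustment factor below are made up; real commercial models are far more elaborate, and their details are exactly what goes undisclosed.

```python
# Toy property-valuation sketch: price per square foot from comparable sales.
# All figures are invented; real commercial valuation models are proprietary
# and far more complex, which is precisely the transparency problem at issue.
comparable_sales = [  # (sale price in dollars, square feet)
    (310_000, 1_550),
    (295_000, 1_480),
    (342_000, 1_700),
]

price_per_sqft = sum(p / s for p, s in comparable_sales) / len(comparable_sales)

subject_sqft = 1_600
condition_adjustment = 0.97  # hypothetical downward tweak for condition
assessed_value = subject_sqft * price_per_sqft * condition_adjustment
print(f"assessed value: ${assessed_value:,.0f}")
```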

Since innovation in analytics is often done by commercial firms, and profiting from their knowledge and investments is why they do what they do, a desire to protect those investments is understandable. But confidence in government and in the decisions it makes demands that citizens be able to “check the math,” and that whether the results are right and fair can be litigated appropriately, whether the person affected is a criminal defendant being sentenced using a risk assessment tool, a police officer flagged by an early intervention system, or a taxpayer who wants to understand why his or her tax bill is different this year than last.

When models and analytic tools significantly inform governmental decisions that are so consequential for individuals, the commercial interests of companies cannot take precedence.

Sufficient transparency should be viewed as a condition for applying such tools to government decisions, since review and challenge are key not just to protecting the rights of affected citizens but also to ensuring that only the best tools are used by government and justice agencies.