Should a Computer Decide Your Sentence?

Amid the election of progressive district attorneys and passage of historic sentencing reforms, many U.S. cities are making strides toward decarceration. In an effort to reduce their prison populations while addressing sentencing bias, at least 20 states employ predictive risk assessment technology during judicial decision making. Through a tailored, statistics-based approach, cities hope to see a decline in recidivism and prison overcrowding. However, contracting with private companies to develop assessment tools comes with its own set of unexamined risks, particularly with the advent of machine learning.

In an October 2019 article for the Columbia Law Review, Andrea Nishi asserts that there are potential issues of bias and transparency that stem from the use of predictive algorithms in sentencing technology and offers solutions to combat those risks.

Nishi begins by explaining how machine learning has created a new breed of risk assessment algorithms. The process, largely driven by developers, starts with defining the outcome the final algorithm should predict (e.g., the likelihood that a defendant will commit another crime in the future), then moves through data collection, data cleaning, model selection, and model application, and finally translates the model’s quantitative output into a qualitative risk score. Judges and prosecutors use that risk score (“high,” “medium,” or “low”) during sentencing.
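To make the sequence concrete, here is a minimal sketch of that pipeline in Python. It is not any vendor’s actual tool: the data file, column names, the two-year rearrest definition of recidivism, and the score cutoffs are all illustrative assumptions.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Define the outcome the tool should predict. Here "recidivism" is
#    assumed to mean rearrest within two years of release, which is one
#    of many possible definitions and itself a developer's choice.
df = pd.read_csv("criminal_history.csv")           # hypothetical data set
df = df.dropna(subset=["age", "prior_arrests"])    # a data-cleaning choice

X = df[["age", "prior_arrests"]]                   # risk-factor selection
y = df["rearrested_within_2yr"]                    # outcome definition

# 2. Model selection and fitting. A simple logistic regression stands in
#    for whatever model a vendor might actually choose.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# 3. Translate the quantitative output into the qualitative score a judge
#    sees. The band cutoffs are another developer's choice.
def risk_label(prob: float) -> str:
    if prob < 0.33:
        return "low"
    if prob < 0.66:
        return "medium"
    return "high"

probabilities = model.predict_proba(X_test)[:, 1]
labels = [risk_label(p) for p in probabilities]
```

Every numbered step above embeds a discretionary choice, which is precisely the influence Nishi attributes to private developers in what follows.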

Nishi presents several risks associated with using machine learning for sentencing. Decisions that technology companies make during development can drastically influence a tool’s output, and private actors are not held to the same level of accountability as government entities. A private developer can shape an algorithm’s outcomes by tweaking how a data set is selected and cleaned, how recidivism is defined, and which risk factors are chosen. A conflict of interest could also arise if a private developer has any stake in the outcome of a case. Additionally, while government entities have transparency requirements, private companies can invoke “trade secret” protections to avoid disclosing how their tools function to interested parties.
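A small illustration of how one such definitional choice matters: the same hypothetical records produce very different base rates depending on whether “recidivism” means rearrest or reconviction, and a model trained on one definition will score defendants differently than a model trained on the other. The records below are invented.

```python
import pandas as pd

# The same five hypothetical people under two defensible outcome definitions.
records = pd.DataFrame({
    "rearrested_within_2yr":  [1, 1, 0, 1, 0],
    "reconvicted_within_2yr": [0, 1, 0, 0, 0],
})

# 60% vs. 20%: the "recidivism rate" a model learns depends entirely on
# which definition the developer picked.
print(f"base rate, rearrest definition:     {records['rearrested_within_2yr'].mean():.0%}")
print(f"base rate, reconviction definition: {records['reconvicted_within_2yr'].mean():.0%}")
```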

In addition to subjective decision-making on the part of developers, the technological and legal opacity of assessment tools can limit judicial understanding of risk scores, undermining how effectively and accurately the tools are applied. Because machine learning is complex and often treated as a “black box,” judges may lack the expertise to understand how an algorithm functions or how to interpret its results. Without knowing the right questions to ask or the right metrics to emphasize, judges may read more precision into the algorithm’s simplified risk labels than those labels can support.
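One way to see the oversimplification is to look at what a three-band label hides. In the hypothetical scheme below (cutoffs invented for illustration), two defendants whose estimated probabilities differ by more than thirty points receive the same label, while two nearly identical probabilities straddling a cutoff receive different ones.

```python
def risk_label(prob: float) -> str:
    # Cutoffs invented for illustration; real tools choose their own.
    if prob < 0.33:
        return "low"
    if prob < 0.66:
        return "medium"
    return "high"

# 0.34 and 0.65 share a label despite being 31 points apart, while 0.655
# and 0.67 differ by half a point but land in different bands.
for prob in (0.34, 0.65, 0.655, 0.67):
    print(f"p = {prob:.3f} -> {risk_label(prob)}")
```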

Risk assessment tools may also provide a false sense of objectivity. Because outcomes are generated from data, judges may assume they are accurate and objective when, in reality, much of the tool was shaped by the developer’s decisions, which precludes true objectivity. The opaque process of developing risk assessment tools, paired with a lack of oversight during development, leads to what Nishi describes as an “accountability gap.”

To address the accountability gap, Nishi suggests that laws permitting the use of risk assessment technology should come with strings attached to ensure fairness and accountability. To accomplish this, state statutes can mandate that developers report on tool outcomes, including false positives and negatives, error rates, and discrepancies in accuracy across different populations. Laws can also require that sentencing commissions have a say in how data sets are identified, how recidivism is defined, and what the minimum acceptable error rate for recidivism risk prediction should be.
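A rough sketch of what such a statutory report might compute, with invented numbers and a generic “group” column standing in for whatever populations a statute designates:

```python
import pandas as pd

# Invented predictions and outcomes for two hypothetical populations.
report = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "predicted": [1, 0, 1, 1, 1, 0],   # 1 = flagged high risk
    "actual":    [1, 0, 0, 0, 1, 1],   # 1 = actually recidivated
})

for group, rows in report.groupby("group"):
    false_pos = ((rows.predicted == 1) & (rows.actual == 0)).sum()
    false_neg = ((rows.predicted == 0) & (rows.actual == 1)).sum()
    error_rate = (rows.predicted != rows.actual).mean()
    print(f"group {group}: FP={false_pos}, FN={false_neg}, "
          f"error rate={error_rate:.0%}")
```

Here group A has a 33% error rate and group B a 67% error rate, so even a tool with acceptable overall accuracy can concentrate its mistakes on one population, which is exactly the discrepancy this kind of mandated reporting would surface.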

Overall, risk assessment algorithms are a step toward reformers’ goal of decarceration, and they can be applied to other stages of the judicial process, such as bail determination. Well-built algorithms promise to reduce the indiscriminate use of draconian sentences that contribute to already historically high incarceration rates. However, given the large effect these recommendations can have on defendants’ lives, the tools should be implemented with caution and vigilance, prioritize accountability and transparency, and be continuously evaluated and improved.


Article source: Nishi, Andrea. 2019. “Privatizing Sentencing: A Delegation Framework for Recidivism Risk Assessment.” Columbia Law Review 119, no. 6: 1671-1710. https://columbialawreview.org/content/privatizing-sentencing-a-delegation-framework-for-recidivism-risk-assessment/.

