As part of One Wharton Week, professor of criminology and statistics Richard Berk told MBA students how analytics can produce biased results and how decision-makers can correct for them.

Humans can show bias, but mathematical algorithms supposedly do not. So why did machine learning red-line some minority areas when Amazon first rolled out same-day delivery in Boston? Why has Google been found to show lower-paying jobs to women?

Where does that bias come from and how can businesses correct for it?

On February 13, Richard Berk, Professor of Statistics and Criminology in the Penn School of Arts and Sciences, addressed these questions in front of a packed room of MBA students. Called “Machine-Learned ‘Bias,’” the lunch-time event was offered during One Wharton Week and co-sponsored by the Wharton Analytics Club.

Prof. Berk’s work focuses primarily on criminal justice applications. He gave the example that men commit most violent crimes, so if you develop an algorithm to forecast risk, the forecast risk will be greater for men. If men are therefore more likely to be sentenced to prison, is this result biased, or is it accurate and fair? The answer, according to Berk, depends on how you define justice.

Return on Equality’s Pradya Nandini, WG’18, introducing the session on machine learning.

There are three stages in algorithmic application:

  • The data on which an algorithm is trained.
  • The code that is written.
  • The decisions that humans make on its application.

In general, Prof. Berk was reluctant to offer answers, which he said are the province of leaders in business and government, not statisticians. He advised, “Ask stakeholders what to optimize for when you are building an algorithm. Build values into it. Write code that is honest, that does what it says it will do.”

Packed house at the machine learning session of One Wharton Week

Here are nine top takeaways from Berk’s lecture on how to use analytics and values to “unbias” algorithms.

1. “A good algorithm won’t introduce bias.”
If there is a problem, it starts with the data. For example, if Google matches job ads to applicants’ current salaries, and women applicants currently earn lower salaries, Google will show them lower-paying jobs, reflecting the input data.

2. “To reduce bias, look carefully at how the data came to be and how it was gathered and quantified.”
If you start with biased data, the bias will carry through into the algorithm. You must know how the data was generated. If you can demonstrate bias in the data, you can develop algorithms that correct for it.

3. “When you introduce fairness, it reduces accuracy.”
Algorithms predict outcomes, and an algorithm’s predictive accuracy drops when you require it to eliminate gender, race, or other differentials in its results. Fairness may be preferable to a more accurate result, but it is a tradeoff.
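
One way to see the tradeoff is the sketch below, which uses synthetic data and a deliberately crude fairness intervention (simply dropping a protected attribute the outcome correlates with); it is an illustration under those assumptions, not an example from the lecture.

```python
# A minimal sketch (synthetic data) of the fairness-accuracy tradeoff:
# removing a protected attribute that the outcome actually depends on
# lowers held-out predictive accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
protected = rng.integers(0, 2, size=n)        # e.g., a gender flag (hypothetical)
other = rng.normal(size=(n, 3))               # other predictors
# The outcome depends on both the protected attribute and the other predictors.
logit = 1.5 * protected + other @ np.array([1.0, -0.5, 0.8])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_full = np.column_stack([protected, other])  # uses the protected attribute
X_blind = other                               # crude "fair" version: attribute removed

for label, X in [("with attribute", X_full), ("attribute removed", X_blind)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{label}: held-out accuracy = {acc:.3f}")
```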

4. “An algorithm is a computational tool, not a model that describes how the world works.”
Business leaders must learn the language and then ask the right questions of those providing the algorithm. If it’s a model, ask one set of questions; if it’s an algorithm, ask another.

5. “Push as hard as you can for transparency when using proprietary algorithms and models.”
In principle, algorithms are more transparent than humans — the human mind is a black box. However, many algorithms used in business are proprietary. Companies that license them won’t let you see the code, but you can ask how it’s built.

6. “When you can’t see how an algorithm is built, test it.”
Run a forecasting contest between algorithms on the same data and compare the outcomes.
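
A minimal sketch of such a contest might look like the following; the two scikit-learn models and the synthetic data are stand-ins for whatever black-box and benchmark forecasts you actually have, not anything Berk referenced.

```python
# A "forecasting contest": fit competing models on the same training data
# and score their forecasts on the same held-out data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

contestants = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in contestants.items():
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]   # predicted probability of the outcome
    print(f"{name}: held-out AUC = {roc_auc_score(y_test, scores):.3f}")
```

In practice, the contestants would be the vendor’s proprietary forecasts and an in-house benchmark, scored on the same held-out records.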

7. “Algorithms make mistakes. The goal is to improve on current practice.”
Even a perfect algorithm will make mistakes, but most will improve on human practice. For example, algorithms in criminal justice are demonstrably more accurate and more fair than human decision makers.

8. “You can build social costs into algorithms.”
Different forecasting errors have different costs. Is the cost of a mistaken “no” higher than that of a mistaken “yes”? Decision makers can correct algorithms by applying human values, not just monetary ones.
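
One concrete way to build such costs in, sketched below with hypothetical cost numbers and synthetic data, is to move the decision threshold so that it minimizes expected cost rather than treating every error equally. The cost ratio itself is exactly the kind of value stakeholders, not statisticians, would supply.

```python
# A minimal sketch of asymmetric error costs: if a mistaken "no" (false
# negative) is judged 5x as costly as a mistaken "yes" (false positive),
# the cost-minimizing rule is to say "yes" whenever the predicted
# probability exceeds cost_fp / (cost_fp + cost_fn) = 1/6, not 0.5.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

cost_fp, cost_fn = 1.0, 5.0                     # stakeholder-supplied costs (assumed)
threshold = cost_fp / (cost_fp + cost_fn)

X, y = make_classification(n_samples=5000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]

for name, t in [("default 0.5", 0.5), (f"cost-weighted {threshold:.2f}", threshold)]:
    pred = (p >= t).astype(int)
    fp = np.sum((pred == 1) & (y_te == 0))      # mistaken "yes"
    fn = np.sum((pred == 0) & (y_te == 1))      # mistaken "no"
    total_cost = cost_fp * fp + cost_fn * fn
    print(f"{name}: false positives={fp}, false negatives={fn}, total cost={total_cost:.0f}")
```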

9. “Algorithms don’t lift you out of the human condition. They leave you in it.”
There are fundamentally unsolved moral questions that algorithms bring to the surface. Take self-driving cars, which must be programmed to decide, in an unavoidable crash, whether to spare drivers, passengers, or pedestrians from harm.

— Kelly Andrews

Posted: February 21, 2018
