Products and services that rely on machine learning—computer programs that constantly absorb new data and adapt their decisions in response—don’t always make ethical or accurate choices. They can cause investment losses, biased hiring decisions, or car accidents, for instance. And as such offerings proliferate across markets, the companies creating them face major new risks. Executives need to understand and mitigate the technology’s potential downside.
Risks of machine learning
Machine learning can go wrong in a number of ways. Because the systems make decisions based on probabilities, some errors are always possible. Their environments may evolve in unanticipated ways, creating disconnects between the data they were trained with and the data they’re currently fed. And their complexity can make it hard to determine whether or why they made a mistake.
A key question executives must answer is whether it’s better to allow smart offerings to continuously evolve or to “lock” their algorithms and periodically update them. In addition, every offering will need to be appropriately tested before and after rollout and regularly monitored to make sure it’s performing as intended.
Read the full version of this article on Harvard Business Review. The article was selected for HBR’s 10 Must Reads 2022.
by Boris Babic, I. Glenn Cohen, Theodoros Evgeniou, and Sara Gerke