
Harvard Business Review: When Machine Learning Goes Off the Rails

Products and services that rely on machine learning—computer programs that constantly absorb new data and adapt their decisions in response—don’t always make ethical or accurate choices. Sometimes they cause investment losses, for instance, or biased hiring or car accidents. And as such offerings proliferate across markets, the companies creating them face major new risks. Executives need to understand and mitigate the technology’s potential downside.

Risks of machine learning

Machine learning can go wrong in a number of ways. Because the systems make decisions based on probabilities, some errors are always possible. Their environments may evolve in unanticipated ways, creating disconnects between the data they were trained with and the data they’re currently fed. And their complexity can make it hard to determine whether or why they made a mistake.
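
To make the training/serving disconnect concrete (this sketch is ours, not from the HBR article), a common safeguard is a statistical drift check that compares the distribution of a feature in live traffic against the distribution the model was trained on. The sketch below uses a two-sample Kolmogorov–Smirnov test from scipy; the feature data and the alert threshold are hypothetical.

```python
# Minimal data-drift check (an illustrative sketch, not from the article):
# compare a live feature's distribution against the one seen at training time.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # feature at training time
live_feature = rng.normal(loc=0.4, scale=1.0, size=5000)   # environment has shifted

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # alert threshold is an arbitrary choice for illustration
    print(f"possible drift: KS statistic={stat:.3f}, p={p_value:.1e}")
else:
    print("no significant drift detected")
```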

A key question executives must answer is whether it’s better to allow smart offerings to continuously evolve or to “lock” their algorithms and periodically update them. In addition, every offering will need to be appropriately tested before and after rollout and regularly monitored to make sure it’s performing as intended.
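
The lock-versus-evolve trade-off can also be sketched in a few lines (again an illustrative example, not the authors' method): a "locked" classifier frozen at rollout versus an otherwise identical one that keeps updating on production data, evaluated as the environment gradually shifts. The data generator, the shift size, and the scikit-learn model choice are all assumptions made for illustration.

```python
# Sketch of the "lock vs. continuously evolve" choice (hypothetical setup,
# not the authors' method), using scikit-learn's SGDClassifier.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def make_batch(n, shift=0.0):
    """Two-class data; `shift` moves the class boundary to mimic a drifting environment."""
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

X0, y0 = make_batch(1000)                           # data available before rollout
locked = SGDClassifier(random_state=0).fit(X0, y0)  # frozen at rollout
online = SGDClassifier(random_state=0).fit(X0, y0)  # keeps learning in production

for month in range(1, 4):                           # the environment drifts each "month"
    X, y = make_batch(1000, shift=0.2 * month)
    print(f"month {month}: locked={locked.score(X, y):.2f}, "
          f"online={online.score(X, y):.2f}")
    online.partial_fit(X, y)                        # online model adapts after scoring
```

In this toy setup the locked model's accuracy erodes as the environment shifts while the continuously updated one tracks the change; the article's point is that this adaptability carries its own risks, which is why testing and monitoring after rollout matter either way.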

Read the full version of this article on Harvard Business Review. The article was selected for HBR’s 10 Must Reads 2022.

by Boris Babic, I. Glenn Cohen, Theodoros Evgeniou, and Sara Gerke


Further articles

DSA Compliance

Digital Services Coordinators: Who are they?

Like any regulation, the success of the Digital Services Act (DSA) hinges not only on its wording but, above all, on its enforcement. To this end, the DSA establishes a detailed and robust enforcement framework: the European Commission is not the sole enforcer; instead, Digital Services Coordinators (DSCs) are assigned a critical role within Member States…

Best practices in trust & safety

What does it take to make your business LLM and GenAI proof?

Theodoros Evgeniou* (Tremau) and Max Spero* (Checkfor.ai). Arguably, the “person of the year” for 2023 has been AI. We have all been taken by surprise by the speed of innovation and the capabilities of Large Language Models (LLMs) and, more generally, generative AI (GenAI). At the same time, many, particularly at online platforms, raise questions about potential…

Join our community

Stay ahead of the curve – sign up to receive the latest policy and tech advice impacting your business.