Products and services that rely on machine learning (computer programs that constantly absorb new data and adapt their decisions in response) don't always make ethical or accurate choices. Sometimes they cause investment losses, for instance, or biased hiring or car accidents. And as such offerings proliferate across markets, the companies creating them face major new risks. Executives need to understand and mitigate the technology's potential downside.

Machine learning can go wrong in a number of ways. Because the systems make decisions based on probabilities, some errors are always possible. Their environments may evolve in unanticipated ways, creating disconnects between the data they were trained with and the data they're currently fed. And their complexity can make it hard to determine whether or why they made a mistake. A key question executives must answer is whether it's better to allow smart offerings to continuously evolve or to "lock" their algorithms and periodically update them. In addition, every offering will need to be appropriately tested before and after rollout and regularly monitored to make sure it's performing as intended.

What happens when machine learning leads to investment losses, biased hiring or lending, or car accidents? Should businesses allow their smart products and services to autonomously evolve, or should they "lock" their algorithms and periodically update them? If firms choose to do the latter, when and how often should those updates happen? And how should companies evaluate and mitigate the risks posed by those and other choices? Across the business world, as machine-learning-based artificial intelligence permeates more and more offerings and processes, executives and boards must be prepared to answer such questions. In this article, which draws on our work in health care law, ethics, regulation, and machine learning, we introduce key concepts for understanding and managing the potential downside of this advanced technology.

The big difference between machine learning and the digital technologies that preceded it is the ability to independently make increasingly complex decisions (such as which financial products to trade, how vehicles react to obstacles, and whether a patient has a disease) and to continuously adapt in response to new data. But these algorithms don't always work smoothly. They don't always make ethical or accurate choices. There are three fundamental reasons for this.

One is simply that the algorithms typically rely on the probability that someone will, say, default on a loan or have a disease. Because they make so many predictions, it's likely that some will be wrong, just because there's always a chance that they'll be off. The likelihood of errors depends on many factors, including the amount and quality of the data used to train the algorithms, the specific type of machine-learning method chosen (for example, deep learning, which uses complex mathematical models, versus classification trees that rely on decision rules), and whether the system uses only explainable algorithms (meaning humans can describe how they arrived at their decisions), which may limit its maximum accuracy.

Second, the environment in which machine learning operates may itself evolve or differ from what the algorithms were developed to face. While this can happen in many ways, two of the most frequent problems are concept drift and covariate shift. With the former, the relationship between the inputs the system uses and its outputs isn't stable over time or may be misspecified. Consider a machine-learning algorithm for stock trading. If it has been trained using data only from a period of low market volatility and high economic growth, it may not perform well when the economy enters a recession or experiences turmoil, say, during a crisis like the Covid-19 pandemic. As the market changes, the relationship between the inputs and the outputs (for example, between how leveraged a company is and its stock returns) also may change. Similar misalignment may happen with credit-scoring models at different points in the business cycle. In medicine, an example of concept drift is when a machine-learning-based diagnostic system that uses skin images as inputs to detect skin cancers fails to make correct diagnoses because the relationship between, say, the color of someone's skin (which may vary with race or sun exposure) and the diagnosis decision hasn't been adequately captured.
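The stock-trading example of concept drift can be made concrete with a small simulation. The sketch below is purely illustrative (it is not from the article): it invents a toy world in which a company's leverage predicts its return positively in a calm regime and negatively in a turbulent one, fits a simple threshold rule on the calm regime only, and shows how badly that rule does once the input-output relationship drifts.

```python
import random

random.seed(0)

def make_regime(n, slope):
    """Simulate (leverage, outcome) pairs. `slope` encodes the
    input-output relationship, which drifts between regimes."""
    data = []
    for _ in range(n):
        leverage = random.uniform(0.0, 1.0)
        signal = slope * (leverage - 0.5)          # drifting relationship
        outcome = 1 if signal + random.gauss(0, 0.1) > 0 else 0
        data.append((leverage, outcome))
    return data

def predict(leverage):
    """'Model' learned from the calm regime: high leverage -> 1."""
    return 1 if leverage > 0.5 else 0

def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)

# Regime A (calm market): higher leverage tends to mean outcome 1.
train = make_regime(2000, slope=+1.0)

# Regime B (turmoil): the same relationship has flipped sign.
drifted = make_regime(2000, slope=-1.0)

print(f"accuracy, training regime: {accuracy(train):.2f}")
print(f"accuracy, drifted regime:  {accuracy(drifted):.2f}")
```

The point of the sketch is that nothing about the inputs themselves looks unusual in the drifted regime; leverage is still drawn from the same range. Only the relationship between input and outcome has changed, which is exactly why concept drift is hard to catch without monitoring outcomes after deployment.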