Concern about the trustworthiness of AI systems has closely followed the excitement about their potential to improve our lives. An attorney was duped when ChatGPT generated fictitious legal research, the California DMV removed self-driving taxis from San Francisco after one injured a pedestrian, and the WHO issued a warning about potential errors from using AI in healthcare. The Biden Administration recently issued a sweeping executive order on "safe, secure, and trustworthy" AI, and this week the EU reached agreement on an AI Act to regulate AI.
The potential for untrustworthy AI is a topic worthy of study and consideration by policy makers, but so is the opposite problem.
What if AI is too good?
Last week, Carrie Alexander, Renata Ivanek, and I published a paper in Frontiers in AI about how companies may avoid potentially beneficial AI precisely because it works well. This paper grew out of our work in the AI Institute for Next Generation Food Systems (AIFS) on socioeconomic, regulatory, and ethical AI for the food system.
Imagine a food processor deciding whether to adopt an AI technology to detect pathogens in food products, a farmer considering a computer vision system that detects ripe fruit but could also reveal fecal contamination, or a company evaluating an AI system to detect disease in workers or livestock.
If these entities learn from an AI system about the presence of pathogens, contamination, or disease, and if they do not mitigate effectively after receiving that information, then they may be held liable for damages. However, if they had not used the AI technology, then they would not have had the information required to prevent or mitigate damage, and they would not be liable.
It may be financially safer for the company not to know. Legal counsel may advise them not to use the technology so as to reduce their liability.
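To make that incentive concrete, here is a minimal back-of-the-envelope sketch of the adoption decision under a stylized "no knowledge, no liability" rule. All of the numbers and variable names are hypothetical, chosen only to illustrate the logic described above.

```python
# Hypothetical numbers to illustrate the "safer not to know" incentive.
# Assumption: liability attaches only when the firm knew about the hazard
# and failed to mitigate effectively.

p_hazard = 0.02                   # chance a batch carries a pathogen
mitigation_cost = 500_000         # recall/cleanup cost once a hazard is detected
liability_if_known = 2_000_000    # damages if the firm knew but mitigation failed
p_poor_mitigation = 0.10          # chance mitigation falls short despite best efforts
tech_cost = 50_000                # annual cost of the AI detection system

# Adopt: the firm learns about every hazard, pays to mitigate, and bears
# liability in the cases where mitigation falls short.
cost_adopt = tech_cost + p_hazard * (
    mitigation_cost + p_poor_mitigation * liability_if_known
)

# Don't adopt: the firm never learns about the hazard, so under this
# stylized liability rule it bears no mitigation or litigation cost.
cost_no_adopt = 0.0

print(f"Expected cost if adopted:     ${cost_adopt:,.0f}")    # ~$64,000 per year
print(f"Expected cost if not adopted: ${cost_no_adopt:,.0f}")  # $0
# With these made-up numbers, ignorance is the cheaper private choice,
# even though detection is the better outcome for society.
```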
AI technologies that assess risks, such as in the examples above, are typically not designed to make production more efficient or to reduce costs, although some may have that effect. They are designed primarily to precisely map the timing and location of increased risks to workers and customers so that mitigation steps can be considered.
This is an example of a positive externality. A risk-assessment AI may impose mitigation and litigation costs on the company that adopts it, but its potential to identify pathogens, contamination, or disease provides a benefit to society. The standard economic solution to this dilemma would be to subsidize adoption of the AI technology based on the expected mitigation and litigation costs to the firm.
However, two features of this setting complicate that simple recommendation.
First, these technologies are evolving rapidly. They will become much more effective over time, so setting the right subsidy level today is practically impossible. Second, a company may face existential risk from adopting these technologies; a subsidy based on expected losses would not entice a company that faces bankruptcy in the event that the technology identifies something with exorbitant mitigation or litigation costs. Small companies may be particularly vulnerable, especially if none of their competitors adopt the technology.
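To see why a subsidy pegged to expected losses may still fall short, here is a small illustrative sketch of a firm with limited assets facing a rare but catastrophic finding. Again, every number is made up for illustration.

```python
# Hypothetical illustration of why an expected-loss subsidy can fail.
# A small firm with limited assets faces a rare, ruinous finding.

firm_assets = 5_000_000          # everything the firm owns
p_catastrophe = 0.001            # chance the AI uncovers a ruinous problem
catastrophe_cost = 50_000_000    # mitigation + litigation if it does
p_routine = 0.05                 # chance of an ordinary, manageable finding
routine_cost = 200_000

expected_loss = p_catastrophe * catastrophe_cost + p_routine * routine_cost
subsidy = expected_loss          # subsidy set equal to expected losses

print(f"Expected loss from adopting: ${expected_loss:,.0f}")  # $60,000
print(f"Subsidy offered:             ${subsidy:,.0f}")        # $60,000

# The subsidy offsets losses *on average*, but in the catastrophic state
# the firm owes far more than it owns:
shortfall = catastrophe_cost - (firm_assets + subsidy)
print(f"Shortfall in the bad state:  ${shortfall:,.0f}  -> bankruptcy")
# A risk-neutral calculation says "adopt"; a firm that cannot survive the
# worst case may still rationally decline, especially if its competitors
# stay ignorant and face no comparable exposure.
```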
In the paper, we explore some ideas for moving forward, including a temporary "on-ramp" to facilitate adoption through adjustments to regulation, liability costs, and resources for transitioning to new technologies. This approach requires working through a complex legal and regulatory web, but it may allow a smoother and earlier adoption of these technologies for the best interests of workers, consumers, and the public.
Methods
Carrie has interviewed 66 researchers and stakeholders from all areas of the food system as part of two ongoing AIFS projects, along with conducting several surveys and a focus group. The starting point and methodological foundation for this work came from bioethics research on transforming organizational culture from a culture of compliance to a culture of trustworthiness in the development and use of technologies that require, and are accountable to, the public trust. The main purpose of these methods is to explore what “trustworthy” or “responsible” AI means to those creating or using it, and how food system stakeholders decide whether AI technologies are trustworthy, reliable, or relevant enough to adopt.
The scenarios we address in this paper were mentioned by a small number of food industry and researcher participants. Unrecorded follow-up meetings with researchers and legal professionals provided clarification and additional context.
Citation: Alexander, C., Smith, A., and Ivanek, R. (2023). Safer Not to Know? Shaping Liability Law and Policy to Incentivize Adoption of Predictive AI Technologies in the Food System. Frontiers in Artificial Intelligence, 6: 1-8.
This sounds like an issue with the rules not adapting to a new situation. Overall, while not perfect, the food system in the US at least seems pretty safe. If you suddenly found out tomorrow that some significant fraction of the food being produced was being flagged for some kind of "contamination", that sounds like we don't actually understand what constitutes a risky contaminant. If the presence of some particular bacteria almost never results in illness, then flagging it doesn't seem like a good use of resources.
If we don't think the rules can be changed in response to a changed understanding of the risk profile, then I think I'd agree that yes, it is better not to know, if all that knowing does for us is force us to spend billions to mitigate risks that were going to harm nearly no one.