In an environment as complex as health care, it should come as no surprise that artificial intelligence (AI) technology and the machine learning market are still relatively early in their maturation process. Expecting the market to be farther along would be like expecting a toddler who can do single-digit addition to also do calculus; we’re just not there yet. Yet.
The authors of a recent STAT+ article entitled “A market failure is preventing efficient diffusion of health care AI software” make a case for why AI software adoption in health care remains limited, and what the industry can and should do to advance its implementation in a clinical decision support capacity.
To correct what they consider a “market failure,” the authors “offer a reimbursement framework and policy intervention” to better align AI software adoption with emerging best practices. Among their observations, the authors state that most AI solutions being implemented in hospitals and health systems today are of “questionable” quality and adopted de facto through existing electronic health record (EHR) systems, and they point to high per-unit economic costs as the cause of limited AI software adoption.
But, do these factors constitute a market failure? Or is the market functioning exactly as it should be?
And, if the EHR incentive program failed to achieve interoperability and led to adverse unintended consequences (points the authors themselves recognize and agree with), should we be applying a similar policy playbook to AI?
The answer to this last question: No, absolutely not.
No, AI Is Not A Market Failure, and Policy Mechanisms Won’t “Fix” It
To fuel AI’s adoption, the authors of the STAT+ article call for policy intervention and payment incentives. There are a few issues with this argument and their suggested approach to fix the situation.
First, the authors do not define what a “market failure” is, nor make the case that AI qualifies as one. One definition of market failure suggests an inefficient distribution of goods or services, often because the benefits that are created are not realized by the purchaser. A healthcare example of this is e-prescribing, a technology which doctors must adopt but whose benefits accrue largely to other stakeholders (including pharmacy, payers, and patients).
Second, while the authors break down the cost structures (fixed vs. variable) of the adoption and use of AI, they stop short of actually quantifying what the per-unit or per-instance costs of AI implementation really are. Nor do they quantify AI’s value or public benefit and compare it to those costs – which makes developing a reimbursement program effectively impossible.
Third, while having AI oversight and quality assurance is incredibly important – with many coalitions and public/private partnerships coming to fruition for just this reason – the authors don’t illustrate any harm created by the lack of AI adoption. (One reason, presumably, is that demonstrating and quantifying harm is nearly impossible at this stage of AI’s development in health care, and few examples documenting its benefits exist.)
Fourth, without assigning value to its implementation, the authors call for reimbursement mechanisms for the adoption and use of AI. This would be a continuation of “pay for effort and cost,” not payment for outcomes – the approach that already exists under our dominant fee-for-service payment mechanism. Such an approach has been tried and found wanting, for good reason: a payment system based on volume rewards volume, not outcomes.
Fifth, the authors don’t provide any use-case specification for how AI policy mandates would be rolled out. Would incentives only cover clinical decision support for certain conditions, to start? AI is so immature that the evidence to make the case for a specific use or capacity likely doesn’t exist yet.
The authors also make the case that, without a financial incentive program to spur adoption of AI, there will be a “digital divide,” with AI adoption and value limited to wealthier health systems with the resources and structure to take on such investments. But, is that such a bad thing?
Larger, wealthier systems generally have more financial flexibility to purchase innovative technology and invest in change management programs that, by nature, have uncertain outcomes. Some of these efforts will fail, especially when adopting as-yet untested and unproven (in terms of broad market adoption) technology such as AI; this is part of the broader process by which market forces determine which technologies have merit and which don’t, and the process by which the companies offering these solutions find product-market fit.
In other words, larger, wealthier systems can afford these types of failures; smaller systems cannot. The fact that there may be a “digital divide” is not inherently a bad thing if it allows for market feedback loops that reduce the risk of poor investments for systems that cannot afford it.
Should AI be treated any differently?
The Unintended Consequences of Federal Incentives: Learning from EHR Experience
Lastly, the authors argue for a large-scale set of financial incentives for health systems to adopt and use AI.
Unfortunately, providing federal incentives as a policy mechanism is not well-suited for newer technologies and business models that have yet to be proven. One can look to recent experience – which the STAT authors also point to – to witness the folly of such an endeavor.
The HITECH Act provided for $35 billion in federal incentives to spur physician and hospital adoption and ‘meaningful use’ of EHRs. To ensure program integrity and that the benefits of EHR adoption would be realized, policymakers directed the Office of the National Coordinator (ONC) to develop utilization requirements that physicians and hospitals would need to meet to receive the incentives. This put ONC in the position of predicting how doctors would use and create value from EHRs. Not surprisingly, their best guesses 10 years ago have not proven prescient. This is not a knock on ONC, but an acknowledgment that few of us can accurately predict the future, especially when it involves immature technology that is likely to evolve substantially in the coming years.
Finally, the STAT+ authors themselves acknowledge that an unintended consequence of the EHR Incentive Program (part of HITECH) was that “EHR vendors turned this windfall of taxpayer dollars into a barrier to entry” that in turn they use to promote their own AI solutions. They do not seem to contemplate that another federal incentive program may result in a windfall for AI vendors who erect their own barriers to entry.
Yet this is what the STAT+ authors suggest for an AI incentive program.
The reality is that, as new developments in the application of AI in health care occur and lessons are learned, the federal government is uniquely ill-suited to administer such an incentive program. It is too slow-moving to keep up with the pace of innovation in AI, and yet too big to fail. Navigating the inevitable market failures, new technology developments, and lessons learned is better left to individual AI companies and health systems.
Perhaps the best example of subsidized health IT adoption done right is e-prescribing. Federal incentives to promote e-prescribing adoption beginning in 2009 were a remarkable success; by 2010, 40% of doctors who had adopted e-prescribing did so in direct response to the program. The market – and competitive landscape – for e-prescribing grew in large part because e-prescribing was an established technology, standards were in place to ensure interoperability between doctors and pharmacies, an ecosystem and network infrastructure already existed, and studies had been done demonstrating the benefits.
For e-prescribing, the tech’s value was already proven. For AI, we are not there yet.
If Value Is There, The Market Will Find It. So What Role Should The Government Play?
As the EHR incentive program’s $35 billion failure reinforces, health IT adoption is not something that can, or should, be solved by a policy intervention alone – especially when a technology is this immature.
There may well be roles for the government to play. As an industry convener, it could bring industry, technology, and academic experts in to educate agencies and make standards recommendations that address the policy and technical issues AI developers and implementers face. As the nation’s largest payer (CMS), the government can encourage adoption once standards are established and use cases have proven value by tying incentives to reimbursement; alternatively, by increasing its own use of value-based payment systems, it can create the conditions under which health systems will naturally adopt AI that is proven to improve quality of care and outcomes.
Beyond this, the authors of the STAT+ article argue that the Joint Commission, a not-for-profit organization responsible for standards-setting and accreditation, has a role to play in the validation and monitoring of AI software. This is indeed a good idea, and a role well suited to a private and reputable organization.
If AI does deliver enough value, the market should, and will, find that value. But if not, the government shouldn’t be responsible for shepherding AI’s adoption through funding and payment mechanisms, especially not by using the previous HITECH incentive framework as a starting point.