Artificial intelligence is a highly complex technology that, once implemented, requires ongoing oversight to ensure it is doing what is expected of it and operating at optimal levels.
Healthcare provider organizations using AI technologies also need to make sure they’re getting the biggest bang for their buck. In other words, they need to optimize the AI so that the technologies are meeting the specific needs of their organizations.
We spoke with six artificial intelligence experts, each with extensive experience in healthcare deployments, who offered comprehensive advice on how CIOs and other health IT workers can optimize their AI systems and approaches to best work for their provider organizations.
Applying AI to the right problem
Optimizing AI depends on understanding what AI is capable of and applying it to the right problem, said Joe Petro, chief technology officer at Nuance Communications, a vendor of AI technology for medical image interpretation.
“There is a lot of hype out there, and, unfortunately, the claims are somewhat ridiculous,” he said. “To optimize AI, we all need to understand: the problem we are trying to solve; how AI can solve the problem; can an existing capability be augmented with AI; and, when AI is not helpful.”
“There is a lot of hype out there, and, unfortunately, the claims are somewhat ridiculous.”
Joe Petro, Nuance Communications
For example, is “traceability” important? AI has a well-known “black box” limitation – not every fact or piece of evidence that contributed to a decision or conclusion made by the neural net can be known.
“It is sometimes impossible to trace back through the bread crumb trail leading to the conclusion made by the neural net,” Petro explained. “Therefore, if traceability is a requirement of the solution, you may need to retreat to a more traditional computational methodology, which is not always a bad thing.”
Is the problem conditioned for AI?
Also, is the problem well-behaved and well-conditioned for AI? For example, he said, are there clear patterns in the solution to the problem that repeat, do not vary widely, and are essentially deterministic?
“For example, if you give the problem to a series of experts, will they all arrive at the same answer?” he posed. “If humans are given the same inputs and disagree on the answer, then AI may not be able to make sense of the data, and the neural nets may deliver results that do not agree with the opinions of certain experts. Rest assured that AI will find a pattern – the question is whether or not the pattern is repeatable and consistent.”
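Petro’s expert-agreement test can be made concrete with a chance-corrected agreement statistic such as Cohen’s kappa. The sketch below is illustrative only – plain Python with made-up radiologist labels, not anything from Nuance’s tooling – but it shows how an organization might check whether a problem is well-conditioned before training a model on it.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance.

    Values near 1.0 suggest a well-conditioned, learnable problem;
    values near 0 suggest even experts disagree, and AI may struggle.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two radiologists reading the same 8 studies:
rad1 = ["nodule", "clear", "nodule", "clear", "clear", "nodule", "clear", "clear"]
rad2 = ["nodule", "clear", "nodule", "clear", "nodule", "nodule", "clear", "clear"]
print(round(cohen_kappa(rad1, rad2), 2))  # → 0.75
```

A kappa this high suggests the labeling task is reasonably consistent; a value near zero would be the “experts disagree” warning sign Petro describes.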
In today’s world, the problems being solved by AI, especially in healthcare, are deliberately narrowly defined, which increases the accuracy and applicability of the technology, Petro explained. Choosing the right problem to solve and narrowing its scope is key to delivering a great outcome, he advised.
“Furthermore, training data needs to be readily available at the volume necessary to create dependable AI models that produce consistently verified results,” he added. “Unfortunately, sometimes there is no data available in the form that is required to train the neural nets. For example, in some cases, AI requires marked-up and annotated data. This kind of markup is sometimes not available.”
When a radiologist reads an image, they may or may not indicate exactly where in the image the diagnosis was made. Without that markup, training is sometimes impossible. When a CDI specialist or care coordinator reads through an entire case, they most likely will not indicate every piece of evidence that prompted a query back to a physician.
“Again, no data markup makes training sometimes impossible,” Petro stated. “Therefore, someone needs to go back over the data and potentially add the markup and annotations to train the initial models. Markup is not always necessary, but we need to realize that the data we need is not always available and may need to be expensively curated. The fact is that data is essentially the ‘new software.’ Without the right data, AI cannot produce the wanted results.”
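The markup problem Petro describes can be pictured with a small data check. The records below are hypothetical – the field names and spans are invented for illustration – but they show why unannotated data limits training: only records where a specialist marked the supporting span can train a model that must localize its evidence.

```python
# Hypothetical annotated radiology records: "markup" holds the character
# span in the report text that supports the label, when a specialist
# supplied one. Records without markup cannot train an evidence-
# localizing model and would need expensive curation.
reports = [
    {"id": "r1", "text": "2 cm nodule in the right upper lobe.",
     "label": "nodule", "markup": (5, 11)},   # annotated span
    {"id": "r2", "text": "Lungs are clear bilaterally.",
     "label": "clear", "markup": None},        # no markup supplied
    {"id": "r3", "text": "Small nodule adjacent to the pleura.",
     "label": "nodule", "markup": (6, 12)},
]

trainable = [r for r in reports if r["markup"] is not None]
coverage = len(trainable) / len(reports)
print(f"{len(trainable)} of {len(reports)} records usable "
      f"({coverage:.0%} markup coverage)")
```

Running an inventory like this before a project starts gives an early estimate of how much annotation work (the “expensive curation” Petro mentions) remains.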
Pick the right applications
Ken Kleinberg, practice lead, innovative technologies, at consulting firm Point-of-Care Partners, cautioned that AI is being promoted as being able to solve just about any problem that involves a decision.
“Many applications that were formerly addressed with proven rules-based or statistical approaches now are routinely being identified as AI targets,” he explained. “Given the many extra considerations for AI involving model selection, training, validation, the expertise required, etc., this may be overkill. In addition to ROI concerns, using AI may expose organizations to problems unique or common to AI that simpler or alternative approaches are less susceptible to.”
“AI should not be an expensive hammer looking for a simple nail.”
Ken Kleinberg, Point-of-Care Partners
Even basic machine learning approaches that may do little more than try out a bunch of different statistical models require some degree of expertise to use, he added.
“Considerations of which applications to pick for AI include how many possible variables are in play, known complexities and dependencies, data variability, historical knowledge, availability of content experts, transparency of decision requirements, liability concerns, and how often the system might need to be retrained and tested,” Kleinberg advised.
“Experience of the model builders and sophistication and investment with an AI platform also should be considered, but AI should not be an expensive hammer looking for a simple nail.”
For example, it may be that only a handful of known variables are key to deciding whether an intervention is needed for a patient suffering from a specific condition – if the patient has these specific triggers, they are going to be brought in.
“Why attempt to train a system on what is already known?” he said. “Sure, if the goal is to discover unknown nuances or dependencies, or deal with rare conditions, AI could be used with a broader set of variables. For most organizations, they will be safer to go with basic rules-based models where every aspect of the decision can be reviewed and modified as new knowledge is accumulated – especially if there are a manageable number of rules, up to a few hundred. That could be a better initial step than going directly to an AI model.”
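The rules-based alternative Kleinberg recommends can be sketched in a few lines. The rule names and thresholds below are invented for illustration, not clinical guidance; the point is that every trigger is explicit and can be reviewed or modified as new knowledge accumulates – something a trained model cannot offer as directly.

```python
# A minimal rules-based stand-in for an AI model: each trigger is
# explicit, auditable, and editable. Thresholds are illustrative only.
RULES = [
    ("hba1c_above_9",      lambda p: p.get("hba1c", 0) > 9.0),
    ("two_plus_er_visits", lambda p: p.get("er_visits_90d", 0) >= 2),
    ("missed_followups",   lambda p: p.get("missed_appointments", 0) >= 3),
]

def needs_intervention(patient):
    """Return the list of triggered rule names (empty list = no outreach)."""
    return [name for name, rule in RULES if rule(patient)]

patient = {"hba1c": 9.6, "er_visits_90d": 1, "missed_appointments": 4}
print(needs_intervention(patient))  # → ['hba1c_above_9', 'missed_followups']
```

When the rule count stays manageable – up to a few hundred, per Kleinberg – this kind of transparent list can be a better first step than an AI model.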
Get widespread input on optimization
In order to get the most out of an AI investment and optimize the technology for a specific healthcare provider organization, bring in members from across the organization – not just the IT team or clinical leadership, said Sanjeev Kumar, vice president of artificial intelligence and data engineering at HMS (Healthcare Management Systems).
It’s important to invest time to understand – in detailed nuance – the full workflow from patient scheduling to check-in to clinical workflow to discharge and billing, he said.
“Each member of the team will be able to communicate how AI technology will impact the patient experience from their perspective and how these new, rich insights will impact the everyday office workflow.”
Sanjeev Kumar, HMS
“Each member of the team will be able to communicate how AI technology will impact the patient experience from their perspective and how these new, rich insights will impact the everyday office workflow,” Kumar said. “Without this insight at the beginning of the implementation, you risk investing a significant amount of money in technology that is not used by the staff, that negatively impacts the patient experience or, worst of all, gives inappropriate insights.”
Incorporating staff early on may require additional investment in manpower, but it will result in an output that can be used effectively throughout the organization, he added.
Handling data carefully
On another technology optimization front, healthcare provider organizations have to be very careful with their data.
“Data is precious, and healthcare data is at the outer extreme of sensitive information,” said Petro of Nuance Communications. “In the process of optimizing AI technology, we need to make sure the AI vendor is a trusted partner that acts as a caretaker of the PHI. We have all heard the horror stories in the press about the misuse of data. This is unacceptable.”
Partnering with AI vendors that are legitimate custodians of the data and only use it within the limitations and constraints of the contract and HIPAA guidelines is table stakes, he added.
“Make sure to ask the hard questions,” he advised. “Ask about the use of the data, what is the PHI data flow, how does it move, where does it come to rest, who has access to it, what is it used for, and how long does the vendor keep it. Healthcare AI companies need to be experts in the area of data usage and the limitations around data usage. If a vendor wobbles in these areas, move on.”
Process and data variability
Another very important consideration for providers optimizing AI technology is the amount of variability in the processes and data they are working with, said Michael Neff, vice president of professional services at Recondo Technology.
“For example, clinical AI models created for a population of patients with similar ethnic backgrounds and a small range of ages are most likely simpler than the same models created for an ethnically diverse population,” he explained. “In the latter population, there will probably be a lot more ‘edge cases,’ which will either require more training data or will need to be excluded from the model.”
“A model trained with data sent from a specific payer may not be valid for other payers that a provider works with.”
Michael Neff, Recondo Technology
If the decision is made to exclude those cases, or if a model is built from a more cohesive data set, it will be very important to limit the use of the AI model to the cases where its predictions are valid, he continued.
“The same argument,” he said, “holds for business variability: A model trained with data sent from a specific payer may not be valid for other payers that a provider works with.”
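Neff’s payer-validity warning lends itself to a simple check: score the model’s predictions separately for each payer before trusting it across the board. The records below are hypothetical claim-denial predictions, invented for illustration.

```python
from collections import defaultdict

def accuracy_by_group(records, group_key="payer"):
    """Break a model's accuracy out by subgroup (here, payer) to see
    where its predictions remain valid and where they degrade."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        hits[g] += r["predicted"] == r["actual"]
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical claim-denial predictions scored against outcomes:
results = [
    {"payer": "A", "predicted": 1, "actual": 1},
    {"payer": "A", "predicted": 0, "actual": 0},
    {"payer": "A", "predicted": 1, "actual": 1},
    {"payer": "B", "predicted": 1, "actual": 0},
    {"payer": "B", "predicted": 0, "actual": 1},
]

for payer, acc in sorted(accuracy_by_group(results).items()):
    flag = "" if acc >= 0.8 else "  <- retrain or restrict use"
    print(f"payer {payer}: {acc:.0%}{flag}")
```

A per-group breakdown like this is how an organization would enforce Neff’s advice to limit the model’s use to the cases where its predictions are valid.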
Building an audit trail
When using AI approaches – and especially natural language processing – it is key to provide an audit trail to justify recommendations and findings, advised Dr. Elizabeth Marshall, associate director of clinical analytics at Linguamatics IQVIA.
“Any insights or features taken from clinical notes and used in AI algorithms need to be easily traced to the exact place in the document they came from,” she cautioned. “This enables clinical staff to validate the findings and build confidence in AI.”
“Any insights or features taken from clinical notes and used in AI algorithms need to be easily traced to the exact place in the document they came from.”
Dr. Elizabeth Marshall, Linguamatics IQVIA
For example, when ensuring a hospital is receiving the right reimbursement for chronic condition comorbidities such as hepatitis (chronic viral B and C) and HIV/AIDS, it is not only important to capture the data, but also to ensure one is able to link the data back to the patient’s EHR encounter where the information was obtained, she said.
“Further, it’s critical to consider how any insights will be made actionable and incorporated into clinical workflow; having a series of AI algorithms with no way to actually improve patient care is not impactful,” Marshall said. “For example, clinicians may want to improve the identification of patients who might be missed in a busy emergency department. Time is of the essence, and manually re-reviewing every radiology report to look for missed opportunities for follow-up wastes precious time.”
Instead, they could use natural language processing to review unstructured sections for critical findings within the reports such as identifying patients with incidental pulmonary nodules, she advised.
“When high-risk patients are identified, it’s critical to have a process in place for appropriate follow-up,” she said. “To actually improve care, the results need to be flagged in a risk register for follow-up by care coordinators after the patients are no longer in immediate danger.”
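The traceability Marshall describes comes down to keeping offsets. The sketch below uses a naive keyword regex as a stand-in for real NLP, with a made-up note and encounter ID; what matters is that every extracted finding carries the character span needed to jump back to its exact place in the source document.

```python
import re

def extract_findings(doc_id, text, pattern=r"(?i)\bpulmonary nodules?\b"):
    """Naive keyword extraction that keeps character offsets so each
    finding can be traced back to its exact place in the source note."""
    return [
        {"doc_id": doc_id, "start": m.start(), "end": m.end(),
         "snippet": text[m.start():m.end()]}
        for m in re.finditer(pattern, text)
    ]

note = ("CT chest: incidental pulmonary nodule in the left lower lobe. "
        "Recommend follow-up imaging in 6 months.")
findings = extract_findings("encounter-123", note)
for f in findings:
    # The stored offsets let a reviewer jump straight to the evidence.
    print(f"{f['doc_id']} [{f['start']}:{f['end']}] {f['snippet']}")
```

With the document ID and span stored alongside each finding, a risk register entry can always be validated against the original EHR encounter, which is exactly the audit trail Marshall calls for.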
How AI fits into workflows
On the AI spectrum, full replication of human thought is sometimes referred to as “strong” or “full” AI. This does not exist yet, certainly not in medicine.
“In healthcare, we primarily are focused on ‘narrow or weak’ AI, which could be described as the use of software or algorithms to complete specific problem-solving or reasoning tasks, at various levels of complexity,” said Dr. Ruben Amarasingham, president and CEO of Pieces Technologies. “These include specific focal tasks like reading a chest X-ray, interpreting the stage of a skin wound, reading a doctor’s note and understanding the concerns.”
“If the AI is not reducing stress and complexity of the workflow, it is either not working, not optimized or not worth it.”
Dr. Ruben Amarasingham, Pieces Technologies
One optimization best practice is to understand how the proposed AI technology is being inserted into the workflow, whether the insertion truly decreases “friction” in the workflow from the perspective of the stakeholder (provider, patient and family), and to measure and evaluate that value-add immediately after go-live, he said.
“If the AI is not reducing stress and complexity of the workflow, it is either not working, not optimized or not worth it,” he added.
Localize the training
AI models built by third parties may serve the local needs of an organization well – at least as a starting point – but they also carry risk, and they present an opportunity for optimization, said Kleinberg of Point-of-Care Partners.
“As the number of prebuilt models grows – for example, for sepsis, length-of-stay prediction and no-shows – it becomes more important to understand the quality of the training and test sets used and attempt to recognize what assumptions and biases the prebuilt model may contain,” he said. “There has been a ton of recent research on how easily AI models – particularly deep learning models – can be fooled, for example, by not paying enough attention to what’s in the background of an image. Are the models validated by any independent parties?”
Consider an application that recommends the most appropriate type of medication therapy management program for a patient, with the goal of increasing medication adherence, Kleinberg advised. To what degree might the test set have been chosen from individuals affected by certain environmental factors (warm versus cold climate), fitness levels, ethnic/economic backgrounds, number of medications taken, and so on, and how does that compare to the local population to be analyzed, he added.
“Retraining and testing the model with a more tuned-to-local demographic data set will be a key practice to achieve more optimized results,” he advised.
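One way to decide whether the local retraining Kleinberg recommends is warranted is to compare the vendor’s training population against the local one, category by category. The age-band shares and the 15% threshold below are invented for illustration; the idea is simply to quantify the demographic mismatch before go-live.

```python
def demographic_gap(train_dist, local_dist):
    """Largest absolute difference in category share between a vendor's
    training population and the local population. A large gap signals
    that local retraining and revalidation are warranted."""
    cats = set(train_dist) | set(local_dist)
    return max(abs(train_dist.get(c, 0.0) - local_dist.get(c, 0.0))
               for c in cats)

# Hypothetical age-band shares for a prebuilt no-show model:
vendor_train = {"18-39": 0.55, "40-64": 0.35, "65+": 0.10}
local_pop    = {"18-39": 0.25, "40-64": 0.40, "65+": 0.35}

gap = demographic_gap(vendor_train, local_pop)
print(f"max share gap: {gap:.0%}")
if gap > 0.15:  # illustrative threshold, to be set locally
    print("consider retraining on local data before go-live")
```

The same comparison can be run over any of the factors Kleinberg lists – climate, fitness levels, economic background, medication counts – to decide which prebuilt models need local tuning first.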
Governance over AI technology
Amarasingham of Pieces Technologies offered another AI optimization best practice: Health systems should set up governance systems for their AI technology.
“I am excited to see the development of AI committees at health systems similar to the development of evidence-based medicine, data governance or clinical order set committees in the recent past,” he said. “These groups should be involved with evaluating the performance of their AI systems in the healthcare workplace and not rely solely on vendors to oversee their products or services.”
They also could be tasked with developing the AI roadmap for their institution over time and as new AI technologies emerge, he added. These committees could be a mix of clinicians, administrators, informaticists and information system team members, he suggested.
Implementing any artificial intelligence technology can require a little more investment than originally anticipated, but if a healthcare organization starts small and plans properly, it will see true returns on that capital and manpower, advised Kumar of HMS.
“All healthcare organizations – provider, payer and employer – will attest that AI has the ability to help transform healthcare operations,” he stated. “However, AI by itself is not a silver bullet to revolutionizing the system – it requires people, process and technology planning, workflow transformation, and time to make sure that it is successful.”
This means that to correctly optimize the technology, one needs to go slowly and make sure that one considers all factors that will impact the output – from the data going in to how the insights are reported back to providers for action, he said.
“In order to ensure that you get the most out of your investment,” he concluded, “know that you will need to invest more and take longer to see the results.”
Email the writer: [email protected]
Healthcare IT News is a HIMSS Media publication.