These days, more and more healthcare providers are riding the wave of artificial intelligence (AI) innovation to provide better healthcare services. These include aiding drug discovery and the development of novel drugs, predicting the risk of terminal diseases, and using data-driven algorithms to improve the quality of patient care — all with the support of AI-powered solutions.

Pera Labs, for instance, claims to be a groundbreaking fertility company that uses AI and lab-on-a-chip technology to “help aspirational parents by assisting fertility clinics [to] reduce their standard 70% treatment failure rate.” For its part, HyperAspect deploys its AI solutions for tracking things like patient records and equipment — providing healthcare facilities with comprehensive visibility of all their data, so they can make better decisions. 

NeuraLight’s AI-driven platform integrates multiple digital markers to accelerate and improve drug development, monitoring and precision care for patients with neurological disorders. And Tel Aviv-based AI-powered drug discovery startup Protai claims it’s “reshaping the drug discovery and development process using proteomics and an end-to-end AI-based platform.”

However, while more healthcare providers are using AI and data to improve patient care, several issues with AI-powered technologies persist — especially around AI ethics and the accuracy of datasets. In an earlier VentureBeat article, reporter Kyle Wiggers highlighted an IDC study which “estimates the volume of health data created annually, which hit over 2,000 exabytes in 2020 [and] will continue to grow at a 48% rate year over year.” Although this vast amount of data provides a huge opportunity to train machine learning models, Wiggers noted that “the datasets used to train these systems come from a range of sources, but in many cases, patients aren’t fully aware their information is included among them.”

AI systems may become virtually indispensable as ever more data is amassed about every aspect of health. But the future of AI in healthcare rests on how healthcare providers can navigate around “technological, systemic, regulatory and attitudinal roadblocks to successful implementation; and integrating AI into the fabric of health care,” according to a PubMed paper.

3 challenges for AI in healthcare

Here are three of the biggest AI bottlenecks in healthcare today, along with some of the ways organizations can begin to overcome them.

1. AI bias

Data is the fuel on which AI runs. Large volumes of data help organizations train AI models effectively. But too much data can also cause “analysis paralysis.” AI bias often occurs because of issues along the data pipeline — inaccurate data labeling and poor data integration, for example — and the healthcare industry isn’t immune to this problem.

Experts point to inherent risks in predictions made by AI models when the models are taken into real-life situations. For example, a 2019 study (published in Science) assessing an algorithm widely used by U.S. hospitals found that, because it used past healthcare spending as a proxy for medical need, it systematically underestimated the needs of Black patients, leaving millions of them with a lower standard of care than equally sick white patients.

AI is great at learning from datasets, according to Micah Breakstone, cofounder and CEO at NeuraLight, but “when these datasets are inaccurate, messy [or] hard to process (e.g. if they appear in unstructured forms such as free text or untagged images), it’s much harder to unleash the power of machine learning.” Furthermore, he noted that “in many cases, relevant datasets simply don’t exist, and there is a challenge of either learning from a small number of examples or leveraging AI to construct a good proxy for these datasets.”

Pavel Pavlov, CEO at HyperAspect, said that while the healthcare space is data-rich and suitable for deterministic and nondeterministic analytics, building up proper datasets is difficult. He added that convoluted internal processes and the quest for fast ROI are obstructing long-term positive outcomes in the commercial and clinical areas. So, while there’s a lot of data in the healthcare industry, bias in datasets — leading to AI bias — is hindering organizations from getting the best and most accurate results from their AI models.
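To make the data-pipeline point concrete, here is a minimal Python sketch (not drawn from any of the companies quoted here) of one common mitigation step: auditing a trained model's error rates across patient subgroups before deployment. The dataset file, the column names ("race," "readmitted") and the model choice are all illustrative assumptions.

```python
# Minimal sketch: flag potential AI bias by comparing per-subgroup recall.
# "patients.csv", "race" and "readmitted" are hypothetical, not a real dataset.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("patients.csv")              # hypothetical de-identified data
X = df.drop(columns=["readmitted", "race"])   # features only
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, df["readmitted"], df["race"], test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A large gap in recall (sensitivity) between groups is the kind of disparity
# the 2019 Science study surfaced; it signals the model needs rework before use.
for group in g_test.unique():
    mask = g_test == group
    print(group, round(recall_score(y_test[mask], model.predict(X_test[mask])), 3))
```

Checks like this are cheap compared with the cost of deploying a biased model, and they pair naturally with the cleaner, better-labeled datasets the experts quoted here call for.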

2. Explainability

Explainable AI (also called XAI) “enables IT leaders — especially data scientists and ML engineers — to query, understand and characterize model accuracy and ensure transparency in AI-powered decision-making,” as noted in an earlier VentureBeat article. One of the major challenges with AI is trust: Humans still don’t fully trust AI. That’s especially because of biases and errors associated with AI models. This is a problem that explainable AI aims to solve.

According to NeuraLight’s Micah Breakstone, “it’s not enough to have a mathematical solution to a question without understanding the underlying mechanisms explaining why the AI-discovered solution works.” As an example, he said, “consider an AI-generated model that’s able to predict the progression of a neurodegenerative disease like Parkinson’s from a set of biomarkers. Such a model will indefinitely be met with suspicion from the healthcare community if the underlying mechanisms remain obscure — and rightfully so! Unexplained models are highly susceptible to quirky errors, leaving physicians in the dark and unable to intervene on behalf of patients.”

Understanding an AI solution’s underlying mechanisms can help to “ensure a transparent process for model performance,” according to Pera Labs CEO Burak Özkösem.
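As a rough illustration of what such transparency can look like in practice (and not a description of NeuraLight's or Pera Labs' actual methods), the sketch below applies permutation importance, a model-agnostic explainability technique, to a synthetic stand-in for biomarker data. The feature names are invented for the example.

```python
# Minimal sketch: use permutation importance to see which (synthetic) biomarkers
# actually drive a model's predictions, rather than trusting a black box.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # stand-in for biomarker data
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
feature_names = ["tremor_score", "gait_speed", "eye_movement", "sleep_index"]  # invented

model = GradientBoostingRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Shuffling a feature and measuring how much the score degrades gives a crude,
# model-agnostic answer to "why does this prediction work?"
for i in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")
```

Tools like this don't replace a mechanistic explanation of disease progression, but they give physicians a starting point for questioning a model instead of taking it on faith.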

3. Regulations

Özkösem told VentureBeat that sustainable AI for the healthcare industry rests on two things: clinical relevance and transparency. But, he said, transparency unfortunately shifts during the commercialization process as AI solutions move from lab to market.

“Most of the AI models for health were developed by researchers at universities with public datasets in the beginning,” Özkösem said. “However, when these models become commercial, the datasets … used for training the models have to come from users and customers. This becomes problematic, with different data privacy rules like HIPAA in the U.S. and GDPR in the EU. This black-box AI approach is very dangerous for future treatments.”

According to HyperAspect’s Pavel Pavlov, “the major [AI] bottleneck in healthcare is around legislation and regulations.” But he quickly added that they are “a necessary guardrail to avoid data privacy and other issues around highly sensitive personal information.”

Some solutions  

To tackle AI bias, Breakstone noted that “it’s all about building better, cleaner, unbiased, large datasets.” For explainability, he added that “it’s important for AI experts to work hand-in-hand with physicians and scientists to ensure that the AI doesn’t remain an unexplainable black box, but rather a truly insightful solution.”

Regarding regulations, Özkösem said that “clinics must ensure their AI technologies are compliant with patient data privacy.” He also explained that “the first step for organizations to be ready for the AI revolution in healthcare is to digitize their records. This would provide secure, private yet more efficient treatment suggestions by AI, save more lives and increase the clinics’ performances.”
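As a toy illustration of that groundwork (and emphatically not a compliance recipe: real HIPAA or GDPR de-identification involves far more, such as HIPAA Safe Harbor's 18 identifier categories or expert determination), the sketch below drops direct identifiers and pseudonymizes the record key before data reaches any training pipeline. Every column name is an assumption.

```python
# Toy sketch only: strip direct identifiers and hash the record key so training
# data can be linked across tables without exposing who the patient is.
import hashlib
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "address", "phone", "email", "ssn"]  # illustrative list

def pseudonymize(records: pd.DataFrame, salt: str) -> pd.DataFrame:
    out = records.drop(columns=DIRECT_IDENTIFIERS, errors="ignore").copy()
    # Salted hash keeps records linkable for model training while hiding the raw ID.
    out["patient_id"] = out["patient_id"].astype(str).map(
        lambda pid: hashlib.sha256((salt + pid).encode()).hexdigest()[:16]
    )
    return out

# Example usage with a hypothetical EHR export:
# clean = pseudonymize(pd.read_csv("ehr_export.csv"), salt="rotate-this-secret")
```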

Özkösem also said innovation is a key ingredient in solving some of these challenges, with Pavlov noting that “the main precursor for enabling innovation in any field is keeping an open mind for emerging tech and being patient for achieving the desired outcome.

“Additionally, streamlining internal processes that allow rapid integrations within the enterprise ecosystem will certainly be major enablers for overcoming [these] AI bottlenecks.”

The future of AI in healthcare

The AI healthcare software market is growing rapidly. A report by Omdia predicts that the market will move past the $10 billion mark by 2025. While several challenges still surround the use of AI in healthcare today, the trends and data show that AI’s future in healthcare isn’t in jeopardy (at least for now).

Breakstone believes that right now it’s all about precision medicine, which he described as “using AI for tailoring a specific treatment to a person based on their entire profile (genetics, environment, lifestyle, etc.) in order to optimize patient outcomes.” In the future, he said, “AI will be able to help physicians take in and process a vast amount of data on every patient, and automatically suggest a highly-customized course of treatment and selection of drugs for a given person, in a way that’s both transparent and explainable — allowing physicians to step in as needed.”

Meanwhile, Pavlov believes, AI will find more use in preventive medicine. “The future of AI in the clinical and dental areas will be more predictive and focused on preventing diseases before they develop, or [discovering them] in early phases in order to improve the patient outcome,” he said. 

Verikai CEO Jeff Chen told VentureBeat that “it’s safe to say that the amount of data produced will only keep growing. There is too much value in that data for AI to be banned completely. So expect the government, industry and advocacy groups to consolidate around a common framework and set of practices that balance the need to protect individuals’ data with the real medical benefits of using that data within AI models.”  

It’s not just AI healthcare company founders who are excited about how AI could change the way things are done in healthcare. A study by the World Economic Forum predicted that 2030 will be a big year for the application of AI in healthcare, with several new use cases expected to take shape in what the study referred to as “a truly proactive, predictive healthcare system.” The study further forecast that “in 2030, healthcare systems will be able to anticipate when a person is at risk of developing a chronic disease, for example, and suggest preventive measures before they get worse. This development would be so successful that rates of diabetes, congestive heart failure and COPD (chronic obstructive pulmonary disease) — which are all strongly influenced by the social determinants of health (SDOH) — will finally be on the decline.”

As AI technologies advance in the healthcare domain, the future leans toward democratization, where patients will have more control. As Damone Altomare, CTO at VIP StarNetwork, wrote in an earlier VentureBeat article: “We are on the cusp of the democratization of healthcare. It is not only possible but hugely beneficial. It will alleviate the stress of navigating the healthcare system, give the patient more choice in service and cost, and help drive healthcare costs down overall by driving more competition in the marketplace.”


Kolawole Samuel Adebayo
