What will these mean as we go forward in health IT and health reform? In the past, Google struggled with its original Google Health personal health record, a notable failure at the very moment electronic health records were going mainstream.
The new Google Health is led by former Geisinger CEO David Feinberg, who reports to Google AI chief Jeff Dean; key players include Google veteran Paul Muret, who runs product, and Chief Health Officer Karen DeSalvo. Google and its parent holding company, Alphabet, are becoming a landing spot and exit strategy for high-ranking former HHS officials.
Google Health, which represents the first major new product area at Google since hardware, began to organize in 2018, and now numbers more than 500 people working under David Feinberg, who joined the company in early 2019. Most of these people were reassigned from other groups within Google, although the company has been hiring and currently has over a dozen open roles.
Google and its parent company, Alphabet, are counting on new businesses as growth slows in core digital advertising. Alphabet CEO Sundar Pichai, recently promoted from Google CEO to run the whole conglomerate, has said health care offers the biggest potential for Alphabet to use artificial intelligence to improve outcomes over the next five to ten years.
Google's health efforts date back more than a decade, to 2006, when it attempted to create a repository of health records and data. Back then, it aimed to connect doctors and hospitals and to help consumers aggregate their medical data. However, those early attempts failed in the market, and the company terminated this first "Google Health" product in 2012. Google then spent several years developing artificial intelligence to analyze imaging scans and other patient documents and to identify diseases, with the intent of predicting outcomes and reducing costs. It also experimented with other ideas, like giving people searching for medical information an option to talk to a doctor.
Google is already harnessing many of its strengths for use in or integration with electronic health records, whether as a patch, plug-in, or API. When HIPAA became law, entities such as Google were not covered by it. HIPAA does require covered health care entities to sign business associate agreements with subcontractors, binding those subcontractors to HIPAA's regulations, but it is unknown whether this is being enforced. The author will investigate this in the coming month, and the results will be published here.
Google AI has already created algorithms to analyze retinal images, CT scans, and MRIs.
[Figure caption, from Google's research blog: On the left is a fundus image graded as having proliferative (vision-threatening) DR by an adjudication panel of ophthalmologists (ground truth). On the top right is an illustration of the deep learning model's predicted scores ("P" = proliferative, the most severe form of DR). On the bottom right is the set of grades given by physicians without assistance ("Unassisted") and by those who saw the model's predictions ("Grades Only").]
Google's researchers saw clear evidence that showing model predictions could help physicians catch pathology they might otherwise have missed. In the retinal image above, the adjudication panel found signs of vision-threatening DR. These were missed by two of the three doctors who graded the image without assistance, but caught by all three doctors who graded it after seeing the model's predictions (which accurately detected the pathology).
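As a concrete illustration, this kind of grader is typically framed as multi-class image classification over fundus photographs. The sketch below is a deliberately tiny stand-in written with TensorFlow/Keras; the layer sizes, the 299x299 input, and the training call are all assumptions for illustration, not Google's published architecture, which was a far larger Inception-style network trained on a large corpus of expert-graded images.

```python
# Minimal sketch (not Google's model): a diabetic-retinopathy grader
# framed as 5-class image classification on the international severity
# scale (0 = no DR ... 4 = proliferative DR). Illustrative only.
import tensorflow as tf
from tensorflow.keras import layers

NUM_GRADES = 5  # none, mild, moderate, severe, proliferative

inputs = tf.keras.Input(shape=(299, 299, 3))          # RGB fundus photo
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(NUM_GRADES, activation="softmax")(x)  # P(grade)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would pair images with panel-adjudicated grades, e.g.:
# model.fit(train_images, train_grades, validation_split=0.1)
```

The softmax output is what makes assisted grading possible: the physician sees a per-grade score (the "P" value in the figure above) rather than a bare yes/no call.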
Jeffrey Dean co-founded Google Brain in 2011, which catapulted the company's deep learning technology into medical analysis. Some of the first health-related projects out of Google Brain included a computer-based model to screen eye scans for signs of diabetic retinopathy and an algorithm to detect breast cancer in X-rays. In 2018, Dean took the helm of the company's AI unit, reporting to Pichai.
Researchers found that the AI system reduced false positives by 5.7 percent for US women — a significant improvement when you consider how distressing it would be to be told you have cancer when you actually do not. It also reduced false negatives by 9.4 percent, meaning it caught instances of cancer that would’ve otherwise gone undetected.
And it did this by “looking” at mammograms alone, without access to any of the other health data that human doctors have on their patients.
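To make those metrics concrete, here is a hedged sketch of how false positives and false negatives are tallied when a screening model's calls are checked against biopsy-confirmed outcomes. The arrays are toy placeholders, not the study's data.

```python
# Toy illustration of false-positive / false-negative rates for a
# screening model. In y_true, 1 = cancer confirmed by biopsy or
# follow-up; in y_pred, 1 = exam flagged as suspicious. Made-up data.
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([0, 1, 1, 0, 0, 1, 0, 0])

fp = np.sum((y_pred == 1) & (y_true == 0))  # healthy women flagged
fn = np.sum((y_pred == 0) & (y_true == 1))  # cancers missed

fp_rate = fp / np.sum(y_true == 0)
fn_rate = fn / np.sum(y_true == 1)
print(f"false-positive rate {fp_rate:.1%}, false-negative rate {fn_rate:.1%}")

# The 5.7% and 9.4% figures quoted above are reductions in these rates
# relative to the original human readers on the same exams.
```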
This does not mean AI will soon replace radiologists; that's a common but false narrative. While AI systems catch things that doctors miss, doctors also catch things that AI systems miss. Their abilities are complementary and best used together.

To build the AI system, researchers obtained anonymized mammograms from some 76,000 women in the UK and 15,000 women in the US and used that data to train the system. They then tested it on the X-rays of a different group of 25,000 UK women and 3,000 US women, analyzing how often the AI was right about whether a woman actually ended up having cancer, as determined by biopsies and follow-ups.
In both the UK and the US, the AI outperformed a single radiologist.
In the UK, the standard of care is to have two radiologists read the X-rays, which can be tricky to analyze. The AI didn’t do better than two radiologists combined, but it didn’t do worse than them, either — and it reduced the workload of the second reader by 88 percent.
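The 88 percent figure comes from simulating a workflow in which the AI stands in for the second reader and a second human is consulted only when the AI and the first reader disagree. A toy sketch of that consensus rule, with an assumed (not measured) agreement rate, might look like:

```python
# Toy simulation of UK-style double reading with the AI as the second
# reader: a second human is needed only when the AI disagrees with the
# first reader. The 0.88 agreement rate is an assumption chosen for
# illustration, not a figure taken from the study.
import numpy as np

rng = np.random.default_rng(0)
n_exams = 10_000

first_reader = rng.integers(0, 2, n_exams)      # 1 = recall for workup
agrees = rng.random(n_exams) < 0.88             # assumed agreement rate
ai_reader = np.where(agrees, first_reader, 1 - first_reader)

needs_second_human = first_reader != ai_reader  # disagreements -> human
print(f"second-reader workload cut by {1 - needs_second_human.mean():.0%}")
```

Under this rule the second human reads only the disagreement cases, which is why a high agreement rate between the AI and the first reader translates directly into a large workload reduction.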
In the near future, I predict patients will see an additional disclosure stating that the entity uses AI to diagnose and/or treat them.
Google (Alphabet) has also dabbled in health through Verily, its life sciences subsidiary, and Calico Labs, a company that focuses on aging; both remain active.