
A comparative study of artificial intelligence and human doctors for the purpose of triage and diagnosis

Circa 2018 #artificialintelligence #doctor


Abstract: Online symptom checkers have significant potential to improve patient care, but their reliability and accuracy remain variable. We hypothesised that an artificial intelligence (AI) powered triage and diagnostic system would compare favourably with human doctors with respect to triage and diagnostic accuracy. We performed a prospective validation study of the accuracy and safety of an AI powered triage and diagnostic system. Identical cases were evaluated by both the AI system and human doctors. Differential diagnoses and triage outcomes were evaluated by an independent judge, who was blinded to the source (AI system or human doctor) of each outcome. Independently of these cases, vignettes from publicly available resources were also assessed, providing a benchmark against previous studies and the diagnostic component of the MRCGP exam. Overall, we found that the Babylon AI powered Triage and Diagnostic System was able to identify the condition modelled by a clinical vignette with accuracy comparable to that of human doctors (in terms of precision and recall). In addition, we found that the triage advice recommended by the AI system was, on average, safer than that of human doctors, when compared against the ranges of acceptable triage provided by independent expert judges, with only a minimal reduction in appropriateness.
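To illustrate the metric the abstract refers to, here is a minimal sketch of how precision and recall can be scored when a differential diagnosis is compared against a gold-standard condition. This is not Babylon's actual evaluation code; the case data and the choice of macro-averaging over cases are illustrative assumptions.

```python
# Precision/recall for differential diagnoses, macro-averaged over cases.
# Each case pairs the set of conditions in a differential with the set of
# gold-standard conditions for the vignette.

def diagnosis_precision_recall(cases):
    """cases: list of (differential, gold) pairs, each a set of condition names."""
    precisions, recalls = [], []
    for differential, gold in cases:
        hits = len(differential & gold)  # conditions the differential got right
        precisions.append(hits / len(differential) if differential else 0.0)
        recalls.append(hits / len(gold) if gold else 0.0)
    n = len(cases)
    return sum(precisions) / n, sum(recalls) / n

# Hypothetical vignettes: the first differential contains the true condition
# (plus one extra), the second misses it entirely.
cases = [
    ({"influenza", "common cold"}, {"influenza"}),
    ({"migraine"}, {"tension headache"}),
]
precision, recall = diagnosis_precision_recall(cases)  # 0.25, 0.5
```

Under this scheme a longer differential can raise recall (the true condition is more likely to be included) at the cost of precision, which is why the abstract reports both.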

From: Yura Perov

[v1] Wed, 27 Jun 2018 21:18:37 UTC (54 KB)

Dr. Mona Flores, M.D., Global Head of Medical AI, NVIDIA — Bridging Technology And Medicine

Bridging Technology And Medicine For The Modern Healthcare Ecosystem — Dr. Mona G. Flores, MD, Global Head of Medical AI, NVIDIA.


Dr. Mona Flores, M.D., is the Global Head of Medical AI at NVIDIA (https://blogs.nvidia.com/blog/author/monaflores/), the American multinational technology company, where she oversees the company’s AI initiatives in medicine and healthcare to bridge the chasm between technology and medicine.

Dr. Flores first joined NVIDIA in 2018 with a focus on developing their healthcare ecosystem. Before joining NVIDIA, she served as the chief medical officer of digital health company Human-Resolution Technologies after a 25+ year career in medicine and cardiothoracic surgery.

Dr. Flores received her medical degree from Oregon Health and Science University, followed by a general surgery residency at the University of California at San Diego, a Postdoctoral Fellowship at Stanford, and a cardiothoracic surgery residency and fellowship at Columbia University in New York.

Dr. Flores also holds a Master’s degree in Biology from San Jose State and an MBA from the University at Albany School of Business. She initially worked in investment banking for a few years before pursuing her passion for medicine and technology.

Future Chip Innovation Will Be Driven By AI-Powered Co-Optimization Of Hardware And Software

To say we’re at an inflection point of the technological era may be an obvious declaration to some. The opportunities at hand, and how various technologies and markets will advance, are nuanced, but a common theme is emerging: innovation is moving at a pace humankind has seen at only rare points in history. The invention of the printing press and the ascension of the internet come to mind as similar inflection points, but current innovation trends are being driven aggressively by machine learning and artificial intelligence (AI). In fact, AI is empowering rapid technology advances in virtually all areas, from the edge and personal devices, to the data center and even chip design itself.

There is also a self-perpetuating effect at play, because the demand for intelligent machines and automation everywhere is also ramping up, whether you consider driver assist technologies in the automotive industry, recommenders and speech recognition input in phones, or smart home technologies and the IoT. What’s spurring our recent voracious demand for tech is the mere fact that leading-edge OEMs, from big names like Tesla and Apple, to scrappy start-ups, are now beginning to realize great gains in silicon and system-level development beyond the confines of Moore’s Law alone.

AI can reliably spot molecules on exoplanets, and might one day even discover new laws of physics

Do you know what the Earth’s atmosphere is made of? You’d probably remember it’s oxygen, and maybe nitrogen. And with a little help from Google you can easily reach a more precise answer: 78% nitrogen, 21% oxygen and about 1% argon. However, when it comes to the composition of exo-atmospheres—the atmospheres of planets outside our solar system—the answer is not known. This is a shame, as atmospheres can indicate the nature of planets, and whether they can host life.

As exoplanets are so far away, it has proven extremely difficult to probe their atmospheres. Research suggests that artificial intelligence (AI) may be our best bet to explore them—but only if we can show that these algorithms think in reliable, scientific ways, rather than cheating the system. Now our new paper, published in The Astrophysical Journal, has provided reassuring insight into their mysterious logic.

Astronomers typically exploit the transit method to investigate exoplanets, which involves measuring dips in light from a star as a planet passes in front of it. If an atmosphere is present on the planet, it can absorb a very tiny bit of light, too. By observing this event at different wavelengths—colors of light—the fingerprints of molecules can be seen in the absorbed starlight, forming recognizable patterns in what we call a spectrum. A typical signal produced by the atmosphere of a Jupiter-sized planet only reduces the stellar light by ~0.01% if the star is Sun-like. Earth-sized planets produce 10–100 times lower signals. It’s a bit like spotting the eye color of a cat from an aircraft.
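The signal sizes quoted above follow from simple geometry: the fractional dip during transit is (Rp/Rs)², and the extra absorption from an atmosphere of effective thickness h is roughly ((Rp + h)² − Rp²)/Rs² ≈ 2·Rp·h/Rs². A back-of-the-envelope sketch (the ~350 km effective atmosphere height for a hot-Jupiter-like planet is an illustrative assumption, not a value from the paper):

```python
# Back-of-the-envelope transit signals for a Sun-like star (radii in km).
R_SUN = 696_340
R_JUPITER = 69_911
R_EARTH = 6_371

def transit_depth(r_planet, r_star=R_SUN):
    """Fraction of starlight blocked by the planet's disc: (Rp/Rs)^2."""
    return (r_planet / r_star) ** 2

def atmosphere_signal(r_planet, h_atm, r_star=R_SUN):
    """Extra absorption from an atmospheric annulus of thickness h_atm:
    ((Rp + h)^2 - Rp^2) / Rs^2, roughly 2*Rp*h/Rs^2 when h << Rp."""
    return ((r_planet + h_atm) ** 2 - r_planet ** 2) / r_star ** 2

jupiter_dip = transit_depth(R_JUPITER)          # ~1% of the starlight
earth_dip = transit_depth(R_EARTH)              # ~0.008%, over 100x smaller
jupiter_atm = atmosphere_signal(R_JUPITER, 350) # ~0.01%, the figure in the text
```

Plotting such depths as a function of wavelength is what produces the transmission spectrum in which molecular fingerprints appear.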

Tesla releases new footage of auto labeling tool for its self-driving effort

Tesla’s head of AI has released new footage of the automaker’s auto labeling tool for its self-driving effort.

It’s expected to be an important accelerator in improving Tesla’s Full Self-Driving Beta.

Tesla is often said to have a massive lead in self-driving data thanks to having equipped all its cars with sensors early on and collecting real-world data from a fleet that now includes over a million vehicles.

Ex-Googler Timnit Gebru Starts Her Own AI Research Center

ONE YEAR AGO Google artificial intelligence researcher Timnit Gebru tweeted, “I was fired” and ignited a controversy over the freedom of employees to question the impact of their company’s technology. Thursday, she launched a new research institute to ask questions about responsible use of artificial intelligence that Gebru says Google and other tech companies won’t.

“Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures,” says Gebru, who is founder and executive director of Distributed Artificial Intelligence Research (DAIR). The first part of the name is a reference to her aim to be more inclusive than most AI labs—which skew white, Western, and male—and to recruit people from parts of the world rarely represented in the tech industry.

Gebru was ejected from Google after clashing with bosses over a research paper urging caution with new text-processing technology enthusiastically adopted by Google and other tech companies. Google has said she resigned and was not fired, but acknowledged that it later fired Margaret Mitchell, another researcher who with Gebru co-led a team researching ethical AI. The company placed new checks on the topics its researchers can explore. Google spokesperson Jason Freidenfelds declined to comment but directed WIRED to a recent report on the company’s work on AI governance, which said Google has published more than 500 papers on “responsible innovation” since 2018.

Sydney-based medtech startup Harrison.ai gets $129M AUD led by Horizons Ventures

Harrison.ai, a Sydney-based company that creates medical devices with AI technology, announced today it has raised $129 million AUD (about $92.3 million USD) in what it called one of the largest Series B rounds ever for an Australian startup.

The funding was led by returning investor Horizons Ventures and included participation from new investors Sonic Healthcare and I-MED Radiology Network. Existing backers Blackbird Ventures and Skip Capital also returned for the round, which brings Harrison.ai’s total raised over the past two years to $158 million AUD.

Harrison.ai announced it has also formed a joint venture with Sonic Healthcare, one of the world’s largest medical diagnostics providers, to develop and commercialize new clinical AI solutions in pathology. The partnership will focus first on histopathology, or the diagnosis of tissue diseases.
