
Could robot doctors fully replace human ones?

When it comes to healthcare, the popular ChatGPT platform is becoming increasingly intelligent. Why does it matter?

Source: Axios

When it comes to healthcare, the popular artificial intelligence platform ChatGPT is becoming increasingly intelligent.

In its analysis, Axios seeks to answer the question: Why does it matter?

Many people turn to "Dr. Google" for medical advice. The analysis concludes that they may turn to 'Dr. ChatGPT' in the future.

First, many clinical diagnoses and decisions could be made by machines rather than human doctors in the future.

Also, ChatGPT recently passed all three parts of the US Medical Licensing Examination (USMLE) as part of an experiment, albeit narrowly.

According to the researchers, second-year medical students often spend hundreds of hours preparing for Part 1, while Part 3 is typically taken by medical school graduates.

Ansible Health, a Silicon Valley startup specializing in the treatment of COPD (Chronic Obstructive Pulmonary Disease), had been looking into artificial intelligence and machine learning tools to help it improve its work.

"When ChatGPT was released, there was a lot of excitement in the tech world, so we wanted to see if it was just a fad or something useful," explains Jack Poe, CEO of Ansible and former Google product manager. "We were surprised by the results while evaluating. Not only was what was correct but so was the explanation."

Po and his colleagues then decided to have ChatGPT take the USMLE, but only after ensuring that "none of the answers, explanations, or related content is already on Google." They then published their findings, which are currently under review.

The fact that ChatGPT could perform so well despite never having been trained on a medical dataset was a huge surprise. ChatGPT was programmed to avoid anything that could be construed as medical advice, so the researchers had to exclude a set of "indeterminate" responses.

"These responses were so general that it was difficult to tell if they were correct or incorrect," study author Morgan Cheetham, a student at Brown University School of Medicine, explains.

Because generative AI models are still in their early stages, they are augmenting rather than replacing medical work for the time being. Ansible, for example, uses ChatGPT to explain concepts to patients, after the explanations have been reviewed by a trained professional.

What comes next: the technology may eventually be used for wellness checks and other routine medical tasks. As it progresses beyond text generation, it could incorporate additional signals such as tone of voice, body language, and facial expressions.

Direct integration with a patient's medical records is expected to be particularly useful, since doctors often do not have enough time to review a patient's full history.

Axios continues its analysis by predicting that machines will not be diagnosing patients on their own anytime soon. AI models such as ChatGPT can make confident claims that turn out to be false, which can be dangerous in medical applications.

"I think we're in the middle of a 20-year arc, similar to what we've seen with finance," said Vijay Pandey, a healthcare investor and Stanford University assistant professor of bioengineering.

"It was crazy to think that a computer could beat a top Wall Street stockbroker in 2000. To think that this broker could beat a computer today is insane."


