Med-PaLM 2: Google AI health chatbot cracks US medical exam

Artificial intelligence, once confined to terrifying people in works of fiction, can now answer real-life queries, write code, and even charm individuals. In what can be described as the next triumph (or tragedy, depending on your point of view), an AI chatbot has passed a rigorous medical examination in the United States. The chatbot, a health-focused model called Med-PaLM, was created by Google. It has passed the medical licensing exam, but according to a peer-reviewed study, its responses still fall short of those of human doctors.

The release of ChatGPT, a generative AI chatbot built by Microsoft-backed OpenAI, sent interest in artificial intelligence skyrocketing. Much has been said about the benefits and potential risks of AI, but health is one area where the technology has already made significant progress. According to media reports, algorithms can read certain medical scans as well as humans can.

Google first revealed Med-PaLM in a preprint paper in December. The AI chatbot has not yet been made available to the general public. In a peer-reviewed study published in the journal Nature, Google researchers reported that Med-PaLM scored 67.6 percent on US Medical Licensing Examination (USMLE)-style questions; the passing threshold is roughly 60 percent. “Med-PaLM performs encouragingly, but remains inferior to clinicians,” the study said.

Med-PaLM 2 scored 86.5 percent on the USMLE exam

Google has also announced a new evaluation standard to identify and reduce ‘hallucinations,’ the term for when AI models produce erroneous information. Karan Singhal, a Google researcher and the study’s principal author, told AFP that the team used the benchmark to evaluate a more recent version of their model, with “super exciting” findings.

According to preprint research published in May, which has not been peer-reviewed, Med-PaLM 2 scored 86.5 percent on the USMLE-style exam, outperforming the prior version by nearly 20 percentage points. Still, when it comes to AI-powered medical chatbots, “there is an elephant in the room,” James Davenport, a computer scientist at the University of Bath in the UK, was quoted by AFP as saying.

Davenport added that there is a big difference between answering “medical questions and actual medicine.” And because of their statistical character, hallucinations will likely always be an issue for such large language models, according to Anthony Cohn, an AI scientist at the University of Leeds in the UK.

As a result, these models “should always be regarded as assistants rather than final decision makers,” Cohn said. Singhal believes that in the future, Med-PaLM could be used to assist doctors by suggesting options they might not otherwise have considered.
