WHO has called for caution in the use of artificial intelligence (AI) generated large language model tools (LLMs) to protect human safety, autonomy and well-being. Excitement around using LLMs to support people's health needs is growing rapidly. However, to protect people's health and reduce inequity, it is important to examine the risks associated with LLMs before relying on them to access health information or as supporting tools.
WHO is enthusiastic about the appropriate use of LLMs in health care; however, it is concerned that the caution normally exercised with any new technology is not being exercised consistently with LLMs. WHO noted that untested systems may produce errors and cause harm to patients, and that rigorous oversight is therefore required to ensure LLMs are used safely and effectively.
WHO highlighted several concerns that need to be addressed:
- The data used to train AI models may be biased, generating misleading or inaccurate information that could pose risks to patients' health.
- Responses generated by LLMs can be completely incorrect or contain serious errors.
- The technology can be misused to disseminate disinformation that is difficult for the public to distinguish from reliable health content.
WHO has asked for these concerns to be addressed, and for clear evidence of benefit to be measured, before LLMs are widely deployed in health care and medicine.
Further, WHO reiterated the importance of applying ethical principles and appropriate governance when deploying AI for health, as set out in its guidance on the Ethics & Governance of AI for Health.
For more updates on the latest health news, please visit our website.