
Artificial Intelligence (AI) and Large Language Models (LLMs)


Background and Aim

There has been an explosion of machine learning studies in healthcare since 2009, attributed to increasing computation power, big data, and cloud storage. AI-based solutions in the healthcare and medical domain build on the following core principles of ML in healthcare, and we expect the solution you develop to incorporate them as a guideline:


Principle 1: Computer-Aided Diagnosis

Computer-aided detection, selection, diagnosis, and decision-analysis support increase physician efficiency and can potentially improve patient care.

Applied Example: Robust artificial intelligence tools to predict future cancer 

https://news.mit.edu/2021/robust-artificial-intelligence-tools-predict-future-cancer-0128
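The idea behind computer-aided diagnosis can be sketched in a few lines: a model scores a case and flags high-risk ones for physician review, with the clinician keeping the final say. The weights, feature names, and threshold below are invented for illustration; a real system would learn them from labelled clinical data.

```python
import math

# Hypothetical feature weights for a toy risk model (illustrative only;
# in practice these would be learned from labelled imaging/clinical data).
WEIGHTS = {"lesion_size_mm": 0.08, "density_score": 0.6, "age_decades": 0.15}
BIAS = -3.0
REVIEW_THRESHOLD = 0.5  # cases above this probability go to the physician

def risk_probability(features):
    """Logistic model: map a feature dict to a probability in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def flag_for_review(features):
    """Return (probability, needs_review): the tool assists, not decides."""
    p = risk_probability(features)
    return p, p >= REVIEW_THRESHOLD

p, review = flag_for_review(
    {"lesion_size_mm": 22, "density_score": 2.1, "age_decades": 6.5}
)
```

The point of the sketch is the division of labour: the model narrows attention, the physician makes the diagnosis.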


Principle 2: Constant Improvement

The goal of ML is continued evolution through learning: decreasing bias (such as by sex or race), adding or removing parameters on an ongoing basis, and reducing self-fulfilling prophecies, all in the service of accurate healthcare predictions.


Applied Example: DeepVariant, Highly Accurate Genomes With Deep Neural Networks

https://ai.googleblog.com/2017/12/deepvariant-highly-accurate-genomes.html
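"Constant improvement" starts with routinely measuring how the deployed model performs for different patient subgroups. A minimal sketch, using made-up evaluation records: compute recall (sensitivity) per subgroup, and treat a large gap between groups as a trigger to re-balance training data or re-tune thresholds before the next release.

```python
from collections import defaultdict

# Hypothetical evaluation records: (subgroup, true_label, predicted_label).
# In a deployed system these would stream in from routine clinical use.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

def per_group_recall(rows):
    """Recall per subgroup: missed positives are the costliest errors here."""
    tp, pos = defaultdict(int), defaultdict(int)
    for group, truth, pred in rows:
        if truth == 1:
            pos[group] += 1
            if pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos}

recalls = per_group_recall(records)
# A large recall gap between subgroups signals bias that the next
# training cycle should address.
gap = max(recalls.values()) - min(recalls.values())
```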


Principle 3: Real-Time Feedback and Prediction

The aim is to use machine learning models to search medical data and uncover insights that help improve health outcomes and the patient experience.

Applied Example: Machine learning for real-time aggregated prediction of hospital admission for emergency patients

https://www.nature.com/articles/s41746-022-00649-y
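The aggregated-prediction idea in the linked paper can be illustrated simply: keep an up-to-date admission probability for each patient currently in the emergency department, and sum them into an expected number of admissions that bed managers can act on hours ahead. The probabilities and patient IDs below are invented for the sketch.

```python
# Sketch of aggregated real-time prediction: per-patient admission
# probabilities (illustrative values) are summed into an expected count.

def expected_admissions(patient_probs):
    """Sum of Bernoulli means = expected number of admissions."""
    return sum(patient_probs.values())

def update(patient_probs, patient_id, prob):
    """Refresh one patient's probability as new vitals/labs arrive."""
    patient_probs[patient_id] = prob
    return expected_admissions(patient_probs)

ed = {"p1": 0.9, "p2": 0.2, "p3": 0.5}
baseline = expected_admissions(ed)  # 1.6 expected admissions
revised = update(ed, "p2", 0.7)     # a new lab result raises p2's risk
```

Because the aggregate updates with every new observation, downstream planning reacts in real time rather than to yesterday's census.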


Principle 4: Streamlined Workflow

AI can provide around-the-clock support through chatbots that answer basic questions and point patients to resources when their provider's office is closed. AI could also be used to triage questions and flag information for further review, helping alert providers to health changes that need additional attention and optimising the patient-doctor interaction.


Applied Example:

Project EmpowerMD: Medical conversations to medical intelligence

https://www.microsoft.com/en-us/research/project/empowermd/
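The triage-and-flag workflow described above can be sketched as a rule-based layer: out-of-hours messages are either answered from a small FAQ or escalated for clinician review. The keywords and canned responses are invented placeholders, not clinical guidance, and a production system would use a learned classifier rather than substring matching.

```python
# Illustrative rule-based triage layer (placeholder keywords/answers).
URGENT_KEYWORDS = {"chest pain", "shortness of breath", "bleeding"}
FAQ = {
    "opening hours": "The clinic opens at 8am on weekdays.",
    "prescription refill": "Refills are processed within 2 business days.",
}

def triage(message):
    """Route a patient message: escalate, answer from FAQ, or queue."""
    text = message.lower()
    if any(kw in text for kw in URGENT_KEYWORDS):
        return ("escalate", "Flagged for urgent clinician review.")
    for topic, answer in FAQ.items():
        if topic in text:
            return ("answered", answer)
    return ("queue", "Routed to the provider's inbox for the next working day.")

route, reply = triage("I have chest pain since this morning")
```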


Principle 5: Accountability

Two aspects of clinical artificial intelligence used for decision-making deserve attention: moral accountability for harm to patients, and safety assurance to protect patients against such harm. AI-based tools are challenging the standard clinical practices of assigning blame and assuring safety.

Example: Artificial intelligence in health care: accountability and safety

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7133468/


Applied Example:

The AI Clinician

https://www.imperial.ac.uk/artificial-intelligence/research/healthcare/ai-clinician/
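One practical building block for accountability is an audit trail: every model output is recorded together with the model version, the inputs it saw, and the clinician's final action, so harm can later be traced to a specific model and decision. The field names and values below are illustrative assumptions, not part of any cited system.

```python
import json
import time

audit_log = []  # in production this would be durable, append-only storage

def record_decision(model_version, patient_inputs, model_output, clinician_action):
    """Log one decision so blame and safety reviews have a factual record."""
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": patient_inputs,
        "model_output": model_output,
        "clinician_action": clinician_action,  # the human stays accountable
    }
    audit_log.append(entry)
    return json.dumps(entry)  # serialisable for export to durable storage

record_decision("demo-model-0.3", {"lactate": 2.4},
                "suggest_vasopressor", "overridden")
```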


Resources:

[Machine Learning in Healthcare - A Primer for Physicians - Principles of ML In Healthcare](https://www.youtube.com/watch?v=DWbVwWp5IPk)


[Disruptive Innovation in Tech, Healthcare & Energy #BigIdeas2023 #Part 2 #ArtificialIntelligence](https://www.youtube.com/watch?v=j5SCoTF0ITY&t=152s)


[Machine Learning in Healthcare & Neurology - Trends (2021)](https://www.youtube.com/watch?v=MpC9sytUs6E)
