Dr. Jagreet Kaur1, Suryakant2 and Kuldeep Kaur3*
1Chief AI Officer, XenonStack Private Limited; Founder and CEO, Xenon DigiLabs Private Limited; AI and Analytics Department, Xenonstack, Punjab, India. E-mail: firstname.lastname@example.org
2ModelOps Specialist, AI and Analytics Department, Xenonstack, Punjab, India
3AI Ethics Researcher, AI and Analytics Department, Xenonstack, Punjab, India
*Corresponding Author: Kuldeep Kaur, AI Ethics Researcher, AI and Analytics Department, Xenonstack, Punjab, India.
Received: August 25, 2021; Published: September 22, 2021
The use of AI in healthcare improves the industry's services. Discovering patterns in data using ML improves the decision-making process, allowing industry specialists to make data-driven, fact-based decisions. The use of ML models in healthcare is continuously increasing, but their complexity and black-box functioning raise concerns among stakeholders.
Explainable AI approaches have therefore emerged to make ML models transparent and trustworthy. This document presents a case study in which an ML model is used to detect diabetes, and Explainable AI approaches are applied to help stakeholders understand the AI system by addressing the concerns and queries they would raise. Several approaches, libraries and packages can be used to implement Explainable AI, such as LIME and SHAP. These allow industry practitioners to choose the right tools and approaches for making their AI systems trustworthy and transparent.
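As an illustration of the model-agnostic explanation idea behind packages such as LIME and SHAP, the sketch below uses only scikit-learn's permutation importance, which ranks features by how much shuffling each one degrades held-out accuracy. The dataset and feature names here are synthetic stand-ins for a Pima-style diabetes table, not the study's actual data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a diabetes dataset: 8 clinical-style features
# and a binary outcome (diabetic / non-diabetic).
FEATURES = ["pregnancies", "glucose", "blood_pressure", "skin_thickness",
            "insulin", "bmi", "pedigree", "age"]
X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A black-box classifier standing in for the diabetes prediction model.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Model-agnostic explanation: permutation importance scores each feature
# by the drop in test accuracy when that feature's values are shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(FEATURES, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>15s}: {score:.3f}")
```

In practice, SHAP additionally attributes each individual prediction to feature contributions, which is what lets a clinician ask why one specific patient was flagged; the global ranking above answers only which features the model relies on overall.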
Keywords: Diabetes; Healthcare; AI System
Citation: Dr. Jagreet Kaur., et al. “Explainable AI in Diabetes Prediction System”. Acta Scientific Medical Sciences 5.10 (2021): 131-136.
Copyright: © 2021 Dr. Jagreet Kaur., et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.