KieLive#13: Can you trust your AI?
In this live stream, we'll discuss how the TrustyAI toolbox can help you understand your AI-based automation.
Link to the live stream: http://red.ht/KieLive13
Given the ubiquity of decision automation and predictive models across business domains, it is increasingly important that such systems can be trusted by all the stakeholders involved. This becomes even more crucial when AI "predictions" can have a tangible impact on humans, e.g. in healthcare or finance.
Explainable AI (XAI) is a research field that aims to provide insights into how AI and predictive models generate their predictions by means of explanations (human-understandable representations of a possibly complex model's inner workings), making such models less opaque and more trustworthy.
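To make the idea concrete, here is a minimal sketch of one common XAI technique: perturbation-based feature importance, in the spirit of LIME-style explainers. This is an illustration only, not the TrustyAI API; the model and feature names are hypothetical.

```python
# Illustrative sketch of perturbation-based explanation (LIME-like idea).
# The "credit approval" model and its features are hypothetical examples.

def black_box_model(income, debt):
    """Stand-in opaque model: approve (1) when income outweighs debt."""
    return 1 if income - 2 * debt > 0 else 0

def feature_importance(model, inputs, delta=1.0):
    """Score each feature by how often nudging it flips the prediction."""
    baseline = model(**inputs)
    scores = {}
    for name, value in inputs.items():
        flips = 0
        for sign in (-1, 1):
            # Perturb one feature at a time, holding the others fixed.
            perturbed = dict(inputs, **{name: value + sign * delta * abs(value)})
            if model(**perturbed) != baseline:
                flips += 1
        scores[name] = flips / 2
    return scores

applicant = {"income": 50.0, "debt": 24.0}
print(feature_importance(black_box_model, applicant))
```

The resulting scores form a simple "explanation" of the prediction: features whose perturbation changes the outcome matter most to this decision. Real explainers such as those in TrustyAI use more principled sampling and weighting, but the underlying intuition is the same.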
The TrustyAI initiative at Red Hat embraces explainability to foster trust in decisions in the area of business process automation, together with runtime tracing of operational metrics and accountability.
About the invited speaker:
Daniele Zonca is the architect of Red Hat Decision Manager and of the TrustyAI initiative, where he contributes to the open source projects Drools and Kogito, focusing in particular on predictive model runtime support (PMML), ML explainability, runtime tracing, and decision monitoring. Before that, he led the Big Data development team in one of the major Italian banks, designing and implementing analytical engines.