
McCombs School of Business Hosts Global Summit on Explainable AI

The Center for Analytics and Transformative Technologies at The University of Texas McCombs School of Business will host its flagship annual conference online Nov. 11-12 to discuss the future of Explainable AI. The global AI summit will focus specifically on the most contentious questions around cracking open the “black box” of artificial intelligence using next-generation data analytics and human intelligence.

This year’s CATT Global Analytics Summit, organized by McCombs School of Business, invites experts from industry and academia to explore the theme, “Explainable AI: Building Trust and Transparency in Data Systems and Machine Learning Models.”

“In many organizations, AI and machine learning tools are becoming more powerful and sophisticated, but the problem is that this level of sophistication can lead to black boxes,” said Michael Sury, CATT managing director and member of McCombs’ finance faculty.

“There can be an opacity to AI processes, and that opacity may not be comfortable for organizational leaders, corporate stakeholders, regulators or customers,” he said.

So, the task of explaining how AI arrives at its decisions — including whether those decisions are trustworthy, reliable, and free of bias — has grown increasingly important for business executives and data scientists alike, Sury said. “If the black box is a barrier to adoption of these powerful techniques, how do we overcome that barrier?”


Helping to answer questions such as these are summit keynote speakers Charles Elkan, professor of computer science at the University of California, San Diego, and former global head of machine learning at Goldman Sachs in New York; and Cynthia Rudin, professor of computer science at Duke University and principal investigator at Duke’s Interpretable Machine Learning Lab.

The summit will also feature talks and panel discussions moderated by McCombs faculty members. Among the nearly two dozen speakers will be Scott Lundberg, senior researcher at Microsoft, speaking on SHAP values, the technique he developed for interpreting the outputs of machine learning models. Krishnaram Kenthapadi, principal scientist at Amazon, will address responsible AI in industry.
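For readers unfamiliar with the technique, SHAP values attribute a single model prediction to individual input features. The sketch below uses the open-source `shap` Python package with an assumed scikit-learn model and dataset; it is an illustration of the general idea, not code from the summit or from Lundberg’s talk.

```python
# A minimal sketch of SHAP-based interpretation, assuming the open-source
# `shap` package and a scikit-learn random forest as the "black box" model.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque ensemble model on a standard public dataset.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes SHAP values: each feature's additive contribution
# to one prediction, relative to the model's expected output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:5])  # shape: (5, n_features)

# Rank the three features that drive the first sample's prediction the most.
top = np.argsort(np.abs(shap_values[0]))[::-1][:3]
for i in top:
    print(f"{data.feature_names[i]}: {shap_values[0][i]:+.3f}")
```

Each printed value is signed: positive contributions push the prediction above the baseline, negative ones below, which is what makes SHAP useful for explaining individual decisions to stakeholders rather than only reporting aggregate model accuracy.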

The topics are broadly applicable across organizations, Sury said: “The implications of AI explainability are sweeping, including in areas like risk management, ethics, compliance, reliability and customer relationship management.”

Last year’s summit, also online, drew more than 1,700 registrants from 50 countries. Registration is free again this year, and registrants will receive a program booklet after the event.

