Putting AI Governance to Work
Pragmatic perspectives on managing algorithmic bias, model explainability, fairness, and trust in industry.
As AI-infused systems become commonplace, the rapid growth of machine learning capabilities and their increasing presence in our lives raise pressing questions about the impact, governance, ethics, and accountability of these technologies.
AI governance is about making AI explainable, transparent, and ethical. How do financial institutions, telecommunications companies, retailers, and media enterprises implement policies that avoid perpetuating bias, apply the principles of trustworthy AI, and deploy their systems in an ethical, inclusive, and accountable manner? Our panel of experts will answer these practical questions with data-driven KPIs to measure progress and tools to equip attendees to narrow the knowledge gap between data science experts and the many people who use, interact with, and are impacted by these technologies.
Today, organizations seek clarity and structure around AI systems as they grapple with harnessing the potential of AI while ensuring that it does not exacerbate existing inequalities and biases, or create new ones. A comprehensive AI governance framework addresses these challenges, alongside model management, data consistency, and algorithmic transparency.
In this session, panelists will outline the tools and technologies, and offer the practical advice, needed to understand and monitor the AI lifecycle. Please join the UST team of experts to explore policy remedies for the complex challenges associated with emerging technologies, operationalize these concepts in a production AI system, and, crucially, create a culture of trust in AI.
Presented by
Maor Ivgi, CTO, Demystify (formerly Stardat), Adnan Masood, Chief Architect, AI and Machine Learning, UST & Heather Dawe, UK Head of Data, UST