AI and the Black Box
August | TBD
What happens when an AI model lies to us about its biases? Dr. Lauren Alvarez will discuss her work on AI explainability and interpretability, including how to deal with malicious AI models.


Time & Location
August
TBD
About The Event
Dr. Lauren Alvarez is an IBM Research Fellow, former Responsible AI Scientist at Apple, and data scientist at WillowTree. Her research focuses on explainability and interpretability in artificial intelligence and on opening up the "black box" of how AI models function.
In particular, she has worked on ethics and trustworthiness problems in AI models. This work included developing the "STEALTH" method for deriving reliable explanations of how an AI model works without alerting the model that it is being probed, which could otherwise lead it to misrepresent its functioning and its tendency toward bias. Such methods have become particularly relevant when assessing the biases of machine learning algorithms used in criminal justice (e.g., COMPAS), medicine, and other fields. Dr. Alvarez's work has been published in IEEE journals and presented at various conferences.