Hinda Haned

Fair, Transparent and Accountable AI

Functional AI

My latest research project focuses on developing a way to measure the functionality of AI and data-driven projects. I am developing a “functionality” label that helps measure how well a given solution can actually tackle a business or societal issue. The Data Analytics Functionality Index (DAFI) is a new tool that empowers organizations to make informed decisions about their data analytics efforts. Similar to the European Nutri-Score system that helps consumers compare the nutritional value of food products, DAFI provides a standardized, visual assessment of a project's functionality and potential impact. You can read more about DAFI on its dedicated website.
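
For intuition only, here is a toy sketch of what a Nutri-Score-style label can look like in code: a few self-assessed project dimensions are combined into one score, which is then mapped to a letter grade. The dimensions, weights, and thresholds below are invented for illustration and are not DAFI's actual methodology.

    # Purely hypothetical sketch of a Nutri-Score-style label: combine a few
    # project dimensions into one score and map it to a letter grade. The
    # dimensions, weights, and thresholds are invented, not DAFI's methodology.
    def functionality_label(data_quality, problem_fit, adoption_readiness):
        """Each input is a self-assessed score between 0 and 1."""
        score = 0.4 * data_quality + 0.4 * problem_fit + 0.2 * adoption_readiness
        for grade, threshold in [("A", 0.8), ("B", 0.6), ("C", 0.4), ("D", 0.2)]:
            if score >= threshold:
                return grade
        return "E"

    # Example: good data, decent problem fit, shaky adoption plan
    print(functionality_label(data_quality=0.9, problem_fit=0.7, adoption_readiness=0.5))  # "B"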

Explainable AI

One of my research interests is developing methods for generating contextualized explanations of algorithmic systems. I particularly focus on the challenges of XAI from a practitioner's perspective. Some of the works I have contributed to on this topic in recent years:

  • To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions. This paper introduces RETRO-VIZ, a method for (i) estimating and (ii) explaining the trustworthiness of regression predictions. It consists of RETRO, a quantitative estimate of the trustworthiness of a prediction, and VIZ, a visual explanation that helps users identify the reasons for the (lack of) trustworthiness of a prediction (ICML HILL workshop 2021).
  • Why does my model fail? How does explaining forecasting errors help users accept and/or trust a model enough to deploy it? Read more on the algorithm for explaining model errors here (FAT* 2020, best student paper award).
  • How well do local explanations approximate the actual global model behavior? We develop a new way of evaluating the usefulness of local explanations in understanding global model behavior; read more here (SIGIR Paris 2019, FACTS-IR workshop). A rough sketch of this question appears right after this list.
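
As a rough illustration of the last question above (and not the method from any of the papers listed), the sketch below fits a LIME-style weighted linear surrogate around a single instance, then compares how faithful that local explanation is to the model near the instance versus over the whole data distribution.

    # Rough sketch, not the method from the workshop paper: fit a LIME-style
    # local linear surrogate around one instance, then compare its fidelity to
    # the model locally and over the full data distribution.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import Ridge
    from sklearn.metrics import r2_score

    X, y = make_regression(n_samples=2000, n_features=5, noise=10.0, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    x0 = X[0]  # instance whose prediction we want to explain
    rng = np.random.default_rng(0)
    # perturb around x0 and fit a proximity-weighted linear surrogate (the "local explanation")
    Z = x0 + rng.normal(scale=0.3, size=(500, X.shape[1]))
    weights = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2)
    surrogate = Ridge(alpha=1.0).fit(Z, model.predict(Z), sample_weight=weights)

    # local fidelity: how well the surrogate tracks the model near x0
    print("local  R^2:", r2_score(model.predict(Z), surrogate.predict(Z)))
    # global fidelity: how well the same local explanation tracks the model everywhere
    print("global R^2:", r2_score(model.predict(X), surrogate.predict(X)))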

Fair AI: bridging theory and practice

What does it mean for an algorithm to be fair? What are the best practices for data scientists to ensure fairness in the models they design and deploy? To discuss such questions, join the conversation in the Fair-AI Slack community: request an invitation.
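
As one concrete (and much-debated) answer to the first question, the toy sketch below computes a demographic parity difference: the gap in positive-prediction rates between two groups. The predictions and group labels are simulated purely for illustration.

    # Toy sketch of one fairness notion, demographic parity: compare the rate of
    # positive predictions across two groups. All data here is simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)  # 0 / 1: two (hypothetical) protected groups
    # simulated model decisions with different selection rates per group
    y_pred = (rng.random(1000) < np.where(group == 1, 0.35, 0.50)).astype(int)

    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    print("selection rates:", rate_0, rate_1)
    print("demographic parity difference:", abs(rate_0 - rate_1))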

Forensic science

Although I no longer work as a forensic statistician, I am still involved in a number of initiatives. My latest contributions:

  • How can machine learning help with the interpretation of complex DNA profiles? Using a simple classifier to determine the number of contributors to a sample can be more efficient than traditional methods such as allele counting or maximum likelihood. Check out a recent paper on this topic here (a toy sketch of the idea follows this list).
  • Forensic Practitioner's Guide to the Interpretation of Complex DNA Profiles is a book led by Prof. Peter Gill that centralizes a number of contributions in the field of complex DNA profile interpretation, supported by open-source software. Two of the chapters of this book relate to work I did during my time at the NFI. The book is available here.
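
The toy sketch below illustrates the idea from the first item: train a simple classifier on per-profile summary features to predict the number of contributors, and compare it with the classic maximum allele count rule. The features are simulated and are not the data or model from the paper.

    # Toy sketch on simulated data (not the paper's data or model): predict the
    # number of contributors (NOC) to a DNA mixture from simple summary features
    # and compare against the maximum allele count rule, ceil(max alleles / 2).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    noc = rng.integers(1, 5, size=n)  # true number of contributors (1-4)
    # hypothetical summary features: noisy functions of the true NOC
    max_alleles = np.clip(2 * noc - rng.integers(0, 3, size=n), 1, None)
    mean_alleles = 2 * noc * rng.uniform(0.6, 0.95, size=n)
    total_alleles = mean_alleles * 20 + rng.normal(0, 5, size=n)
    X = np.column_stack([max_alleles, mean_alleles, total_alleles])

    X_tr, X_te, y_tr, y_te = train_test_split(X, noc, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    rule = np.ceil(X_te[:, 0] / 2)  # maximum allele count rule
    print("classifier accuracy:       ", (clf.predict(X_te) == y_te).mean())
    print("allele count rule accuracy:", (rule == y_te).mean())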

Pay equality

An important aspect of gender equality is Equal Pay for Equal Work. My experience as a data scientist is that there are two main obstacles preventing many organizations from assessing their gender pay gap: 1) poor data quality and 2) a lack of available tools to compute the relevant metrics in a transparent and interpretable way. I am looking into initiating a collaborative effort to demystify pay gap analysis and offer a set of open-source tools and educational material that enable decision-makers to make progress on gender pay equality. If you would like to help out or just have a chat, get in touch! I have made some basic code available here.
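
As a minimal illustration (separate from the code linked above), the sketch below estimates an adjusted pay gap by regressing log salary on a gender indicator plus job-relevant controls; the gender coefficient then gives a transparent, interpretable summary of the remaining gap. The column names and effect sizes are hypothetical.

    # Minimal sketch of an "adjusted" gender pay gap estimate on simulated data:
    # regress log salary on a gender indicator plus job-relevant controls; the
    # coefficient on the indicator is the gap that the controls do not explain.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "female": rng.integers(0, 2, size=n),
        "years_experience": rng.uniform(0, 20, size=n),
        "job_level": rng.integers(1, 6, size=n),
    })
    # simulated salaries with a built-in 5% unexplained gap
    df["log_salary"] = (10 + 0.03 * df["years_experience"] + 0.15 * df["job_level"]
                        - 0.05 * df["female"] + rng.normal(0, 0.1, size=n))

    model = smf.ols("log_salary ~ female + years_experience + C(job_level)", data=df).fit()
    print(model.params["female"])          # close to -0.05: adjusted gap in log points
    print(model.conf_int().loc["female"])  # confidence interval for the gap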

For an up-to-date list of publications, see my Google Scholar page.