Hi! Paris Webinar Debate on Bias and Data Privacy
Six months after opening, France’s new scientific research hub, Hi! Paris, launched its first-ever series of webinars with an uncompromising debate on the challenges of bias in machine learning algorithms and on the contentious issue of data privacy*. Four top researchers from HEC Paris and IP Paris shared their latest explorations into AI bias and data privacy. They were joined by specialists whose on-the-ground applications of this research gave further depth to the three-hour exchange. The webinar centered on possible solutions and practices promoting fairness and transparency in AI systems.
When asked what Hi! Paris students should learn first and foremost about AI and the algorithms that drive it, Isabelle Falque-Pierrotin did not hesitate: “To develop a critical mind!” she said with a laugh, before elaborating: “Students must not take these AI tools for granted; they should be encouraged to question them. They are complex and not above criticism, so knowing the intricacies of machine learning, AI and ethics is critical for students, because the impact of these factors on their professional lives is overwhelming.”
Falque-Pierrotin should know: she was president of France’s National Commission on Informatics and Liberty, CNIL, between 2011 and 2020, a decade in which AI bias and privacy issues exploded into the public arena. It therefore seemed natural for the new interdisciplinary center Hi! Paris to focus its first-ever webinar on topics which have become central to decision-making at all levels of society. “From banks to love and freedom, algorithms are being used to drive life-changing policy,” said the first of the four keynote speakers, HEC Professor Christophe Pérignon. “Algorithms are seen as neutral, eliminating sexism, racism, religious bias and other human prejudices. However, our research shows that reality can be quite different. AI can, in fact, systematically treat unfavorably a group of individuals sharing a protected attribute such as gender, age or race.” Pérignon cited several incidents, including the well-known case of the Apple credit card which, in 2019, appeared to offer smaller lines of credit to women than to men. He and fellow researchers Christophe Hurlin and Sébastien Saurin have just published an important paper on the fairness of credit scoring models.
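To make Pérignon’s point concrete, here is a minimal, purely illustrative sketch of the kind of fairness audit he describes: comparing approval rates across a protected attribute in a credit-scoring setting. The group labels and figures below are our own assumptions, not results from his paper.

```python
# Hypothetical illustration (not from the cited paper): checking whether a
# credit-scoring model approves one group less often than another, i.e. a
# simple "demographic parity" gap across a protected attribute.
import numpy as np

rng = np.random.default_rng(0)

# Simulated decisions: 1 = credit approved, 0 = refused, plus a protected
# attribute (here "gender", purely for illustration).
gender = rng.choice(["F", "M"], size=10_000)
approved = rng.binomial(1, np.where(gender == "F", 0.62, 0.71))

rate_f = approved[gender == "F"].mean()
rate_m = approved[gender == "M"].mean()

print(f"approval rate (F): {rate_f:.3f}")
print(f"approval rate (M): {rate_m:.3f}")
print(f"demographic parity gap: {abs(rate_f - rate_m):.3f}")
# A large gap suggests the model systematically treats one protected group
# less favorably, which is exactly what fairness audits aim to detect.
```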
Télécom Paris researcher Stéphan Clémençon concurred with Pérignon’s diagnosis. In the course of his 15-minute talk, he said that the success of predictive algorithms “cannot be guaranteed solely by the massiveness of the information at one’s disposal.” He called for rigorous control of the conditions in which data is acquired, as well as for fairness constraints on all automated rules. “My research shows that it is not easy to guarantee that algorithms and/or the data feeding them have no bias. In addition, reweighting data in facial recognition to correct possible selection bias may not be enough to improve the performance of certain systems for specific strata of the population, for instance. Fairness constraints must be incorporated into the algorithms in certain situations, and one then has to wonder whether there is a satisfactory trade-off between accuracy and fairness.”
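A simplified illustration of the reweighting idea Clémençon refers to might look like the sketch below: samples from an under-represented group receive proportionally larger weights so that the training data reflects the target population. The group names and target shares are invented for the example, not drawn from his research.

```python
# Minimal sketch of importance reweighting to correct selection bias:
# upweight samples from groups that are under-represented relative to the
# population we actually care about. All names and shares are illustrative.
import numpy as np

groups = np.array(["A"] * 900 + ["B"] * 100)   # biased sample: group B under-represented
target_share = {"A": 0.5, "B": 0.5}            # shares the weighted sample should reflect

observed_share = {g: np.mean(groups == g) for g in target_share}
weights = np.array([target_share[g] / observed_share[g] for g in groups])

# After reweighting, each group contributes equally to a weighted loss
# (e.g. passed as sample_weight to a scikit-learn estimator).
for g in target_share:
    print(g, weights[groups == g].sum())       # both totals come out equal
```

As the quote notes, such reweighting may still not be enough; in some situations explicit fairness constraints must be added to the learning objective, at a possible cost in accuracy.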
Consumer Privacy Challenged
The complexity of de-biasing algorithms has a counterpart in preserving the privacy of consumers. “A simplistic view on consumer privacy does not work,” noted Ruslan Momot, HEC Paris Assistant Professor of Operations Management. Momot, speaking from pre-dawn California, has been researching consumer privacy preservation in social networks, platforms and marketplaces for years. His work points to the fact that more and more corporate policies, from the huge GAFA conglomerates to smaller luxury firms, are data-driven. Firms of all kinds are collecting massive amounts of consumer data which, says Momot, may be leaked or misused. “What are the measures that both companies and regulators can undertake to preserve consumer privacy?” Abuses of confidence have become notorious: some have swayed elections (the Cambridge Analytica scandal), others have exposed consumers to fraud or identity theft (the Equifax and Marriott breaches). But there are solutions, Momot continued: “liability fines and data collection taxes”.
The last of the four speakers, Catuscia Palamidessi of Ecole Polytechnique/INRIA, also warned of the risks of privacy violations and unfair decisions. She suggested a hybrid approach that could mitigate these threats, built on local differential privacy (LDP) mechanisms. Palamidessi has been working on this hybrid model with companies seeking the best trade-off between privacy and utility.
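As a rough illustration of how a local differential privacy mechanism works, here is the textbook randomized-response scheme, in which each user perturbs their own answer before sharing it and the aggregator corrects for the noise. This is a generic sketch of the privacy/utility trade-off, not Palamidessi’s hybrid model.

```python
# Randomized response: a classic local differential privacy (LDP) mechanism.
# Each user reports a possibly flipped version of their true bit; the
# aggregator can still recover an unbiased estimate of the population mean.
import math
import random

def randomize(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

def estimate_mean(reports: list, epsilon: float) -> float:
    """Unbiased estimate of the true proportion of 1s from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

true_bits = [1] * 3000 + [0] * 7000      # true proportion of 1s is 0.30
epsilon = 1.0                            # smaller epsilon = more privacy, more noise
reports = [randomize(b, epsilon) for b in true_bits]
print(f"estimated proportion: {estimate_mean(reports, epsilon):.3f}")
```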
The hour-long series of presentations was followed by a talk from Bertrand Pailhès, France’s former national coordinator for AI. There was then a lively panel discussion involving Isabelle Falque-Pierrotin, Télécom SudParis Assistant Professor Nesrine Kaanich and Capgemini VP of Data Science & Engineering Moez Draief. The consensus among them was that converging interests call for a diversity of professional profiles. “AI is not only the preserve of data scientists,” said Kaanich. “Companies are calling for graduates with transversal interests and specialties. They must pool their expertise.”
Putting Practice Before Theory
Meanwhile, Moez Draief (a former researcher now working for Capgemini, one of Hi! Paris’s corporate donors) urged students to learn by doing: “This is a priority when working in AI, and Anglo-Saxon universities have understood this. In France, great attention is paid to academic theory and a rigorous understanding of principles. Unfortunately, students here don’t see how applicable this theory is until they begin working. French universities are slowly evolving towards the practical applications of AI, in which students learn about ethics, bias and so on. These factors are dominant because so much can go wrong, so it’s good to experiment while studying rather than in a work situation, where the pressure to deliver becomes so great that you make mistakes.”
Parallel to the webinars, Hi! Paris continues to actively recruit PhD students, create fellowships, drive collaborative projects on fundamental methods for AI and organize events such as a hackathon scheduled to begin on March 12. The theme of the four-day event is AI for energy efficiency. And there will, of course, be other webinars: the next one will be held in April, on the topic of AI in healthcare. Stay tuned!
* Support for the program is provided by its founding corporate sponsors L’Oréal, Capgemini, Total, Kering and Rexel.