Responsible AI talks

Author: Giuseppina Schiavone

A few weeks before the start, Andreea Moga invited me and other members of TechLabs Rotterdam (Morraya Benhammou, Marvin Kunz, Rosaline Pahud de Mortanges, Paulo Mota, Gaspard Bos) to attend the World Summit AI 2022. My attendance at the summit was incredibly insightful.

The program included several tracks and workshops: innovation in action with a number of inspiring start-ups, the metaverse, responsible AI, the next generation of NLP and conversational AI, smart cities, deep learning and autonomous driving, talent recruitment and workplaces, finance and cybersecurity, space exploration and satellite imaging, AI for good challenges and workshops, and much more.

In this post, I share some of my notes and takeaway messages from my journey through the summit.

I attended the talks related to responsible AI. Responsible AI is the underlying thread that any developer, seller, or consumer of AI systems should be aware of, primarily in light of the recent publication of the European Union AI Act (AIA), which aims to establish the first comprehensive regulatory scheme for AI products and services.

On the 12th of October, the track on Responsible AI was beautifully chaired by Cathy Cobey (Global Trusted AI Leader, EY). It brought up concrete examples of how organizations are setting up strategies to adhere to the trustworthy AI principles outlined in the AIA. The session discussed how the development of responsible AI is moving from awareness and principles to practice, and spelled out the need for methodologies with clear specifications and requirements to demonstrate the impact of AI systems. It showed the necessary shift from the most accurate algorithms to the fairest algorithms, and the need for regulations in testing and deployment, by analogy with the testing and distribution of pharmaceutical products.

In this context, I very much liked the expression used by Mark Surman of the Mozilla Foundation about ‘designing new seatbelts for AI platforms’. Mark presented the work that Mozilla is doing to make sure that social media platforms such as YouTube and TikTok operate in a transparent way with respect to user recommendations and shared content. He proposed several actions for improvement, including making user controls more accessible, making the feedback system more effective, and making it more visible to users how their feedback is accounted for in the recommendation engine. The work that organizations like Mozilla are doing to monitor how commercial social media AI platforms operate is extremely relevant, considering how powerful and influential these platforms are: think of their effects on political orientation, cultural and gender discrimination, and education. External and independent auditing is fundamental to ensure that service providers truly adhere to the AIA and take serious action to preserve users’ integrity.

In the same session, a quite dramatic debate was brought up: lethal autonomous weapon systems, or so-called killer robots. Ioana Puscas, researcher at UNIDIR, gave a definition of killer robots: a new type of weapon that, once launched, is able to act autonomously following predefined sensory signatures (such as heat, acoustics, or visuals). Such systems are currently employed mostly in defense operations. While AI has long been used in military operations for detection purposes (e.g. automatic target recognition, ATR) or in navigation systems, the extension of AI applications to autonomous decision-making in military contexts is increasingly alarming. Legitimate questions of responsibility and accountability arise from this development, which poses an undoubted threat to human rights. While UNIDIR is there to flag these risks (also considering proliferation from armies into illegal weapons trafficking), humanitarian organizations such as Amnesty International, Human Rights Watch, and more than 100 others launched the Campaign to Stop Killer Robots in 2013 to encourage governments to create laws to ban and regulate the use of killer robots ‘while we still can’. So far about 66 countries have embraced the campaign, yet a number of powerful states, including Russia, Israel, and the US, continue to use and invest in the development of autonomous weapons and consider the creation of new international law premature. Verity Coyle, as Amnesty International representative, called on the audience to stand up for human rights and support the campaign.

Cathy Cobey continued by giving her critical perspective on the AIA and emphasized the difficulty of moving from trustworthy AI principles to action. The AIA follows a risk-based approach, according to which the development of high-risk products and services will require strong governance and high-quality validation procedures. Nevertheless, how to get the risk assessment right remains an open question. Cathy Cobey proposed a trust-by-design approach, along the lines of the privacy-by-design practice, in which greater focus is given to problem definition, for example by changing the objective function in the model training phase to one that optimizes both for an accuracy target and for fairness principles. She suggested a complementary terminology to AI for good: AI for better. She talked about bias in data and about tackling sensitive variables through staged monitoring instead of variable removal; she used the term ‘narrow AI’ to indicate that boundary conditions for AI systems should be defined, as should methodologies to predict how AI systems are going to fail. She mentioned that today, as they become aware of the possible dark side of AI systems, CEOs are starting to be concerned about reputation and investment, and that it is necessary to keep working on ways to increase trust in AI systems, for example by starting with low-risk and high-value applications with a clear return on investment, and by looking for AI champions.
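To make the idea of an objective function that optimizes for both accuracy and fairness a bit more concrete, here is a minimal sketch. The function name and the specific fairness term (a demographic-parity gap between two groups) are my own illustrative assumptions, not the approach presented by any speaker:

```python
import math

def fairness_aware_loss(y_true, y_prob, group, lam=1.0):
    """Hypothetical combined objective: accuracy term plus fairness penalty.

    y_true: true binary labels (0/1)
    y_prob: predicted probabilities
    group:  group membership label per sample (e.g. a sensitive attribute)
    lam:    weight trading off fairness against accuracy
    """
    eps = 1e-12
    # Accuracy term: standard binary cross-entropy (log loss)
    bce = -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
               for y, p in zip(y_true, y_prob)) / len(y_true)
    # Fairness term: demographic-parity gap, i.e. the spread in the
    # mean predicted positive rate across groups
    rates = {}
    for g in set(group):
        ps = [p for p, gg in zip(y_prob, group) if gg == g]
        rates[g] = sum(ps) / len(ps)
    gap = max(rates.values()) - min(rates.values())
    return bce + lam * gap
```

With `lam=0` this reduces to plain log loss; increasing `lam` penalizes models whose predicted positive rates differ across groups, which is one simple way to bake a fairness principle into the training objective rather than checking it only after the fact.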

This was followed by the very detailed and structured talk of Francesca Rossi, IBM Fellow and AI Ethics Global Leader. She talked about how AI ethics came to be as AI became more and more pervasive; about the pillars of AI ethics (data privacy and governance, fairness, inclusion, explainability, transparency, accountability, social impact, human and moral agency, social good uses, environmental impact, power imbalance); about the development of AI ethics over time, from awareness and principles to action; and about AI ethics in practice within companies, standards bodies, educational institutions, and governments. She showed how the field of AI ethics has a wide range of returns on investment: it influences company values, trust, reputation, client retention and acquisition, and media coverage; it acts as a differentiator; it takes social justice and equity into consideration; it demonstrates compliance with regulation; and it opens up new business opportunities. Francesca Rossi demonstrated how AI ethics is integrated into IBM operations at different levels.

What I most liked to hear from Toby Walsh, professor of artificial intelligence at the University of New South Wales, who came on stage to present his new book “Machines Behaving Badly: The Morality of AI”, was the term ‘ethics washing’.
Soon after him came the talk of Linda Leopold, Head of Responsible AI & Data at global fashion retailer H&M Group, who focused on the power of storytelling in ‘bringing ethics to light’. She talked about a new format that her department has introduced within the company, the ‘debate club’, which aims to stimulate awareness of AI ethics both among employees and with customers. The AI team at H&M Group was founded in 2018 and works in close collaboration with the sustainability department. I was really surprised to see H&M Group at the World Summit AI, and even more so within the AI ethics track. I would have liked to hear more about their approach to AI ethics in recommender systems, supply chain traceability, and corporate social responsibility (for example, which AI- and data-related processes do they have in place that enable them to score high on the fashion transparency index?). My surprise also stems from the recent scandal in which H&M Group was sued for ‘greenwashing’, accused of engaging in false advertising about the sustainability of its clothing.

A final discussion panel with Layla Li (CEO, KOSA AI), Jeroen van den Hoven (Professor of Ethics and Technology, Delft University of Technology), Blanca Escribano Cañas (Digital Law Partner, EY Law), Nanda Piersma (Scientific Director, HvA Centre of Expertise Applied Artificial Intelligence, Hogeschool van Amsterdam), and Safiya Umoja Noble (Professor, UCLA, Co-Founder and Co-Director, UCLA Center for Critical Internet Inquiry) was set up to discuss when AI is transparent enough. Transparency concerns both developers and users; it should cover how a system is made, how it works, what effects it produces, and how it can be controlled; and it should be linked to agency, enabling users to act upon a system’s behavior.

On the 13th of October, I had the opportunity to follow talks and panel discussions on AI in sustainability. I found particularly relevant the fireside chat with Gavin Starks (CEO, Icebreaker One) and Fredrick Royan (Vice President/Global Leader – Sustainability and Circular Economy, Frost & Sullivan), moderated by Carl Pratt (Founder and Director, FuturePlanet), with the compelling title “We won’t get to net zero without artificial intelligence”. The complexity of sustainability can only be tackled with AI: that complexity originates not only from the various systems involved (environment, society, government) but also from the amount of data required in the decision-making process, which is intractable for human operators. Gavin Starks talked about the need to build a trust framework for sharing information at large scale, pointing to the Icebreaker One foundation, which aims to become a web of net-zero data enabling secure and scalable non-financial reporting and data flows. This approach intends to resolve the current situation of data stored in silos, which impedes the understanding of problems at a global scale, delays the definition of standards, and provides fertile ground for greenwashing. Gavin Starks’s work is primarily oriented to helping cities reach their net-zero targets (60% of emissions are produced by cities). In addition, Fredrick Royan reinforced the 4P principles (profit, people, planet, and partnership), particularly stressing partnership as the key to reaching sustainability goals. Fredrick Royan also moderated the panel “The Water Crisis, Risk, Leakage + AI”, which reminded us how fundamental water is to our lives and how intense the power of water can be. The analogy “If climate change is the shark, then water is its teeth” by the Canadian hydrogeologist James P. Bruce will stick in my mind forever.

Another panel discussion I followed was titled “21st Century Industry: Reducing Waste and Minimising Supply Chain Impact”, with representatives from the computer industry (Christina Geierlehner, Sustainability Manager Benelux and Northwest Europe Market Lead, HP), agriculture (Sophia Savvides, Leader – Digital Ventures for Sustainability, Cargill), fashion (Alexander Kaunas, CTO, POMPOM), and academia (Ralf Herbrich, Professor, Hasso-Plattner Institute); the panel was moderated by Maria Morais (Strategy Director – Consumer Industries EMEA, SAP + Chair, Circklo). Unfortunately, I found the discussion a bit shallow, with some mentions of recycling materials and improved packaging (Christina Geierlehner), monitoring through AI and imaging for the supply chain (Sophia Savvides), and trajectory optimization to address transport emissions (Alexander Kaunas). Interestingly, Christina Geierlehner from HP also stressed the need to educate consumers to make more responsible choices and to think about recycling and value retention in products within the circular economy framework.

At the end of the summit, the icing on the cake was the remote interview with Peter Norvig (Distinguished Education Fellow at the Stanford Institute for Human-Centered AI), who summarized the history of AI, starting from the rigorous pursuit of the best algorithms, through the explosion of big data and the rise of deep learning, to today’s research on objective functions that are optimal for both accuracy and fairness. It was comforting to hear the positive stance of one of the pioneers of AI towards the development achieved so far, particularly with respect to NLP and conversational AI. Most importantly, I was glad to sense the shared concern about the detrimental and dangerous effects that the wrong use of AI systems can produce, and the need to establish boundaries and frameworks within which AI systems can safely operate.

Worth mentioning

Worth mentioning is the work of two young companies showcasing at the summit:

  • Zetane develops software and services for testing and improving AI models for high-risk operations, moving one step closer to AI system quality certification
  • Leafclean is challenging the big cloud service providers with greener cloud service solutions: they install cloud centers at the bottom of buildings and reuse the heat produced by the servers to warm the buildings’ water pipes, resulting in 10x better energy efficiency compared to typical data centers


Under the radar