Ethical principles for trustworthy Artificial Intelligence

The digital transformation, driven in particular by Artificial Intelligence (AI), is impacting our daily lives and our work environment. Adopting an ethical framework for AI is therefore necessary to ensure a responsible approach.

The Michelin group wants the use of Artificial Intelligence to be fully in line with its "All Sustainable" approach:
People x Profit x Planet.


Definition

AI is generally understood through its concrete application in an AI system, hereinafter "AI" or "AI system".

The "AI system" is defined by the European Regulation of 13 June 2024 as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".

AI encompasses several technologies and fields of application that are based on models or algorithms.


Examples:

- Generative AI: creation of content (text, image, video, sound, etc.) from a prompt;

- Predictive AI: prediction of an outcome based on historical data;

- Recognition and classification AI: visual object recognition, image classification, detection of anomalous behavior, etc.

Opportunities

AI has undergone significant developments in recent years (complex algorithms, generative AI based on large language models) and is transforming the way companies use data. AI applications are numerous and span various fields (transport, health, marketing, public services, etc.).

Indeed, AI brings opportunities both for the company (e.g. innovative products and services, productivity gains, detection of non-conformities, etc.) and for people (decision support, improvement of well-being at work, inclusion of people with disabilities, etc.).

Michelin has a long tradition of innovation and of environmental and societal commitment, and intends to continue to play a major role in a more sustainable future for all.

 

Risks

Beyond the opportunities it offers, AI also presents proven or potential risks, both for companies and for society at large, through:

- The design and training of its models: possible biases in training datasets that can lead to discrimination, poor data quality and/or data pollution, loss of confidentiality of personal data, etc.

- Its use: opacity of the AI system's functioning, instability of its performance (e.g. hallucinations), impact on the health and well-being at work of people using AI systems, uncritical trust in the systems' outputs, loss of free will and control, sharing of data covered by trade secrets or intellectual property, infringement of competition law, etc.

- Societal and environmental concerns: deepfakes, manipulation of populations through the dissemination of false information, use of AI systems for mass surveillance of populations via smart cameras, profound transformation of certain professions, energy-intensive technology, etc.

Collective efforts by both public and private actors are needed to ensure trusted AI systems.

Framework

The challenge is to reconcile the use of AI systems with the Group's values while ensuring compliance with applicable regulations.

Michelin intends to anticipate and control the risks arising from the development and deployment of AI systems within the Group, and to ensure that the innovation driven by these systems remains in line with our values.

This reconciliation is the sine qua non for the creation of an environment that offers everyone a better way forward.


Principles

To ensure that AI systems are used in a way that creates both value and trust, Michelin is committed to using and developing AI systems according to three fundamental principles.

These three principles complement the Group's commitments in particular with regard to the environment and the security of information systems.

They are intended to evolve over time to take into account regulatory changes, technological progress, impact analyses, and the expectations of the Group's employees and stakeholders.

Principle 1: Human-centric AI
AI systems must be designed and used in such a way as to be at the service of humans and respect their fundamental rights and freedoms (dignity, self-determination, privacy, equity, non-discrimination, etc.).

To achieve this, the Group relies on the following elements:

Human supervision: any AI system will be designed and used with an adequate level of human judgment and supervision (e.g. prohibiting any decision likely to have a significant impact on a person, such as a dismissal or promotion, from being based exclusively on the decision of an AI system).


Safeguarding the health and well-being of employees: the AI systems we design and use must be reliable, safe, and secure throughout their life cycle, so that they meet their objectives without posing unacceptable risks to people, in particular to their health and well-being (loss of autonomy, disempowerment, misinformation, etc.).

Respect for personal data: human-centric AI also requires appropriate controls on the personal data processed, to prevent privacy breaches or loss of data confidentiality and to ensure the quality, integrity and relevance of the data in relation to the purposes for which they are processed.

Non-discrimination and equity: The AI systems we deploy or acquire must be configured in an appropriate and proportionate manner to avoid bias, discrimination or reproduction of stereotypes.
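As an illustration of the non-discrimination commitment, the following is a minimal sketch of one common fairness heuristic, the "four-fifths rule" on selection rates; the groups, decisions and threshold below are hypothetical examples, not Michelin tooling:

```python
# Illustrative sketch (hypothetical data): compare the rates of favourable
# decisions between two groups and flag a large gap for human review.

def selection_rate(outcomes):
    """Fraction of favourable decisions (1) in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    A common heuristic flags values below 0.8 for review."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = sorted([rate_a, rate_b])
    return low / high if high > 0 else 1.0

# Hypothetical decisions (1 = favourable) for two applicant groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]   # selection rate 0.7
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # selection rate 0.3

ratio = disparate_impact_ratio(group_a, group_b)
flagged_for_review = ratio < 0.8
```

Such an indicator does not prove or disprove discrimination on its own; it is one proportionate control that can trigger a closer human examination of the system and its training data.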

Awareness and training: awareness of the issues related to the responsible development or deployment of AI is essential. The development of employees' skills and career paths must be supported in order to build confidence in these technologies, to fairly distribute the benefits of AI within Michelin, but also to allow everyone to protect themselves against the risks it presents (such as overconfidence in AI-generated content, loss of key skills, etc.).

By integrating the use of AI within the Group, Michelin is committed to preserving the well-being and rights of individuals through an adapted organization and an assessment of its potential negative impacts.

Principle 2: Transparent and explainable AI
Michelin Group provides training to ensure that users of AI solutions have the fundamental knowledge, such as understanding the basic functions of AI, its practical implications and ethical considerations.

For operational and technical teams, training is tailored to their roles and responsibilities so that any AI system developed or deployed within the Group is sufficiently transparent and explainable to ensure user confidence.


Transparent means:
- Provide an explanatory sheet for AI decision-support systems specifying their framework and limits of use;
- Ensure that people using AI are properly informed, so that they can identify possible biases or errors and make decisions responsibly;
- Inform people, in a suitable format, when they interact with an AI system or are exposed to certain AI-generated content.

Explainable means:
- Be able to describe how the AI system produced a particular result, a posteriori or in real time depending on the context;
- Propose an AI system whose results are intelligible (at least interpretable) and reproducible;
- Prioritize AI systems that come with the means to understand how they work, such as explanatory notes. The level of explanation must depend on the context and on the severity of the consequences if the result is erroneous or imprecise (e.g. a tourist itinerary proposed by a chatbot in the ViaMichelin application has less impact than a vital medical prognosis).
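As an illustration of an intelligible result, the following is a minimal sketch of a linear score whose per-feature contributions can be read directly; the model, weights and feature names are hypothetical, chosen only to show the idea:

```python
# Illustrative sketch (hypothetical model): a linear score where each
# feature's signed contribution is reported alongside the result, making
# the output interpretable a posteriori.

WEIGHTS = {"mileage_km": -0.002, "tread_depth_mm": 1.5, "age_years": -0.8}
BIAS = 5.0

def score_with_explanation(features):
    """Return the score plus each feature's signed contribution to it."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    total = BIAS + sum(contributions.values())
    return total, contributions

score, why = score_with_explanation(
    {"mileage_km": 40000, "tread_depth_mm": 4.0, "age_years": 3})

# Contributions can be sorted by magnitude to show the dominant factor
dominant_factor = max(why, key=lambda k: abs(why[k]))
```

Simple, inherently interpretable models of this kind are one option; when more complex models are used, equivalent explanatory information has to be produced by other means, with a depth proportionate to the stakes of the decision.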

Transparency and explainability must be balanced against other requirements, in particular intellectual property and protection against piracy.

Principle 3: Accountability
Michelin is committed to ensuring the proper functioning of the AI systems developed and deployed and to putting in place appropriate governance to supervise and manage the use of AI within the Group.

This governance ensures that the risks identified are known and controlled and that the AI systems comply with the Group's ethical principles.

58304886_xxl (002)

In particular, when developing AI systems, Michelin is committed to:
- Perform tests to improve the quality of results (e.g. human supervision during the training phase);
- Verify and demonstrate that the precautions taken reduce potential risks to an acceptable level;
- Take into account multiculturalism, multilingualism and the diversity of teams;
- Verify that systems effectively perform their intended purpose (success indicators, etc.);
- Ensure that these systems have a sufficient level of legal compliance, robustness and security (e.g. demonstration of compliance, compliance with standards, etc.).
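One way to make the "verify the intended purpose" commitment concrete is a pre-release quality gate that measures a success indicator against a minimum threshold; the data, metric and threshold below are hypothetical, shown only as a sketch:

```python
# Illustrative sketch (hypothetical threshold): block deployment when a
# success indicator (here, accuracy on a held-out test set) falls below
# an agreed minimum.

def accuracy(predictions, labels):
    """Fraction of predictions matching the reference labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def release_gate(predictions, labels, min_accuracy=0.9):
    """Return (passed, measured accuracy) for a go/no-go decision."""
    acc = accuracy(predictions, labels)
    return acc >= min_accuracy, acc

# Hypothetical held-out test set
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]

passed, acc = release_gate(preds, labels)
```

In practice the indicators, thresholds and datasets would be defined per system by the responsible person, and the check repeated over the system's life cycle, since performance can drift after deployment.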

When Michelin deploys AI systems, it entrusts the supervision of them to a responsible person with the necessary skills, training and authority to ensure compliance with the Group's ethical principles and applicable regulations.

Michelin is also committed to examining, where appropriate, ethical dilemmas and trade-offs within the framework of cross-functional and appropriate governance.

DO: I MUST

Each employee must contribute to the respect of the three principles set out above and must:

  • Follow the mandatory training courses defined by the Group
  • Respect the conditions of use, rules and processes defined when setting up a new AI tool
  • Make relevant and reasoned use of the AI systems provided by Michelin (questioning upstream the "why", i.e. the added value that AI would bring compared to a solution without AI, and the "for whom", i.e. the expected benefits for employees or any other external partner of the Group, such as customers or suppliers).

Specific requirements apply to employees depending on their role and responsibilities in the development or provision of AI systems:

  • Provide transparent, verifiable and explainable AI systems, to help users of these systems make informed choices (explanatory notes, etc.) and to ensure the traceability of decisions.
  • Deliver robust AI systems so that they function properly for the purpose for which they were intended.
  • Implement secure AI systems, for which the security and confidentiality of data are ensured throughout their life cycle, taking into account the risks that AI can accentuate (dissemination of data due to overly broad access rights, protection of trade secrets, misclassification of documents, etc.).
  • Ensure traceability of the data, processes and decisions made during the life cycle of the AI system.
  • Designate a person responsible for each AI system made available, accountable for its performance, accuracy and impact on the Group's results.
  • Provide accurate documentation for each AI system, including its framework and limits of use.
  • Identify the risks related to each AI system, monitor them, and implement preventive and corrective actions.
  • Design AI systems to be sustainable, optimizing the use of resources to reduce our carbon footprint.
  • Promote diversity in the teams working on AI in terms of profiles, skills and experience, the best safeguard against bias and ethical risks in general.

DON'T: I MUST NOT

Each employee must not:

  • Implement or acquire AI systems whose use cases are contrary to the Group's ethical values and principles or prohibited by regulations (e.g. AI aimed at manipulating or deceiving).
  • Acquire AI systems without first ensuring, or requiring guarantees from the supplier (contractual, certifications, conformity assessment, etc.), that existing regulations are complied with (intellectual property, confidentiality, etc.).
  • Rely entirely on an AI system in a decision-making process that has a significant impact on an employee (e.g. job change, promotion, etc.).
  • Reuse results proposed by an AI system without first checking them for errors (meeting minutes, briefing notes, etc.).
  • Implement AI systems that carry a risk of disseminating confidential or personal information (e.g. geolocation data).

Whom to contact?

For any questions relating to the ethical issues of AI, you can contact the Corporate Legal Department / AI Ethics and Compliance (DCJ) or the AI Transformation Department (DOTI/DAI).