The Ethics Guidelines for Trustworthy Artificial Intelligence and the trust requirements imposed on Artificial Intelligence (AI)

The Ethics Guidelines for Trustworthy Artificial Intelligence (AI) is a document drafted by the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent expert group set up by the European Commission in 2018 as part of its strategy on artificial intelligence.

The parties interested in creating such AI systems should be aware of the key requirements that AI technologies must observe. These requirements derive from fundamental rights and ethical principles and apply to all stakeholders taking part in the life cycle of AI systems, namely developers (those researching, designing and/or developing AI systems), implementers (public or private entities using AI systems within their business processes and to supply products or services to third parties) and end users (those engaging with the AI system directly or indirectly), as well as society in general (any other parties directly or indirectly affected by AI systems).
Each of the parties mentioned above has obligations with respect to ensuring that the requirements are observed, as follows:

  • Developers must implement and apply the requirements in their design and development processes;
  • Implementers must ensure that the systems they use and the products and services they offer observe the requirements;
  • End users and society in general should be informed about these requirements and should be able to request that they be observed.
The list of requirements published in the Guidelines is not exhaustive; it covers systemic, individual and societal aspects:
  1. Human agency and oversight – AI systems should support human autonomy and decision-making, as prescribed by the principle of respect for human autonomy. This requires that AI systems act as enablers for society, supporting user agency, fostering fundamental rights and allowing for human oversight;
  2. Technical robustness and safety (including resilience to attacks and security, a fallback plan and general safety, accuracy, reliability and reproducibility) – this requirement is a crucial component of trustworthy AI and is closely linked to the principle of prevention of harm. Technical robustness requires that AI systems be developed with a preventive approach to risks and that they behave as intended, minimising unintentional and unexpected harm and preventing unacceptable harm. This should also apply to changes in the operating environment and to the presence of other agents (human or artificial) that may interact with the system in an adversarial manner. Moreover, the physical and mental integrity of humans must be ensured through legal provisions;
  3. Privacy and data governance (including respect for privacy, the quality and integrity of data, and data access) – privacy, a fundamental right particularly affected by AI systems, is closely linked to the principle of prevention of harm. Preventing harm to private life also requires adequate data governance that covers the quality and integrity of the data used, its relevance in light of the domain in which the AI system will be deployed, the data access protocols and the capability to process data in a manner that protects privacy;
  4. Transparency (including traceability, explainability and communication) – this requirement is closely linked to the principle of explicability and covers the transparency of the elements relevant to an AI system: the data, the system and the business models;
  5. Diversity, non-discrimination and fairness (including the avoidance of unfair bias, accessibility, universal design and stakeholder participation) – in order to achieve trustworthy AI, we must enable inclusion and diversity throughout the entire life cycle of the AI system. Besides considering and involving all affected stakeholders throughout the process, this also entails ensuring equal access through inclusive design processes, as well as equal treatment. This requirement is closely linked to the principle of fairness;
  6. Societal and environmental well-being (including sustainability and environmental friendliness, social impact, society and democracy) – in line with the principles of fairness and prevention of harm, society in general, other persons and the environment should also be considered stakeholders throughout the life cycle of the AI system. The sustainability and ecological responsibility of AI systems should be encouraged, as should AI solutions addressing domains of global concern, such as the Sustainable Development Goals. Ideally, AI systems should be used to the benefit of all human beings, including future generations;
  7. Accountability (including auditability, minimisation and reporting of negative impact, trade-offs and redress) – this accountability requirement complements the requirements above and is closely linked to the principle of fairness. Mechanisms need to be established to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use.

All of the above requirements are equally important, support each other and should be implemented and evaluated throughout the entire life cycle of the AI system.
While most of the requirements apply to all AI systems, special attention is given to those that directly or indirectly affect individuals. For some applications, such as those in the industrial field, certain requirements may therefore be of lower relevance.
The above requirements also include elements that are in some cases already reflected in existing legislation. In line with the first component of Trustworthy AI (lawful AI), it is the AI practitioner's responsibility to ensure compliance with the horizontally applicable regulations as well as with the regulations specific to their field.