Add ethics1 AI4People + human rights
@ -8,7 +8,8 @@
\begin{document}

\makenotesfront

\include{./sections/_morality.tex}
\include{./sections/_trustworthiness.tex}
\include{./sections/_human_rights.tex}

\end{document}
src/year2/ethics-in-ai/module1/sections/_human_rights.tex (new file, 60 lines)
@ -0,0 +1,60 @@
\chapter{Human rights}

\begin{description}
    \item[Human rights] \marginnote{Human rights}
        Human rights, primarily intended as ethical demands, are related to individual freedoms. They can be negative liberties (i.e., they require non-interference from third parties) or positive liberties (i.e., they require active provisioning).

        \begin{remark}
            Not all human rights are necessarily laws, as they might not always be legally enforceable.
        \end{remark}
\end{description}



\section{List of rights}

\begin{description}
    \item[Freedom and dignity]
        AI systems could undermine this right if used for surveillance, profiling, automated assessment, manipulation, or interference.

    \item[Right to equality and non-discrimination]
        Digitally disadvantaged individuals might be excluded from accessing AI systems or be exploited. AI systems themselves can be biased and reproduce existing discriminatory practices.

    \item[Right to privacy]
        Related to the right of a person to make autonomous decisions and to have control over the data collected and how it is processed.

    \item[Right to life, liberty, and security]
        Protection of physical and digital integrity (e.g., digital memories).

    \item[Right to property]
        In the context of information systems, it includes the right to port one's data from one platform to another.

    \item[Freedom of assembly and association]

    \item[Right to an effective remedy]
        AI systems can support judicial proceedings. However, their application should not be too mechanical, and it should be possible for automated decisions to be reviewed by humans.

    \item[Right to hearing]

    \item[Presumption of innocence]

    \item[Freedom of opinion, expression, and information]
        AI systems should not undermine freedom of expression. They could also be used to moderate online interactions and filter out hate speech or fake news.

    \item[Right to take part in government]
        AI systems should not be used to undermine political rights through surveillance, pervasive data collection, opinion polarization, \dots

    \item[Right to social security]

    \item[Right to work]
        Workers who are replaced by AI systems should be protected.

    \item[Right to adequate living standards]
        The deployment of AI systems should take greater account of environmental impacts.

    \item[Right to education]
        AI systems in education should not deprive students of human relationships.

    \item[Right to culture]
        Content generated by AI can harm content creators and reduce originality in the overall cultural landscape.
\end{description}
@ -1,46 +1,160 @@
\chapter{Trustworthy AI in the EU}

\begin{remark}
    The European Commission's vision for artificial intelligence is based on three pillars:
    \begin{enumerate}
        \item Increase public and private investments,
        \item Prepare for socio-economic changes (e.g., protect those who are replaced by AI),
        \item Ensure a proper ethical and legal framework to strengthen European values.
    \end{enumerate}
\end{remark}



\section{AI4People's Ethical Framework for a Good AI Society}

\begin{description}
    \item[AI for people (AI4People)] \marginnote{AI for people (AI4People)}
        Multi-stakeholder forum created in 2018 with the goal of defining the founding principles, policies, and practices needed to build a ``good AI society''.

    \item[Ethical Framework for a Good AI Society] \marginnote{Ethical Framework for a Good AI Society}
        White paper that:
        \begin{enumerate}
            \item Identifies the opportunities and risks of AI for society.
            \item Defines the guiding principles for AI.
            \item Presents recommendations for a good AI society.
        \end{enumerate}
\end{description}

\subsection{Opportunities and risks of AI for society}

This chapter identifies four opportunity-risk points of AI systems:
\begin{descriptionlist}
    \item[Enable self-realization] \marginnote{Enable self-realization} (``who we can become'')
        AI systems can automate mundane aspects of life and leave more free time for cultural, intellectual, and social activities. However, this should not devalue human skills.

    \item[Enhance human agency] \marginnote{Enhance human agency} (``what we can do'')
        AI systems can enhance human decision-making. However, they should not relieve humans of their responsibilities.

    \item[Increase societal capabilities] \marginnote{Increase societal capabilities} (``what we can achieve'')
        AI systems can support solving problems and achieving goals. However, they should still be supervised by humans.

    \item[Cultivate societal cohesion] \marginnote{Cultivate societal cohesion} (``how we can interact with each other and the world'')
        AI systems can support coordination on complex problems that require societal cohesion. However, their decisions should not undermine human self-determination.
\end{descriptionlist}

\begin{figure}[H]
    \centering
    \includegraphics[width=0.65\linewidth]{./img/ai4people_opportunities_risks.png}
\end{figure}

\subsection{Unified framework of principles for AI in society}

This chapter groups and presents the common principles used by different organizations and initiatives.

Most of them overlap with the principles of bioethics:
\begin{descriptionlist}
    \item[Beneficence] \marginnote{Beneficence}
        AI should be created to benefit humanity.

    \item[Non-maleficence] \marginnote{Non-maleficence}
        AI systems should not cause harm.

    \item[Autonomy] \marginnote{Autonomy}
        There should be a balance between the decision-making power we delegate to an AI system and the one we keep.

    \item[Justice] \marginnote{Justice}
        AI systems should contribute to global justice and equality.
\end{descriptionlist}

In addition, an AI-specific principle is added:
\begin{descriptionlist}
    \item[Explicability] \marginnote{Explicability}
        AI systems should be understandable and their decisions accountable.
\end{descriptionlist}

\subsection{Recommendations for a good AI society}

This chapter presents 20 action points of four types:
\begin{description}
    \item[Assessment] \phantom{}
        \begin{itemize}
            \item Assess the capabilities of existing institutions in dealing with harms caused by AI systems.
            \item Assess which tasks should not be delegated to AI systems.
            \item Assess whether current regulations are sufficiently grounded in ethics.
        \end{itemize}

    \item[Development] \phantom{}
        \begin{itemize}
            \item Develop a framework to enhance the explicability of AI systems.
            \item Develop adequate legal procedures to evaluate AI decisions.
            \item Develop auditing mechanisms for AI systems to deal with unfairness and risks.
            \item Develop mechanisms to fix or compensate for AI mistakes.
            \item Develop metrics for the trustworthiness of AI systems.
            \item Develop an EU oversight agency to evaluate AI systems.
            \item Develop a European observatory for AI.
            \item Develop legal instruments and contractual templates.
        \end{itemize}

    \item[Incentivization] \phantom{}
        \begin{itemize}
            \item Incentivize the development of AI systems that are socially preferable and environmentally friendly.
            \item Incentivize European research.
            \item Incentivize cross-disciplinary and cross-sectoral cooperation.
            \item Incentivize the inclusion of ethical, legal, and social considerations in AI research.
            \item Incentivize the development of de-regulated testing zones for AI systems.
            \item Incentivize research about public perception and understanding of AI.
        \end{itemize}

    \item[Support] \phantom{}
        \begin{itemize}
            \item Support the development of self-regulatory codes of conduct for data and AI professions.
            \item Support the corporate boards of AI companies in understanding the ethical implications of their products.
            \item Support public awareness about the societal, legal, and ethical impact of AI.
        \end{itemize}
\end{description}

\section{AI HLEG's Ethics Guidelines for Trustworthy AI}

\begin{description}
    \item[High-Level Expert Group on Artificial Intelligence (AI HLEG)] \marginnote{High-Level Expert Group on Artificial Intelligence (AI HLEG)}
        Independent group established by the European Commission in 2018, tasked with drafting:
        \begin{itemize}
            \item Guidelines for AI ethics,
            \item Policy and investment recommendations.
        \end{itemize}

    \item[Ethics Guidelines for Trustworthy AI] \marginnote{Ethics Guidelines for Trustworthy AI}
        Voluntary framework addressed to all AI stakeholders (from designers to end-users) that bases AI trustworthiness on three components:
        \begin{descriptionlist}
            \item[Lawful] \marginnote{Lawful}
                AI must adhere to laws and regulations. The main legal sources are:
                \begin{enumerate}
                    \item EU primary law (i.e., EU Treaties and Fundamental Rights).
                    \item EU secondary law (e.g., GDPR, \dots).
                    \item International treaties (e.g., UN Human Rights treaties, Council of Europe conventions, \dots).
                    \item Member State laws.
                    \item Domain-specific laws (e.g., regulations for medical data, \dots).
                \end{enumerate}

                \begin{remark}
                    The guidelines do not provide legal guidance. Therefore, this component is not explicitly covered in the document.
                \end{remark}

            \item[Ethical] \marginnote{Ethical}
                AI must be in line with ethical principles and values (i.e., moral AI), covering cases where laws might be lacking or unsuited for the purpose.

            \item[Robust] \marginnote{Robust}
                AI must be technically and socially robust in order to minimize intentional or unintentional harm.
        \end{descriptionlist}
\end{description}

\begin{remark}
    Each individual component is necessary but not sufficient. Ideally, they should all be respected. If in practice there are tensions between them, it is the responsibility of society to align them.
\end{remark}