Fix typos
https://github.com/NotXia/unibo-ai-notes.git
@@ -18,7 +18,7 @@
 AI systems could undermine this right if used for surveillance, profiling, automated assessment, manipulation, or interference.
 
 \item[Right to equality and non-discrimination]
-Digitally disadvantaged individuals might be excluded from accessing AI system or exploited. AI systems themselves can be biased and reproduce existing discriminatory practices.
+Digitally disadvantaged individuals might be excluded from accessing AI systems or exploited by it. AI systems themselves can be biased and reproduce existing discriminatory practices.
 
 \item[Right to privacy]
 Related to the right of a person to make autonomous decisions, and to have control of the data collected and how it is processed.
@@ -6,7 +6,7 @@
 
 \begin{description}
 \item[Morality] \marginnote{Morality}
-There is no widely agreed definition of morality. On a high level, it refers to norms to determine which actions are right and wrong.
+There is no widely agreed definition of morality. On a high-level, it refers to norms to determine which actions are right or wrong.
 \end{description}
 
 
@@ -316,17 +316,17 @@
 \end{remark}
 
 \item[Contractarianism (moral)] \marginnote{Contractarianism (moral)}
-Ethical theory which states that actions are morally right if and only if they would be accepted by free, equal, and rational people, on the condition that everyone obey to these rules.
+Ethical theory which states that actions are morally right if and only if they would be accepted by free, equal, and rational people, on the condition that everyone obeys to these rules.
 \end{description}
 
 \begin{description}
 \item[Prisoner's dilemma] \marginnote{Prisoner's dilemma}
-Situation where the best outcome would be obtained if everyone stops pursuing their self-interest.
+Situation where the best outcome would be obtained if everyone stops pursuing its self-interest.
 
 \begin{table}[H]
 \caption{
 \parbox[t]{0.7\linewidth}{
-Scenario that the dilemma takes inspiration from: two prisoners are interrogated separately, they can either stay silent (cooperate) or snitch the other (betray). The numbers are the years in prison each of them would get.
+Scenario that the dilemma takes inspiration from: two prisoners are interrogated separately; they can either stay silent (cooperate) or snitch the other (betray). The numbers are the years in prison each of them would get.
 }
 }
 \centering
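(Note on the hunk above: the body of the prisoner's-dilemma payoff table lies outside the diff context, so only its caption is visible here. As a point of reference, a classic payoff matrix in the same LaTeX style could look like the sketch below; the sentence lengths are illustrative and not taken from the notes.)

% Illustrative prisoner's dilemma payoffs (years in prison; lower is better).
% The concrete values used in the notes are not shown in this hunk.
\begin{tabular}{l|cc}
                 & B cooperates & B betrays \\
\hline
A cooperates     & 1, 1         & 3, 0      \\
A betrays        & 0, 3         & 2, 2      \\
\end{tabular}

Each cell lists A's and B's sentences. Betraying strictly dominates for each prisoner, yet mutual cooperation leaves both better off than mutual betrayal, which is exactly the tension the caption in the notes describes.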
@@ -363,7 +363,7 @@
 \item Each person will prioritize basic liberties, which will match those of everyone.
 \item Social and economic inequalities are allowed if everyone has equal access to those positions and the benefits should be aimed to the least advantaged members of society.
 \end{enumerate}
-Overall, what will be selected is going match the basic moral rules.
+Overall, what will be selected is going to match the basic moral rules.
 \end{descriptionlist}
 \item There is a procedure to determine if an action is right or wrong: ask whether free, equal, and rational people would agree to rules that allow that action.
 \item Contractarianism justifies the origin of morality as originated from the same society we live in, but in a more rational and free version.
@@ -54,7 +54,6 @@ This chapter identifies four opportunity-risk points of AI systems:
 \subsection{Unified framework of principles for AI in society}
 
 This chapter groups and presents the common principles used by different organizations and initiatives.
-
 Most of them overlap with the principles of bioethics:
 \begin{descriptionlist}
 \item[Beneficence] \marginnote{Beneficence}
@@ -148,7 +147,7 @@ This chapter presents 20 action points of four types:
 \end{remark}
 
 \item[Ethical] \marginnote{Ethical}
-AI must be in line with ethical principles and values (i.e., moral AI) for which laws might be lacking or unsuited for the purpose.
+AI must be in line with ethical principles and values (i.e., moral AI) for which laws might be lacking or be unsuited for the purpose.
 
 \item[Robust] \marginnote{Robust}
 AI must be technically and socially robust in order to minimize intentional or unintentional harm.
@@ -326,7 +325,7 @@ The main requirements the framework defines are:
 The impact of AI systems should also consider society in general and the environment (principles of fairness and prevention of harm):
 \begin{itemize}
 \item The environmental impact of the lifecycle of an AI system should be assessed.
-\item The effects of AI systems on people's physical and mental well-being, as well as institutions, democracy, and society should be assessed and monitored.
+\item The effects of AI systems on people's physical and mental well-being, as well as on institutions, democracy, and society should be assessed and monitored.
 \end{itemize}
 % AI should not have negative impacts on the society and environment.
 