Mirror of https://github.com/NotXia/unibo-ai-notes.git

Compare commits: e0143d5c9f ... bc77ff5740

2 Commits
| Author | SHA1 | Date |
|---|---|---|
| | bc77ff5740 | |
| | cebd07759c | |
@@ -10,10 +10,6 @@
{
    "name": "Ethics module 2",
    "path": "module2/ethics2.pdf"
},
{
    "name": "Ethics module 3",
    "path": "module3/ethics3.pdf"
}
]
}
@@ -133,7 +133,7 @@
\begin{itemize}
\item It is too demanding on the individuals as it requires constant self-sacrifice.
\item It does not provide a decision procedure or a way to assess decisions.
-\item It has no room for impartiality (i.e., a family member is as important as a stranger).
+\item It has no room for partiality (i.e., a family member is as important as a stranger).
\item If the majority of society is against a minority group, unjust actions against the minority increases the overall world's well-being.
\end{itemize}
\end{remark}
@@ -239,7 +239,7 @@
\end{remark}

\item[Categorical imperatives] \marginnote{Categorical imperatives}
-Imperatives that do not depend on a single individual but are applicable to every rational beings. Categorical imperatives command to do things that one might want or not want to do. Disregarding them makes one irrational.
+Imperatives that do not depend on a single individual but are applicable to every rational being. Categorical imperatives command to do things that one might want or not want to do. Disregarding them makes one irrational.

\begin{remark}
According to Kant's \textit{argument for the irrationality of immorality}, moral duties are categorical imperatives:
@@ -293,7 +293,7 @@

\begin{description}
\item[Proceduralism] \marginnote{Proceduralism}
-Approach to ethics that does not start by make assumptions on any basic moral views but rather follows a procedure to show that they are morally right.
+Approach to ethics that does not start by making assumptions on any basic moral views but rather follows a procedure to show that they are morally right.

\begin{remark}
The golden rule, rule consequentialism, Kant's principle of universalizability are all instances of proceduralism.
@@ -354,7 +354,7 @@
\begin{description}
\item[Contractarianism characteristics] \phantom{}
\begin{itemize}
-\item Morality is a social phenomenon: moral rules are basically rules of cooperation. There are no self-regarding moral duties, so any action that do not have bearing on others is morally right.
+\item Morality is a social phenomenon: moral rules are basically rules of cooperation. There are no self-regarding moral duties, so any action that does not have bearing on others is morally right.
\item Basic moral rules are justified.
\begin{descriptionlist}
\item[Veil of ignorance] \marginnote{Veil of ignorance}
@@ -28,7 +28,7 @@
\end{description}


-\subsection{Opportunities and risks of AI for society}
+\subsection{Opportunities and risks of AI on society}

This chapter identifies four opportunity-risk points of AI systems:
\begin{descriptionlist}
@@ -83,7 +83,7 @@ This chapter presents 20 action points of four types:
\item[Assessment] \phantom{}
\begin{itemize}
\item Assess the capabilities of existing institutions in dealing with harms caused by AI systems.
-\item Assess which task should not be delegated to AI systems.
+\item Assess which tasks should not be delegated to AI systems.
\item Assess whether current regulations are sufficiently grounded in ethics.
\end{itemize}

@@ -216,10 +216,10 @@ The concept of AI ethics presented in the framework is rooted to the fundamental
Seen as legally enforceable rights, fundamental rights can be considered as part of the \textsc{lawful} AI component. Seen as the rights of everyone, from a moral status, they fall within the \textsc{ethical} AI component.
\end{remark}

-This chapter describes four ethical principle for trustworthy AI based on fundamental rights:
+This chapter describes four ethical principles for trustworthy AI based on fundamental rights:
\begin{descriptionlist}
\item[Principle of respect for human autonomy] \marginnote{Principle of respect for human autonomy}
-AI users should keep full self-determination. AI systems should be human-centric leaving room for human choices and they should not manipulate them.
+AI users should keep full self-determination. AI systems should be human-centric leaving room for human choices and be without manipulation.

% AI should empower individuals and not control and restrict freedom. Vulnerable groups need extra protection.

@@ -283,7 +283,7 @@ The main requirements the framework defines are:
\item Possible unintended uses or abuse of the system should be taken into account and mitigated.
\item There should be fallback plans in case of problems (e.g., switching from a statistical to a rule-based algorithm, asking a human, \dots).
\item There should be an explicit evaluation process to assess the accuracy of the AI system and determine its error rate.
-\item The output of an AI system should be reliable (robust to a wide range of inputs) and reproducible.
+\item The output of an AI system should be reliable (i.e., robust to a wide range of inputs) and reproducible.
\end{itemize}

\item[Privacy and data governance] \marginnote{Privacy and data governance}
@@ -378,7 +378,7 @@ The chapter also describes some technical and non-technical methods to ensure tr
Organizations should appoint a person or a board for decisions regarding ethics.

\item[Education and awareness]
-Educate, and train involved stakeholders.
+Educate and train involved stakeholders.

\item[Stakeholder participation and social dialogue]
Ensure open discussions between stakeholders and involve the general public.
@@ -130,7 +130,7 @@
\end{figure}

\item[Trolley problem (fat person)] \marginnote{Trolley problem (fat person)}
-Variation of the trolley problem where the trolley goes towards a single path that it will kill some people and can be stopped by pushing a fat person on the track.
+Variation of the trolley problem where the trolley goes towards a single path that will kill some people and can be stopped by pushing a fat person on the track.

This scenario tests whether a direct physical involvement affecting someone not in danger changes the morality in the decision.
\end{description}
@@ -16,7 +16,7 @@

\begin{description}
\item[Historical bias] \marginnote{Historical bias}
-System trained on intrinsically biased data will reproduce the same biased behavior.
+Systems trained on intrinsically biased data will reproduce the same biased behavior.

\begin{remark}
Data can be biased because it comes from past human judgement or by the hierarchies of society (e.g., systems working on marginalized languages will most likely have lower performance compared to a widespread language).
@@ -34,7 +34,7 @@
\end{example}

\begin{example}[UK AI visa and asylum system]
-System used by the UK government to assess visa and asylum applications. It was found that:
+System used by the UK government to assess visa and asylum applications. It was found out that:
\begin{itemize}
\item The system ranked applications based on nationality.
\item Applicants from certain countries were automatically flagged as high risk.
@@ -137,7 +137,7 @@ Processing of personal data is lawful if at least one of the following condition
As a rule of thumb, legitimate interests of the controller can be pursued if only a reasonably limited amount of personal data is used.
\end{remark}
\begin{example}
-The gym one is subscribed in can send (contextual) advertisements by email to pursue economic interests.
+The gym one is subscribed in can send (contextual) advertisement by email to pursue economic interests.
\end{example}
\begin{remark}
Targeted advertising is in principle prohibited. However, companies commonly pair legitimate interest with the request for consent.
@@ -465,7 +465,7 @@ When personal data is collected, the controller should provide the data subject
\begin{itemize}
\item The identity of the controller, its representative (when applicable), and its contact details should be available.
\item Contact details of the data officer (referee of the company that ensures that the GDPR is respected) should be available.
-\item Purposes and legal basis of the processing.
+\item Purposes and legal basis of processing.
\item Categories of data collected.
\item Recipients or categories of recipients.
\item Period of time or the criteria to determine how long the data is stored.
@@ -486,7 +486,7 @@ Moreover, in case of automated decision-making, the following information should

\subsection{Right to access (article 15)} \marginnote{Right to access}

-Data subjects have the right to have confirmation from the controller on whether their data has been processed and access both input and inferred personal data.
+Data subjects have the right to have confirmation from the controller on whether their data has been processed and can access both input and inferred personal data.

This right is limited if it affects the rights or freedoms of others.