Mirror of https://github.com/NotXia/unibo-ai-notes.git (synced 2025-12-14 18:51:52 +01:00)
Fix typos <noupdate>
@@ -122,7 +122,7 @@
\item[Trolley problem] \marginnote{Trolley problem}
A trolley is headed towards a path where it will kill five people. If a lever is pulled, the trolley will be diverted and kill one person.

-The dilemma is whether to do nothing and kill five people or pull the lever and kill one.
+The dilemma is whether to do nothing and kill five people or pull the lever and actively kill one.

\begin{figure}[H]
\centering
@@ -145,8 +145,8 @@
Consider the following scenarios:
\begin{enumerate}[label=(\Alph*)]
\item The car can either kill many pedestrians crossing the street or a single person on the side of the road.
\item The car can either kill a single pedestrian crossing the street or hit a wall killing its passengers.
\item The car can either kill many pedestrians crossing the street or hit a wall killing its passengers.
\item The car can either kill a single pedestrian crossing the street or hit a wall killing its driver.
\item The car can either kill many pedestrians crossing the street or hit a wall killing its driver.
\end{enumerate}

\begin{figure}[H]
@@ -2,7 +2,7 @@

\begin{description}
\item[CLAUDETTE] \marginnote{CLAUDETTE}
-Clause detector (CLAUDETTE) is a system to classify clauses in terms of services or privacy policies as:
+Clause detector (CLAUDETTE) is a system to classify clauses in a terms of service or privacy policy as:
\begin{itemize}
\item \textsc{Clearly fair},
\item \textsc{Potentially unfair},
@@ -35,6 +35,13 @@
\item \textsc{potentially unfair}, if the provider can unilaterally modify the terms of service or the service.
\end{itemize}

+\item[Unilateral termination] \marginnote{Unilateral termination}
+A clause is classified as:
+\begin{itemize}
+\item \textsc{potentially unfair}, if the provider has the right to suspend or terminate the service and the reasons are specified.
+\item \textsc{clearly unfair}, if the provider can suspend or terminate the service for any reason.
+\end{itemize}
+
\item[Jurisdiction clause] \marginnote{Jurisdiction clause}
A clause is classified as:
\begin{itemize}
@@ -65,13 +72,6 @@
\item \textsc{clearly unfair}, if the provider is never liable (intentional damage included).
\end{itemize}

-\item[Unilateral termination] \marginnote{Unilateral termination}
-A clause is classified as:
-\begin{itemize}
-\item \textsc{potentially unfair}, if the provider has the right to suspend or terminate the service and the reasons are specified.
-\item \textsc{clearly unfair}, if the provider can suspend or terminate the service for any reason.
-\end{itemize}
-
\item[Content removal] \marginnote{Content removal}
A clause is classified as:
\begin{itemize}
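The criteria listed in this file map each clause category to one of the three labels. As an illustration only, here is how the unilateral-termination rule quoted above could be written down; this is not the actual CLAUDETTE system (which is a trained classifier over clause text), and the function and argument names are hypothetical.

def label_unilateral_termination(reasons_specified: bool) -> str:
    # Criteria quoted above: the clause is potentially unfair if the provider may
    # suspend or terminate the service only for specified reasons, and clearly
    # unfair if it may do so for any reason.
    return "potentially unfair" if reasons_specified else "clearly unfair"

print(label_unilateral_termination(reasons_specified=False))  # -> clearly unfair
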
@@ -19,12 +19,12 @@
Systems trained on intrinsically biased data will reproduce the same biased behavior.

\begin{remark}
-Data can be biased because it comes from past human judgement or by the hierarchies of the society (e.g., systems working on marginalized languages will most likely have lower performance compared to a widespread language).
+Data can be biased because it comes from past human judgement or by the hierarchies of society (e.g., systems working on marginalized languages will most likely have lower performance compared to a widespread language).
\end{remark}
\end{description}

\begin{example}[Amazon AI recruiting tool]
-Tool that Amazon used in the past to review job applications. It was heavily biased towards male applicants and, even with the gender removed, it was able to infer it from the other features.
+Tool that Amazon was using in the past to review job applications. It was heavily biased towards male applicants and, even with the gender removed, it was able to infer it from the other features.

\begin{figure}[H]
\centering
@@ -66,7 +66,7 @@

In the context of language models, some systems implement a refusal mechanism to prevent a biased response. However:
\begin{itemize}
-\item Using a different prompt on the same topic might bypass the filter.
+\item Using a different prompt of the same topic might bypass the filter.
\item Refusal might be applied unequally depending on demographics or domain.
\end{itemize}
\end{example}
@@ -168,10 +168,10 @@
\begin{itemize}
\item The overall accuracy is moderate-low ($61.2\%$),
\item Black defendants were more likely labeled with a high level of risk, leading to a higher probability of high risk misclassification ($45\%$ blacks vs $23\%$ whites).
-\item White defendants were more likely labeled with a low level of risk, leading to a higher probability of low risk misclassification ($48\%$ blacks vs $28\%$ whites).
+\item White defendants were more likely labeled with a low level of risk, leading to a higher probability of low risk misclassification ($48\%$ whites vs $28\%$ blacks).
\end{itemize}

-Northpointe, the software house of COMPAS, stated that ProPublic made several statistical and technical errors as:
+Northpointe, the software house of COMPAS, stated that ProPublica made several statistical and technical errors as:
\begin{itemize}
\item The accuracy of COMPAS is higher than human judgement.
\item The general recidivism risk scale is equally accurate for blacks and whites,
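The two misclassification bullets above are group-wise error rates: among defendants who did not reoffend, how often each group was labeled high risk (false positive rate), and among those who did reoffend, how often each group was labeled low risk (false negative rate). A minimal sketch of that computation follows; the toy records are fabricated only so that the false positive rates come out at the quoted 45% and 23%, they are not the real COMPAS data, and the field names are hypothetical.

def false_positive_rate(records, group):
    # Among people in `group` who did not reoffend, the share labeled high risk.
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in negatives if r["high_risk"]]
    return len(flagged) / len(negatives)

# Fabricated counts for illustration only (100 non-reoffenders per group).
records = (
    [{"group": "A", "high_risk": True,  "reoffended": False}] * 45
    + [{"group": "A", "high_risk": False, "reoffended": False}] * 55
    + [{"group": "B", "high_risk": True,  "reoffended": False}] * 23
    + [{"group": "B", "high_risk": False, "reoffended": False}] * 77
)

for g in ("A", "B"):
    print(g, false_positive_rate(records, g))  # A -> 0.45, B -> 0.23

The low-risk misclassification in the second bullet is the analogous rate computed over the defendants who did reoffend.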
@@ -202,7 +202,7 @@
\item The training data is composed of 3000 defendants divided into 1500 blues (1000 previous offenders) and 1500 greens (500 previous offenders).
\end{itemize}

-Therefore, the real aggregated outcomes are:
+Therefore, the ground-truth aggregated outcomes are:
\begin{center}
\footnotesize
\begin{tabular}{c|cc}
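The aggregated-outcomes table itself is cut off by the hunk, but the key point is derivable from the counts stated above: the two groups have different ground-truth base rates (simple arithmetic on the stated numbers, not text from the notes):
\[
P(\text{offender} \mid \text{blue}) = \frac{1000}{1500} = \frac{2}{3},
\qquad
P(\text{offender} \mid \text{green}) = \frac{500}{1500} = \frac{1}{3}.
\]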
@@ -336,7 +336,7 @@
\item[Conditional use error/false rate] \marginnote{Conditional use error/false rate}
The proportion of incorrect predictions should be equal for each class within each group.
\begin{example}[SAPMOC]
-SAPMOC satisfies conditional use error/:
+SAPMOC satisfies conditional use error:
\begin{center}
\footnotesize
\begin{tabular}{c|cc}
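The verbal definition above ("the proportion of incorrect predictions should be equal for each class within each group") can be written out as follows; this formalisation is my own reading of that sentence (it matches what is elsewhere called conditional use accuracy equality), not text from the notes. For every predicted class $\hat{y}$ and every pair of groups $a$ and $b$:
\[
P\big(Y \neq \hat{y} \,\big|\, \hat{Y} = \hat{y},\, G = a\big)
=
P\big(Y \neq \hat{y} \,\big|\, \hat{Y} = \hat{y},\, G = b\big),
\]
i.e., among the individuals receiving a given prediction, the share of wrong predictions is the same regardless of the group.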
@@ -407,7 +407,7 @@ There are two main opinions on AI systems:
Agreement of the data subject that allows the processing of their personal data. Consent should be:
\begin{descriptionlist}
\item[Freely given]
-The data subject have the choice to give consent for profiling
+The data subject has the choice to give consent for profiling.

\begin{remark}
A common practice is the ``take-or-leave'' approach, which is illegal.
@@ -432,7 +432,7 @@ There are two main opinions on AI systems:
\end{remark}

\item[Unambiguously provided]
-Consent should be explicitly provided by the data subject through a statement of affirmative action.
+Consent should be explicitly provided by the data subject through a statement or affirmative action.
\begin{remark}
An illegal practice in many privacy policies is to state that there can be changes and that continuing to use the service implies implicit acceptance of the new terms.
\end{remark}
@@ -512,7 +512,7 @@ Data subjects have the right to have their own personal data erased without delay
\begin{itemize}
\item The data is no longer necessary for the purpose it was collected for.
\begin{example}
-An e-shop cannot delete the address until the order is arrived.
+An e-shop cannot delete the address until the order has arrived.
\end{example}

\item The data subject has withdrawn their consent, unless there is another legal basis.