mirror of
https://github.com/NotXia/unibo-ai-notes.git
synced 2025-12-14 18:51:52 +01:00
Add ethics2 autonomous vehicles
@@ -11,5 +11,6 @@
 \include{./sections/_gdpr.tex}
 \include{./sections/_claudette.tex}
 \include{./sections/_discrimination.tex}
+\include{./sections/_autonomous_vehicles.tex}

 \end{document}
BIN  src/year2/ethics-in-ai/module2/img/human_driving_crash.png (new file, 96 KiB, binary file not shown)
BIN  src/year2/ethics-in-ai/module2/img/self_driving_crash.png (new file, 178 KiB, binary file not shown)
BIN  src/year2/ethics-in-ai/module2/img/trolley_problem.png (new file, 14 KiB, binary file not shown)
353  src/year2/ethics-in-ai/module2/sections/_autonomous_vehicles.tex (new file)
@@ -0,0 +1,353 @@
\chapter{Autonomous vehicles}

\begin{description}
\item[Autonomous vehicle] \marginnote{Autonomous vehicle}
Unmanned vehicle that senses the environment and navigates without human input.
\end{description}

\begin{description}
\item[Level of automation taxonomy (LoAT)] \marginnote{Level of automation taxonomy (LoAT)}
Describes the four steps of action decision, each of which can be automated to a different degree:
\begin{enumerate}
\item Information acquisition,
\item Information analysis,
\item Decision and action selection,
\item Action implementation.
\end{enumerate}

\begin{remark}
Intermediate levels of automation are the most subject to accidents.
\end{remark}
\end{description}

\begin{description}
\item[Autonomous vehicles taxonomy] \marginnote{Autonomous vehicles taxonomy}
Vehicle autonomy is ranked on six levels:
\begin{enumerate}
\setcounter{enumi}{-1}
\item Traditional car.
\item Hands-on autonomy: the driver and the automated system share the controls (e.g., parking assistance).
\item Hands-off autonomy: the driver must be prepared to intervene.
\item Eyes-off autonomy: the driver's attention is not required in some situations.
\item Mind-off autonomy: the driver's attention is never required.
\item No steering wheel autonomy: no human intervention is possible.
\begin{remark}
Current legislation still requires a steering wheel.
\end{remark}
\end{enumerate}
\end{description}


\section{Liability}

\begin{description}
\item[Liability] \marginnote{Liability}
State under which an individual is legally responsible for something related to a harmful event and is subject to a sanction or to damage compensation.
\end{description}


\subsection{Criminal liability}

\begin{description}
\item[Criminal liability] \marginnote{Criminal liability}
Related to a crime and punished with a fine or detention. It can concern a natural or a legal person.

It presupposes an act or omission that violates the criminal law. There are two conditions:
\begin{descriptionlist}
\item[Actus reus] The material element of the crime (an act or an omission).
\item[Mens rea] The subjective element of the crime, i.e., the mental state of the perpetrator (e.g., intention, negligence, \dots).
\end{descriptionlist}
\end{description}


\subsection{Civil liability}

\begin{description}
\item[Civil liability] \marginnote{Civil liability}
Presupposes a tort or a breach of contract and involves the obligation to repair the damage.

\begin{description}
\item[Fault liability] Breach of a duty committed intentionally or negligently.
\item[Special cases] \phantom{}
\begin{description}
\item[Product liability] In the case of autonomous vehicles, the technology counts as a product and the manufacturer is considered the producer. The conditions for it to be applicable are:
\begin{itemize}
\item The technology is defective:
\begin{itemize}
\item By design,
\item By a manufacturing defect,
\item By a warning defect (e.g., missing or unclear instructions).
\end{itemize}
\item The technology causes damage.
\end{itemize}
\item[Enterprise liability]
\item[Vicarious liability]
\end{description}
\end{description}
\end{description}

\begin{remark}
Liability with autonomous vehicles can be distributed as follows:
\begin{itemize}
\item With high automation, the manufacturer is more liable,
\item With medium automation, the manufacturer and the driver share liability,
\item With low automation, the driver is more liable.
\end{itemize}
\end{remark}


\subsection{Administrative liability}

\begin{description}
\item[Administrative liability] \marginnote{Administrative liability}
Related to the violation of administrative rules or regulations.
\end{description}



\section{Unavoidable accidents}

\begin{description}
\item[Unavoidable accidents]
Ethical dilemmas that question how a system should handle certain scenarios.
\end{description}


\subsection{Trolley problem}

\begin{description}
\item[Trolley problem] \marginnote{Trolley problem}
A trolley is headed towards a track where it will kill five people. If a lever is pulled, the trolley is diverted onto another track where it will kill one person.

The dilemma is whether to do nothing and let five people die or to pull the lever and kill one.

\begin{figure}[H]
\centering
\includegraphics[width=0.5\linewidth]{./img/trolley_problem.png}
\end{figure}

\item[Trolley problem (fat person)] \marginnote{Trolley problem (fat person)}
Variation of the trolley problem where the trolley heads towards a single track where it will kill some people and can only be stopped by pushing a fat person onto the track.

This scenario tests whether direct physical involvement of someone not already in danger changes the morality of the decision.
\end{description}


\subsection{Unavoidable car collision}

\begin{description}
\item[Unavoidable car collision] \marginnote{Unavoidable car collision}
A human-driven or self-driving car with a brake failure is headed towards one or more pedestrians. The question is whether the car should stay on course or swerve.

Consider the following scenarios:
\begin{enumerate}[label=(\Alph*)]
\item The car can either kill many pedestrians crossing the street or a single person on the side of the road.
\item The car can either kill a single pedestrian crossing the street or hit a wall, killing its passengers.
\item The car can either kill many pedestrians crossing the street or hit a wall, killing its passengers.
\end{enumerate}

\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{./img/human_driving_crash.png}
\end{figure}

% \begin{figure}[H]
%     \centering
%     \includegraphics[width=0.3\linewidth]{./img/self_driving_crash.png}
% \end{figure}


\item[Human-driven car collision] \marginnote{Human-driven car collision}
Unavoidable car collision with a human driver.

\begin{description}
\item[State-of-necessity] \marginnote{State-of-necessity}
Under the law, one is not criminally liable if:
\begin{itemize}
\item There is an unavoidable danger of serious bodily harm to the offender (or to others) that was not voluntarily caused by the offender,
\item The act committed by the offender is proportionate to the danger.
\end{itemize}

\begin{remark}
The offender is still civilly liable (i.e., must compensate the damage).
\end{remark}
\end{description}

In the three scenarios, the legal outcomes are:
\begin{enumerate}[label=(\Alph*)]
\item As the driver is not in danger, they should swerve to minimize losses; staying on course would be considered an omission to save many lives.
\item The driver can invoke the state-of-necessity and hit the pedestrian.
\item The driver can invoke the state-of-necessity and hit the pedestrians.
\end{enumerate}


\item[Self-programmed car collision] \marginnote{Self-programmed car collision}
Unavoidable car collision with a car pre-programmed by the manufacturer.

In this case, whoever programmed the car cannot invoke the state-of-necessity, and the legal outcomes are:
\begin{enumerate}[label=(\Alph*)]
\item The legally justifiable action should be the one that causes the least damage (i.e., the car should be programmed to sacrifice the fewest lives).
\item Both choices are ambiguous.
\item The legally justifiable action should be the one that causes the least damage.
\end{enumerate}

\begin{remark}
Experiments show that people prefer impartial autonomous vehicles (i.e., ones that minimize losses) for others, but prefer vehicles that favor the passengers for themselves.
\end{remark}
\end{description}



\section{Ethical knob}


\subsection{Ethical knob 1.0}

\begin{description}
\item[Ethical knob 1.0] \marginnote{Ethical knob 1.0}
Imaginary tool that allows the passenger to select a level of morality:
\begin{descriptionlist}
\item[Altruist] Preference is given to others.
\item[Impartial] Equal importance is given to passengers and others (i.e., losses are minimized).
\item[Egoist] Preference is given to passengers.
\end{descriptionlist}

\begin{remark}
Liability is the same as for the human-driven car, but there is no distinction between active and omissive behavior.
\end{remark}

The legal outcomes for the car collision scenarios are:
\begin{enumerate}[label=(\Alph*)]
\item The passenger is not in danger, therefore the autonomous vehicle should minimize losses regardless of the knob setting.
\item The car will follow the knob setting and the state-of-necessity is applicable. In case of impartiality, the choice can be predefined or randomized.
\item The car will follow the knob setting and the state-of-necessity is applicable.
\end{enumerate}
\end{description}


\subsection{Ethical knob 2.0}

\begin{description}
\item[Ethical knob 2.0] \marginnote{Ethical knob 2.0}
Ethical knob that allows the passenger to set the importance of the passengers relative to the importance of others. In addition, the car can estimate the probability of causing harm. The decision is based on the disutility, computed as follows:
\[ \texttt{disutility} = \texttt{importance} \cdot \texttt{probability\_of\_harm} \]

\begin{example}
Consider a case where:
\begin{itemize}
\item The passenger has importance $60\%$ and a $10\%$ probability of being harmed,
\item The pedestrian has importance $40\%$ and a $100\%$ probability of being harmed.
\end{itemize}
The disutilities are:
\[
\begin{split}
\texttt{disutility}(\text{passenger}) &= 60\% \cdot 10\% = 6\% \\
\texttt{disutility}(\text{pedestrian}) &= 40\% \cdot 100\% = 40\% \\
\end{split}
\]
The autonomous vehicle will put the passenger at risk.
\end{example}

\begin{example}
Consider a case where:
\begin{itemize}
\item The passenger has importance $95\%$ and a $10\%$ probability of being harmed,
\item The pedestrian has importance $5\%$ and a $100\%$ probability of being harmed.
\end{itemize}
The disutilities are:
\[
\begin{split}
\texttt{disutility}(\text{passenger}) &= 95\% \cdot 10\% = 9.5\% \\
\texttt{disutility}(\text{pedestrian}) &= 5\% \cdot 100\% = 5\% \\
\end{split}
\]
The autonomous vehicle will put the pedestrian at risk.
\end{example}

\begin{remark}
When more people are involved, the disutility of each group can be scaled by the number of lives involved and normalized.
\end{remark}
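
The decision rule can be summarized with the following Python sketch. It is only illustrative: the function names and the way groups are aggregated (scaling by the number of lives and normalizing) are assumptions, since the exact formula is not fixed here.
\begin{verbatim}
# Illustrative sketch of the ethical knob 2.0 decision rule.
def disutility(importance, probability_of_harm, n_lives=1):
    # Disutility of harming a group of n_lives people (assumed aggregation).
    return importance * probability_of_harm * n_lives

def group_to_put_at_risk(groups):
    # groups: dict name -> (importance, probability_of_harm, n_lives).
    raw = {name: disutility(*params) for name, params in groups.items()}
    total = sum(raw.values())
    normalized = {name: value / total for name, value in raw.items()}
    # The vehicle puts at risk the group with the lowest (normalized) disutility.
    return min(normalized, key=normalized.get)

# First example above: the passenger is put at risk.
print(group_to_put_at_risk({
    "passenger": (0.60, 0.10, 1),
    "pedestrian": (0.40, 1.00, 1),
}))
\end{verbatim}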

\begin{remark}[Rawls's difference principle] \marginnote{Rawls's difference principle}
Rawls's difference principle is an alternative approach to utilitarianism that aims at minimizing the loss of the most disfavored individual.

\indenttbox
\begin{example}
Consider a scenario with one pedestrian $c_1$ crossing the road and three others $k_1, k_2, k_3$ on the side of the street, all with the same importance $l$.

Assume that the disutilities are:\\[1em]
\begin{minipage}{0.48\linewidth}
\begin{center}
\texttt{Proceed forward}
\end{center}
\[
\begin{split}
\texttt{disutility}(c_1) &= 0.9 l \\
\texttt{disutility}(k_1) &= 0.0 l \\
\texttt{disutility}(k_2) &= 0.0 l \\
\texttt{disutility}(k_3) &= 0.0 l \\
\end{split}
\]
\end{minipage}
\hfill
\begin{minipage}{0.48\linewidth}
\begin{center}
\texttt{Swerve}
\end{center}
\[
\begin{split}
\texttt{disutility}(c_1) &= 0.0 l \\
\texttt{disutility}(k_1) &= 0.6 l \\
\texttt{disutility}(k_2) &= 0.6 l \\
\texttt{disutility}(k_3) &= 0.6 l \\
\end{split}
\]
\end{minipage}\\[0.5em]

An autonomous vehicle based on Rawls's difference principle will choose to swerve and hit the three pedestrians on the side of the road: the most disfavored individual then loses $0.6 l$ instead of $0.9 l$, even though the total disutility is higher.
\end{example}
\end{remark}
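
The difference with a utilitarian rule can be made explicit with a minimal Python sketch on the same numbers (the structure and names are illustrative, not from the source):
\begin{verbatim}
# Utilitarian rule: minimize the total disutility.
# Rawls's difference principle: minimize the worst individual disutility.
l = 1.0  # common importance of the involved people

options = {
    "proceed": {"c1": 0.9 * l, "k1": 0.0 * l, "k2": 0.0 * l, "k3": 0.0 * l},
    "swerve":  {"c1": 0.0 * l, "k1": 0.6 * l, "k2": 0.6 * l, "k3": 0.6 * l},
}

utilitarian = min(options, key=lambda o: sum(options[o].values()))
rawlsian = min(options, key=lambda o: max(options[o].values()))

print(utilitarian)  # "proceed": total disutility 0.9 < 1.8
print(rawlsian)     # "swerve": worst-off individual loses 0.6 < 0.9
\end{verbatim}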
\end{description}

\begin{remark}
If everyone chooses an egoistic approach, it leads to a ``tragedy of the commons'' scenario, in which individuals deplete a shared resource and cause more overall harm (in this case, more pedestrian casualties).
\end{remark}


\begin{description}
\item[Public good game]
Game where subjects choose how many of their private tokens to put into a public pot. The content of the pot is multiplied by a certain factor and represents the public good payoff, which is distributed equally to every subject.

In the case of autonomous vehicles, the public good can represent road or population safety. An agent-based simulation can be performed to assess different possible scenarios (a minimal sketch is given below):
\begin{itemize}
\item Initialize the knob of each agent randomly and update it according to past experience.
\item Treat the level of altruism as the tokens.
\item Define a cost for individualist choices.
\item Aim to find the value of the knob that maximizes both the individual and the collective payoff.
\end{itemize}
Results show that:
\begin{itemize}
\item A low cost for individualist actions rapidly converges to egoism.
\item A medium cost for individualist actions slowly converges to egoism.
\item A high cost for individualist actions converges to altruism.
\end{itemize}
\end{description}
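
One possible way to set up such an agent-based simulation is sketched below in Python. The population size, payoff structure, and imitation-based update rule are all assumptions made for illustration; they are not specified in the notes.
\begin{verbatim}
# Illustrative agent-based public good game over the ethical knob.
import random

N_AGENTS, ROUNDS, MULTIPLIER, COST = 50, 200, 1.5, 0.2  # COST of individualist choices

# Knob in [0, 1]: 0 = egoist, 1 = altruist. Initialized randomly.
agents = [random.random() for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    pot = sum(agents)                             # contribution = level of altruism
    public_payoff = MULTIPLIER * pot / N_AGENTS   # shared equally by everyone
    payoffs = [public_payoff + (1 - COST) * (1 - knob) for knob in agents]
    # Update from past experience: move towards the knob of a better-performing agent.
    new_agents = []
    for i, knob in enumerate(agents):
        j = random.randrange(N_AGENTS)
        target = agents[j] if payoffs[j] > payoffs[i] else knob
        new_agents.append(knob + 0.1 * (target - knob))
    agents = new_agents

print(f"average knob after {ROUNDS} rounds: {sum(agents) / N_AGENTS:.2f}")
\end{verbatim}
In this sketch, the \texttt{COST} parameter plays the role described in the results above: when keeping resources for oneself is cheap, imitation drives the population towards egoism, while a sufficiently high cost makes altruism the better-performing strategy.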


\subsection{Genetic ethical knob}

\begin{description}
\item[Genetic ethical knob] \marginnote{Genetic ethical knob}
Ethical knob that reflects the autonomous vehicle's assessment of the relative importance of passengers and others.

Experimentally, this is implemented using a neural network that predicts the level of the knob and a genetic algorithm that searches for the best configuration. The fitness function $f(p_i)$ for agent $p_i$ is defined as:
\[
f(p_i) = \Delta u(p_i) + \texttt{reward}(p_i)
\]
where $\Delta u(p_i)$ is the difference between the utility of the chosen action and the expected utility of the alternative actions, and $\texttt{reward}(p_i)$ is based on the action taken by the average individual.

The goal of the experiment is to find an ideal threshold between egoism and altruism.
\end{description}
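
A very rough sketch of how such a genetic search could be organized follows. Everything in it, from the placeholder utility to the reward term and the parameters, is an assumption made for illustration only: the actual experiment relies on a neural network and simulated collision scenarios that are not detailed here.
\begin{verbatim}
# Illustrative genetic search over ethical knob values in [0, 1].
import random

def utility(knob):
    # Placeholder utility peaked at 0.7: in the real experiment this would come
    # from simulated scenarios evaluated through the neural network.
    return 1.0 - (knob - 0.7) ** 2

def fitness(knob, population):
    # Delta u: utility of the chosen behavior minus that of the alternative.
    delta_u = utility(knob) - utility(1.0 - knob)
    # Reward based on the action taken by the average individual.
    avg_knob = sum(population) / len(population)
    reward = 1.0 - abs(knob - avg_knob)
    return delta_u + reward

population = [random.random() for _ in range(30)]
for _ in range(100):  # selection and mutation loop
    ranked = sorted(population, key=lambda k: fitness(k, population), reverse=True)
    parents = ranked[: len(ranked) // 2]
    population = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
                  for _ in range(len(population))]

print(f"evolved knob value: {sum(population) / len(population):.2f}")
\end{verbatim}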