Fix typos

This commit is contained in:
2025-07-06 19:51:22 +02:00
parent cebd07759c
commit bc77ff5740
5 changed files with 16 additions and 16 deletions

@@ -28,7 +28,7 @@
\end{description}
-\subsection{Opportunities and risks of AI for society}
+\subsection{Opportunities and risks of AI on society}
This chapter identifies four opportunity-risk points of AI systems:
\begin{descriptionlist}
@@ -83,7 +83,7 @@ This chapter presents 20 action points of four types:
\item[Assessment] \phantom{}
\begin{itemize}
\item Assess the capabilities of existing institutions in dealing with harms caused by AI systems.
-\item Assess which task should not be delegated to AI systems.
+\item Assess which tasks should not be delegated to AI systems.
\item Assess whether current regulations are sufficiently grounded in ethics.
\end{itemize}
@@ -216,10 +216,10 @@ The concept of AI ethics presented in the framework is rooted to the fundamental
Seen as legally enforceable rights, fundamental rights can be considered as part of the \textsc{lawful} AI component. Seen as the rights of everyone, from a moral status, they fall within the \textsc{ethical} AI component.
\end{remark}
-This chapter describes four ethical principle for trustworthy AI based on fundamental rights:
+This chapter describes four ethical principles for trustworthy AI based on fundamental rights:
\begin{descriptionlist}
\item[Principle of respect for human autonomy] \marginnote{Principle of respect for human autonomy}
-AI users should keep full self-determination. AI systems should be human-centric leaving room for human choices and they should not manipulate them.
+AI users should keep full self-determination. AI systems should be human-centric leaving room for human choices and be without manipulation.
% AI should empower individuals and not control and restrict freedom. Vulnerable groups need extra protection.
@@ -283,7 +283,7 @@ The main requirements the framework defines are:
\item Possible unintended uses or abuse of the system should be taken into account and mitigated.
\item There should be fallback plans in case of problems (e.g., switching from a statistical to a rule-based algorithm, asking a human, \dots).
\item There should be an explicit evaluation process to assess the accuracy of the AI system and determine its error rate.
-\item The output of an AI system should be reliable (robust to a wide range of inputs) and reproducible.
+\item The output of an AI system should be reliable (i.e., robust to a wide range of inputs) and reproducible.
\end{itemize}
\item[Privacy and data governance] \marginnote{Privacy and data governance}
@@ -378,7 +378,7 @@ The chapter also describes some technical and non-technical methods to ensure trustworthy AI:
Organizations should appoint a person or a board for decisions regarding ethics.
\item[Education and awareness]
-Educate, and train involved stakeholders.
+Educate and train involved stakeholders.
\item[Stakeholder participation and social dialogue]
Ensure open discussions between stakeholders and involve the general public.