Fix typos <noupdate>

2025-06-28 11:38:38 +02:00
parent 234455d41d
commit cabdb4fc97
3 changed files with 8 additions and 9 deletions

@@ -54,7 +54,6 @@ This chapter identifies four opportunity-risk points of AI systems:
\subsection{Unified framework of principles for AI in society}
This chapter groups and presents the common principles used by different organizations and initiatives.
Most of them overlap with the principles of bioethics:
\begin{descriptionlist}
\item[Beneficence] \marginnote{Beneficence}
@@ -148,7 +147,7 @@ This chapter presents 20 action points of four types:
\end{remark}
\item[Ethical] \marginnote{Ethical}
-AI must be in line with ethical principles and values (i.e., moral AI) for which laws might be lacking or unsuited for the purpose.
+AI must be in line with ethical principles and values (i.e., moral AI) for which laws might be lacking or be unsuited for the purpose.
\item[Robust] \marginnote{Robust}
AI must be technically and socially robust in order to minimize intentional or unintentional harm.
@@ -326,7 +325,7 @@ The main requirements the framework defines are:
The impact of AI systems should also consider society in general and the environment (principles of fairness and prevention of harm):
\begin{itemize}
\item The environmental impact of the lifecycle of an AI system should be assessed.
-\item The effects of AI systems on people's physical and mental well-being, as well as institutions, democracy, and society should be assessed and monitored.
+\item The effects of AI systems on people's physical and mental well-being, as well as on institutions, democracy, and society should be assessed and monitored.
\end{itemize}
% AI should not have negative impacts on the society and environment.