Add missing fraction
@@ -102,7 +102,7 @@
 \begin{itemize}
     \item An estimate $\z_i^k$ of its optimal position $\z_i^*$,
     \item An estimate $\s_i^k$ of the aggregation function $\sigma(\z^k) = \frac{1}{N} \sum_{j=1}^{N} \phi_j(\z_j^k)$,
-    \item An estimate $\v_i^k$ of the gradient with respect to the second argument of the loss $\sum_{j=1}^{N} \nabla_{[\sigma(\z^k)]} l_j(\z_j^k, \sigma(\z^k))$.
+    \item An estimate $\v_i^k$ of the gradient with respect to the second argument of the loss $\frac{1}{N} \sum_{j=1}^{N} \nabla_{[\sigma(\z^k)]} l_j(\z_j^k, \sigma(\z^k))$.
 \end{itemize}

 The step is based on the centralized gradient method using the local estimates:
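Note: the hunk above ends just before the update equations, so they are not shown in this commit. For orientation only, a minimal sketch of what such a step could look like, assuming the standard aggregative-tracking form; the step size $\gamma$, the consensus weights $a_{ij}$, and the notation $\nabla_1$/$\nabla_2$ for the partial gradients are assumptions, not taken from the source:

% Hypothetical sketch of one aggregative-tracking iteration (not from the source file).
% \gamma: assumed step size; a_{ij}: assumed consensus weights of agent i's neighbors.
\begin{align*}
    \z_i^{k+1} &= \z_i^k - \gamma \left( \nabla_1 l_i(\z_i^k, \s_i^k) + \nabla \phi_i(\z_i^k) \, \v_i^k \right) && \text{(gradient step with local estimates)} \\
    \s_i^{k+1} &= \sum_{j} a_{ij} \s_j^k + \phi_i(\z_i^{k+1}) - \phi_i(\z_i^k) && \text{(track the aggregation } \sigma(\z^k)\text{)} \\
    \v_i^{k+1} &= \sum_{j} a_{ij} \v_j^k + \nabla_2 l_i(\z_i^{k+1}, \s_i^{k+1}) - \nabla_2 l_i(\z_i^k, \s_i^k) && \text{(track the gradient w.r.t. the second argument)}
\end{align*}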
src/year2/ethics-in-ai/module2/sections/_gen_ai.tex (new file, 8 lines)
@@ -0,0 +1,8 @@
+\begin{remark}
+    Transformative use rule: a work can serve two purposes:
+    \begin{itemize}
+        \item the original purpose of the work (e.g., as a creative work),
+        \item training.
+    \end{itemize}
+    Use for training is allowed if this purpose differs from the original purpose of the work.
+\end{remark}