diff --git a/src/fundamentals-of-ai-and-kr/module3/sections/_bayesian_net.tex b/src/fundamentals-of-ai-and-kr/module3/sections/_bayesian_net.tex
index 3bfe607..a9b58d3 100644
--- a/src/fundamentals-of-ai-and-kr/module3/sections/_bayesian_net.tex
+++ b/src/fundamentals-of-ai-and-kr/module3/sections/_bayesian_net.tex
@@ -97,7 +97,7 @@ it is possible to estimate \\
     $\texttt{Intelligence}$.
     Note that if $\texttt{Grade}$ was not known,
-    $\texttt{Difficulty}$ and $\texttt{Intelligence}$ would be independent.
+    $\texttt{Difficulty}$ and $\texttt{Intelligence}$ would have been independent.
     \begin{center}
         \includegraphics[width=0.75\linewidth]{img/_explainaway_example.pdf}
     \end{center}
@@ -431,7 +431,7 @@ A node $X$ has $k$ parents $U_1, \dots, U_k$ and possibly a leak node $U_L$ to c
 Each node $U_i$ has a failure (inhibition) probability $q_i$:
 \[ q_i = \prob{\lnot x \mid u_i, \lnot u_j \text{ for } j \neq i} \]
-The CRT can be built by computing the probabilities as:
+The CPT can be built by computing the probabilities as:
 \[ \prob{\lnot x \mid \texttt{Parents($X$)}} = \prod_{j:\, U_j = \texttt{true}} q_j \]
 In other words:
 \[ \prob{\lnot x \mid u_1, \dots, u_n} =
@@ -451,7 +451,7 @@ Because only the failure probabilities are required, the number of parameters is
     \end{split}
 \]
-    Known the failure probabilities, the entire CRT can be computed:
+    Knowing the failure probabilities, the entire CPT can be computed:
 \begin{center}
 \begin{tabular}{c|c|c|rc|c}
 \hline
@@ -543,7 +543,7 @@ Possible approaches are:
 \end{figure}
 \item[Density estimation] \marginnote{Density estimation}
-    Parameters of the conditional distribution are learnt:
+    Parameters of the conditional distribution can be learned using:
 \begin{description}
 \item[Bayesian learning] calculate the probability of each hypothesis.
 \item[Approximations] using the maximum-a-posteriori and maximum-likelihood hypothesis.
diff --git a/src/fundamentals-of-ai-and-kr/module3/sections/_exact_inference.tex b/src/fundamentals-of-ai-and-kr/module3/sections/_exact_inference.tex
index 00f3abb..4fd6d04 100644
--- a/src/fundamentals-of-ai-and-kr/module3/sections/_exact_inference.tex
+++ b/src/fundamentals-of-ai-and-kr/module3/sections/_exact_inference.tex
@@ -93,7 +93,7 @@ A variable $X$ is irrelevant if summing over it results in a probability of $1$.
 \begin{theorem}
     Given a query $X$, the evidence $\matr{E}$ and a variable $Y$:
-    \[ Y \notin \texttt{Ancestors($\{ X \}$)} \cup \texttt{Ancestors($\matr{E}$)} \rightarrow Y \text{ is irrelevant} \]
+    \[ Y \notin (\texttt{Ancestors($\{ X \}$)} \cup \texttt{Ancestors($\matr{E}$)}) \rightarrow Y \text{ is irrelevant} \]
 \end{theorem}

 \begin{theorem}
diff --git a/src/fundamentals-of-ai-and-kr/module3/sections/_intro.tex b/src/fundamentals-of-ai-and-kr/module3/sections/_intro.tex
index de3f24c..34faa2e 100644
--- a/src/fundamentals-of-ai-and-kr/module3/sections/_intro.tex
+++ b/src/fundamentals-of-ai-and-kr/module3/sections/_intro.tex
@@ -4,11 +4,11 @@
 \section{Uncertainty}
 \begin{description}
     \item[Uncertainty] \marginnote{Uncertainty}
-        A task is uncertain if we have:
+        A task is uncertain if it has:
         \begin{itemize}
             \item Partial observations
             \item Noisy or wrong information
-            \item Uncertain action outcomes
+            \item Uncertain outcomes of the actions
             \item Complex models
         \end{itemize}
diff --git a/src/fundamentals-of-ai-and-kr/module3/sections/_probability.tex b/src/fundamentals-of-ai-and-kr/module3/sections/_probability.tex
index 0e39af0..9498d34 100644
--- a/src/fundamentals-of-ai-and-kr/module3/sections/_probability.tex
+++ b/src/fundamentals-of-ai-and-kr/module3/sections/_probability.tex
@@ -23,7 +23,7 @@
     \item[Probability distribution] \marginnote{Probability distribution}
         For any random variable $X$:
-        \[ \prob{X = x_i} = \sum_{\omega \text{ st } X(\omega)=x_i} \prob{\omega} \]
+        \[ \prob{X = x_i} = \sum_{\omega \text{ s.t. } X(\omega)=x_i} \prob{\omega} \]
     \item[Proposition] \marginnote{Proposition}
         Event where a random variable has a certain value.
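
Reviewer note on the noisy-OR hunks in _bayesian_net.tex (around lines 431 and 451): the whole CPT is determined by the failure probabilities q_i through P(not x | parents) = product of q_j over the parents that are true, with one extra factor when a leak node is present. A minimal Python sketch of that construction follows; the function name, the leak handling, and the fever-style numbers in the example are illustrative assumptions, not taken from the notes.

from itertools import product

def noisy_or_cpt(q, q_leak=1.0):
    """Return {parent assignment: P(x = true | assignment)} for a noisy-OR node.

    q      -- failure (inhibition) probabilities q_i, one per parent U_i
    q_leak -- failure probability of the optional leak node (1.0 means no leak)
    """
    cpt = {}
    for assignment in product([False, True], repeat=len(q)):
        # P(not x | parents) is the product of q_j over the parents that are true;
        # the leak node, when present, is treated as an always-true parent.
        p_not_x = q_leak
        for q_j, u_j in zip(q, assignment):
            if u_j:
                p_not_x *= q_j
        cpt[assignment] = 1.0 - p_not_x
    return cpt

# Standard fever-style illustration: three causes with q = 0.6, 0.2, 0.1 and no leak.
for parent_values, p_true in noisy_or_cpt([0.6, 0.2, 0.1]).items():
    print(parent_values, round(p_true, 3))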
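
Note on the _exact_inference.tex hunk: the added parentheses only make the precedence explicit, i.e. Y is irrelevant whenever Y lies outside Ancestors({X}) ∪ Ancestors(E). Below is a sketch of how that pruning rule could be applied before exact inference, assuming the network is given as a child-to-parents mapping; the function names and the toy burglary network are illustrative, and Ancestors is taken here to include the seed nodes themselves so the query and the evidence are never pruned.

def ancestors(nodes, parents):
    """Return the given nodes plus all of their ancestors in the network."""
    seen = set()
    stack = list(nodes)
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(parents.get(node, []))
    return seen

def relevant_nodes(query, evidence, parents):
    # A variable is kept only if it is the query, evidence, or an ancestor of one of them;
    # everything else is irrelevant and can be dropped before summing out.
    return ancestors({query} | set(evidence), parents)

# Toy burglary network: Burglary/Earthquake -> Alarm -> JohnCalls, MaryCalls.
parents = {
    "Alarm": ["Burglary", "Earthquake"],
    "JohnCalls": ["Alarm"],
    "MaryCalls": ["Alarm"],
}
print(relevant_nodes("Burglary", {"JohnCalls"}, parents))
# MaryCalls is not an ancestor of the query or the evidence, so it is pruned.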
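
Note on the _probability.tex hunk: only the "s.t." abbreviation changes; the formula states that P(X = x_i) is the sum of P(ω) over the sample points ω that X maps to x_i. A tiny sketch under that reading; the die-parity example is an illustrative assumption.

def distribution(X, prob):
    """P(X = x) as the sum of P(omega) over the outcomes omega with X(omega) = x."""
    dist = {}
    for omega, p in prob.items():
        x = X(omega)
        dist[x] = dist.get(x, 0.0) + p
    return dist

# Illustration: X = "the roll of a fair die is even".
prob = {face: 1 / 6 for face in range(1, 7)}
print(distribution(lambda omega: omega % 2 == 0, prob))
# {False: 0.5, True: 0.5}, up to floating-point rounding.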