From 53f901bef21c6b5eaa8e120416aa8e45850d0b09 Mon Sep 17 00:00:00 2001
From: NotXia <35894453+NotXia@users.noreply.github.com>
Date: Sun, 14 Jan 2024 14:22:30 +0100
Subject: [PATCH] Fix typos

---
 .../sections/_classification.tex | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/machine-learning-and-data-mining/sections/_classification.tex b/src/machine-learning-and-data-mining/sections/_classification.tex
index d46bddf..23faa2b 100644
--- a/src/machine-learning-and-data-mining/sections/_classification.tex
+++ b/src/machine-learning-and-data-mining/sections/_classification.tex
@@ -515,7 +515,7 @@ Possible solutions are:
 
 \subsection{Complexity}
 Given a dataset $\matr{X}$ of $N$ instances and $D$ attributes,
-each level of the tree requires to evaluate all the dataset and
+each level of the tree requires to evaluate the whole dataset and
 each node requires to process all the attributes.
 Assuming an average height of $O(\log N)$, the overall complexity
 for induction (parameters search) is $O(DN \log N)$.
@@ -585,7 +585,7 @@ This has complexity $O(h)$, with $h$ the height of the tree.
         If the value $e_{ij}$ of the domain of a feature $E_i$ never appears in the dataset,
         its probability $\prob{e_{ij} \mid c}$ will be 0 for all classes.
         This nullifies all the probabilities that use this feature when
-        computing the product chain during inference.
+        computing the chain of products during inference.
         Smoothing methods can be used to avoid this problem.
         \begin{description}
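A note on the first hunk's context lines, which state the bound without the
intermediate step: at each level, every one of the $N$ instances is routed
through exactly one node, and every node evaluates all $D$ attributes, so a
whole level costs $O(DN)$; multiplying by the assumed average height gives,
in the notes' own LaTeX,

\[
    \underbrace{O(DN)}_{\text{cost per level}}
    \cdot \underbrace{O(\log N)}_{\text{average height}}
    = O(DN \log N) .
\]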
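The second hunk's context mentions smoothing only in passing. Below is a
minimal sketch of Laplace (add-one) smoothing, the usual fix for the
zero-frequency problem described there; the function name, the
integer-coded feature domain, and the alpha parameter are illustrative
choices, not taken from the notes.

from collections import Counter

def smoothed_likelihoods(values, domain_size, alpha=1.0):
    """Laplace-smoothed estimates of P(e_ij | c) for one categorical
    feature, from the feature values observed within a single class c.

    With alpha > 0, values of the domain that never appear in the data
    get a small non-zero probability instead of 0, so they no longer
    zero out the chain of products at inference time.
    """
    counts = Counter(values)                    # empirical counts per value
    total = len(values) + alpha * domain_size   # smoothed denominator
    return {v: (counts[v] + alpha) / total for v in range(domain_size)}

# Toy usage: domain {0, 1, 2}; value 2 never occurs for this class,
# yet it still receives probability 1/8 rather than 0.
print(smoothed_likelihoods([0, 0, 1, 0, 1], domain_size=3))
# -> {0: 0.5, 1: 0.375, 2: 0.125}

With alpha = 1 this is classic add-one smoothing; smaller alpha values
(Lidstone smoothing) shrink the correction toward the raw frequencies.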