Fix typos

This commit is contained in:
2024-01-01 17:09:15 +01:00
parent 9e25277c0d
commit a7933cf3ba
27 changed files with 193 additions and 190 deletions

View File

@ -48,7 +48,7 @@ At level $i$, a value is assigned to the variable $X_i$ and
constraints involving $X_1, \dots, X_i$ are checked.
In case of failure, the path is not further explored.
A problem of this approach is that it requires to backtrack in case of failure
A problem with this approach is that it requires backtracking in case of failure
and reassigning all the variables in the worst case.
@ -81,7 +81,7 @@ If the domain of a variable becomes empty, the path is considered a failure and
\item $X_1 = 2 \hspace{1cm} X_2 :: [\cancel{1}, \cancel{2}, 3] \hspace{1cm} X_3 :: [\cancel{1}, \cancel{2}, 3]$
\item $X_1 = 2 \hspace{1cm} X_2 = 3 \hspace{1cm} X_3 :: [\cancel{1}, \cancel{2}, \cancel{3}]$
\end{enumerate}
As the domain of $X_3$ is empty, search on this branch fails and backtracking is required.
As the domain of $X_3$ is empty, the search on this branch fails and backtracking is required.
\end{example}
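The pruning in the example above can be sketched as a forward-checking search (an illustrative sketch; the `forward_check` helper and the constraint encoding are hypothetical, not from the notes):

```python
def forward_check(domains, constraints, order, assignment=None):
    """domains: {var: set(values)}; constraints: [(x, y, pred)] requiring pred(vx, vy).
    Assigns variables in `order`, pruning the domains of future variables after
    each assignment; fails early as soon as a domain becomes empty."""
    assignment = assignment or {}
    if len(assignment) == len(order):
        return assignment
    var = order[len(assignment)]
    for value in sorted(domains[var]):
        pruned = {v: set(d) for v, d in domains.items()}
        pruned[var] = {value}
        # Propagate the assignment to constraints involving unassigned variables.
        for x, y, pred in constraints:
            if x == var and y not in assignment:
                pruned[y] = {w for w in pruned[y] if pred(value, w)}
            if y == var and x not in assignment:
                pruned[x] = {w for w in pruned[x] if pred(w, value)}
        if all(pruned.values()):  # no domain became empty
            result = forward_check(pruned, constraints, order, {**assignment, var: value})
            if result is not None:
                return result
    return None

lt = lambda a, b: a < b
doms = {"X1": {1, 2, 3}, "X2": {1, 2, 3}, "X3": {1, 2, 3}}
cons = [("X1", "X2", lt), ("X1", "X3", lt), ("X2", "X3", lt)]
solution = forward_check(doms, cons, ["X1", "X2", "X3"])
# the only consistent assignment for X1 < X2 < X3 is X1=1, X2=2, X3=3
```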
@ -103,7 +103,7 @@ If the domain of a variable becomes empty, the path is considered a failure and
Consider the variables and constraints:
\[ X_1 :: [1, 2, 3] \hspace{0.5cm} X_2 :: [1, 2, 3] \hspace{0.5cm} X_3 :: [1, 2, 3] \hspace{1cm} X_1 < X_2 < X_3 \]
We assign the variables in lexicographic order. At each step we have that:
We assign the variables in lexicographic order. At each step, we have that:
\begin{enumerate}
\item $X_1 = 1 \hspace{1cm} X_2 :: [\cancel{1}, 2, \cancel{3}] \hspace{1cm} X_3 :: [\cancel{1}, 2, 3]$ \\
Here, we assign $X_1=1$ and propagate to unassigned constraints.
@ -120,7 +120,7 @@ If the domain of a variable becomes empty, the path is considered a failure and
Consider the variables and constraints:
\[ X_1 :: [1, 2, 3] \hspace{0.5cm} X_2 :: [1, 2, 3] \hspace{0.5cm} X_3 :: [1, 2, 3] \hspace{1cm} X_1 < X_2 < X_3 \]
We assign the variables in lexicographic order. At each step we have that:
We assign the variables in lexicographic order. At each step, we have that:
\begin{enumerate}
\item $X_1 = 1 \hspace{1cm} X_2 :: [\cancel{1}, 2, \cancel{3}] \hspace{1cm} X_3 :: [\cancel{1}, \cancel{2}, 3]$ \\
Here, we assign $X_1=1$ and propagate to unassigned constraints.
@ -252,5 +252,5 @@ This class of methods can be applied statically before the search or after each
Generalization of arc/path consistency.
If a problem with $n$ variables is $n$-consistent, the solution can be found without search.
Usually it is not applicable as it has exponential complexity.
Usually, it is not applicable as it has exponential complexity.
\end{description}
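The arc-consistency case of the hierarchy above can be sketched with an AC-3-style pass; the `ac3` helper and the arc encoding are illustrative assumptions:

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 sketch. constraints: {(x, y): pred} meaning pred(vx, vy) must hold.
    Prunes domains in place; returns False if some domain becomes empty."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Keep only values of x that have at least one support in y's domain.
        supported = {vx for vx in domains[x] if any(pred(vx, vy) for vy in domains[y])}
        if supported != domains[x]:
            domains[x] = supported
            if not domains[x]:
                return False
            # x changed: re-examine the arcs pointing at x.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

doms = {"X1": {1, 2, 3}, "X2": {1, 2, 3}, "X3": {1, 2, 3}}
cons = {("X1", "X2"): lambda a, b: a < b, ("X2", "X1"): lambda a, b: a > b,
        ("X2", "X3"): lambda a, b: a < b, ("X3", "X2"): lambda a, b: a > b}
consistent = ac3(doms, cons)
# X1 < X2 < X3 over [1, 3] leaves the singleton domains {1}, {2}, {3}
```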

View File

@ -45,7 +45,7 @@ an iteration of the minimax algorithm can be described as follows:
\item[Propagation]
Starting from the parents of the leaves, the scores are propagated upwards
by labeling the parents based on the children's score.
by labeling the parents based on the children's scores.
Given an unlabeled node $m$, if $m$ is at a \textsc{Max} level, its label is the maximum of its children's scores.
Otherwise (\textsc{Min} level), the label is the minimum of its children's scores.
@ -90,7 +90,7 @@ an iteration of the minimax algorithm can be described as follows:
\section{Alpha-beta cuts}
\marginnote{Alpha-beta cuts}
Alpha-beta cuts (pruning) allows to prune subtrees whose state will never be selected (when playing optimally).
Alpha-beta pruning (cuts) makes it possible to discard subtrees whose states will never be selected (when playing optimally).
$\alpha$ represents the best choice found for \textsc{Max}.
$\beta$ represents the best choice found for \textsc{Min}.
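A minimal sketch of minimax with alpha-beta cuts, assuming a game tree encoded as nested lists with numeric leaf scores (the `alphabeta` name and the encoding are illustrative):

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta cuts. A node is a leaf score or a list of children."""
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:       # beta cut: Min would never allow this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:           # alpha cut: Max would never allow this branch
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]  # classic 2-ply example
alphabeta(tree, maximizing=True)  # → 3
```

In the second \textsc{Min} node, the leaf 2 already drives $\beta$ below $\alpha = 3$, so the remaining leaves 4 and 6 are never evaluated.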

View File

@ -32,7 +32,7 @@ Intelligence is defined as the ability to perceive or infer information and to r
\item[Symbolic AI (top-down)] \marginnote{Symbolic AI}
Symbolic representation of knowledge, understandable by humans.
\item[Connectionist approach (bottom up)] \marginnote{Connectionist approach}
\item[Connectionist approach (bottom-up)] \marginnote{Connectionist approach}
Neural networks. Knowledge is encoded and not understandable by humans.
\end{description}
@ -124,7 +124,7 @@ A \textbf{feed-forward neural network} is composed of multiple layers of neurons
The first layer is the input layer, while the last is the output layer.
Intermediate layers are hidden layers.
The expressivity of a neural networks increases when more neurons are used:
The expressivity of a neural network increases when more neurons are used:
\begin{descriptionlist}
\item[Single perceptron]
Able to compute a linear separation.
@ -158,7 +158,7 @@ The expressivity of a neural networks increases when more neurons are used:
\item[Deep learning] \marginnote{Deep learning}
Neural network with a large number of layers and neurons.
The learning process is hierarchical: the network exploits simple features in the first layers and
synthesis more complex concepts while advancing through the layers.
synthesizes more complex concepts while advancing through the layers.
\end{description}
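The expressivity jump described above can be illustrated with hand-picked (not trained) weights: one perceptron separates AND linearly, while XOR needs a hidden layer. All weights and helper names are illustrative assumptions:

```python
def perceptron(weights, bias, inputs):
    """Single perceptron with a step activation: computes a linear separation."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# AND is linearly separable, so a single perceptron suffices.
AND = lambda a, b: perceptron([1, 1], -1.5, [a, b])

# XOR is not linearly separable: a hidden layer of two perceptrons is needed.
def XOR(a, b):
    h1 = perceptron([1, 1], -0.5, [a, b])    # OR-like unit
    h2 = perceptron([-1, -1], 1.5, [a, b])   # NAND-like unit
    return perceptron([1, 1], -1.5, [h1, h2])

[XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]  # → [0, 1, 1, 0]
```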

View File

@ -12,9 +12,9 @@
In other words, for each $s \in \mathcal{S}$, $\mathcal{N}(s) \subseteq \mathcal{S}$.
\begin{example}[Travelling salesman problem]
Problem: find an Hamiltonian tour of minimum cost in an undirected graph.
Problem: find a Hamiltonian tour of minimum cost in an undirected graph.
A possible neighborhood of a state applies the $k$-exchange that guarantees to maintain an Hamiltonian tour.
A possible neighborhood of a state applies the $k$-exchange that guarantees to maintain a Hamiltonian tour.
\begin{figure}[ht]
\begin{subfigure}{.5\textwidth}
\centering
@ -37,7 +37,7 @@
\item[Global optimum]
Given an evaluation function $f$,
a global optima (maximization case) is a state $s_\text{opt}$ such that:
a global optimum (maximization case) is a state $s_\text{opt}$ such that:
\[ \forall s \in \mathcal{S}: f(s_\text{opt}) \geq f(s) \]
Note: a larger neighborhood usually yields better solutions.
@ -55,7 +55,7 @@
\marginnote{Iterative improvement (hill climbing)}
Algorithm that only performs moves that improve the current solution.
It does not keep track of the explored states (i.e. may return in a previously visited state) and
It does not keep track of the explored states (i.e. may return to a previously visited state) and
stops after reaching a local optimum.
\begin{algorithm}
@ -169,7 +169,7 @@ moves can be stored instead but, with this approach, some still not visited solu
\marginnote{Iterated local search}
Based on two steps:
\begin{descriptionlist}
\item[Subsidiary local search steps] Efficiently reach a local optima (intensification).
\item[Subsidiary local search steps] Efficiently reach a local optimum (intensification).
\item[Perturbation steps] Escape from a local optimum (diversification).
\end{descriptionlist}
In addition, an acceptance criterion controls the two steps.
@ -194,7 +194,7 @@ Population based meta heuristics are built on the following concepts:
\begin{descriptionlist}
\item[Adaptation] Organisms are suited to their environment.
\item[Inheritance] Offspring resemble their parents.
\item[Natural selection] Fit organisms have many offspring, others become extinct.
\item[Natural selection] Fit organisms have many offspring while others become extinct.
\end{descriptionlist}
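The natural-selection concept above maps to the proportional-selection operator of genetic algorithms; a minimal roulette-wheel sketch, where the toy population, fitness function, and helper name are illustrative assumptions:

```python
import random

def proportional_selection(population, fitness, rng=random):
    """Roulette-wheel selection: an individual is picked with probability
    fitness(individual) / total fitness of the population."""
    scores = [fitness(ind) for ind in population]
    total = sum(scores)
    r = rng.uniform(0, total)
    acc = 0.0
    for ind, score in zip(population, scores):
        acc += score
        if r <= acc:
            return ind
    return population[-1]

rng = random.Random(0)
pop = [0b1010, 0b1111, 0b0001]
fit = lambda x: bin(x).count("1")   # toy fitness: number of set bits
picks = [proportional_selection(pop, fit, rng) for _ in range(1000)]
# the fittest individual (0b1111, fitness 4 of 7) is selected most often
```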
\begin{table}[ht]
@ -244,10 +244,10 @@ Genetic operators are:
\includegraphics[width=0.2\textwidth]{img/_genetic_mutation.pdf}
\end{center}
\item[Proportional selection]
Probability of a individual to be chosen as parent of the next offspring.
Probability of an individual being chosen as a parent of the next offspring.
Depends on the fitness.
\item[Generational replacement]
Create the new generation. Possibile approaches are:
Create the new generation. Possible approaches are:
\begin{itemize}
\item Completely replace the old generation with the new one.
\item Keep the best $n$ individuals from the new and old populations.

View File

@ -124,7 +124,7 @@ The direction of the search can be:
\subsection{Deductive planning}
\marginnote{Deductive planning}
Formulates the planning problem using first order logic to represent states, goals and actions.
Formulates the planning problem using first-order logic to represent states, goals and actions.
Plans are generated as theorem proofs.
\subsubsection{Green's formulation}
@ -163,7 +163,7 @@ The main concepts are:
\end{example}
\item[Frame axioms]
Besides the effects of actions, each state also have to define for all non-changing fluents their frame axioms.
Besides the effects of actions, each state also has to define frame axioms for all non-changing fluents.
If the problem is complex, the number of frame axioms becomes unreasonable.
\begin{example}[Moving blocks]
\[ \texttt{on(U, V, S)} \land \texttt{diff(U, X)} \rightarrow \texttt{on(U, V, do(MOVE(X, Y, Z), S))} \]
@ -247,7 +247,7 @@ Kowalsky's formulation avoids the frame axioms problem by using a set of fixed p
Actions can be described as:
\[ \texttt{poss(S)} \land \texttt{pact(A, S)} \rightarrow \texttt{poss(do(A, S))} \]
In the Kowalsky's formulation, each action requires a frame assertion (in Green's formulation, each state requires frame axioms).
In Kowalsky's formulation, each action requires a frame assertion (in Green's formulation, each state requires frame axioms).
\begin{example}[Moving blocks]
An initial state can be described by the following axioms:\\[0.5em]
@ -381,7 +381,7 @@ def strips(problem):
Since there are non-deterministic choices, the search space might become very large.
Heuristics can be used to avoid this.
Conjunction of goals are solved separately, but this can lead to the \marginnote{Sussman anomaly} \textbf{Sussman anomaly}
The goals of a conjunction are solved separately, but this can lead to the \marginnote{Sussman anomaly} \textbf{Sussman anomaly}
where a sub-goal destroys what another sub-goal has done.
For this reason, when a conjunction is encountered, it is not immediately popped from the goal stack
and is left as a final check.
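The Sussman anomaly can be reproduced with a toy blocks-world simulator (the `move` helper and the fact encoding are illustrative; action preconditions are not checked here):

```python
def move(state, block, src, dst):
    """Move a block from src to dst ('table' or another block); returns a new state."""
    s = set(state)
    s.discard(("on", block, src))
    s.discard(("ontable", block))
    if src != "table":
        s.add(("clear", src))
    if dst == "table":
        s.add(("ontable", block))
    else:
        s.discard(("clear", dst))
        s.add(("on", block, dst))
    return s

# Initial Sussman-anomaly state: C on A; A and B on the table.
state = {("on", "C", "A"), ("ontable", "A"), ("ontable", "B"),
         ("clear", "C"), ("clear", "B")}

# Achieve the first sub-goal on(A, B):
state = move(state, "C", "A", "table")   # clear A
state = move(state, "A", "table", "B")   # on(A, B) now holds
# Achieving the second sub-goal on(B, C) forces clearing B,
# which destroys what the first sub-goal achieved:
state = move(state, "A", "B", "table")
state = move(state, "B", "table", "C")
("on", "A", "B") in state  # → False: hence the final check on the conjunction
```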
@ -698,7 +698,7 @@ In macro-operators, two types of operators are defined:
\item[Macro] Set of atomic operators. Before execution, this type of operator has to be decomposed.
\begin{description}
\item[Precompiled decomposition]
The decomposition is known and described along side the preconditions and effects of the operator.
The decomposition is known and described alongside the preconditions and effects of the operator.
\item[Planned decomposition]
The planner has to synthesize the atomic operators that compose a macro operator.
\end{description}
@ -708,7 +708,7 @@ In macro-operators, two types of operators are defined:
\begin{itemize}
\item $X$ must be the effect of at least an atomic action in $P$ and should be protected until the end of $P$.
\item Each precondition of the actions in $P$ must be guaranteed by previous actions or be a precondition of $A$.
\item $P$ must not threat any causal link.
\item $P$ must not threaten any causal link.
\end{itemize}
Moreover, when a macro action $A$ is replaced with its decomposition $P$:
@ -747,7 +747,7 @@ def hdpop(initial_state, goal, actions, decomposition_methods):
\section{Conditional planning}
\marginnote{Conditional planning}
Conditional planning is based on the open world assumption where what is not in the initial state is unknown.
Conditional planning is based on the open-world assumption where what is not in the initial state is unknown.
It generates a different plan for each source of uncertainty and therefore has exponential complexity.
\begin{description}
@ -768,7 +768,7 @@ It generates a different plan for each source of uncertainty and therefore has e
\section{Reactive planning}
Reactive planners are on-line algorithms able to interact with the dynamicity the world.
Reactive planners are online algorithms able to cope with the dynamics of the world.
\subsection{Pure reactive systems}
\marginnote{Pure reactive systems}
@ -777,7 +777,7 @@ The choice of the action is predictable. Therefore, this approach is not suited
\subsection{Hybrid systems}
\marginnote{Hybrid systems}
Hybrid planners integrate the generative and reactive approach.
Hybrid planners integrate the generative and reactive approaches.
The steps of the algorithm are:
\begin{itemize}
\item Generates a plan to achieve the goal.

View File

@ -76,7 +76,7 @@ def expand(node, problem):
\subsection{Strategies}
\begin{description}
\item[Non-informed strategy] \marginnote{Non-informed strategy}
Domain knowledge not available. Usually does an exhaustive search.
Domain knowledge is not available. Usually does an exhaustive search.
\item[Informed strategy] \marginnote{Informed strategy}
Uses domain knowledge through heuristics.
@ -112,7 +112,7 @@ Always expands the least deep node. The fringe is implemented as a queue (FIFO).
\hline
\textbf{Completeness} & Yes \\
\hline
\textbf{Optimality} & Only with uniform cost (i.e. all edges have same cost) \\
\textbf{Optimality} & Only with uniform cost (i.e. all edges have the same cost) \\
\hline
\textbf{\makecell{Time and space\\complexity}}
& $O(b^d)$, where the solution depth is $d$ and the branching factor is $b$ (i.e. each non-leaf node has $b$ children) \\
@ -238,7 +238,7 @@ estimate the effort needed to reach the final goal.
\subsection{Best-first search}
\marginnote{Best-first search}
Uses heuristics to compute the desirability of the nodes (i.e. how close they are to the goal).
The fringe is ordered according the estimated scores.
The fringe is ordered according to the estimated scores.
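The heuristic-ordered fringe can be sketched as a priority queue (greedy best-first variant; the toy grid problem, `best_first_search` name, and Manhattan heuristic are illustrative assumptions):

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Greedy best-first search: the fringe is a priority queue ordered by h(node)."""
    fringe = [(h(start), start)]
    came_from = {start: None}
    while fringe:
        _, node = heapq.heappop(fringe)   # pop the most desirable node
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in came_from:
                came_from[nxt] = node
                heapq.heappush(fringe, (h(nxt), nxt))
    return None

# Toy 3x3 grid: move in 4 directions toward (2, 2), h = Manhattan distance.
goal = (2, 2)
nbrs = lambda p: [(p[0] + dx, p[1] + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                  if 0 <= p[0] + dx <= 2 and 0 <= p[1] + dy <= 2]
h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
path = best_first_search((0, 0), goal, nbrs, h)
# a 5-node path from (0, 0) to (2, 2)
```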
\begin{description}

View File

@ -2,7 +2,7 @@
\begin{description}
\item[Swarm intelligence] \marginnote{Swarm intelligence}
Group of locally-interacting agents that
Group of locally interacting agents that
shows an emergent behavior without a centralized control system.
A swarm intelligent system has the following features:
@ -15,7 +15,7 @@
\item The system adapts to changes.
\end{itemize}
Agents interact between each other and obtain positive and negative feedbacks.
Agents interact with each other and obtain positive and negative feedback.
\item[Stigmergy] \marginnote{Stigmergy}
Form of indirect communication where an agent modifies the environment and the others react to it.
@ -45,7 +45,7 @@ They also tend to prefer paths marked with the highest pheromone concentration.
\begin{itemize}
\item Nodes are cities.
\item Edges are connections between cities.
\item A solution is an Hamiltonian path in the graph.
\item A solution is a Hamiltonian path in the graph.
\item Constraints to avoid sub-cycles (i.e. avoid visiting a city multiple times).
\end{itemize}
\end{example}
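The pheromone mechanism behind the TSP formulation above can be sketched as an evaporation-plus-deposit update (the `update_pheromone` helper and the $\rho$, $Q$ values are illustrative assumptions; the distance heuristic is omitted):

```python
def update_pheromone(pheromone, tours, rho=0.5, Q=1.0):
    """Evaporate all trails (negative feedback), then let each ant deposit
    Q / tour_length on the edges of its tour (positive feedback)."""
    for edge in pheromone:
        pheromone[edge] *= (1 - rho)
    for tour, length in tours:
        for edge in zip(tour, tour[1:] + tour[:1]):  # close the Hamiltonian tour
            if edge in pheromone:
                pheromone[edge] += Q / length
    return pheromone

tau = {("A", "B"): 1.0, ("B", "C"): 1.0, ("C", "A"): 1.0, ("A", "C"): 1.0}
# One ant completed the tour A-B-C-A with total length 4.
update_pheromone(tau, [(["A", "B", "C"], 4.0)])
# tour edges end at 0.5 + 0.25 = 0.75; the unused edge (A, C) evaporates to 0.5
```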
@ -123,7 +123,7 @@ The algorithm has the following phases:
\item[Initialization]
The initial nectar source of each bee is determined randomly.
Each solution (nectar source) is a vector $\vec{x}_m \in \mathbb{R}^n$ and
each of its component is initialized constrained to a lower ($l_i$) and upper ($u_i$) bound:
each of its components is initialized constrained to a lower ($l_i$) and upper ($u_i$) bound:
\[ \vec{x}_m\texttt{[}i\texttt{]} = l_i + \texttt{rand}(0, 1) \cdot (u_i - l_i) \]
\item[Employed bees]
@ -139,7 +139,7 @@ The algorithm has the following phases:
Onlooker bees stochastically choose their food source.
Each food source $\vec{x}_m$ has a probability associated with it defined as:
\[ p_m = \frac{\texttt{fit}(\vec{x}_m)}{\sum_{i=1}^{n_\text{bees}} \texttt{fit}(\vec{x}_i)} \]
This provides a positive feedback as more promising solutions have a higher probability to be chosen.
This provides a positive feedback as more promising solutions have a higher probability of being chosen.
\item[Scout bees]
Scout bees choose a nectar source randomly.
@ -166,7 +166,7 @@ The algorithm has the following phases:
\section{Particle swarm optimization (PSO)}
\marginnote{Particle swarm optimization (PSO)}
In a bird flock, the movement of the individuals tend to:
In a bird flock, the movement of the individuals tends to:
\begin{itemize}
\item Follow the neighbors.
\item Stay in the flock.
@ -174,8 +174,8 @@ In a bird flock, the movement of the individuals tend to:
\end{itemize}
However, a model based on these rules does not have a common objective.
PSO introduces as common objective the search of food.
Each individual that finds food can:
PSO introduces as a common objective the search for food.
Each individual who finds food can:
\begin{itemize}
\item Move away from the flock and reach the food.
\item Stay in the flock.
@ -197,7 +197,7 @@ Applied to optimization problems, the bird flock metaphor can be interpreted as:
\end{descriptionlist}
Given a cost function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ to minimize (gradient is not known),
PSO initializes a swarm of particles (agents) whose movement is guided by the best known position.
PSO initializes a swarm of particles (agents) whose movement is guided by the best-known position.
Each particle is described by:
\begin{itemize}
\item Its position $\vec{x}_i \in \mathbb{R}^n$ in the search space.

View File

@ -7,7 +7,7 @@
\item[Business process management] \marginnote{Business process management}
Methods to design, manage and analyze business processes by mining data contained in information systems.
Business processes help in making decisions and automations.
Business processes help with decision making and automation.
\item[Business process lifecycle] \phantom{}
\begin{description}
@ -50,7 +50,7 @@
\section{Business process modelling}
\section{Business process modeling}
\begin{description}
\item[Activity instance] \marginnote{Activity instance}
@ -73,30 +73,30 @@
\end{description}
\subsection{Control flow modelling}
\subsection{Control flow modeling}
\begin{description}
\item[Process modelling types] \phantom{}
\item[Process modeling types] \phantom{}
\begin{description}
\item[Procedural vs declarative] \phantom{}
\begin{description}
\item[Procedural] \marginnote{Procedural modelling}
\item[Procedural] \marginnote{Procedural modeling}
Based on a strict ordering of the steps.
Uses conditional choices, loops, parallel execution, events.
Subject to the spaghetti-like process problem.
\item[Declarative] \marginnote{Declarative modelling}
\item[Declarative] \marginnote{Declarative modeling}
Based on the properties that should hold during execution.
Uses concepts as: executions, expected executions, prohibited executions.
Uses concepts such as executions, expected executions, prohibited executions.
\end{description}
\item[Closed vs open] \phantom{}
\begin{description}
\item[Closed] \marginnote{Closed modelling}
\item[Closed] \marginnote{Closed modeling}
The execution of non-modeled activities is prohibited.
\item[Open] \marginnote{Open modelling}
\item[Open] \marginnote{Open modeling}
Constraints to allow non-modeled activities.
\end{description}
\end{description}
@ -104,13 +104,13 @@
The most common combinations of approaches are:
\begin{descriptionlist}
\item[Closed procedural process modelling]
\item[Open declarative process modelling]
\item[Closed procedural process modeling]
\item[Open declarative process modeling]
\end{descriptionlist}
\section{Closed procedural process modelling}
\section{Closed procedural process modeling}
\begin{description}
\item[Process model]
@ -230,14 +230,14 @@ The most common combination of approaches are:
\begin{table}[H]
\centering
\begin{tabular}{c|c}
\textbf{Petri nets} & \textbf{Business process modelling} \\
\textbf{Petri nets} & \textbf{Business process modeling} \\
\hline
Petri net & Process model \\
Transitions & Activity models \\
Tokens & Instances \\
Transition firing & Activity execution \\
\end{tabular}
\caption{Petri nets and business process modelling concepts equivalence}
\caption{Petri nets and business process modeling concepts equivalence}
\end{table}
@ -299,7 +299,7 @@ De-facto standard for business process representation.
Drawn as a thin-bordered circle.
\item[Intermediate event]
Event occurring after the start of a process, but before its end.
Event occurring after the start of a process but before its end.
\item[End event]
Indicates the end of a process and optionally provides its result.
@ -355,7 +355,7 @@ De-facto standard for business process representation.
\section{Open declarative process modelling}
\section{Open declarative process modeling}
Define formal properties for process models (i.e. more formal than procedural methods).
Properties are defined in terms of the evolution of the process (similar to the evolution of the world in modal logics)
@ -423,7 +423,7 @@ Based on constraints that must hold in every possible execution of the system.
\end{description}
\item[Semantics]
The semantic of the constraints can be defined using LTL.
The semantics of the constraints can be defined using LTL.
\item[Verifiable properties] \phantom{}
\begin{description}
@ -497,7 +497,7 @@ Based on constraints that must hold in every possible execution of the system.
\item[Process discovery] \marginnote{Process discovery}
Learn a process model representative of the input event log.
More formally, a process discovery algorithm is a function that maps an event log into a business process modelling language.
More formally, a process discovery algorithm is a function that maps an event log into a business process modeling language.
In our case, we map logs into Petri nets (preferably workflow nets).
\begin{remark}
@ -591,7 +591,7 @@ Based on constraints that must hold in every possible execution of the system.
\item[Model evaluation]
Different models can capture the same process described in a log.
This allows for models that are capable of capturing all the possible traces of a log but
are unable provide any insight (e.g. flower Petri net).
are unable to provide any insight (e.g. flower Petri net).
\begin{figure}[H]
\centering
@ -608,7 +608,7 @@ Based on constraints that must hold in every possible execution of the system.
\item[Precision] \marginnote{Precision}
How the model is able to capture rare cases.
\item[Generalization] \marginnote{Generalization}
How the model generalize on the training traces.
How the model generalizes beyond the training traces.
\end{descriptionlist}
\end{description}
@ -618,7 +618,7 @@ Based on constraints that must hold in every possible execution of the system.
\begin{description}
\item[Descriptive model discrepancies] \marginnote{Descriptive model}
The model need to be improved.
The model needs to be improved.
\item[Prescriptive model discrepancies] \marginnote{Prescriptive model}
The traces need to be checked as the model cannot be changed (e.g. model of the law).
@ -632,14 +632,14 @@ Based on constraints that must hold in every possible execution of the system.
\begin{description}
\item[Token replay] \marginnote{Token replay}
Given a trace and a Petri net, the trace is replayed on the model by moving tokens around.
The trace is conform if the end event can be reached, otherwise it is not.
The trace conforms if the end event can be reached; otherwise, it does not.
A modified version of token replay allows adding or removing tokens when the trace is stuck on the Petri net.
These external interventions are tracked and used to compute a fitness score (i.e. degree of conformance).
Limitations:
\begin{itemize}
\item Fitness tend to be high for extremely problematic logs.
\item Fitness tends to be high for extremely problematic logs.
\item If there are too many deviations, the model is flooded with tokens and may result in unexpected behaviors.
\item It is a Petri-net-specific algorithm.
\end{itemize}
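Plain token replay can be sketched as follows (a hypothetical two-activity workflow net; the transition encoding is an assumption, and the token add/remove repair step is omitted):

```python
from collections import Counter

def token_replay(trace, transitions, initial, final):
    """transitions: {activity: (consumed_places, produced_places)}.
    Replays the trace by moving tokens; True if the final place is reached."""
    marking = Counter(initial)
    for activity in trace:
        consumed, produced = transitions[activity]
        for place in consumed:
            if marking[place] == 0:
                return False        # transition not enabled: the trace does not conform
            marking[place] -= 1
        for place in produced:
            marking[place] += 1
    return marking[final] > 0

# Hypothetical workflow net: start --a--> p1 --b--> end
net = {"a": (["start"], ["p1"]), "b": (["p1"], ["end"])}
token_replay(["a", "b"], net, ["start"], "end")  # → True
token_replay(["b", "a"], net, ["start"], "end")  # → False: "b" is not enabled
```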

View File

@ -251,12 +251,12 @@ The following algorithms can be employed:
\subsection{Open world assumption}
\begin{description}
\item[Open world assumption] \marginnote{Open world assumption}
If a sentence cannot be inferred, its truth values is unknown.
\item[Open-world assumption] \marginnote{Open-world assumption}
If a sentence cannot be inferred, its truth value is unknown.
\end{description}
Description logics are based on the open world assumption.
To reason in open world assumption, all the possible models are split upon encountering an unknown facts
Description logics are based on the open-world assumption.
To reason under the open-world assumption, all the possible models are split upon encountering unknown facts
depending on the possible cases (Oedipus example).

View File

@ -45,7 +45,7 @@ RETE is an efficient algorithm for implementing rule-based systems.
A pattern can test:
\begin{descriptionlist}
\item[Intra-element features] Features that can be tested directly on a fact.
\item[Inter-element features] Features that involves more facts.
\item[Inter-element features] Features that involve more facts.
\end{descriptionlist}
\item[Conflict set] \marginnote{Conflict set}
@ -60,11 +60,11 @@ RETE is an efficient algorithm for implementing rule-based systems.
\begin{descriptionlist}
\item[Alpha-network] \marginnote{Alpha-network}
For intra-element features.
The outcome is stored into alpha-memories and used by the beta network.
The outcome is stored in alpha-memories and used by the beta network.
\item[Beta-network] \marginnote{Beta-network}
For inter-element features.
The outcome is stored into beta-memories and corresponds to the conflict set.
The outcome is stored in beta-memories and corresponds to the conflict set.
\end{descriptionlist}
If more rules use the same pattern, the node of that pattern is reused, possibly outputting to different memories.
\end{description}
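The alpha/beta split can be sketched on toy facts (the fact encoding and the `rete_match` helper are illustrative; a real RETE network maintains the memories incrementally instead of recomputing them from scratch):

```python
def rete_match(facts, intra_tests, inter_test):
    """Tiny two-stage match: the alpha network filters facts per pattern,
    the beta network joins the alpha memories into the conflict set."""
    # Alpha network: intra-element tests, one memory per pattern.
    alpha_memories = [[f for f in facts if test(f)] for test in intra_tests]
    # Beta network: inter-element test joining the two alpha memories.
    return [(f1, f2) for f1 in alpha_memories[0] for f2 in alpha_memories[1]
            if inter_test(f1, f2)]

facts = [("block", "A", 3), ("block", "B", 1), ("table", "T", 0)]
conflict_set = rete_match(
    facts,
    intra_tests=[lambda f: f[0] == "block",       # pattern 1: blocks only
                 lambda f: f[0] == "block"],      # pattern 2: blocks only
    inter_test=lambda f1, f2: f1[2] > f2[2],      # join: first heavier than second
)
# → [(("block", "A", 3), ("block", "B", 1))]
```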
@ -83,7 +83,7 @@ The best approach depends on the use case.
\subsection{Execution}
By default, RETE executes all the rules in the agenda and
then checks possible side effects that modified the working memory in a second moment.
then checks for side effects that modified the working memory.
Note that it is very easy to create loops.
@ -162,7 +162,7 @@ RETE-based rule engine that uses Java.
Event detected outside an event processing system (e.g. a sensor). It does not provide any information alone.
\item[Complex event] \marginnote{Complex event}
Event generated by an event processing system and provides higher informative payload.
Event generated by an event processing system; it provides a higher informative payload.
\item[Complex event processing (CEP)] \marginnote{Complex event processing}
Paradigm for dealing with a large amount of information.
@ -185,7 +185,7 @@ Drools supports CEP by representing events as facts.
\end{description}
\item[Expiration]
Mechanism to specify an expiration time to events and discard them from the working memory.
Mechanism to specify an expiration time for events and discard them from the working memory.
\item[Temporal reasoning]
Allen's temporal operators for temporal reasoning.

View File

@ -17,7 +17,7 @@
Properties:
\begin{itemize}
\item Should be applicable to almost any special domain.
\item Combining general concepts should not incur in inconsistences.
\item Combining general concepts should not lead to inconsistencies.
\end{itemize}
Approaches to create ontologies:
@ -35,7 +35,7 @@
\item[Category] \marginnote{Category}
Used in human reasoning when the goal is category-driven (in contrast to specific-instance-driven).
In first order logic, categories can be represented through:
In first-order logic, categories can be represented through:
\begin{descriptionlist}
\item[Predicate] \marginnote{Predicate categories}
A predicate to tell if an object belongs to a category
@ -158,7 +158,7 @@ A property of objects.
\section{Semantic networks}
\marginnote{Semantic networks}
Graphical representation of objects and categories connected through labelled links.
Graphical representation of objects and categories connected through labeled links.
\begin{figure}[h]
\centering
@ -189,7 +189,7 @@ Graphical representation of objects and categories connected through labelled li
\begin{description}
\item[Limitations]
Compared to first order logic, semantic networks do not have:
Compared to first-order logic, semantic networks do not have:
\begin{itemize}
\item Negations.
\item Universally and existentially quantified properties.
@ -202,7 +202,7 @@ Graphical representation of objects and categories connected through labelled li
This approach is powerful but does not have a corresponding logical meaning.
\item[Advantages]
With semantic networks it is easy to attach default properties to categories and
With semantic networks, it is easy to attach default properties to categories and
override them on the objects (i.e. \texttt{Legs} of \texttt{John}).
\end{description}
@ -213,7 +213,7 @@ Graphical representation of objects and categories connected through labelled li
Knowledge that describes an object in terms of its properties.
Each frame has:
\begin{itemize}
\item An unique name
\item A unique name
\item Properties represented as pairs \texttt{<slot - filler>}
\end{itemize}

View File

@ -3,7 +3,7 @@
\begin{description}
\item[Probabilistic logic programming] \marginnote{Probabilistic logic programming}
Adds probability distributions over logic programs allowing to define different worlds.
Joint distributions can also be defined over worlds and allows to answer to queries.
Joint distributions can also be defined over worlds and make it possible to answer queries.
\end{description}

View File

@ -44,7 +44,7 @@ It may be useful to first have a look at the "Logic programming" section of
Variables appearing in a fact are quantified universally.
\[ \texttt{A(X).} \equiv \forall \texttt{X}: \texttt{A(X)} \]
\item[Rules]
Variables appearing the the body only are quantified existentially.
Variables appearing in the body only are quantified existentially.
Variables appearing in both the head and the body are quantified universally.
\[ \texttt{A(X) :- B(X, Y).} \equiv \forall \texttt{X}, \exists \texttt{Y} : \texttt{A(X)} \Leftarrow \texttt{B(X, Y)} \]
@ -72,7 +72,7 @@ It may be useful to first have a look at the "Logic programming" section of
\end{descriptionlist}
\item[SLD resolution] \marginnote{SLD}
Prolog uses SLD resolution with the following choices:
Prolog uses SLD resolution with the following choices:
\begin{descriptionlist}
\item[Left-most] Always proves the left-most literal first.
\item[Depth-first] Applies the predicates following the order of definition.
@ -204,7 +204,7 @@ Therefore, if \texttt{qj, \dots, qn} fails, there won't be backtracking and \tex
Adding new axioms to the program may change the set of valid theorems.
\end{description}
As first-order logic in undecidable, closed-world assumption cannot be directly applied in practice.
As first-order logic is undecidable, the closed-world assumption cannot be directly applied in practice.
\item[Negation as failure] \marginnote{Negation as failure}
A negated atom $\lnot A$ is considered true iff $A$ fails in finite time:
@ -222,9 +222,8 @@ Therefore, if \texttt{qj, \dots, qn} fails, there won't be backtracking and \tex
\begin{itemize}
\item If \texttt{L$_i$} is positive, apply the normal SLD resolution.
\item If \texttt{L$_i$} = $\lnot A$, prove that $A$ fails in finite time.
If it succeeds, \texttt{L$_i$} fails.
\end{itemize}
\item Solve the goal \texttt{:- L$_1$, \dots, L$_{i-1}$, L$_{i+1}$, \dots L$_m$}.
\item Solve the remaining goal \texttt{:- L$_1$, \dots, L$_{i-1}$, L$_{i+1}$, \dots, L$_m$}.
\end{enumerate}
\begin{theorem}
@ -407,7 +406,7 @@ father(mario, paola).
The operator \texttt{T =.. L} unifies \texttt{L} with a list where
its head is the head of \texttt{T} and the tail contains the remaining arguments of \texttt{T}
(i.e. puts all the components of a predicate into a list).
Only one between \texttt{T} and \texttt{L} may be a variable.
Only one between \texttt{T} and \texttt{L} can be a variable.
\begin{example} \phantom{} \\
\begin{minipage}{0.5\textwidth}
@ -458,7 +457,7 @@ father(mario, paola).
Note that \texttt{:- assert((p(X)))} quantifies \texttt{X} existentially as it is a query.
If it is not ground and added to the database as is,
is becomes a clause and therefore quantified universally: $\forall \texttt{X}: \texttt{p(X)}$.
it becomes a clause and therefore quantified universally: $\forall \texttt{X}: \texttt{p(X)}$.
\begin{example}[Lemma generation] \phantom{}
\begin{lstlisting}[language={}]
@ -473,7 +472,7 @@ father(mario, paola).
generate_lemma(T) :- assert(T).
\end{lstlisting}
\texttt{generate\_lemma/1} allows to add to the clauses database all the intermediate steps to compute the Fibonacci sequence
The custom-defined \texttt{generate\_lemma/1} allows adding all the intermediate steps of the Fibonacci computation to the clauses database
(similar concept to dynamic programming).
\end{example}
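A possible completion of the example above (the \texttt{fib/2} clauses are a reconstruction; \texttt{asserta/1} is used instead of \texttt{assert/1} so that lemmas are placed before the recursive clause and are found without recomputation):
\begin{lstlisting}[language={}]
:- dynamic fib/2.
fib(0, 0).
fib(1, 1).
fib(N, F) :-
    N > 1,
    N1 is N - 1, N2 is N - 2,
    fib(N1, F1), fib(N2, F2),
    F is F1 + F2,
    generate_lemma(fib(N, F)).

generate_lemma(T) :- asserta(T).

% ?- fib(20, F).   F = 6765, in linear time thanks to the lemmas
\end{lstlisting}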


@ -19,7 +19,7 @@
\item[Uniform resource identifier] \marginnote{URI}
Naming system to uniquely identify concepts.
Each URI correspond to one and only one concept, but multiple URIs can refer to the same concept.
Each URI corresponds to one and only one concept, but multiple URIs can refer to the same concept.
\item[XML] \marginnote{XML}
Markup language to represent hierarchically structured data.
@ -74,7 +74,7 @@ xmlns:contact=http://www.w3.org/2000/10/swap/pim/contact#>
\item[Database similarities]
RDF aims to integrate different databases:
\begin{itemize}
\item A DB record is a RDF node.
\item A DB record is an RDF node.
\item The name of a column can be seen as a property type.
\item The value of a field corresponds to the value of a property.
\end{itemize}
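For instance, a row of a hypothetical \texttt{Person} table (names and namespaces are illustrative) can be seen as:
\begin{lstlisting}[language={}]
<rdf:Description rdf:about="http://example.org/db/Person/42">
  <!-- columns become property types, fields become values -->
  <ex:name>Alice</ex:name>
  <ex:city>Bologna</ex:city>
</rdf:Description>
\end{lstlisting}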
@ -87,8 +87,8 @@ xmlns:contact=http://www.w3.org/2000/10/swap/pim/contact#>
Language to query different data sources that support RDF (natively or through middleware).
\item[Ontology web language (OWL)] \marginnote{Ontology web language (OWL)}
Ontology based on RDF and description logic fragments.
Three level of expressivity are available:
Ontology based on RDF and description logic fragments.
Three levels of expressivity are available:
\begin{itemize}
\item OWL lite.
\item OWL DL.


@ -5,12 +5,12 @@
\begin{description}
\item[State] \marginnote{State}
The current state of the world can be represented as a set of propositions that are true according the observation of an agent.
The current state of the world can be represented as a set of propositions that are true according to the observation of an agent.
The union of a countable sequence of states represents the evolution of the world. Each proposition is distinguished by its time step.
\begin{example}
A child has a bow and an arrow, then shoots the arrow.
A child has a bow and an arrow and then shoots the arrow.
\[
\begin{split}
\text{KB}^0 &= \{ \texttt{hasBow}^0, \texttt{hasArrow}^0 \} \\
@ -51,7 +51,7 @@
\section{Situation calculus (Green's formulation)}
Situation calculus uses first order logic instead of propositional logic.
Situation calculus uses first-order logic instead of propositional logic.
\begin{description}
\item[Situation] \marginnote{Situation}
@ -142,8 +142,8 @@ Event calculus reifies fluents and events (actions) as terms (instead of predica
\begin{description}
\item[Deductive reasoning]
Event calculus only allows deductive reasoning:
it takes as input the domain-dependant axioms and a set of events, and computes a set of true fluents.
If a new event is observed, the query need to be recomputed again.
it takes as input the domain-dependent axioms and a set of events, and computes a set of true fluents.
If a new event is observed, the query needs to be recomputed.
\end{description}
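Deductive reasoning in event calculus can be sketched in Prolog with the classical \texttt{holds\_at}/\texttt{clipped} axioms (a simplified formulation; the domain-dependent axioms and events are illustrative):
\begin{lstlisting}[language={}]
% F holds at T if an earlier event initiated it
% and no event terminated it in between
holds_at(F, T) :-
    happens(E, T1), T1 < T,
    initiates(E, F),
    \+ clipped(F, T1, T).
clipped(F, T1, T2) :-
    happens(E, T), T1 < T, T < T2,
    terminates(E, F).

% Domain-dependent axioms and observed events
initiates(shoot, flying).
terminates(land, flying).
happens(shoot, 1).
happens(land, 5).

% ?- holds_at(flying, 3).   succeeds
% ?- holds_at(flying, 7).   fails
\end{lstlisting}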
@ -183,7 +183,7 @@ Allows to add events dynamically without the need to recompute the result.
\section{Allen's logic of intervals}
Event calculus only captures instantaneous events that happen in given points in time.
Event calculus only captures instantaneous events that happen at given points in time.
\begin{description}
\item[Allen's logic of intervals] \marginnote{Allen's logic of intervals}
@ -217,7 +217,7 @@ Event calculus only captures instantaneous events that happen in given points in
\section{Modal logics}
Logic based on interacting agents with their own knowledge base.
Logic based on interacting agents, each with its own knowledge base.
\begin{description}
\item[Propositional attitudes] \marginnote{Propositional attitudes}
@ -226,7 +226,7 @@ Logic based on interacting agents with their own knowledge base.
First-order logic is not suited to represent these operators.
\item[Modal logics] \marginnote{Modal logics}
Modal logics have the same syntax of first-order logic with the addition of modal operators.
Modal logics have the same syntax as first-order logic with the addition of modal operators.
\item[Modal operator]
A modal operator takes as input the name of an agent and a sentence (instead of a term as in FOL).
@ -260,7 +260,7 @@ Logic based on interacting agents with their own knowledge base.
\end{itemize}
\begin{example}
Alice is in a room an tosses a coin. Bob is in another room an will enter Alice's room when the coin lands to observe the result.
Alice is in a room and tosses a coin. Bob is in another room and will enter Alice's room when the coin lands to observe the result.
We define a model $M = (S, \pi, K_\texttt{a}, K_\texttt{b})$ on $\phi$ where:
\begin{itemize}
@ -359,7 +359,7 @@ The accessibility relation maps into the temporal dimension with two possible ev
\end{description}
\item[Semantics]
Given a Kripke structure $M = (S, \pi, K_\texttt{1}, \dots, K_\texttt{n})$ where states are represented using integers,
Given a Kripke structure $M = (S, \pi, K_\texttt{1}, \dots, K_\texttt{n})$, where states are represented using integers,
the semantics of the operators is the following:
\begin{itemize}
\item $(M, i) \models P \iff i \in \pi(P)$.
@ -370,5 +370,5 @@ The accessibility relation maps into the temporal dimension with two possible ev
\end{itemize}
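As a worked instance of this semantics, consider an assumed two-state model with $S = \{1, 2\}$, $\pi(P) = \{1\}$, $K_\texttt{a} = \{(1,1), (2,2)\}$ and $K_\texttt{b} = S \times S$.
Then $(M, 1) \models P$ and $(M, 1) \models K_\texttt{a} P$, as the only state that \texttt{a} considers possible from $1$ is $1$ itself,
while $(M, 1) \not\models K_\texttt{b} P$ since $(1, 2) \in K_\texttt{b}$ and $2 \notin \pi(P)$.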
\item[Model checking] \marginnote{Model checking}
Methods to prove properties of linear-time temporal logic based finite state machines or distributed systems.
Methods to prove properties of finite state machines or distributed systems based on linear-time temporal logic.
\end{description}