Commit 2c7679f

reformatted learning, some fixes
1 parent e6c9769 commit 2c7679f

2 files changed: +26 -31 lines changed


src/chaps/artificialintelligence.tex

Lines changed: 2 additions & 2 deletions

@@ -31,6 +31,6 @@ \section{Artificial Intelligence}%
 \citep[p.24ff.]{russell2016artificial, arulkumaran2017brief}. The reasons for this are diverse but it can be argued that
 the combination of readily available computing power through cloud computing and advances in the mathematical
 underpinnings have allowed for fast-paced advances in recent years. Also, the currently very popular \acf {NN}
-architectures often require large amouts of data to learn which have lately been readily available for companies and
-:esearchers through the adoption of online technologies by the majority of the population
+architectures often require large amounts of data to learn which have lately been readily available for companies and
+researchers through the adoption of online technologies by the majority of the population
 \citep[p.27]{russell2016artificial}.

src/chaps/learning.tex

Lines changed: 24 additions & 29 deletions

@@ -1,34 +1,29 @@
-According to \cite{russell2016artificial}, learning agents are those that
-\emph{improve their performance on future tasks after making observations about
-the world} \cite[p.693]{russell2016artificial}. Learning behavior is present in
-many species most notably humans. To create a learning algorithm means that the
-creator did not have to anticipate every potential variant of an environment
-that the agent is confronted with while still creating an agent that can act
-successfully in such environments. Cognitive Sciences define learning as the
-change of state due to experiences as a necessary requirement and often limit
-the recognition of learning to some observable behavior
-\cite[p.96f.]{cognition1999}. This applies to all known species and the same
-definition can easily be applied to a learning artificial agent. A learning
-agent that doesn't change its behavior is not very helpful and an agent that
+According to \cite{russell2016artificial}, learning agents are those that \emph{improve their performance on future
+tasks after making observations about the world} \cite[p.693]{russell2016artificial}. Learning behavior is present in
+many species most notably humans. To create a learning algorithm means that the creator did not have to anticipate every
+potential variant of an environment that the agent is confronted with while still creating an agent that can act
+successfully in such environments. Cognitive Sciences define learning as the change of state due to experiences as a
+necessary requirement and often limit the recognition of learning to some observable behavior
+\cite[p.96f.]{cognition1999}. This applies to all known species and the same definition can easily be applied to a
+learning artificial agent. A learning agent that doesn't change its behavior is not very helpful and an agent that
 doesn't change its state can hardly have learned something.
 
-The \ac {AI} community has for many years employed a \emph{loss function} as a
-measure of learning progress. Loss functions describe the difference between the
-actual utility of the right actions versus the results of the agents learned
-actions. The exact loss function might be a mean squared error function or an
-absolute loss depending on the learning algorithm that is used.
+The \ac {AI} community has for many years employed a \emph{loss function} as a measure of learning progress. Loss
+functions describe the difference between the actual utility of the right actions versus the results of the agents
+learned actions. The exact loss function might be a mean squared error function or an absolute loss depending on the
+learning algorithm that is used.
 
-Computational learning theory looks at many different problems of learning: How
-to learn through a large number of examples, the effects of learning when the
-agent already knows something, how to learn without examples, how to learn
-through feedback from the environment and how to learn if the origin of the
-feedback is not deterministic \cite[]{russell2016artificial}. In this work, two
-of those problems are of special interest: The ability to learn from previously
-labelled examples and the ability to learn through feedback from the
-environment. The former is called \ac {SL} and the latter is mostly
-referred to as \ac {RL}.
+Computational learning theory looks at many different problems of learning: How to learn through a large number of
+examples, the effects of learning when the agent already knows something, how to learn without examples, how to learn
+through feedback from the environment and how to learn if the origin of the feedback is not deterministic
+\cite[]{russell2016artificial}. In this work, two of those problems are of special interest: The ability to learn from
+previously labelled examples and the ability to learn through feedback from the environment. The former is called \acl
+{SL} and the latter is mostly referred to as \acl {RL}. To understand the difference, it is also important to
+understand algorithms that don't have access to labels for existing data, yet are still able to derive value from the
+information. These belong to the class of \acf {UL}. Although this class is not heavily relied upon in the
+implementation of the actual agent in the later practical implementation, it is crucial for many tasks in machine
+learning such as data exploration or anomality recognition.
 
-The following sections will cover both mentioned forms of learning and
-Section~\ref{sec:neural_networks} will introduce an architecture that can be
-used as the learning function in these learning problems. Finally,
+The following sections will describe both \acl {SL} and \acl {UL} and Section~\ref{sec:neural_networks} will introduce
+an architecture that can be used as the learning function in these learning problems. Finally,
 Section~\ref{sec:Backpropagation} will explain how exactly \ac {NN} learn.
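For reference, the two loss functions named in the learning.tex hunk above can be written out as a minimal LaTeX
sketch. The notation is assumed for illustration and is not part of the committed files: $N$ labelled examples with
targets $y_i$ and agent predictions $\hat{y}_i$; the align environment assumes the amsmath package.

% Minimal sketch (assumed notation): mean squared error and absolute loss
% averaged over N labelled examples with targets y_i and predictions \hat{y}_i.
\begin{align}
  L_{\mathrm{MSE}} &= \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^{2} \\
  L_{\mathrm{abs}} &= \frac{1}{N} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right|
\end{align}

The squared form penalizes large errors more strongly, while the absolute form is more robust to outliers, which is
one reason the choice depends on the learning algorithm that is used.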
