Commit fb69e4d (1 parent: 49141d7)

Commit message: terminada la memoria ("the thesis is finished")

File tree: 4 files changed, +58 −51 lines


Memoria TFM/Capitulos/01Capitulo1.tex

Lines changed: 1 addition & 1 deletion
@@ -66,7 +66,7 @@ \subsection{Enterprise security}
 even to company applications.
 
 \figuraEx{Vectorial/proposed_diagram.pdf}{width=0.9\textwidth}{fig:proposed_diagram}%
-{Architecture approach of an Enterprise Network assuming that the Company has adopted the \ac{BYOD} philosophy.}{Architecture approach of an Enterprise Network assuming that the Company has adopted the \ac{BYOD} philosophy.}
+{Architecture approach of an Enterprise Network assuming that the Company has adopted the BYOD philosophy.}{Architecture approach of an Enterprise Network assuming that the Company has adopted the BYOD philosophy.}
 
 This situation leads to a need of protecting the organisation's side, but also the users' side, making non-interfering easy-to-follow \ac{ISP}s, and leaving them to use their devices for personal purposes while working, without putting organisation's information assets under risk. The compliance of these requirements would compose an end-to-end security solution (protecting both enterprise and employee).

Memoria TFM/Capitulos/04Capitulo4.tex

Lines changed: 1 addition & 1 deletion
@@ -173,7 +173,7 @@ \section{Weka}
 A reference to each Weka classifier can be found at \citep{Frank2011}. Below are described the top five techniques, obtained from the best results of the experiments done in this stage, along with more specific bibliography. Naïve Bayes method \citep{Bayesian_Classifier_97} has been included as a baseline, normally used in text categorization problems. According to the results, the five selected classifiers are much better than this method. Deeper results are shown in Chapter \ref{cap5:results}.
 
 \begin{description}
-\item[Naïve Bayes] It is the classification technique that we have used as a reference for either its simplicity and ease to understand. Its basis relies on the Bayes Theorem and the possibility of represent the relationship between two random variables as a Bayesian network \ref{rish2001empirical}. Then, by assigning values to the variables probabilities, the probabilities of the occurrences between them can be obtained. Thus, assuming that a set of attributes are independent one from another, and using the Bayes Theorem, patterns can be classified without the need of trees or rule creation, just by calculating probabilities.
+\item[Naïve Bayes] It is the classification technique that we have used as a reference for either its simplicity and ease to understand. Its basis relies on the Bayes Theorem and the possibility of represent the relationship between two random variables as a Bayesian network \citep{rish2001empirical}. Then, by assigning values to the variables probabilities, the probabilities of the occurrences between them can be obtained. Thus, assuming that a set of attributes are independent one from another, and using the Bayes Theorem, patterns can be classified without the need of trees or rule creation, just by calculating probabilities.
 \item[J48] This classifier generates a pruned or unpruned C4.5 decision tree. Described for the first time in 1993 by \citep{Quinlan1993}, this machine learning method builds a decision tree selecting, for each node, the best attribute for splitting and create the next nodes. An attribute is selected as `the best' by evaluating the difference in entropy (information gain) resulting from choosing that attribute for splitting the data. In this way, the tree continues to grow till there are not attributes anymore for further splitting, meaning that the resulting nodes are instances of single classes.
 \item[Random Forest] This manner of building a decision tree can be seen as a randomization of the previous C4.5 process. It was stated by \citep{Breiman2001} and consist of, instead of choosing `the best' attribute, the algorithm randomly chooses one between a group of attributes from the top ones. The size of this group is customizable in Weka.
 \item[REP Tree] Is another kind of decision tree, it means Reduced Error Pruning Tree. Originally stated by \citep{Quinlan1987}, this method builds a decision tree using information gain, like C4.5, and then prunes it using reduced-error pruning. That means that the training dataset is divided in two parts: one devoted to make the tree grow and another for pruning. For every subtree (not a class/leaf) in the tree, it is replaced by the best possible leaf in the pruning three and then it is tested with the test dataset if the made prune has improved the results. A deep analysis about this technique and its variants can be found in \citep{Elomaa2001}.
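The Naïve Bayes baseline described in this diff can be sketched in a few lines. The following is a minimal, hypothetical illustration (toy token/label data invented here, not the thesis dataset, which was processed with Weka): assuming the attributes are independent, each class score is the class prior multiplied by the per-attribute likelihoods, computed in log space with Laplace smoothing to avoid zero probabilities.

```python
from collections import Counter, defaultdict
import math

def train(samples):
    """samples: list of (tokens, label). Returns priors, per-class token counts, vocabulary."""
    priors = Counter()
    counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in samples:
        priors[label] += 1
        counts[label].update(tokens)
        vocab.update(tokens)
    return priors, counts, vocab

def classify(tokens, priors, counts, vocab):
    """Pick the class maximising log P(class) + sum of log P(token | class)."""
    total = sum(priors.values())
    best, best_score = None, float("-inf")
    for label, prior in priors.items():
        score = math.log(prior / total)  # class prior
        denom = sum(counts[label].values()) + len(vocab)  # Laplace smoothing
        for t in tokens:
            score += math.log((counts[label][t] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical toy data for illustration only
samples = [(["free", "win", "money"], "spam"),
           (["meeting", "report"], "ham"),
           (["win", "prize", "free"], "spam"),
           (["project", "meeting"], "ham")]
model = train(samples)
print(classify(["free", "prize"], *model))  # prints "spam"
```

As the diff's description says, no tree or rule construction is needed: classification reduces to multiplying (here, summing logs of) probabilities, which is why the method is so cheap and why it serves as a reasonable baseline for text categorisation.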
