Introduction to ML with Decision Trees

By Sajad Darabi

January 19, 2018

Before I came here I was confused about this subject. Having listened to your lecture I am still confused. But on a higher level. - Enrico Fermi

Content

  • Machine Learning Concepts
  • Maximum Likelihood
  • Decision Tree Algorithm
  • Scikit Learn Decision Tree Classifier

Machine Learning Concepts

Definitions and Terminology

  • Example: a particular instance of data
  • Features: set of attributes, represented as a vector $x_i$
  • Labels:
    • in classification, a category associated with an example
    • in regression, a real-valued number associated with an example
  • Training data: data used for training an algorithm
  • Test data: data used for evaluating the model trained
  1. Collect the data of interest, e.g. cats & dogs, vital signals, etc.
  2. Pre-process and clean the data. Data does not always come in a form that is convenient to feed to a ML model; it must be cleaned before it is used for training and testing.
  3. Once the data is ready, choose a ML model suited to the goal we are trying to achieve (classification, regression, etc.), and train it using the training data.
  4. Test the trained model on test data, which the model has not been exposed to before (see the sketch below).
  5. Improve the model if it does not meet requirements.
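
A minimal end-to-end sketch of steps 3–5 in scikit-learn (the Iris data here is just a stand-in dataset, not one from the lecture):

```python
# Minimal sketch of the train / test / improve loop (steps 3-5).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)              # stand-in for cleaned data

# hold out test data the model will never see during training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier()               # step 3: choose and train
model.fit(X_train, y_train)

y_pred = model.predict(X_test)                 # step 4: evaluate on unseen data
print("test accuracy:", accuracy_score(y_test, y_pred))
# step 5: if accuracy is too low, revisit features, model, or hyperparameters
```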

Types of Learning

Supervised learning: given a set of features $x_i$ and labels $y_i$, train a model to predict the label $y_{new}$ for an unseen feature $x_{new}$.

$$F_{model}: x_i \rightarrow y_i$$

Example: a dataset containing breast tissue images, each labelled as either cancerous or non-cancerous.

Unsupervised learning: given a set of features $x_i$, find patterns in the data and assign each example to a class $y_c$. $$F_{model}: x_i \rightarrow y_c$$

Example: cluster a set of patients according to their weight, height, etc. (see the sketch below).
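
As a toy illustration of the mapping $F_{model}: x_i \rightarrow y_c$, here is a k-means sketch clustering patients by height and weight (the measurements and cluster count are made up for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# hypothetical (height cm, weight kg) measurements for six patients
X = np.array([[160, 55], [158, 52], [165, 60],
              [182, 90], [178, 85], [185, 95]])

# no labels are given; k-means discovers the two groups on its own
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)   # the cluster id y_c assigned to each patient
```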

Learning Problems

Regression: here the goal is to estimate a real-valued variable $y \in \mathbb{R}$ given an input vector $x_i$

Example: predict weight of person given height, gender, race, etc.

Classification: given an input feature $x_i$, categorize it and assign it a label $y \in Y := \{y_1, \dots, y_k\}$

Example: classify a document as written in $\{\text{English}, \text{French}, \text{Spanish}, \dots\}$

Types of Classifiers

  1. Instance based:
    • memorize the training data and use it directly as a reference for classifying new examples
  2. Generative:
    • build a statistical model of the underlying system that generated the examples (i.e. learn a statistical model)
  3. Discriminative:
    • estimate a decision rule or boundary that splits the examples into different regions corresponding to the different classes.

Learning Formulation

Goal: learn a function $f: x \rightarrow y$ to make predictions

  1. Define a learning process.
    • supervised learning: define a loss function $l(x_i, y_i)$ that incurs some loss on bad predictions (two common examples follow below)
    • unsupervised learning: learn a distribution of the data or an underlying structure.
  2. Train the model using the training dataset
  3. Make prediction using the trained model
    • hope it generalizes well...
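
For concreteness, two standard loss functions (not specific to this lecture): the squared loss used in regression and the zero-one loss used in classification.

$$l(f(x_i), y_i) = (f(x_i) - y_i)^2 \qquad \text{(squared loss, regression)}$$

$$l(f(x_i), y_i) = \mathbb{1}[f(x_i) \neq y_i] \qquad \text{(zero-one loss, classification)}$$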

Features are Important

Choosing the right features to be used for a particular learning task could be quite a hassle!

  • bad features are uncorrelated with the labels, making it hard for the algorithm to learn
  • good features correlate with the labels and make the model effective.

Generalization

Generalization is synonymous with not memorizing the training dataset.

  • We are interested in the performance of the model on unseen data
  • Minimizing error on the training set does not guarantee good generalization

You can overfit your model to the training set if your hypothesis is too complex!

Or underfit if your model is too simple.

Occam's Razor: given a simpler model that fits the data adequately relative to a more complex model, the simpler model should be chosen, as it makes fewer assumptions about the underlying system.

Entropy

Information

Entropy is a foundational concept in information theory.

Information can be thought of as stored in a variable that can take on different values.

We get information by looking at the value of that variable, just the way we get information by going to the next slide and reading its content.

The entropy defined in information theory is related to the entropy of systems in statistical mechanics:

$$H(X) = -\sum_{x \in X} p(x)log(p(x))$$

  • it is the uncertainty of a random variable, in this case the random variable $X$.

It measures the randomness contained in that random variable.

  • The higher the entropy, the harder it is to draw conclusions from the outcome of the random variable (see the sketch below).
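
The formula translates directly into code; a minimal sketch (`entropy` here is our own helper, not a library function):

```python
import math

def entropy(probs, base=2):
    """H(X) = -sum p(x) log p(x), skipping zero-probability outcomes."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # fair coin: 1.0 bit, maximal uncertainty
print(entropy([0.9, 0.1]))    # biased coin: ~0.47 bits
print(entropy([0.25] * 4))    # uniform over 4 outcomes: 2.0 bits
```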

Conditional Entropy

We can also define the conditional entropy

$$H(X|Y) = -\sum_{y \in Y} p(y) \sum_{x\in X}p(x~|~y)log(p(x~|~y)) = -\sum_{x, y}p(x, y)log(p(x~|~y))$$

which is the randomness in a random variable given knowledge of the state of another random variable.

If the base of the log used is $e$ or $2$, the units for $H(X)$ are nats or bits, respectively.

Definition of Entropy

Entropy, the way it is defined, can be derived in different ways; a common one is the axiomatic approach, in which we require the measure to satisfy a set of natural properties.

You can also think of entropy as the expected value of $log(\frac{1}{p(x)})$

$$H(X) = E\left[log(\frac{1}{p(X)})\right] = \sum_{x \in X} p(x)log(\frac{1}{p(x)})$$

Example

What is the entropy associated with $X \sim bernoulli(p)$ ?

For what $p$ is the entropy maximum?
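
Working it out (with base-2 logs):

$$H(X) = -p\,log(p) - (1-p)\,log(1-p)$$

Setting $\frac{dH}{dp} = log(\frac{1-p}{p}) = 0$ gives $p = \frac{1}{2}$: the entropy is maximal (1 bit) for a fair coin, the hardest Bernoulli variable to predict.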

Mutual Information

Mutual information quantifies the amount of uncertainty about one random variable that is removed upon observing another.

It is calculated using

\begin{align} I(X; Y) &= \sum_{x, y}p(x, y)log\frac{p(x, y)}{p(x)p(y)}\\ &= H(X) - H(X|Y)\\ &= H(Y) - H(Y|X) \end{align}

This is also called the information gain upon observing $Y$.
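
A small sketch computing $I(X;Y)$ directly from a joint distribution table (the joint probabilities are made-up numbers and `mutual_information` is our own helper):

```python
import math

def mutual_information(joint):
    """I(X;Y) = sum_{x,y} p(x,y) log2( p(x,y) / (p(x) p(y)) )."""
    px = [sum(row) for row in joint]            # marginal p(x)
    py = [sum(col) for col in zip(*joint)]      # marginal p(y)
    return sum(pxy * math.log2(pxy / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, pxy in enumerate(row) if pxy > 0)

# hypothetical joint distribution p(x, y) over two binary variables
joint = [[0.4, 0.1],
         [0.1, 0.4]]
print(mutual_information(joint))   # ~0.278 bits of uncertainty removed
```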

Decision Trees

A decision tree is a discriminative classifier:

  • it estimates a decision rule/boundary amongst examples

  • an intuitive classifier

  • easy to understand, construct and visualize

Given its simplicity, it actually works very well in practice!

The Algorithm

  • We split on a particular attribute at each node, partitioning the feature space so that each leaf corresponds to a certain class.
  • The splitting criterion can differ depending on the type of algorithm used.

ID3 Algorithm

In the ID3 algorithm we decide which attribute to split on based on entropy

  • a measure of uncertainty (impurity) associated with the attribute

The uncertainty at a node about the class of an instance is given by

$$H(D) = -\sum_{i=1}^{n} p_i log(p_i)$$

where $p_i$ is the proportion of examples in $D$ belonging to class $i$, out of $n$ classes.

We can potentially reduce this uncertainty by splitting that node using an attribute.

$$H_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} H(D_j)$$

which is the weighted average of the entropies of the partitions $D_j$ induced by the $v$ distinct values of attribute $A$.

The criterion we use in ID3 is the information gained after splitting on attribute $A$:

$$Gain(A) = H(D) - H_A(D)$$
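
As a sketch, the gain computation on plain label lists (`entropy` and `information_gain` are our own helper names, not from a library):

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    """H(D) for a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute_values, labels):
    """Gain(A) = H(D) - H_A(D) for a single attribute column."""
    groups = defaultdict(list)
    for value, label in zip(attribute_values, labels):
        groups[value].append(label)       # partition D by attribute value
    weighted = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return entropy(labels) - weighted
```

Applied to the Outlook column of the dataset below, this reproduces the standard worked value $Gain(Outlook) \approx 0.246$ bits.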

Build the tree for the following dataset using ID3:

| Day | Outlook  | Temperature | Humidity | Wind   | Play ball |
|-----|----------|-------------|----------|--------|-----------|
| D1  | Sunny    | Hot         | High     | Weak   | No        |
| D2  | Sunny    | Hot         | High     | Strong | No        |
| D3  | Overcast | Hot         | High     | Weak   | Yes       |
| D4  | Rain     | Mild        | High     | Weak   | Yes       |
| D5  | Rain     | Cool        | Normal   | Weak   | Yes       |
| D6  | Rain     | Cool        | Normal   | Strong | No        |
| D7  | Overcast | Cool        | Normal   | Strong | Yes       |
| D8  | Sunny    | Mild        | High     | Weak   | No        |
| D9  | Sunny    | Cool        | Normal   | Weak   | Yes       |
| D10 | Rain     | Mild        | Normal   | Weak   | Yes       |
| D11 | Sunny    | Mild        | Normal   | Strong | Yes       |
| D12 | Overcast | Mild        | High     | Strong | Yes       |
| D13 | Overcast | Hot         | Normal   | Weak   | Yes       |
| D14 | Rain     | Mild        | High     | Strong | No        |

Solution can be found here
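
The outline also mentions the scikit-learn classifier; here is a minimal sketch on the play-ball data above (the one-hot encoding and column names are our own choices, and `criterion="entropy"` mirrors ID3's measure, though scikit-learn grows CART-style binary trees rather than ID3's multi-way splits):

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# the play-ball dataset from the table above
data = pd.DataFrame({
    "Outlook":     ["Sunny","Sunny","Overcast","Rain","Rain","Rain","Overcast",
                    "Sunny","Sunny","Rain","Sunny","Overcast","Overcast","Rain"],
    "Temperature": ["Hot","Hot","Hot","Mild","Cool","Cool","Cool",
                    "Mild","Cool","Mild","Mild","Mild","Hot","Mild"],
    "Humidity":    ["High","High","High","High","Normal","Normal","Normal",
                    "High","Normal","Normal","Normal","High","Normal","High"],
    "Wind":        ["Weak","Strong","Weak","Weak","Weak","Strong","Strong",
                    "Weak","Weak","Weak","Strong","Strong","Weak","Strong"],
    "PlayBall":    ["No","No","Yes","Yes","Yes","No","Yes",
                    "No","Yes","Yes","Yes","Yes","Yes","No"],
})

# categorical attributes must be encoded numerically for scikit-learn
X = pd.get_dummies(data.drop(columns="PlayBall"))
y = data["PlayBall"]

# entropy criterion mirrors ID3's information-gain splits
clf = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(export_text(clf, feature_names=list(X.columns)))
```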

That's all for now

Thanks!