Introduction
Here is a not-so-short note (rolling update) on machine learning and artificial intelligence. Main concepts and some references are presented. This note is based on Wikipedia, Andrew Ng's CS229 lecture notes from Stanford, the Machine Learning course of SJTU IEEE-yangyang, and the Machine Learning Crash Course of Google.
Supervised Learning
In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). Typical supervised algorithms include SVMs, regression, decision trees, naive Bayes, etc. Classification means the output takes discrete values. Three common classification methods are Logistic Regression, Fisher Linear Discriminant Analysis, and Nearest Neighbor Classification.
Logistic Regression
In logistic regression, the dependent variable is categorical. Therefore, logistic regression is a classification model (with discrete labels). Logistic regression can be binomial, ordinal or multinomial.
The binary logistic model is used to estimate the probability of a binary response based on one or more predictor (or independent) variables (features). The dependent variable can take only two values, e.g., "0" and "1". The conditional distribution is a Bernoulli distribution rather than a Gaussian distribution.
Fig. 1. The standard logistic function
The model function of the LR model is based on the logistic function:

$$g(z) = \frac{1}{1 + e^{-z}}$$

where $z$ is defined as the linear combination of the features:

$$z = \theta^T x = \theta_0 + \theta_1 x_1 + \cdots + \theta_n x_n$$

We can obtain the generalized linear model function parameterized by $\theta$:

$$h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}$$

The probability of the dependent variable is:

$$P(y = 1 \mid x; \theta) = h_\theta(x), \qquad P(y = 0 \mid x; \theta) = 1 - h_\theta(x)$$

or, written compactly,

$$p(y \mid x; \theta) = \left(h_\theta(x)\right)^{y} \left(1 - h_\theta(x)\right)^{1 - y}$$
Maximum likelihood estimation (MLE) is used to find the parameter values that maximize the likelihood function given the observations. Define the likelihood:

$$L(\theta) = \prod_{i=1}^{m} p\left(y^{(i)} \mid x^{(i)}; \theta\right) = \prod_{i=1}^{m} \left(h_\theta(x^{(i)})\right)^{y^{(i)}} \left(1 - h_\theta(x^{(i)})\right)^{1 - y^{(i)}}$$

In practice, it is often convenient to use the log-likelihood (natural logarithm):

$$\ell(\theta) = \log L(\theta) = \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right]$$
The negative log-likelihood can be interpreted as a measure of "encoding length": the number of bits (nats, for the natural logarithm) you expect to spend to encode this information, which is why it is also called the cross-entropy.
The problem is to find the parameters $\theta$ at which the log-likelihood is maximized:

$$\theta^{*} = \arg\max_{\theta} \ell(\theta)$$
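For concreteness, here is a minimal NumPy sketch (not part of the original notes) of the hypothesis $h_\theta(x)$ and the log-likelihood $\ell(\theta)$; the array shapes and the clipping for numerical stability are assumptions of this sketch:

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood(theta, X, y, eps=1e-12):
    """Log-likelihood l(theta) for binary labels y in {0, 1}.

    X: (m, n) design matrix (a column of ones may be included for the intercept)
    y: (m,) labels
    theta: (n,) parameter vector
    """
    h = sigmoid(X @ theta)
    h = np.clip(h, eps, 1.0 - eps)  # avoid log(0)
    return np.sum(y * np.log(h) + (1.0 - y) * np.log(1.0 - h))
```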
By taking derivatives, we can find $\theta^{*}$. There are several optimization methods:
- Gradient ascent (to find a local maximum)
- Newton's method
- Conjugate gradient
- etc.
Gradient ascent
Apply a gradient ascent update for every parameter $\theta_j$:

$$\theta_j := \theta_j + \alpha \frac{\partial \ell(\theta)}{\partial \theta_j}$$

where the plus sign means the update moves in the same direction as the gradient (ascent rather than descent), and $\alpha$ is the learning rate.

From the definition of $g(z)$, we have:

$$g'(z) = g(z)\left(1 - g(z)\right)$$

Thus,

$$\frac{\partial \ell(\theta)}{\partial \theta_j} = \sum_{i=1}^{m} \left(y^{(i)} - h_\theta(x^{(i)})\right) x_j^{(i)}$$
- batch gradient ascent method:

  $$\theta_j := \theta_j + \alpha \sum_{i=1}^{m} \left(y^{(i)} - h_\theta(x^{(i)})\right) x_j^{(i)}$$

- stochastic gradient ascent method:

  $$\theta_j := \theta_j + \alpha \left(y^{(i)} - h_\theta(x^{(i)})\right) x_j^{(i)}$$

where $\left(x^{(i)}, y^{(i)}\right)$ is a randomly chosen sample. We then repeatedly choose a sample and update the gradient and the weights.
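A hedged NumPy sketch of both update rules; the fixed learning rate, the iteration counts, and the zero initialization are illustrative assumptions rather than prescriptions from the notes:

```python
import numpy as np

def batch_gradient_ascent(X, y, alpha=0.1, n_iters=1000):
    """theta_j := theta_j + alpha * sum_i (y_i - h_theta(x_i)) * x_ij."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iters):
        h = 1.0 / (1.0 + np.exp(-X @ theta))
        theta += alpha * X.T @ (y - h)        # full-batch gradient of l(theta)
    return theta

def stochastic_gradient_ascent(X, y, alpha=0.1, n_iters=1000, seed=0):
    """Same update, but using one randomly chosen sample per step."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iters):
        i = rng.integers(m)                   # pick a random sample
        h_i = 1.0 / (1.0 + np.exp(-X[i] @ theta))
        theta += alpha * (y[i] - h_i) * X[i]
    return theta
```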
Newton’s method (Newton-Raphson method)
In numerical analysis, Newton's method is an iterative method for finding successively better approximations to the roots of a real-valued function $f(\theta) = 0$, based on a Taylor expansion. The process is repeated as:

$$\theta := \theta - \frac{f(\theta)}{f'(\theta)}$$

Now we need to solve $\ell'(\theta) = 0$, so the process can be written as:

$$\theta := \theta - \frac{\ell'(\theta)}{\ell''(\theta)}$$

In the vector case this becomes:

$$\theta := \theta - H^{-1} \nabla_\theta \ell(\theta)$$

where $H$ is the $(n+1) \times (n+1)$ Hessian matrix whose entries are given by:

$$H_{jk} = \frac{\partial^2 \ell(\theta)}{\partial \theta_j \, \partial \theta_k}$$

The problem can equivalently be stated as minimizing the averaged cost:

$$\min_\theta J(\theta), \qquad J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right]$$

(The factor $1/m$ simply averages the loss over the training samples; scaling by a positive constant does not change the minimizer, but it keeps the cost comparable across data sets of different sizes.)

Newton's method is then:

$$\theta := \theta - H^{-1} \nabla_\theta J(\theta)$$

where the gradient of the cost function is:

$$\nabla_\theta J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x^{(i)}$$

and the Hessian matrix is:

$$H = \frac{1}{m} \sum_{i=1}^{m} h_\theta(x^{(i)}) \left(1 - h_\theta(x^{(i)})\right) x^{(i)} \left(x^{(i)}\right)^T$$
(Here $H$ is an $(n+1) \times (n+1)$ matrix, and $x^{(i)}$ is the $(n+1)$-dimensional feature vector including the intercept term.)
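A minimal sketch of the Newton update for the averaged cost $J(\theta)$, under the same design-matrix conventions as the earlier sketches; solving the linear system with `np.linalg.solve` instead of forming $H^{-1}$ explicitly is a choice of this sketch:

```python
import numpy as np

def newton_logistic(X, y, n_iters=10):
    """theta := theta - H^{-1} grad J(theta) for logistic regression."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iters):
        h = 1.0 / (1.0 + np.exp(-X @ theta))
        grad = X.T @ (h - y) / m              # gradient of J(theta)
        W = h * (1.0 - h)                     # h_theta(x) * (1 - h_theta(x)) per sample
        H = (X.T * W) @ X / m                 # Hessian of J(theta)
        theta -= np.linalg.solve(H, grad)     # Newton step
    return theta
```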
Logistic regression has a linear decision boundary (hyperplane).
Linear discriminant analysis (LDA)
Linear discriminant analysis (LDA), also known as Fisher's linear discriminant, is a simple method for dimensionality reduction in linear classification. With LDA, we can find a linear combination of features that separates objects from two or more groups.
- Compared with logistic regression, LDA assumes that the independent variables are normally distributed (Gaussian distribution). Logistic regression is preferable in applications where this assumption is not valid.
- Compared with analysis of variance (ANOVA), ANOVA uses categorical independent variables and a continuous dependent variable while LDA adopts continuous independent variables and a categorical dependent variable.
- Compared with principal component analysis (PCA), PCA is an unsupervised learning method that does not take any class difference into account and is used only for dimensionality reduction, while LDA attempts to model the difference between the classes using labeled data.
- Compared with factor analysis, both look for linear combinations of variables that best explain the data, but factor analysis builds the feature combinations based on differences rather than similarities and is an interdependence technique, whereas LDA explicitly models the class differences and is a dependence technique.
A dependence method is one in which a variable or set of variables is identified as the dependent variable to be predicted or explained by other, independent variables. Dependence techniques include multiple regression analysis, discriminant analysis, and conjoint analysis. An interdependence method is one in which no single variable or group of variables is defined as being independent or dependent. The goal of interdependence methods is data reduction, or grouping things together. Cluster analysis, factor analysis, and multidimensional scaling are the most commonly used interdependence methods.
Consider the $D$-dimensional input vector $x$ and project it down to one dimension:

$$y = w^T x$$

where $w$ is the weight vector of the linear transformation. Thus we can place a threshold $c$ on $y$ and classify $x$ as class $C_1$ if $y \ge c$ and as class $C_2$ otherwise.

The dimensionality reduction leads to a considerable loss of information, and classes may become strongly overlapping in the low-dimensional space. The goal is to find a projection that maximizes the class separation by adjusting $w$. Therefore, we need to maximize the between-class scatter (using the difference of the mean values) and minimize the within-class scatter (using the covariance matrix).
Fig. 2. Comparison of a good projection and a bad projection in LDA

Algorithm of LDA
Two-class cases
The purpose of LDA is to maximize the "Rayleigh quotient":

$$J(w) = \frac{w^T S_B w}{w^T S_W w}$$

where $S_B$ is the between-class scatter matrix and $S_W$ is the within-class scatter matrix.

The mean of each class is:

$$m_k = \frac{1}{N_k} \sum_{n \in C_k} x_n, \qquad k = 1, 2$$

where $x_n$ is a vector in class $C_k$ and $N_k$ is the number of samples in that class.

The scatter matrix for each class is defined as $S_1$ and $S_2$:

$$S_k = \sum_{n \in C_k} \left(x_n - m_k\right)\left(x_n - m_k\right)^T, \qquad k = 1, 2$$

so that the within-class scatter matrix is $S_W = S_1 + S_2$, and the between-class scatter matrix is $S_B = (m_2 - m_1)(m_2 - m_1)^T$.

The solution that maximizes $J(w)$ can be obtained by differentiation and a Lagrange multiplier. Fixing the scale of $w$ with the constraint $w^T S_W w = 1$ gives one solution for $w$.

The Fisher linear discriminant can then be obtained from:

$$S_B w = \lambda S_W w \quad \Longleftrightarrow \quad S_W^{-1} S_B w = \lambda w$$

Thus $w$ is an eigenvector of the matrix $S_W^{-1} S_B$, and the solution is based on solving a generalized eigenvalue problem. In the two-class case, $w$ can be calculated directly as:

$$w \propto S_W^{-1} \left(m_2 - m_1\right)$$
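A minimal NumPy sketch of the two-class solution $w \propto S_W^{-1}(m_2 - m_1)$; the midpoint threshold on the projected means is a simplifying assumption of this sketch, not something prescribed by the notes:

```python
import numpy as np

def fisher_lda_2class(X1, X2):
    """Fisher discriminant direction for two classes.

    X1, X2: (N1, D) and (N2, D) arrays of samples for class 1 and class 2.
    Returns the projection vector w and a simple midpoint threshold c.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = (X1 - m1).T @ (X1 - m1)          # scatter matrix of class 1
    S2 = (X2 - m2).T @ (X2 - m2)          # scatter matrix of class 2
    Sw = S1 + S2                          # within-class scatter
    w = np.linalg.solve(Sw, m2 - m1)      # w proportional to Sw^{-1} (m2 - m1)
    c = 0.5 * (w @ m1 + w @ m2)           # midpoint threshold on y = w^T x
    return w, c

# Classify a new sample x: class 2 if w @ x > c, else class 1 (assumed convention).
```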
Multi-class cases
The purpose is to maximize the "Rayleigh quotient":

$$J(W) = \frac{\left| W^T S_B W \right|}{\left| W^T S_W W \right|}$$

where $W$ is the projection matrix.

The class means and the overall mean are:

$$m_k = \frac{1}{N_k} \sum_{n \in C_k} x_n, \qquad m = \frac{1}{N} \sum_{n=1}^{N} x_n$$

where $C_k$ denotes the group of class $k$, $K$ is the number of classes, and $N_k$ is the number of samples in class $k$.

The scatter matrices are generalized as:

$$S_W = \sum_{k=1}^{K} \sum_{n \in C_k} \left(x_n - m_k\right)\left(x_n - m_k\right)^T, \qquad S_B = \sum_{k=1}^{K} N_k \left(m_k - m\right)\left(m_k - m\right)^T$$

where $N_k$ is used as the weight. The determinants in $J(W)$ are the products of all eigenvalues of the corresponding matrices.

We also have:

$$S_T = S_W + S_B, \qquad S_T = \sum_{n=1}^{N} \left(x_n - m\right)\left(x_n - m\right)^T$$

where $S_T$ is the total scatter matrix.

To solve for $W$, we use the eigenvalues of $S_W^{-1} S_B$ and build the matrix $W$ from the corresponding eigenvectors. The number of basis vectors $d$ is the dimension of the projection, i.e., $W$ is a $D \times d$ matrix. The maximum of $d$ is $K - 1$, because $S_B$ is the sum of $K$ rank-one matrices with $\sum_k N_k (m_k - m) = 0$, so its rank is at most $K - 1$. The classification is better for the eigenvectors corresponding to the larger eigenvalues.
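A hedged sketch of the multi-class projection: build $S_W$ and $S_B$, then keep the eigenvectors of $S_W^{-1} S_B$ with the largest eigenvalues (at most $K - 1$ of them). The particular eigen-solver and the sorting are implementation choices of this sketch:

```python
import numpy as np

def lda_projection(X, y, d=None):
    """Return a D x d projection matrix W from labeled data.

    X: (N, D) samples, y: (N,) integer class labels.
    d defaults to K - 1, the maximum useful dimension.
    """
    classes = np.unique(y)
    K, D = len(classes), X.shape[1]
    d = (K - 1) if d is None else d
    m = X.mean(axis=0)                                  # overall mean
    Sw = np.zeros((D, D))
    Sb = np.zeros((D, D))
    for k in classes:
        Xk = X[y == k]
        mk = Xk.mean(axis=0)
        Sw += (Xk - mk).T @ (Xk - mk)                   # within-class scatter
        Sb += len(Xk) * np.outer(mk - m, mk - m)        # between-class scatter
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)                   # largest eigenvalues first
    W = eigvecs[:, order[:d]].real                      # D x d projection matrix
    return W
```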
Multi-class classification can also be reduced to binary classification. The techniques can be categorized into One vs Rest and One vs One.
One vs Rest
OvR (or one-vs.-all, OvA, one-against-all, OAA) involves training a single classifier per class, with the samples of that class as positive samples and all other samples as negatives. The base classifiers are required to produce a real-valued confidence score for their decisions, rather than just a class label. For class $k$, a new label vector is built where $y'_i = 1$ if $y_i = k$ and $y'_i = 0$ otherwise, so $K$ binary classifiers are used. Each binary classifier solves the problem of separating samples in a particular class from samples not in that class. This popular strategy suffers from several problems. First, the scale of the confidence values may differ between the binary classifiers. Second, even if the class distribution is balanced in the training set, the binary classification learners see unbalanced distributions, because typically the set of negatives they see is much larger than the set of positives.
- Benefits: Small storage cost and short test time.
- Shortcomings: Long training time and imbalanced samples.
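A minimal sketch of the OvR scheme on top of an arbitrary binary learner with a real-valued decision score; `fit_binary` and `decision_score` are hypothetical placeholders for whatever base classifier is used:

```python
import numpy as np

def ovr_fit(X, y, fit_binary):
    """Train one binary classifier per class: positives = that class, negatives = all the rest."""
    classes = np.unique(y)
    models = [fit_binary(X, (y == k).astype(int)) for k in classes]
    return classes, models

def ovr_predict(X, classes, models, decision_score):
    """Predict the class whose classifier reports the highest real-valued confidence."""
    scores = np.column_stack([decision_score(m, X) for m in models])  # shape (N, K)
    return classes[np.argmax(scores, axis=1)]
```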
One vs One
OvO involves training a binary classifier for every possible pair of classes, i.e., $K(K-1)/2$ classifiers, each learning to distinguish two classes. At prediction time, a voting scheme is applied: all classifiers are applied to a sample, and the class that receives the highest number of votes among the discriminant functions is predicted by the combined classifier. Sometimes this suffers from ambiguities, in that some regions of the input space may receive the same number of votes.
- Benefits: Short training time.
- Shortcomings: Too many classifiers, large storage cost, and long test time.
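The corresponding OvO sketch with $K(K-1)/2$ pairwise classifiers and a simple majority vote; `fit_binary` and `predict_binary` are again hypothetical placeholders:

```python
import numpy as np
from itertools import combinations

def ovo_fit(X, y, fit_binary):
    """Train one classifier per pair of classes, each on those two classes' samples only."""
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)
        # label 1 for class b, label 0 for class a
        models[(a, b)] = fit_binary(X[mask], (y[mask] == b).astype(int))
    return models

def ovo_predict(X, models, predict_binary):
    """Each pairwise classifier votes for one class; the class with the most votes wins."""
    classes = sorted({c for pair in models for c in pair})
    index = {c: i for i, c in enumerate(classes)}
    votes = np.zeros((len(X), len(classes)))
    for (a, b), model in models.items():
        pred = np.asarray(predict_binary(model, X))       # 1 -> class b, 0 -> class a
        chosen = np.where(pred == 1, index[b], index[a])  # column index of the voted class
        votes[np.arange(len(X)), chosen] += 1
    return np.array(classes)[np.argmax(votes, axis=1)]
```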
Nearest Neighbor Classification
The basic idea of nearest neighbor classification is: a new sample is classified by finding the nearest training case (by distance); the label of that nearest point then determines the classification of the sample.
The k-NN classifier extends this idea by taking the k nearest neighbors and assigning the label of the majority. It is common to select k small and odd to break ties (typically 1, 3, or 5). Larger k values help reduce the effect of noisy points in the training data set, and the choice of k is often performed through cross-validation. The k-NN classifier is non-parametric, which means it does not make any assumptions about the underlying data distribution; the model structure is determined from the data. k-NN is a good choice for a classification study when there is little or no prior knowledge about the distribution of the data. There is no explicit training phase for k-NN; it makes predictions just-in-time by calculating the similarity between an input sample and each training instance. A sample is classified by a majority vote of its neighbors, being assigned to the class most common among its k nearest neighbors.
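A minimal sketch of a k-NN classifier with Euclidean distance and majority voting; the distance metric and the tie handling (the first most-frequent label wins via `argmax`) are simplifying assumptions of this sketch:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test sample by a majority vote of its k nearest training samples."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)      # Euclidean distances to all training points
        nearest = np.argsort(dists)[:k]                  # indices of the k nearest neighbors
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(labels[np.argmax(counts)])          # majority class among the neighbors
    return np.array(preds)
```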