Linear Discriminant Analysis (LDA), also known as Discriminant Function Analysis or Normal Discriminant Analysis, separates two or more classes by modeling the differences between groups: it projects data from a higher-dimensional space into a lower-dimensional one. For example, consider separating two classes efficiently when each class has many features. If only a single feature is chosen for the analysis, the classes tend to overlap and points are misclassified; proper classification therefore depends on using multiple features together. LDA is used in supervised classification problems and is a linear technique for dimensionality reduction.

  1. Example
  2. Applications

1. Example

Consider two sets of data points from two different classes as a linear discriminant analysis example. In a 2-dimensional graph, these classes cannot be separated by a straight line. The LDA algorithm therefore re-plots the data onto a single dimension in a way that maximizes the separation between the two classes. In the figure below, the red dotted line is incapable of separating the two classes.
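
The re-plotting step can be sketched with scikit-learn's `LinearDiscriminantAnalysis`; this is a minimal illustration on synthetic data (the two Gaussian clusters and all parameter values are assumptions, not from the article):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two synthetic 2-D clusters with clearly different means
class_a = rng.normal(loc=[0, 0], scale=1.0, size=(50, 2))
class_b = rng.normal(loc=[4, 4], scale=1.0, size=(50, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)

# Project the 2-D points onto a single discriminant axis
lda = LinearDiscriminantAnalysis(n_components=1)
X_1d = lda.fit_transform(X, y)

print(X_1d.shape)        # each point is now a single coordinate
print(lda.score(X, y))   # classification accuracy on the projected data
```

With well-separated clusters like these, the 1-D projection keeps the classes apart and the classifier's accuracy stays high.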

LDA uses both the X and Y axes to project the data onto a 1-D graph, applying the linear discriminant function in two ways:

  • It maximizes the distance between the mean values of the classes.
  • It minimizes the variation within each of the classes.

Using this method, the class means are computed and the distance between them is maximized, while the variation within each class is minimized. The newly generated axis between the data points separates the two classes effectively. Once the axis of the 1-D plot is generated, the LDA classifier projects the points onto this axis to achieve class separation. The graph below shows how the axis uses the mean values and variability minimization to differentiate between the data points.

The figure below shows the 1-D graph after Linear Discriminant Analysis: in the re-projected, lower-dimensional plot, the two classes are effectively separated and classified.

However, if the two classes share the same mean, Linear Discriminant Analysis cannot find a new linearly separable axis and the method fails; this is one of the disadvantages of linear discriminant analysis. In such cases, a non-linear discriminant analysis method must be used to separate the two classes.
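
This failure mode can be demonstrated with a quick sketch: a blob surrounded by a ring, so both classes have their mean at the origin (the data here are synthetic assumptions for illustration):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n = 200
# Class 0: a blob centered at the origin
inner = rng.normal(0, 0.5, size=(n, 2))
# Class 1: a ring around it -- its mean is also approximately the origin
angles = rng.uniform(0, 2 * np.pi, n)
ring = np.c_[np.cos(angles), np.sin(angles)] * 3
ring += rng.normal(0, 0.2, size=(n, 2))

X = np.vstack([inner, ring])
y = np.array([0] * n + [1] * n)

lda = LinearDiscriminantAnalysis().fit(X, y)
# With coinciding means there is no linear axis that separates the classes,
# so accuracy stays near chance level
print(lda.score(X, y))
```

A non-linear method (or QDA, below, when only the covariances differ) is needed for data like this.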

Extensions to LDA: There are several other variations of Linear Discriminant Analysis, mentioned below.

  1. FDA-Flexible Discriminant Analysis: This more flexible method uses non-linear combinations of the inputs, such as splines.
  2. QDA-Quadratic Discriminant Analysis: In this method, each class uses its own estimate of variance (or covariance, when there are multiple input variables) for classification.
  3. RDA-Regularized Discriminant Analysis: In this technique, the variance (or covariance) estimate is regularized, moderating the influence of the input data's different variables.
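
The QDA extension can be contrasted with plain LDA in a short sketch: two classes with the same mean but very different spreads, where a per-class covariance estimate gives QDA a curved boundary that LDA cannot learn (the data and parameters are illustrative assumptions):

```python
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

rng = np.random.default_rng(3)
# Same mean, very different per-class spread
X0 = rng.normal([0, 0], 0.5, size=(200, 2))   # tight cluster
X1 = rng.normal([0, 0], 3.0, size=(200, 2))   # wide cluster
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

lda_acc = LinearDiscriminantAnalysis().fit(X, y).score(X, y)
qda_acc = QuadraticDiscriminantAnalysis().fit(X, y).score(X, y)

# LDA pools one covariance across classes, so with equal means it is
# near chance; QDA's per-class covariances recover a circular boundary
print(lda_acc, qda_acc)
```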

2. Applications

LDA and classification techniques have many uses, some of which are described briefly below.

  • Medical: Patients' diseases can be classified as mild, moderate, or severe using linear discriminant analysis. Such classification maps medical treatment to the various patient parameters to distinguish between the classes, helping doctors offer the right treatment; with time as a parameter, it can even forecast an increase or decrease in treatment.
  • Facial Recognition: In computer vision, a human face is represented by a large number of points with pixel values. Linear discriminant analysis is extremely popular in facial recognition because it reduces this large number of features to a more manageable set before classification, then classifies the reduced, re-projected features. Each newly generated dimension is a linear combination of pixel values, so a template can be created for classification and recognition. The combinations are called Fisher faces, and the axis is Fisher's linear discriminant.
  • Identification of customers: In marketing, it is important to find the right customer for a product. A survey of customers wishing to buy a particular product at a specific shopping mall provides customer features, which linear discriminant analysis can then classify to identify the features of the customers most likely to buy the product at that venue.


Linear discriminant analysis has many uses, owing to its strengths in separating data points linearly, classifying multi-featured data, and discriminating between the features of a dataset. It is applied in facial recognition, medical treatment analysis, and tools for marketing and customer identification. As shown above, there are also other LDA variants as well as non-linear discriminant methods. LDA is a simple and effective classification tool that lends itself to several practical applications.
