Kernel Tricks and Soft Margin in SVM

While learning about Support Vector Machines (SVMs) you have probably come across the term kernel trick. In this article I will explain what the kernel trick is and why it is so important in machine learning. But first, let's see what linear and non-linear data are.

Linear data is data that can be separated by a single line (or hyperplane), while non-linear data cannot be separated by a single line.

[Figure: Linear data]

[Figure: Non-linear data]
In SVM our aim is to find the best line, the one whose distance to the nearest data points on each side is largest. However, in the real world, not all data is linearly separable. There are two methods that can be used to handle non-linear data in SVM: the Soft Margin and the Kernel Trick.

Soft Margin

It is one of the simplest methods for dealing with non-linear data. A soft margin tolerates some misclassified data points, balancing the trade-off between maximizing the margin and minimizing the misclassification. When implementing the soft margin method we have to decide the degree of tolerance, that is, the penalty the SVM should apply to misclassified data points. The larger the penalty, the fewer misclassifications are tolerated; a smaller penalty allows more of them. In the scikit-learn SVM implementation we can adjust this penalty through the parameter 'C'.

from sklearn.svm import SVC

model1 = SVC(kernel="linear", C=1.0)   # default value
model2 = SVC(kernel="linear", C=0.01)  # softer margin, tolerates more misclassification
model3 = SVC(kernel="linear", C=10.0)  # harder margin, penalizes misclassification more
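To see the effect of C in practice, here is a minimal sketch (the dataset is an illustrative one built with scikit-learn's make_blobs; the exact parameter values are assumptions, not from the original article) that fits the three models above and reports how many support vectors each keeps:

from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two overlapping clusters, so a perfect linear separation is impossible
X, y = make_blobs(n_samples=200, centers=2, cluster_std=2.5, random_state=42)

for C in (0.01, 1.0, 10.0):
    model = SVC(kernel="linear", C=C).fit(X, y)
    # A smaller C tolerates more margin violations, so more points
    # end up inside the margin and become support vectors
    print(f"C={C}: {model.support_.shape[0]} support vectors, "
          f"train accuracy = {model.score(X, y):.2f}")

A smaller C typically keeps more support vectors and accepts more training errors, while a larger C tries harder to classify every training point correctly.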

Kernel Tricks

The kernel trick is the most popular and preferred method for dealing with non-linear data. Essentially, a kernel takes the data points, applies a transformation, and creates new features: it maps the non-linear data into a higher-dimensional space where it can be easily separated by a hyperplane.

The beauty of the kernel trick is that it enables us to operate in that higher-dimensional space without ever knowing or computing the coordinates of the data in that space.
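A small numeric sketch makes this concrete (this is an illustration, assuming 2-D inputs and the degree-2 polynomial kernel K(x, z) = (x·z)²; the helper phi is the standard explicit feature map for this kernel):

import numpy as np

def phi(x):
    # Explicit feature map for the degree-2 polynomial kernel in 2-D:
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)
    x1, x2 = x
    return np.array([x1 ** 2, np.sqrt(2) * x1 * x2, x2 ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, 4.0])

# Kernel value computed directly in the original 2-D space...
k_direct = (x @ z) ** 2          # (1*3 + 2*4)^2 = 121.0
# ...matches the dot product of the explicit 3-D features
k_mapped = phi(x) @ phi(z)       # also 121.0

print(k_direct, k_mapped)

Both values are 121: the kernel gives us the inner product in the 3-D feature space while only ever touching the original 2-D coordinates.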

Scikit-learn provides the following kernels:

  • linear
  • poly
  • rbf (default)
  • sigmoid
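For example, here is a minimal sketch (assuming scikit-learn's make_circles toy dataset, which no straight line can separate; the parameter values are illustrative) comparing a linear kernel with the default RBF kernel:

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric circles: a classic non-linearly-separable dataset
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)
rbf_svm = SVC(kernel="rbf").fit(X, y)   # "rbf" is the default kernel

# The RBF kernel implicitly lifts the circles into a space where a
# hyperplane can separate them; no straight line in 2-D can do this
print("linear:", linear_svm.score(X, y))  # close to chance level
print("rbf:   ", rbf_svm.score(X, y))     # close to 1.0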
