Machine Learning — It’s About Technique

August 23, 2017

When it comes to machine learning, one size does not fit all. Different algorithms, and different techniques within those algorithms, are used to build a model that is application-appropriate. But how do you determine which technique is best?

Machine learning is not a concrete set of algorithms applied across the board; the right technique depends on what you are trying to achieve. The answer relies heavily on the type, and the amount, of data that is available.

Below are a few of the main techniques most frequently seen in machine learning:

Whether or not data has been labeled determines whether learning is supervised or unsupervised. Supervised learning uses human-labeled data and is commonly used when past data can predict likely events. In other words, the desired output for each input is already known. The algorithm is given a set of inputs along with their corresponding correct outputs, and it learns by comparing its actual output with the correct outputs to find errors. Once it finds the errors, it can modify the model accordingly.

Classification, which falls under supervised learning, tries to predict a discrete output (a class) given an input. Classification takes unknown entities and assigns them to larger, known groups. To learn, it requires a set of labeled examples, such as images, text, or speech. As the number of classes grows, the data required to train a classifier to high accuracy can become large, reaching thousands or even millions of examples. While classification typically targets simple categories, it can be extended to situations where the target is a structure or a sequence, as in natural language processing.

Unsupervised learning uses unlabeled data. In this situation, the machine discovers new patterns without any prior labels or information. This type of learning works well for clustering, in which data is grouped into sets of similar items.
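A short sketch of clustering, assuming the classic k-means algorithm (one common clustering method, not the only one): points carry no labels at all, and the algorithm alternates between assigning each point to its nearest center and moving each center to the mean of its points. The points and starting centers are illustrative.

```python
# Toy k-means clustering: unsupervised grouping of unlabeled 2-D
# points into k clusters of similar data. No labels are ever given.

def kmeans(points, centers, steps=10):
    for _ in range(steps):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centers, clusters = kmeans(points, centers=[(0, 0), (10, 10)])
```

The two groups emerge purely from the geometry of the data, which is the "discovering patterns without prior information" described above.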

Inspired by the psychological idea of reinforcement, reinforcement learning is learning by doing. A machine discovers an ideal outcome through trial and error: over time, it learns to choose the actions that lead to the desired result. This type of learning is often used in applications such as gaming and navigation.
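The trial-and-error loop can be sketched with tabular Q-learning (one standard reinforcement learning algorithm) on a made-up toy environment: a five-cell corridor where the agent moves left or right and is rewarded only for reaching the rightmost cell. The environment, reward, and hyperparameters are all illustrative assumptions.

```python
# Tabular Q-learning on a toy corridor: states 0..4, actions -1 (left)
# and +1 (right), reward 1.0 only for reaching state 4. The agent
# starts knowing nothing and learns a value for each (state, action)
# pair purely by trial and error.
import random

N, GOAL = 5, 4
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration
random.seed(0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best known action,
        # occasionally explore a random one.
        if random.random() < eps:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)          # take the step
        r = 1.0 if s2 == GOAL else 0.0          # reward only at the goal
        best_next = max(Q[(s2, -1)], Q[(s2, +1)])
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The greedy policy after training: the preferred action per state.
policy = {s: max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(N - 1)}
```

After enough episodes the agent favors "move right" everywhere, because only that sequence of actions ever produced reward, which mirrors the "learns to choose actions that result in the desirable outcome" idea above.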

Deep neural networks (DNNs), a class of artificial neural networks (ANNs), represent a set of techniques used to build powerful learning systems. Unlike shallower algorithms, they stack a number of “hidden” layers that extract intermediate representations of the input. While the underlying ideas date back to the 1980s, DNNs took off after 2010 thanks to powerful parallel hardware and easy-to-use open source software.

DNNs cover a huge range of different neural architectures, the best known being:

  • Recurrent Neural Networks (RNN) – A network whose neurons send feedback signals to each other
  • Convolutional Neural Networks (CNN) – A feed-forward ANN typically applied for visual and image recognition
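The "hidden layers" idea can be sketched with a tiny feed-forward network: each layer computes a weighted sum of its inputs and passes it through a nonlinearity, so stacked layers build up intermediate representations. The weights and biases below are fixed, illustrative numbers rather than learned values.

```python
# Sketch of a feed-forward neural network: two fully connected
# layers with sigmoid activations. Each hidden unit computes a
# weighted sum of the inputs plus a bias, squashed to (0, 1).
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

x = [0.5, -1.0]                                      # input features
h = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1])  # hidden representation
y = layer(h, [[1.0, 1.0]], [-1.0])                   # output layer
```

In a real DNN the weights are learned from data (typically by backpropagation) and there may be many such layers; the structure of each layer, however, is exactly this.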


For more in-depth information about how machine learning works, download our whitepaper below.

The Fundamentals of Machine Learning
