
Deep learning vs. machine learning: What’s the difference between the two?

In recent months, Microsoft, Google, Apple, Facebook, and other companies have declared that we no longer live in a mobile-first world. Instead, it’s an artificial intelligence-first world, where digital assistants and other services will be your primary source of information and your main way of getting tasks done. Your typical smartphone or PC is now a secondary go-getter.

Backing this new frontier are two terms you’ll likely hear often: machine learning and deep learning. These are two methods of “teaching” artificial intelligence to perform tasks, but their uses go well beyond creating smart assistants. What’s the difference? Here’s a quick breakdown.

Computers now see, hear, and speak

With the help of machine learning, computers can now be “trained” to predict the weather, determine stock market outcomes, understand your shopping habits, control robots in a factory, and so on. Google, Amazon, Facebook, Netflix, LinkedIn, and other popular consumer-facing services are all backed by machine learning. But at the heart of all this learning is what’s known as an algorithm.

Simply put, an algorithm is not a complete computer program (a set of instructions), but a limited sequence of steps designed to solve a single problem. For example, a search engine relies on an algorithm that grabs the text you enter into the search field and searches the connected database to return related results. It takes specific steps to achieve a single, specific goal.
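
To make the idea concrete, here’s a minimal sketch in Python. The tiny in-memory “database” and the search function are hypothetical stand-ins for a real search engine, but the fixed sequence of steps is the point.

```python
# A toy "search algorithm": a fixed sequence of steps that turns a query
# into matching results. The document "database" is a made-up stand-in.
documents = {
    1: "machine learning predicts the weather",
    2: "deep learning powers digital assistants",
    3: "a checkers program learns by playing itself",
}

def search(query):
    """Return the IDs of documents containing every word in the query."""
    terms = query.lower().split()
    return [doc_id for doc_id, text in documents.items()
            if all(term in text for term in terms)]

print(search("learning weather"))  # -> [1]
```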

Machine learning has actually been around since 1956. Arthur Samuel didn’t want to write a highly detailed, lengthy program that could enable a computer to beat him at checkers. Instead, he created an algorithm that enabled the computer to play against itself thousands of times so it could “learn” how to perform as a stand-alone opponent. By 1962, that program beat the Connecticut state champion.

Thus, at its core, machine learning is based on trial and error. We can’t write a program by hand that helps a self-driving car distinguish a pedestrian from a tree or another vehicle, but we can create an algorithm that lets a program solve the problem using data. Algorithms can also be created to help programs predict the path of a hurricane, diagnose Alzheimer’s early, determine the world’s most overpaid and underpaid soccer stars, and so on.

Machine learning typically runs on relatively low-end hardware and breaks a problem down into parts. Each part is solved in order, and the results are combined to form a single answer to the problem. Well-known machine learning researcher Tom Mitchell of Carnegie Mellon University explains that computer programs are “learning” from experience if their performance at a specific task improves with it. Machine learning algorithms essentially enable programs to make predictions and, through trial and error, get better at those predictions over time.

Here are the four main types of machine learning:

Supervised machine learning

In this scenario, you provide a computer program with labeled data. For instance, if the assigned task is to separate pictures of boys and girls using an image-sorting algorithm, images with a male child carry a “boy” label and images with a female child carry a “girl” label. This is considered the “training” dataset, and the labels remain in place until the program can sort the images at an acceptable success rate.
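
As a rough sketch of that idea, assuming scikit-learn is available: the snippet below trains on a handful of labeled examples. Real systems learn from image pixels; the two numeric “features” per example are hypothetical stand-ins, so the role of the labels stays visible.

```python
# A minimal supervised-learning sketch: human-supplied labels do the teaching.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature vectors (say, [hair length, jaw size]) plus labels.
X_train = [[30, 5], [25, 7], [10, 40], [12, 35]]
y_train = ["girl", "girl", "boy", "boy"]

model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.predict([[28, 6]]))  # -> ['girl'], learned from the labeled examples
```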

Semi-supervised machine learning

In this case, only a few of the images are labeled. The program uses an algorithm to make its best guess about the unlabeled images, and the results are fed back in as training data. A new batch of images is then provided, again with only a few carrying labels. It’s an iterative process that repeats until the program can distinguish between boys and girls at an acceptable rate.
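
A minimal self-training sketch of that loop, assuming scikit-learn and NumPy, might look like the following. The feature values are hypothetical stand-ins; the point is that confident guesses on unlabeled examples get folded back in as training data.

```python
# Semi-supervised "self-training" sketch: guess labels for unlabeled examples,
# keep the confident guesses as new training data, retrain, and repeat.
import numpy as np
from sklearn.linear_model import LogisticRegression

X_labeled = np.array([[30.0, 5.0], [10.0, 40.0]])   # made-up feature vectors
y_labeled = np.array([0, 1])                         # 0 = "girl", 1 = "boy"
X_unlabeled = np.array([[28.0, 6.0], [11.0, 38.0], [12.0, 36.0]])

for _ in range(3):                                   # a few rounds of guessing
    model = LogisticRegression().fit(X_labeled, y_labeled)
    if len(X_unlabeled) == 0:                        # nothing left to guess
        break
    probs = model.predict_proba(X_unlabeled)
    confident = probs.max(axis=1) > 0.8              # keep only confident guesses
    if not confident.any():
        break
    X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
    y_labeled = np.concatenate([y_labeled, probs[confident].argmax(axis=1)])
    X_unlabeled = X_unlabeled[~confident]

print(model.predict([[29.0, 5.5]]))                  # -> [0], i.e. "girl"
```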

Unsupervised machine learning

This type of machine learning doesn’t involve labels at all. Instead, the program is blindly thrown into the task of splitting images of boys and girls into two groups, using one of two methods. One, called “clustering,” groups similar objects together based on characteristics such as hair length, jaw size, eye placement, and so on. The other, called “association,” creates if/then rules based on similarities it discovers. In other words, it determines a common pattern among the images and sorts them accordingly.
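
As a sketch of the clustering approach, assuming scikit-learn: the snippet below groups examples purely by similarity. No labels are ever supplied, and the feature values are the same kind of hypothetical stand-ins as above.

```python
# Unsupervised clustering sketch: KMeans groups unlabeled examples by similarity.
from sklearn.cluster import KMeans

X = [[30, 5], [25, 7], [28, 6], [10, 40], [12, 35], [11, 38]]  # no labels at all
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # e.g. [1 1 1 0 0 0] -- two groups, but the program never names them
```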

Reinforcement machine learning

Chess is an excellent example of this type of algorithm. The program knows the rules of the game and how to play, and it goes through the steps to complete a round. The only information provided to the program is whether it won or lost the match. It continues to replay the game, keeping track of its successful moves, so that it wins more and more often.
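
A bare-bones sketch of the idea, using a made-up one-move “game” in place of chess: the only feedback is a win or a loss at the end, and moves that show up in winning games get credited.

```python
# Reinforcement-learning sketch: learn from win/loss feedback alone.
import random

MOVES = ["a", "b", "c"]
WINNING_MOVE = "b"                       # hidden from the learner

scores = {m: 0 for m in MOVES}           # how often each move led to a win
for _ in range(200):                     # replay the game many times
    # Favor moves that have worked before, but keep exploring a little.
    move = random.choices(MOVES, weights=[scores[m] + 1 for m in MOVES])[0]
    if move == WINNING_MOVE:             # the only feedback: win or lose
        scores[move] += 1                # credit the move from a winning game

print(max(scores, key=scores.get))       # -> 'b', learned purely from results
```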

Now it’s time to move on to a deeper subject: deep learning.

Deep Learning

Deep learning is basically machine learning on a “deeper” level (pun unavoidable, sorry). It’s inspired by how the human brain works, but it requires high-end machines with discrete add-in graphics cards capable of crunching numbers, along with enormous amounts of so-called big data. Small amounts of data actually yield lower performance.

Unlike standard machine learning algorithms, which break problems down into parts and solve them individually, deep learning solves the problem end to end. Better yet, the more data and training time you feed a deep learning algorithm, the better it gets at solving a task.

In our machine learning examples, we used images of boys and girls. The program sorted those images mostly based on spoon-fed characteristics. With deep learning, those hand-picked characteristics aren’t provided. Instead, the program scans all the pixels within an image to discover edges it can use to distinguish between a boy and a girl. After that, it puts the edges and shapes it finds into a ranked order of likely importance for telling the two groups apart.
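
As a rough sketch of what that first “find the edges” step computes, assuming NumPy: a small filter slides across the pixels and responds where the brightness changes. In a real deep-learning model the filter values are learned from data rather than hand-set, as they are here.

```python
# Edge-detection sketch: slide a tiny filter over raw pixels. A real network
# learns many such filters and stacks layers of them; this one is hand-set.
import numpy as np

image = np.array([                        # tiny grayscale image: dark left, bright right
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

kernel = np.array([-1.0, 1.0])            # responds where brightness jumps

edges = np.zeros((4, 3))
for row in range(4):
    for col in range(3):                  # slide the filter along each row
        edges[row, col] = (image[row, col:col + 2] * kernel).sum()

print(edges)                              # the vertical edge lights up in column 1
```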

On an even simpler level, machine learning will distinguish between a square and a triangle based on information provided by humans: squares have four points, and triangles have three. With deep learning, the program doesn’t start out with pre-fed information. Instead, it uses an algorithm to determine how many lines the shapes have, whether those lines are connected, and whether they are perpendicular. Naturally, the algorithm would eventually figure out that a circle thrown into the mix doesn’t fit its square-and-triangle sorting.

Again, this “deeper” process requires more hardware to handle the enormous amounts of data the algorithm works through. Those machines tend to reside in large datacenters, where they form artificial neural networks that handle the data generated by, and supplied to, artificial intelligence applications. Programs using deep learning algorithms also take longer to train, because they’re learning on their own instead of relying on hand-fed shortcuts.

“Deep Learning breaks down tasks in ways that makes all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations, are all here today or on the horizon,” writes Nvidia’s Michael Copeland. “With Deep Learning’s help, A.I. may even get to that science fiction state we’ve so long imagined.”

Is Skynet on the way? Not yet

A great recent example of deep learning is real-time translation. This technology can listen to a presenter speaking in English and translate the words into another language as both text and an electronic voice, in real time. The achievement was a slow burn over the years, owing to the differences between languages, variations in usage and voice pitch, and hardware capabilities that took time to mature.

Deep learning is also behind conversation-carrying chatbots, Amazon Alexa, Microsoft Cortana, and features across Facebook, Instagram, and more. On social media, algorithms based on deep learning are what cough up contact and page suggestions. Deep learning even helps companies customize their creepy advertising to your tastes, even when you’re not on their sites. Yay for technology.

“Looking to the future, the next big step will be for the very concept of the ‘device’ to fade away,” says Google CEO Sundar Pichai. “Over time, the computer itself—whatever its form factor—will be an intelligent assistant helping you through your day. We will move from mobile first to an A.I. first world.”

Kevin Parrish
Former Digital Trends Contributor