Machine Learning vs Natural Language Processing: What’s the difference?
With all the recent buzz about ChatGPT and AI, there has been a flurry of conversations about Machine Learning (ML) and Natural Language Processing (NLP). Often, the terms are used interchangeably, but they have slightly different meanings.
Where ML and NLP intersect, we have incredible new technologies like AI chatbots and generative text programs. These technologies are based on the Deep Learning paradigm, which trains computers to perform complex tasks using data.
As with any other tech field, the future is hard to predict. AI has a history of getting stuck in unproductive hype cycles, but there certainly is much promise for the current generation of ML and NLP programs. They will likely make many tasks easier in the future.
Machine Learning
Machine Learning is a large subfield of Artificial Intelligence that focuses on making computers “learn”. It usually works by training a computer program on tons and tons of data until it recognizes patterns and can make judgments like humans do.
Instead of telling the computer exactly what to do, ML programmers tell it how to learn and then train it to solve a problem.
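To make that contrast concrete, here is a minimal sketch using the scikit-learn library. The spam-versus-not-spam task, the hand-made features, and the toy data are all invented for illustration; the point is that no explicit rule is written down, and the model infers one from examples.

```python
# A minimal sketch of the "learning" idea, using scikit-learn.
# The features and labels below are made up purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row describes a message with two hand-made features:
# [number of exclamation marks, contains the word "free" (1) or not (0)]
training_data = [
    [0, 0],  # "See you at the meeting"
    [5, 1],  # "FREE prizes!!! click now!!"
    [1, 0],  # "Lunch tomorrow?"
    [7, 1],  # "You won a FREE cruise!!!!!!!"
]
labels = ["not spam", "spam", "not spam", "spam"]

# We never write an explicit rule like "if it says 'free', it's spam".
# Instead, the model infers a decision rule from the examples.
model = DecisionTreeClassifier()
model.fit(training_data, labels)

# The trained model can now judge a message it has never seen.
print(model.predict([[6, 1]]))  # should print ['spam']
```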
Computer scientists and programmers have applied ML to thousands of applications over the years. ML programs can do everything from translating text to making images to playing video games like Super Mario Brothers.
Natural Language Processing
Natural Language Processing, or NLP, is a broad discipline that includes any use of computers to analyze and process natural human language. It is used extensively in ML for applications that have to do with text or speech.
Not all of NLP has to do with machine learning, though. The term also refers to applications that process data through preset rules. For example, linguists use NLP to perform tasks like counting the number of nouns in a body of literature. This doesn’t require learning at all, just counting.
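As a toy illustration of that rule-based side, here is a minimal sketch in Python. A real linguistic tool would use a proper part-of-speech tagger, but the hand-written noun list below makes the point: nothing is learned, the program just applies a preset lookup and counts.

```python
# A purely rule-based sketch: count nouns in a text using a small,
# hand-written noun list. No model is trained; the "knowledge" is
# just a preset lookup table.
import re

KNOWN_NOUNS = {"fox", "dog", "barn", "linguist", "literature", "computer"}

text = "The quick brown fox jumps over the lazy dog near the old barn."

# Lowercase the text and pull out the words.
words = re.findall(r"[a-z]+", text.lower())

# Count every word that appears in the preset noun list.
noun_count = sum(1 for word in words if word in KNOWN_NOUNS)

print(f"Nouns found: {noun_count}")  # prints 3
```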
Where ML and NLP meet, we have cutting-edge applications like AI chatbots, machine translation, and text generation. Grammarly, one of the most famous of these tools, helps people communicate effectively in addition to correcting errors.
Deep Learning, Neural Nets, and AI
There are several related terms that are often thrown around as if they were interchangeable, but they are not.
AI is the most general term. It refers to anything done by a computer that resembles human intelligence. But not all AI is Machine Learning. Many older AI systems relied heavily on preset rules instead, and some still do today.
Deep Learning refers to what is currently the most popular and robust approach to designing AI systems. It has been considered state of the art since about 2012. Deep Learning works by combining Neural Nets, fast graphics processors, and very large sets of training data.
Neural Nets are a specific type of computing system, partly inspired by the structure of biological neural networks in human and animal brains. But instead of using real neurons, they represent data in complex networks made up of matrices of numbers. Neural Nets are actually an old concept in computing, dating back to the 1940s. Still, they have only recently become useful in AI, thanks to faster processors and bigger training data sets.
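To show what "matrices of numbers" means in practice, here is a tiny, untrained neural net doing a single forward pass with NumPy. The layer sizes and random weights are arbitrary; the point is that the whole network is just matrix multiplications plus simple nonlinear functions.

```python
# A minimal sketch of a neural net as "matrices of numbers":
# one forward pass through a tiny, untrained network.
import numpy as np

rng = np.random.default_rng(0)

# One layer is just a weight matrix plus a bias vector.
W1 = rng.normal(size=(4, 3))   # 3 inputs -> 4 hidden units
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))   # 4 hidden units -> 1 output
b2 = np.zeros(1)

def relu(x):
    # A simple nonlinearity: negative values become zero.
    return np.maximum(0, x)

def forward(x):
    hidden = relu(W1 @ x + b1)  # matrix multiply, then nonlinearity
    return W2 @ hidden + b2     # final output

print(forward(np.array([0.5, -1.2, 3.0])))
```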
Today, most AI programs are built on a Deep Learning paradigm that trains Neural Nets on large quantities of data. This includes NLP applications like ChatGPT and Siri. But keep in mind that this is a somewhat recent development: before 2012, people designed NLP programs using other statistical techniques.
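For a sense of what an ML-based NLP program looks like in code, here is a minimal sketch assuming the Hugging Face transformers library and the small, publicly released GPT-2 model. (ChatGPT itself is far larger and cannot be loaded locally like this.)

```python
# A minimal sketch of ML-based text generation, assuming the Hugging Face
# `transformers` library and the small, publicly available GPT-2 model.
from transformers import pipeline

# Downloads the GPT-2 weights on first run: a neural net trained
# on large amounts of text.
generator = pipeline("text-generation", model="gpt2")

result = generator("Machine learning is", max_length=20, num_return_sequences=1)
print(result[0]["generated_text"])
```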
The Future of ML and NLP
Even if you’ve been living under a rock, you’ve probably seen ChatGPT by now, and you are probably pretty impressed by it. You might even be wondering if ChatGPT wrote this article! (It did not).
ChatGPT is just the latest in a long line of ML-based NLP programs. It can produce incredibly lifelike poetry and prose, far surpassing what was considered possible just five years ago.
ChatGPT is not the only emerging NLP powerhouse. Google and Meta are also working on their own versions of the technology but have kept their work under wraps until now. Google’s PaLM model, at 540 billion parameters, is among the largest in the world right now.
It’s tempting to look at these new developments and predict how they will end whole industries and take over the world. And while it is inevitable that they will have a significant impact, it is unfortunately unclear just what will happen with any given new technology.
AI has a reputation for getting caught in “hype cycles.” A promising new technique is discovered, it can do things previously thought impossible, and people forecast the end of office work entirely.
But then a few years go by, progress stalls, investors lose confidence, and the field winds up in an “AI Winter.” It’s not clear whether that will happen again this time.
Nevertheless, as long as developers can access more and more data, programs like ChatGPT will only continue to get more powerful. While I don’t think Terminator is just around the corner, there are undoubtedly important disruptions that will take place.