History of Natural Language Processing and Its Direction For Growth

Natural Language Processing (NLP) has come a long way, from the earliest experiments in machine translation to the large-scale systems we use today.

The history of natural language processing is a story full of twists and turns. It starts with largely unproductive research, progresses through decades of fruitful work, and arrives at an era in which we are still trying to find out where the field's limits lie. Let us briefly explore the development of this branch of science.

The Origin of Natural Language Processing (NLP): How Was the Idea Born?


Natural language processing has its origin in the late 1940s, when researchers first proposed using computers to translate between human languages; Warren Weaver's 1949 memorandum on machine translation is often cited as the field's starting point. In 1950, Alan Turing published the paper "Computing Machinery and Intelligence", which proposed what is now known as the Turing Test: a machine would count as intelligent if its written conversation could not be distinguished from a human's. Turing himself did not pursue natural language processing further, but the paper framed the field's central ambition.

Earlier, in 1936, Turing had written the paper "On Computable Numbers", which laid the mathematical foundations of the modern computer: a machine that mechanically processes information and can carry out tasks beyond unaided human speed or endurance, such as playing chess at lightning pace.

The Birth of Natural Language Processing (NLP): Who Made It Possible?

In 1955, John McCarthy coined the term "artificial intelligence" in his proposal for the Dartmouth workshop, held in 1956, which brought the field's founders together for the first time. In 1958, he published "Programs with Common Sense", describing the Advice Taker, a hypothetical program that would reason by manipulating sentences expressed in a formal language.

In 1958, Frank Rosenblatt introduced the perceptron, an early artificial neural network designed to learn pattern-recognition and classification tasks from examples. In 1969, Marvin Minsky and Seymour Papert published their influential book "Perceptrons", which exposed the limits of single-layer networks and cooled enthusiasm for neural approaches for years.

In 1966, the ALPAC report delivered a damning assessment of machine translation research, and United States funding for the field was cut sharply for roughly a decade.

The Evolution of Natural Language Processing (NLP): What Changes Have Been Made?

Over time, different methods of analysis developed gradually. In 1964, Daniel Bobrow's STUDENT program at MIT showed that a computer could solve algebra word problems posed in plain English. The first computer program that could hold a conversation with a person was ELIZA, created by Joseph Weizenbaum at MIT in 1966.

The field was also organizing itself: the Association for Machine Translation and Computational Linguistics, later renamed the Association for Computational Linguistics, had been founded in 1962. A major driving application was machine translation of Russian, so that English-speaking scientists could read about discoveries in Soviet science; the Georgetown-IBM experiment had demonstrated automatic Russian-to-English translation as early as 1954.

The Development of Natural Language Processing (NLP): How Did It Evolve?

After ELIZA, the next notable conversational program was PARRY, written in 1972 by the psychiatrist Kenneth Colby at Stanford. PARRY simulated a person with paranoid schizophrenia and could hold conversations with psychiatrists, although it could not answer questions about its own life.

In 1984, the chatterbot Racter was released commercially; its creators, William Chamberlain and Thomas Etter, also credited it as the "author" of the book The Policeman's Beard is Half Constructed.

By 1990, when the Loebner Prize established an annual Turing Test competition, ELIZA and PARRY were regarded as "trivial" examples of artificial intelligence: they relied on simple pattern-matching techniques and could not genuinely understand natural language the way humans do. To this day, no chatbot has convincingly passed an unrestricted Turing Test.
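To make "simple pattern matching" concrete, here is a minimal Python sketch of an ELIZA-style responder. The rules and responses below are invented for illustration; Weizenbaum's original script was keyword-ranked and far more elaborate (it also reflected pronouns, turning "my" into "your").

```python
import re

# Toy ELIZA-style rules: each regex maps a matched phrase to a canned
# response template. These three rules are made up for this example.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Return the first rule's response whose pattern matches, else a fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am worried about exams"))  # Why do you say you are worried about exams?
print(respond("I feel tired"))              # How long have you felt tired?
print(respond("The weather is nice"))       # Please go on.
```

The sketch makes the "trivial" criticism easy to see: the program never models meaning at all, it only echoes fragments of the user's own words back inside fixed templates.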

The early 1990s brought a major breakthrough: statistical machine translation. IBM's Candide project showed that translation models could be learned automatically from millions of aligned sentence pairs rather than hand-written rules, letting machines churn through volumes of text far beyond what any human translator could read, even if the output quality still fell short of human work.
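Here is a deliberately simplified sketch of the core statistical intuition: translation probabilities can be estimated from co-occurrence counts in a parallel corpus. The three-sentence corpus is invented for the example, and real systems such as the IBM models refine word alignments iteratively with EM rather than relying on raw counts.

```python
from collections import defaultdict

# Tiny invented French-English parallel corpus.
parallel_corpus = [
    ("la maison", "the house"),
    ("la fleur", "the flower"),
    ("une maison", "a house"),
]

# Count how often each source word co-occurs with each target word.
counts = defaultdict(lambda: defaultdict(float))
for src_sent, tgt_sent in parallel_corpus:
    for s in src_sent.split():
        for t in tgt_sent.split():
            counts[s][t] += 1.0

# Normalize the counts into conditional probabilities p(target | source).
prob = {
    s: {t: c / sum(t_counts.values()) for t, c in t_counts.items()}
    for s, t_counts in counts.items()
}

# "maison" co-occurs with "house" twice but with "the" and "a" once each,
# so "house" receives the most probability mass.
print(max(prob["maison"], key=prob["maison"].get))  # house
```

Even this toy version shows why the approach scaled so well: adding more sentence pairs improves the estimates with no extra rule-writing.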

A few years later, in 1997, speech technology reached consumers: Dragon NaturallySpeaking was released as the first general-purpose dictation product that could recognize continuous speech, without requiring the speaker to pause between words.

In 2006, Google launched Google Translate, a statistical machine translation service that learned from millions of human-translated documents rather than hand-coded rules for each language pair. The system has improved steadily since, switching to neural machine translation in 2016, and today Google Translate supports over 100 languages.

In 2011, IBM's Watson system defeated two human champions, Ken Jennings and Brad Rutter, on the quiz show Jeopardy!. Watson understood questions posed in natural language and answered them from a large corpus of sources that included Wikipedia.

Then, in 2016, Microsoft introduced a chatbot called Tay, designed to learn from interactions with people on Twitter in order to engage users online. It did not take long before the bot started tweeting offensive content, which led to its shutdown after just 16 hours.

Now, in 2021, the hype around machine learning is at an all-time high, driven largely by large pretrained language models such as BERT and GPT-3.

What are the Limits of Natural Language Processing (NLP)? 

One open challenge is improving natural language processing in interactive dialogue systems, including knowledge-based dialogue and conversational agents such as Siri or Alexa, the assistants we use on a daily basis. There is still a long way to go before they can respond the way human beings do.

Another limit is that most machine learning models are not designed for real-time use in applications such as chatbots; they are built for offline processing of large data sets with many input variables. A model can only reflect what its training data contains, so it cannot anticipate future events or cover every possible scenario.

What Do We Want to Achieve with Natural Language Processing (NLP)?


Scientists want to create algorithms that can grasp the meaning and intent of a sentence from as few words as possible, so that information can be extracted from it automatically (see the sketch below). In that sense there is no fixed limit on what we want to achieve with natural language processing, as long as it supports human activities in everyday life. Developing NLP has already been a huge help to humans, and while there are some threats behind its development, there are also many opportunities.
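As a deliberately simplified illustration of what "grasping intent" means in practice, here is a toy keyword-based intent detector. The intents and keyword lists are invented for this example, and real assistants learn the mapping from data with trained classifiers rather than keyword matching.

```python
# Toy intent detection: score each candidate intent by how many of its
# (hand-picked, invented) keywords appear in the sentence.
INTENT_KEYWORDS = {
    "book_flight": {"fly", "flight", "ticket", "book"},
    "weather": {"weather", "rain", "sunny", "forecast"},
    "greeting": {"hello", "hi", "hey"},
}

def detect_intent(sentence: str) -> str:
    """Return the intent whose keyword set overlaps the sentence most."""
    tokens = set(sentence.lower().replace("?", "").split())
    scores = {intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(detect_intent("I want to book a flight to Paris"))  # book_flight
print(detect_intent("Will it rain tomorrow?"))            # weather
print(detect_intent("Nice to meet you"))                  # unknown
```

The gap between this sketch and what scientists actually want, understanding from context rather than surface words, is exactly why the field still has so far to go.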

Natural language processing has helped people in everyday life: language-learning tools help them speak and read more fluently, and dictation software lets them enter text faster than they could type it. But one of the main threats, some experts warn, is that natural language processing will put humans out of jobs as they are replaced by machines.

However, others argue that natural language processing will give humans new jobs and opportunities they never had before, precisely because the technology is so complex. As long as the development of NLP supports human activities in everyday life, we may yet find where the border between restriction and freedom lies with this technology.


Anas Bouargane

Business Expert

Anas is the founder of CEF Académie, a platform that provides guidance and support for those willing to study in France. He previously interned at Unissey. Anas holds a bachelor's degree in economics, finance and management from the University of Toulon.

   