Machine Learning Has Advanced, But Natural Language is a Hurdle

Even with recent advances in AI, computers can’t really understand human language

September 23, 2021 | Scotty Hendricks

Most of us are familiar with attempts by computers and software systems to understand spoken language—like when you ask your phone to tell you where the nearest sandwich shop is, or ask Alexa to play a favorite song. You’re also likely aware of the limitations of that software, which seizes up if your language gets too complex or if it mishears you.

Advances in machine learning often rely on giving computers ever-larger piles of information to draw connections from. This has worked wonders in a wide variety of fields, especially with the advent of artificial neural networks that allow for multiple layers of processing and adjustments to the system when it makes a mistake. Given enough training and data to work with, such systems can learn almost anything.
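The “adjustments to the system when it makes a mistake” can be sketched in a few lines of Python. This is a toy single-weight example, not a real multi-layer network: the system makes a prediction, measures its error, and nudges its one parameter to shrink that error—the same loop, repeated across millions of weights, that drives neural-network training.

```python
# A minimal sketch of learning by error correction: one weight, updated
# by gradient descent to fit the rule y = 2x. Real neural networks stack
# many layers of such weights, but the correction loop is the same idea.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0          # the single learnable parameter
lr = 0.05        # learning rate: how big each correction step is

for _ in range(200):             # repeated passes over the data
    for x, target in data:
        pred = w * x             # forward pass: make a prediction
        error = pred - target    # how wrong was it?
        w -= lr * error * x      # nudge the weight to shrink the error

print(round(w, 2))  # → 2.0
```

After enough passes, the weight settles on the value that makes the fewest mistakes—given enough data and capacity, the same mechanism can fit far more complicated patterns.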

However, even with access to more data, machines still have a lot to learn. A chief example of this is the problem of Natural Language Processing—getting computers to understand information conveyed in human languages as reliably as they handle information conveyed in coding languages. As it turns out, teaching a machine to understand human language is more difficult than just throwing more words at it.

The difficulties of teaching a machine human languages

Coding languages are limited in scope, well defined, and strict about syntax. Natural human languages, on the other hand, are full of ambiguous terminology, contextual changes in meaning, and other difficulties. Getting a machine to understand print("hello, world") is one thing. Getting it to understand the sentence “The rabbit could not get through the hole, as it was too big” is another: the computer gets stuck on what “it” refers to, since grammatically it could be either the rabbit or the hole, while a human immediately knows it must be the rabbit. Computers also get stuck on sentences where words can be either verbs or nouns, as is often the case in English.
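The contrast can be made concrete in Python. Code has exactly one well-defined parse, which the standard-library ast module can recover mechanically; the rabbit sentence defeats a naive “most recent noun” heuristic for resolving “it” (a toy stand-in for shallow approaches, not a real NLP pipeline).

```python
import ast

# Code is unambiguous: the grammar admits exactly one parse tree.
tree = ast.parse('print("hello, world")')
call = tree.body[0].value
assert call.func.id == "print"          # the machine knows precisely what this is

# A naive "most recent noun" heuristic for pronoun resolution.
def resolve_it(nouns):
    """Pick the antecedent of 'it' by recency: the last noun seen."""
    return nouns[-1]

# Nouns preceding "it" in: "The rabbit could not get through
# the hole, as it was too big."
antecedent = resolve_it(["rabbit", "hole"])
print(antecedent)  # → hole
# The heuristic picks "hole", but a human knows "it" must be the
# rabbit: things that are too big fail to fit *through* holes.
# Resolving the pronoun requires world knowledge, not just syntax.
```

Swapping “too big” for “too small” would flip the correct answer to the hole without changing the sentence’s structure at all, which is exactly why surface statistics struggle here.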

It’s no surprise that the pioneering computer scientist Alan Turing made fluent use of a human language his standard for machine intelligence.

But if these issues can be resolved, the applications would be myriad and exciting. We can imagine future computers that understand the nuances of human language and are able to interpret your inquiries, provide accurate translations between languages, summarize large documents into short, meaningful pieces, or explain in plain language what a program will do before it is run.

The shape of things to come

Some commentators are rather optimistic about the chances of overcoming these issues in the long term. For example, Marjorie McShane, co-author of the book Linguistics for the Age of AI, argues that the solution to the problem is simply putting in more work. By giving machine learning tools ever larger, better-curated sets of data to work with, they will eventually come to understand language, which will allow computers to solve more difficult problems than they can now.

On the other hand, Professor Douglas Hofstadter of Indiana University explains in an article in The Atlantic that even with impressive advances in machine translation using neural networks, we will still face the significant hurdle of teaching computers to truly understand the “etherealities” of ideas and concepts:

“Another natural question is whether Google Translate’s use of neural networks—a gesture toward imitating brains—is bringing us closer to genuine understanding of language by machines. This sounds plausible at first, but there’s still no attempt being made to go beyond the surface level of words and phrases. All sorts of statistical facts about the huge databases are embodied in the neural nets, but these statistics merely relate words to other words, not to ideas. There’s no attempt to create internal structures that could be thought of as ideas, images, memories, or experiences. Such mental etherealities are still far too elusive to deal with computationally, and so, as a substitute, fast and sophisticated statistical word-clustering algorithms are used. But the results of such techniques are no match for actually having ideas involved as one reads, understands, creates, modifies, and judges a piece of writing.”

Learning to process and understand a language is difficult; it takes most humans years to get the hang of a new one, and even then they are prone to mistakes. Figuring out how to teach a machine to properly understand human language may prove to be a decade away, if that soon.

However, if we get it right, the applications will be fantastic.

About the Author

Scotty Hendricks is a writer based out of Chicago. He covers a wide variety of topics including science, technology, philosophy, policy, and current events.