Marjorie McShane on Artificial Intelligence, Natural Language Processing & the Frontiers of AI

InfoWorks spoke with the pioneering researcher about computational linguistics, intelligent agents, and the untapped potential of knowledge-based systems

November 11, 2021

Most businesspeople are aware of the myriad ways that artificial intelligence has impacted the labor market. From fast-food ordering to invoicing, chatbots to supply chain management, AI is taking over roles once performed by human workers. As we recently explored at InfoWorks, the COVID-19 pandemic has only accelerated the transition to AI within key business sectors.

Marjorie McShane

The majority of AI applications use a data analysis method widely known as machine learning, in which the AI uses datasets and statistical relationships to drive performance. Machine learning is classified as knowledge-lean because the AI draws on limited knowledge (i.e., datasets, albeit massive ones) to reach conclusions based on statistical relationships. The AI doesn’t understand the meaning, complexities, intent, or ideas behind its tasks. An increasingly popular subfield is Natural Language Processing (NLP), where these knowledge-lean techniques are used to translate or converse in human language. Current real-world applications are limited insofar as the AI doesn’t actually understand the language that it’s speaking in, responding to, or translating.
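To make the knowledge-lean idea concrete, here is a minimal illustrative sketch (our own, not drawn from McShane’s work): a toy classifier that labels sentences using word-frequency statistics alone. The training data, labels, and scoring scheme are all hypothetical.

```python
from collections import Counter

# Toy "knowledge-lean" classifier: it learns only word-count statistics
# from labeled examples and holds no representation of meaning or intent.
TRAINING = [
    ("the delivery was fast and great", "positive"),
    ("great service, very happy", "positive"),
    ("the order was late and wrong", "negative"),
    ("very unhappy with the wrong invoice", "negative"),
]

counts = {"positive": Counter(), "negative": Counter()}
for text, label in TRAINING:
    counts[label].update(text.split())

def classify(text: str) -> str:
    # Score each label by how often its training examples used the same words.
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("great fast delivery"))  # "positive" -- purely statistical
print(classify("not great"))            # also "positive": the statistics miss the negation
```

The second call shows the limitation described above: the program succeeds or fails on surface statistics, with no grasp of what the words mean.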

But whatever happened to the tantalizing goal of human-type intelligence—AI with the ability to understand, interpret, learn from its environment, explain its own thoughts and actions, perhaps even understand a human’s emotions and personality traits?

To help us understand these frontiers of NLP, InfoWorks spoke with Marjorie McShane, professor of cognitive science at Rensselaer Polytechnic Institute. McShane has been working for decades as a researcher at the cutting edge of computational linguistics, pursuing types of tech that many have abandoned. Her fascinating new book, Linguistics for the Age of AI (co-authored with fellow researcher Sergei Nirenburg), explores the theoretical underpinnings and emergent applications of what McShane calls LEIAs: language-endowed intelligent agents.

In contrast with knowledge-lean machine learning, LEIAs are knowledge-based systems, which consider the context, meaning, intention and ambiguous interpretations of language. Knowledge-based systems fell out of favor within the AI field years ago, essentially because they were seen as too difficult and ambitious, with too much information required from too many sources.

“The public perception of the futility of any attempt to overcome this so-called knowledge bottleneck profoundly affected the path of development of AI in general and NLP in particular,” McShane and Nirenburg wrote. But, as they argue in their book, the knowledge bottleneck is misunderstood, and there is a lot to gain by returning to knowledge-based models.

McShane corresponded with us via email to elaborate on the possibilities of LEIAs and her work in this arena.

InfoWorks: I see that your PhD was in Slavic Languages and Literatures. Why did you become interested in applying your linguistics knowledge in the tech setting as a computational linguist?

One of the goals of linguistics is to explain how an incredibly complex system – human language – works. This involves things like analyzing data, identifying patterns in it, developing explanatory theories and computational models to try to account for the data, and testing those models using still more data – often from multiple languages. These are the same kinds of skills used by computer scientists and engineers who work on AI.

My shift from Slavic to computational linguistics was geopolitically grounded: by the time I finished my PhD, the job market for Slavic linguists had dried up. As luck would have it, I found a job in a lab focused on computational linguistics, which was a great fit. It turned out that the linguistic questions I was interested in, and the style of linguistic analysis needed to answer them, were entirely in the spirit of computational linguistics and AI. In fact, my 2005 book, A Theory of Ellipsis, is essentially a reframing of my dissertation work as a contribution to AI-oriented computational linguistics.

InfoWorks: Your recent book, Linguistics for the Age of AI, emphasizes the differences between knowledge-based and knowledge-lean systems. Can you explain why knowledge-based systems fell out of favor, despite the general consensus that knowledge-lean systems can’t create AI that truly understands language?

Unfortunately, I don’t think there is yet a consensus that knowledge-lean systems won’t be able to solve the language problem. At a minimum, developers of such systems have to keep trying to demonstrate that understanding language is still a goal and that progress toward that goal is ongoing.

One interesting thing about the ML (machine learning) turn of the mid-1990s is that knowledge-based methods were not merely disfavored; they were effectively run out of town. This seems to have been based on a perfect storm of factors. Knowledge-based systems require a lot of high-quality knowledge, which is difficult and expensive to acquire. System developers encapsulated their ensuing pessimism in the term “the knowledge bottleneck,” which came to be interpreted by societal decision-makers as a death knell for knowledge-based AI. And when work on AI-ML (AI-machine learning) was just taking off, nobody was thinking about what ML couldn’t do, only what it could do.

It turned out that ML could, in fact, support many useful – and lucrative – applications, encouraging developers to seek out more of the same. These successes animated the popular imagination, making people believe that ML is magic and that human-level AI is just around the corner. Practitioners, industry players and advertisers alike were only too happy to stoke this enthusiasm. The shift from general AI to what effectively became narrow AI (dealing in siloed applications) was quiet – nobody announced that the field had stopped pursuing meaning or working toward human-level capabilities.

Once the ML mindset had prevailed – and monopolized the field, including the education of new generations of AI practitioners – the insufficiencies of ML did not lead to a renewed discussion of knowledge-based methods. This is a societal issue, not a scientific one.

InfoWorks: In the labor market there is concern that AI will replace many of the roles and jobs that humans once performed. If LEIAs become a reality, which types of jobs might LEIAs eventually replace?

We intend LEIAs to be orthotic rather than prosthetic systems – i.e., we intend them to enhance the efficiency of human workers, not replace them. Limiting our projections to the next half century (who knows what might unfold over that time!), we think that robotic LEIAs could be useful as cognitive assistants to people in responsible positions (clinicians, military leaders, decision-makers in business and politics); as tutors in any discipline, including using simulation-based, trial-and-error teaching methodologies; as cognitive enhancements to robotic systems; and as more sophisticated versions of today’s widely used applications (web search, question-answering, etc.).

InfoWorks: Why do you think it’s important to create these language-endowed intelligent agents (LEIAs)? Can you provide a few examples of things LEIAs can do that our more simplistic AI systems are unable to do?

The LEIAs that we have implemented to date actually incorporate ML-based results for some rote tasks; and, as AI-ML progresses, I expect more such components to contribute to our LEIAs. Delegating rote tasks to ML-based components allows us to concentrate on more advanced capabilities.

For example, a LEIA’s knowledge bases and processing are fully inspectable (apart from the ML-based rote-task results just mentioned), so LEIAs can explain what they did and why. This means that they have the potential to be trusted collaborators even in high-stakes applications. Not so ML systems. Another advantage of LEIAs is that, unlike many approaches to ML, they do not require training data, which expands the scope of applications they can be used for. In fact, the need for extensive training data is a new bottleneck that the ML community does not talk about much – at least not in those pessimistic terms!
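As an illustration of what fully inspectable processing could look like, here is a hypothetical sketch (ours, not the actual LEIA architecture): an agent that records, for every conclusion, the rule and evidence behind it, so it can answer “what did you do and why?”

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "inspectable" processing (not the actual LEIA
# implementation): every conclusion is stored with the rule and evidence
# that produced it, so the agent can explain its behavior afterward.
@dataclass
class Decision:
    conclusion: str
    rule: str
    evidence: list[str]

@dataclass
class ExplainableAgent:
    trace: list[Decision] = field(default_factory=list)

    def conclude(self, conclusion: str, rule: str, evidence: list[str]) -> None:
        self.trace.append(Decision(conclusion, rule, evidence))

    def explain(self) -> str:
        return "\n".join(
            f"{d.conclusion}, because {d.rule} (evidence: {', '.join(d.evidence)})"
            for d in self.trace
        )

agent = ExplainableAgent()
agent.conclude(
    conclusion="interpreted 'bank' as FINANCIAL-INSTITUTION",
    rule="the surrounding context is about money",
    evidence=["'deposit' appears in the same sentence", "'loan officer' appears earlier"],
)
print(agent.explain())
```

The point of the structure is that nothing about the agent’s decision-making is hidden in opaque weights; every step can be read back, which is what makes trust possible in high-stakes settings.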

Think about the kind of household robot that might be available in the near future. If you want to teach it to do something new, you want to explain and/or demonstrate that process exactly once. Moreover, you want the robot to learn the process, the vocabulary for interactions with people, and the associated concepts to which the vocabulary refers all at the same time. None of this is supported by the popular ML methods [yet can be supported by LEIA methods].
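A hypothetical sketch of that one-shot teaching idea (all structures and names below are our own invention): a single demonstration simultaneously binds a new word, the concept it names, and the procedure that realizes it.

```python
# Hypothetical sketch of one-shot teaching: one demonstration binds a new
# word, the concept it names, and the action script that carries it out.
LEXICON: dict[str, str] = {}        # word    -> concept
ONTOLOGY: dict[str, dict] = {}      # concept -> properties
SCRIPTS: dict[str, list[str]] = {}  # concept -> executable steps

def teach(word: str, concept: str, properties: dict, steps: list[str]) -> None:
    """Learn language, world knowledge, and procedure from a single demonstration."""
    LEXICON[word] = concept
    ONTOLOGY[concept] = properties
    SCRIPTS[concept] = steps

teach(
    word="descale",
    concept="DESCALE-KETTLE",
    properties={"is-a": "cleaning-task", "object": "kettle"},
    steps=["fill kettle with vinegar solution", "boil", "rinse twice"],
)

# Later, a single utterance suffices: the word resolves to the concept,
# and the concept resolves to the procedure.
print(SCRIPTS[LEXICON["descale"]])
```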

InfoWorks: In your book, you discuss the Maryland Virtual Patient prototype system, which “was designed to train physicians via interactions with simulated LEIAs playing the role of virtual patients.” This is so fascinating! How are the LEIAs performing so far?

After developing the prototype, we were not able to secure sufficient funding to expand it into a deployed system. However, the approach is absolutely feasible and we’d love to work further on it.

In the prototype, we developed a library of virtual patients (LEIAs) suffering from a variety of esophageal diseases. The LEIAs were endowed with a detailed conceptual model of physiology, as well as relevant knowledge about language and the world. They were also endowed with different histories, personality traits, and socioeconomic situations. They could perceive, through interoception, signals that their simulated body sent them (such as pain), and they understood language sufficiently to interact with the trainee.

During system operation, LEIAs were launched to participate in dynamic, open-ended simulations with users playing the role of attending physicians. When the user launched the simulated life of a particular patient, the patient would, after some amount of time, begin to experience symptoms. Depending on its personality traits, the patient would sooner or later decide to go see the doctor. During the visit, the patient would answer the doctor’s questions, ask clarification questions, learn new words and concepts (e.g., for tests and diseases), and negotiate with the doctor about diagnostic and treatment procedures. There could then be any number of follow-up visits, tests, and procedures, depending on the decision-making of the patient and the doctor, as well as the physiological predispositions of the patient. In short, the scenarios were not at all canned – they could unfold in countless ways, and users were free to make mistakes and see how they played out.
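To give a flavor of how such an open-ended simulation might be wired together, here is a toy sketch (not the actual Maryland Virtual Patient code; the trait names, disease model, and thresholds are invented): a virtual patient whose symptoms worsen over simulated time and whose personality determines when it decides to see the doctor.

```python
import random

# Illustrative sketch only: a toy simulation loop in the spirit of the
# Maryland Virtual Patient description above. Everything here is invented.
class VirtualPatient:
    def __init__(self, name: str, stoicism: float, progression: float):
        self.name = name
        self.stoicism = stoicism        # 0.0 (seeks care early) .. 1.0 (delays)
        self.progression = progression  # how fast symptoms worsen per day
        self.symptom_severity = 0.0

    def live_one_day(self) -> None:
        # The simulated physiology gradually produces stronger symptoms.
        self.symptom_severity += self.progression * random.uniform(0.5, 1.5)

    def seeks_doctor(self) -> bool:
        # Interoception: the patient acts once perceived discomfort
        # outweighs its personality-dependent tolerance.
        return self.symptom_severity > self.stoicism

patient = VirtualPatient("patient-1", stoicism=0.8, progression=0.05)
day = 0
while not patient.seeks_doctor():
    patient.live_one_day()
    day += 1
print(f"{patient.name} decided to see the doctor on day {day}")
```

Because the patient’s behavior emerges from its traits and its (randomized) physiology rather than from a fixed script, two runs of even this toy loop unfold differently, which is the “not at all canned” quality described above, in miniature.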

InfoWorks: One of the cool aspects of LEIA technology is that it includes sensory input from the surrounding world—you use the example of robotic vision. How can collaborations be encouraged between computational linguists and scientists working on sensory tech?

LEIAs can integrate the results of any type of sensory tech. For example, if one wants a LEIA robot to be able to identify that an object it is holding is prickly, then the LEIA must have a concept “prickly” in its world model, it must know the salient properties of that concept so that it can talk and reason about prickly things (e.g., the object contains small spikes; touching it causes pain), and it must link that concept to the sensor data generated when a robot touches something prickly. In the Maryland Virtual Patient system, the virtual patient experienced bodily signals (interoception) which were generated by the physiological simulation engine. Since LEIAs suffered from esophageal diseases, they experienced symptoms related to those diseases, such as difficulty swallowing, chest pain, and regurgitation.
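A hypothetical sketch of that grounding step (the property names, thresholds, and sensor-reading format are invented for illustration): a symbolic concept in the world model carries both reasoning-relevant properties and a link to raw tactile features.

```python
# Hypothetical sketch of grounding a lexical concept in sensor data,
# following the "prickly" example above. All structures are invented.
WORLD_MODEL = {
    "PRICKLY": {
        "definition": "covered with small spikes",
        "salient-properties": {
            "has-part": "small spikes",
            "touching-causes": "pain",
        },
        # Link from the symbolic concept to raw tactile sensor features.
        "sensor-grounding": lambda reading: (
            reading["contact_points"] > 20 and reading["point_pressure"] > 0.7
        ),
    },
}

def interpret_touch(reading: dict) -> list[str]:
    """Return the world-model concepts whose sensor grounding matches the reading."""
    return [
        concept
        for concept, frame in WORLD_MODEL.items()
        if frame["sensor-grounding"](reading)
    ]

# A simulated tactile reading from grasping a cactus-like object.
reading = {"contact_points": 42, "point_pressure": 0.9}
print(interpret_touch(reading))  # ['PRICKLY'] -- the robot can now talk and reason about it
```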

Such collaborations [between computational linguists and sensory tech scientists] are absolutely essential but will only happen if funding practices reopen the door to knowledge-based methods. As long as funding practices continue to overwhelmingly favor ML-based approaches, it is difficult to entice scientists working on sensory tech to think about interpreting their data results in the style that LEIAs require.

InfoWorks: You wrote that “the vision of human-level AI remains as tantalizing now as when formulated by the founders of AI over a half century ago.” Within the business realm specifically, what are some of the exciting possibilities? 

The most exciting possibility is shifting from the current method-driven orientation of AI (“do whatever you can with ML”) to a goal-driven orientation (“identify a problem that needs to be solved and solve it using whatever methods are most appropriate”).

It is good scientific practice to test out the capabilities of every new methodological and technological advance – along with the limits of those capabilities. Several generations of ML-based methods have, by now, been thoroughly investigated, and their limitations have become progressively clearer. It is high time for the companies that deploy AI to turn from asking “What can we do with the current ML methods?” to “What do my customers most need and how can we deliver that to them?” And it is up to research institutions to intensify R&D beyond the currently fashionable methods.


Interview by Katherine Don
