Quantum NLP: Machine Learning’s Next Frontier.

Ask most people to define natural language processing (NLP), and they’ll likely shrug their shoulders. 

What they may not realize, however, is that much of their lives already revolve around this technology. From setting up email filters, to asking Siri or Alexa for the best route or the ingredients of a favorite dish, to running internet searches, NLP is everywhere in the consumer world. It’s become increasingly prevalent in the business world, too, through automated summarization, translation, sentiment analysis of media and other content, and other complex applications.

But the increasingly large demands placed on NLP, thanks to the growing ubiquity of human-computer interfaces, have exposed the weaknesses of traditional NLP on classical computers. That’s because most current NLP is dominated by the so-called “bag of words” approach: treating text as a collection of individual words, each defined only by its own meaning, without accounting for grammatical structure or composition.

This approach is typically referred to as a distributional approach to NLP and uses the vector space model to determine the meaning of words. How this works is relatively simple. First, a set of several thousand context words is identified. Researchers then use a corpus to count how often other words appear near these context words (next to “sports,” for example, you’ll likely see mentions of “football,” “baseball,” “hockey,” and other closely related words). Those co-occurrence counts become the word vector for any particular word (in this case, the term “sports”).
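To make the counting concrete, here’s a minimal sketch in Python. The tiny corpus, the context words, and the window size are all invented for illustration; real systems use corpora of millions of documents and thousands of context words.

```python
from collections import Counter

# Toy corpus and hand-picked context words (invented for illustration).
corpus = [
    "football is a popular sport",
    "baseball is a popular sport",
    "hockey is a winter sport played on ice",
]
context_words = ["sport", "ice", "popular"]
window = 4  # how many words on each side count as "nearby"

def word_vector(target):
    """Count how often each context word appears near the target word."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, token in enumerate(tokens):
            if token != target:
                continue
            nearby = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            counts.update(w for w in nearby if w in context_words)
    # The vector for `target` is its co-occurrence count with each context word.
    return [counts[c] for c in context_words]

print(word_vector("football"))  # -> [1, 0, 1]: near "sport" and "popular"
```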

The problem, however, is that natural language is creative and difficult to predict. People don’t use the same sentence repeatedly to describe something; when they speak or write, they’re prone to change things up, using different combinations of words and cadences to describe the same things. As a result, distributional techniques struggle with longer swathes of text.

And so researchers also came up with a second approach to NLP called the compositional model (also known as compositional semantics), which boils grammatical structures down to mathematical operations – essentially turning sentences into networks of words where those words interact to create meaning.

“Each of the two approaches has its own shortcomings, which are more or less solved in the other one,” according to Makarov et al. And so things stood until researchers Coecke, Sadrzadeh, and Clark combined the two into something called the “compositional distributional model,” today known by the funky acronym “DisCoCat.”
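To make the combination concrete, here is a toy numpy sketch of DisCoCat-style composition, with invented noun vectors and an invented verb tensor: nouns are vectors, a transitive verb is a tensor, and the grammar dictates how they contract into a sentence meaning.

```python
import numpy as np

# Toy 2-dimensional noun meanings (values invented for illustration).
alice = np.array([1.0, 0.0])
bob = np.array([0.0, 1.0])

# In DisCoCat, a transitive verb is a tensor with one index for the subject,
# one for the sentence space, and one for the object.
rng = np.random.default_rng(0)
loves = rng.random((2, 2, 2))  # indices: (subject, sentence, object)

# Grammar wires the network: contract the verb's subject index with the
# subject noun and its object index with the object noun.
sentence = np.einsum("i,isj,j->s", alice, loves, bob)
print(sentence)  # a vector in sentence space: the meaning of "Alice loves Bob"
```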

DisCoCat, QNLP, and everything in between

Such a framework looks at sentences as mini-networks that can be represented visually. A visualization of a relatively simple sentence is below, with boxes representing the meaning of words and lines representing “channels through which these meanings can be transmitted.”

[Figure: a DisCoCat diagram of a simple sentence. Image courtesy: https://medium.com/cambridge-quantum-computing/quantum-natural-language-processing-748d6f27b31d]
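Diagrams like this can also be generated programmatically. Here’s a minimal sketch using lambeq, the open-source Python library Cambridge Quantum later released for exactly this pipeline (assuming the library and its default parser are installed):

```python
from lambeq import BobcatParser

# Parse a sentence into a DisCoCat string diagram: boxes carry word
# meanings, wires are the channels those meanings flow through.
parser = BobcatParser()
diagram = parser.sentence2diagram("Alice loves Bob")
diagram.draw()
```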

“One particularly interesting aspect of this graphical framework for linguistics was that the networks were inherited from previous work that provided quantum theory with an entirely network-like language,” according to Meichanetzidis et al., who also state that “Language is quantum native.”

“A direct correspondence was established on the one hand between the meanings of words and quantum states, and on the other hand grammatical structures and quantum measurements,” the authors write. “Obviously this led to the question — can one make quantum computers handle natural language?”
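As a toy illustration of the first half of that correspondence, a word vector can be normalized and read as the amplitudes of a quantum state (a minimal numpy sketch with invented numbers):

```python
import numpy as np

# A made-up 4-dimensional word vector (e.g. co-occurrence counts).
word_vec = np.array([3.0, 1.0, 0.0, 2.0])

# Normalizing gives a valid 2-qubit quantum state whose amplitudes encode
# the word's meaning; measurement probabilities are squared amplitudes.
state = word_vec / np.linalg.norm(word_vec)
print(state, (state ** 2).sum())  # amplitudes, probabilities sum to 1.0
```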

Indeed, most advanced NLP problems require massive amounts of classical computational resources. But the advent of the first generation of quantum computers, known as noisy intermediate-scale quantum (NISQ) devices, means researchers have begun building on the quantum-like nature of advanced NLP interactions (along with quantum machine learning techniques) to map DisCoCat diagrams to quantum circuits. The promise of quantum computers, after all, lies in the computational power they can theoretically bring to complex problems beyond the reach of classical machines.

The idea of applying quantum computers to NLP first appeared in a 2016 research paper by Will Zeng and Bob Coecke. Since then, it has only gathered momentum, with more researchers – and even Intel, via the company’s quantum simulator – attempting to tackle the problem. However, one issue with that paper is that it relied on the assumed existence of quantum random access memory (QRAM), which, to this day, is still only theoretical.

Regardless, in an indication of the field’s growing popularity thanks to these advances, Oxford University held the first-ever QNLP conference, hosted by the university’s Quantum Group in the Department of Computer Science together with Cambridge Quantum Computing (CQC) and IBM.

Recent advances in QNLP

At this conference, Meichanetzidis et al. presented some of the most recent QNLP experiments. The authors of the paper containing the diagram we referenced earlier demonstrated that QNLP can indeed be implemented on NISQ devices – but not using the relatively simple sentence networks illustrated above. Instead, the researchers had to encode these networks into something a quantum circuit can understand, in their words providing sentences with a “quantum circuit skeleton,” as shown below.

[Figure: the quantum circuit skeleton of a sentence. Image courtesy: https://medium.com/cambridge-quantum-computing/quantum-natural-language-processing-748d6f27b31d]
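In code, this diagram-to-circuit step looks roughly as follows: a hedged sketch using lambeq’s IQP-style ansatz, one published way of building such circuit skeletons (the qubit counts and layer depth are illustrative choices):

```python
from lambeq import AtomicType, BobcatParser, IQPAnsatz

parser = BobcatParser()
diagram = parser.sentence2diagram("Alice loves Bob")

# Turn the sentence diagram into a parameterized quantum circuit:
# one qubit per noun wire and per sentence wire, one IQP-style layer.
ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1}, n_layers=1)
circuit = ansatz(diagram)
circuit.draw()
```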

“In this form, we demonstrate that QNLP can be implemented on NISQ devices, and of course, will work extremely well as these devices scale in terms of size and performance,” the researchers say. They add that their experiment also proves QNLP is possible without the assistance of QRAM (which, again, exists in theory only at this point). “By employing quantum machine learning, we do not directly encode the meanings of words, but instead construct a framework in which quantum states and processes learn their meanings directly from text.”

Through this framework, researchers can prepare quantum states that encode the meaning of words and phrases on NISQ devices – and, as they’ve demonstrated, it works. “Posing a question to the quantum computer, constructed by the vocabulary and grammar the quantum computer has learned, it returns the answer.”
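The training loop behind that claim resembles any other variational quantum algorithm. Below is a heavily simplified, purely classical simulation (the dataset, the one-parameter “circuit,” and the per-sentence offsets are all invented) showing how a parameter standing in for a word’s meaning can be tuned against labeled sentences:

```python
import numpy as np

# Toy dataset: each sentence is labeled true (1) or false (0); invented.
data = [("Alice loves Bob", 1), ("Bob hates Bob", 0)]

def circuit_prob(theta, offset):
    """Simulate a 1-qubit circuit: Ry(theta + offset) on |0>, P(measure 1)."""
    return np.sin((theta + offset) / 2) ** 2

def loss(theta):
    # Each sentence reuses the shared word parameter with its own offset,
    # a crude stand-in for different sentences sharing word circuits.
    return sum(
        (circuit_prob(theta, i * np.pi) - label) ** 2
        for i, (_, label) in enumerate(data)
    )

# Update via finite differences, as gradient estimation on NISQ devices
# is itself done by repeated circuit evaluations.
theta, eps, lr = 0.1, 1e-3, 0.5
for _ in range(200):
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(theta, loss(theta))  # theta converges toward pi; loss toward 0
```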

The current state of quantum computing, which is still very much in its infancy, is a clear roadblock to more advanced QNLP experiments. Today, most quantum computers are still held back by their incredible sensitivity to environmental factors, including vibrations and temperature changes, which can push a contemporary quantum computer into quantum decoherence – rendering it more or less useless. But as quantum hardware becomes more powerful and less error-prone, it’s clear QNLP will have a large role to play in advancing the complexity and effectiveness of computer-human interactions.
