A recent model that uses inner products of word-distribution vectors, built from the positions of words in text, has found cross-over appeal between linguistics and quantum information processing, and vice versa. Relating the distributions of words to the verbs they accompany, and modeling the effect words have on each other as matrix transformations, has led to a way of processing text very different from existing approaches, including Google's current understanding mechanisms.
The paper featuring the linguistic advance is covered here: http://arxivindex.blogspot.com/2012/04/linguistics-nlp-model-from-physics-and.html
A news story on the QC group:
"The most important thing is how it has focussed attention on quantum processes, and how one combines them to get more complex processes," Panangaden says.
That impact is reflected in the growth of Coecke's research group and in his professional success since he, in his words, got his head down and tried to "do something with my life that actually worked out!" When he started his postdoc position at Oxford University ten years ago he was the only person doing research in quantum logic. Now a professor, he has seen his research group balloon to 35, and counting. "I've got 20 PhD students at the moment, that's actually a bit too many," he laughs. Coecke's conceptual stance on quantum theory has also been attracting a lot of interest from the "big guns", for example the US Navy, because by creating a framework for putting together quantum processes you are essentially modeling a quantum computer.
http://www.fqxi.org/community/articles/display/166
Why is Coecke so good at what he does? Panangaden puts it down to his "intellectual boldness that allows him to go where others would hesitate." Indeed, Coecke is most proud of how his diagrammatic approach can be applied to other parts of our daily reality, such as linguistic processes—an arena that few quantum physicists would dare to enter.
Surprisingly, the way words interact to make up a sentence is similar to the way quantum processes interact. Google takes no notice of the order of words on a page, but actually the ordering can completely change a sentence’s meaning. Coecke has used his graphical approach to connect individual words in a sentence so their meaning can be extracted according to both the content of each word and its positioning. This is quite an achievement: most models of human language either focus on individual words or grammatical rules, not both. "Our categorical model blows away the existing language processing models," says Coecke.
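The order-blindness described above is easy to demonstrate. A minimal Python sketch (illustrative only, not Google's actual pipeline): a bag-of-words representation assigns identical vectors to two sentences with opposite meanings.

```python
from collections import Counter

# Two sentences with opposite meanings
s1 = "man bites dog"
s2 = "dog bites man"

# A bag-of-words model keeps only word counts and discards order,
# so both sentences produce the identical representation.
bow1 = Counter(s1.split())
bow2 = Counter(s2.split())

print(bow1 == bow2)  # True: the two meanings cannot be told apart
```

Any model that wants to distinguish these sentences must therefore use grammatical structure, which is exactly what the categorical approach adds.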
Coecke is now working on a linguistics model with Mehrnoosh Sadrzadeh, Ed Grefenstette and Dimitri Kartsaklis in his group, and linguists Stephen Pulman at Oxford and Stephen Clark of Cambridge University. They are testing it with samples from the British National Corpus—a 100 million word collection of written and spoken British English. The team plans to incorporate language processing tasks to calculate the meaning of sentences, for example to include words that are ambiguous—such as ’Mars’ which might refer to the planet, a Roman god, or the chocolate bar, among other things. They also need to untangle compound types—when two words are joined together, such as sandpaper.
"I’m looking forward to seeing what comes out of his research group in the next few years," says Spekkens.
The fact that Coecke’s high-level approach to understanding quantum information has such power when applied to other diverse fields, including linguistics, may point to a higher truth: that there are structures common to all layers of reality. "I used to think that physics was the only real deal, the foundation of everything," says Coecke. "I think that perspective is very naïve now."
And a full text article by the same group:
We put forward a new take on the logic of quantum mechanics, following Schroedinger's point of view that it is composition which makes quantum theory what it is, rather than its particular propositional structure due to the existence of superpositions, as proposed by Birkhoff and von Neumann. This gives rise to an intrinsically quantitative kind of logic, which truly deserves the name `logic' in that it also models meaning in natural language, the latter being the origin of logic, that it supports automation, the most prominent practical use of logic, and that it supports probabilistic inference.
In 1935, merely three years after the birth of his brainchild, von Neumann wrote in a letter to the American mathematician Garrett Birkhoff: "I would like to make a confession which may seem immoral: I do not believe absolutely in Hilbert space no more." (sic); for more details see . Soon thereafter they published a paper entitled "The Logic of Quantum Mechanics" . Their "quantum logic" was cast in order-theoretic terms, very much in the spirit of the then-reigning algebraic view of logic, with the distributive law replaced by a weaker (ortho)modular law. This resulted in a research community of quantum logicians [31, 48, 66, 69]. However, despite von Neumann's reputation, and the large body of research that has been produced in the area, one does not find a trace of this activity in the mainstream physics, mathematics, or logic literature.
Hence, 75 years later one may want to conclude that this activity was a failure. What went wrong? 1.1. The mathematics of it. Let us consider the raison d'être of the Hilbert space formalism. Why would one need all this "Hilbert space stuff", i.e. the continuum structure, the field structure of the complex numbers, a vector space over it, inner-product structure, etc.? According to von Neumann, he simply used it because it happened to be "available". The use of linear algebra and complex numbers in so many different scientific areas, as well as results in model theory, clearly shows that quite a bit of modeling can be done using Hilbert spaces. On the other hand, we can also model any movie by means of the data stream that runs through your cables when you watch it.
But does this mean that these data streams make up the stuff a movie is made of? Clearly not; we should rather turn our attention to what is taught at drama schools and directing schools. Similarly, von Neumann turned his attention to the actual physical concepts behind quantum theory, more specifically, the notion of a physical property and the structure imposed on such properties by the peculiar nature of quantum observation. His quantum logic gave the resulting "algebra of physical properties" a privileged role.
All of this leads us to ... 1.2. ... the physics of it. Birkhoff and von Neumann crafted quantum logic in order to emphasize the notion of quantum superposition. In terms of states of a physical system and properties of that system, superposition means that the strongest property which is true for two distinct states is also true for states other than the two given ones.
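The superposition property stated above can be illustrated numerically. In a minimal sketch (a hypothetical two-dimensional qubit example, not taken from the paper), the strongest property true of two distinct states corresponds to the smallest subspace containing both, and that subspace also contains every superposition of them:

```python
import numpy as np

# Two distinct states of a hypothetical qubit
s0 = np.array([1.0, 0.0])            # state |0>
s1 = np.array([0.0, 1.0])            # state |1>
superpos = (s0 + s1) / np.sqrt(2)    # a superposition of the two

# Express the superposition in the span of s0 and s1:
# the smallest subspace containing both states
basis = np.stack([s0, s1], axis=1)
coeffs = np.linalg.solve(basis, superpos)

print(np.allclose(basis @ coeffs, superpos))  # True: it lies in the span
```

So a property ("lies in this subspace") that holds for both given states automatically holds for states other than the two given ones, which is exactly the point Birkhoff and von Neumann built their logic around.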
A theory for word meaning. The currently dominant theory of word meaning for natural language processing tasks is the so-called distributional or vector space model of meaning . It takes inspiration from Wittgenstein's philosophy of "meaning is use", whereby the meaning of a word can be determined from its context, and works as follows. One fixes a collection of n words, the context words, and considers an n-dimensional vector space with a chosen basis in which each basis vector represents one of the context words. Then one selects a huge body of written text, the corpus, e.g. the internet, all editions of a certain newspaper, all novels, or the British National Corpus, a 100 million word collection of samples of written and spoken language from a wide range of sources. Next one decides on a scope, that is, a small integer k, and for each context word x one counts the number of times N_x(a) that a word a, to which one wants to assign a meaning, occurs at a distance of at most k words from x. One obtains a vector (N_1(a), ..., N_n(a)), which one normalizes in order to obtain (π_1(a), ..., π_n(a)), the meaning vector of a.
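The counting-and-normalizing procedure above can be sketched in a few lines of Python. This is a toy illustration, not the paper's implementation; the function name, the tiny corpus, and the window handling are assumptions for the example.

```python
import numpy as np

def meaning_vector(a, context_words, tokens, k=5):
    """Toy sketch of the distributional procedure: for each context
    word x, count co-occurrences with the target word `a` within a
    window of k tokens, then normalize the count vector."""
    index = {w: i for i, w in enumerate(context_words)}
    counts = np.zeros(len(context_words))
    for pos, tok in enumerate(tokens):
        if tok != a:
            continue
        # Scan the window of k tokens on either side of this occurrence
        for x in tokens[max(0, pos - k): pos + k + 1]:
            if x in index:
                counts[index[x]] += 1
    total = counts.sum()
    return counts / total if total else counts

# A tiny stand-in corpus (the real model uses millions of words)
tokens = "the cat sat on the mat while the dog sat on the rug".split()
print(meaning_vector("sat", ["cat", "dog", "mat", "rug"], tokens, k=3))
```

With a real corpus the context vocabulary would contain thousands of words and the counts would be far denser, but the construction is the same.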
Now, in order to compare the meanings of words, in particular how closely their meanings are related, one can simply compute the inner product of their meaning vectors. The meanings of all sentences live in the same vector space, so we can again simply use the inner product to measure their similarity. Grefenstette and Sadrzadeh have recently exploited this theory for standard natural language processing tasks, and their method outperforms all existing ones .
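The inner-product comparison is just cosine similarity on meaning vectors. A minimal sketch, with made-up three-dimensional vectors (the context dimensions and numbers are illustrative assumptions, not data from the paper):

```python
import numpy as np

def similarity(v, w):
    """Inner product of meaning vectors, normalized to unit length
    (cosine similarity)."""
    return float(np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w)))

# Hypothetical meaning vectors over context words (food, pet, engine)
cat = np.array([0.4, 0.9, 0.0])
dog = np.array([0.5, 0.8, 0.1])
car = np.array([0.0, 0.1, 0.9])

print(similarity(cat, dog) > similarity(cat, car))  # True: cat ~ dog
```

Because sentence meanings land in the same space, the identical function compares sentences as well as single words.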
Turning things upside-down, one can now ask the question: why are there algebraic gadgets that describe grammatical correctness, i.e. why do these even exist? Our theory of word meaning explains this: they witness the manner in which word meanings interact to form the meaning of a sentence. §6. The remaining challenge. In this paper we pushed forward the idea that the diagrammatic languages describing quantum phenomena as well as meaning-related linguistic phenomena may constitute some new kind of quantitative logic. The same logic also governs Bayesian inference, with Bayesian inversion boiling down to nothing but transposition for appropriately chosen cups and caps; in conventional probabilistic notation this is Bayes' rule, P(B) P(A|B) = P(B|A) P(A). This was established by Spekkens and the author in , to which we refer for details. So where does traditional logic fit into this picture? One perspective is to start with standard categorical logic [3, 9, 64]. The compact structure can then be seen as a resource-sensitive variant (as in Linear Logic [50, 74]) which is degenerate in the sense that conjunction and disjunction coincide [22, 44].
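The Bayesian-inversion claim can be checked numerically. A minimal sketch with assumed distributions (not from the paper): a prior P(B) and a channel P(A|B) determine the inverted channel P(B|A), and the two readings of the joint distribution agree, which is the "transposition" at work.

```python
import numpy as np

# Assumed prior on B and channel P(A|B); columns indexed by b, rows by a
p_b = np.array([0.3, 0.7])
p_a_given_b = np.array([[0.9, 0.2],
                        [0.1, 0.8]])

joint = p_a_given_b * p_b        # P(a, b) = P(a|b) P(b)
p_a = joint.sum(axis=1)          # marginal P(A)
p_b_given_a = joint.T / p_a      # Bayesian inversion: P(b|a)

# Bayes' rule: P(b|a) P(a) recovers the same joint, transposed
print(np.allclose((p_b_given_a * p_a).T, p_a_given_b * p_b))  # True
```

Numerically the inversion is just reading the joint matrix the other way around, which mirrors the diagrammatic statement that inversion is transposition with respect to suitable cups and caps.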
We do not subscribe (anymore) to conceiving the diagrammatic logic as a "degenerate hyper-deductive variant" of standard logic in categorical form, since this recognizes neither its quantitative nor its process content. Rather, we would like to conceive the quantitative diagrammatic logic as "the default thing", from which traditional qualitative logic arises via some kind of structural collapse. There are several results that could be taken as a starting point in this direction, for example, the generalization in  of Carboni and Walters' axiomatization of the category of relations . But since this still belongs to the world of speculation, we leave this to future writings.
Related: http://arxivindex.blogspot.com/2012/04/unsung-hero-of-gr.html
Mathematician behind GR also linguistic expert
Random matrices and prime numbers show up in NLP and physics.