Friday, June 10, 2016

DQC Continued

These guys have put out a ton more information on DQC than the other research groups:

http://horn.tau.ac.il/research.html

Software package for QC:

http://horn.tau.ac.il/compact.html

See qc.m and graddesc.m for the core operations.

Descent:

D = xyData;
[V,P,E,dV] = qc(xyData,q,D);
for j = 1:4
   for i = 1:(steps/4)
      dV = normc(dV')';              % normalize each point's gradient to unit length
      D = D - eta*dV;                % move the replica points downhill
      if (rescale)
         D = normr(D);               % optionally re-project points onto the unit sphere
      end
      [V,P,E,dV] = qc(xyData,q,D);   % recompute potential and gradient
   end
end
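The same descent schedule can be sketched in NumPy. The quadratic-bowl gradient below is a stand-in I've substituted for qc's dV, just to show the normalized-step pattern:

```python
import numpy as np

def grad_V(D):
    # stand-in gradient: a quadratic bowl centered at the origin
    # (in DQC this would be the dV returned by qc)
    return 2.0 * D

D = np.array([[1.0, 2.0], [-3.0, 0.5]])                  # replica points
eta, steps = 0.05, 200
for _ in range(steps):
    dV = grad_V(D)
    dV = dV / np.linalg.norm(dV, axis=1, keepdims=True)  # unit-length steps, like normc
    D = D - eta * dV                                     # move each point downhill
```

With a fixed step size eta, each point walks to within roughly eta of its minimum and then oscillates there, which is why qc.m's callers typically anneal or stop after a fixed number of steps.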

Lagrangian integration:

function [V,P,E,dV] = qc (ri,q,r)
% function qc
% purpose: performing quantum clustering in n dimensions
% input:
%       ri - a vector of points in n dimensions
%       q - the factor q which determines the clustering width
%       r - the vector of points to calculate the potential for. equals ri if not specified
% output:
%       V - the potential
%       P - the wave function
%       E - the energy
%       dV - the gradient of V
% example: [V,P,E,dV] = qc ([1,1;1,3;3,3],5,[0.5,1,1.5]);
% see also: qc2d
if nargin<3
   r=ri;
end;
%default q
if nargin<2
   q=0.5;
end;
[pointsNum,dims] = size(ri);
calculatedNum=size(r,1);
% prepare the potential
V=zeros(calculatedNum,1);
dP2=zeros(calculatedNum,1);
% prepare P
P=zeros(calculatedNum,1);
singlePoint = ones(pointsNum,1);
singleLaplace = zeros(pointsNum,1);
singledV1=zeros(pointsNum,dims);
singledV2=zeros(pointsNum,dims);
% prevent division by zero
% calculate V
%run over all the points and calculate for each the P and dP2
for point = 1:calculatedNum
   % squared distance from r(point,:) to every data point in ri
   D2 = sum(((repmat(r(point,:),pointsNum,1)-ri).^2)');
   singlePoint = exp(-q*D2)';
   singleLaplace = singleLaplace.*0;
   for dim=1:dims
      singleLaplace = singleLaplace + (r(point,dim)-ri(:,dim)).^2.*singlePoint;
   end;
   for dim=1:dims
      singledV1(:,dim) = (r(point,dim)-ri(:,dim)).*singleLaplace;
      singledV2(:,dim) = (r(point,dim)-ri(:,dim)).*singlePoint;
   end;
   P(point) = sum(singlePoint);
   dP2(point) = sum(singleLaplace);
   dV1(point,:) = sum(singledV1,1);
   dV2(point,:) = sum(singledV2,1);
end;
% deal with zero wave-function values (avoid division by zero)
zeroP = find(P==0);
P(zeroP) = min(P(find(P)));
V = -dims/2 + q*dP2./P;
E = -min(V);
V = V + E;
for dim=1:dims
   dV(:,dim) = -q*dV1(:,dim) + (V-E+(dims+2)/2).*dV2(:,dim);
end;
% zero the gradient where the wave function vanished
dV(zeroP,:) = 0;
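For readers without MATLAB, here is a NumPy translation of the core computation above (my own sketch, not the authors' code): the wave function P is a sum of Gaussians centered on the data points, and V is the potential recovered from the Schrödinger equation, shifted so its minimum is zero.

```python
import numpy as np

def qc_potential(ri, q=0.5, r=None):
    """Quantum-clustering potential, a sketch of qc.m's V, P and E outputs.

    ri: (n, d) data points; q: clustering width factor;
    r: (m, d) evaluation points, defaults to ri (as in qc.m)."""
    if r is None:
        r = ri
    diff = r[:, None, :] - ri[None, :, :]      # (m, n, d) pairwise offsets
    d2 = (diff ** 2).sum(-1)                   # squared distances, (m, n)
    g = np.exp(-q * d2)                        # Gaussian contribution of each point
    P = g.sum(axis=1)                          # Parzen-window wave function
    dims = ri.shape[1]
    V = -dims / 2 + q * (d2 * g).sum(axis=1) / P   # potential, up to a constant
    E = -V.min()                               # shift so min(V) = 0
    return V + E, P, E
```

The gradient dV (omitted here for brevity) follows the same pattern with one extra distance-weighted sum per dimension, as in the MATLAB loop above.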



Not the greatest / optimal solution on GPU but this exists:

https://github.com/peterwittek/dqc-gpu


Selected thesis

http://horn.tau.ac.il/publications/Thesis_GS.pdf

Non-recursive merge sort.

For those of you who don't like recursion:

http://www.algorithmist.com/index.php/Merge_sort.c

See also:

http://stackoverflow.com/questions/159590/way-to-go-from-recursion-to-iteration

http://stackoverflow.com/questions/1557894/non-recursive-merge-sort
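The idea in the linked C version is a bottom-up pass structure: merge runs of width 1, then 2, then 4, and so on, with no call stack. A minimal Python sketch of the same approach:

```python
# Bottom-up (non-recursive) merge sort: repeatedly merge adjacent runs of
# doubling width, ping-ponging between two buffers instead of recursing.
def merge_sort_iterative(a):
    n = len(a)
    src, dst = list(a), [None] * n
    width = 1
    while width < n:
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)        # end of the left run
            hi = min(lo + 2 * width, n)     # end of the right run
            i, j = lo, mid
            for k in range(lo, hi):         # standard two-way merge
                if i < mid and (j >= hi or src[i] <= src[j]):
                    dst[k] = src[i]; i += 1
                else:
                    dst[k] = src[j]; j += 1
        src, dst = dst, src                 # swap buffers between passes
        width *= 2
    return src
```

Like the recursive version it is O(n log n) and stable (the `<=` keeps equal elements in order); the only thing recursion was buying is the run boundaries, which the width loop reconstructs directly.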

Info about, and improvements to, the actual sort used in Java's collections (TimSort):

https://arxiv.org/pdf/1412.0193.pdf

Overview of reservoir computing

One of the best PhD theses I've ever read: a general introduction to the field of reservoir computing, which grew out of a series of optimizations to the gradient-descent step of recurrent neural network training.

http://organic.elis.ugent.be/sites/organic.elis.ugent.be/files/Mantas_Lukosevicius_PhD_thesis.pdf
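The core trick the thesis surveys is that the recurrent weights stay fixed and random, and only a linear readout is trained, so backpropagation through time collapses to one least-squares solve. A minimal echo state network sketch (all numbers here -- reservoir size, spectral radius, ridge constant, toy task -- are my illustrative choices, not from the thesis):

```python
import numpy as np

# Minimal echo state network: a fixed random reservoir driven by the input;
# only W_out is trained, via ridge regression on the collected states.
rng = np.random.default_rng(0)
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

u = np.sin(np.arange(500) * 0.2)[:, None]         # toy input: predict next sample
x = np.zeros(n_res)
states = []
for t in range(len(u)):
    x = np.tanh(W @ x + W_in @ u[t])              # reservoir update
    states.append(x.copy())
X = np.array(states[100:-1])                      # drop washout, align with targets
y = u[101:, 0]                                    # one-step-ahead targets
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

Everything expensive about training a recurrent network has been replaced by the single `np.linalg.solve` call, which is exactly the optimization lineage the thesis traces.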


Spark Shuffle Optimizations

This is a little out of date but it's still important / interesting / great background.

http://people.eecs.berkeley.edu/~kubitron/courses/cs262a-F13/projects/reports/project16_report.pdf

DQC (Dynamic Quantum Clustering) updates

First post in a while. Trying to play some catch up on articles here:

To start with, check out this paper on analyzing big data with DQC -- some pretty dramatic visuals:

https://arxiv.org/ftp/arxiv/papers/1310/1310.2700.pdf

Friday, September 21, 2012

SSRP high-bandwidth cheap OS radio driver development (mostly) finished!

A >Mbps software-defined radio package with data-transmission drivers has finally been written and implemented on OS hardware at least an order of magnitude cheaper than competitor Ettus Research's. Some finishing touches still need to be added, but it is working in tests.

GNURadio and the hardware available for it have so far mainly targeted expensive scientific-research markets. Until recently, the lowest-entry machines were roughly $700 for high-bandwidth, tunable radio links over MHz of spectrum. This device, in conjunction with some tuners and amplifiers, would make a fantastic longer-range alternative to Wi-Fi for hard-to-create network links. Devices like this are already available, but they tend to cost more than they are worth. This is a surprisingly steep drop in price.

This hardware is all open source, so anyone should be able to put it together with parts from any vendor, not just the writer of these drivers.

As a side note, the same OS ADC (analogue-to-digital converter) chips are also going to impact the EEG market. Most companies ridiculously overcharge for a tiny number of probes. The ADCs used here are very similar to an EEG setup, although not identical, so the development of these cheap OS ADCs should impact open-source EEG hardware projects, if it hasn't already.

Brief Status
One assembled LTC1746 ADC board available!
As of 10/14/2007 I have one assembled LTC1746 board available. I also have a few blank PCBs that I will sell separately or with the three ICs pre-installed. The assembled board is $120. Bare PCBs with documentation are $15. PCBs with the LTC1746, LP2989 and THS4501 installed are $60.

Phase two works! A new board featuring the MAX5190 (8-bit, 40 MHz) is now under development. This new board will bring the capability to synthesize arbitrary waveforms of up to 20 MHz bandwidth. The board has been assembled and successfully tested. See the MAX5190 board section.

The LTC1746 board works! The board is now in full production. Acquisition rates up to 15 Msps (30 MB/sec) have been tested, and observed SNR exceeds 75 dB. More details (and pictures!) on this board are available in the LTC1746 board section. On the software side of things, simple asynchronous and synchronous data-transfer firmware has been developed. Corresponding host utilities, an SSRP library and an initial GNURadio module have been completed as well. All code is available for download in the SSRP and gr-ssrp tarballs. Performance tests have reached 40.8 MB/s average transfer rates.

http://oscar.dcarr.org/ssrp/

New form of universal computation present in hundreds of systems

An incredibly simple optical-electronic setup outperforms digital silicon on cost for neural-network-like calculations.
http://arxiv.org/abs/1209.3129
To our knowledge, the system presented here is the first analog readout for an experimental reservoir computer. While the results presented here are preliminary, and there is much optimization of experimental parameters to be done, the system already outperforms non-reservoir methods. We expect to extend this approach easily to different tasks, already studied in [9, 10], including a spoken digit recognition task on a standard dataset.

Further performance improvements can reasonably be expected from fine-tuning of the training parameters: for instance the amount of regularization in the ridge regression procedure, which here is left constant at 1×10⁻⁴, should be tuned for best performance. Adaptive training algorithms, such as the ones mentioned in [21], could also take into account nonidealities in the readout components. Moreover, the choice of τ, as Figure 3 shows, is not obvious, and a more extensive investigation could lead to better performance.

The architecture proposed here is simple and quite straightforward to realize. It is very modular, meaning that it can be added at the output of any preexisting time-multiplexing reservoir with minimal effort, whether it is based on optics or electronics. The capacitor at the end of the circuit could probably be substituted with a more complicated, active electronic circuit performing the summation of the incoming signal before resetting itself. This would eliminate the problem of residual voltages and allow better performance at the cost of increased complexity of the readout.

The main interest of the analog readout is that it allows optoelectronic reservoir computers to fully leverage their main characteristic, which is the speed of operation. Indeed, removing the need for slow, offline postprocessing is indicated in [13] as one of the major challenges in the field. Once the training is finished, optoelectronic reservoirs can process millions of nonlinear nodes per second [10]; however, in the case of a digital readout, the node states must be recovered and postprocessed to obtain the reservoir outputs. It takes around 1.6 seconds for the digital readout in our setup to retrieve and digitize the states generated by a 9000 symbol input sequence. The analog readout removes the need for postprocessing, and can work at a rate of about 8.5 μs per input symbol, five orders of magnitude faster than the electronic reservoir reported in [8].

Finally, having an analog readout opens the possibility of feedback: using the output of the reservoir as input, or part of an input, for the successive time steps. This opens the way for different tasks to be performed [15] or different training techniques to be employed [14].

The supplementary-materials PDF from the Nature link http://www.nature.com/srep/2012/120227/srep00287/full/srep00287.html has some more interesting material on how it's not just optoelectronic media that show this effect: it's a new, universal form of computation that is apparently possible over many different systems. They are still investigating new ways to do it, but there seem to be hundreds. Important characteristics are symmetry breaking and other strange phenomena.

http://en.wikipedia.org/wiki/Electro-optic_modulator

One last interesting tidbit: http://en.wikipedia.org/wiki/Acousto-optic_modulator Quartz 27 MHz intensity modulators don't compare to lithium niobate's ~10 GHz, but they are vastly cheaper, and since this computation-'reservoir' effect is apparently present even in a bucket of water (ridiculous; it used the waves or something to perform nonlinear computation), there are probably cheap, cheap ways of implementing this, like quartz or some organic polymer.

Compared to the millions of steps in semiconductor fabrication, building one of these is positively easy. This paper only came out last week, so obviously nobody is gonna manufacture them yet, but if it beats silicon on cost over time it could mean incredibly economical supercomputer production. These things are so simple you could 3D print them. (Fiber, lasers, lenses, etc. have already been 3D printed fairly easily; Fab Lab and other semiconductor projects are stuck in the mud in comparison to the ease of this process.)

Friday, May 11, 2012

The non-algorithmic side of the mind


The existence of a non-algorithmic side of the mind, conjectured by Penrose on the basis of Gödel's first incompleteness theorem, is investigated here in terms of a quantum metalanguage. We suggest that, besides human ordinary thought, which can be formalized in a computable, logical language, there is another important kind of human thought, which is Turing-non-computable. This is metathought, the process of thinking about ordinary thought. Metathought can be formalized as a metalanguage, which speaks about and controls the logical language of ordinary thought. Ordinary thought has two computational modes, the quantum mode and the classical mode, the latter deriving from decoherence of the former. In order to control the logical language of the quantum mode, one needs to introduce a quantum metalanguage, which in turn requires a quantum version of Tarski Convention T.

http://arxiv.org/abs/1205.1820

Wednesday, May 2, 2012

Order of magnitude advance in QC: "The ion-crystal used is poised to create one of the most powerful computers ever developed,"

Previous efforts at realizing a quantum simulator have had problems with decoherence and other systematic errors past ~10 qubits. This advance is roughly ten times the previous record. Quantum simulators are used to model systems so that parameters that could not be physically varied in the original system can be varied in the modeled analogue. That is the computation they are referring to when they say a computer the size of the universe would be needed to perform the calculations.

For instance, the quantum behaviour of these hundred or so spin qubits mimics the behaviour found in numerous mesoscale systems, particularly lattices. In those systems, varying the lattice length and/or other parameters describing the system is often physically impossible or otherwise restrictively difficult, and infeasible to simulate with a classical computer. By creating an analogue, pseudo-variations can be performed that give insight into the underlying structure and lead to a better understanding of the original lattice.

From the arXiv paper below the news story: a tunable parameter that mimics various physical couplings.
That is, by adjusting the single experimental parameter μR we can mimic a continuum of physical couplings, including important special cases: a = 0 is infinite range, a = 1 is monopole-monopole (Coulomb-like), a = 2 is monopole-dipole and a = 3 is dipole-dipole. Note that a = 0 results in the so-called Ĵz interaction that gives rise to spin-squeezing and is used in quantum logic gates (see Supplementary Information) [27]. In addition, tuning μR also permits access to both antiferromagnetic (AFM, μR > ω1) and ferromagnetic (FM, ω2 < μR < ω1) couplings [13].
http://sydney.edu.au/news/84.html?newsstoryid=9081
"The system we have developed has the potential to perform calculations that would require a supercomputer larger than the size of the known universe - and it does it all in a diameter of less than a millimetre," said Dr Biercuk.
"The projected performance of this new experimental quantum simulator eclipses the current maximum capacity of any known computer by an astonishing 10 to the power of 80. That is 1 followed by 80 zeros, in other words 80 orders of magnitude, a truly mind-boggling scale."
The work smashes previous records in terms of the number of elements working together in a quantum simulator, and therefore the complexity of the problems that can be addressed.
Most recent paper from author:
http://arxiv.org/abs/1204.5789

Engineered 2D Ising interactions on a trapped-ion quantum simulator with hundreds of spins


Related:
http://arxiv.org/abs/1204.5917

Prospects for Spin-Based Quantum Computing 

Experimental and theoretical progress toward quantum computation with spins in quantum dots (QDs) is reviewed, with particular focus on QDs formed in GaAs heterostructures, on nanowire-based QDs, and on self-assembled QDs. We report on a remarkable evolution of the field where decoherence, one of the main challenges for realizing quantum computers, no longer seems to be the stumbling block it had originally been considered. General concepts, relevant quantities, and basic requirements for spin-based quantum computing are explained; opportunities and challenges of spin-orbit interaction and nuclear spins are reviewed. We discuss recent achievements, present current theoretical proposals, and make several suggestions for further experiments.