with Parallel Processing Power on a Single Chip". If you read the article, you will find no hint of substance whatsoever about why this is so revolutionary, and that's a shame, because they might actually have something.
The presentation they made at the Symposium on Parallelism in Algorithms and Architectures is similarly light on detail, except for one term I was unfamiliar with, so I googled it: a "PRAM machine". This turns out to be a simple (but ill-documented) concept:
(From Wellesley CS331 Notes)
A PRAM uses p identical processors ... and [is] able to perform the usual computation of [a typical processor] that is equipped
with a finite amount of local memory. The processors communicate through some shared global memory to which all are connected. The shared memory contains a finite number of memory cells. There is a global clock that sets the pace of the machine execution. In one time-unit period each processor can perform, if it so wishes, any or all of the following three operations:
1. Read from a memory location, global or local;
2. Execute a single RAM operation, and
3. Write to a memory location, global or local.
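To make that lockstep cycle concrete, here is a toy sketch of one PRAM time unit in Python. This is my own illustration, not something from the notes; `pram_step` and its tuple-based "program" format are invented for the example.

```python
# Toy model (my own illustration) of one PRAM time unit: p processors,
# each doing read -> single RAM operation -> write in lockstep.

def pram_step(shared, programs):
    """Run one synchronous time unit.

    `programs` is one (read_addr, op, write_addr) tuple per processor.
    All reads complete before any write happens, which is what the
    global clock in the definition above guarantees.
    """
    # Phase 1: every processor reads from shared memory simultaneously.
    values = [shared[read_addr] for read_addr, _, _ in programs]
    # Phase 2: every processor performs a single RAM operation.
    results = [op(v) for v, (_, op, _) in zip(values, programs)]
    # Phase 3: every processor writes back simultaneously.
    for result, (_, _, write_addr) in zip(results, programs):
        shared[write_addr] = result
    return shared

shared = [10, 20, 30]
# Three processors, each incrementing its own cell, in one time unit.
pram_step(shared, [(i, lambda x: x + 1, i) for i in range(3)])
print(shared)  # [11, 21, 31]
```

Note that collecting all the reads before performing any write is the whole point: two processors can read each other's cells and write back in the same time unit without a race, because the model says the phases happen in lockstep.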
So really, the only difference between it and a multicore Pentium is that there are probably more than 4 CPUs and that all of the CPUs share a global clock. Interesting, but I think the better question is: why did they build it?
It looks like there's a whole set of theory that goes into how to extract parallelism out of algorithms, and the PRAM execution model allows that task to be expressed simply.
For example, suppose you wanted to increment the contents of every element of an array by 1. This type of machine would simply have every processor load one element, increment it, and store it back. If you had the same number of processors as array elements, that operation would take exactly one time unit. Perfect parallelization. That particular operation is also found in "SIMD" machines. Again, the importance is not that a PRAM can implement this operation; it's that languages have been developed that abstract all of this business of scheduling instructions across processors away from the programmer.
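The increment example is embarrassingly parallel, so here's a sketch of what the theory buys you beyond SIMD: a classic PRAM-style reduction that sums n elements in O(log n) time units by having processors pair up at doubling strides. This is my own Python illustration of the textbook technique, not code from the presentation.

```python
# Illustrative sketch (not from the article): summing n values in
# O(log n) synchronous PRAM steps.  In each step, "processor" i adds
# the cell `stride` positions away; after ceil(log2(n)) steps, cell 0
# holds the total.

def pram_parallel_sum(shared):
    n = len(shared)
    stride = 1
    while stride < n:
        # All processors read simultaneously, then write simultaneously,
        # so we collect the reads before mutating shared memory.
        updates = [(i, shared[i] + shared[i + stride])
                   for i in range(0, n - stride, 2 * stride)]
        for i, total in updates:
            shared[i] = total
        stride *= 2
    return shared[0]

print(pram_parallel_sum([1, 2, 3, 4, 5, 6, 7, 8]))  # 36
```

With 8 elements this finishes in 3 time units instead of 7 sequential additions; the scheduling of which processor adds which pair falls straight out of the loop structure, which is exactly the kind of thing a PRAM-oriented language can hide from the programmer.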
Interestingly, it looks like we could use these same concepts to schedule LabVIEW code without having to change the diagram at all. Hmm.