Thursday, June 28, 2007

University of Maryland's PRAM Computer

In breathless language, the University of Maryland announced a "New Era of 'Desktop Supercomputing' Made Possible with Parallel Processing Power on a Single Chip". If you read the article, you will find no hint of substance whatsoever about why this is so revolutionary, and that's a shame, because they might actually have something.

The presentation they made at the Symposium on Parallelism in Algorithms and Architectures is similarly light on detail, except for one term I was unfamiliar with, so I googled it: a "PRAM machine" (Parallel Random Access Machine). It appears to be a simple (but ill-documented) concept:

(From Wellesley CS331 Notes)
A PRAM uses p identical processors ... and [is] able to perform the usual computation of [a typical processor] that is equipped with a finite amount of local memory. The processors communicate through some shared global memory to which all are connected. The shared memory contains a finite number of memory cells. There is a global clock that sets the pace of the machine execution. In one time-unit period each processor can perform, if [it] so wishes, any or all of the following three steps:
1. Read from a memory location, global or local;
2. Execute a single RAM operation, and
3. Write to a memory location, global or local.
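
To make that lockstep behavior concrete, here's a toy sequential simulation of one PRAM time unit in C. This is just my sketch of the model (pram_step, shared_mem, and the other names are made up), not anything from Maryland's hardware:

    #include <stdio.h>

    #define P 8            /* number of simulated PRAM processors (arbitrary) */

    int shared_mem[P];     /* the global memory every processor is connected to */

    /* One PRAM time unit, simulated sequentially. All reads happen before
       any writes, so no processor sees another's write from the same step,
       mirroring the lockstep global clock. */
    void pram_step(int (*ram_op)(int))
    {
        int reg[P];                                            /* local memory */
        for (int p = 0; p < P; p++) reg[p] = shared_mem[p];    /* 1. read      */
        for (int p = 0; p < P; p++) reg[p] = ram_op(reg[p]);   /* 2. RAM op    */
        for (int p = 0; p < P; p++) shared_mem[p] = reg[p];    /* 3. write     */
    }

    static int dbl(int x) { return 2 * x; }

    int main(void)
    {
        for (int p = 0; p < P; p++) shared_mem[p] = p;
        pram_step(dbl);                               /* one tick of the clock */
        for (int p = 0; p < P; p++)
            printf("%d ", shared_mem[p]);             /* prints: 0 2 4 ... 14 */
        return 0;
    }

A real PRAM lets each processor address any memory cell (which is where the exclusive-read versus concurrent-read variants of the model come from); this sketch pins processor p to cell p just to keep it short.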


So really, the only difference between it and a multicore Pentium is that there are probably more than 4 CPUs and that all of the CPUs share a global clock. Interesting, but I think the better question is: why did they build it?

It looks like there's a whole body of theory about how to extract parallelism from algorithms, and the PRAM execution model allows that task to be expressed simply.

For example, suppose you wanted to increment every element of an array by 1. This type of machine would simply have every processor load one element, increment it, and store it back. If you had as many processors as array elements, the whole operation would take exactly one time unit. Perfect parallelization. That particular operation is also found in "SIMD" machines. Again, the point is not that a PRAM can implement this operation; it's that languages have been developed that abstract all the business of scheduling instructions across processors away from the programmer.
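
LabVIEW diagrams don't paste into a blog post, so here's roughly what that looks like in C with OpenMP (my sketch, not the Maryland group's language):

    #include <stdio.h>
    #include <omp.h>    /* compile with: gcc -fopenmp increment.c */

    /* PRAM-style "one processor per element" increment. Each iteration is
       an independent read-increment-write; with as many processors as
       elements, the whole array updates in one conceptual time unit. */
    void increment_all(int *a, int n)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            a[i] = a[i] + 1;
    }

    int main(void)
    {
        int a[8] = {0, 1, 2, 3, 4, 5, 6, 7};
        increment_all(a, 8);
        for (int i = 0; i < 8; i++)
            printf("%d ", a[i]);    /* prints: 1 2 3 4 5 6 7 8 */
        return 0;
    }

The pragma is the whole point: the programmer just asserts that the iterations are independent, and the scheduling across processors is handled elsewhere, which is exactly the property those PRAM-style languages are after.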

Interestingly, it looks like we could use these same concepts to schedule LabVIEW code without having to change the diagram at all. Hmm.


Tuesday, June 19, 2007

Reasons for personal source code control

In Jim Kring's top 5 bad excuses for not using source code control post, someone commented:

Perhaps another top five is needed - “reasons a single LabVIEW developer should use scc software” - then we could decide for ourselves how our manual methods stack up.

So here's my top 5 (it turned out to be 6):

1. It forces you to document what you changed

Each time I check in/commit a set of VIs, the SCC program (plus LabVIEW, if I enable it) asks me to describe my changes. It's invaluable to have a running history of the changes you made to your code: both what you said you changed and which VIs actually differed. Here's an example from one of my personal projects:

  • #1 - "Initial version"
  • #2 - "Fixed link to Open URL in Browser. It isn't in vi.lib so the paths get all messed up as soon as you move this llb somewhere else"
  • #3 - "Changed name of LLB and help directory. Added quoting to the URL sent to OpenURL"
  • #4 - "Added flashing of dots on refresh. Enhanced the help slightly. Will add to the help in the next changelist"
  • #5 - "Added examples and 2 more pictures to the docs"
  • #6 - "Completed help and examples - pre final review"

That code was written in November 2001, and that brief history takes me right back through what I was doing.

2. It tracks code in multiple areas with ease

Code that spans multiple folder hierarchies on disk is a problem for the "make a copy as a backup" technique. A good source code management system can track code spread across your disk. Did you put code in user.lib and in your Documents and Settings area? No problem. As soon as you have code in more than one directory, I question how meticulous you will be about copying "everything". Instead, you'll start getting lazy, which takes us to #3.

3. You can always revert to a good (or at least known) state

If something goes horribly wrong, say I do a "save" rather than "save as..." and nuke a VI, I can always recover to something known. That takes 3 mouse clicks rather than a search around the hard drive. With SCM systems that are "changelist" based (such as Subversion and Perforce), you can pull from a particular known-good state: "give me what I submitted on 5/21/2007" and you know those VIs actually work together. That's why one of the good SCM practices is to put EVERYTHING that could change under SCM.

4. You can get everything back 6 months from now

This is another version of #2 and #3. Suppose your nifty user.lib extension changes 4 months from now. Can you rebuild the nice little utility that used an older version? Do you even have that older version of the extension any more? Keep everything that could change under SCM, including the OpenG tools you get from Jim, and you can always roll your environment back to the way it was 6 months ago for a clean rebuild. We put our C++ compiler under SCM here at NI, so if we need to rebuild LabVIEW 8.0.1 for some reason, we can be sure we're using the exact same build environment as when we first built it.

5. It lets you track your components

You got those nifty OpenG tools and you (gasp) modified a VI! How do you keep track of those changes? What happens when those components are upgraded? How do you make sure you re-apply your changes to the new source? Manual record-keeping is not your friend here.

6. It's more disk efficient

Maybe in this age of big hard drives this isn't an issue. But how many times are you going to duplicate a 30 MB folder hierarchy because you changed one VI? Do you stop and say "maybe not, I'll do it later"? That thought goes through my head.


Wednesday, June 13, 2007

Hear me on a TechOnline roundtable with respected experts

Embedded systems are getting more complicated and harder to program. What can be done about it? Come hear a panel of experts (plus me) talk about the challenges of embedded systems programming and the risk of a programmer shortage in the future.

The round table participants are all current or former editors of Embedded Systems Design magazine and folks I've read for a long time. I've been reading Jack Ganssle's column since 1994, when I was at Motorola Semiconductor, and Dan Saks's articles from when I was still programming in C++ on LabVIEW, so it's neat to share the virtual stage with them.

This webcast came about because the current editor-in-chief of Embedded Systems Design magazine, Rich Nash, wrote an article postulating an emerging coding crisis. It generated more letters to the editor than anything else he had done. You can see them in the online comments to the article, as well as in a follow-up reader-response article.

So, if you're not busy on June 19th, please sign up and listen in. There will be a Q&A session at the end if you want to ask follow-ups, or you can post questions to me here.

There are some trends I'm especially interested in discussing, such as the adoption of high-level programming tools, the difficulty of programming multicore processors, and how FPGAs might change things.

