Wednesday, May 25, 2005

Weak Linked Languages

As I watch LabVIEW build... and build... and build... I'm reminded of why C++ is the wrong direction for computer languages. I stopped getting C/C++ Users Journal a few years ago because each issue was more depressing than the last. The trend in that magazine was toward ever heavier use of templates. I don't object to generic programming; I object to the way C++ does it. C++ aims for maximum generated-code efficiency by trying to include and inline everything: the more the compiler knows during a pass, the better code it can generate. This causes two problems.

First, C/C++ compilers work on one object file at a time. The only way to give the compiler more information to work with is to put more and more information into each object file (via the #include statement). More information for each object file means a slower compile for that object file, and in a source base the size of LabVIEW, that adds up to a ton of time.

Second, it couples all of your source files together. If you make a change to the implementation of your "list" class, that implementation is inlined into every single source file that uses it, necessitating a rebuild of every single one of them. The build/fix/debug cycle becomes 30 minutes for every line of code you change. That's a productivity killer.

There is a whole class of languages that doesn't work that way. I call them "weak linked" languages. These languages try to have minimal dependencies between compiled objects. They give up a little run-time efficiency because they have to discover at run time some things that C++ programs fix at compile time, but the programmer-productivity payoff is huge. Languages like C#, Java, and LabVIEW only make you "compile" the file that you just touched, not the other 5,000 files that may come in contact with the file you touched. Build/fix/debug in LabVIEW is 5 seconds for just about any change.
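
To make "discover at run time" concrete, here's a minimal Java sketch (the Greeter class is made up; assume it's on the classpath with a no-argument constructor and a greet() method). The dependency is just a string that gets resolved when the program runs, so nothing that uses Greeter has to be recompiled when Greeter's implementation changes:

    // Late binding: "Greeter" stays a string until run time, so this
    // caller never compiled against its implementation and never needs
    // a rebuild when Greeter changes.
    public class LateBinding {
        public static void main(String[] args) throws Exception {
            Class<?> c = Class.forName("Greeter");             // resolved at run time
            Object greeter = c.getDeclaredConstructor().newInstance();
            c.getMethod("greet").invoke(greeter);              // looked up at run time
        }
    }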

Beyond compile time, weak linked languages give you some other cool benefits. Notice how in LabVIEW, you can open a hierarchy of 200 VIs, go to any one of them, hit the Run button, and run the thing? A weakly linked language gives you that flexibility. Java has something a bit like it, though it's IMHO inferior :-) Any class in Java can have a "main()" function. From the command line, you can start the program by calling any class's main(), not just the one big entry point that is supposed to start the program. The JUnit framework takes advantage of that to make a nice unit test system.
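
In code, that looks like this (ListUtils is a made-up example class): drop a main() into any class, and you can run just that class with "java ListUtils", no big entry point required:

    // Any class can carry its own throwaway entry point for ad-hoc testing.
    public class ListUtils {
        static int sum(int[] xs) {
            int total = 0;
            for (int x : xs) total += x;
            return total;
        }

        // Run this one class directly: java ListUtils
        public static void main(String[] args) {
            System.out.println(sum(new int[] {1, 2, 3}));  // prints 6
        }
    }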

Just as programmers gradually gave up assembly language to get more productivity (and more maintainable code) in spite of a slight loss in run time efficiency, languages like LabVIEW are the way programming will be done in the future. The folks around me who program in G already know that.

Thursday, May 19, 2005

Benchmarking a LabVIEW Operation

A few years ago, I added an aside to an NI Week presentation about how to benchmark an operation in LabVIEW to see how long it would take. I wish I had written that information down because it's rather nuanced and you can easily shoot yourself in the foot if you don't have a checklist. So, for the benefit of my own memory and hopefully something for you, here's how to do a benchmark in LabVIEW.

1) LabVIEW is not a sequential programming language. I repeat, LabVIEW is not a sequential programming language. It does not execute left to right. It does not execute top to bottom. Forget this at your peril.

2) Therefore, always use a sequence structure. The recently added flat sequence structure works really well for this. Do not allow any code to stray beyond the border of the sequence structure because it could easily run in parallel with the thing you are trying to measure.

3) I recommend the structure shown below. It has 5 phases: initialization, recording the initial timestamp, doing the stuff you want to measure, recording the ending timestamp, and cleaning up. The structure of the 2nd and 4th frames shown is very important. If you put anything else in them, you have no way to know whether it ran before or after the timestamping code (re-read rule 1 three more times for good measure). If you think in text languages, there's a rough sketch of the same five-phase pattern at the end of this list.


[Image: Proper Benchmark Template 1]


4) Always Save All before running the benchmark. This is mandatory. Why? Because any VI in memory that has unsaved modifications will have some extra bits loaded that affect performance. For example, a VI that needs to be saved (maybe you changed a connector pane on a different VI and that caused a ripple) will cause LabVIEW to load the front panel of the "dirty" VI into memory, even if it's not visible. If a front panel is in memory, LabVIEW sees "hey, front panel controls are here, I'd better make sure they stay current," and so when the VI runs, it forces updates to all of the front panel elements, even though the panel isn't actually visible. This is the number one reason people used to tell me that the newest version of LabVIEW was slower: they loaded their old VIs into a new version, hit Run, and it ran slower. They hadn't done a Save All first, so every VI in their hundred-VI hierarchy had its front panel in memory and ran slowly.

5) Close all front panels. This bit Steve Rogers and me when we ran a benchmark a month ago. We couldn't figure out why the new thingy he had added didn't benchmark any faster. We were dumbfounded for 15 minutes until we realized, "doh!", the subVI's front panel was open, so it was redrawing a few thousand times during the test.

6) Don't put any controls or indicators in frame 3 of the benchmark template above unless you are trying to benchmark the drawing time of an indicator. Indicators update asynchronously and will have a random effect on your test.

7) If you are benchmarking the drawing time of an indicator, you will have a trickier time. We do a lot of work to try to keep you from shooting yourself in the foot with drawing. Sure, you wired that output to the indicator, but did you really mean to redraw the indicator 50,000 times? Thus, LabVIEW will not stall the diagram waiting for the indicator value to update: it only transfers the data from the block diagram to the indicator periodically, it only redraws the actual indicator every so often (your eye can't see an update rate faster than 60 or so Hertz anyway), etc. So, read up on "Defer Panel Updates" in the LabVIEW help, as well as the "Advanced->Synchronous Display" option in the context menu for the control. I will tell you that "Synchronous Display" does not (in spite of the tantalizing name) truly lock the diagram to the control. It just means that if you try to write to the indicator twice, the diagram will stall the second time around until the indicator has finished updating from the first write; the diagram then continues before the second drawing completes. If you use the "Value" property for the indicator, the data is sent to the indicator immediately, but I'm not sure whether it waits until the drawing is done (it very well may, I just can't remember). As with all things, we are always trying to improve LabVIEW performance in lots of different use cases without messing up existing code, so it's a very tricky balance.
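
And here is the sketch promised in rule 3: the five-phase template rendered in Java (the workload is made up, and Java's sequential statement order plays the role that the flat sequence structure plays in LabVIEW):

    public class Benchmark {
        public static void main(String[] args) {
            long[] data = new long[1000000];          // frame 1: initialization
            long start = System.nanoTime();           // frame 2: starting timestamp, alone
            for (int i = 0; i < data.length; i++) {   // frame 3: the code under test
                data[i] = (long) i * 2;
            }
            long stop = System.nanoTime();            // frame 4: ending timestamp, alone
            System.out.printf("%.3f ms%n",            // frame 5: clean up and report
                    (stop - start) / 1e6);
        }
    }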

As always, I hope this is helpful. I happened to be working on this type of thing today.

Tuesday, May 17, 2005

Intro

Just a little bit of introduction. I thought I'd do this blog to give folks some insight into LabVIEW, how to get the most out of it, and interesting things that are going on in my area.

I graduated from school 12 years ago and spent my first 3 years as an application engineer at Motorola Semiconductor (now known as Freescale), writing networking microcode, doing customer support, and writing appnotes and documentation. From Motorola, I moved to Metrowerks to work on computer game development tools, eventually managing all of the game tools development there.

I started at NI about 6 years ago, initially working on improving LabVIEW performance. After managing the Performance team for a while, I helped out the NI-DAQ 7.0 effort for a few months and then managed the LabVIEW Real-Time and Embedded teams (PDA, FPGA, RT). Now, I focus exclusively on getting LabVIEW into places it hasn't been before. PDA is still part of the mix, as well as the LabVIEW DSP Module that was just released today.

I hope some of this information is of use. Feel free to comment on anything.

Monday, May 16, 2005

55ms

Each node on a LabVIEW data flow diagram will execute at some point after its inputs are valid. That simple specification is sufficient to guarantee that the computation for each node is correct. A program is a collection of nodes. If all of the computations for all of the nodes in your program are correct, can your program be incorrect? Yes. When your program is interacting with the real world, it starts dealing with another aspect of programming: time. Until LabVIEW 7.1, time was not mentioned in the LabVIEW language. Libraries might have mentioned it, but the language did not.

The way a diagram executes in time has always been a side effect of the way LabVIEW happens to work. LabVIEW uses data flow rules to schedule parallel nodes to run in parallel. You notice this most when two traditional "for" or "while" loops run in parallel. There is no specification on how these loops will run with respect to each other. While there is no spec, you probably should know how it actually behaves: on the desktop, LabVIEW will ping-pong between loops every 55 ms unless the loops have explicit wait primitives in them. If you are writing a program that is communicating over the serial port and has to respond to an incoming character in 30 ms, you need to write your program in a special way to make sure things happen the way you want them to. Here are a few rules to live by:

1) Desktop operating systems tend to "hiccup" at undesirable times. Windows will occasionally steal control from applications for 100 ms. While drivers like NI-DAQ can take steps against this, applications like LabVIEW can only avoid that problem by running on a deterministic OS such as the one used in LabVIEW Real-Time.

2) In LabVIEW 7.0 and before, run your time-critical code in a separate VI and set the execution system to a high priority (but not subroutine). VIs in higher priority execution systems do not ping-pong with VIs in lower priority execution systems. The higher priority VI can take all of the time it wants.

3) In LabVIEW 7.1, take advantage of the timed loop. It allows just that part of a diagram to run at high priority.

4) If you can't use high priority VIs or timed loops, then you need to do it the hardest way. You need to make sure that no element of your code can take more time than your minimum response time (30 ms in this case). All library calls, all loops, etc. must either complete in less than 30 ms or must yield time by doing a "wait", as sketched below.
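
Here's a loose Java rendering of that discipline (the chunked workload is hypothetical; in LabVIEW, the yield would be a Wait primitive inside the loop):

    // Chop long-running work into chunks that each finish well under the
    // 30 ms budget, and yield between chunks so time-critical code can run.
    public class CooperativeWorker implements Runnable {
        public void run() {
            for (int chunk = 0; chunk < 1000; chunk++) {
                processChunk(chunk);  // each chunk must finish well under 30 ms
                Thread.yield();       // hand the CPU back, like a Wait (0 ms) node
            }
        }

        // Placeholder for one small slice of the real work.
        private void processChunk(int chunk) {
        }
    }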
