If you've ever wanted to know how a military radar transmission works, there's a pretty good overview in this Tektronix Application Note on pages 2-6. It talks about what a radar pulse looks like and why you might modulate the pulse to get better performance.
They skipped the most basic type of radar system, but Mattel has used it to make a really neat, cheap radar gun.
You can use RADAR (technically it's an acronym, so it should be capitalized) to tell a number of things about the object it's aimed at. The Mattel radar gun gives you the most basic information: speed. It transmits a microwave signal and observes the reflections (if any) that come back from the object. If the object is moving toward or away from the gun, the reflected signal has a Doppler shift, and the amount of frequency shift is proportional to the speed.
Now if you remember your music theory, when you mix two waves of slightly different frequencies together, you get a signal with both frequencies as well as a "beat". The frequency of the beat is the frequency difference between the signals.
Taking advantage of this, the Mattel gun mixes the signal it sends with the signal it receives, filters out the microwave components, and is left with the beat signal. Measure the frequency of the beat and some simple math gives the velocity of the object.
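To make the "simple math" concrete, here's a sketch in Python. The 10.525 GHz X-band transmit frequency is an assumption for illustration; I don't know what the Mattel gun actually transmits at.

```python
def doppler_velocity(beat_hz, tx_hz=10.525e9, c=2.998e8):
    """Speed of a target from the measured beat frequency.

    For a continuous-wave radar, the echo off a target moving at
    speed v is shifted by f_beat = 2 * v * tx_hz / c, so invert
    that for v. tx_hz of 10.525 GHz (X band) is an assumed value.
    """
    return beat_hz * c / (2.0 * tx_hz)

# A ~702 Hz beat at 10.525 GHz works out to roughly 10 m/s (~22 mph)
speed = doppler_velocity(702.0)
```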
All this for $30 retail.
Now, notice that the gun doesn't give you position. Radar systems figure out how far away the target is by measuring how long it takes the signal to reach the target and come back. To do this, they send a pulse and time how long the return takes. The Mattel gun sends a continuous signal rather than a pulse, so it has no way of measuring time of flight. That processing would probably also cost a bit more to implement.
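For comparison, the time-of-flight math a pulsed radar performs is just as simple; it's the continuous high-speed pulse-and-timing hardware that costs money. A sketch:

```python
C = 2.998e8  # speed of light, m/s

def range_from_echo(delay_s):
    """Distance to a target from the round-trip echo delay of a pulse.

    The pulse travels out and back, so the one-way range is half
    the total distance the signal covered.
    """
    return C * delay_s / 2.0

# A 6.67 microsecond echo delay puts the target about 1 km out
distance = range_from_echo(6.67e-6)
```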
Why did I even bother to read this article? Well, I'm seeing some of our customers test radar systems by simulating return pulses with hardware such as the R-series plug-in boards or our Vector Signal Generators, and I wanted to know a little more about what they were really doing.
Labels: SoftwareDefinedRadio
A device such as a digital oscilloscope uses a high-speed A/D converter to acquire the desired signal for a very short duration at a very high sampling rate. The acquired data is then passed through a signal processing engine that may perform some analysis on it, such as an RMS calculation, and then on to the main display engine for more math (such as calculating a cursor value) before finally being drawn on the screen. So the flow is simple: acquire the data, analyze it to make the measurement, and present the result to the user. For 20 years, NI has used the phrase "acquire, analyze, present" to describe virtual instrumentation. A virtual instrumentation system lets you create that oscilloscope yourself out of a PC, a digitizer, and LabVIEW, and lets you write your own measurement.
However, the phrase "acquire, analyze, present" may misleadingly imply that the phases are discrete when in fact they can be continuous. Why might this be continuous? One example is the processing to perform a trigger. If a scope is going to trigger when the signal exceeds a voltage level, acquisition and analysis must run continuously to digitize the signal and perform some very simple math (comparison) to look for the threshold to be met.
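In software, that continuous trigger comparison is nothing more than a per-sample check. A minimal sketch of the idea (a real scope runs this in hardware at the full sample rate):

```python
def find_rising_trigger(samples, level):
    """Index of the first sample where the signal crosses `level`
    on a rising edge, or None if it never does.

    This comparison has to run on every sample, which is why scopes
    implement it in hardware close to the A/D converter.
    """
    for i in range(1, len(samples)):
        if samples[i - 1] < level <= samples[i]:
            return i
    return None
```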
As with all engineering problems, the devil is in taking the simple concept and applying it to a need our customers have. Creating that trigger is subject to two fundamental limitations: the speed at which the trigger criteria can be evaluated and the rate at which data can be sent from the A/D converter to the trigger computation circuitry. Triggering is typically a hardware operation because it needs to run at megasample or gigasample rates, and it is performed "close" to the A/D converter to handle the hundreds of megabytes or multiple gigabytes per second being generated.
The march of technology provides some interesting opportunities on the horizon that have been alluded to in past posts. The architectures of a software defined radio (SDR) and an oscilloscope are not that different, and the FPGA technology and wideband A/D technology driving SDR are beneficial in T&M applications. The biggest difference is the audience. Whereas the SDR design team consists of C and VHDL programmers working for 12 months on a highly specified design in a high-process development environment, the T&M system design team is a few people who know LabVIEW, trying to keep up with changing specs thrown over the wall and trying to keep the end product from being late even when the design itself is late.
The next generation of test equipment will combine the technological power available in the SDR architecture with the ease of use you expect from a T&M instrument, to get those difficult measurements made. Imagine the possibilities:
- Write your own math computation for that trigger, compile it, and have it run at the full rate of the hardware.
- Perform calculations on the data at full speed. Need a hundred-kilohertz RMS calculation to decode an LVDT? You've got it. Filter or demodulate a 20 MHz communication signal? No problem.
- Take that test that you wrote but have part of it run on an FPGA to speed it up. When that's not good enough, take the whole thing and move it down to the FPGA.
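As an illustration of the full-rate math in the list above, here's what a block RMS calculation looks like; the whole point of pushing it onto an FPGA is running exactly this arithmetic at the acquisition rate instead of after the fact:

```python
import math

def block_rms(samples):
    """Root-mean-square of one block of samples -- the kind of
    measurement you'd move down to the FPGA to run at full
    acquisition speed."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

# One full cycle of a unit-amplitude sine has an RMS of 1/sqrt(2)
sine = [math.sin(2 * math.pi * n / 1000) for n in range(1000)]
sine_rms = block_rms(sine)
```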
All of these tasks are possible today but some aren't as easy as they could be. The technology is there to accomplish the task, the trick is to expose that power to you. NIWeek is coming up in just over two weeks. I look forward to discussing the possibilities with you there.
Labels: hardware in the loop, Hardware Test, LabVIEW, SoftwareDefinedRadio
In my last post, I mentioned a need to find software faults in microprocessor-controlled embedded systems. Since I'm a software guy, I'm curious about this angle and started doing some research.
1) The faults that were mentioned (timing, memory leaks, crashes) are basically software-domain problems, but they may only be triggered by certain I/O inputs the system receives (and the software ends up dealing with). So it looks like the main tools to diagnose the fault will need to be ones that deal with the software domain, but the system may need to be driven from the hardware I/O domain.
2) These challenges are similar to those found in other applications that use microprocessor-driven control systems. For example, NI makes hardware that can be used for real-time process control (such as controlling valves in a paper mill). The controllers need to run 24x7, just like a radio, and the entire system is controlled by the microprocessor. NI needs to make those controllers robust and therefore we need to test them extensively.
3) The automotive industry also uses microprocessor-based high-speed control systems, especially to control the "powertrain" (engine & transmission). They use a technique called hardware-in-the-loop (HIL) testing. The real controller (ECM) that will be mounted in the car is hooked to a "simulator". The ECM sends real output signals over a real wiring harness to the simulator; the simulator looks at the outputs from the ECM, calculates what the real engine would do, and sends signals back to the ECM that make it think it is talking to a real engine. You then "drive" the ECM as if you were driving the car through various scenarios, and the simulator checks that all of the outputs from the ECM are valid. While the simulator is running, they also monitor the software on the ECM to verify that it is operating properly. If the ECM crashes, they have tools to tell them what happened in the software. It would seem like you could do exactly the same thing with software defined radio test.
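The closed loop described above can be sketched in a few lines. Everything here is made up for illustration — the plant model, gains, and setpoints are toys, and `toy_ecm` stands in for the real controller under test — but the shape is the real thing: the simulator turns controller outputs into plausible sensor inputs, and the harness validates the logged behavior:

```python
def engine_model(throttle, rpm, dt=0.01):
    """Toy plant: a first-order lag toward a throttle-proportional RPM.
    A real HIL simulator solves a far richer physical model."""
    target = 800.0 + throttle * 60.0          # made-up idle speed and gain
    return rpm + (target - rpm) * 5.0 * dt

def run_scenario(controller, steps=2000):
    """Closed loop: feed the simulated sensor (rpm) to the controller
    under test, apply its output (throttle) to the plant, and log the
    result for validation -- the core of any HIL test."""
    rpm, log = 800.0, []
    for _ in range(steps):
        throttle = controller(rpm)            # ECM under test
        rpm = engine_model(throttle, rpm)     # simulator's plant model
        log.append(rpm)
    return log

def toy_ecm(rpm):
    """Stand-in controller: proportional control toward 3000 RPM,
    with throttle clamped to 0..100 percent."""
    return min(100.0, max(0.0, (3000.0 - rpm) * 0.05))

trace = run_scenario(toy_ecm)
```

With these gains the loop settles at about 2450 RPM — proportional control leaves a steady-state offset, which is exactly the kind of behavior the simulator's validity checks would flag or accept against a tolerance.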
4) In the auto industry, they typically don't have an OS and they do the tracing at a low level. In the old days, this was done with a chip emulator: the emulator (or logic analyzer) would snoop the address and data bus to monitor execution. Now that caches are so prevalent, the external bus is no longer a good place to monitor. Chips that Freescale makes for automotive, such as the MPC565, have a port called NEXUS. This port works in conjunction with information coming out on the address/data bus to spit out information while the processor is running, giving visibility into the processor pipeline. You can tell that a branch was taken, not taken, etc., even if the instruction stream is entirely in the cache. There are boxes such as the Lauterbach ICD Trace that hook up to the chip for this purpose. Their tools can then show the full execution history, what led up to the processor crash, and so on.
WindRiver, GreenHills, and other emulator vendors also have this capability.
It would seem like this concept could be applied to Software Defined Radio. A Radio Emulator could drive the radio under test while the radio is being monitored with a tool like this. In the automotive case, I don't know how much the trace data ends up being synchronized to the operation of the HIL emulator. It might be critical to share control or timing information between the radio emulator and the trace tool so that you can correlate the waveform signals precisely to the line of code being executed. This is where I think LabVIEW would come in. HIL simulators provided by NI partners use LabVIEW as their base and are open to modification by the end user. LabVIEW can work with other software tools via ActiveX and other interfaces, so it should be relatively easy to get LabVIEW to control both the HIL simulator and the trace tool.
5) Since SDRs typically run high-function OSes like GreenHills Integrity or WindRiver VxWorks, there are some additional tools that would be useful during debug. WindRiver has a tool called MemScope that will monitor the heap so you can see memory leaks; LabVIEW could control this tool also. GreenHills has a leak checker too, but I haven't had any luck finding more details on what it does.
I'm looking into setting up this type of system in our lab. We have a reliability lab here for testing our real time controllers and we're looking at adding a tool like MemScope (since we use VxWorks in our controllers) to our testing arsenal. We may add on a trace tool as the next step after that.
Labels: hardware in the loop, Hardware Test, Software Test, SoftwareDefinedRadio
In my visit to the SDR Forum Test Workshop, I observed something interesting. The presentations given by instrumentation vendors focused on verifying the analog and digital performance of the RF subsystem. These vendors all have very nice equipment, such as spectrum analyzers, to do this testing.
However, some of the questions from the audience during the round-table discussion asked about issues that had nothing to do with the RF subsystem. Those questions were about frustrations in testing the software in a software-based electronic system. One question was on how to get a PC to talk to the radio over the Ethernet interface. JTRS radios are typically controlled via SNMP, and a user was looking for a toolkit (I found one here) to talk to their radio.
Two other attendees needed to trace a device failure to a software condition such as a stack overflow or a priority inversion, both software bugs that present themselves as if they were hardware failures. The worst part about these bugs is that they are typically hard to reproduce and can take a long time to track down, which makes them expensive: in test, time is money.
Labels: JTRS, SNMP, Software Test, SoftwareDefinedRadio
RF test is one of the largest areas of the test market; somewhere around a third of all test equipment purchases are for RF test. I've been looking into this market for the past six months or so to see what we need to be doing in LabVIEW (Desktop, RT, Embedded, FPGA) to better serve people testing RF with the NI platform.
One interesting area of RF development is "Software Defined Radio". The term applies most specifically to the JTRS (Joint Tactical Radio System) program in the military, but it really describes the way radio design is changing. Historically, radios were implemented in "hardware": either in chips from various silicon vendors or in discrete components on a board. You may have bought a science kit when you were 10 to make an AM radio out of a transistor or vacuum tube and some other electronic components.
The design of radios is changing. The first migration was moving the signal manipulations that used to take place in the analog domain to the digital domain. A DSL modem has chips inside that use digital signal processing to convert the analog signal that travels down the copper wire into bits that become Ethernet packets.
The second migration was to perform these signal processing operations in a re-programmable device. When 33.6 kbit/s modems came out, they were implemented using "standard" digital signal processors from Motorola Semiconductor (now Freescale), Rockwell, or others. Using a reprogrammable processor allowed the device to support multiple standards or to be upgraded when new versions of a standard were released. Now you are seeing the same thing happen with more 'exotic' radios that broadcast in the RF range rather than just transmitting over phone lines. The JTRS program envisions a radio that can run more than 30 different standards on the same hardware. These radios use a combination of processors and FPGAs to perform the signal processing needed to pull this off. There is still some very complex analog circuitry in the radio, but, as some attendees of a conference I just went to noted, "the bits are getting much closer to the antenna".
Technorati Tags: SoftwareDefinedRadio, Modem, Test, JTRS
Labels: JTRS, Modem, SoftwareDefinedRadio, Test