Test and Measurement Technical Data
Speeding up dScope Automation
Introduction
When introducing dScope into a production test environment, speed is money. Here are some thoughts on what you can do to make your tests run as fast as possible. Some are obvious, others less so.
Get a good balance between loading configurations and changing them
Almost inevitably you will be using at least one dScope configuration as a starting point for your tests. While it is convenient and easy to string configurations together to do what you want, that is seldom the fastest way to test. Almost every parameter and setting of the dScope can be automated, and it will nearly always be faster to change the dScope configuration with a few lines of code than to re-load a new configuration. It is a balance, though: you may need various stages of the test that you can jump back to in order to re-test a section. Doing this without configurations is difficult, and there is a risk of leaving the dScope incorrectly configured if there is any doubt about what has changed since the configuration was loaded.
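As a minimal sketch of the idea, assuming dScope is driven from Python over a COM automation interface - the ProgID, method and property names below are illustrative placeholders rather than the documented dScope object model - a single configuration load can be followed by cheap in-place parameter changes:

```python
# Sketch: re-use one loaded configuration and change parameters in code,
# rather than loading a fresh configuration for every test point.
# "dScope.Application", LoadConfiguration(), Generator.Frequency and
# Generator.Amplitude are assumed names - substitute the real object model
# from the dScope automation documentation.
import win32com.client

dscope = win32com.client.Dispatch("dScope.Application")   # assumed ProgID

dscope.LoadConfiguration(r"C:\tests\baseline.dsc")         # assumed method; path is a placeholder

for freq_hz in (100, 1000, 10000):
    dscope.Generator.Frequency = freq_hz                   # assumed property
    dscope.Generator.Amplitude = 8.0                       # assumed property, dBu
    # ...take settled readings here...
```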
Watch out for unnecessary stuff in your configurations
Apart from asking the question "Do I really need to test this parameter?" and cleaning out tests that tell you nothing helpful (e.g. sometimes you can know for certain that if test A passes then test B will pass too), it can be useful to consider the excess content within the dScope configurations you are using. These can be quite big files and can take a moment or two to load. In the design office this is seldom a problem, but on the production line, where you may end up loading thousands of configurations a day, making them small and streamlined can save a lot of time. If your configurations don't load more or less instantly, here are some ideas for making them load faster:
- Don't save trace data if you don't need it. If you use the "save as" option and look to the right of the dialogue that appears, you will see that each part of a dScope configuration can be set to be saved and restored or ignored. If you don't need to recall the trace data, don't save it. Note that you may need to save sweep traces if you want to save their limit lines, but you shouldn't need to save sample buffers (scope traces) or FFT data.
- Keep your pages clean. If you look at all five pages that a dScope configuration contains, most of them will probably not be used by you, yet may still contain a lot of dialogue boxes, memos and so on. Consider showing just the results you want the operator to see on page 1 (if any), with settings and notes/memos for the test on page 2. The rest of the pages could be empty; they will load faster.
- Watch out for generator wavetables. Some generator signals are saved as the parameters needed to create a wavetable, and the wavetable is calculated when the configuration is loaded. With a large wavetable (e.g. a 64k bin-centres wavetable) that can take several seconds. These are sometimes left over after a configuration has been used for development work and are not actually used in the test.
- Save with a script or rendered output? (wavetables etc.) In several places within the dScope it is possible to use a .dss script in place of a rendered file. Examples would be generator wavetables and FFT weighting functions. In the generator example, if you have set the generator to use a "User waveform" you can use a specific type of .dss script to calculate the wavetable when the configuration is loaded. This gives the flexibility to change the waveform depending on other parameters in the system. If this functionality isn't needed then it may be faster to save the configuration with the rendered output of the script rather than the script itself. In the example of the generator wavetable, you would save the configuration with a .wfm file saved as the user wavetable, rather than the .dss file that created it.
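If you want to see which configurations are actually costing you time, it is worth measuring the load times directly. A minimal sketch, again assuming a Python/COM connection with a placeholder ProgID and an assumed LoadConfiguration() call:

```python
# Time each configuration load so you know where trimming will pay off.
# The ProgID and LoadConfiguration() are assumed names; the paths are placeholders.
import time
import win32com.client

dscope = win32com.client.Dispatch("dScope.Application")    # assumed ProgID

for path in (r"C:\tests\stage1.dsc", r"C:\tests\stage2.dsc"):
    start = time.perf_counter()
    dscope.LoadConfiguration(path)                          # assumed method
    print(f"{path}: loaded in {time.perf_counter() - start:.2f} s")
```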
Do more than one reading at a time
This is a pretty broad subject. dScope is essentially a software instrument with a lot of concurrency and not a lot of modality. This means that you can read the Signal Analyzer, the Continuous Time Detector and up to 40 FFT detectors in the same configuration, as well as a lot of other parameters from the digital carriers etc. You can also sweep up to four results at the same time; for example, you can measure amplitude and distortion on both channels A and B simultaneously rather than making separate amplitude and distortion sweeps.
Extending this further, sweeps are often not the fastest way of making measurements and you can get a LOT more results a lot more quickly from multi-tones, bin-centres or log swept sine signals. These won't be discussed in detail here, but suffice it to say that there are faster ways to make some measurements than making them one after another.
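As a sketch of the "read everything at once" approach - the Reading() call, channel identifiers and function names here are assumptions, not the documented API - several concurrent results can be collected from a single configuration in one pass:

```python
# Sketch: gather several concurrent readings per channel in one pass instead
# of running a separate test for each. Reading() and the channel/function
# names are illustrative placeholders for the real dScope automation calls.
import win32com.client

dscope = win32com.client.Dispatch("dScope.Application")    # assumed ProgID

def read_channel(channel):
    return {
        "amplitude": dscope.Reading(channel, "Amplitude"),  # assumed call
        "thd_n":     dscope.Reading(channel, "THD+N"),      # assumed call
        "frequency": dscope.Reading(channel, "Frequency"),  # assumed call
    }

results = {ch: read_channel(ch) for ch in ("A", "B")}
print(results)
```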
Optimise your settling times
Settling is a complex subject but at its root is a simple concept: if I have just changed something on my setup, how do I know when it's OK to make a measurement? How long do I have to wait for the system to have "settled" to the new conditions? When you do things by hand, you seldom have to worry about this, but since an automation system can make measurements within milliseconds of making changes, you need to pay some attention to it. The settling algorithms in dScope essentially wait a defined period after a change has been made, then make a succession of measurements to decide whether the results have "settled" to their new value. There are several things that can make this very difficult. If you find that it takes a few seconds to make a reading, or if you get marks on your sweep with the letter "s" next to them, you are seeing settling time-outs.
- Readings close to zero: These are a problem because a very small change can constitute a very large percentage difference. Because the settling parameters look for differences in percentage terms, you can find that a very steady reading close to zero will not settle. The most common culprit here is inter-channel phase: for readings close to zero, or sweeps that pass through zero, you may find that you get settling time-outs. Low distortion readings also suffer from this. Unfortunately you can't just switch units to dB, because this only affects the displayed result and the settling uses linear ratios behind the scenes. When controlling dScope from an automation interface, it may help to read a different parameter first with settling turned on (e.g. amplitude) and then read the phase with settling turned off, as shown in the sketch after this list.
- Noisy readings: The problem here is that there is no right answer when it comes to unstable signals and settling. If you are attempting to measure random noise (as opposed to steady tonal noise) you are going to hit this sooner or later. The problem is that we are waiting for the signal to "settle" and random noise may never do that - dScope will keep waiting until the settling times out, and that wastes test time. The first step is usually to try loosening the tolerance if the noise is fairly steady. The next step is to employ a bit of averaging in each result. One of the easiest ways to do this is to increase the size of the sample of data being measured: either increase the FFT size if you are using the FFT detectors, or slow down the reading rate of the Signal Analyzer (in the Signal Analyzer panel) for Signal Analyzer and CTA readings. This makes the rate of getting individual results slower, but since they are more consistent, the result should settle faster. You can also turn settling off and/or use averaging in the settling parameters.
- Latency: This is something that can trip you up if you are trying to test too fast. The basic problem is that it's quite easy for dScope to make a measurement before the signal it is generating has actually arrived back at the analyzer. Symptoms of this include the first point on a sweep being much lower or higher than you expect, two adjacent readings being identical when you are expecting a ramp, or a complete offset of your values. To get a better idea how this happens, imagine you are measuring a device that has 50ms of latency. If you use the default settling parameters, you may well find that dScope is not actually reading what you think it is reading. On a sweep step going from 100Hz to 200Hz, dScope will switch its generator from 100Hz to 200Hz, wait the default 30ms and make a measurement, thinking it is reading the 200Hz tone that it is outputting. In fact, because of the latency, the device is still outputting the 100Hz tone. The dScope will plot the result at 200Hz because that is what it is generating. If you have filters in your measurement (e.g. THD+N notch filters) that track the generator, they too will not line up with the tone and the result will be wrong. As a rule of thumb, set the settling time to the latency plus about 50ms. With very long latency, you can design the test to wait for the signal you are expecting. If measuring with a sweep, you can use "sensed sweeps" to do this. These detect when the changes have occurred on the analyzer side in order to step the sweep forward, and have been successfully used with very long paths and latencies of a few seconds (including signals being encoded into a transport stream, bounced off satellites and then decoded again - admittedly not your typical production test).
- Settling and FFT detectors: With FFT-based measurements, settling works the same way as for other measurements and requires several full FFT acquisitions before a "settled" result can be determined. This is particularly slow with larger FFT buffer sizes, and quite often it's sufficient to turn settling off for FFT measurements and set the number of results to 2. This essentially just uses the second acquired FFT buffer to make the measurement (the first quite often contains a glitch where a parameter change takes place etc.) and is usually fast and reliable.
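As a sketch of the work-around described under "Readings close to zero" above - settle on a well-behaved parameter first, then take the near-zero reading with settling disabled - where the settling control and reading calls are assumed names rather than the documented API:

```python
# Settle on amplitude (a well-behaved reading), then grab inter-channel phase
# with settling turned off so a near-zero value cannot cause a time-out.
# Settling.Enabled and Reading() are assumed names, not the documented API.
import win32com.client

dscope = win32com.client.Dispatch("dScope.Application")        # assumed ProgID

dscope.Settling.Enabled = True                                  # assumed property
amplitude = dscope.Reading("A", "Amplitude")                    # settled reading

dscope.Settling.Enabled = False                                 # assumed property
phase = dscope.Reading("A", "Interchannel phase")               # immediate reading

print(f"Amplitude {amplitude:.2f} dBu, phase {phase:.3f} degrees")
```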
Optimise your auto-ranging
Within the dScope hardware all measurements are made in the digital domain: generator signals are generated digitally and then converted to analogue for the analogue outputs, and analogue signals coming into the analyzer are converted to digital before measurements are made on them. In both cases, gain-ranging circuits adjust the analogue signal levels to keep the digital converters operating close to optimum. In the dScope Series III there are 2dB gain steps on both the generator and the analyzer. We don't need to worry about the generator auto-ranging: because the signal is under the control of the software, the software controls the ranging directly. The analyzer, on the other hand, has no way of knowing in advance what signal you are going to connect to it and must adjust itself instantly when levels change. If you know in advance what levels you will be using in your test, you can speed things up a bit and reduce relay wear by setting the correct gain range in advance (in the Analogue Inputs dialogue). If you use fixed gain ranges, dScope will still auto-range up if you set the range too low; it does this to protect itself and prevent clipping. If you fix the gain range too high, the penalty is poorer noise and distortion performance. In practice dScope auto-ranging is pretty fast, but you can make it faster and prevent relays chattering if you design your test to use fixed gain ranges. See also the note on sweeping amplitude downwards in the "Speeding up sweeps" section below for more on this.
If you are not measuring high performance equipment close to the limits of the dScope, you may also find that you can speed things up by using a larger auto-ranging step size such as "medium" (6dB steps) or "large" (20dB steps). The dScope converters won't be operating as close to optimum, but ranging will usually be faster and performance will often be fine.
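Where the EUT levels are known, a fixed range can be set up once at the start of the test stage. A minimal sketch, assuming placeholder property names for the Analogue Inputs settings:

```python
# Fix the analyser input range when the EUT output level is known, rather than
# letting auto-ranging hunt for it on every level change. AutoRange and
# FullScaleRange are assumed property names; take the real ones from the
# Analogue Inputs settings in the dScope automation documentation.
import win32com.client

dscope = win32com.client.Dispatch("dScope.Application")   # assumed ProgID

EXPECTED_LEVEL_DBU = 8.0    # nominal EUT output for this test stage
HEADROOM_DB = 2.0           # margin so unit-to-unit variation doesn't force a range-up

dscope.AnalogueInputs.AutoRange = False                                   # assumed property
dscope.AnalogueInputs.FullScaleRange = EXPECTED_LEVEL_DBU + HEADROOM_DB   # assumed property
```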
Set filters to "track generator" where possible
By default dScope tends to use filters set to track the input frequency for measurements such as THD+N, where you need to notch out the fundamental. This is more universally applicable than tracking the generator, as it also works with signals that aren't derived from the dScope itself, such as a tone played from a playback-only device. If you are using the generator, however, it will be faster to set the filter to "track generator". The reason is that, if it is set to "track input", the software has to measure the frequency of the input first, using the Signal Analyzer, before it can set the filters. This requires the frequency reading to have settled and the filter to be applied before a measurement can be taken. If it is set to "track generator", the software already knows what the generator frequency is and can set the filter immediately. For frequency sweeps where the generator is the source, this can make a useful difference in speed.
Avoid using DC coupled dScope hardware
By default, dScope Series III units are shipped with the analyzer's analogue electronics AC coupled. DC coupling can be selected using internal jumpers, but it is likely to slow down measurements, particularly where auto-ranging and settling are concerned.
Use a fast PC
Fairly obvious: dScope Series III is essentially a software instrument with a dedicated hardware interface. Many processes are faster with a faster PC, particularly large FFTs and anything that requires complex computation. Note that audio acquisition is always "real time" so acquiring a particular size buffer of data will always take the same amount of time, but it's what you do with it afterwards that can speed up significantly. The dScope application itself is quite lightweight by today's standards and will still run quite happily on a netbook, but it will certainly run faster on a faster PC.
Speeding up sweeps
In addition to the notes above, many of which apply equally to sweeps, there are a few additional tips to make your sweeps run as fast as possible:
- Sweep downwards (amplitude): If you are performing a sweep of generator amplitude, it will be faster to start at a high level and sweep downwards. Why? It's to do with auto-ranging. At each step the dScope has to measure the incoming level and work out what the gain range should be before setting it and measuring again. As the sweep progresses, if the amplitude is getting smaller, each successive signal can be accommodated within the existing gain range and can therefore be measured: the dScope knows what gain range to set and can do it straight away in one step. If you sweep upwards, successive steps can go above the range of the converters. To work out what the gain range should be, dScope has to momentarily set a much larger gain range, find out what the level is and then step the gain back down again: this takes two steps, will be slower and will increase the wear on the gain-range relays.
- Sweep downwards (frequency): Tests with early versions of dScope software seemed to indicate that sweeping down in frequency was slightly faster; more recent tests have shown no significant difference.
- Use sweep data tables to reduce the number of points measured: This will not always be appropriate, but if you are performing a frequency response sweep, you may well be interested in a few regions of the response where you want higher resolution, typically at the top and bottom of the frequency range, while not being too concerned with the region in between, which is likely to be flat. To get the required resolution, you can either increase the number of points for the whole sweep, making far more measurements than you need, or you can use a sweep data table with data points clustered around the areas of interest and sparser points in between.
- Use adaptive FFT sizes: This is a bit more advanced. If you are measuring an FFT-based parameter, you will often find that the frequency resolution is too low when measuring at low frequencies or unnecessarily high at high frequencies. You can get around this by running a script on each sweep step that sets the FFT size depending on other parameters such as the signal frequency, the sample rate and the buffer size; see the sketch after this list. This allows you to use a fast, small FFT at high frequencies and a larger, slower FFT only at low frequencies, where it is needed for resolution. Please contact us for more information on this.
- Drop out on failure: If you are running sweeps in a production test, you may want to drop out of the sweep in the event of a failure. You can achieve this by setting a script to run on a "limit breached" event in the Event Manager that simply stops the sweep. Very simple and effective.
- Avoid sweeps altogether: Sweeps are good, but they are not particularly fast. Many things can be done faster using multi-tone, bin-centres or log swept sine stimuli.
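To illustrate the adaptive FFT size idea mentioned above, the sizing rule such a per-step script might apply is sketched below in Python. The "minimum cycles in the buffer" rule and the size limits are assumptions to tune for your own test, and applying the chosen size to dScope would use the real automation call rather than the print() shown here.

```python
# Sketch of an adaptive FFT sizing rule: choose the smallest power-of-two FFT
# that still captures a minimum number of cycles of the stimulus, so there is
# enough frequency resolution at low frequencies without paying for a huge FFT
# at high frequencies. The numbers below are assumptions for illustration.

def fft_size_for(freq_hz, sample_rate_hz, min_cycles=32,
                 min_size=4096, max_size=262144):
    samples_needed = min_cycles * sample_rate_hz / freq_hz
    size = min_size
    while size < samples_needed and size < max_size:
        size *= 2
    return size

for freq in (20, 100, 1000, 20000):
    print(freq, "Hz ->", fft_size_for(freq, 96000), "point FFT")
```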
User interface and control
Another broad topic, but one where there can be big time savings. The general idea here is to avoid the test operator having to do anything that can be done faster by a machine. Or, if it can't be done by a machine, make it faster (and usually therefore easier) for the operator. Some ideas, many of them obvious:
- Avoid the operator having to use a mouse: Mice are great, but they're not usually the best way to navigate through a test sequence. If you have to use your hands for re-patching a connection or making a change on the EUT, it's not good to have to grab the mouse in order to click an on-screen OK button - especially if it's small. It is better to hit the keyboard space bar, better still to use a foot-switch, and best of all, where possible, to require no action at all.
- Proceed on sensing a change: There are several levels to this. At the highest level you can use the test system to determine when an action has taken place and move on without requiring the operator to do anything. For example, if the operator instruction says to turn on phantom power, you can set your test equipment to wait for the presence of the DC and move on automatically when it is detected. At a lower level, when the system has made a change, you can either wait a fixed period that will safely accommodate the change, or you can poll a measurement continuously until the change has taken effect - this can be particularly helpful when waiting for digital inputs to lock etc.; see the sketch after this list. In both cases, you will need a time-out or some other means of allowing the operator to indicate that the expected event did not take place in the event of a fault.
- Think about your prompts: It used to be slow to use graphical prompts because of the speed of computers. This is no longer a problem, and it may help to use graphics to aid recognition of particular stages of a test, as these are often faster for the operator to take in than text. When a test is new, the operator will need to be able to work out from the prompt what to do. After doing several thousand, they no longer need the prompt detail, and it may be faster to have distinctive graphics that they can see out of the corner of their eye to keep things in sync. In practice you will probably need both graphics and text, but if you ask the operators, you may find that after a while they no longer read the text and rely instead on recognising the graphics.
- Avoid data entry: If you can use barcode scanners instead of requiring the operator to type in serial numbers, it should be faster. Likewise, it is often possible to interrogate the EUT electronically to get information about it, including its serial number. If possible, all the meters used for a test should be able to talk directly to the test system without requiring the operator to type in readings.
- Use switchers: Rather than requiring the operator to re-patch, connect everything up if possible and use switchers to select what is connected for measurement.
- Make use of jigs and fixtures: For board testing, a bed of nails or dedicated test terminals is often the only way to go. For completed products, patching audio cables is often one of the longer test processes in high-volume manufacture, and simple things can make quite a big difference. Fixtures that hold the connectors in place so that patching is quick and easy can save a lot of time. Removing the latches from XLR connectors (remove the button entirely from the females and smoothly grind out the lip on the end of the males) can make re-patching quicker. Distinct colour coding of connectors (not just little numbers on the wires) reduces time spent searching for the right cable. Consider making wiring looms that can be connected to the next EUT while the previous unit is still being tested, attached to the system via a single multi-pole connector. Fix the system side of this rigidly so it doesn't require two hands to connect. Remember that you are actually mating more connectors this way and will have to allow time to disconnect as well, so it will only help if there is dead time during the test process when the operator can be patching up the next unit, or if it's economically viable to employ another operator to patch up units for test when you really need the throughput.
- Make use of software EUT control and special test modes: With software-driven products, being able to put the product into a special test mode and change its settings from the test software can streamline production test enormously and reduces the reliance on an operator to make those changes. Designing the product from the outset with speed of manufacture and test in mind can be crucial.
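As a sketch of the "proceed on sensing a change" idea from the list above: poll a reading until the expected condition appears, with a time-out so a faulty unit doesn't hang the test. The reading being polled is a placeholder - it could be a DC level, a carrier-lock flag or anything else the automation interface exposes.

```python
# Poll a measurement until the expected change appears, with a time-out so a
# faulty unit can't hang the test. read_value() stands in for whatever dScope
# reading you poll (DC level, carrier lock, etc.) via the automation interface.
import time

def wait_for(read_value, condition, timeout_s=5.0, poll_s=0.05):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition(read_value()):
            return True
        time.sleep(poll_s)
    return False   # report the fault / prompt the operator instead of waiting forever

# Hypothetical usage: wait for phantom power to appear as a DC level above 40V.
# if not wait_for(lambda: dscope.Reading("A", "DC level"), lambda v: v > 40.0):
#     print("Phantom power not detected - flag unit for investigation")
```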
Summary
There are probably many more ways in which tests can be sped up, some specific to particular equipment. Almost everything has an alternative, and knowing the most efficient way to achieve the correct results is the key to fast testing. The order in which you test, the number of concurrent processes you can achieve and the amount of dead time you can avoid all add up. When things are moving really fast, you can't easily tell whether a change has made a difference, so consider logging the time taken for each stage of a test, including the time between tests, and saving it into your results for analysis.
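A minimal sketch of the kind of stage timing that makes this practical - a simple context manager that records how long each stage (including the dead time between tests) actually took, so the figures can be saved alongside the results:

```python
# Record how long each stage of a test takes, including dead time between
# tests, so you can see whether a change has really made a difference.
import time
from contextlib import contextmanager

stage_times = {}

@contextmanager
def stage(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        stage_times[name] = time.perf_counter() - start

with stage("frequency response"):
    time.sleep(0.2)          # stand-in for the real measurement
with stage("operator re-patch"):
    time.sleep(0.1)          # stand-in for dead time between tests

for name, seconds in stage_times.items():
    print(f"{name}: {seconds:.2f} s")
```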
As a final thought, be nice to your test operators and talk to them: if they are finding something difficult to do, make it easier - it will almost certainly be faster too.