EDIT: 10/02/2020 - I've updated a couple of points in the text below following very informative comments received from other members.
Introduction
I’m taking the time to better understand how my scope works, so I was looking at its quoted performance specifications and how they translate into real life: marketing vs reality. It turned into quite an informative activity so I thought I would write it up in case it can help anyone else new to these tools. This isn’t a review of features or options, just a look at some very specific elements of the scope to see what the implications and limitations for capturing and viewing waveforms are. In some ways it hooks into fmilburn's post comparing three entry level scopes, which didn't look at these aspects in detail as that wasn't its purpose.
The scope I have is a Siglent SDS1104X-E which is a 4-channel, 100MHz device. The specifications I’m interested in here are taken from the datasheet:
Real Sample rate: 1GSa/s per ADC (2 ADCs)
Memory Depth: 14Mpts (single channel/pair); 7Mpts (two channels/pair)
Waveform update rate: 100,000 wfm/s (normal mode); 400,000 wfm/s (sequence mode)
No mention is made of an ‘equivalent sample rate’ and I don't believe that the scope has such a capability.
I'm laying out my understanding in this post, which may have gaps, be incompletely grasped, or be completely wrong. I hope, then, that anyone who knows better will correct me in the comments and I will update accordingly.
TL;DR
It is quite a long post so I can easily see people drifting off. Here's a summary:
- I describe my understanding of sampling, memory and waveform update rates
- I run some tests to investigate these
- I come to the, probably not unexpected, conclusion that there was a lot I didn't appreciate and that the specifications need to be considered in terms of how testing is being undertaken: in particular, the relationship between sampling rate and memory depth, and the fact that the quoted waveform update rate is VERY dependent on a particular scope setup.
Understanding Sampling
Essentially, the scope is sampling an incoming waveform at various points once it has triggered. These points are then joined to reproduce the waveform on the screen - rather like a dot-to-dot picture:
(excuse the crummy picture) It doesn't take much intuition to work out that the more samples it can capture, the more accurate the displayed waveform will be. The Siglent will interpolate these samples (join the dots) using a sin(x)/x or x algorithm (curve or straight line) but this can still only work with the samples it has:
(excuse the even crummier picture) In the image above, the incoming red signal isn't sampled fast enough and so what appears is the interpolated blue waveform.
So interpolation doesn’t necessarily result in an accurate capture or display. In practice, when this is needed - for example, trying to measure a waveform at a frequency greater than the scope can adequately sample in one pass - the scope should build the displayed waveform up from samples captured at slightly different times across multiple, successive incoming waveforms in order to give an accurate image - called equivalent-time sampling. This requires the waveform to be stable and repetitive, of course - without that, accuracy is impossible and at best you’d need to use your brain and intuition to guess what the actual waveform looks like. It certainly isn’t going to help find glitches or other problems if the scope is working in this way. A datasheet would refer to this as ‘Equivalent Sample Rate’, something that doesn’t appear in the datasheet for this model.
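To make the idea concrete, here's a toy model in Python - purely illustrative, not how any real scope implements it - where a 400MHz sine is sampled at 1GSa/s, but each acquisition starts at a random trigger phase; folding all the samples back onto a single period builds a far denser picture than any single acquisition could:

```python
import numpy as np

rng = np.random.default_rng(0)
f_sig = 400e6                # repetitive signal, too fast to capture in one pass
f_samp = 1e9                 # real-time rate: only 2.5 samples per period
period = 1 / f_sig

composite_t = []
for _ in range(50):                      # 50 successive acquisitions
    offset = rng.uniform(0, period)      # trigger lands at a random phase
    t = offset + np.arange(20) / f_samp  # 20 real-time samples per acquisition
    composite_t.extend(t % period)       # fold back onto a single period

composite_t = np.sort(composite_t)
composite_v = np.sin(2 * np.pi * f_sig * composite_t)
print(f"{len(composite_t)} points across one {period * 1e9:.1f} ns period")  # 1000 points
```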
What I want to see, then, are samples captured across the full capability of the scope at its specified 1GSa/s sampling rate. The Siglent, according to the datasheet, will capture at 1GSa/s on any single channel of a pair (Ch1 or Ch2, and Ch3 or Ch4); this drops to 500MSa/s where both channels of a pair are used (Ch1 and Ch2, or Ch3 and Ch4). That makes sense as it is a 2-ADC machine. So, it should be able to capture a sample at 1ns intervals (1 second / 1,000,000,000 samples per second) when only one channel of a pair is used.
A 100MHz signal has a period of 10ns, so a 1GSa/s rate would capture 10 samples per period to recreate the waveform; intuitively, and I could be wrong, that feels like it is pushing the boundaries of accuracy. It also implies that if I were to use both channels of a pair, the sample rate would drop to 500MSa/s, so with a 100MHz signal the scope would be taking 5 samples per waveform (500,000,000 / 100,000,000), which is probably not ideal.
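As a quick sanity check of that arithmetic, here it is as a couple of lines of Python (the helper name is my own, just for illustration):

```python
def samples_per_period(sample_rate_sa_s: float, signal_freq_hz: float) -> float:
    """Number of samples landing on each period of a repetitive signal."""
    return sample_rate_sa_s / signal_freq_hz

print(samples_per_period(1e9, 100e6))    # 10.0 - one channel of a pair
print(samples_per_period(500e6, 100e6))  #  5.0 - both channels of a pair in use
```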
EDIT: 10/02/2020 - updated following comments and information provided by Shabaz:
On that basis, it would seem that when a channel pair is in use the scope would really be limited to 50MHz signals on each channel. It would also imply that the 200MHz version of this scope, with only 1GSa/s, probably isn’t up to snuff as it would need to sample at 0.5ns intervals (2GSa/s) to get 10 samples per period.
A sampling rate of 2x the signal frequency is sufficient to capture an accurate waveform. So even a sample rate of 500MSa/s is 2.5x a 200MHz signal and should be sufficient to capture the waveform - see the caveats in the comments below related to input filtering. I don't have a means of generating a signal of that frequency (or 100MHz for that matter) so I can't test this, unfortunately.
The above is written on the basis of a waveform that is a sine wave; anything else will have harmonics above its fundamental frequency that must also be captured for accuracy. In any case, capability at these frequency levels is theoretical for me as I don’t (at least yet) do anything at that level.
Understanding Memory Depth
My initial idea about memory on the scope was that it is used to hold data about a signal and that the display was a window on that data. The size of that window was dependent upon the timebase: a fast timebase would be a thin sliver; a slow timebase a fatter sliver. Big misunderstanding on my part it turns out.
Memory is used to capture enough data about a waveform to fit it on the display. At a fast time base, less data is shown on the display so less memory is needed; at a slow timebase more data is shown on the display so more memory is needed. That is a totally different use of memory than I initially thought. The Siglent has 14Mpts (million points) per channel pair, meaning 14 million sample points, and it isn’t using that ALL the time - displaying a small part and keeping the rest so you can search around - instead it is using ONLY what it needs to fill the display.
There is something implicit in that understanding and the specification that is not immediately obvious, or wasn’t to me until I sat down and thought about it. 14Mpts at 1GSa/s would mean that 14ms of activity can be captured (14,000,000 / 1,000,000,000). In other words, the sample rate can be maintained at 1GSa/s as long as only a maximum of 14Mpts is required to fit on the screen.
Therefore I can expect to see the following:
- Only as much memory as is needed will be used to fill the screen which means at fast timebases, much less memory is needed and that isn’t a cause for concern.
- There is a limit to the timebase at which the maximum sample rate can be maintained. This is defined by the scope's physical characteristics.
- Beyond that limit, the sample rate must reduce as the memory is finite and the screen has a fixed size. This may be a cause for concern - at this point it's just a feeling, but later I will see that it is indeed important to understand.
This means that memory and sample rate are interrelated, and it isn’t sufficient to think that the scope samples at 1GSa/s and has a memory of 14Mpts, because one of those specifications is flexible.
In fact, the manual gives a formula:
Memory Depth = Sample Rate x secs per div x num of divs
So at 1GSa/s with a timebase of 10ns and 14 divisions (for this scope) then
Memory Depth = 1,000,000,000 x 0.000000010 x 14 = 140 points.
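The manual's formula is trivial to put into Python to play with (my own helper, just for checking the numbers):

```python
def memory_depth_pts(sample_rate_sa_s: float, secs_per_div: float,
                     num_divs: int = 14) -> float:
    """Points needed to fill the screen at a given sample rate and timebase."""
    return sample_rate_sa_s * secs_per_div * num_divs

print(memory_depth_pts(1e9, 10e-9))  # 140.0 points at 10 ns/div
print(memory_depth_pts(1e9, 1e-3))   # 14,000,000.0 points - full memory at 1 ms/div
```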
Not many, and it would seem to indicate that having a larger amount of memory shouldn’t be one of the headline figures when choosing a scope if one is primarily looking at fast signals. As a thought, whilst there are a lot of complaints about Keysight scopes having little memory, at the timebases/frequencies/sampling used in real-world testing, that is of little consequence. It turns out it isn’t that simple, but it may be fine for a lot of testing purposes.
We could look at the point at which maximum memory (14Mpts) is used:
Timebase = memory depth / (sample rate x divs on screen)
Timebase = 14,000,000/ (1,000,000,000 x 14)
Timebase = 0.001secs = 1ms
So at a 1ms timebase, we can maintain 1GSa/s while filling memory. And what happens at a slower rate, say 2ms?
Memory Depth = 1,000,000,000 x 0.002 x 14
Memory Depth = 28,000,000pts = 28Mpts
But the scope only has 14Mpts, so at 2ms it must reduce its sample rate as memory and divisions are fixed; in fact, it halves its sample rate:
Sample Rate = 14,000,000 / (0.002 * 14)
Sample Rate = 500,000,000 = 500MSa/s
The same holds when a channel pair is in use, where memory drops to 7Mpts and the maximum sampling rate becomes 500MSa/s:
Timebase = 7,000,000 / (500,000,000 x 14)
Timebase = 0.001secs = 1ms
Thinking back to what I said about the Keysight, take the ‘equivalent’ InfiniiVision DSOX1204A, which has a specification of 2GSa/s per channel pair and 1Mpts. Its timebase to maintain maximum sample rate would be:
Timebase = 1,000,000 / (2,000,000,000 x 10 divs)
Timebase = 0.00005secs = 50us
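Rearranging the same formula gives the crossover timebase for any scope, which reproduces all three results above (again a sketch, with a helper name of my own; the division counts are the ones used above):

```python
def max_timebase_at_full_rate(max_rate_sa_s: float, max_depth_pts: float,
                              num_divs: int) -> float:
    """Slowest timebase at which the full sample rate still fits in memory."""
    return max_depth_pts / (max_rate_sa_s * num_divs)

print(max_timebase_at_full_rate(1e9, 14e6, 14))  # 0.001 s - Siglent, single channel
print(max_timebase_at_full_rate(5e8, 7e6, 14))   # 0.001 s - Siglent, channel pair
print(max_timebase_at_full_rate(2e9, 1e6, 10))   # 5e-05 s - Keysight DSOX1204A
```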
In comparison to the Siglent, you may think it’s a bit rubbish. But that’s only the case if your testing needs are that slow, or you need to capture lots of data to zoom down into. And that's the rub really: large memory is useful for capturing slow and drilling down, something that isn't as possible on the Keysight as it is on this scope, especially given the Keysight's higher sampling rate, which fills its smaller memory even faster.
Waveform Update Rate
The sampling specification relates to how many sample points on a waveform the scope can take; the waveform update rate refers to how many waveforms it can capture per second, in the Siglent’s case 100,000.
A scope will read a waveform, process it, then read the next waveform. Whilst it is processing, anything happening on the signal is lost - this is called Dead Time. So, the faster it can read waveforms, the better: imagine if a problem on the signal occurred whilst processing was going on - you’d never know until statistics caught up and you got lucky. There is an acquire mode - Sequence - that improves this to 400,000 per second by capturing waveforms into memory and not displaying until memory is full - in other words, it reduces the dead time in order to capture more waveforms.
I can calculate the dead time percentage for the scope at 10ns per div, real-time sampling, as:
DT% = 100 * (1 - Update rate x timebase x num of divs)
DT% = 100 * (1 - (100,000 x 0.00000001 x 14))
DT% = 98.6%.
Even at 400,000 waveforms / second, the DT% = 94.4%
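Here's that dead time formula as a helper (mine, for convenience), reproducing both figures:

```python
def dead_time_pct(update_rate_wfm_s: float, secs_per_div: float,
                  num_divs: int = 14) -> float:
    """Percentage of real time the scope is blind between acquisitions."""
    return 100 * (1 - update_rate_wfm_s * secs_per_div * num_divs)

print(dead_time_pct(100_000, 10e-9))  # 98.6 - normal mode at 10 ns/div
print(dead_time_pct(400_000, 10e-9))  # 94.4 - sequence mode at 10 ns/div
```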
Now that’s a useful value to know: there’s a really good chance that an anomaly is going to occur in that dead time. If it’s infrequent, by which I mean it doesn’t appear on all or the vast majority of waveforms, then the chances of capturing it are down to statistics. There’s a formula for that. The probability (P) of capturing an anomaly that occurs 5 times a second within 10 seconds (t) at 10ns across 14 divs is:
Pt = 100 x (1 - (1 - (5 x 0.00000001 x 14)) ^ (100,000 waveforms x 10))
Pt = 100 x (1 - (1 - 0.0000007) ^ 1,000,000)
Pt = 100 x (1 - 0.496585182)
Pt = 50.34% chance of capturing over 10 seconds; at 400,000 wfm/s it is 93.92%, so there’s a clear advantage here in switching to the Sequence acquisition mode.
(In this calculation I chose to assume I would wait 10 seconds for an anomaly to appear before moving on to testing something else - the longer the wait, the higher the probability of seeing it.)
So if I was to observe the scope for 10 seconds, using real-time sampling, there is a 50.34% chance that it would display the anomaly. This obviously changes according to the timebase and assumes that the waveform update rate is always 100,000 per second.
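For anyone wanting to run the numbers for their own scope and timebase, here's the probability calculation as a Python sketch (the helper name is my own):

```python
def capture_probability_pct(glitch_rate_hz: float, secs_per_div: float,
                            update_rate_wfm_s: float, observe_secs: float,
                            num_divs: int = 14) -> float:
    """Chance an infrequent anomaly lands in at least one captured window."""
    p_per_acquisition = glitch_rate_hz * secs_per_div * num_divs
    n_acquisitions = update_rate_wfm_s * observe_secs
    return 100 * (1 - (1 - p_per_acquisition) ** n_acquisitions)

# A glitch occurring 5 times a second, 10 ns/div, watched for 10 seconds:
print(capture_probability_pct(5, 10e-9, 100_000, 10))  # ~50.3 - normal mode
print(capture_probability_pct(5, 10e-9, 400_000, 10))  # ~93.9 - sequence mode
```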
Testing All This
So that's my understanding of these performance specifications, but I want to test reality. I don’t have the means to generate a signal higher than 30MHz, but I can generate a sine wave or square wave at that frequency. A 30MHz sine wave has a period of 33ns, which is well within the capability of this scope even with both channels of a pair in use, so I won't be able to confirm whether the scope switches into equivalent-time sampling (I picked 30MHz purely because it's the closest I can get to the scope's bandwidth.) Nonetheless, let’s see what I can do.
Memory Depth
This is the easiest to test so I’ll do it first. I calculated above that for a 10ns timebase at 1GSa/s the scope would be able to capture 140pts of data. And so it goes:
30MHz waveform at 10ns timebase and 140pts gives 140ns of captured data:
And at 1ns it should use:
Memory Depth = 1,000,000,000 x 0.000000001 x 14 = 14 points.
Giving me 14ns of data:
To get full memory usage, the timebase has to shift to 1ms:
That does give me 14ms of data as calculated above:
I can use that data to zoom in. Here is the captured data at a timebase of 5ns:
Compare the above to the image 3 back which shows this signal at 5ns with the waveform captured at 1ns.
So what does that tell me - remember that I originally thought memory was used differently? Firstly, I suppose, that it is conforming to its specification, but that doesn’t surprise me; ditto the maths: it's a simple multiplication across 14 divisions. Secondly, at faster timebases the memory depth of 14Mpts is irrelevant because it just isn’t used - and let’s face it, faster timebases are where we are likely to be looking. Having said that, I can slow the timebase down to, in this case, 1ms and capture a mass of data that I can zoom into, right down to a 1ns timebase as demonstrated in the last image above. Clearly the way to use memory is to capture slow and zoom in.
Just to re-iterate, here is the 1ms capture zoomed in to 10ns:
What might I be missing with this approach (capture slow, zoom in)? The first thing is obvious just looking at the image above: at 1ms the image is essentially a blur and of no use. I have to zoom in to see detail, and 14Mpts is a lot of data to sift through! It would seem that judicious use of triggering to identify a point of interest at a fast timebase is the way to go, slowing down for more data (just enough) once I know where and what I’m looking for. Apart from that, I'd say that taken on its own merit, having lots of memory is a good thing - but I'll soon see that it cannot be taken on its own merit!
Finally, just to prove a point, here is sampling and memory usage with both channels of a pair in use, thus halving the sample rate and available memory.
Sampling Rate
This scope doesn’t do equivalent-time sampling so there’s no testing for that. The memory depth tests above show that the scope maintains its sampling rate from its fastest timebase of 1ns right up to 1ms. What I can test is the effect of reducing the sampling rate on the displayed waveform. To do that, I’ll send in a 25MHz square wave sampled at 1GSa/s:
It may not be perfect, but it’s definitely a square wave when captured at maximum sampling rate - here shown at 10ns timebase.
Capturing it at 10MSa/s (a timebase of 100ms) gives us a (useless) view of the signal:
But there's worse to come - I’m capturing a full 14Mpts of data, but zooming into it I can get down to 1ms and I have no idea what it is showing me: what look like square waves, but reflected (not my setup, I don’t think - I’m connecting with 50Ohm terminators).
And I can get down to 50ns and see something that is completely incorrect:
It’s not possible to drill down further to a faster timebase, not that it would help.
Changing to an interpolation method of ‘X’ (i.e. join the dots with straight lines) doesn’t help either (I’ve re-captured the waveform at 100ms and zoomed in):
So without enough samples, the scope cannot properly reproduce the waveform. This is a fairly obvious statement (it ties in to the commonly stated sampling at 2x frequency, or 5x for square waves) but it drives home how important it is to understand exactly what the interplay between sampling rate and memory means for waveform accuracy - something I hadn’t really appreciated until now. Here I know I'm looking for a square wave so I know something has gone wrong, but that might not be the case for a device under test, where glitches and other anomalies are going to be missed.
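One way to see why the undersampled display turns to nonsense is aliasing: assuming an ideal sampler, any frequency component above half the sample rate folds back below it. A quick sketch (my own helper) shows that at 10MSa/s the 25MHz fundamental - and in fact every odd harmonic of the square wave - lands at exactly 5MHz, half the sample rate, where what you see depends entirely on where the samples happen to fall on the wave:

```python
def alias_freq_hz(f_hz: float, sample_rate_hz: float) -> float:
    """Apparent frequency of a tone after sampling (folded into 0..fs/2)."""
    f_folded = f_hz % sample_rate_hz
    return min(f_folded, sample_rate_hz - f_folded)

fs = 10e6
for harmonic in (1, 3, 5):  # a square wave is built from odd harmonics
    f = harmonic * 25e6
    print(f"{f / 1e6:.0f} MHz component appears at {alias_freq_hz(f, fs) / 1e6:.0f} MHz")
```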
Waveform Update Rate
The specification states that the scope can capture up to 100,000 waveforms per second. Like all good marketing statements, the phrase ‘up to’ is inserted, which one never focuses on because of the big number next to it. Rather like how the advertised sale for that jumper you want at up to 70% off really means a reduction from 0% (highly likely) to 70% (on the item in bright yellow with purple dots, sized for a sumo wrestler.)
I was wondering how I could test this - perhaps by creating an arbitrary wave with a glitch that only appears once across the waveform and seeing how quickly the scope found it. The thing is, I already know the answer from the calculations above, and it’s down to probability, so I don’t think it would be particularly revealing. The fact that dead time is so high is revealing enough, and it makes me realise that even expensive scopes with very high update rates are still going to have a very large dead time - something, when I think about it, that an analog scope wouldn’t suffer from.
However, there is something I think I can do: reading through the manual, the scope has a trigger out which is used for pass/fail testing or as an indication of when the scope triggers. Waveforms are captured when triggering occurs, so if I count the triggers I can see the update rate. Makes sense in my head, but if I’m wrong then hopefully someone will point it out in a comment below and I’ll update this.
I don’t have a counter of any sort, but this is just a per-second count, so if I send in a sine wave and have it trigger, and put the scope in Normal mode so that it keeps triggering and displaying, then the trigger out is just a frequency measurement (counts per second = Hz)? Again, if I’m wrong hopefully someone will point it out below. I can hook my DMM onto the trigger out, get it to display frequency and see what I see.
Previously, I’ve done a lot of measurements at 10ns, so I’ll inject a 1MHz sine wave with the timebase set at that, and I measure 15.995kHz! Wow, that’s a long way from 100kHz! Changing the timebase, I can see:
1ns = 11.190kHz = 11,190wfm/s
2ns = 13.478kHz = 13,478wfm/s
5ns = 19.751kHz = 19,751wfm/s
10ns = 15.995kHz = 15,995 wfm/s
20ns = 16.194kHz = 16,194 wfm/s
50ns = 21.592kHz = 21,592 wfm/s
1us = 6.8088kHz = 6,808.8 wfm/s
5us = scope is flagging triggered, but DMM is showing 0Hz
For that 1MHz signal I can’t measure a trigger count at anything slower than 1us. A 1kHz signal at 2us = 2.996kHz = 2,996wfm/s.
These are a long way from 100,000 waveforms a second. Putting my thinking head on, what I need to do is make sure that the scope is doing as little as possible apart from capturing waveforms: no measuring, normal triggering (no averaging etc.), no networking, no web serving, only 1 channel. The results above were already taken with all that in place, so it isn’t enough.
Turning interpolation off and just joining the dots (X mode rather than sin(x)/x): 1MHz @ 50ns = 21.617kHz = 21,617wfm/s. So I tried dots mode rather than vector mode - the scope isn’t actually drawing the wave on the display, just placing dots - at 50ns, as that gave the highest update rate above:
1MHz @ 50ns = 118.06kHz = 118,060wfm/s. So that’s just over the specification of 100,000 waveforms per second. 1MHz @ 100ns = 19.203kHz = 19,203wfm/s.
Clearly, there’s something going on here that again I haven’t appreciated. Assuming my test approach was valid, then the waveforms per second are extremely reliant on scope settings and incoming signal and that figure quoted is exactly like the yellow jumper with purple dots in sumo size.
That also means Dead Time and probability of viewing is highly reliant on settings and signal.
So, for a 1MHz sine wave and the same ‘glitch’ as above (occurs 5 times a second within 10 seconds), the calculations give me:
Timebase | wfm/s (high is better) | DT% (low is better) | Probability (high is better) |
---|---|---|---|
1 ns | 11,190 | 99.98% | 0.78% |
2 ns | 13,478 | 99.96% | 1.87% |
5 ns | 19,751 | 99.86% | 6.68% |
10 ns | 15,995 | 99.77% | 10.59% |
20 ns | 16,194 | 99.55% | 20.29% |
50 ns | 21,592 | 98.49% | 53.03% |
1 us | 6,808 | 90.47% | 99.15% |
50ns, dot mode | 118,060 | 91.74% | 98.39% |
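The table is just the two earlier formulas applied to the measured update rates; a few lines of Python reproduce any row of it (helper names are mine):

```python
def dt_and_probability(update_rate_wfm_s: float, secs_per_div: float,
                       glitch_rate_hz: float = 5, observe_secs: float = 10,
                       num_divs: int = 14) -> tuple:
    """Dead time % and capture probability % from a measured update rate."""
    window = secs_per_div * num_divs
    dt_pct = 100 * (1 - update_rate_wfm_s * window)
    p_pct = 100 * (1 - (1 - glitch_rate_hz * window)
                   ** (update_rate_wfm_s * observe_secs))
    return dt_pct, p_pct

for label, tb, wfm in [("1 ns", 1e-9, 11_190), ("50 ns", 50e-9, 21_592),
                       ("1 us", 1e-6, 6_808)]:
    dt, p = dt_and_probability(wfm, tb)
    print(f"{label}: DT = {dt:.2f}%, P = {p:.2f}%")
```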
EDIT: 10/02/2020 - The experiment below is rubbish and I wasn't thinking straight. See the comments where I'm put right and I redo the experiment. Subsequent to the test in the comment, I ran a further one where I was sending in a fast signal on channel 1, and triggering on it, and a signal with infrequent, narrow pulses on channel 2. Whilst the signal on channel 1 was accurately displayed, nothing was displayed for channel 2 (to be expected); turning on the persistence feature and waiting eventually showed the pulse on the screen - that gives an indication of how to trigger for these infrequent events, admittedly easy if it's a pulse, but the principle is the same. I tried different timeframes of persistence, but infinite persistence and 30 secs persistence worked best.
After doing this, I realised I could use a pulse waveform at a low frequency to do a quick check. So with a 2Vpp, 100mHz pulse signal with a width of 32.6ns (the smallest I could go to) and a 100ns timebase, then, subjectively timed with my watch, it took the scope 5/10/10/9/5 seconds to trigger on it over 5 tests. At 10mHz, it took the scope 65 seconds to trigger - I couldn’t be bothered to run more tests on that one.
I then put it into dot mode at a 50ns timebase - the fastest update according to the table above - and re-ran: the 100mHz pulse was triggered after 8/10/10 seconds; the 10mHz pulse was triggered after 10/65/90 seconds (shows how probability can sometimes work in your favour!)
In Sequence mode (2 segments, dot mode, 50ns) there was no appreciable difference in display times. Note that in this mode nothing is displayed until the 2 segments are filled, but the scope does display a trigger count so you can see progress.
Similarly, changing trigger from rising edge to Pulse made no difference - I didn’t really expect it to as this trigger mode is more useful in triggering on pulses versus, say, glitches or waveform artefacts. Still, worth a try.
How long would you wait? It's not a comprehensive test, but it would seem that a slower timebase improves the chances of finding something, even with a smaller update rate, as long as the DT% is taken into account rather than the raw waveform update rate (see, for example, the 5ns and 10ns/20ns results.) Obviously, I wouldn't sit there with my calculator out working the numbers every time I wanted to test something, but it's clear that some understanding of the expected waveform and the potential anomalies that might arise is needed to set the scope up.
Conclusion
I’m looking at understanding my scope a bit better and it’s clear to me I had little appreciation of what the specifications mean in reality. I’d seen sample rates change as the timebase changed but hadn’t really understood why, just accepted it as I couldn’t alter it and I was more focussed on the test in hand. Similarly, I’d just accepted the waveform update rate as a figure: 100,000 is pretty good against, say, the Keysight at 50,000. This is clearly nonsense as the circumstances have to be very precise to achieve that (assuming I was testing correctly) and it could be that the way the Keysight works gives a much better set of results here - it has dedicated ASICs after all. I also understand when and why to use Sequence mode for waveform capture although it didn’t seem to make a lot of difference over normal acquisition mode.
This has been a real eye-opener and I’m glad I spent a couple of days looking at it. An interesting intellectual exercise as well, because I had to think about what was going on so I could adjust accordingly. I just have to look at the dead time % I calculated to understand what issues are likely to arise at fast timebases - not only is just a fraction of the signal being captured, there’s also very little data to zoom into. High memory specs are useful, but only in the right circumstances.
Obviously, the actual sample rate, memory depth and waveform update rate all play a part in displaying an accurate waveform and capturing anomalies, and they all interplay with each other. Knowing how they do that is fundamental. I'd take a guess that the grizzled old timers reading this have their heads in their hands at this point. I’m not going to be too hard on myself: it is the first scope I’ve ever owned or used and I certainly didn’t want to spend over £1,000 on one. I still believe it’s a good scope for the money - I’ve even used it to track down a problem on the Bombe when the Tektronix we had couldn’t do it. This isn’t the end of my investigating either.
If I’ve made errors in any of the above, please let me know below, otherwise I'll continue blindly on; also, if there are other things worth me trying, let me know and I’ll have a go and report back. Hopefully, anyone else new to scopes who comes across this will find it of use when looking at their own or considering what to buy.