Application notes

Understanding TFA interpolation and jitter

The Time and Frequency Analyzer (TFA) from Liquid Instruments is a flexible and powerful instrument. It can be used as a time tagger, frequency counter, event counter, event timer, and more. The TFA runs on Moku software-defined instrumentation, so it is easily accessible and seamlessly integrated into the Moku suite of instruments: pair it with an Oscilloscope to view and debug timing edges, or with a PID controller for closed-loop stabilization of count rates, to name just two examples.

Running the TFA on Moku means that it relies on the Moku high-speed sampling front end, rather than the bespoke time-to-digital converter front ends used in some dedicated timing devices. A sampled architecture provides lower dead time and more flexibility, but can require more careful preconditioning of the input signal.

In this note, we will look at the TFA’s interpolation modes and noise characteristics to understand how to get the best possible measurement accuracy in your experiment.

Instrument basics

The TFA instrument works very simply: it records the time of arrival of a series of edges. The edge detectors at the front of the TFA can be configured with the edge polarity (i.e. rising or falling) and the voltage threshold at which the edge is said to have occurred. The edge detectors take as input samples from the Moku analog-to-digital converters (ADCs), and are able to provide sub-sample estimates of edge arrival time by interpolation (see below).

The timestamps from the edge detectors can be logged directly, or they can be used as input to one or more interval analyzers. The interval analyzers compute statistics on the difference between timestamps. For example, the period between events can be used to calculate the frequency in a frequency counter application.
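As a rough sketch of the idea (not the instrument’s internal implementation), the Python snippet below computes frequency-counter-style statistics from an array of timestamps; the signal parameters are illustrative assumptions:

```python
import numpy as np

def interval_statistics(timestamps):
    """Compute frequency-counter-style statistics from sorted edge
    timestamps (in seconds), by working on differences between them."""
    intervals = np.diff(timestamps)                 # period between consecutive events
    return {
        "mean_period": intervals.mean(),
        "mean_frequency": 1.0 / intervals.mean(),   # frequency estimate
        "period_std": intervals.std(ddof=1),        # spread of the periods
    }

# Example: a nominal 10 MHz signal with 20 ps RMS timing jitter (illustrative numbers)
rng = np.random.default_rng(0)
ideal = np.arange(10_000) * 100e-9                  # ideal edges, 100 ns period
stats = interval_statistics(ideal + rng.normal(0, 20e-12, ideal.size))
print(f"{stats['mean_frequency'] / 1e6:.4f} MHz")   # ~10 MHz
```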

Even though users will most likely interact with the interval analyzers and the statistics they produce, it’s important to know that it’s the edge detectors that are responsible for producing timestamps accurate to a few tens of picoseconds. The rest of this note will examine in detail how they’re able to achieve this.

Jitter, resolution and accuracy

The Time and Frequency Analyzer’s performance is characterized by three related parameters: measurement jitter, measurement accuracy, and digital resolution.

The jitter is a measure of how much variation you would expect in repeated measurements of the same edge arrival time. It is usually given as the standard deviation of the distribution of a large number of repeated measurements, which makes it independent of any bias (i.e., a non-zero mean error); and while not strictly required, it’s often implicitly assumed that the distribution is Gaussian. The accuracy is a measure of how close any single measurement is to the true value (sometimes explicitly called the “single-shot” accuracy), and may be given as an RMS error or mean-absolute error. In the specific case of an unbiased measurement, the RMS accuracy and the jitter are the same; as the bias grows, the accuracy degrades but the jitter does not.
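The distinction between jitter and accuracy is easy to see numerically. A minimal sketch, using illustrative numbers rather than Moku specifications:

```python
import numpy as np

rng = np.random.default_rng(1)
true_time = 0.0                                    # the "real" edge arrival time
bias = 30e-12                                      # hypothetical 30 ps systematic offset
measurements = true_time + bias + rng.normal(0, 20e-12, 100_000)

errors = measurements - true_time
jitter = measurements.std(ddof=1)                  # spread only: insensitive to bias
rms_accuracy = np.sqrt(np.mean(errors**2))         # grows as the bias grows

print(f"jitter       = {jitter * 1e12:.1f} ps")        # ~20 ps
print(f"RMS accuracy = {rms_accuracy * 1e12:.1f} ps")  # ~sqrt(20^2 + 30^2) ≈ 36 ps
```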

Resolution, or more correctly the digital resolution, is the smallest difference between two measurements that the instrument can report. So long as this number is substantially smaller than the jitter (or accuracy), resolution does not limit performance. It’s easy to add more and more digital resolution, but if the jitter is not improved then all those extra digits contain only noise and give you no more information about the signal.

The Moku TFA has a digital resolution generally around 20x finer than the jitter, and the measurement is unbiased. As such, digital resolution is not a performance limitation, and the accuracy and jitter measures are the same.

The jitter, accuracy, and resolution of a timing histogram. Jitter is the standard deviation of the measurement distribution and is independent of bias. Accuracy is the RMS error of the measurements relative to the true value. Resolution is the smallest representable change in measurement value and is rarely the limiting factor for performance. For the purposes of illustration, the histogram is biased by more than would be typical for a real instrument.

Interpolation modes

As discussed above, the Moku Time and Frequency Analyzer uses a sampled front end. The Moku:Pro samples at 1.25 GSa/s by default, giving a sample period of 800 picoseconds. Without any further processing, this 800 picosecond value would be the digital resolution and, being coarser than the underlying jitter, would be the performance-limiting factor. The Moku TFA improves on this by using an interpolation algorithm to form a sub-sample estimate of the point at which the signal crossed the threshold. The TFA’s linear interpolation works very well for a wide variety of signals, and is able to improve timing performance by around 100x compared to no interpolation.
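Conceptually, the interpolation is straightforward: find the two samples that bracket the threshold and place the crossing on the straight line between them. A minimal sketch of linear interpolation in general (not the TFA’s internal code):

```python
import numpy as np

def interpolated_crossing(samples, sample_period, threshold):
    """Estimate the time of the first rising threshold crossing by linear
    interpolation between the two samples that bracket the threshold."""
    below = samples < threshold
    idx = np.flatnonzero(below[:-1] & ~below[1:])[0]   # last sample below threshold
    v0, v1 = samples[idx], samples[idx + 1]
    frac = (threshold - v0) / (v1 - v0)                # sub-sample fraction in [0, 1)
    return (idx + frac) * sample_period

# A linear edge sampled every 800 ps, truly crossing 0 V at 3.4 ns
t = np.arange(10) * 800e-12
samples = (t - 3.4e-9) * 1e9                           # volts, linear ramp through zero
print(interpolated_crossing(samples, 800e-12, 0.0))    # ≈ 3.4e-09
```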

Linear interpolation makes one very important assumption: that the rising edge is linear between the sample points on either side of the threshold crossing. This introduces two related requirements on the rise time of your signal (assuming a rising-edge trigger):

  1. That the rising edge is slow enough compared to the sample rate that there are several samples on the edge itself, and
  2. That the rising edge is slow enough compared to the bandwidth of the Moku that the step response of the Moku front end isn’t limiting the rise time

This second point ensures that the signal is rising approximately linearly as it crosses the threshold, rather than, for example, sitting in the nonlinear portion of an exponential settling. It also means that for best results, you should choose a threshold value where the signal is approximately linear. For example, if you’re looking at the timing of a sine wave in a frequency counter application, the threshold should be set to the mean value of the sine wave (usually the zero crossing), where the slope is steepest and the waveform most linear.
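To see why the mean value is the right choice for a sine wave, note that the slope of A·sin(2πft) is greatest at its zero crossing. A small sketch with assumed signal parameters:

```python
import numpy as np

# Slew rate of an assumed 10 MHz, 1 V amplitude sine at different thresholds.
# The derivative of A*sin(2*pi*f*t) where the signal crosses level V is
# 2*pi*f*A*cos(arcsin(V/A)), which is maximized at V = 0 (the mean value).
f, A = 10e6, 1.0
for threshold in (0.0, 0.5, 0.9):
    slew = 2 * np.pi * f * A * np.cos(np.arcsin(threshold / A))  # V/s
    print(f"threshold {threshold:.1f} V -> slew rate {slew / 1e6:5.1f} V/us")
```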

An example is shown below for a step input with three different edge speeds. For the fastest edge (left), the actual signal (dark) can move by as much as a whole sample period without affecting the values actually sampled by the ADC, and therefore without changing the interpolated rising edge (light blue). On the other hand, with an edge rate slow enough that there are always sample points on the edge on either side of the threshold (right), the actual and interpolated edges cross the threshold at exactly the same point and the edge position is recorded perfectly. In the transition region between the two extremes, only one or two sample points land on the rising edge, so the interpolated value is biased towards the point halfway between sample times, but not limited to it.

Another way to look at this is to examine how the distribution of edge times changes as the edge rate increases. For slow edges, the distribution of interpolated times matches the real values very closely. As the edge rate speeds up, the distribution starts to bias towards the point halfway between samples (as seen in the left-most illustration above). This simple simulation starts to show bias as the number of sample points per edge drops below two.
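A simulation along these lines is easy to reproduce. The sketch below uses our own simplified model, with the edge idealized as a clipped linear ramp and all times in units of the sample period:

```python
import numpy as np

def interpolation_error(rise_time, n_trials=20_000, seed=2):
    """Monte Carlo sketch: linear-interpolation error vs edge rise time.

    Times are in units of the sample period. The edge is idealized as a
    linear ramp clipped to [0, 1], thresholded at 0.5, with its true
    crossing time uniformly distributed over one sample period."""
    rng = np.random.default_rng(seed)
    t0 = 5.0 + rng.uniform(0, 1, n_trials)                 # true crossing times
    k = np.arange(12)                                      # sample instants
    v = np.clip((k - t0[:, None]) / rise_time + 0.5, 0.0, 1.0)
    below = v < 0.5
    idx = (below[:, :-1] & ~below[:, 1:]).argmax(axis=1)   # last sample below threshold
    rows = np.arange(n_trials)
    v0, v1 = v[rows, idx], v[rows, idx + 1]
    estimate = idx + (0.5 - v0) / (v1 - v0)                # interpolated crossing
    return estimate - t0

for rise in (0.5, 1.0, 2.0, 4.0):
    err = interpolation_error(rise)
    print(f"rise time {rise:>3} samples: std of error = {err.std():.3f} samples")
```

In this idealized, noise-free model the error collapses to zero once there are roughly two or more samples on the edge; a real measurement would instead bottom out at the instrument’s intrinsic jitter, which is the subject of the next section.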

As we can see, the TFA’s linear interpolation is able to capture edge times that are jitter-limited (rather than resolution-limited), so long as the assumption that the rising edge is linear around the threshold holds. As the edge rate increases beyond that point, the measurement gets worse: the distribution of edge times biases towards a discrete set of values spaced by the sample period, the accuracy degrades, and the jitter begins to depend on where the edge arrives relative to the sample times.

Optimizing your measurement

If the interpolation algorithm is required to get high resolution, and works best with slower edge rates, why wouldn’t you just continue to slow the edge down indefinitely? The TFA, like all timing instruments, has an intrinsic timing jitter, set by the performance of the internal oscillators and clocking infrastructure, and an intrinsic amplitude noise, set by the performance of the components in the analog signal chain. As the edge rate gets slower, the amplitude noise moves the threshold-crossing point in time by a greater amount.

In the illustration below, the three signals have the same amount of amplitude noise, and the histogram of the threshold-crossing time is plotted in blue beneath each one. When the edge rate is fast, the amplitude noise causes no spread of the measurement distribution at all, while slower edge rates convert that amplitude noise into visible timing jitter.

The rise time affects how much the amplitude noise of a signal couples into the timing. For fast edges, the threshold crossing doesn’t move at all.
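This coupling follows a simple rule of thumb: timing jitter ≈ amplitude noise divided by the slew rate at the threshold. A quick sketch, with assumed (illustrative) noise and signal levels rather than Moku specifications:

```python
# Timing jitter from amplitude noise: sigma_t ≈ sigma_v / (dV/dt) at the threshold
amplitude_noise = 1e-3                 # 1 mV RMS amplitude noise (assumed)
swing = 1.0                            # 1 V edge height (assumed)

for rise_time in (1e-9, 6e-9, 100e-9):
    slew_rate = swing / rise_time                    # V/s for a linear edge
    timing_jitter = amplitude_noise / slew_rate      # seconds RMS
    print(f"rise {rise_time * 1e9:>5.0f} ns -> jitter {timing_jitter * 1e12:6.1f} ps")
```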

We now have two competing requirements: the edge must be fast, to couple the minimum amount of amplitude noise into the threshold-crossing time, and slow, to preserve the assumption of linearity that’s required to interpolate between points on the Moku TFA’s sampled front end. Getting the best possible measurement then requires a “Goldilocks” edge rate: not too fast and not too slow. This is characterized for each Moku hardware platform and given in the Moku TFA specification sheet as the “optimum” rise time. For example, Moku:Pro has an optimum rise time of 6 ns, or around 7.5 sample periods: enough to guarantee that the edge is sampled in the linear region around the trigger point.

If you don’t have direct control over the edge rate of your signal, the best way to meet the optimum rise time is to pass the signal through a low-pass filter before it goes into the Moku. The conversion between rise time and filter bandwidth is approximately BW ≈ 0.35 / t_rise (assuming a first-order Butterworth filter); for example, to meet the optimum rise time for Moku:Pro of 6 ns, a 60 MHz low-pass filter is recommended.
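In code, the conversion is a one-liner; a small helper assuming the first-order approximation above:

```python
def lowpass_bandwidth(rise_time_s):
    """-3 dB bandwidth of a first-order low-pass that produces a given
    10-90% rise time, using the standard approximation BW ≈ 0.35 / t_rise."""
    return 0.35 / rise_time_s

# Moku:Pro optimum rise time of 6 ns -> ~58 MHz, so a 60 MHz filter is a good fit
print(f"{lowpass_bandwidth(6e-9) / 1e6:.0f} MHz")
```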

It’s natural to wonder whether the filter changes the timing characteristics of the signal that you’re trying to measure. It does, in that it introduces group delay on the signal; but it’s the same delay for every edge. Given that Moku timestamps are relative to an arbitrary origin time, moving all measurements forward or back by the same amount has no overall effect. Another way to think of this is like using a slightly longer cable: what matters is that all inputs are matched, not the absolute length of the cables.

Conclusion

The Moku Time and Frequency Analyzer is a powerful tool for the analysis of event times and their statistics. The TFA shares its analog front end with the rest of the Moku suite of software-defined instrumentation and, along with flexibility, this sampled architecture brings a greater dependence on edge rates when optimizing your measurement. The signal can:

  • Not be interpolated at all: this is robust, but limits the performance of the instrument to the sample period (800 ps on Moku:Pro)
  • Rise too quickly: the single-shot measurement accuracy may be similar to the optimal case, but the value becomes biased
  • Rise too slowly: the amplitude noise of the analog front end couples into the threshold-crossing time and introduces extra timing jitter

The good news is that it’s easy to condition your edge for excellent TFA performance. The Moku specification sheet gives optimal rise time values that minimize jitter while preventing bias, and these rise times can be met by the simple expedient of adding a low-pass filter in series with your signal.