Introduction
Classical and quantum light sources have broad uses, from quantum optics and quantum computing to classical fields such as laser interferometry. Light sources are often categorized by their statistical emission properties: whether they are coherent, thermal, chaotic, or quantum. Coherent light sources, such as lasers, have stable phases and Poissonian fluctuations in the number of emitted photons. Thermal light sources, such as light bulb filaments or stars, produce light in bursts from large numbers of uncorrelated emitters. Quantum sources, such as squeezed light or single-photon emitters, exhibit ‘antibunching,’ where the probability of detecting a photon is lowest immediately after a previous detection. This is an inherently quantum phenomenon that no classical light source can produce.
For quantum applications, it is often essential to characterize a light source’s statistical emission properties to verify that it is genuinely quantum. Similarly, an ideal laser should be limited only by the Poissonian statistics that govern its photon number. One way to determine these statistical properties of light emission is through the \(g^{(2)}(\tau)\) correlation function, a widely used metric in classical and quantum optics.
In this application note, we demonstrate how \(g^{(2)}(\tau)\) can be measured using Moku and provide some examples with a Hanbury-Brown-Twiss-like interferometer. Using the Moku Time & Frequency Analyzer’s ability to capture continuous event data with <20 ps resolution, we simulate a HBT setup and record time-tagged photon events. We then walk through the procedure for generating a time-delay histogram and using it to calculate the second order correlation function.
What is the second-order correlation function?
The second-order correlation function, \(g^{(2)}(\tau)\), measures how likely two photons are to be detected with a time separation \(\tau\), relative to the average rate of detected events over a measurement span \(T\). The value at \(\tau=0\) is of particular importance, as it reveals how many concurrent photons the light source emits. For example:
If \(g^{(2)}(0)>1\), then the light source tends to emit photons in groups. This is called bunching behavior and is usually indicative of a thermal light source. For example, \(g^{(2)}(\tau=0)=2\) means that there are twice as many events at time delays close to zero compared to the average number of events separated by any time delay \(\tau\) within some length of time \(T\).
If \(g^{(2)}(0)<1\), then the light source tends to emit photons at regular intervals, with very little chance of emitting more than one at a time. This is called antibunching behavior and is a desirable trait for single-photon sources.
If \(g^{(2)}(0)=1\), then there is no correlation between emitted photons. This is usually true of a coherent light source, such as a laser.
A simulation of these three cases can be seen in Fig. 1 below.

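The qualitative difference between these three cases can be sketched numerically. The following is an illustrative simulation only (the function names and the simplified emission models are ours, not part of the measurement procedure): Poissonian arrivals for a coherent source, a minimum waiting time for an antibunched source, and short bursts for a thermal-like source.

```python
import numpy as np

rng = np.random.default_rng(0)

def coherent_stream(rate, T):
    """Coherent source: Poissonian arrivals, g2(0) = 1."""
    n = rng.poisson(rate * T)
    return np.sort(rng.uniform(0.0, T, n))

def antibunched_stream(rate, T, dead_time):
    """Antibunched source: enforce a minimum gap between photons, g2(0) < 1."""
    waits = dead_time + rng.exponential(1.0 / rate, int(2 * rate * T))
    t = np.cumsum(waits)
    return t[t < T]

def bunched_stream(rate, T, burst_size=3, burst_width=1e-8):
    """Thermal-like source: photons arrive in short bursts, g2(0) > 1."""
    centers = coherent_stream(rate / burst_size, T)
    t = np.repeat(centers, burst_size) + rng.exponential(
        burst_width, burst_size * len(centers))
    return np.sort(t[t < T])
```

Feeding these synthetic timestamp streams through the analysis later in this note reproduces the qualitative behavior of Fig. 1.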
To demonstrate how to use and compute \(g^{(2)}(\tau)\), we introduce the Hanbury Brown–Twiss (HBT) effect. In an HBT experiment (see Fig. 2), the source under test emits a stream of photons into a 50/50 beam splitter, which directs the individual photons to one of two optical paths. Two single-photon detectors (arbitrarily labeled “A” and “B”) monitor the arrival of photons traveling each of these arms. Any optical path difference between the two arms will result in a temporal offset between the arrival of photons at the two detectors. If a detector records a photon event, it generates an electrical pulse, which it passes to the time interval analyzer (TIA). The function of the TIA is to accurately timestamp and record photon events at both A and B for later postprocessing. As we will derive in the next section, the time correlations between events A and B will determine the value of the second-order correlation function.

Quantifying the second order correlation function and coincidence rate
The second-order correlation function \(g^{(2)}(\tau)\) is given by the time-averaged product of photon detection counts:
\(g^{(2)}(\tau) = \frac{\langle n_A(t)\, n_B(t+\tau) \rangle}{ \langle n_A(t) \rangle \langle n_B(t) \rangle} \)
where \(n_A(t)\) and \(n_B(t)\) are the rates of detected photon events at detectors A and B, respectively. Quantities inside \(\langle \cdot \rangle\) are time averages, i.e.,
\(\langle n_A(t) n_B(t+\tau) \rangle = \lim_{T\to\infty} \frac{1}{T}\int_0^T n_A(t) n_B(t+\tau) dt \).
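In practice, the limit is replaced by a finite measurement record. For count rates binned on a uniform time grid, the average can be estimated with a simple shifted product (a sketch; the function name and binning convention are ours):

```python
import numpy as np

def time_avg_product(nA, nB, lag):
    """Discrete estimate of <n_A(t) n_B(t + lag)> from binned count rates.

    nA, nB: arrays of count rates per time bin; lag: offset in bins.
    """
    if lag == 0:
        return np.mean(nA * nB)
    # Shift B forward by `lag` bins and average the overlapping region.
    return np.mean(nA[:-lag] * nB[lag:])
```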
Having established a quantitative definition of \(g^{(2)}(\tau)\), let us now discuss how to determine \(\langle n_A(t) n_B(t+\tau) \rangle\). This is the time-averaged coincidence rate, or the probability density of finding a photon at time \(t\) and another at time \(t +\tau\), regardless of where the first event came from. This means that an initial event at A could be followed by a first, second, or third event at B in a cascade. All of these events must be considered when calculating the coincidence rate. The resulting \(\tau\) values are then binned and normalized to yield an estimate of \(\langle n_A(t) n_B(t+\tau) \rangle\).
Measuring the time delay histogram
A key part of HBT measurements is the time interval analyzer, which collects the photon events and generates a histogram of time intervals. The Moku Time & Frequency Analyzer can be configured to perform HBT measurements, as seen in Fig. 3. In this setup, single-photon detectors (SPDs) A and B are connected to Inputs A and B of Moku. The Time & Frequency Analyzer can be configured to detect the output pulses from the SPDs, with Input A acting as the “start” and Input B acting as the “stop.” For more details on how to configure the Moku for HBT measurement, refer to our configuration guide on the topic.

During the measurement sequence, the Time & Frequency Analyzer records the delay time between successive A and B events, from which a number density of delays between pairs of detected photons is constructed in real time. It is important to note that the real-time histogram generated by the Time & Frequency Analyzer only measures first-step events, so some postprocessing is required to calculate \(g^{(2)}(\tau)\). Fortunately, the distribution of time interval data can be converted into an approximation of \(g^{(2)}(\tau)\) through a process called self-convolution, discussed in the next section.
Analyzing the interval data
Assume a histogram of interval times captured by the Time & Frequency Analyzer. We call this distribution \(k(\tau)\), which records the time delay between each photon and the first subsequent photon in the other stream. In pseudo-code, we can imagine it written:
for A_time in A:
    find the first B_time after A_time
    tau = B_time - A_time

As we stated in the previous section, \(k(\tau)\) is a probability density function for first-step delays only. While \(k(\tau)\) ostensibly resembles \(\langle n_A(t) n_B(t+\tau) \rangle\), it only includes first-order contributions and excludes photons caused by longer cascades. For example, photons arriving at times \(t + \tau_1 + \tau_2\) or \(t + \tau_1 + \tau_2 + \tau_3\) are not considered.
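For concreteness, this first-step search can be implemented efficiently with NumPy. The sketch below assumes `A` and `B` are arrays of timestamps in seconds; the function name is ours.

```python
import numpy as np

def first_step_delays(A, B):
    """Delay from each A event to the first subsequent B event."""
    B = np.sort(B)
    idx = np.searchsorted(B, A, side="right")  # first B strictly after each A
    valid = idx < len(B)                       # drop A events with no later B
    return B[idx[valid]] - np.asarray(A)[valid]
```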
For this reason, \(k(\tau)\) is not typically used to calculate \(g^{(2)}(\tau)\) directly. Instead, one must manually count each explicit coincidence between events A and B, up to some maximum time. In pseudo-code, the process is:
for A_time in A:
    find all B_time that occur after A_time, within some t_max
    for each B_time:
        tau = B_time - A_time

These \(\tau\) values are then binned to create a histogram and normalized so that \(g^{(2)}(\tau) = 1\) corresponds to no correlation at that delay. While direct and intuitive, this method scales poorly because every event at A must be compared to every event at B. This leads to \(O(N_A \times N_B)\) operations, which can quickly grow out of control if the rate of events is high. For this reason, this method is difficult for hardware to implement in real time.
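A vectorized version of this exhaustive search might look like the following (a sketch; the function name is ours). Sorting B and using binary searches locates each A event's coincidence window in \(O(\log N_B)\) rather than scanning all of B, although the output still grows with the number of coincidences.

```python
import numpy as np

def all_delays(A, B, t_max):
    """All delays from each A event to every later B event within t_max."""
    A = np.asarray(A)
    B = np.sort(B)
    lo = np.searchsorted(B, A, side="right")          # first B after each A
    hi = np.searchsorted(B, A + t_max, side="right")  # last B within A + t_max
    return np.concatenate([B[l:h] - a for a, l, h in zip(A, lo, hi)])
```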
An alternative method of calculating \(g^{(2)}(\tau)\) involves performing higher-order self-convolutions of \(k(\tau)\), represented by the quantity \(k^{(n)}(\tau)\). Each \(k^{(n)}(\tau)\) describes the probability density that a photon occurs at time \(\tau\), given that it is the nth photon in a cascade. In other words, it corresponds to the delay distribution from the initial event to a descendant n photons later. The total probability density \(G(\tau)\) of observing a photon at time \(\tau\) after the initial trigger, regardless of which generation it came from, is proportional to the infinite sum [1]:
\(G(\tau) \propto \sum_{n=1} ^\infty k^{(n)}(\tau)\).
Experimentally verifying \(g^{(2)}(\tau)\) with the Moku Time & Frequency Analyzer
To simulate an HBT-like setup, we use an all-electrical setup with Moku:Pro. Making use of Multi-Instrument Mode, we partition the Moku FPGA into four instrument slots, each of which emulates part of an HBT measurement. See Fig. 4 for details and a connection map.

First, we configure the Waveform Generator, which will act as the pulsed laser source. From the configuration screen, we enable Output A and select a square wave output with repetition rate 50 MHz and duty cycle of 10%. We offset the output by 500 mV so that the pulse ranges from 0 to 1 V. We also enable Output B, selecting the “noise” function, and set the range to 1 V. See Fig. 5 for the Waveform Generator configuration.

Second, we use Moku Custom Instrument with custom VHDL code to mimic the beamsplitter and photon detectors. The bitstream, available from our Github, receives two inputs, A and B. Input A is the square wave generated by the Waveform Generator, and Input B is a noise signal. Every ten clock cycles, the beamsplitter samples the noise input and determines its sign. The code then directs Input A to either Output A or Output B based on the sign of the noise signal.
Third, we configure the Time & Frequency Analyzer to detect “photon” events occurring on inputs A and B. From the Events tab, we set the event threshold to 500 mV on a rising edge, and then measure the time interval beginning with Event A and ending with Event B. The repetition rate is high, so we can measure in Windowed Mode, with a measurement interval of 10 ms. Finally, we set up the Oscilloscope in the fourth slot to observe the output from Moku Custom Instrument. After setting up all instruments and enabling the output from the Waveform Generator, we see the output reflected on the Oscilloscope as in Fig. 6. Both channels A and B should have “event” pulses randomly distributed between them, but never overlapping.

Returning to the Time & Frequency Analyzer, we likewise start to see these events populate a histogram. See Fig. 7 for an example. As established in the previous section, this histogram only reflects the time interval between nearest events and does not account for 2nd- or 3rd-order cascades. To generate \(g^{(2)}(\tau)\), we can log the raw timestamp data to a host PC by disabling the histogram and clicking on the data logger icon in the bottom right. Set the start trigger and collection time, then click the red button to begin logging. After logging, click the cloud icon, then select the most recent data file. Make sure the format is set to “NumPy,” then send it to your destination of choice. In the next section, we will use a Python script to calculate \(g^{(2)}(\tau)\).

Calculating \(g^{(2)} (\tau)\) with Python using timestamp data
We will now use the timestamp data logged by the Time & Frequency Analyzer to estimate \(g^{(2)}(\tau)\). Our first step, after making standard package imports, is to load the data into two arrays, A_times and B_times. We then calculate the next B event occurring after each A event, giving us the first-order interval information – the same as plotted by the Moku Time & Frequency Analyzer. With both the timestamp and histogram information, we can calculate \(g^{(2)}(\tau)\) via the methods described in the previous section. The first method is shown in Fig. 8, where we search for every event in B_times that occurs after each event in A_times. These values are then binned and normalized appropriately.
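The binning and normalization step of the first method can be sketched as follows (the function name and choice of uniform bins are ours). Dividing the per-bin coincidence counts by \(N_A N_B \Delta\tau / T\) makes \(g^{(2)} = 1\) the uncorrelated baseline.

```python
import numpy as np

def g2_from_delays(delays, N_A, N_B, T, bins):
    """Method 1: bin all A-to-B coincidence delays and normalize so that
    uncorrelated (Poissonian) streams give g2 = 1 in every bin."""
    counts, edges = np.histogram(delays, bins=bins)
    dtau = np.diff(edges)                     # bin widths
    g2 = counts * T / (N_A * N_B * dtau)      # normalize by the rate product
    return g2, edges
```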

For the second method, the self-convolution of the first-order event histogram gives us a quantity proportional to the time-averaged coincidence rate as defined earlier; however, it must also be normalized appropriately. We multiply by \(N_{AB}\), the total number of coincidence events, then divide by the total length of the measurement sequence, \(T\), as well as the bin width, \(\Delta \tau\). To obtain \(\langle n_A(t) \rangle\) and \(\langle n_B(t) \rangle\), we divide the total number of independent events at each detector by \(T\). This gives us a final expression of
\(g^{(2)}(\tau) = \frac{N_{AB}\, T}{N_A N_B\, \Delta \tau }\sum_{n=1}^{\infty} K^{(n)}(\tau),\)
where \(K^{(n)} (\tau)\) is the nth self-convolution of the first-order histogram \(K(\tau)\). To verify that both computational methods return approximately the same answer, we plot both as a function of delay time, as seen in Fig. 9. We see that \(g^{(2)}(\tau)\) exhibits sharp spikes at regular intervals, indicative of a regularly spaced pulse sequence and common in quantum optics experiments with pulsed sources.
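The second method can be sketched in a few lines (the function name and the truncation of the infinite sum at `n_max` generations are ours): the first-step histogram is normalized to a unit-sum distribution, repeatedly convolved with itself to add successive cascade generations, and the total is scaled by the normalization factor above.

```python
import numpy as np

def g2_self_convolution(k_counts, N_A, N_B, T, dtau, n_max=20):
    """Method 2: sum self-convolutions of the first-step histogram, then
    apply the normalization N_AB * T / (N_A * N_B * dtau)."""
    k_counts = np.asarray(k_counts, dtype=float)
    N_AB = k_counts.sum()                    # total first-step coincidences
    k = k_counts / N_AB                      # unit-normalized distribution
    total = np.zeros_like(k)
    k_n = k.copy()
    for _ in range(n_max):                   # truncate the infinite sum
        total += k_n
        k_n = np.convolve(k_n, k)[: len(k)]  # next cascade generation
    return N_AB * T / (N_A * N_B * dtau) * total
```

For a perfectly periodic first-step histogram this reproduces the evenly spaced spikes described above, since each self-convolution places the next generation one period later.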

Using a Python script, we have transformed the raw timestamped data collected from the Moku Time & Frequency Analyzer and obtained a plot of the second order correlation function versus time delay, using two different methods of calculation. Both methods show good agreement with each other and can be extended to other, more “quantum” systems.
Summary
The second-order correlation function is a powerful tool for evaluating the temporal dynamics of light sources and is crucial for quantum optics applications ranging from secure communication to photonic computation. Through our simulation of a Hanbury Brown–Twiss setup using the Moku Time & Frequency Analyzer, we demonstrated how to capture high-resolution time-tagged photon events and analyze them with Python. Whether investigating coherent lasers or single-photon emitters, these tools enable users to measure deviations from ideal behavior and quantify source performance.
References
[1] Chen-How Huang, Yung-Hsiang Wen, and Yi-Wei Liu, “Measuring the second order correlation function and the coherence time using random phase modulation,” Opt. Express 24, 4278-4288 (2016).