Applications

Connect machine learning algorithms with the physical world

Combine the power of neural networks with the low latency and flexibility of FPGAs.


Machine learning for test and measurement

Neural networks deployed on FPGAs enable more efficient processing, real-time decision-making, and informed control system feedback. The Moku Neural Network implements deterministic data analysis algorithms for applications where timing is critical, such as quantum control, and can be reconfigured on the fly. Build and train models using Python, then deploy to your test systems using Moku to achieve low-latency inference and react quickly to changing experimental conditions.

Building an autoencoder with the Moku Neural Network

Learn how to denoise data and extract features using a special type of neural network called an autoencoder.

The Moku Neural Network has an architecture that includes input, hidden, and output layers, as well as customizable activation functions.
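To make the autoencoder idea concrete, here is a minimal denoising autoencoder written in plain NumPy: an input layer, one tanh bottleneck (hidden) layer, and a linear output layer, trained to reproduce clean sinusoids from noisy copies. The signal shapes, layer sizes, and training settings are illustrative assumptions for this sketch, not the Moku Neural Network's actual configuration or API.

```python
# Minimal denoising autoencoder: input, hidden (bottleneck), and output
# layers with a tanh activation. All dimensions and hyperparameters here
# are illustrative assumptions, not Moku-specific settings.
import numpy as np

rng = np.random.default_rng(0)

# Clean signals: sinusoids with random phase, sampled at 16 points.
n_samples, n_points, n_hidden = 256, 16, 4
t = np.linspace(0, 2 * np.pi, n_points)
clean = np.sin(t[None, :] + rng.uniform(0, 2 * np.pi, (n_samples, 1)))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

# Small random initial weights; zero biases.
W1 = rng.standard_normal((n_points, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_points)) * 0.1
b2 = np.zeros(n_points)

lr = 0.1
for epoch in range(2000):
    h = np.tanh(noisy @ W1 + b1)           # encoder: compress to 4 units
    out = h @ W2 + b2                      # decoder: linear reconstruction
    err = out - clean                      # train against the clean signal
    # Backpropagate the mean-squared-error loss.
    grad_out = 2 * err / n_samples
    gW2 = h.T @ grad_out
    gb2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1 - h**2)  # tanh derivative
    gW1 = noisy.T @ grad_h
    gb1 = grad_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

denoised = np.tanh(noisy @ W1 + b1) @ W2 + b2
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((denoised - clean) ** 2))
```

Because the bottleneck forces the 16-point signal through only 4 units, the network must learn the low-dimensional structure of the clean sinusoids and discard most of the noise, which is why the reconstruction error falls below the raw noise level.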

Neural network resources

Explore examples, comprehensive application notes, and detailed configuration guides to help implement neural networks in your experimental setup.

FAQ

How does the Moku Neural Network instrument work in practice?

Users design and train their networks in Python using a class called LinnModel. This class provides methods to construct the network and output a Liquid Instruments Neural Network (.linn) file that can be loaded into a Moku Neural Network instrument. This file contains all information about weights, biases, and activation functions, so that the model is accurately reconstructed on Moku hardware.
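The .linn format itself is not documented here, so as a rough illustration only, the sketch below serializes the same kind of information a model file must carry (per-layer weights, biases, and activation names) to JSON and shows that this description fully determines the forward pass. The function name and JSON layout are this sketch's inventions, not the LinnModel API or the .linn format.

```python
# Hypothetical stand-in for model serialization: one dict per layer with
# weights, biases, and an activation name. Not the real .linn format.
import json
import numpy as np

rng = np.random.default_rng(1)

def serialize_network(layer_sizes, activations):
    """Build a random network description, layer by layer."""
    layers = []
    for (n_in, n_out), act in zip(zip(layer_sizes, layer_sizes[1:]), activations):
        layers.append({
            "weights": rng.standard_normal((n_out, n_in)).tolist(),
            "biases": [0.0] * n_out,
            "activation": act,
        })
    return {"layers": layers}

# A toy network: 4 inputs, one 8-node hidden layer, 2 outputs.
model = serialize_network([4, 8, 2], ["tanh", "linear"])
blob = json.dumps(model)

# Round trip: the serialized description is enough to run inference.
restored = json.loads(blob)
x = np.ones(4)
for layer in restored["layers"]:
    x = np.asarray(layer["weights"]) @ x + np.asarray(layer["biases"])
    if layer["activation"] == "tanh":
        x = np.tanh(x)
print(x.shape)
```

The key point is the round trip: everything needed to reconstruct the model on the hardware side travels inside the file, so training and inference environments can be completely separate.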

How complex a network can it build?

It supports up to five hidden layers, each with up to 100 fully interconnected nodes, and five different activation functions.
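Those limits are easy to check before training. The helper below is a hypothetical pre-flight validator for the stated constraints (at most five hidden layers, at most 100 nodes per layer); the function name and error messages are this document's invention, not part of any Moku or LinnModel API.

```python
# Hypothetical validator for the stated Moku Neural Network limits.
MAX_HIDDEN_LAYERS = 5
MAX_NODES_PER_LAYER = 100

def check_hidden_layers(hidden_sizes):
    """Raise ValueError if a list of hidden-layer widths exceeds the limits."""
    if len(hidden_sizes) > MAX_HIDDEN_LAYERS:
        raise ValueError(
            f"{len(hidden_sizes)} hidden layers exceeds the limit of {MAX_HIDDEN_LAYERS}"
        )
    for i, n in enumerate(hidden_sizes):
        if n > MAX_NODES_PER_LAYER:
            raise ValueError(
                f"layer {i} has {n} nodes, over the {MAX_NODES_PER_LAYER}-node limit"
            )
    return True

print(check_hidden_layers([100, 64, 32]))  # three hidden layers, all within limits
```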

Does the Moku Neural Network cut down on latency?

Using an FPGA allows calculations to be performed on the fly, bypassing a host PC. As with any other DSP algorithm, the calculations take time to perform, so there is always some finite latency. However, depending on the application and the size of the neural network, it can have a lower delay than a dedicated filter and/or instrument.

Need help or have questions?