The third wave
Computing infrastructure in the data center has evolved in three waves.
The first wave began with cloud computing on general-purpose CPUs, enabled by the rise of hyperscale cloud service providers.
The second wave is driven by GPUs for the computing needs of deep learning training.
The third wave is led by FPGAs to support real-time streaming analytics with machine learning/deep learning algorithms.
Why FPGA acceleration?
FPGAs provide a reconfigurable sea of hardware gates on which one can:
- Design a custom hardware accelerator with direct I/O connectivity and low latency
- Deploy it for a single application to deliver increased performance efficiencies
- Quickly reconfigure the device as a new accelerator for a different application
Megh Computing is enabling the third wave in the data center with a platform built to take advantage of FPGAs.
The Megh Platform
Megh Computing provides a platform for accelerating real-time analytics using our Nimble Framework. The solution enables seamless acceleration of applications that process streams with machine learning and deep learning algorithms to extract value from data as it is moving.
The Megh Platform supports both in-line processing of streaming data using FPGAs and offloading of machine learning and deep learning libraries with GPUs and FPGAs.
The Application, developed in-house, by Megh Computing, or by a third party, provides support for one or more analytics use cases.
Megh’s Nimble Framework supports flexible pipelines for implementing the application for deployment from edge to cloud.
Our Arka Libraries expose software- or hardware-accelerated functions to the application using standard APIs.
The Arka Runtime enables the application to build custom data pipelines spanning multiple devices and accelerators—GPUs and FPGAs.
The Sira Shell provides platform-agnostic, scale-out hardware services for the FPGA accelerators.
Megh’s Deep Learning Engine (DLE) delivers best-in-class performance for accelerating inferencing for image detection and classification on FPGAs.
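The idea of a flexible pipeline built from composable stages can be illustrated with a minimal, self-contained sketch. The `Pipeline` class and stage names below are hypothetical and purely conceptual; they are not the actual Arka Runtime API:

```python
# Conceptual sketch of a streaming analytics pipeline built from
# composable stages. The Pipeline class and stage names here are
# illustrative only, not Megh's Arka Runtime API.

class Pipeline:
    """Chains processing stages; in a real deployment each stage
    could be mapped to a CPU, GPU, or FPGA accelerator."""

    def __init__(self):
        self.stages = []

    def add_stage(self, name, fn):
        self.stages.append((name, fn))
        return self  # allow fluent chaining

    def process(self, item):
        # Pass each streamed item through every stage in order.
        for _, fn in self.stages:
            item = fn(item)
        return item

# Hypothetical stages: decode -> preprocess -> inference
pipeline = (
    Pipeline()
    .add_stage("decode", lambda raw: raw.lower())
    .add_stage("preprocess", lambda s: s.strip())
    .add_stage("infer", lambda s: {"label": s, "score": 0.9})
)

result = pipeline.process("  PERSON  ")
print(result)  # {'label': 'person', 'score': 0.9}
```

The fluent-chaining style mirrors the document's claim that applications assemble custom data pipelines; swapping a stage function is the software analogue of retargeting a stage to a different accelerator.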
Megh's Deep Learning Engine
The platform includes our Deep Learning Engine (DLE), the best-in-class inference engine for implementing various deep learning models.
- Seamless multi-stage model support on a single FPGA and across multiple FPGAs.
- Native mixed precision support on a layer-by-layer basis, giving fine-grained control of model accuracy and performance.
- Optimized for batch-size-1 data ingest, supporting both inline real-time and offload workloads.
- Easy end-user customization with the DLE Compiler.
- Scalable architecture to meet end-user throughput and FPGA area requirements.
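Layer-by-layer mixed precision means each layer can trade accuracy for speed and FPGA area independently. The toy sketch below illustrates the concept with a hypothetical per-layer precision plan and a `quantize()` helper; neither is the DLE Compiler's real interface:

```python
# Toy illustration of per-layer mixed precision: each layer's
# weights are rounded to a different fixed-point bit width.
# The layer names, precision plan, and quantize() helper are
# illustrative assumptions, not the DLE Compiler API.

def quantize(value, bits):
    """Round a weight in [-1, 1] onto a signed fixed-point grid
    with the given number of bits."""
    levels = 2 ** (bits - 1) - 1
    return round(value * levels) / levels

# Hypothetical plan: early layers keep more bits for accuracy,
# later layers use fewer bits to save area and increase speed.
precision_plan = {"conv1": 8, "conv2": 6, "fc": 4}

weights = {
    "conv1": [0.31, -0.72],
    "conv2": [0.31, -0.72],
    "fc": [0.31, -0.72],
}

quantized = {
    layer: [quantize(w, precision_plan[layer]) for w in ws]
    for layer, ws in weights.items()
}

for layer, ws in quantized.items():
    print(layer, ws)
```

Running this shows the same weights landing on coarser grids as the bit width drops, which is the fine-grained accuracy/performance control the bullet list describes.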
Learn more about DLE.
See blog posts about DLE.