New Product Introduction

DEEPX DX-M1 M.2 LPDDR5x2

DX-M1 M.2 LPDDR5x2: a module that brings server-grade AI inference directly to edge devices

Front side of the DEEPX DX-M1 M.2 LPDDR5x2 module

The DEEPX DX-M1 M.2 module brings server-grade AI inference directly to edge devices. Delivering 25 TOPS of performance at just 2-5W, the module achieves 20x better performance efficiency (FPS/W) than GPGPUs while maintaining GPU-level AI accuracy.
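As a rough point of reference, those headline numbers can be reduced to TOPS per watt. The sketch below simply divides the quoted peak INT8 rating by the quoted power envelope; the 20x FPS/W comparison depends on the model and GPGPU baseline, which are not specified here, so it is not reproduced.

```python
# Rough TOPS-per-watt arithmetic from the figures quoted above (25 TOPS at 2-5 W).
PEAK_INT8_TOPS = 25.0          # peak INT8 performance of the DX-M1
POWER_ENVELOPE_W = (2.0, 5.0)  # quoted module power range

for watts in POWER_ENVELOPE_W:
    print(f"{watts:.0f} W -> {PEAK_INT8_TOPS / watts:.1f} TOPS/W")
# Output:
# 2 W -> 12.5 TOPS/W
# 5 W -> 5.0 TOPS/W
```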

 

Features

  • AI Accelerator
  • M.2 M Key (22 x 80 mm)
  • PCIe Gen.3 x4 (see the bandwidth check after this list)
  • 4GB LPDDR5, QSPI 1Gbit NAND Flash
  • x86, ARM based architectures
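The PCIe Gen.3 x4 link is quoted at 4 GB/s in the specifications further down the page. As a quick sanity check, the snippet below derives that figure from the standard Gen.3 parameters (8 GT/s per lane with 128b/130b encoding); these constants come from the PCIe specification, not from this page.

```python
# Theoretical throughput of a PCIe Gen.3 x4 link, to cross-check the 4 GB/s figure.
GEN3_GT_PER_S = 8.0              # raw signalling rate per lane (GT/s)
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line coding used by Gen.3
LANES = 4

gb_per_s = GEN3_GT_PER_S * ENCODING_EFFICIENCY * LANES / 8  # bits -> bytes
print(f"PCIe Gen.3 x{LANES}: {gb_per_s:.2f} GB/s")  # ~3.94 GB/s, i.e. roughly 4 GB/s
```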

 

Applications

  • Edge Camera Systems
  • Smart Mobility
  • Smart Factory
  • Smart Cities
  • Robotics
  • Drones
  • Edge Computing
  • Smart Homes
  • Smart Retail

 

Block Diagram

DX-M1 M.2 LPDDR5x2 Block Diagram

AI Accelerator Specifications

Processor
  • INT8 Performance: 25 TOPS (= 200 eTOPS / INT8)

Signal Interface
  • PCI Express: PCIe Gen.3 x4, bandwidth 4 GB/s (compatible with PCIe x1)

Power
  • Power Consumption: 2 W min., 5 W max. for DEEPX-supported models
  • Operating Temperature: -25 to 85 °C (throttling), -25 to 65 °C (non-throttling)

Environment
  • Humidity: 40 °C @ 85% relative humidity (non-condensing)
  • Thermal Solution: cooling heatsink (optional)

Physical
  • Form Factor: M.2 2280 (Key M)
  • Dimensions: 22 mm x 80 mm x 4.1 mm
  • Power Range: 3.3 V ± 5%

Software Support
  • Windows: Windows 11, 10 (64-bit)
  • Linux: Ubuntu 22.04, 20.04 LTS; Yocto Project and Docker supported
  • Framework: TensorFlow, TensorFlow Lite, ONNX, Keras and PyTorch, converted by the Dataflow compiler

System Support
  • CPU Platform: x86, ARM based architectures
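For the framework support listed above, a common first step is exporting a trained model to ONNX before it is handed to the vendor toolchain. The sketch below is only an illustration using standard PyTorch and torchvision APIs under that assumption; the model choice and file name are placeholders, and the DEEPX Dataflow compiler invocation itself is not shown here.

```python
# Export a PyTorch model to ONNX as a framework-neutral artifact that an
# AI-accelerator toolchain (such as the Dataflow compiler mentioned above)
# could consume. The model and file name are illustrative placeholders.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # NCHW tensor matching the model's input

torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
print("wrote mobilenet_v2.onnx")
```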

 


Have a question?

Get in touch:
Click here to find contact information for your local Avnet Silica team.