Intel® Movidius™ Myriad™ X Vision Processing Unit
The Intel® Movidius™ Myriad™ X VPU is Intel's first VPU to feature the Neural Compute Engine, a dedicated hardware accelerator for deep neural network inference. The Neural Compute Engine, in conjunction with the 16 powerful SHAVE cores and the high-throughput intelligent memory fabric, makes the Intel® Movidius™ Myriad™ X ideal for on-device deep neural network and computer vision applications.
The Intel® Movidius™ Myriad™ X VPU is programmable with the Intel® Distribution of OpenVINO™ toolkit for porting neural networks to the edge, and via the Myriad Development Kit (MDK), which includes all necessary development tools, frameworks, and APIs to implement custom vision, imaging, and deep neural network workloads on the chip.
Dedicated Neural Compute Engine
The Intel® Movidius™ Myriad™ X is Intel's first VPU to feature the Neural Compute Engine, a dedicated hardware accelerator for running on-device deep neural network applications. Interfacing directly with other key components via the intelligent memory fabric, the Neural Compute Engine delivers outstanding performance per watt without the data-flow bottlenecks common to other architectures.
16 High Performance SHAVE Cores
These programmable processors, with an instruction set tailored for computer vision, can run traditional computer vision workloads, or can complement the Neural Compute Engine by running custom layer types for CNN applications, thanks to extensive support for sparse data structures.
Enhanced Vision Accelerator Suite
Intel has added a new suite of vision accelerators to the Intel® Movidius™ Myriad™ X VPU, including a new stereo depth block capable of processing dual 720p feeds. With this suite of vision accelerators, key vision workloads can be offloaded onto fixed-function hardware to improve power efficiency.
Flexible Image Processing and Encode
The Intel® Movidius™ Myriad™ X VPU features a fully tunable ISP pipeline for the most demanding image and video applications. It also features hardware-based encoding for up to 4K video resolution, making the VPU a single-chip solution for all imaging, computer vision, and CNN workloads.
Support for Multiple VPU Configurations
A PCIe interface allows the VPU to be used as an edge AI accelerator in an edge server: multiple Intel® Movidius™ Myriad™ X VPUs can be configured on a single PCIe add-in card, delivering even greater performance and flexibility.
Neural Compute Engine: Hardware Based Acceleration for Deep Neural Networks
The Intel® Movidius™ Myriad™ X VPU features the all-new Neural Compute Engine, a purpose-built hardware accelerator designed to dramatically increase the performance of deep neural networks without compromising the low-power characteristics of the Movidius VPU product line. Featuring an array of MAC blocks and interfacing directly with the intelligent memory fabric, the Neural Compute Engine rapidly performs the calculations necessary for deep inference without hitting the so-called "data wall" bottleneck encountered by other processor designs. On the Intel® Movidius™ Myriad™ X VPU architecture, the Neural Compute Engine in combination with the 16 SHAVE cores delivers up to 916 billion neural network inference operations per second, more than 10x the 80 billion operations per second achievable by the Myriad™ 2 VPU's SHAVE processors.
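As a quick sanity check on the throughput claim above (a minimal arithmetic illustration, not VPU code), the cited figures give the ratio directly:

```python
# Peak inference throughput figures cited above, in operations per second.
myriad_x_ops = 916e9  # Neural Compute Engine + 16 SHAVE cores (Myriad X)
myriad_2_ops = 80e9   # SHAVE processors alone (Myriad 2)

speedup = myriad_x_ops / myriad_2_ops
print(f"{speedup:.2f}x")  # 11.45x, consistent with "more than 10x"
assert speedup > 10
```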
- Native FP16 and fixed-point 8-bit support
- End-to-end acceleration for many common deep neural networks
- Rapid porting and deployment of neural networks in Caffe and TensorFlow formats
- High power efficiency in terms of inferences per second per watt
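To make the first bullet concrete, here is a small standard-library Python sketch of what FP16 and fixed-point 8-bit representation mean for the numbers a network computes with. This illustrates the data types only; it is not the VPU or OpenVINO™ API (the `struct` `'e'` format is IEEE 754 half precision):

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision (FP16)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def quantize_int8(x: float, scale: float) -> int:
    """Symmetric fixed-point 8-bit quantization: x ~= q * scale, q in [-128, 127]."""
    q = round(x / scale)
    return max(-128, min(127, q))

# FP16 keeps roughly 3 significant decimal digits, which is typically
# enough for trained network weights and activations at inference time.
print(to_fp16(0.1))               # 0.0999755859375, the nearest FP16 value
print(quantize_int8(1.0, 1/127))  # 127: full scale maps to the int8 maximum
```

Running inference at these reduced precisions is what lets the Neural Compute Engine trade a small, usually negligible accuracy loss for large gains in throughput per watt.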
Discover Intel's portfolio of processors enhanced for IoT to help you get critical insight and business value from your data, with compute resources where you need them most.
Intel® FPGAs and SoCs, along with IP cores, development platforms, and a software developer design flow, provide a rapid development path with the flexibility to adapt to evolving challenges and solutions in each part of the video or vision pipeline for a wide range of video and intelligent vision applications.
Develop, fine-tune, and deploy convolutional neural networks (CNNs) on low-power applications that require real-time inferencing with the Intel® Neural Compute Stick 2.
Edge AI accelerator cards that let you deploy power-efficient deep neural network inference for fast, accurate video analytics and computer vision applications.
Prototype and experiment with AI workloads for computer vision on Intel hardware with Intel® DevCloud for the Edge.
Harness the full potential of AI and computer vision across multiple Intel® architectures to enable new and enhanced use cases in health and life sciences, retail, industrial, and more.
Notices and Disclaimers
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
Intel® technologies may require enabled hardware, software, or service activation. Your costs and results may vary. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.