Nvidia Jetson AGX Orin is the latest addition to the popular line of Nvidia Jetson modules and brings a substantial step up in performance for Edge AI applications. The newly released 32GB AGX Orin is available now in both developer kit and system-on-module formats and is described by Nvidia as an "energy-efficient AI supercomputer", which they say is six times more powerful than its predecessor and is intended to support large, complex AI models for natural language understanding, 3D perception, and multi-sensor fusion. A more powerful 64GB version is due for release in November, ahead of the planned Orin NX modules in the new year.

The 32GB Jetson AGX Orin delivers 200 TOPS (200 trillion operations per second), thanks to the move to the Ampere GPU architecture and a 3.5-fold increase in CUDA cores over the AGX Xavier: 1,792 cores across 14 streaming multiprocessors (SMs). Combining that GPU with next-generation deep learning and vision accelerators, the new Arm Cortex-A78AE CPU, high-speed interfaces, faster memory bandwidth, and multimodal sensor support, the Orin can even run multiple AI application pipelines concurrently.

AGX Orin vs. AGX Xavier

| Feature | Jetson AGX Xavier | Jetson AGX Orin 32GB | Jetson AGX Orin 64GB |
| --- | --- | --- | --- |
| AI Performance | 32 TOPS | 200 TOPS | 275 TOPS |
| GPU | 512-core Volta with 64 Tensor Cores | 1792-core Ampere with 56 Tensor Cores | 2048-core Ampere with 64 Tensor Cores |
| DL Accelerator | 2x NVDLA | 2x NVDLA v2.0 | 2x NVDLA v2.0 |
| Vision Accelerator | 2x 7-way VLIW processor | 2x 7-way VLIW processor | 2x 7-way VLIW processor |
| CPU | 8-core Nvidia Carmel Arm CPU, 8MB L2 + 4MB L3 | 8-core Arm Cortex-A78AE, 2MB L2 + 4MB L3 | 12-core Arm Cortex-A78AE, 3MB L2 + 6MB L3 |
| Memory | 16GB 256-bit LPDDR4x @ 2133MHz, 137 GB/s | 32GB 256-bit LPDDR5, 204.8 GB/s | 64GB 256-bit LPDDR5, 204.8 GB/s |
| Storage | 32GB/64GB eMMC | 64GB eMMC | 64GB eMMC |
| Video Encode | 2x 8K30, 6x 4K60, 12x 4K30, 26x 1080p60, 52x 1080p30 (HEVC); 30x 1080p30 (H.264) | 1x 4K60, 3x 4K30, 6x 1080p60, 12x 1080p30 (H.265); H.264, AV1 | 2x 4K60, 4x 4K30, 8x 1080p60, 16x 1080p30 (H.265); H.264, AV1 |
| Video Decode | 2x 8K30, 6x 4K60 | 1x 8K30, 2x 4K60, 4x 4K30, 9x 1080p60, 18x 1080p30 (H.265); H.264, VP9, AV1 | 1x 8K30, 3x 4K60, 7x 4K30, 11x 1080p60, 22x 1080p30 (H.265); H.264, VP9, AV1 |
| Camera | 16 lanes MIPI CSI-2, 8 lanes SLVS-EC; D-PHY 40Gbps / C-PHY 109Gbps | 16 lanes MIPI CSI-2 | 16 lanes MIPI CSI-2 |
| PCI Express | 16 lanes PCIe Gen4: 1 x8 + 1 x4 + 1 x2 + 2 x1 | Up to 2 x8, 1 x4, 2 x1 (PCIe Gen4, Root Port & Endpoint) | Up to 2 x8, 1 x4, 2 x1 (PCIe Gen4, Root Port & Endpoint) |
| Mechanical | 100mm x 87mm | 100mm x 87mm | 100mm x 87mm |
| Power | 10W / 15W / 30W | 15W – 40W | 15W – 60W |

Try Jetson AGX Orin for yourself

At Impulse we have a large R&D and technology hub aimed at promoting collaboration and innovation between our partners and customers. With that in mind, Impulse have invested in a wide range of Nvidia GPU-based systems, including the Jetson AGX Orin, so that customers can try out their code and establish which chip is suitable for their application before committing to a more ruggedised solution.

- Try Jetson AGX Orin, AGX Xavier, Xavier NX and Nano
- Spin up your code on our systems remotely
- Benchmark and test to arrive at a suitable platform (a simple first check is sketched below)
- Discuss your ruggedised requirements in person or via Teams
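As a quick first check when spinning up code on one of these boards, the short CUDA program below reads the GPU properties through the CUDA runtime API and prints the SM count, total memory and theoretical memory bandwidth, which on a 32GB AGX Orin should line up with the 14 SMs, 1,792 CUDA cores and roughly 205 GB/s quoted above. This is a minimal sketch rather than any official Nvidia or Impulse tooling, and the file name orin_query.cu is simply an example.

```cuda
// orin_query.cu - minimal CUDA device-query sketch (illustrative example only)
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);  // query the first (integrated) GPU
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // A 32GB AGX Orin should report 14 SMs at compute capability 8.7;
    // Ampere packs 128 CUDA cores per SM, giving the 1,792 cores quoted above.
    std::printf("Device:                     %s\n", prop.name);
    std::printf("Compute capability:         %d.%d\n", prop.major, prop.minor);
    std::printf("Streaming multiprocessors:  %d\n", prop.multiProcessorCount);
    std::printf("CUDA cores (at 128 per SM): %d\n", prop.multiProcessorCount * 128);
    std::printf("Total memory:               %.1f GiB\n",
                prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));

    // Theoretical peak bandwidth: DDR transfers twice per clock; bus width is in bits.
    double peak_gb_s = 2.0 * (prop.memoryClockRate * 1000.0)   // reported in kHz
                       * (prop.memoryBusWidth / 8.0) / 1.0e9;
    std::printf("Peak memory bandwidth:      %.1f GB/s\n", peak_gb_s);
    return 0;
}
```

Build it on the target with nvcc, for example `nvcc -O2 -o orin_query orin_query.cu`, and run `./orin_query`; comparing the output across an Orin, AGX Xavier or Xavier NX gives a simple first data point before benchmarking your actual workload.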
If you would like to try any of the Jetson systems for yourself, or even benchmark a rack- or tower-based GPU computer, please get in touch with our friendly team, who will be happy to discuss your requirements.

Our Embedded Systems Capabilities

Developing an industrial AI computing solution can be difficult, costly and time-consuming. With our Embedded Systems capabilities, we can create reliable, repeatable and robust systems to help reduce your costs and development time. With our team of in-house engineers and specialists, all with decades of experience, we can offer you fully deployable embedded Edge AI computing solutions straight out of the box.