
Nvidia’s latest A.I. results prove that ARM is ready for the data center

Nvidia just published its latest MLPerf benchmark results, and they have some big implications for the future of computing. In addition to maintaining a lead over other A.I. hardware — which Nvidia has claimed for the last three batches of results — the company showcased the power of ARM-based systems in the data center, with results nearly matching traditional x86 systems.

In the six tests MLPerf includes, ARM-based systems came within a few percentage points of x86 systems, with both using Nvidia A100 A.I. graphics cards. In one of the tests, the ARM-based system actually beat the x86 one, showcasing the advancements made in deploying different instruction sets in A.I. applications.

MLPerf results with Arm processors.

“The latest inference results demonstrate the readiness of ARM-based systems powered by ARM-based CPUs and Nvidia GPUs for tackling a broad array of A.I. workloads in the data center,” David Lecomber, senior director of HPC at Arm, said. Nvidia only tested the ARM-based systems in the data center, not with edge or other MLCommons benchmarks.


MLPerf is a series of benchmarks for A.I. that are designed, contributed to, and validated by industry leaders. Although Nvidia has led the charge in many ways with MLPerf, the leadership of the MLCommons consortium is made up of executives from Intel, the Video Electronics Standards Association, and Arm, to name a few.


The latest benchmarks pertain to MLCommons’ inference tests for the data center and edge devices. A.I. inference is the phase in which a trained model produces results; it follows the training phase, in which the model is still learning, and MLCommons maintains benchmarks for that phase as well. Nvidia’s Triton software, which handles inference, is in use at companies like American Express for fraud detection and Pinterest for image segmentation.

Nvidia also highlighted its Multi-Instance GPU (MIG) feature when speaking with press. MIG allows the A100 and A30 graphics cards to be partitioned from a single GPU into several independent A.I. accelerators. The A100 is able to split into seven separate accelerators, while the A30 can split into four.
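Nvidia exposes MIG partitioning through the `nvidia-smi mig` subcommand. As a rough sketch only — the exact profile IDs depend on the GPU model and driver version, and these commands require an MIG-capable card such as the A100 — splitting one GPU into seven instances looks something like this:

```shell
# Enable MIG mode on GPU 0 (MIG-capable GPU required; may need a GPU reset).
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports.
nvidia-smi mig -lgip

# Create seven small GPU instances and their compute instances in one step.
# Profile ID 19 corresponds to the 1g.5gb slice on an A100 40GB; verify
# the ID against the -lgip output on your own hardware.
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Confirm the seven resulting MIG devices are visible.
nvidia-smi -L
```

Each MIG instance then appears to software as its own CUDA device, which is what lets a single card serve several workloads — such as the full MLPerf suite — concurrently.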

By splitting up the GPU, Nvidia is able to run the entire MLPerf suite at the same time with only a small loss in performance. Nvidia says it measured 95% of per-accelerator performance when running all of the tests simultaneously compared to a baseline reading, allowing a single GPU to serve multiple A.I. workloads at once.

Jacob Roach
Lead Reporter, PC Hardware