MLPerf Results Show Advances in Machine Learning Inference

SAN FRANCISCO–(BUSINESS WIRE)–Today, the open engineering consortium MLCommons® announced new results from MLPerf™ Inference v2.1, which measures the performance of inference – the application of a trained machine learning model to new data. Inference enables the intelligent enhancement of a vast array of applications and systems. This round set new records with nearly 5,300 performance results and 2,400 power measurements, 1.37X and 1.09X more than the previous round, respectively, reflecting the community's vigor.

MLPerf benchmarks are full-system tests that stress machine learning models, software, and hardware, and optionally measure power consumption. The open-source and peer-reviewed benchmark suites level the playing field for competition, which fosters innovation, performance, and energy efficiency for the entire industry.

“We are very excited by the growth in the ML community and welcome new submitters across the globe such as Biren, Moffett AI, Neural Magic, and SAPEON,” said MLCommons Executive Director David Kanter. “The exciting new architectures all demonstrate the creativity and innovation in the industry, designed to deliver better AI performance that can bring new and exciting capabilities to enterprises and consumers alike.”

The MLPerf Inference benchmarks focus on datacenter and edge systems, and Alibaba, ASUSTeK, Azure, Biren, Dell, Fujitsu, GIGABYTE, H3C, HPE, Inspur, Intel, Krai, Lenovo, Moffett, Nettrix, Neural Magic, NVIDIA, OctoML, Qualcomm Technologies, Inc., SAPEON, and Supermicro are among the participants in this submission round.

To view the results and find additional information about the benchmarks, please visit the MLCommons website. These results demonstrate extensive industry participation and a focus on energy efficiency, paving the path toward more capable intelligent systems that will benefit society as a whole.

About MLCommons

MLCommons is an open engineering consortium with a mission to benefit society by accelerating innovation in machine learning. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning systems. In collaboration with its 50+ founding partners – global technology providers, academics, and researchers – MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.

For additional information on MLCommons and details on becoming a Member or Affiliate of the organization, please visit the MLCommons website or contact [email protected].