Esperanto Technologies’ Massively Parallel RISC-V AI Inferencing Solution Now in Initial Evaluations

Delivering Industry-Leading Energy Efficiency, Esperanto’s ML Inference Accelerator Is Designed to Be the Highest Performance Commercial RISC-V AI Chip

MOUNTAIN VIEW, Calif. — (BUSINESS WIRE) — April 20, 2022 — Esperanto Technologies™, the leading developer of high-performance, energy-efficient artificial intelligence (AI) inference accelerators based on the RISC-V instruction set, today announced that initial evaluations of its ET-SoC-1 AI inference accelerator are underway with lead customers. Additional slots are available to qualified customers interested in AI inference accelerators for datacenter applications.

To inquire about the evaluation program, please visit esperanto.ai/technology/#eap.

“Our data science team was very impressed with the initial evaluation of Esperanto’s AI acceleration solution,” said Dr. Patrick Bangert, vice president of Artificial Intelligence at Samsung SDS. “It was fast, performant and overall easy to use. In addition, the SoC demonstrated near-linear performance scaling across different configurations of AI compute clusters. This is a capability that is quite unique, and one we have yet to see consistently delivered by established companies offering alternative solutions to Esperanto.”

Esperanto’s evaluation program enables users to obtain performance data from running a variety of off-the-shelf AI models, including recommendation, transformer and visual networks, on the ET-SoC-1 AI Inference Accelerator. Users can set options including model and dataset selection, data type, batch size and compute configuration of up to 32 clusters containing over 1,000 RISC-V cores with ML-optimized tensor units. Customers can run many inference jobs, with results provided in detailed histogram reports and fine-grained visibility into silicon performance.
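As an illustration only, the evaluation parameter space described above can be sketched as a simple sweep; the option names and value lists below are hypothetical placeholders, not Esperanto's actual evaluation API:

```python
# Hypothetical sketch of the evaluation parameter space; option names and
# values are illustrative assumptions, not Esperanto's actual interface.
from itertools import product

models = ["recommendation", "transformer", "visual"]  # off-the-shelf model families
dtypes = ["int8", "fp16"]                             # example data types (assumed)
batch_sizes = [1, 8, 32]                              # example batch sizes (assumed)
clusters = [1, 2, 4, 8, 16, 32]                       # up to 32 compute clusters

# Enumerate every run configuration a user could request in such a sweep.
configs = [
    {"model": m, "dtype": d, "batch": b, "clusters": c}
    for m, d, b, c in product(models, dtypes, batch_sizes, clusters)
]
print(len(configs))  # total distinct evaluation runs in this illustrative sweep
```

A sweep like this is one way a customer might generate the per-configuration performance histograms the program provides.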

“Esperanto has made very impressive progress, and is now providing customers evaluation access to their RISC-V hardware and software running off-the-shelf AI models with strong performance and efficiency. This really shows the company’s confidence in their first multi-core solution,” said Karl Freund, founder and principal analyst at Cambrian-AI Research. “In addition, because Esperanto’s chip is RISC-V-based, it has the programming tools and software stack to more easily adapt to new AI workloads, alongside non-AI workloads, all running on the same silicon. This step forward is another very strong indicator of the bright future of RISC-V.”

“Esperanto has achieved an industry first by demonstrating its massively parallel RISC-V silicon running a variety of real-world AI workloads,” said Richard Wawrzyniak, principal analyst at Semico Research. “It was exciting for me to see the company put the chip through its paces across a variety of scenarios, including different models, data types, batch sizes and compute cluster combinations – all showing competitive results. This is another positive step forward for the RISC-V industry in the AI space as this new market continues to grow even faster than we had previously forecasted.”

“Harnessing the power of over 1,000 RISC-V processors is a major accomplishment, and we are very pleased with the results which validate our initial projections of performance and efficiency,” said Art Swift, president and CEO of Esperanto Technologies. “We look forward to extending access to a broader range of qualified companies, as we accelerate our RISC-V roadmap efforts with a growing number of strategic partners for applications spanning from Cloud to Edge.”

Esperanto Technologies is the AI RISC-V leader, offering massively parallel 64-bit RISC-V-based tensor compute cores, currently delivered as a single chip with 1,088 ET-Minion compute cores and a shared high-performance memory architecture. Designed to meet the performance, power and total cost of ownership (TCO) requirements of large-scale datacenter customers, Esperanto’s inference chip is a general-purpose, parallel processing solution that can accelerate many parallelizable workloads. It is designed to run any machine learning (ML) workload well, and to excel at ML recommendation models, one of the most important types of AI workloads in many large datacenters.

About Esperanto Technologies:

Esperanto Technologies develops massively parallel, high-performance, energy-efficient computing solutions for Artificial Intelligence / Machine Learning based on the open standard RISC-V instruction set architecture. Esperanto is headquartered in Mountain View, California with additional engineering sites in Portland, Oregon; Austin, Texas; Barcelona, Spain; and Belgrade, Serbia. For more information, please visit https://www.esperanto.ai/



Contact:

Paula Jones
(650) 279-8997
newsroom@esperantotech.com