ArterisIP Drives Artificial Intelligence & Machine Learning Innovation for 15 Chip Companies

Interconnect IP enables fast and efficient integration of tens or hundreds of heterogeneous neural network hardware accelerators

CAMPBELL, Calif. — November 14, 2017 — ArterisIP, the innovative supplier of silicon-proven commercial system-on-chip (SoC) interconnect IP, today announced that in the past two years, 15 companies have licensed ArterisIP's FlexNoC Interconnect or Ncore Cache Coherent Interconnect IP as critical components in new artificial intelligence (AI) and machine learning SoCs.


These nine (9) publicly announced ArterisIP customers have created or are developing machine learning and AI SoCs for data center, automotive, consumer and mobile applications:

  1. Movidius (Intel) – Myriad™ ultra-low power machine learning vision processing units (VPU)
  2. Mobileye (Intel) – Since 2010; EyeQ®3, EyeQ®4 and EyeQ®5 advanced driver assistance systems (ADAS) using multiple heterogeneous processing elements for vision processing and machine learning
  3. NXP – Multiple ADAS and autonomous driving SoCs implementing machine learning, based on cache coherency and functional safety mechanisms
  4. Toshiba – Automotive ADAS SoC using cache coherence and functional safety mechanisms
  5. HiSilicon (Huawei) – Since 2013; new Kirin 970 Mobile AI Processor with Neural Processing Unit (NPU)
  6. Cambricon – Neural network processor with multiple processing elements
  7. Dream Chip Technologies – ADAS image sensor processor with multiple digital signal processor (DSP) and single instruction multiple data (SIMD) hardware accelerators
  8. Nextchip – Vision ADAS SoC with multiple processing elements
  9. Intellifusion – Machine learning visual intelligence with multiple heterogeneous on-chip hardware engines

In addition to the nine publicly announced customers listed above, the following six (6) companies are also using ArterisIP technology to implement new AI and machine learning hardware architectures:

  • Two (2) major semiconductor and systems vendors targeting autonomous driving
  • A major semiconductor vendor targeting consumer electronics
  • A major autonomous flying vehicle vendor
  • A leader in new automotive sensor technologies
  • An innovator in data center analytics

All of these innovation leaders create SoCs that accelerate machine learning and neural network algorithms using multiple instances of heterogeneous processing elements. Each SoC architecture is tailored to its target market requirements based on an on-chip interconnect configured specifically for the task. They have all licensed ArterisIP interconnect technology because it:

  • Eases the on-chip integration of these different processing engines while allowing design teams to finely tune power management and quality-of-service (QoS) characteristics, like path latency and bandwidth;
  • Simplifies software development and enables customized dataflow processing by supporting cache coherence in key parts of a system. This allows the system to take advantage of data reuse and local accumulation in shared caches, which reduces die area and can increase memory bandwidth while reducing processing latency and power consumption;
  • Protects data in transit and at rest to increase functional safety diagnostic coverage, allowing large supercomputer-like SoCs to meet the stringent requirements of the ISO 26262 automotive functional safety standard.

“Efficiently implementing machine learning and visual computing in commercially viable systems requires hardware teams to accelerate neural network functions using many types of hardware accelerators, with the types and number of accelerators based on performance, power and area/cost requirements,” said Ty Garibay, Chief Technology Officer at ArterisIP. “ArterisIP technology gives these teams the means to integrate these processing elements into their systems quickly and efficiently, ensuring that they meet their schedule and functional safety requirements.”

“Machine learning has become the ‘killer app’ for our advanced interconnect IP, with a perfect match between the QoS, power consumption and performance required by AI and what the FlexNoC and Ncore interconnects deliver,” said K. Charles Janac, President and CEO of ArterisIP. “Our team is excited to be such a critical enabler to the new generation of neural network, machine learning and artificial intelligence chips.”

Presentation Download

For more information, please download the presentation titled "Implementing Machine Learning and Neural Network Chip Architectures using Network-on-Chip Interconnect IP."

About ArterisIP

ArterisIP provides system-on-chip (SoC) interconnect IP to accelerate SoC semiconductor assembly for a wide range of applications, from automobiles to mobile phones, IoT, cameras, SSD controllers, and servers, for customers such as Samsung, Huawei/HiSilicon, Mobileye (Intel), Altera (Intel), and Texas Instruments. ArterisIP products include the Ncore cache coherent and FlexNoC non-coherent interconnect IP, as well as the optional Resilience Package (ISO 26262 functional safety) and PIANO automated timing closure capabilities. Customer results obtained by using the ArterisIP product line include lower power, higher performance, more efficient design reuse and faster SoC development, leading to lower development and production costs. For more information, visit www.arteris.com or find us on LinkedIn at www.linkedin.com/company/arteris



Contact:

Kurt Shuler
Arteris Inc.
+1 408 470 7300
Email Contact



