FLEX LOGIX ANNOUNCES INFERX HIGH PERFORMANCE IP FOR DSP AND AI INFERENCE

MOUNTAIN VIEW, Calif., April 24, 2023 — (PRNewswire) — Flex Logix® Technologies, Inc., a leading innovator in DSP & AI inference IP and the leading supplier of eFPGA IP, today announced the availability of InferX™ IP and software for DSP and AI inference. InferX joins EFLX® eFPGA as Flex Logix's second IP offering. It can be used by device manufacturers and systems companies that want the performance of a DSP-FPGA or an AI-GPU in their SoC, but at a fraction of the cost and power. The company's EFLX eFPGA product line has already been proven in dozens of chips, with many more in design, across process nodes from 180nm to 7nm, with 5nm in development.

"By integrating InferX into an SoC, customers not only maintain the performance and programmability of an expensive and power-hungry FPGA or GPU, but they also benefit from much lower power consumption and cost," said Geoff Tate, Founder and CEO of Flex Logix. "This is a significant advantage to systems customers that are designing their own ASICs, as well as chip companies that have traditionally had the DSP-FPGA or AI-GPU sitting next to their chip and can now integrate it to get more revenue and save their customer power and cost. InferX is 80% hard-wired, but 100% reconfigurable."

The end-user benefit is more powerful DSP and AI in smaller, lower-power and lower-cost systems. With InferX AI, users can process megapixel images with much more accurate models such as YOLOv5s6 and YOLOv5l6, detecting objects at smaller sizes and greater distances than is affordable today.

The InferX Advantage
InferX DSP is InferX hardware combined with Softlogic for DSP operations, which Flex Logix provides for operations such as FFTs that switch on the fly between sizes (e.g., 1K to 4K to 2K), FIR filters with any number of taps, complex matrix inversions (16x16, 32x32 or other sizes), and many more. InferX DSP streams Gigasamples per second, can run multiple DSP operations, and those operations can be chained. DSP is done on real/complex INT16 data with 32-bit accumulation for very high accuracy. With InferX DSP, customers can integrate DSP performance that is as fast as or faster than the leading FPGA at 1/10th the cost and power, while keeping all of the flexibility to reconfigure almost instantly. For example, with less than 50 square millimeters of silicon in N5, InferX DSP can compute complex INT16 FFTs at 68 Gigasamples/second and switch instantly between FFT sizes from 256 to 8K points. This is faster than the best FPGA available today, at a fraction of the cost, power and size.
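To make the arithmetic above concrete, here is a minimal NumPy sketch of a FIR filter computed on INT16 samples and coefficients with a 32-bit accumulator, the same numeric scheme described for InferX DSP. It is illustrative only: the function name fir_int16 and the tap values are hypothetical, and it does not use any Flex Logix API.

import numpy as np

# Illustrative sketch of INT16 DSP with 32-bit accumulation (not Flex Logix code).
def fir_int16(samples: np.ndarray, taps: np.ndarray) -> np.ndarray:
    """FIR filter over int16 samples with int32 accumulation (any number of taps)."""
    assert samples.dtype == np.int16 and taps.dtype == np.int16
    taps32 = taps.astype(np.int32)            # widen before multiply-accumulate
    n_out = len(samples) - len(taps) + 1
    out = np.empty(n_out, dtype=np.int32)
    for i in range(n_out):
        window = samples[i:i + len(taps)].astype(np.int32)
        out[i] = np.dot(window, taps32)       # products and sum held in 32 bits
    return out

# Example: a 63-tap filter over 1,000 int16 samples (taps scaled so the
# worst-case sum still fits in a 32-bit accumulator).
rng = np.random.default_rng(0)
signal = rng.integers(-2**15, 2**15, size=1000, dtype=np.int16)
taps = rng.integers(-2**9, 2**9, size=63, dtype=np.int16)
print(fir_int16(signal, taps)[:5])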

InferX AI is InferX hardware combined with the Inference Compiler for AI inference. The Inference Compiler takes in a customer's neural network model in PyTorch, ONNX or TFLite format, quantizes the model with high accuracy, compiles the graph for high utilization, and generates the runtime code that executes on the InferX hardware. A simple, easy-to-use API is provided to control the InferX IP. With InferX AI, customers can integrate AI inference performance that is as fast as or faster than leading edge-AI modules at 1/10th the cost and power, while keeping all of the flexibility, including the ability to run multiple models or change models on the fly. InferX AI is optimized for megapixel, batch=1 operation, and the Inference Compiler is available for evaluation. As an example, with about 15 square millimeters of silicon in N7, InferX AI can run YOLOv5s at 175 inferences/second: 40% faster than the fastest edge AI module, Orin AGX at 60W.
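The Inference Compiler's own interface is available to qualified customers under NDA, so the sketch below shows only the publicly documented first step of that flow: exporting a PyTorch model to ONNX, one of the three input formats listed above. TinyDetector is a hypothetical stand-in for a real detection network such as YOLOv5s6, and the file name detector.onnx is arbitrary.

import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Hypothetical placeholder CNN standing in for a real model such as YOLOv5s6."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 85, 1)  # 4 box + 1 objectness + 80 classes

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyDetector().eval()

# Batch=1, megapixel-class input, matching the batch=1 use case described above.
dummy = torch.randn(1, 3, 1280, 1280)

# Standard PyTorch-to-ONNX export; the resulting .onnx file is the kind of
# artifact a graph compiler such as the InferX Inference Compiler would ingest.
torch.onnx.export(model, dummy, "detector.onnx", opset_version=13,
                  input_names=["images"], output_names=["preds"])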

InferX technology is proven and production-qualified in 16nm and will be available in the most popular FinFET nodes.

InferX hardware is also scalable. Its building block is a compute tile that can be arrayed for more throughput; for example, a 4-tile array delivers 4x the performance of a 1-tile array. An InferX array sized to the customer's performance target is delivered with an AXI bus interface for easy integration into their SoC.

For general information on InferX and EFLX product lines, visit our website at this link. For more information under NDA, qualified customers can contact us at this link.

About Flex Logix

Flex Logix is a reconfigurable computing company providing leading-edge eFPGA and AI inference technologies for semiconductor and systems companies. Flex Logix eFPGA enables volume FPGA users to integrate the FPGA into their companion SoC, resulting in a 5-10x reduction in the cost and power of the FPGA and increasing compute density, which is critical for communications, networking, data centers, microcontrollers and more. Its scalable AI inference is the most efficient, providing much higher inference throughput per square millimeter and per watt. Flex Logix supports process nodes from 180nm to 7nm, with 5nm in development, and can support other nodes on short notice. Flex Logix is headquartered in Mountain View, California and has an office in Austin, Texas. For more information, visit https://flex-logix.com.

MEDIA CONTACTS
Kelly Karr
Tanis Communications
Email Contact
+408-718-9350

Copyright 2023. All rights reserved. Flex Logix and EFLX are registered trademarks and INFERX is a trademark of Flex Logix, Inc.

View original content to download multimedia: https://www.prnewswire.com/news-releases/flex-logix-announces-inferx-high-performance-ip-for-dsp-and-ai-inference-301805412.html

SOURCE Flex Logix Technologies, Inc.
