Flex Logix Announces nnMAX AI Inference IP In Development On GLOBALFOUNDRIES 12LP Platform
MOUNTAIN VIEW, Calif. -- Aug. 31, 2020 -- Flex Logix® Technologies, Inc., the leading supplier of embedded FPGA (eFPGA) and AI inference IP, architecture and software, today announced that its nnMAX™ AI Inference IP is in development on the GLOBALFOUNDRIES® (GF®) 12LP FinFET platform under an agreement with the U.S. Government. The nnMAX AI IP on GF 12LP, extendable to GF's 12LP+ for enhanced power and performance, is a superior solution for DSP acceleration and AI inference functions. The IP will also be available to commercial customers in 2H 2021.
"We are excited to expand our nnMAX IP portfolio in support of aerospace and commercial programs requiring high-performance edge inference solutions and manufacturing in an advanced US wafer fab," said Geoff Tate, CEO and co-founder of Flex Logix. "No other inference solution on the market delivers more throughput on tough models for less dollars and less watts, which is the number one requirement customers are asking for today."
nnMAX AI Inference provides TensorFlow Lite/ONNX-programmable inference with more throughput per unit of silicon area than alternatives. Flex Logix's nnMAX is scalable: an N×N array of nnMAX inference tiles delivers N² the throughput of a single tile. With nnMAX available on GF's 12LP, chips meeting the most demanding AI inference processing needs can now be manufactured efficiently in the United States.
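The tiling claim above is simple arithmetic: an N×N array contains N² tiles, so aggregate throughput grows quadratically in N. A minimal sketch, using a hypothetical per-tile TOPS figure (not a published nnMAX specification):

```python
# Sketch of N x N tile scaling: an N x N array holds N^2 tiles, so its
# aggregate throughput is N^2 times that of a single tile.

TILE_TOPS = 1.0  # hypothetical throughput of one inference tile, in TOPS


def array_throughput(n: int, tile_tops: float = TILE_TOPS) -> float:
    """Aggregate throughput of an n x n tile array (n * n tiles)."""
    return n * n * tile_tops


for n in (1, 2, 4):
    print(f"{n}x{n} array: {array_throughput(n):.1f} TOPS")
```

Doubling the array edge (N → 2N) quadruples the throughput, which is the scaling behavior the announcement describes.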
"Flex Logix's nnMAX provides a unique reconfigurable data path option for AI inference, enabling power-optimized implementations," said Mark Ireland, vice president of ecosystem and design solutions at GF. "As a vital supplier of differentiated technologies, we see this IP as a great addition to GLOBALFOUNDRIES' 12LP platform, extendable to 12LP+, that will enable clients, including the U.S. government, to develop innovative solutions for AI training and inference applications."
Through its longstanding partnership with GF, Flex Logix has proven silicon for EFLX eFPGA on GF's 12LP, with several SoC designs in production and many more in design across multiple customers.
About Flex Logix
Flex Logix provides solutions for making flexible chips and accelerating neural network inferencing. Its eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms and customer needs, and to implement reconfigurable accelerators that speed key workloads 30-100x compared to processors. Flex Logix's second product line, nnMAX, utilizes its eFPGA and interconnect technology to provide modular, scalable neural inferencing from 1 to >100 TOPS with higher throughput/$ and throughput/watt compared to other architectures. Flex Logix is headquartered in Mountain View, California.