Flex Logix Accelerates Growth With New Office In Austin; Prepares For Global Expansion Of Its Edge AI Inference Product Line
MOUNTAIN VIEW, Calif., Nov. 9, 2021 -- Flex Logix® Technologies, Inc., supplier of the most-efficient AI edge inference accelerator and the leading supplier of eFPGA IP, today announced the official opening of its Austin office, located just off Loop 360. Designed to support the growth of its edge AI inference business, the new office currently employs 10 people, including two of Flex Logix's top executives, both hired within the past year. The company also plans to hire 50 software and hardware engineers over the next several years.
Just last month, Flex Logix announced the production availability of its InferX™ X1P1 PCIe accelerator board. Designed to bring high-performance AI inference acceleration to edge servers and industrial vision systems, the new InferX X1 PCIe board provides customers with superior AI inference capabilities where high accuracy, high throughput and low power on complex models are needed.
"Our expansion into Austin allows us to tap into the region's large pool of highly skilled engineering, management, sales, and customer support professionals," said Dana McCarty, Vice President of Sales and Marketing for Flex Logix's Inference Products. "With production boards now available for our InferX X1, we are well positioned for rapid growth in the edge vision market for a wide range of applications in medical, retail, industrial, imaging, and robotics. This new office will enable us to quickly capitalize on this opportunity."
About the InferX X1P1 Board
The InferX X1P1 board offers the most efficient AI inference acceleration for edge AI workloads such as YOLOv3. Many customers need high-performance, low-power object detection and other high-resolution image processing capabilities for robotic vision, security, retail analytics, and many other applications. For information, visit this link.
About Flex Logix
Flex Logix is a reconfigurable computing company providing AI inference and eFPGA solutions based on software, systems and silicon. Its InferX X1 is the industry's most-efficient AI edge inference accelerator that will bring AI to the masses in high-volume applications by providing much higher inference throughput per dollar and per watt. Flex Logix's eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms, and customer needs and to implement reconfigurable accelerators that speed key workloads 30-100x compared to processors. Flex Logix is headquartered in Mountain View, California and also has offices in Austin, Texas. For more information, visit https://flex-logix.com.