Economics of the FPGA
Successive generations of chips are becoming harder to justify as smaller process geometries grow nonlinearly more expensive. Increased integration, meanwhile, drives up packaging complexity and cost, and that complexity in turn extends design time and expense. As a result, the margins available from the volume production of ICs are shrinking. Growing competition compounds the problem: it gives consumers more choices, which reduces per-product volumes, and it shortens both the market life and the lifetime volume of products. New compute-intensive nodes and technologies must therefore be increasingly agile, not only to support changing market demands but also to keep pace with rapidly evolving deep-learning models. There is an apparent gap in FPGAs serving AI applications of “middle of the road” complexity, forcing designers to rely on either custom chiplets or embedded processors for hardware acceleration. The new FPGA economy paradigm opened by Efinix frees designers to innovate more flexibly in a realm that will deliver revolutionary benefits to society. This quantum shift in product design possibilities marks an inflection point away from the dead end of custom silicon and toward the customizable blank slate of FPGA fabric.
Flex Logix Raises $55M Series D Financing As It Accelerates Market Adoption of AI Inference and eFPGA Solutions
Flex Logix® Technologies, Inc., supplier of the fastest and most-efficient AI edge inference accelerator and the leading supplier of eFPGA IP, announced today the close of a $55 million oversubscribed Series D funding round. Mithril Capital Management led the financing with significant participation by existing investors Lux Capital, Eclipse Ventures and the Tate Family Trust.
Flex Logix’s inference architecture is unique: it is optimized for the low-latency operation that edge megapixel vision applications require. It combines numerous 1-dimensional tensor processors with a reconfigurable, high-bandwidth, non-blocking interconnect that lets each layer of the neural network model be configured for maximum utilization, yielding very high performance at lower cost and power. The connections between compute and memory are reconfigured in millionths of a second as the model is processed. This architecture is the basis of Flex Logix’s InferX™ X1 edge inference accelerator, which is now running YOLOv3 object detection and sampling to lead customers.
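To make the per-layer reconfiguration idea concrete, here is a minimal conceptual sketch, not Flex Logix’s actual API or hardware model. All class and field names (`Layer`, `ReconfigurableAccelerator`, the resource counts) are hypothetical; the sketch only illustrates the general principle that re-binding a fixed pool of compute and memory bandwidth to each layer keeps utilization high, with utilization bounded by whichever resource the layer strains most.

```python
# Illustrative model only: a fixed pool of tensor processors and memory
# bandwidth is "re-wired" for each layer of a neural network, so every
# layer runs as close to full utilization as its demands allow.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    tensor_ops: int   # hypothetical: 1-D tensor-processor demand
    bandwidth: int    # hypothetical: memory-bandwidth demand

class ReconfigurableAccelerator:
    def __init__(self, total_ops: int, total_bandwidth: int):
        self.total_ops = total_ops
        self.total_bandwidth = total_bandwidth

    def configure_for(self, layer: Layer) -> float:
        # Reconfigure compute-to-memory connections for this layer;
        # utilization is capped by the scarcer of the two resources.
        op_util = min(1.0, layer.tensor_ops / self.total_ops)
        bw_util = min(1.0, layer.bandwidth / self.total_bandwidth)
        return min(op_util, bw_util)

    def run(self, model: list[Layer]) -> dict[str, float]:
        # On real hardware the interconnect is rewired between layers in
        # microseconds; here it is simply a per-layer call.
        return {layer.name: self.configure_for(layer) for layer in model}

model = [Layer("conv1", 32, 100), Layer("conv2", 64, 60), Layer("fc", 16, 20)]
acc = ReconfigurableAccelerator(total_ops=64, total_bandwidth=80)
print(acc.run(model))
```

The sketch shows why reconfigurability matters: a fixed, non-reconfigurable binding sized for one layer would leave the others underutilized, whereas re-binding per layer lets each one claim the resources it needs.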