Networking hardware startup Enfabrica raises $125M


Enfabrica, a company that designs networking chips for AI and machine learning workloads, announced today that it has raised $125 million in a Series B funding round. The round values the company at “five times” its Series A post-money valuation. According to Rochan Sankar, Enfabrica’s CEO and co-founder, the funding will support the company’s R&D and operations and expand its engineering, sales, and marketing teams. Atreides Management led the round, with participation from Sutter Hill Ventures, Nvidia, IAG Capital Partners, Liberty Global Ventures, Valor Equity Partners, Infinitum Partners, and Alumni Ventures.

A rendering of Enfabrica’s ACF-S networking hardware. Image Credits: Enfabrica

Despite a challenging funding environment for chip startups, Enfabrica raised a round of this magnitude by offering a solution that sets it apart from many of its peers. As generative AI and large language models drive the most significant infrastructure push cloud computing has seen across multiple sectors, solutions like Enfabrica’s stand to meet the surging demand for networking technology.

Enfabrica was founded in 2020 and emerged from stealth in 2023. The startup was created to meet the AI industry’s growing appetite for “parallel, accelerated, and heterogeneous” infrastructure, which, in other words, means GPUs. Sankar, formerly a director of engineering at Broadcom, teamed up with Shrijeet Mukherjee, who previously led networking platforms and architecture at Google, to build Enfabrica.

With an engineering team drawn from Cisco, Meta, and Intel, Enfabrica is building an architecture for networking chips that can support the I/O and memory-movement demands of parallel workloads, including AI. According to Sankar, conventional networking chips such as switches cannot keep up with the data movement needs of modern AI workloads, given the massive datasets that models like Meta’s Llama 2 and GPT-4 ingest during training. Enfabrica’s bet is that networking hardware designed for parallelism can efficiently and sustainably bridge growing AI workload demand to the overall cost, ease of scaling, and efficiency of compute clusters.

Enfabrica has developed a new type of hardware called the Accelerated Compute Fabric Switch (ACF-S), which enables “multi-terabit-per-second” data transfer between GPUs, CPUs, AI accelerators, memory, and networking devices. The ACF-S uses “standards-based” interfaces, can scale to tens of thousands of nodes, and can cut the GPU compute required for large language models such as Llama 2 by roughly 50% while delivering the same performance.

According to Sankar, ACF-S is a converged solution that eliminates the need for traditional server I/O and networking chips like rack-level networking switches, server network interface controllers, and PCIe switches. It provides high-performance networking, I/O, and memory within a data center server rack, complementing GPUs, CPUs, and accelerators.

The ACF-S is designed to optimize existing hardware and support multiple processor vendors without proprietary lock-in. For companies handling inferencing, it can reduce the number of AI accelerators, GPUs, and CPUs required, all while moving vast amounts of data quickly.

Enfabrica is not the only company chasing the AI trend; incumbents like Cisco and Broadcom also offer networking solutions aimed at AI workloads. Still, Enfabrica is in a strong position, given the enormous investments being made in AI infrastructure.

The latest funding tranche will support Enfabrica’s production and go-to-market efforts. While Sankar declined to discuss the company’s customers, he is confident about its prospects.

Investments in AI infrastructure are expected to significantly increase data center capital expenditures to over $500 billion by 2027, according to the Dell’Oro Group. Additionally, IDC predicts that investment in AI-tailored hardware will see a compound annual growth rate of 20.5% over the next five years.

The cost and power footprint of AI computing, whether on-premises or in the cloud, should be a top priority for CIOs, C-suite executives, and IT organizations deploying AI services. Despite economic headwinds, Enfabrica has advanced its funding, product, and market potential with technology that disrupts existing networking and server I/O chip solutions. The market opportunity, and the technology paradigm shift that generative AI and accelerated computing have driven over the past 18 months, is significant.

Enfabrica, based in Mountain View, currently employs just over 100 individuals in North America, Europe, and India.
