Arrcus Unveils AI Network Fabric at Mobile World Congress

At Mobile World Congress in Barcelona, Arrcus highlighted a significant development at the intersection of artificial intelligence and network infrastructure. As AI workloads shift from large-scale training in data centers to real-world inference at the edge, demand is growing for network fabrics that are not only intelligent but also policy-aware. The shift is fueled by a surge in use cases such as autonomous driving, oil drilling, and precision farming, all of which require high throughput and low latency.

Modern network infrastructure must embrace this transformation. Shekar Ayyar, CEO of Arrcus, noted that traditional techniques such as load balancing and caching no longer suffice for the challenges posed by AI. Instead, a new distributed architecture is needed: one that seamlessly connects edge nodes to training nodes and routes traffic efficiently, so the network can support intensive inference workloads.

The Arrcus Inference Network Fabric (AINF) stands as a solution to this modern challenge. Designed with a policy-aware approach, AINF connects training clusters to edge nodes while providing granular policy controls. This allows operators to dictate traffic patterns based on workload needs, optimizing for latency, throughput, power, and data sovereignty.
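The policy-aware selection described above can be pictured as an admit-then-optimize step: filter out endpoints that violate hard constraints (latency, throughput, data sovereignty), then choose the cheapest remaining one by some cost such as power. The sketch below is purely illustrative — Arrcus has not published an AINF API, and every name here (`EdgeNode`, `Policy`, `pick_node`) is invented for this example:

```python
# Hypothetical sketch of a policy-aware placement decision for inference
# traffic. Not an Arrcus API; all names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    region: str            # used for data-sovereignty constraints
    latency_ms: float      # measured round-trip latency to the client
    gbps_free: float       # spare throughput on the path
    watts_per_gbps: float  # rough power cost of carrying traffic

@dataclass
class Policy:
    max_latency_ms: float
    min_gbps: float
    allowed_regions: set   # data must stay inside these regions

def pick_node(nodes, policy):
    """Return the admissible node with the lowest power cost, or None."""
    admissible = [
        n for n in nodes
        if n.region in policy.allowed_regions     # sovereignty
        and n.latency_ms <= policy.max_latency_ms # latency budget
        and n.gbps_free >= policy.min_gbps        # throughput floor
    ]
    # Among nodes that satisfy every hard constraint, optimize for power.
    return min(admissible, key=lambda n: n.watts_per_gbps, default=None)
```

For example, a sovereignty rule restricting a workload to EU nodes would exclude a faster, cheaper US node and select the compliant one instead — the kind of trade-off the article attributes to AINF's granular policy controls.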

During the congress, Arrcus announced strategic partnerships aimed at bolstering its AI infrastructure capabilities. One notable collaboration is with Fujitsu, leveraging their AI inference chip, MONAKA, to enhance AINF’s efficiency. Additionally, partnerships with Nvidia, Broadcom, and Lightstorm further extend the network’s reach. Integration with Lightstorm’s network-as-a-service solution, Polarin, allows Arrcus to cater to large enterprises and hyperscalers entering Asian markets.

Further collaborations with UfiSpace and Lanner focus on delivering AI-optimized hardware for data centers. These expanded partnerships underscore Arrcus's readiness to address the evolving needs of AI inference on a global scale. As Ayyar put it, while training focuses on building complex models, the priority now is putting those models to work efficiently for impactful results.