Intel Unleashes Enterprise AI with Gaudi 3, AI Open Systems Strategy and New Customer Wins

At the Intel Vision 2024 customer and partner conference, Intel introduced the Intel Gaudi 3 accelerator to bring performance, openness and choice to enterprise generative AI (GenAI), and unveiled a suite of new open scalable systems, next-gen products and strategic collaborations to accelerate GenAI adoption. With only 10% of enterprises successfully moving GenAI projects into production last year, Intel’s latest offerings address the challenges businesses face in scaling AI initiatives.

“Innovation is advancing at an unprecedented pace, all enabled by silicon – and every company is quickly becoming an AI company,” said Intel CEO Pat Gelsinger. “Intel is bringing AI everywhere across the enterprise, from the PC to the data center to the edge. Our latest Gaudi, Xeon and Core Ultra platforms are delivering a cohesive set of flexible solutions tailored to meet the changing needs of our customers and partners and capitalize on the immense opportunities ahead.”

Enterprises are looking to scale GenAI from pilot to production. To do so, they need readily available solutions, built on performant and cost- and energy-efficient processors like the Intel Gaudi 3 AI accelerator, that also address complexity, fragmentation, data security and compliance requirements.

Introducing Gaudi 3 for AI Training and Inference
The Intel Gaudi 3 AI accelerator will power AI systems with up to tens of thousands of accelerators connected through the common standard of Ethernet. Intel Gaudi 3 promises 4x more AI compute for BF16 and a 1.5x increase in memory bandwidth over its predecessor. The accelerator will deliver a significant leap in AI training and inference for global enterprises looking to deploy GenAI at scale. Compared with the Nvidia H100, Intel Gaudi 3 is projected to deliver 50% faster time-to-train on average across the Llama 2 7B and 13B parameter models and the GPT-3 175B parameter model. Additionally, Intel Gaudi 3 inference throughput is projected to outperform the H100 by 50% on average, with 40% better inference power efficiency on average, across the Llama 2 7B and 70B parameter models and the Falcon 180B parameter model.

Intel Gaudi 3 provides open, community-based software and industry-standard Ethernet networking, allowing enterprises to scale flexibly from a single node to clusters, super-clusters and mega-clusters with thousands of nodes, supporting inference, fine-tuning and training at the largest scale. Intel Gaudi 3 will be available to OEMs – including Dell Technologies, HPE, Lenovo and Supermicro – in the second quarter of 2024.
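
As a rough illustration of what deploying a workload on Gaudi hardware looks like in practice, the sketch below moves a small PyTorch model onto a Gaudi device and runs a BF16 inference pass. It assumes the Intel Gaudi PyTorch bridge (habana_frameworks.torch) is installed and exposes the "hpu" device; the model and tensor shapes are illustrative placeholders, not a production configuration.

```python
# Minimal sketch: BF16 inference on an Intel Gaudi device via the PyTorch bridge.
# Assumes the Intel Gaudi software stack (habana_frameworks) is installed;
# the model and tensor shapes below are illustrative placeholders.
import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device with PyTorch

device = torch.device("hpu")

# Placeholder network; in practice this would be a GenAI model such as Llama 2.
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 4096),
).to(device).eval()

x = torch.randn(8, 4096, device=device)

# BF16 is the precision highlighted in Gaudi 3's compute claims above.
with torch.no_grad(), torch.autocast(device_type="hpu", dtype=torch.bfloat16):
    y = model(x)

htcore.mark_step()  # flush the accumulated graph to the accelerator
print(y.shape)
```

The same code pattern scales out with standard PyTorch distributed tooling over Ethernet, which is the scaling path the paragraph above describes.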

Generating Value for Customers with Intel AI Solutions
Intel shared broad momentum with enterprise customers and partners across industries to deploy Intel Gaudi accelerator solutions for new and innovative generative AI applications:

Bharti Airtel: Embracing the power of Intel’s cutting-edge technology, Airtel plans to leverage its rich telecom data to enhance its AI capabilities and turbocharge the experiences of its customers. The deployments will be in line with Airtel’s commitment to stay at the forefront of technological innovation and help drive new revenue streams in a rapidly evolving digital landscape.

Infosys: The global leader in next-generation digital services and consulting announced a strategic collaboration to bring Intel technologies, including 4th and 5th Gen Intel Xeon processors, Intel Gaudi 2 AI accelerators and Intel Core Ultra, to Infosys Topaz – an AI-first set of services, solutions and platforms that accelerate business value using generative AI technologies.

Ola/Krutrim: Using Intel Gaudi 2 to pre-train and fine-tune its first Indian foundation model with generative capabilities in 10 languages, delivering industry-leading price/performance versus available market solutions. Krutrim is now pre-training a larger foundation model on an Intel Gaudi 2 cluster.

Intel also announced collaborations with Google Cloud, Thales and Cohesity to leverage Intel’s confidential computing capabilities in their cloud instances. These include Intel Trust Domain Extensions (Intel TDX), Intel Software Guard Extensions (Intel SGX) and Intel’s attestation service. Customers can run their AI models and algorithms in a trusted execution environment (TEE) and leverage Intel’s trust services to independently verify the trustworthiness of these TEEs.
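
To make that flow concrete, the sketch below outlines the general remote-attestation pattern the paragraph describes: a workload inside a TEE produces signed evidence, an attestation service appraises it, and a relying party verifies the resulting token before releasing secrets such as model-decryption keys. The helper name collect_tee_evidence and the ATTESTATION_URL endpoint are hypothetical placeholders, not Intel’s actual attestation API or client SDK.

```python
# Illustrative-only sketch of a remote-attestation flow for a TEE-hosted AI workload.
# collect_tee_evidence() and ATTESTATION_URL are hypothetical placeholders; Intel's
# real attestation service has its own endpoints and client libraries.
import requests

ATTESTATION_URL = "https://attestation.example.com/appraise"  # placeholder endpoint


def collect_tee_evidence() -> bytes:
    """Inside the TEE (e.g., an Intel TDX trust domain or SGX enclave), produce
    signed evidence (a 'quote') binding the workload's measurements to the
    hardware root of trust. Platform-specific; supplied by the TEE runtime and
    stubbed out here."""
    raise NotImplementedError


def request_attestation_token(evidence: bytes) -> str:
    """Submit the evidence to an attestation service, which appraises it against
    policy and returns a signed token that a relying party can verify before
    releasing secrets (for example, the key that decrypts model weights) to the TEE."""
    resp = requests.post(ATTESTATION_URL, data=evidence, timeout=30)
    resp.raise_for_status()
    return resp.json()["token"]


if __name__ == "__main__":
    token = request_attestation_token(collect_tee_evidence())
    print("attestation token:", token)
```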
