3/20/2024

Meta awaits shipments of new Nvidia chips


Key points:

  • Meta Platforms, Facebook’s parent company, has struck a deal with Nvidia to supply a new type of artificial intelligence chip.
  • The new Nvidia Blackwell B200 chips are expected to begin shipping in late 2024.
  • Meta Platforms plans to use the new B200 chips to train its Llama models.

Meta Platforms, Facebook’s parent company, expects to receive the first shipments of a new type of artificial intelligence chip from Nvidia later this year.

Nvidia, a leader in developing GPUs needed to power much of the cutting-edge AI research, unveiled the new B200 “Blackwell” chip at its annual developer conference on Monday.

Features of the new chips

Nvidia, the chipmaker, said the B200 has 30 times better performance for tasks such as generating chatbot responses. However, the company did not provide details on how efficiently the chip can handle the large volumes of data needed to train these chatbots.

Nvidia Chief Financial Officer Colette Kress told financial analysts on Tuesday that the company “plans to bring the product to market later this year,” but noted that a significant increase in shipments of the new GPUs is not expected until 2025.

Why did Meta make a deal with Nvidia?

Meta Platforms, the social media giant, is one of Nvidia's largest customers. The company has purchased hundreds of thousands of previous-generation chips to power its content recommendation systems and generative artificial intelligence products.

In January, Meta CEO Mark Zuckerberg said the company planned to have about 350,000 of the earlier H100 model chips in its inventory by the end of the year. Combined with other GPUs, Meta will have the equivalent of about 600,000 H100s by then.

On Monday, Zuckerberg said Meta plans to use the new Blackwell B200 chips to train its Llama models. The company is currently training the third generation of the model on two GPU clusters it announced last week, each containing about 24,000 H100 GPUs.
