Jensen Huang, CEO of Nvidia, Announces Availability of ‘Hopper’ GPU, Cloud Service for Large AI Language Models


Nvidia co-founder and CEO Jensen Huang opened the company’s GTC fall conference by announcing the general availability next month of the company’s new “Hopper” GPU in systems from Dell and others. The keynote also covered computers for healthcare, robotics, industrial automation, and automotive uses, as well as several cloud services, including an Nvidia-hosted cloud service for deep learning language models such as GPT-3.


Nvidia CEO Jensen Huang on Tuesday opened the company’s GTC fall conference by announcing that the company’s “Hopper” graphics processing unit (GPU) is in production and will begin shipping next month in systems from partners including Dell, Hewlett Packard, and Cisco Systems. Nvidia’s own systems carrying the GPU, Huang said, will be available in the first quarter of next year.

The Hopper chip, also known as the H100, is intended for data-center tasks such as artificial intelligence. Nvidia says the H100 can significantly reduce the cost of deploying AI software: for example, a workload that previously required 320 of its prior high-end GPU, the A100, can be handled by only 64 H100 chips, which need one-fifth the number of server computers and cut power usage by 3.5 times.
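The figures in Nvidia’s claim are internally consistent, as a quick calculation shows (the numbers themselves are Nvidia’s, from the keynote; this sketch just works out the ratios):

```python
# Sanity-check of Nvidia's stated H100-vs-A100 deployment figures.
a100_gpus = 320   # prior-generation GPUs for the example workload
h100_gpus = 64    # H100 GPUs Nvidia says can replace them

gpu_reduction = a100_gpus / h100_gpus
print(f"GPU count shrinks by {gpu_reduction:.0f}x")        # 5x fewer GPUs

# Fewer GPUs means proportionally fewer servers: one-fifth the count.
server_fraction = h100_gpus / a100_gpus
print(f"Servers needed: {server_fraction:.0%} of before")  # 20%
```

A 5x reduction in GPU count is exactly the “one-fifth of the number of server computers” in Nvidia’s claim; the 3.5x power figure is a separate measurement Nvidia attributes to the chip’s efficiency, not derivable from the counts alone.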

Hopper was first unveiled by Nvidia in March. The company showed benchmark scores for the chip earlier this month in the MLPerf suite of machine learning tasks.

Also: NVIDIA unveils Hopper, a new hardware architecture to turn data centers into AI factories

Alongside Hopper availability, Huang discussed a new “as-a-service” cloud offering for large language models, Nvidia NeMo LLM, which is meant to let customers easily deploy very large natural language processing models such as OpenAI’s GPT-3 and Megatron-Turing NLG 530B, the language model Nvidia developed with Microsoft.

Also: Neural Magic’s sparsity, Nvidia’s Hopper, and Alibaba’s network among firsts in the latest MLPerf AI benchmarks

NeMo will be available in “early access” fashion starting next month. GPT-3 and the other models are provided in pre-trained form and can be tuned by a developer using a method Nvidia calls “prompt learning,” which Nvidia adapted from a technique Google scientists introduced last year.
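The Google technique behind prompt learning is commonly called prompt tuning: the large model’s weights stay frozen, and only a small block of “soft prompt” embeddings, prepended to every input, is trained for the target task. A minimal NumPy sketch of the idea follows; all names here are illustrative, not the NeMo service’s actual API.

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 8    # embedding width of the (frozen) language model
prompt_len = 4   # number of trainable soft-prompt tokens

# The only trainable parameters: prompt_len x embed_dim values,
# versus billions of weights in the frozen model itself.
soft_prompt = rng.normal(scale=0.02, size=(prompt_len, embed_dim))

def prepend_prompt(token_embeddings: np.ndarray) -> np.ndarray:
    """Prepend the learned soft prompt to a sequence of token embeddings."""
    return np.concatenate([soft_prompt, token_embeddings], axis=0)

# A toy 5-token input sequence; the frozen model would then see
# prompt_len + 5 = 9 positions, the first 4 carrying the tuned prompt.
tokens = rng.normal(size=(5, embed_dim))
model_input = prepend_prompt(tokens)
print(model_input.shape)  # (9, 8)
```

Because only the soft prompt is updated during tuning, one copy of the large model can serve many tasks, each with its own small prompt, which is what makes the approach attractive for a hosted service.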

A version of NeMo hosted specifically for biomedical uses of large language models, such as drug discovery, will be called the Nvidia BioNeMo Service.

During a press briefing about the announcement, ZDNet asked Nvidia’s head of accelerated computing, Ian Buck, about the guardrails the company will build to prevent the kinds of abuses of large language models that are well documented in the literature on AI ethics.

“Yes, good question. This service is initially going to EA [early access] starting next month. It will be direct to enterprises, and we are already engaging with different organizations and different companies to develop their workflows,” said Buck.

Buck added:

They’ll obviously be doing everything in tandem with Nvidia, and every user will apply for the service, so we’ll understand more about what they’re doing, versus an open public offering, in that way. Again, we are trying to focus on providing great language models for enterprises, and that is our go-to-market with customers.

ZDNet followed up by noting to Buck that the abuses documented in the literature on large language models include biases embedded in the training data, not necessarily malicious use.

“Is it your view that if access is limited to enterprise partners, there won’t be those kinds of documented abuses, such as bias in the training materials and the resulting product?” asked ZDNet.

Buck replied:

Yes. I mean, customers are going to bring their datasets for domain training, so they definitely have to take on some of that responsibility, and we all, as a society, need to take responsibility. This stuff was trained on the internet. It needs to be understood and scoped for what it now is for industry-specific solutions. The problem is somewhat constrained because we are trying to provide a specific service for a particular use case. That bias will exist. Humans have bias, and unfortunately this is, of course, trained on human input.

The keynote also included a range of computing platforms intended for several industries, including healthcare, and for Omniverse, Nvidia’s software platform for the metaverse.

Also: Nvidia clarifies Megatron-Turing scale claim
