Nvidia and Microsoft develop 530 billion parameter AI model, but it still suffers from bias

Nvidia and Microsoft have developed an incredible 530-billion-parameter AI model, but it still suffers from bias.

The pair claim their Megatron-Turing Natural Language Generation (MT-NLG) model is the "most powerful monolithic transformer language model trained to date".

For comparison, OpenAI’s much-lauded GPT-3 has 175 billion parameters.

The duo trained the model on 15 datasets totalling 339 billion tokens. Various sampling weights...
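The piece is cut off before it details those sampling weights, but the general idea of weighted sampling across multiple training corpora can be sketched in a few lines of Python. The corpus names and weights below are purely illustrative – the article does not list the actual 15 datasets or their weights:

```python
import random

# Hypothetical corpora and sampling weights -- illustrative only;
# the actual MT-NLG datasets and weights are not given here.
DATASET_WEIGHTS = {
    "web_crawl": 0.5,
    "books": 0.3,
    "news": 0.2,
}

def sample_corpus(rng=random):
    """Pick the corpus the next training batch is drawn from,
    with probability proportional to its sampling weight."""
    names = list(DATASET_WEIGHTS)
    weights = [DATASET_WEIGHTS[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]
```

Higher-weighted corpora are simply seen more often during training; the weights need not sum to 1, since `random.choices` normalises them internally.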

Nvidia-Arm merger in doubt as CMA has ‘serious’ concerns

The proposed merger between Nvidia and British chip design giant Arm is looking increasingly doubtful, as the UK’s Competition and Markets Authority (CMA) believes the deal “raises serious competition concerns”.

A $40 billion merger of two of the biggest names in the chip industry was always bound to catch the eye of regulators, especially when it has received such vocal opposition from around the world.

Hermann Hauser, co-founder of Arm, even suggested that...

UK considers blocking Nvidia’s $40B acquisition of Arm

Bloomberg reports the UK is considering blocking Nvidia’s $40 billion acquisition of Arm over national security concerns.

Over 160 billion chips have been made for various devices based on designs from Arm. In recent years, the company has added AI accelerator chips to its lineup for neural network processing.

In the wake of the proposed acquisition, Nvidia CEO Jensen Huang said:

“ARM is an incredible company and it employs some of the greatest engineering...

NVIDIA launches UK supercomputer to search for healthcare solutions

Nvidia’s ‘Cambridge-1’ is now operational and utilising AI and simulation to advance research in healthcare.

The UK’s most powerful supercomputer and among the world’s top fifty, Cambridge-1 was announced by the technology company in October last year and cost $100 million (£72m) to build.

Its first projects with AstraZeneca, GSK, Guy’s and St Thomas’ NHS Foundation Trust, King’s College London, and Oxford Nanopore Technologies include developing a...

MLCommons releases latest MLPerf Training benchmark results

Open engineering consortium MLCommons has released its latest MLPerf Training community benchmark results.

MLPerf Training is a full system benchmark that tests machine learning models, software, and hardware.

The results are split into two divisions: closed and open. Closed submissions are better for comparing like-for-like performance as they use the same reference model to ensure a level playing field. Open submissions, meanwhile, allow participants to submit a...

NVIDIA breakthrough emulates images from small datasets for groundbreaking AI training

NVIDIA’s latest breakthrough generates new images from existing small datasets, with truly groundbreaking potential for AI training.

The company demonstrated its latest AI model using a small dataset of artwork from the Metropolitan Museum of Art – just a fraction of the size typically needed to train a Generative Adversarial Network (GAN).

From the dataset, NVIDIA’s AI was able to create new images which replicate the style of the original artist’s work. These images...
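The article doesn’t explain how the model avoids overfitting so little data, but NVIDIA’s published approach for limited-data GAN training (StyleGAN2-ADA, which this Met artwork demo appears to draw on) adaptively augments the images shown to the discriminator so it can’t simply memorise the small training set. A rough, stdlib-only sketch of that feedback loop – with illustrative constants, not NVIDIA’s actual implementation – looks like this:

```python
import random

# Illustrative constants -- not NVIDIA's actual values or code.
TARGET = 0.6   # desired level of the discriminator "overfitting" signal
STEP = 0.01    # how quickly the augmentation probability p adapts

def update_p(p, overfit_signal):
    """Nudge the augmentation probability up when the discriminator
    looks like it is overfitting the small dataset, down otherwise."""
    if overfit_signal > TARGET:
        p = min(1.0, p + STEP)
    else:
        p = max(0.0, p - STEP)
    return p

def maybe_augment(image, p, rng=random):
    """Apply an augmentation to a discriminator input with
    probability p; here 'image' is just a placeholder list and the
    reversal stands in for a real transform such as a flip."""
    if rng.random() < p:
        return list(reversed(image))
    return image
```

In the real system the overfitting signal is derived from discriminator outputs on the training set; the point of the sketch is simply the control loop that keeps augmentation strength proportional to how badly the discriminator is overfitting.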

NVIDIA DGX Station A100 is an ‘AI data-centre-in-a-box’

NVIDIA has unveiled its DGX Station A100, an “AI data-centre-in-a-box” powered by up to four 80GB versions of the company’s record-setting GPU.

The A100 Tensor Core GPU set new MLPerf benchmark records last month—outperforming CPUs by up to 237x in data centre inference. In November, Amazon Web Services made eight A100 GPUs available in each of its P4d instances.

For those who prefer their hardware local, the DGX Station A100 is available in either four 80GB A100...

NVIDIA chucks its MLPerf-leading A100 GPU into Amazon’s cloud

NVIDIA’s A100 set a new record in the MLPerf benchmark last month and now it’s accessible through Amazon’s cloud.

Amazon Web Services (AWS) first launched a GPU instance 10 years ago with the NVIDIA M2050. It’s rather poetic that, a decade on, NVIDIA is now providing AWS with the hardware to power the next generation of groundbreaking innovations.

The A100 outperformed CPUs in this year’s MLPerf by up to 237x in data centre inference. A single NVIDIA DGX A100...

NVIDIA sets another AI inference record in MLPerf

NVIDIA has set yet another record for AI inference in MLPerf with its A100 Tensor Core GPUs.

MLPerf consists of five inference benchmarks covering three of today’s main AI applications: image classification, object detection, and translation.

“Industry-standard MLPerf benchmarks provide relevant performance data on widely used AI networks and help make informed AI platform buying decisions,” said Rangan Majumder, VP of Search and AI at Microsoft.

Last...

GTC 2020: Using AI to help put COVID-19 in the rear-view mirror

This year’s GTC is Nvidia’s biggest event yet, but – like the rest of the world – it’s had to adapt to the unusual circumstances we all find ourselves in. CEO Jensen Huang swapped his usual big stage for nine clips with such exotic backdrops as his kitchen.

AI is helping with COVID-19 research around the world, and much of it is being powered by NVIDIA GPUs. It’s a daunting task: new drugs often cost over $2.5 billion in research and development – a figure that has been doubling every nine years –...