Thursday, 7 April 2016

NVIDIA's Tesla P100 is an advanced GPU for data centers


NVIDIA has announced a new GPU dubbed the Tesla P100, designed specifically for deep learning. It is based on NVIDIA's new Pascal architecture and is claimed to be the biggest chip ever made.

The Tesla P100 is built on a 16nm FinFET fabrication node, packs 15 billion transistors and pairs the GPU with 16GB of HBM2 memory integrated on the same package, yielding memory bandwidth of up to 720GBps. Peak performance is rated at 21.2 teraflops for half-precision, 10.6 teraflops for single-precision and 5.3 teraflops for double-precision workloads.
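Those ratings scale with precision because Pascal can process two packed FP16 values per instruction. As a rough illustration only (this is not NVIDIA sample code; the kernel names, array size and values are invented for the example, and it assumes a CUDA 8.0+ toolkit compiled for compute capability 6.0), a fused multiply-add over packed __half2 data could look like this:

#include <cstdio>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

// Fill the arrays on the device; __floats2half2_rn packs two floats
// into a single __half2 value (low lane, high lane).
__global__ void init(__half2* a, __half2* b, __half2* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        a[i] = __floats2half2_rn(1.0f, 2.0f);
        b[i] = __floats2half2_rn(0.5f, 0.25f);
        c[i] = __floats2half2_rn(0.0f, 0.0f);
    }
}

// c = a * b + c, two FP16 lanes per instruction.
__global__ void fma_half2(const __half2* a, const __half2* b, __half2* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = __hfma2(a[i], b[i], c[i]);
}

// Print one result from the device to avoid host-side half conversions.
__global__ void check(const __half2* c)
{
    printf("c[0] = (%f, %f)\n", __low2float(c[0]), __high2float(c[0]));
}

int main()
{
    const int n = 1 << 20;                     // number of __half2 elements (illustrative)
    __half2 *a, *b, *c;
    cudaMalloc(&a, n * sizeof(__half2));
    cudaMalloc(&b, n * sizeof(__half2));
    cudaMalloc(&c, n * sizeof(__half2));

    const int threads = 256, blocks = (n + threads - 1) / threads;
    init<<<blocks, threads>>>(a, b, c, n);
    fma_half2<<<blocks, threads>>>(a, b, c, n);
    check<<<1, 1>>>(c);                        // expect (0.5, 0.5)
    cudaDeviceSynchronize();

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Built with something like nvcc -arch=sm_60, the packed-FP16 path is what lets half-precision throughput run at twice the single-precision rate.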

With NVIDIA's NVLink high-speed interconnect, up to eight Tesla P100 GPUs can be linked to maximize application performance in a single node. NVIDIA also claims the Tesla P100 delivers over 12x the neural network training performance of the previous-generation Maxwell architecture.
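To give a sense of how software sees those interconnected GPUs, the sketch below uses the standard CUDA peer-to-peer API; on NVLink-connected Tesla P100s a peer copy like this travels over NVLink rather than PCIe. It is only a minimal illustration, and the device IDs and buffer size are chosen arbitrarily:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count < 2) { printf("Need at least two GPUs\n"); return 0; }

    // Check whether device 0 can directly address device 1's memory.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    printf("Peer access 0 -> 1: %s\n", canAccess ? "yes" : "no");
    if (!canAccess) return 0;

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);          // second argument (flags) must be 0

    // Allocate a buffer on each GPU and copy directly between them.
    const size_t bytes = 64 << 20;             // 64 MB, illustrative
    void *src = nullptr, *dst = nullptr;
    cudaSetDevice(1); cudaMalloc(&src, bytes);
    cudaSetDevice(0); cudaMalloc(&dst, bytes);

    cudaMemcpyPeer(dst, 0, src, 1, bytes);     // GPU 1 -> GPU 0, no host staging
    cudaDeviceSynchronize();

    cudaFree(dst);
    cudaSetDevice(1); cudaFree(src);
    return 0;
}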

NVIDIA positions the GPU for high-performance computing (HPC), deep learning and other compute-intensive workloads in data centers. The company expects the Tesla P100 to power AI for self-driving cars and to accelerate several areas of research, including the search for a cancer cure and the understanding of climate change.

The Tesla P100 GPU will initially be available in NVIDIA's new DGX-1 deep learning system in June, and is also expected to be available from leading server manufacturers in early 2017.

