Breaking News: Nvidia Debuts Tesla P100 Accelerator With 15B Transistors for AI, Deep Learning
Nvidia CEO Jen-Hsun Huang used the opening keynote of the company's annual GPU Technology Conference to announce a massive new processor designed specifically for deep learning. The Tesla P100 is the first shipping product to use Nvidia's new Pascal architecture, and is made up of 15.3 billion transistors, which the company says makes it the largest microchip ever fabricated.

The Tesla P100 is built on a new 16nm FinFET manufacturing process and carries 16GB of HBM2 graphics memory integrated onto the same chip substrate, yielding memory bandwidth of up to 720GBps. Peak performance is rated at 21.2 Teraflops for half-precision instructions, 10.6 Teraflops for single-precision and 5.3 Teraflops for double-precision workloads. Up to eight Tesla P100 chips can be interconnected using Nvidia's NVLink bus.
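The three peak-throughput figures quoted above follow the usual 4:2:1 ratio for a GPU with native half-precision support: FP16 is twice FP32, which is in turn twice FP64. A quick sanity check of the arithmetic (an illustration only, not Nvidia's own numbers beyond those quoted):

```python
# Tesla P100 peak-throughput figures, as quoted in the article (TFLOPS).
fp64 = 5.3        # double precision
fp32 = 2 * fp64   # single precision: 10.6 TFLOPS
fp16 = 2 * fp32   # half precision:   21.2 TFLOPS

print(fp32, fp16)  # 10.6 21.2
```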

The Tesla P100 is claimed to deliver over 12x the performance of Nvidia's previous-generation Maxwell architecture in neural network training scenarios. According to Nvidia, specific applications, such as the AMBER molecular dynamics code, run faster on a single Tesla P100 server node than on 48 dual-socket CPU server nodes.

Huang also said that the company has decided to "go all-in on AI", and that deep learning and artificial intelligence are its fastest growing business area. He named several areas of research, including finding a cure for cancer and understanding climate change, that require computing resources which can scale almost without limit.

Massachusetts General Hospital has set up a clinical datacentre which will use Nvidia's AI processing technology to help diagnose diseases starting with the fields of radiology and pathology, and will use its archive of 10 billion medical images to create a deep learning neural network.

The Tesla P100 will initially be available in Nvidia's new DGX-1 "deep learning supercomputer" in June, and in servers from a number of manufacturers beginning in early 2017. The DGX-1 will have eight Tesla P100 chips for a combined 170 Teraflops of half-precision performance, and is claimed to be able to deliver the deep learning throughput of 250 traditional x86 servers in a single 3U server enclosure.
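The DGX-1's headline figure is simply the per-chip half-precision rating scaled up across its eight P100s, with a small rounding in the marketing number:

```python
# DGX-1 aggregate half-precision compute: eight P100s at 21.2 TFLOPS each.
per_gpu_fp16 = 21.2            # TFLOPS per Tesla P100
dgx1_fp16 = 8 * per_gpu_fp16   # advertised as 170 TFLOPS

print(dgx1_fp16)  # 169.6
```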