Deep Learning GPU Benchmarks - Tesla V100 vs RTX 2080 Ti vs GTX 1080 Ti vs Titan V

 

At Lambda, we're often asked "what's the best GPU for deep learning?" In this post and accompanying white paper, we explore this question by evaluating the top 5 GPUs used by AI researchers: the RTX 2080 Ti, RTX 2080, GTX 1080 Ti, Titan V, and Tesla V100.

To determine the best machine learning GPU, we factor in both cost and performance.

Results summary

As of February 8, 2019, the NVIDIA RTX 2080 Ti is the best GPU for deep learning research on a single GPU system running TensorFlow. A typical single GPU system with this GPU will be:

  • 37% faster than the 1080 Ti with FP32, 62% faster with FP16, and 25% more expensive.
  • 35% faster than the 2080 with FP32, 47% faster with FP16, and 25% more expensive.
  • 96% as fast as the Titan V with FP32, 3% faster with FP16, and ~1/2 of the cost.
  • 80% as fast as the Tesla V100 with FP32, 82% as fast with FP16, and ~1/5 of the cost.

Note that all experiments utilized Tensor Cores when available, and that prices reflect the cost of a complete single GPU system. As a system builder and AI research company, we aim for benchmarks that are scientific, reproducible, correlated with real-world training scenarios, and accurately priced. So we've decided to make the spreadsheet that generated our graphs and (performance / $) tables public. You can view the benchmark data spreadsheet here.

Results in-depth

Performance of each GPU was evaluated by measuring FP32 and FP16 throughput (# of training samples processed per second) while training common models on synthetic data. We divided each GPU's throughput on each model by the 1080 Ti's throughput on the same model; this normalized the data and gave each GPU's per-model speedup over the 1080 Ti. Speedup is a measure of the relative performance of two systems processing the same job.

[Figure: Throughput of each GPU on various models; raw data can be found here.]

We then averaged each GPU's speedup over the 1080 Ti across all models:

[Figure: FP32 and FP16 average speedup vs. the 1080 Ti.]
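As a minimal sketch of this normalize-then-average computation: the ResNet-50 throughputs below are taken from the FP32 raw data table later in this post; the other models are omitted for brevity.

```python
# Speedup over the 1080 Ti per model, then averaged across models.
# ResNet-50 FP32 throughputs (images/sec) are from the raw data table below.
throughput = {
    "ResNet-50": {"2080": 209.89, "2080 Ti": 286.05, "Titan V": 298.28,
                  "V100": 368.63, "1080 Ti": 203.99},
    # ...one entry per benchmarked model...
}

BASELINE = "1080 Ti"
gpus = throughput["ResNet-50"].keys()

# Per-model speedup = GPU throughput / baseline throughput on the same model.
speedups = {
    model: {gpu: scores[gpu] / scores[BASELINE] for gpu in gpus}
    for model, scores in throughput.items()
}

# Average each GPU's speedup across all models.
avg_speedup = {
    gpu: sum(speedups[m][gpu] for m in speedups) / len(speedups)
    for gpu in gpus
}
print(avg_speedup)  # with ResNet-50 alone: {'2080 Ti': 1.40, ...}
```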

Finally, we divided each GPU's average speedup by the total system cost to calculate our winner:

[Figure: FP32 and FP16 performance per dollar. Units are speedup per $1,000 of system cost.]
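In code, the winning metric is just average speedup divided by system cost in thousands of dollars. System prices are from the price/performance tables below; the FP32 average speedups here are back-computed from the AVG row of that table for illustration (the post doesn't publish the averages directly).

```python
# Performance per dollar: average speedup per $1,000 of total system cost.
system_price_k = {"2080": 1.99, "2080 Ti": 2.49, "Titan V": 4.29,
                  "V100": 11.09, "1080 Ti": 1.99}
avg_speedup = {"2080": 1.01, "2080 Ti": 1.37, "Titan V": 1.42,
               "V100": 1.77, "1080 Ti": 1.00}  # FP32, back-computed

perf_per_kusd = {gpu: round(avg_speedup[gpu] / system_price_k[gpu], 2)
                 for gpu in system_price_k}
print(perf_per_kusd)                              # {'2080 Ti': 0.55, ...}
print(max(perf_per_kusd, key=perf_per_kusd.get))  # -> '2080 Ti'
```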

Under this evaluation metric, the RTX 2080 Ti wins our contest for best GPU for Deep Learning training.

2080 Ti vs V100 - is the 2080 Ti really that fast?

How can the 2080 Ti be 80% as fast as the Tesla V100 but cost only about 1/8th as much per card? The answer is simple: NVIDIA wants to segment the market so that those with high willingness to pay (hyperscalers) only buy their Tesla line of cards, which retails for ~$9,800. The RTX and GTX series of cards still offer the best performance per dollar.

If you're not AWS, Azure, or Google Cloud, then you're probably much better off buying the 2080 Ti. There are, however, a few key use cases where the V100s can come in handy:

  1. If you need FP64 compute. If you're doing Computational Fluid Dynamics, n-body simulation, or other work that requires high numerical precision (FP64), then you'll need to buy the Titan V or V100s. If you're not sure if you need FP64, you don't. You would know.
  2. If you absolutely need 32 GB of memory because your model size won't fit into 11 GB of memory with a batch size of 1. If you are creating your own model architecture and it simply can't fit even when you bring the batch size down, the V100 could make sense (see the rough memory estimate after this list). However, this is a pretty rare edge case. Fewer than 5% of our customers are using custom models. Most use something like ResNet, VGG, Inception, SSD, or YOLO.
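To make use case 2 concrete, here is a rough back-of-the-envelope sketch. The parameter counts are assumed for illustration, and activation memory (which grows with batch size) is deliberately ignored, so this gives only a lower bound, not a fitting guarantee.

```python
# Rough floor on GPU memory for weights + gradients + optimizer state.
# Activation memory is ignored, so a model can still fail to fit even
# when this estimate is comfortably below the card's capacity.
def min_memory_gb(n_params: float, bytes_per_param: int = 4,
                  optimizer_copies: int = 2) -> float:
    """Weights, gradients, and optimizer slots (Adam keeps 2 extra copies)."""
    tensors = 1 + 1 + optimizer_copies
    return n_params * bytes_per_param * tensors / 1024**3

# VGG16 has ~138M parameters (well-known figure, used here for illustration).
print(f"VGG16 floor: {min_memory_gb(138e6):.1f} GB")   # ~2.1 GB -- fits in 11 GB
print(f"2B-param model: {min_memory_gb(2e9):.1f} GB")  # ~29.8 GB -- needs a 32 GB V100
```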

So, you're still wondering: why would anybody buy the V100? It comes down to marketing.

The 2080 Ti is a Porsche 911, the V100 is a Bugatti Veyron

The V100 is a bit like a Bugatti Veyron. It's one of the fastest street legal cars in the world, ridiculously expensive, and, if you have to ask how much the insurance and maintenance is, you can't afford it. The RTX 2080 Ti, on the other hand, is like a Porsche 911. It's very fast, handles well, expensive but not ostentatious, and with the same amount of money you'd pay for the Bugatti, you can buy the Porsche, a home, a BMW 7-series, send three kids to college, and have money left over for retirement.

And if you think I'm going overboard with the Porsche analogy, you can buy a DGX-1 8x V100 for $120,000 or a Lambda Blade 8x 2080 Ti for $28,000 and have enough left over for a real Porsche 911. Your pick.

Raw performance data

FP32 throughput

FP32 (single-precision) arithmetic is the most commonly used precision when training CNNs. FP32 data comes from code in the Lambda TensorFlow benchmarking repository.

| Model / GPU | 2080 | 2080 Ti | Titan V | V100 | 1080 Ti |
|---|---|---|---|---|---|
| ResNet-50 | 209.89 | 286.05 | 298.28 | 368.63 | 203.99 |
| ResNet-152 | 82.78 | 110.24 | 110.13 | 131.69 | 82.83 |
| InceptionV3 | 141.9 | 189.31 | 204.35 | 242.7 | 130.2 |
| InceptionV4 | 61.6 | 81 | 78.64 | 90.6 | 56.98 |
| VGG16 | 123.01 | 169.28 | 190.38 | 233 | 133.16 |
| AlexNet | 2567.38 | 3550.11 | 3729.64 | 4707.67 | 2720.59 |
| SSD300 | 111.04 | 148.51 | 153.55 | 186.8 | 107.71 |

FP16 throughput (Sako)

FP16 (half-precision) arithmetic is sufficient for training many networks. We use Yusaku Sako's benchmark scripts, which produce both FP16 and FP32 results. Here you can clearly see the 2080 Ti beating out the 1080 Ti's FP16 performance.
| Model / GPU | 2080 | 2080 Ti | Titan V | V100 | 1080 Ti |
|---|---|---|---|---|---|
| VGG16 | 181.2 | 238.45 | 270.27 | 333.33 | 149.39 |
| ResNet-152 | 62.67 | 103.29 | 84.92 | 108.54 | 62.74 |
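For readers who want to try FP16 training themselves: the Sako scripts target TF 1.x, but a minimal modern equivalent looks like the sketch below. It assumes TensorFlow 2.4+ with the Keras mixed-precision API; this is not the code that produced the numbers above.

```python
# Sketch of FP16 (mixed-precision) training in modern TensorFlow (2.4+).
import tensorflow as tf

# Compute in float16 where safe; Keras keeps float32 master weights and
# automatically applies loss scaling to the optimizer.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.applications.ResNet152(weights=None)
# Note: in practice, keep the final softmax in float32 for numeric
# stability; omitted here for brevity.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Synthetic data, mirroring the benchmarks' isolation of GPU throughput
# from CPU pre-processing.
images = tf.random.normal((32, 224, 224, 3))
labels = tf.random.uniform((32,), maxval=1000, dtype=tf.int32)
model.fit(images, labels, batch_size=32, epochs=1)
```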

FP32 (Sako)

| Model / GPU | 2080 | 2080 Ti | Titan V | V100 | 1080 Ti |
|---|---|---|---|---|---|
| VGG16 | 120.39 | 163.26 | 168.59 | 222.22 | 130.8 |
| ResNet-152 | 43.43 | 75.18 | 61.82 | 80.08 | 53.45 |

FP16 Training Speedup over 1080 Ti

| Model / GPU | 2080 | 2080 Ti | Titan V | V100 | 1080 Ti |
|---|---|---|---|---|---|
| VGG16 | 1.21 | 1.60 | 1.81 | 2.23 | 1.00 |
| ResNet-152 | 1.00 | 1.65 | 1.35 | 1.73 | 1.00 |

FP32 Training Speedup over 1080 Ti

| Model / GPU | 2080 | 2080 Ti | Titan V | V100 | 1080 Ti |
|---|---|---|---|---|---|
| VGG16 | 0.92 | 1.25 | 1.29 | 1.70 | 1.00 |
| ResNet-152 | 0.81 | 1.41 | 1.16 | 1.50 | 1.00 |

Price Performance Data (Speedup / $1,000 USD) FP32

| Model / GPU | 2080 | 2080 Ti | Titan V | V100 | 1080 Ti |
|---|---|---|---|---|---|
| Price Per GPU (k$) | 0.7 | 1.2 | 3 | 9.8 | 0.7 |
| Price Per 1 GPU System (k$) | 1.99 | 2.49 | 4.29 | 11.09 | 1.99 |
| AVG | 0.51 | 0.55 | 0.33 | 0.16 | 0.50 |
| ResNet-50 | 0.52 | 0.56 | 0.34 | 0.16 | 0.50 |
| ResNet-152 | 0.50 | 0.53 | 0.31 | 0.14 | 0.50 |
| InceptionV3 | 0.55 | 0.58 | 0.37 | 0.17 | 0.50 |
| InceptionV4 | 0.54 | 0.57 | 0.32 | 0.14 | 0.50 |
| VGG16 | 0.46 | 0.51 | 0.33 | 0.16 | 0.50 |
| AlexNet | 0.47 | 0.52 | 0.32 | 0.16 | 0.50 |
| SSD300 | 0.52 | 0.55 | 0.33 | 0.16 | 0.50 |

Price Performance Data (Speedup / $1,000 USD) FP16

| Model / GPU | 2080 | 2080 Ti | Titan V | V100 | 1080 Ti |
|---|---|---|---|---|---|
| AVG | 0.56 | 0.65 | 0.37 | 0.18 | 0.50 |
| VGG16 | 0.61 | 0.64 | 0.42 | 0.20 | 0.50 |
| ResNet-152 | 0.50 | 0.66 | 0.32 | 0.16 | 0.50 |

Methods

  • All models were trained on a synthetic dataset. This isolates GPU performance from CPU pre-processing performance.
  • For each GPU, 10 training experiments were conducted on each model. The number of images processed per second was measured and then averaged over the 10 experiments.
  • The speedup is calculated by dividing a GPU's images/sec score by the 1080 Ti's images/sec score on the same model. This shows each GPU's relative improvement over the baseline (the 1080 Ti).
  • The 2080 Ti, 2080, Titan V, and V100 benchmarks utilized Tensor Cores.

Batch sizes used

| Model | Batch Size |
|---|---|
| ResNet-50 | 64 |
| ResNet-152 | 32 |
| InceptionV3 | 64 |
| InceptionV4 | 16 |
| VGG16 | 64 |
| AlexNet | 512 |
| SSD300 | 32 |

Hardware

All benchmarks, except for those of the V100, were conducted using a Lambda Quad Basic with swapped GPUs. The exact specifications are:

  • RAM: 64 GB DDR4 2400 MHz
  • Processor: Intel Xeon E5-1650 v4
  • Motherboard: ASUS X99-E WS/USB 3.1
  • GPUs: EVGA XC RTX 2080 Ti (TU102), ASUS 1080 Ti Turbo (GP102), NVIDIA Titan V, and Gigabyte RTX 2080.

The V100 benchmark utilized an AWS P3 instance with an E5-2686 v4 (16 core) and 244 GB DDR4 RAM.

Software

All benchmarks, except for those of the V100, were conducted with:

  • Ubuntu 18.04 (Bionic)
  • CUDA 10.0
  • TensorFlow 1.11.0-rc1
  • cuDNN 7.3

The V100 benchmark was conducted with an AWS P3 instance with:

  • Ubuntu 16.04 (Xenial)
  • CUDA 9.0
  • TensorFlow 1.12.0.dev20181004
  • cuDNN 7.1

 

How we define a "typical single GPU system"

The price we use in our calculations is based on the estimated price of the minimal system that avoids CPU, memory, and storage bottlenecking for Deep Learning training. Note that this won't be upgradable to anything more than 1 GPU.

  • CPU: i7-8700K or equivalent (6 cores, 16 PCI-e lanes). ~$380.00 on Amazon.
  • CPU Cooler: Noctua L-Type Premium. ~$50 on Amazon.
  • Memory: 32 GB DDR4. ~$280.00 on Amazon.
  • Motherboard: ASUS Prime B360-Plus (x16 PCI-e lanes for the GPU). ~$105.00 on Amazon.
  • Power supply: EVGA SuperNOVA 750 G2 (750W). ~$100.00 on Amazon.
  • Case: NZXT H500 ATX case. ~$70.00 on Amazon.
  • Labor: About $200 in labor if you want somebody else to build it for you.

Cost (excluding GPU): $1,291.65 after 9% sales tax.
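A quick check of that total, plus how it combines with each GPU's price to give the "Price Per 1 GPU System" row in the tables above (tax is applied to the full subtotal, including labor, which reproduces the $1,291.65 figure):

```python
# Base system cost excluding GPU: sum of parts + labor, with 9% sales tax.
parts = {
    "CPU (i7-8700K)": 380.00,
    "CPU cooler (Noctua L-Type)": 50.00,
    "Memory (32 GB DDR4)": 280.00,
    "Motherboard (ASUS Prime B360-Plus)": 105.00,
    "PSU (EVGA SuperNOVA 750 G2)": 100.00,
    "Case (NZXT H500)": 70.00,
    "Labor": 200.00,
}
base = sum(parts.values()) * 1.09          # 1185.00 * 1.09 -> 1291.65
print(f"Base system: ${base:,.2f}")

# Add each GPU's price (from the price/performance tables above).
gpu_price = {"2080": 700, "2080 Ti": 1200, "Titan V": 3000,
             "V100": 9800, "1080 Ti": 700}
for gpu, price in gpu_price.items():
    print(f"{gpu}: ${base + price:,.0f}")  # e.g. 2080 Ti -> ~$2,492
```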

Note that this doesn't include any of the time that it takes to do the driver and software installation to actually get up and running. That alone can take days of full time work.

Reproduce the benchmarks yourself

All benchmarking code is available on Lambda Labs' GitHub repo. Share your results by emailing s@lambdalabs.com or tweeting @LambdaAPI. Be sure to include the hardware specifications of the machine you used.

Step One: Clone benchmark repo

git clone https://github.com/lambdal/lambda-tensorflow-benchmark.git --recursive

Step Two: Run benchmark

  • Input a proper gpu_index (default 0) and num_iterations (default 10); for example, ./benchmark.sh 0 10.
cd lambda-tensorflow-benchmark
./benchmark.sh gpu_index num_iterations

Step Three: Report results

  • Check the repo directory for folder <cpu>-<gpu>.logs (generated by benchmark.sh)
  • Use the same num_iterations in benchmarking and reporting.
./report.sh <cpu>-<gpu>.logs num_iterations

We are now taking orders for the Lambda Blade 2080 Ti Server and the Lambda Quad 2080 Ti workstation. Email enterprise@lambdalabs.com for more info.

You can download this blog post as a whitepaper using this link: Download Full 2080 Ti Performance Whitepaper.
