DATASCIENCEM Telegram 4641
Here is a trick that gives roughly a 4x speedup on CPU-to-GPU data transfer when training a neural network.

Let's consider an image classification task.

We define the model, load and transform the data.

In the training loop, we transfer data to the GPU and train the network.
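A minimal PyTorch sketch of this baseline setup (the model, synthetic data, and hyperparameters are illustrative stand-ins, not the post's actual code):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative model; synthetic uint8 "images" stand in for a real dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images_u8 = torch.randint(0, 256, (64, 3, 32, 32), dtype=torch.uint8)
labels = torch.randint(0, 10, (64,))

# Typical setup: convert to float32 on the CPU, then transfer to the GPU.
x = images_u8.float().div_(255.0)       # uint8 -> float32, still on CPU
x, y = x.to(device), labels.to(device)  # transfer the (now 4x larger) tensors

loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```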

What's the problem:

If you look at the profiler trace:

- most of the time goes to compute kernels (i.e., the training itself),
- but a noticeable share is also spent transferring data from CPU to GPU (cudaMemcpyAsync).
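One way to see that breakdown, sketched with `torch.profiler` (synthetic model and batch; on a CPU-only machine the CUDA rows simply won't appear):

```python
import torch
from torch.profiler import profile, ProfilerActivity

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(3 * 32 * 32, 10).to(device)
x_cpu = torch.rand(256, 3 * 32 * 32)  # float32 batch sitting on the CPU

activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    activities.append(ProfilerActivity.CUDA)  # records cudaMemcpyAsync etc.

with profile(activities=activities) as prof:
    x = x_cpu.to(device)  # host-to-device copy shows up here
    _ = model(x)          # compute kernels show up here

# Ops (kernels, memcpy) sorted by time spent.
report = prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10)
print(report)
```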

That transfer time is easy to reduce.

Initially, the dataset stores pixels as 8-bit integers. We convert them to 32-bit floats on the CPU and only then send the tensors to the GPU. As a result, every batch is 4 times larger than it needs to be, making the transfer heavier.
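The arithmetic is easy to check (the batch shape here is illustrative):

```python
import torch

# A batch of 256 RGB 32x32 images: 8-bit ints vs 32-bit floats.
batch_u8 = torch.randint(0, 256, (256, 3, 32, 32), dtype=torch.uint8)
batch_f32 = batch_u8.float()

bytes_u8 = batch_u8.numel() * batch_u8.element_size()     # 1 byte per value
bytes_f32 = batch_f32.numel() * batch_f32.element_size()  # 4 bytes per value

print(bytes_u8, bytes_f32, bytes_f32 // bytes_u8)  # 786432 3145728 4
```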

The solution:

Move the conversion step to after the transfer: first send the 8-bit integers to the GPU, then convert them to floats there.
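In PyTorch this amounts to reordering two lines (a sketch with synthetic data):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
images_u8 = torch.randint(0, 256, (64, 3, 32, 32), dtype=torch.uint8)

# Transfer the 1-byte-per-pixel tensor first...
x = images_u8.to(device, non_blocking=True)
# ...then convert to float (and normalize) on the GPU.
x = x.float().div_(255.0)
```

With a `DataLoader`, setting `pin_memory=True` and passing `non_blocking=True` to `.to()` additionally lets the uint8 copy overlap with GPU compute.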

As a result, the data transfer step speeds up significantly.

Of course, this doesn't apply everywhere; in NLP, for example, the inputs are float embeddings from the start.
But where it does apply, the speedup is very noticeable.

👉  @DataScienceM
tgoop.com/DataScienceM/4641
BY Data Science Machine Learning Data Analysis