
Backpropagation with GPUs


Thread replies: 5
Thread images: 1

File: firefoxhascrashed.gif (284KB, 523x310px)
Hi there. I am currently working on backpropagation with CUDA acceleration. Does anybody have any idea whether I should launch a thread for each dataset that does the backprop for the entire network, or whether I should then split each dataset up across further threads? The number of threads would then have to vary between layers, the kernel would have to be called multiple times, and that would be very costly.
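
A minimal sketch of the first option asked about here: one GPU thread per training sample, where each thread runs the forward pass and the backprop for the whole (tiny, fixed-size) network and sums weight gradients with atomicAdd. The toy sigmoid MLP, the layer sizes, and every name below are assumptions for illustration, not the poster's actual CNN code.

#include <cuda_runtime.h>
#include <math.h>

#define N_IN  4   // inputs per sample    (assumed size)
#define N_HID 8   // hidden-layer neurons (assumed size)
#define N_OUT 2   // output neurons       (assumed size)

__device__ float sigmoidf(float x) { return 1.0f / (1.0f + expf(-x)); }

// One thread = one training sample: full forward pass + backprop through the
// whole (tiny) network, gradients summed across samples with atomicAdd.
__global__ void backprop_per_sample(const float *x, const float *t,
                                    const float *w1, const float *w2,
                                    float *gw1, float *gw2, int n_samples)
{
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s >= n_samples) return;

    const float *xs = x + s * N_IN;   // this thread's input
    const float *ts = t + s * N_OUT;  // this thread's target

    // forward pass
    float h[N_HID], o[N_OUT];
    for (int j = 0; j < N_HID; ++j) {
        float a = 0.0f;
        for (int i = 0; i < N_IN; ++i) a += w1[j * N_IN + i] * xs[i];
        h[j] = sigmoidf(a);
    }
    for (int k = 0; k < N_OUT; ++k) {
        float a = 0.0f;
        for (int j = 0; j < N_HID; ++j) a += w2[k * N_HID + j] * h[j];
        o[k] = sigmoidf(a);
    }

    // backward pass (squared error, sigmoid derivative)
    float dh[N_HID] = {0.0f};
    for (int k = 0; k < N_OUT; ++k) {
        float dk = (o[k] - ts[k]) * o[k] * (1.0f - o[k]);
        for (int j = 0; j < N_HID; ++j) {
            atomicAdd(&gw2[k * N_HID + j], dk * h[j]);
            dh[j] += dk * w2[k * N_HID + j];
        }
    }
    for (int j = 0; j < N_HID; ++j) {
        float dj = dh[j] * h[j] * (1.0f - h[j]);
        for (int i = 0; i < N_IN; ++i)
            atomicAdd(&gw1[j * N_IN + i], dj * xs[i]);
    }
}

// launch example, one thread per sample:
//   backprop_per_sample<<<(n + 255) / 256, 256>>>(d_x, d_t, d_w1, d_w2, d_gw1, d_gw2, n);

One launch covers the whole batch, but every thread needs its own activation storage and the atomics serialize gradient accumulation, which is part of why this approach only pays off on fairly large batches.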
>>
>>55746473
Are you using DIGITS?
I don't have an answer for you, but it may help you find one if you give a bit more detail on your setup.

Wait, are you working on CNNs?
>>
>>55748288
I am using CNNs, trained with stochastic gradient descent backpropagation. I am not using DIGITS; we are developing our own solution. It works well at the moment, but it runs only on the CPU.

I have working GPU code, but I am unsure how I want this to work. I have looked at articles about parallelized backpropagation with GPUs, but found no specifics on how they are implemented. I am wondering whether a GPU thread for each dataset is the solution that would scale best.

Since the number of neurons varies per layer, I can't really have a thread for each neuron and dataset, and they would also have to be synchronized, since each layer's output depends on the previous layer.
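
A sketch of one common way around the synchronization problem described above: launch one kernel per layer, with one thread per flattened (sample, neuron-of-that-layer) index, so the varying neuron count only changes the grid size, and the kernel-launch boundary itself is the synchronization between layers. Layer sizes, buffer names, and the sigmoid activation are assumptions for illustration.

#include <cuda_runtime.h>
#include <math.h>

// Forward pass of one fully connected layer:
//   out[s][j] = sigmoid( sum_i w[j][i] * in[s][i] )
// One thread per flattened (sample, output-neuron) pair.
__global__ void layer_forward(const float *in, const float *w, float *out,
                              int n_samples, int n_in, int n_out)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n_samples * n_out) return;
    int s = idx / n_out;   // which sample
    int j = idx % n_out;   // which neuron in this layer

    float a = 0.0f;
    for (int i = 0; i < n_in; ++i)
        a += w[j * n_in + i] * in[s * n_in + i];
    out[s * n_out + j] = 1.0f / (1.0f + expf(-a));
}

// Host side: one launch per layer. Launches in the same (default) stream run
// in order, so layer l+1 only starts once layer l has finished -- that launch
// boundary is the inter-layer synchronization.
void forward_all_layers(float **d_activations, float **d_weights,
                        const int *layer_sizes, int n_layers, int n_samples)
{
    for (int l = 0; l < n_layers - 1; ++l) {
        int n_in  = layer_sizes[l];
        int n_out = layer_sizes[l + 1];
        int total = n_samples * n_out;        // grid size follows the layer width
        int block = 256;
        int grid  = (total + block - 1) / block;
        layer_forward<<<grid, block>>>(d_activations[l], d_weights[l],
                                       d_activations[l + 1],
                                       n_samples, n_in, n_out);
    }
    cudaDeviceSynchronize();
}

The backward pass would mirror this, walking the layers in reverse with one launch per layer. Repeated launches do add overhead, but it is typically on the order of microseconds per launch, which is usually small next to the per-layer arithmetic on a large batch.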
>>
I've never used CUDA, but there is a very simple way to tell if the overhead from communication is too big.

Do it without parallel processing, then do it with; if it takes longer in parallel, you fucked up.
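
A rough timing sketch of that comparison, assuming the two training passes already exist behind stand-in functions (the names train_epoch_cpu and train_epoch_gpu are hypothetical). The GPU measurement deliberately includes the host-device copies, since that communication is exactly the overhead in question.

#include <cuda_runtime.h>
#include <chrono>
#include <cstdio>

// Stand-ins for the two existing code paths (hypothetical names).
void train_epoch_cpu() { /* existing CPU training code would go here */ }
void train_epoch_gpu() { /* CUDA version, including host<->device copies */ }

int main()
{
    // CPU path: plain wall-clock time.
    auto t0 = std::chrono::high_resolution_clock::now();
    train_epoch_cpu();
    auto t1 = std::chrono::high_resolution_clock::now();
    double cpu_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();

    // GPU path: cudaEvent timing, so kernels *and* copy overhead are included.
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    train_epoch_gpu();
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float gpu_ms = 0.0f;
    cudaEventElapsedTime(&gpu_ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);

    printf("CPU epoch: %.1f ms   GPU epoch: %.1f ms\n", cpu_ms, gpu_ms);
    return 0;
}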
>>
>>55748665
Yes, but I would need a huge-ass dataset for this to pay off. I have one at ~1.2M; maybe I should look at that one.