RIP Deep Learning AI winter incoming

Thread replies: 16
Thread images: 1

File: Untitled.png (14KB, 816x192px)
RIP Deep Learning
AI winter incoming
>>
>>8371499
How about a few more lines of text, smart ass?
>>
>>8371499
wat
>>
>>8371499
Are you high? Big-ass ensembles of models have literally always beaten neural networks at most things. This is not news. The point is that the neural networks require vastly less computing power and effort to train and run.
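
For anyone unclear on what an ensemble even is: you average the class probabilities of several independently trained models and take the argmax. A minimal sketch, assuming each model exposes an sklearn-style predict_proba method (hypothetical interface, not any particular entry's actual code):

import numpy as np

def ensemble_predict(models, x):
    # Average predicted class probabilities across all models, then
    # pick the most probable class. Accuracy usually improves a bit,
    # but inference now costs one forward pass per member model.
    probs = np.mean([m.predict_proba(x) for m in models], axis=0)
    return probs.argmax(axis=-1)

That last point is the catch: an N-model ensemble buys a modest accuracy bump for N times the compute.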
>>
>>8371771
Not to mention they're far more scalable and modular. OP is a faggot as always.
>>
>>8371774
Oh, and the kicker?

That's an ensemble *of multiple different neural network models*, or at least the ensemble they submitted in 2015 was. (I can't find any info on their 2016 entry, which is where OP's pic is from.)
>>
>>8371781
I will be shocked if this 2016 submission isn't also made of convolutional neural networks at least in part. Convnets are the most powerful tool yet discovered in machine learning for image processing, by miles; it would be a fundamental breakthrough, not cause for some incoming AI Winter, if someone found a strictly superior model for a task like this.

The current wave of ML/"Deep Learning" hype is fundamentally being driven by the fact that we're actually getting *results* on *practical-scale problems*, rather than the hope that performance on toy problems means we might be on track to eventually produce something useful.

Continuing to achieve better results cannot possibly "BTFO" the field. That would be dumb.
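
To make "convnet" concrete, here's a toy PyTorch sketch in the spirit of the stacked 3x3 convolutions mentioned later in the thread. Purely illustrative; actual ILSVRC entries are dozens of layers deep and ensembled on top of that:

import torch.nn as nn

# Toy example: two stacked 3x3 convolutions, a pooling step, and a
# linear classifier head over the 1000 ImageNet classes.
tiny_convnet = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                  # 224x224 input -> 112x112
    nn.Flatten(),
    nn.Linear(64 * 112 * 112, 1000),  # assumes 224x224 RGB input
)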
>>
>>8371499
LOL OP you fucking dumbass...

They are using Ensembles of DEEP LEARNING NETS!!
>>
>>8371499
Poll: should ridiculous deep learning hype be called Derp Learning, or Backpropaganda?
>>
None of you fags gets what OP's saying.

Classification task winner's top-5 error rates:
ILSVRC12: 15.3%, new architecture: AlexNet + dropout
ILSVRC13: 11.7%, -23.5% rel., new architecture: 7x7/2 conv. on input
ILSVRC14: 6.66%, -43.1% rel., new architecture: stacked 3x3 conv.
ILSVRC15: 3.57%, -46.4% rel., new architecture: Residual net, no FC layer except before softmax
ILSVRC16: 3.00%, -16.0% rel., no new architecture

This is the first year since the popularization of convnets in which we didn't achieve a technological breakthrough on ILSVRC and only got a marginally better result. Since the deep learning research bubble is fueled by results and hype, no significant progress means no new money pouring in, which is exactly how the last AI winter happened.
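
The "rel." figures above are just year-over-year relative reductions in top-5 error; you can check them straight from the list:

errors = {"ILSVRC12": 15.3, "ILSVRC13": 11.7, "ILSVRC14": 6.66,
          "ILSVRC15": 3.57, "ILSVRC16": 3.00}
years = list(errors)
for prev, cur in zip(years, years[1:]):
    # Fraction of the previous year's error that was eliminated.
    rel = (errors[prev] - errors[cur]) / errors[prev] * 100
    print(f"{cur}: {errors[cur]}% top-5 error, -{rel:.1f}% rel.")

which reproduces -23.5%, -43.1%, -46.4% and -16.0%.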
>>
>>8371920
Most researchers consider the ILSVRC task a "solved problem". Much of the focus these days is going into recurrent nets and unsupervised techniques.
>>
>>8371926
CNNs are still much more popular. And I can count the marginal RNN improvements of the last year on one hand: multiplicative integration, zoneout, recurrent BN, layernorm, adaptive computation time. Again, no real progress.
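
Of those, layernorm is the simplest to show: each hidden state vector gets normalized across its own features. A minimal sketch; the published method also rescales with learned gain and bias parameters, omitted here:

import numpy as np

def layer_norm(h, eps=1e-5):
    # Normalize each hidden state across its feature dimension.
    # (Learned gain/bias of the full method omitted.)
    mean = h.mean(axis=-1, keepdims=True)
    std = h.std(axis=-1, keepdims=True)
    return (h - mean) / (std + eps)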
>>
>/r/ML banned me for calling out kikes again
>>
>>8371920
ImageNet has a lot of crap labels - it's estimated that the inherent classification noise from mislabeled or ambiguous images is around 2.5%.

In other words, we stalled at 3% because 3% is basically as good as it's possible to get on the ImageNet dataset. ImageNet is basically solved; what remains is the task of solving it efficiently.

Also "we've had breakthroughs for four consecutive years on this task, but this year we didn't" is not exactly damning.
>>
>>8371937
You forgot a lot of progress there. First thing that comes to mind is the SeqGAN paper - we figured out how to effectively apply GAN techniques to the sequence outputs of RNNs. Oh, and there was the Decoupled Neural Interface paper from Google, which managed to train RNNs with a much shorter time horizon by teaching them to predict future error gradients.
>>
>>8371947
Mislabeling does not translate into top-1 error one-to-one; in fact it has much less impact. See "Systematic evaluation of CNN advances on the ImageNet", Figure 11. Not to mention the metric here is top-5.