RIP Deep Learning
AI winter incoming
>>8371499
How about a few more lines of text, smart ass?
>>8371499
wat
>>8371499
Are you high? Big-ass ensembles of models have literally always beat neural networks at most things. This is not news. The point is that the neural networks require vastly less computing power and effort to train and compute.
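Since people keep confusing the two: ensembling is just combining several models' outputs, e.g. by averaging their softmax probabilities. A minimal sketch with made-up numbers (the probability vectors below are hypothetical, not from any real model):

```python
import numpy as np

# Each row: one model's class-probability vector for the same image.
model_probs = np.array([
    [0.6, 0.3, 0.1],   # model A
    [0.4, 0.5, 0.1],   # model B
    [0.7, 0.2, 0.1],   # model C
])

ensemble = model_probs.mean(axis=0)    # average the predictions
prediction = int(np.argmax(ensemble))  # majority view wins: class 0
```

Note model B alone would have picked class 1; the averaged ensemble picks class 0. That's the whole trick, and also why it costs N models' worth of compute at inference time.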
>>8371771
Not to mention are far more scalable and modular. OP is a faggot as always.
>>8371774
Oh, and the kicker?
That's an ensemble *of multiple different neural network models*, or at least the ensemble they submitted in 2015 was. (I can't find any info on their 2016 entry, which is where OP's pic is from.)
>>8371781
I will be shocked if this 2016 submission isn't also made of convolutional neural networks at least in part. Convnets are the most powerful tool yet discovered in machine learning for image processing, by miles; it would be a fundamental breakthrough, not cause for some incoming AI Winter, if someone found a strictly superior model for a task like this.
The current wave of ML/"Deep Learning" hype is fundamentally being driven by the fact that we're actually getting *results* on *practical-scale problems*, rather than the hope that performance on toy problems means we might be on track to eventually produce something useful.
Continuing to achieve better results cannot possibly "BTFO" the field. That would be dumb.
>>8371499
LOL OP you fucking dumbass...
They are using Ensembles of DEEP LEARNING NETS!!
>>8371499
Poll: should ridiculous deep learning hype be called Derp Learning, or Backpropaganda?
None of you fags gets what OP's saying.

Classification task winner's top-5 error rates:
ILSVRC12: 15.3%, new architecture: AlexNet + dropout
ILSVRC13: 11.7%, -23.5% rel., new architecture: 7x7/2 conv. on input
ILSVRC14: 6.66%, -43.1% rel., new architecture: stacked 3x3 conv.
ILSVRC15: 3.57%, -46.4% rel., new architecture: Residual net, no FC layer except before softmax
ILSVRC16: 3.00%, -16.0% rel., no new architecture
This is the first year since the popularization of convnets that we didn't achieve a technological breakthrough on ILSVRC and only got a marginally better result. Since the deep learning research bubble is fueled by results and hype, no significant progress = no new money pouring in, which is exactly how the last AI winter happened.
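If anyone wants to check those relative numbers instead of shitposting, here's the arithmetic (error rates copied from the table above):

```python
# Reproduce the relative-improvement column: rel = (prev - curr) / prev.
top5_error = {2012: 15.3, 2013: 11.7, 2014: 6.66, 2015: 3.57, 2016: 3.00}

years = sorted(top5_error)
rel_improvement = {
    curr: round((top5_error[prev] - top5_error[curr]) / top5_error[prev] * 100, 1)
    for prev, curr in zip(years, years[1:])
}
# rel_improvement == {2013: 23.5, 2014: 43.1, 2015: 46.4, 2016: 16.0}
```

The 2016 number really is the smallest relative gain of the run, which is the entire point.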
>>8371920
Most researchers consider the ILSVRC task a "solved problem". Much of the focus these days is going into recurrent nets and unsupervised techniques.
>>8371926
CNNs are still much more popular. And I can count the marginal RNN improvements of the last year on one hand: multiplicative integration, zoneout, recurrent BN, layernorm, adaptive computation time. Again, no real progress.
>/r/ML banned me for calling out kikes again
>>8371920
ImageNet has a lot of crap labels - it's estimated that the inherent classification noise from mislabeled or ambiguous images is around 2.5%.
In other words, we stalled at 3% because 3% is about as good as it's possible to get on the ImageNet dataset. ImageNet is basically solved; what remains is the task of solving it efficiently.
Also "we've had breakthroughs for four consecutive years on this task, but this year we didn't" is not exactly damning.
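Back-of-the-envelope version of the "noise floor" argument (both the ~2.5% estimate and the error rates are the figures quoted in this thread, treat them as rough):

```python
# How much headroom is left above the estimated label-noise floor?
noise_floor = 2.5  # estimated irreducible error from bad/ambiguous labels
top5_error = {2014: 6.66, 2015: 3.57, 2016: 3.00}

headroom = {year: err - noise_floor for year, err in top5_error.items()}
# 2016 sits only ~0.5 points above the estimated floor,
# versus ~4 points of headroom back in 2014.
```

So a "marginal" gain in 2016 is exactly what you'd expect when you're scraping the bottom of the dataset, not evidence of a dying field.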
>>8371937
You forgot a lot of progress there. First thing that comes to mind is the SeqGAN paper - we figured out how to effectively apply GAN techniques to sequence outputs of RNNs. Oh, and there was the Decoupled Neural Interface paper from Google, that managed to train RNNs with a much shorter time horizon by teaching them to predict future error gradients.
>>8371947
Mislabeling does not translate 1:1 into top-1 error rates; in fact it has much less impact, see "Systematic evaluation of CNN advances on the ImageNet", Figure 11. Not to mention the metric here is top-5.
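For the anons who don't know what top-5 even means: a prediction counts as correct if the true label is anywhere in the model's five highest-scored classes. A minimal sketch with toy scores (not real ImageNet data):

```python
import numpy as np

def topk_error(scores, labels, k=5):
    """Fraction of samples whose true label is NOT in the top-k classes."""
    # scores: (n_samples, n_classes); labels: (n_samples,)
    topk = np.argsort(scores, axis=1)[:, -k:]       # k highest-scored classes per row
    hits = np.any(topk == labels[:, None], axis=1)  # is the true label among them?
    return 1.0 - hits.mean()

scores = np.array([[0.1, 0.2, 0.3, 0.15, 0.25],
                   [0.5, 0.1, 0.1, 0.2, 0.1]])
labels = np.array([0, 3])
```

With these toy scores, `topk_error(scores, labels, k=2)` is 0.5 (the first sample's true class only ranks 5th) while `k=5` gives 0.0, which is exactly why a top-5 metric is far more forgiving of label noise than top-1.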