
Who /machinelearning/ here?

File: toptal-blog-image-1407508081138.png (104KB, 620x620px)
Who /machinelearning/ here?
>>
Me.

Honestly thinking about switching. It's so incredibly competitive, swamped with resume stuffers; very hard to stand out.
>>
>>8817095
Interested in the information geometry of machine learning algorithms (kernel methods mostly)

Anyone know about geometry of learning?
>>
>>8817095

The machine learning boom is a desperate attempt to generate value from "Big Data", which everyone is extremely invested in; we're in for a huge bubble burst once the naive optimism starts declining.
>>
File: outm8.jpg (7KB, 218x231px)
>tfw you are interested in computational models of cognition and your field is being overrun by silicon valley dudebros
>>
>>8817151
>computational models of cognition
How's that working out
>>
>>8817146
Tell us more about your insightful contrarian opinions.
>>
>>8817095
Yes, but with a focus on agi/asi. Everything else is basically "just" statistics and search. The big data hype does us no good.
>>
>>8817177
Better than expected.

Here is the state of the art in the field:
https://arxiv.org/pdf/1703.01988.pdf
>>
>>8817151
>and your field is being overrun by silicon valley dudebros
iktf bro. Some CS fields are completely devoid of any discussion and reduced to DUDE APPLICATIONS DUDE JUST STACK MORE LAYERS MUH GOOGLE MUH FACEBOOK, even at the graduate level
>>
>>8817212
Whose cognition is this modelling, computationally?
>>
>>8817223
Broadly speaking, mammalian.

It's pretty much an implementation of this work:
https://papers.nips.cc/paper/3311-hippocampal-contributions-to-control-the-third-way.pdf

Reinforcement learning is nearly at the point where computational neuroscience is being implemented into functional agents. It's a fascinating time.
>>
>>8817229
The connection to actual mammals seems tenuous at best. They seem to have drawn only loose inspiration from data suggesting "the transfer of control from hippocampal to striatal structures over the course of learning". From this they extrapolate an algorithm and test it on MDPs, which are awful models if you ask me.
>>
>>8817229
holy shit this is laughable
the bubble is going to pop harder and more painfully than a genital wart
>>
>>8817273
>>8817192
>>8817146
t. unemployed physics majors mad that CS majors are earning 3x as much without taking complex differential analysis theory
>>
>>8817192
machine learning is literally statistics


"agi/asi" is not research, it's popsci
>>
ML Engineer here.
The weakest point of deep learning is the need for huge amounts of labeled data, when a human brain needs only a few samples to build a somewhat functional model.

Semi-supervised learning aims to fix this, but I don't think it is the right track, or only partially, mostly because the assumption that similar labels have similar labels is not always true.

The AI we build today with NNs is not actual Artificial Intelligence; it is more of an Artificial Intuition. It is based only on experience and can only be explained by introspection (deconvolution and parameter inspection).

I think the key to actual AI is to use small neural networks for easy, generic subtasks together with symbolic intelligence algorithms for more abstract reasoning.

Not everything is differentiable, and not everything is based on experience; deep-learning-only approaches are doomed to fail, even with all the hype.

Maybe deep RL could work all the way up to actual cognition, but it would require shitloads of processing power and time to be effective.
>>
>>8817358
>similar labels have similar labels
I meant
>similar samples have similar labels
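For concreteness, a minimal self-training sketch of that assumption (toy data and scikit-learn, purely illustrative, not any particular paper's method): nearby samples are trusted to share labels, so the model's own predictions become pseudo-labels.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X_labeled = rng.normal(size=(20, 2))          # the few labeled samples
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(500, 2))       # the big unlabeled pool

# Fit on the few labeled points, then trust that similar samples
# share labels: the classifier's own predictions become pseudo-labels.
clf = KNeighborsClassifier(n_neighbors=3).fit(X_labeled, y_labeled)
pseudo = clf.predict(X_unlabeled)

# Retrain on labeled + pseudo-labeled data. If the assumption fails
# (similar samples, different labels), the errors get amplified.
clf = KNeighborsClassifier(n_neighbors=3).fit(
    np.vstack([X_labeled, X_unlabeled]),
    np.concatenate([y_labeled, pseudo]))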
>>
>>8817291
>thinks you need a CS degree to do programming

top kek
>>
>>8817212
Dude, I agree with what you are saying in the thread, but DeepMind is a fucking joke.
They literally manage to publish a lot because muh Google and because NIPS/ICML reviewers are scared of blowing the money away.
>>
What confuses me about machine learning is why there isn't some kind of garbage-in, garbage-out limit. You can get an NN that works really well at a classification task, but all it does is use the information that you have given it. It is quicker and more reliable than a human, but there is no gain over what would in theory be possible for a human, because you have given it all the information it knows.

Will it ever be possible for a computer to think creatively and use past knowledge to figure things out?
>>
>>8817095
me

hate it desu
>>
>>8817394
The outstanding results they got in the mentioned article suggest that you are wrong.
DeepMind are really good at what they do.
>>
>>8817415
I don't know how that would work in classification. Like, should the network generate an image that it feels is an edge case by itself and ask for a label from the human?

And it also depends on your precise definition of creativity. If you accept that it could mean "extrapolation between known data points", then generative adversarial networks fit the bill.
>>
>>8817358
The human brain has INSANE amounts of data to learn from. Whenever we see something, we don't just get one static picture; we usually get a full moving series of pictures that is even three-dimensional, and we have complete models to cancel out the effects of lighting, perspective, etc. Our brains are capable of extracting information very efficiently to get the most out of it. That is mostly a matter of complexity, so you can't really compare the human brain to current deep learning efforts.

Anyway, you are of course right that figuring out how to scale up and combine these networks is an exciting question for the future, but calling deep learning doomed to fail is a ridiculous statement, especially considering how general the approach is.
>>
>>8817440
That's true; even the simplest scene or situation provides a person with a vast amount of information. This is why everyone knows that reading about something, following a guide, or reading a book about some type of situation doesn't compare to actually experiencing it, and why learning from mistakes is so much easier and sometimes more effective.

Let's think about dates, for example. You could read dozens of romantic novels and poems, but they will never communicate the vast amount of sensory information just one date or relationship communicates. Your parents may give you hundreds of tips and pieces of advice, but sometimes you need to have some bad experiences in order to really learn and adapt to the concept of relationships. There's such a huge amount of information: the people surrounding you during a date, the place right next to you, the food you choose, the taste it has, the atmosphere, the other person's perfume, the way they dressed, the way the fork reflects light, the way the environment changes your mood which in turn changes the way you adapt to the environment which in turn changes the situation, which in turn... you know, you get the point.

So it makes sense that a neural net, even if it could perfectly simulate a human brain, would need huge amounts of information in order to be actually useful.
>>
>>8817419
As someone who spent 6 months reading DeepMind papers at the beginning of my thesis, I can assure you that when you dig deep enough you clearly start to see the little tweaks and white lies that make their results MUCH less impressive.
The most banal example is their computation times: in 2012 everyone was amazed at how they solved Atari games with DQL, but you look at the numbers and... 250 fucking million frames, for a total of 10 days of Tensorflow runtime on a K40C.
Duh-fucking-uh it works (not even that, actually: they had to keep an epsilon greater than 0 in evaluation because their deterministic policy literally sucked).

6 months later they coin the term "catastrophic forgetting" and solve that too. Well gee, thanks; maybe if they hadn't learned catastrophically in the first place we could have saved a slot at NIPS for a real research paper.

I hate how people ride Google's dick just for the sake of it, even though they have added literally zero value to ML theory besides "durrr we got datacenters the size of Texas and money to steal people from Stanford XDXD".

Tl;dr: DeepMind sucks and I hate the current DL community
>>
>>8817614
AlphaGo is pretty cool though
>>
>>8817133
no, why is it interesting?
>>
>>8817614
If you had asked any machine learning expert whether Atari games could be learned, they probably would have given you a laundry list of reasons why it couldn't be done yet.

Yet DeepMind did it. Hate all you want; they are doing shit no one has ever done before. If it's all so trivial, why aren't you publishing that shit yourself and getting a 6-figure salary?
>>
>>8817212
So at first glance I see all the buzzwords of deep and memory.
So LSTMs training on labels and being combined?
>>
>Was the only set of upper division classes I wanted to take
>They're only offered in Odd Years
>I graduate next year
>I'll only have finished the prereqs this fall
Fuck me sideways
>>
>>8818527
No, it's not training on labels. This is reinforcement learning.

Also, can you not even look at the damn pictures? They created a fully differentiable memory system.

Memory-augmented RNNs are literally cutting-edge shit. Why are you sperglords pretending this stuff is mundane?
>>
>>8818527
Why are you talking out of your ass?
>>
>>8818527
Also... there are no LSTMs in that work. They reference work that uses LSTMs, but they don't use them in their model.
>>
>>8818527
Reward/labels, same shit man. If you are updating an NN as a value function, the labels for the net are the targets you get by passing the rewards through whatever meme Bellman equation defines the value function you want your net to learn.
>>8818536
It's late mane
>>8818540
Yeah, my mistake
>>
>>8818535
>>8818559
>>
>subsea engineer that is about to go back to university for a masters in aerospace
>intrigued about machine learning
>want to make a neural network to predict hydrate formation in subsea equipment
Is it worth the effort? I'm mediocre at Python and MATLAB and a novice at C++. I have access to a lot of data.
>>
ML seems interesting. Could I work in the field with a bachelor's in EE/linguistics?

I'm still an undergrad, but I'm starting to think about my grad degree options, so I was curious whether it's strictly hardline CS.
>>
>>8817095
I am and just wrote this >>8818716
Thoughts? I want to learn how to teach basic deep learning to anyone, cuz it's fun explaining what I do to people, but it usually takes a bit of backstory.
>>
>>8817415
check out GANs, they were super big at NIPS this year. Basically the model makes up fake examples and tests itself against them to get better.
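A minimal sketch of that adversarial loop (toy Keras setup; sizes, data, and names are illustrative, not any particular paper's architecture): the generator invents fake samples from noise, the discriminator learns to tell them from real ones, and each update pits one against the other.

import numpy as np
from tensorflow.keras import layers, models

latent_dim, data_dim = 8, 2

# Discriminator: real-vs-fake classifier.
disc = models.Sequential([
    layers.Dense(32, activation="relu", input_shape=(data_dim,)),
    layers.Dense(1, activation="sigmoid"),
])
disc.compile(optimizer="adam", loss="binary_crossentropy")

# Generator: maps noise vectors to fake samples.
gen = models.Sequential([
    layers.Dense(32, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(data_dim),
])

# Stacked model trains the generator against a frozen discriminator.
disc.trainable = False
gan = models.Sequential([gen, disc])
gan.compile(optimizer="adam", loss="binary_crossentropy")

real = np.random.multivariate_normal([2, 2], np.eye(2), size=10000)  # toy "real" data
for step in range(2000):
    z = np.random.normal(size=(64, latent_dim))
    fake = gen.predict(z, verbose=0)
    batch = real[np.random.randint(0, len(real), 64)]
    # Discriminator step: real -> 1, fake -> 0.
    disc.train_on_batch(np.vstack([batch, fake]),
                        np.vstack([np.ones((64, 1)), np.zeros((64, 1))]))
    # Generator step: try to make the discriminator say "real".
    gan.train_on_batch(np.random.normal(size=(64, latent_dim)),
                       np.ones((64, 1)))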
>>
>>8818721
Similar question, can a math or physics BS land a grad school position in ML?
>>
>>8818598
There are libraries like Keras for Python which make neural networks extremely accessible to build; a working model is a handful of lines, something like the sketch below. Combined with a mind-boggling number of tutorials lying around the net, you could probably pick it up reasonably quickly, or at least get an idea of what you're in for.

Be warned though: training a network to get nice results can take a very long time depending on the problem and the hardware you have. You could always look into renting AWS's GPU instances if necessary.
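Roughly this much code (toy stand-in data, not your hydrate problem; all names here are illustrative):

import numpy as np
from tensorflow.keras import layers, models

X = np.random.rand(1000, 4)                  # stand-in sensor features
y = (X.sum(axis=1) > 2).astype("float32")    # stand-in binary target

model = models.Sequential([
    layers.Dense(16, activation="relu", input_shape=(4,)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)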
>>
>>8818520
Atari games had already been solved before DeepMind did it with deep learning tho; do you even know what you're talking about?
>>
>>8817151
me too anon, headed into uni as a freshman next fall at a top research university. Is it too late?
>>
>>8817614
>catastrophic forgetting
>2012
Dude, this shit has been solved since 1999
>>
>>8818743
As long as you keep the network simple (no CNNs or RNNs), neural networks actually take a very short time to train
>>
I took a course on it at a conference last Tuesday.

Dozed off for most of it. It's interesting stuff, but goddamn is it boring to hear about. I'd rather get some practical experience than hear about how this classifier can distinguish groups slightly better than that one.
>>
>>8818559
>Reward/labels, same shit man
Not at all.

Using reward is a much harder problem.
>>
>>8818793
not true
Link?
>>
>>8817358
I honestly don't believe you're an ML Engineer, given the shallowness of your post.
>>
>>8818793
I'm not aware of any earlier learning algorithm so general that it can learn to play many different Atari games just from raw pixels and knowledge of the score.
>>
>>8817614
You're exaggerating, but I agree that DeepMind's amazing results are tied to their unmatched infrastructure.
>>
>>8817133
Never heard of it before your post. A reviewer of "Information Geometry and Its Applications" on Amazon suggested that the field doesn't really add anything usable to ML knowledge.

Do you have ideas on what you would be interested in exploring?
>>
>>8819356
>DeepMind's amazing results are tied to their unmatched infrastructure.

True. But also remember DeepMind was only founded in 2010. Deep learning/neural nets were considered fringe research, borderline pseudoscience, before then, so obviously no one was throwing massive resources at them.
>>
>>8817133
Look into the manifold hypothesis
>>
File: collectiveIQof1000.jpg (123KB, 1280x720px)
ITT: engineering students who still use linear regression and KNN in MATLAB
>>
>>8819392
u jelly that KNN already beats you are sophisticated learning algorithms on any task anybody cares about and is only an import statement away from any sperg running python?

>inb4 my math will totally be the basis for breakthrough ML techniques some day!

kek, all we actually need is bigger datasets. fuck you're math
>>
>>8819466
SVM>KNN
>>
File: 1478673937250.gif (494KB, 387x305px)
>>8819466
>fuck you're math
>>
>>8818730
In your explanation you seem to miss the fact that if you simply stack linear neurons you'll never be able to learn nonlinear boundaries (quick check below).
Also, this is not really how brains work
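A quick numpy check of that point (illustrative shapes): two stacked purely linear layers collapse into a single affine map, so no stack of them can carve a nonlinear boundary.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)

x = rng.normal(size=3)
two_layers = W2 @ (W1 @ x + b1) + b2          # "deep" linear stack
one_layer = (W2 @ W1) @ x + (W2 @ b1 + b2)    # equivalent single layer
print(np.allclose(two_layers, one_layer))     # True: same affine map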
>>
>>8819313
>Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The Arcade Learning Environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
>>
>>8819649
Did you actually read any of that? They get below-human results on every game they test.

That's just a platform for testing agents...

DeepMind learned to play Atari games purely from the pixels of the game
>>
>after years of studying math and codemonkeying finally get a ML job
>it's mostly data cleaning and plugging in ready-made packages

should have become a barista
>>
>>8819662
Nah, I just copy-pasted the first reference in the DQN paper.
Here, faggot:
Matthew Hausknecht, Risto Miikkulainen, and Peter Stone. A neuro-evolution approach to general Atari game playing. 2013.
Almost human performance; your initial claim about the state of the art before your sugar daddies saved the world is unfounded.
Don't worry pal, you can still suck their big overmarketed dick for free.
>>
>>8819681
no, clearly you are not good enough to either become a researcher in this field or work at a reputable company that does real ML instead of what you described.
>>
>>8817151
>>8817212
>he thinks human cognition has anything to do with machine learning
LMFAO. Better drop out while you can, my friend.
>>8817325
THIS
>>
>>8820182
>>he thinks human cognition has anything to do with machine learning

I don't. But the dudebros are coming anyway, because they think what I do is machine learning
>>
>>8820182
>he thinks human cognition has anything to do with machine learning
But it does. How could an RNN-based function generator be considered anything but machine learning? We just don't understand human cognition yet.
>>
Redpill me on ML, anons. Any way to cash in on that shit?
>>
>>8820228
It is. That paper is machine learning. I'm not sure where you got the impression that the authors were trying to create a computational model of the cognition of anything. At best they are creating a computational model of certain classes of human behavior, but more to the point they are simply trying to create game-playing agents.

If a technique came along tomorrow that was proven definitively NOT to be what happens inside minds but nevertheless improved their agents' performance, they would use it without question.
>>
>>8820271
I don't understand this post. Are you implying that human cognition is somehow analogous to "an RNN-based function generator"?
>>
What language should I program in, and what resources are there, if I just want to identify and track a specific type of object (in pictures or video)? I know nothing about machine learning, but I know Java, Python, some C++, and some (but probably not enough) MATLAB.
>>
>>8820299
You are getting hung up on the word "cognition". It doesn't mean what you think it does, particularly in this context.
>>
>>8820302
>>8820311
You are not ascribing enough to it. Your definition of cognition seems to be "complicated behaviors".
>>
File: baby.png (107KB, 188x198px)
Seriously though, what's the supply and demand like for ML work?

Seems like too many people and too much hype resulting in >>8817127
>>
>>8820327
I've heard there is still a large need for ML engineers
>>
>>8820308
Python
>>
>>8820305
Yes, there are neurons with recurrent connections that generate functions for motor control. Do we have a better description of a brain?
>>
>>8820892
No explanation is better than a non-explanation.
>>
>>8820898
Don't know what there is to explain. I literally just stated what the brain is made of and its purpose, and related it to an analogous structure that is under the domain of machine learning. What happens in the hidden layers of that network is what is referred to as "cognition"
>>
>>8820911
>How could an atom-based quantum-wave collapser be considered anything but particle physics?
>I literally just stated what the brain is made of and its purpose, and related it to an analogous structure that is under the domain of particle physics. What happens in the solutions to the many-body Schrödinger equation is what is referred to as "cognition".
See how unhelpful that is?
>>
Machine learning will never match the human brain until they stop trying to do everything with backprop and actually study the cortex.
>>
>>8820991
>Abandon something that works and study this thing we have no hope of understanding
>>
>>8821003
with such a defeatist attitude I'm surprised you haven't killed yourself
>>
>>8819308
You mean actually using the value as a label and feeding it into an estimator, or finding the proper value function based on reward?

If the former: I haven't seen a value function that doesn't end up as V(x) = r, where you couldn't model V(x) with an estimator labeling x with r.
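For what it's worth, a minimal sketch of where the two differ (numpy stand-ins, assumed discounted-MDP setup; all names illustrative): in value learning the regression "label" is not the raw reward r, it's a bootstrapped Bellman target that moves as the network itself changes.

import numpy as np

def q_net(states):
    # stand-in value network: (batch, state_dim) -> (batch, n_actions)
    return states @ np.ones((4, 3))

rewards = np.array([0.0, 1.0])
next_states = np.random.rand(2, 4)

# Bellman targets y = r + gamma * max_a' Q(s', a') play the role of
# labels, but unlike fixed supervised labels they chase the net itself.
targets = rewards + 0.99 * q_net(next_states).max(axis=1)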
>>
>>8821019
With your head so far up your own ass I'm surprised you can still use the internet
>>
>>8817146
except machine learning is used in every facet of our technology, not just for data mining
>>
>>8820991
Why would we want to emulate the human brain? It has inherent restrictions and convoluted structure that we can circumvent entirely.
>>
>>8821064
>It has inherent restrictions
It also has inherent advantages.

A brain-like system wouldn't replace all other ML, but it would do many of the things that people are struggling to get ML to work on.