>>7811242 Only 3 minutes in, but this is hard to watch. It almost seems like they are trying to set him up as a crackpot because his views appear to be non-mainstream, as if he were some sort of animal in a zoo. The way they treat him seems disrespectful to me, but I could be wrong. I have to finish the video before I can make a more accurate assessment.
Guys like Minsky wanted to apply mathematics to intelligence, to understand it at a fundamental level like we do physics. This was at a time when computers were crude and expensive and it was thought that you would need a very smart approach to leverage the computer power you had. This turned out to be very hard to do, and was ultimately of limited success. Meanwhile, computers became much more powerful and cheap. Researchers made progress by trying very simple things, not based on a deep understanding of intelligence, and throwing a ton of computer power at it. This latter approach, at least for now, seems to have won out.
Looking back, Minsky felt that the field had regressed, offering little insight into the nature of intelligence. He looked at the sophistication of early papers, compared it to the crude practicality (and success?) of modern ones, and lamented the decay.
>>7811958 >This latter approach, at least for now, seems to have won out. What exactly did it win? It sure didn't bring us any closer to making a real AI. Things like Watson are nothing but dumb grinding machines with no real insight into what intelligence is.
>>7811985 His methods only failed because they needed more time and money than companies/governments were comfortable pouring into AI. That doesn't make them wrong.
>>7812031 I cannot say for sure whether the way Minsky envisioned the creation of a strong AI is 100% correct. No one can. But what I'm profoundly convinced of is that he was on the right track in thinking of the mind as a set of operators organized as a society, divided into major centers in which the various processes of the mind occur, such as language processing, logical thinking, and forecasting. What we have now is a set of tools that rely on sheer computational power and are only efficient at doing one thing. Computer scientists and engineers nowadays are too focused on, and too encouraged by (Google and the like), creating software that efficiently solves very specific problems, when a strong AI should be about understanding and addressing every possible problem, no matter how inefficient it is in that endeavor compared to specialized software.
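To make the society-of-mind picture concrete, here is a minimal toy sketch: a collection of simple specialist "agents," each competent at one narrow task, with a society object routing each problem to whichever agent claims it. All class and task names here are hypothetical illustrations of the general idea, not Minsky's actual model.

```python
# Toy sketch of the society-of-mind idea: intelligence as many narrow
# specialists plus routing, rather than one monolithic solver.
# Names are hypothetical; this illustrates the architecture only.

class Agent:
    """A specialist: can say whether it handles a task, and handle it."""
    def can_handle(self, task):
        raise NotImplementedError
    def handle(self, task):
        raise NotImplementedError

class ArithmeticAgent(Agent):
    """Handles only ('add', a, b) tasks."""
    def can_handle(self, task):
        return task[0] == "add"
    def handle(self, task):
        return task[1] + task[2]

class LanguageAgent(Agent):
    """Handles only ('shout', text) tasks."""
    def can_handle(self, task):
        return task[0] == "shout"
    def handle(self, task):
        return task[1].upper()

class Society:
    """Routes each task to the first specialist that claims it."""
    def __init__(self, agents):
        self.agents = agents
    def solve(self, task):
        for agent in self.agents:
            if agent.can_handle(task):
                return agent.handle(task)
        return None  # no specialist available: the society gives up

mind = Society([ArithmeticAgent(), LanguageAgent()])
print(mind.solve(("add", 2, 3)))    # 5
print(mind.solve(("shout", "hi")))  # HI
print(mind.solve(("fly",)))         # None
```

The contrast with the "one tool, one task" software criticized above is that the society degrades gracefully: an unrecognized task yields a failure rather than a crash, and new competencies are added by registering new agents rather than rewriting the solver.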
>>7812049 The problem with that metaphor is that we don't have systems that emulate intelligence yet. We don't have the "airplanes" of the AI field. A better comparison is that we have the ability to make motors, wings, and a fuselage, but we don't know how to assemble them into an actual airplane.
>>7812074 >I can do everything but I am shit at it This is why Windows is bad. We want AI to do a specific task, be it driving a car or flying a drone. The only people who want an AI they can have a chat with are virgins.
Artificially intelligent language (including dialogue) is absolutely integral to AI research. Aside from the ergonomics it offers on the business side (reduced costs in customer service), the models underlying the most effective non-rule-based AI systems have given us significant insight into the limits and future of our current models of intelligence. The language and communication of our own brains are a vital middle point between our thoughts and our actions (though the two aren't necessarily correlated), and modeling that accurately may be the first step toward achieving limited sentience (which would contribute to your own stated ultimate objective of artificial intelligence by allowing for a greater range of natural learning).
>>7812133 >The language and communication of our own brains is a vital middle point between our thoughts and our actions (though both aren't necessarily correlated), and modeling that accurately may be the first step in achieving limited sentience (which would contribute to your own stated ultimate objective of artificial intelligence by allowing for a greater range of natural learning). Utter bullshit when you consider that animals do not have our linguistic capabilities and still manage to distill their thoughts into actions and have sentience.
This is what you say. What are your research credentials, if you don't mind me asking? I'm only asking so that I can objectively gauge how much I should care about your opinion.
Given that I do have concrete research exposure in the field under figures who agree, I still stand by my statement that communication is a valid and important vector for understanding and replicating sentience. You're free to prove me wrong by demonstrating otherwise.
This is a 4chan archive - all of the shown content originated from that site. This means that 4Archive shows their content, archived. If you need information for a Poster - contact them.