>>7768836 Any book recommendations? Obviously they won't think the way we do; I'm just interested in the idea that an AI using parallel processes could have errors that corrupt particular processes without disabling the AI as a whole. E.g., if the AI worked something like a neural network and one particular layer of the network had a glitch, the layers above it would read corrupted information and perhaps try to consolidate or decode that data. Curious what the ramifications might be.
Obviously there is no complete answer to this question, as true AI is still mysterious and there are many possible routes to it.
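A toy sketch of the scenario above, assuming nothing about how a real AI would be built: a tiny feedforward net with random weights (purely hypothetical numbers) whose middle layer gets "glitched", while the layer above it keeps consuming the corrupted activations instead of halting.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 3-layer feedforward net with random weights (illustrative only).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 8))
W3 = rng.normal(size=(8, 3))

def forward(x, glitch=False):
    h1 = np.tanh(x @ W1)
    h2 = np.tanh(h1 @ W2)
    if glitch:
        # Simulate a "glitched" middle layer: zero out half its units.
        h2 = h2.copy()
        h2[:, :4] = 0.0
    # The next layer consumes h2 either way -- no error is raised.
    return h2 @ W3

x = rng.normal(size=(1, 4))
clean = forward(x)
corrupted = forward(x, glitch=True)

# Both runs complete; the corrupted activations just change the output.
print(clean)
print(corrupted)
```

The point of the sketch is only that nothing in the computation "notices" the corruption: downstream layers happily decode whatever numbers they receive.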
How do you get a computer to perceive color, or to hear sound subjectively? Most encoding is done on the client's (the human's) behalf. A camera's photoelectric sensor does not actually perceive anything. For pattern recognition, the intelligent agent must be "instinctually" seeded. What's really spooky is the sense that you could actually generate a sentient consciousness in a jar and have it react to an environment that's essentially a "sandbox". It would make for a cool science fiction device, if it hasn't been done already.
>>7768934 I was ridiculed at school recently by a group of football kids (all joining the Air Force) over going to school for CS. I gave my side about how the military is the most autistic path you could take, and I nicely said it was a waste of potential. They said things along the lines of me being a retard because AI will soon(!!!) reach a point where it can write code, and all the money I dropped on a degree will be wasted because I'll be jobless. If you have ever researched AI for longer than five minutes, you'd be aware of how improbable that (the singularity) is. Anyway, thanks for the book rec, anon; I'll be picking up a few to hand out.
>>7768828 With online learning machines, it's possible that a program develops a bias based on its initial learning samples, and that this bias is hard to reduce afterwards, even after many contradicting data samples have been observed. The usual trick to mitigate this is to make the program "forget" by lowering the weights of the old data samples.
I don't think this qualifies as schizophrenia, though.
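The "forgetting" trick above can be sketched with a toy online mean estimator (the data and decay rate here are made up for illustration): a plain running average stays biased by the early samples, while an exponentially decayed update down-weights old data.

```python
# Online mean estimation: plain averaging vs. exponential forgetting.
# Early "biased" samples dominate the plain average for a long time,
# while the decayed estimate tracks the newer data. (Illustrative only.)

def plain_online_mean(samples):
    mean, n = 0.0, 0
    for x in samples:
        n += 1
        mean += (x - mean) / n  # every sample weighted equally forever
    return mean

def forgetful_online_mean(samples, alpha=0.1):
    mean = None
    for x in samples:
        # Old samples decay geometrically: weight (1 - alpha)^age.
        mean = x if mean is None else (1 - alpha) * mean + alpha * x
    return mean

# 50 early samples near 100 (the initial bias), then 200 samples near 0.
data = [100.0] * 50 + [0.0] * 200

print(plain_online_mean(data))      # still dragged toward the early bias
print(forgetful_online_mean(data))  # old samples have been "forgotten"
```

The plain average ends at 20 (the early samples never lose influence), while the decayed estimate ends up near 0, having effectively forgotten them.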
>>7769352 Yes, these are called genetic algorithms, and you are right. Hell, the whole point of machine learning is to produce decision rules, in other words a program, that programmers are either too lazy or too stupid to write.
However, if we simply compare the amount of energy a computer needs to do the same task as a small part of the human brain, we can conclude that we are very far from any kind of singularity. A human brain consumes about 20 W, less than a smartphone and orders of magnitude less than any HPC cluster, let alone a supercomputer.
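For the unfamiliar, a genetic algorithm in miniature. The problem (the classic "OneMax" toy: evolve a bitstring toward all ones) and all the parameters are chosen purely for illustration; the point is that selection, crossover, and mutation produce a decision rule nobody wrote by hand.

```python
import random

random.seed(42)

# Minimal genetic algorithm for the OneMax toy problem.
LENGTH, POP, GENS, MUT = 20, 30, 60, 0.05

def fitness(bits):
    return sum(bits)  # count of ones; maximum is LENGTH

def mutate(bits):
    # Flip each bit independently with probability MUT.
    return [b ^ 1 if random.random() < MUT else b for b in bits]

def crossover(a, b):
    # Single-point crossover of two parent bitstrings.
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]  # truncation selection: keep the top half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))  # typically at or near LENGTH
```

Because the top half of each generation is carried over unchanged, the best fitness never decreases, and the population climbs toward the all-ones string without anyone specifying how to get there.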
>>7769763 Sure, now everyone on /sci/ is a pluri-PhD veteran of WWII who went to Mars and came back, thanks to the meme drive he created when he was 9 years old, to tell us the secret of immortality and AI, but the evil government won't let him reveal it for fear of losing control over the 7 billion soup-eating monkeys that live on this fucking planet.
Dafuq are you talking about? All I said was that I'm a veteran, which isn't exactly uncommon. I don't have a PhD or anything; I'm just a grad student. Is it super unlikely that a veteran grad student would visit /sci/? Because that doesn't seem particularly unusual to me.
Little story... Legend has it that Linux was named after Pope Linus, the pope who came after the foundation stone. There's a deeper meaning and allegory as to why that is. And value. Linux's true origin is unclear, but some people believe it has a lot of potential in AI.
>>7768859 Khan Academy/YouTube are better; examples are key. And you have to practice yourself. MIT has a channel.
Parallel processing is about executing multiple calculations simultaneously; it has to do with optimisation and computer architecture.
If by "corrupt" you mean "bug", then 9,999 times out of 10,000 the program doesn't run. The ten-thousandth time, it does something slightly different.
If by corrupt you mean data spontaneously changing, that essentially never happens. Computers don't make mistakes, and a neural net wouldn't "glitch" in the sense of the word I suspect you're using.
If a computer tries to read corrupted data, it throws an error.
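That's true for strictly structured data, at least: a parser refuses corrupted input rather than silently misreading it. A minimal illustration with JSON (the data here is made up):

```python
import json

good = '{"layer": 3, "weights": [0.1, 0.2]}'
corrupted = good[:-6]  # chop off the tail to simulate corruption

parsed = json.loads(good)  # well-formed input parses fine
try:
    json.loads(corrupted)
    raised = False
except json.JSONDecodeError:
    raised = True  # the reader raises an error instead of guessing

print(parsed["layer"], raised)
```

Note the caveat: raw numeric data, like neural-net activations, has no such structure to validate, so corruption there would pass through silently.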
This whole nonsense about neural nets becoming intelligences is a load of Hollywood bunk. Algorithms/heuristics solve specific problems. Neural nets are mostly used to match images with words. It's entirely possible some evil artificial intelligence evolves out of it, like it's entirely possible I jack off and my sperm spontaneously evolves into a tiger on the floor.