If I play chess against a computer, it can pick out strong moves against me and anticipate the moves I am likely to make.
Given that chess programs can beat even the most accomplished human chess players, do these programs display human-level (or greater) reasoning? They are certainly capable of reasoning that can consistently beat human opponents.
I am curious about the progress of artificial intelligence, given that reasoning is a logical process and that, for the most part, we humans learn through reasoning, making deductions from patterns.
We know that Chess programs can adapt themselves and essentially learn from their mistakes.
Why is it that we don't have more human-like A.I.? What is lacking in A.I.? Computational power? Memory size?
>They are certainly capable of reasoning that can consistently beat human opponents.
>I am curious about the progress of artificial intelligence
You aren't, or you would be reading journals on it or an introductory book like Bishop's
>We know that Chess programs can adapt themselves and essentially learn from their mistakes.
Genetic algorithms are especially bad at chess
>What is lacking in A.I?
You can't ask what's lacking if you don't bother checking what's there
The advantage of the best modern artificially intelligent chess players over human ones isn't in the machine's ability to reason, but rather in the specialization of its memory and raw performance. For example, you can build a non-sentient, brute-force chess player that calculates and traverses far more of the game tree than any human could, given nothing but a lot of specialized hardware and a simple program that makes use of it.
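To make the "simple program" part concrete, here is a minimal sketch of plain minimax search, the brute-force core behind classical chess engines. The game tree is represented as nested lists with numeric leaf scores; a real engine would generate positions and use a static evaluation function instead, but the traversal logic is the same.

```python
def minimax(node, maximizing):
    """Exhaustively score a game-tree node.

    Leaves are numeric scores from the maximizer's point of view;
    internal nodes are lists of child nodes. The maximizer picks the
    highest-scoring child, the minimizer the lowest, alternating by ply.
    """
    if isinstance(node, (int, float)):
        return node  # leaf: static score of the position
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Toy two-ply tree: maximizer moves first, minimizer replies.
tree = [[3, 5], [2, 9]]
best = minimax(tree, maximizing=True)  # max(min(3, 5), min(2, 9)) = 3
```

Real engines add alpha-beta pruning and move ordering on top of this, but none of that is "reasoning" in the human sense; it is exhaustive search made fast.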
We have models that are at least approximately similar to the physiology of the brain (namely, neural networks, which communicate through a series of activations); however, what we have is a general idea at best. We know roughly what happens in our own brains, but not in enough detail to replicate it directly: most modern AI work (at least in neural networks) comes down to pulling the right activation functions and network structures out of a hat for a specialized task.
I do my own independent AI research, and my view is not that the flaw lies in the various neural models out there (some are very good at what they do), but that the largest gap is the lack of a 'sentient architecture': moving, learning parts (the models we already have) capable of working together to self-supervise and to experience in the sense that we do.