
Memory: The Key To Chess?

Modern chess is not about using your cognitive skills. Most of the top players memorize opening theory, then practice tactics and the endgame (here they have excellent memory); after that they only have to attack the opponent's memory, with perhaps 10% cognitive skill.


Overload the opponent's memory?


@chessfan124 said in #8:

Furthermore, the article makes an inaccurate comparison between the workings of a neural network chess engine like AlphaZero and the human brain. While neural networks can provide insights into how humans process information, they are not an accurate model of how the human brain works. The article also cites a study that examines AlphaZero's internal representations of chess positions to suggest that grandmasters' chunks could be found there. However, the authors themselves state that not all of these concepts correspond to any known chess concept, which weakens the argument that they can be used to decode grandmasters' memories.

Calling a model "inaccurate" does not say much on its own; it can always be said to be true. It comes with the territory: a model of something is about understanding the thing, not recreating it. It is inaccurate almost by design. A truth-seeking statement would mention what the model leaves out that would matter to whatever question the modeling exercise was attempting to address.

"Known chess concept" might itself need scrutiny. Do the things that language has coerced into names necessarily have to be shared by all experience trajectories?

There may have been some founder effects from the first few individuals who chose to verbalize some of their own intuitive abstractions to build chess theory. Not all chess-playing experts could also be chess-theory builders and communicators.

The order in which SF has tried to implement such named features, one at a time, and their possible contribution to its success, may not warrant treating them as canonical internal human representations. Many of them are likely to be overlapping or even conflicting. The current NNUE approach is more likely to isolate which features actually best explain the engine's past predictions (although there is some circular information flow going on, in that the features are those that best predict classical search at some moderate depth, through supervised learning).
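As a toy illustration of that kind of "which named feature explains the evaluations" question, one can refit an evaluator with each feature ablated and see how much the fit degrades. Everything here is synthetic (the linear "evaluator", the feature count, and the weights are my own stand-ins; real NNUE nets are nonlinear and far larger):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: 200 positions described by 4 hand-named features
# (material, mobility, king safety, ...), and engine evaluations that
# actually depend on only three of them.
features = rng.normal(size=(200, 4))
true_w = np.array([3.0, 0.5, 0.0, 1.5])   # feature 2 contributes nothing
evals = features @ true_w + rng.normal(scale=0.1, size=200)

# Baseline fit with all features.
w, *_ = np.linalg.lstsq(features, evals, rcond=None)
base = np.mean((features @ w - evals) ** 2)

# Ablation: refit without each feature; a large loss increase means the
# feature carries explanatory weight, a tiny one means it is redundant.
extra = {}
for j in range(4):
    cols = [k for k in range(4) if k != j]
    wj, *_ = np.linalg.lstsq(features[:, cols], evals, rcond=None)
    extra[j] = np.mean((features[:, cols] @ wj - evals) ** 2) - base
    print(f"feature {j}: extra loss when ablated = {extra[j]:.3f}")
```

On this synthetic data the ablation correctly flags feature 2 as near-useless, which is the sense in which overlapping or conflicting named features could be sorted by how much they actually explain.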

I have not read the paper closely enough yet, but I would have taken an unsupervised approach to the "torso" and "heads" view of the network's weights over chess positions. And, again, my pet suggestion of scaling down the complexity, as old masters have kept suggesting for chess education: start with endgame concepts, where one can better control how the concepts interact.

Reading the paper (it will take a while) might help me follow the flow of the arguments. I am not sure a correspondence between SF's named components and those of A0 (or LC0, if possible) is needed to validate an A0-type machine model of a chess expert's internal representations (pattern or chunk memories). It might be fishing in the wrong waters.


@Toscani said in #5:

Thanks for the interesting link.
storage.googleapis.com/uncertainty-over-space/alphachess/index.html

I should have looked at that before jumping into the paper and its appendices.
That link states that they did feature extraction first (which I need to read more closely), and then looked for correspondence with known concepts (using SF, somewhere from version 8).
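For what "feature extraction, then correspondence with known concepts" could look like mechanically, here is a minimal linear-probe sketch. Everything in it is synthetic (the "activations", the concept labels, the sizes); the real study probes actual network activations against engine-derived concept labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: "activations" of a network torso for 500 positions, and
# a binary label for one named concept (e.g. "has a passed pawn") per
# position, constructed so the concept is linearly encoded.
n, d = 500, 64
acts = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
labels = (acts @ w_true > 0).astype(float)

# Fit a linear probe by logistic regression (plain gradient descent).
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(acts @ w)))
    w -= 0.1 * acts.T @ (p - labels) / n

pred = (acts @ w > 0).astype(float)
accuracy = (pred == labels).mean()
# High probe accuracy suggests the concept is (linearly) readable from
# the representation; low accuracy suggests it is not.
print(f"probe accuracy: {accuracy:.2f}")
```

The "unknown concepts" point from earlier posts maps onto this picture too: one can also extract directions from the activations first and only afterwards ask whether any known concept label predicts them well.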

I find the historical evolution of opening preferences in the human population interesting (not sure yet how that is quantified or defined), alongside that of A0's self-play epochs. The question is worth wondering about, although I am not sure A0's self-play should be expected to follow the population of learning trajectories that might lie behind the human population statistics. Or I am not yet getting the meaning of those graphs.

Here is my guess (not having read the paper for the equivalent figures yet). An epoch might be a batch of self-play, where the previous pair of self-instances, one frozen and one learning, is replaced for a new epoch of self-play starting from the same previously learned instance on each side. That would be the horizontal axis.

For humans (lower curves), we have calendar years, from 1978 to 2018.
Hovering over one figure and following the various connected curves, we see that the two figures are linked by their actual move sequences (on the right, a color-coded display of which sequence is which).

The color code is a redundant hint about which band corresponds to which initial sequence of moves, so the prevalence would be the area between the bounding curves: same color, same sequence.
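A minimal sketch of how such prevalence bands could be computed from a game collection (the game records and move sequences below are made up; the real figure aggregates far more games):

```python
from collections import Counter

# Toy game records: (year, opening move sequence) pairs.
games = [
    (1978, "e4 e5"), (1978, "e4 e5"), (1978, "d4 d5"),
    (1998, "e4 e5"), (1998, "e4 c5"), (1998, "e4 c5"),
    (2018, "e4 c5"), (2018, "e4 c5"), (2018, "d4 Nf6"),
]

def prevalence_by_year(games):
    """Fraction of games per year starting with each move sequence."""
    per_year = {}
    for year, seq in games:
        per_year.setdefault(year, Counter())[seq] += 1
    out = {}
    for year, counts in sorted(per_year.items()):
        total = sum(counts.values())
        out[year] = {seq: c / total for seq, c in counts.items()}
    return out

prev = prevalence_by_year(games)
# In a stacked-band plot, cumulative sums of these fractions give the
# bounding curves; the band thickness for a sequence in a given year
# (the area between its curves) is exactly its fraction here.
print(prev[2018]["e4 c5"])  # 2 of 3 games in 2018
```

The same counting works whether the horizontal axis is calendar years (human games) or training epochs (self-play games), which is presumably what makes the side-by-side display possible.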

Q: does an A0 epoch cover some number of independent self-play runs, or just one run and its many games? That last possible misunderstanding of mine may be what is itching me in this comparison. The human games are played by many pairs within a population of possibly high-level players.

So I am not sure how to interpret this parallel display. Would we have expected the human population to learn over the years how to play chess? One curve may be showing the evolution and spread of population knowledge (news, rumors, fashions), while the other is more about one individual learning over many, many games.

I thought I would give some legend to the figures there, hopefully not too off-track. If so, I would be glad to be corrected or offered another opinion.


https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9704706

PNAS version of the paper; it has 10 pages, with the rest put into supplemental PDF files. It might be less of a mountain to read. This is what the posts above are about; I assume no real difference in content, just organization.
