
Analysis: Engine On Or Off?

Learn from your mistakes is based on Stockfish's backward server analysis. Lichess adds a filter layer on top of SF, with an adjustable binning from score to (inaccuracy, mistake, blunder) categories (after converting the centipawn score to winning odds).
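The score-to-winning-odds conversion mentioned above can be sketched in a few lines. The sigmoid constant below is the one Lichess publishes for its accuracy metric; the 0.10/0.20/0.30 drop thresholds are illustrative assumptions, not guaranteed to match the server's exact tuning:

```python
import math

def win_chance(cp: int) -> float:
    """Convert a centipawn score to an expected winning chance in [0, 1].
    The constant is from Lichess's published accuracy formula; treat the
    whole function as an approximation of the server-side conversion."""
    return 1 / (1 + math.exp(-0.00368208 * cp))

def judge(cp_before: int, cp_after: int) -> str:
    """Bin the drop in winning chance into Lichess-style categories.
    The thresholds here are illustrative, not the server's exact values."""
    drop = win_chance(cp_before) - win_chance(cp_after)
    if drop >= 0.30:
        return "blunder"
    if drop >= 0.20:
        return "mistake"
    if drop >= 0.10:
        return "inaccuracy"
    return "ok"
```

For example, going from +50 cp to -250 cp costs about 0.26 in winning chance, so `judge(50, -250)` lands in the "mistake" bin under these thresholds.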

I was advocating more for a filter distinguishing strategic/positional/long-term errors from short-term/tactical errors.

With only the current position's score as the measure of errors to learn from, no control of that kind is possible.

One has to look at leaf evaluations down the PVs (evaluations actually made, not just any interior nodes), and at some notion of a PV profile.
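One way to make the PV-profile idea concrete: look at how the evaluation evolves along the engine's refutation line, rather than only at the root score. The heuristic below is a sketch of that idea, not anything Lichess implements; the function name, the 150 cp swing threshold, and the 4-ply "early" window are all assumptions for illustration:

```python
def classify_error(pv_evals: list[int], tactical_swing: int = 150) -> str:
    """Classify an error from the evaluations along its refutation PV.

    pv_evals: centipawn scores (from the mover's side) at successive
    positions down the principal variation, starting from the position
    after the erroneous move. Hypothetical heuristic: a sharp drop
    within the first few plies suggests a short-term/tactical error;
    a gradual slide over the whole PV suggests a positional one.
    """
    if len(pv_evals) < 2:
        return "unknown"
    early = pv_evals[: min(4, len(pv_evals))]
    # Largest single-step drop near the start of the PV.
    max_step = max(a - b for a, b in zip(early, early[1:]))
    total = pv_evals[0] - pv_evals[-1]
    if max_step >= tactical_swing:
        return "tactical"
    if total >= tactical_swing:
        return "positional"
    return "minor"
```

A sudden 200 cp step early in the line reads as tactical, while a slow 180 cp slide spread over many plies reads as positional; the root score alone cannot separate the two cases.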

But anything involving multiPV or profile consideration is going to cost Elo points under the current and immutable engine tournament specifications (inherited from human tournaments, I would say). Those Elo measures (and the engine tournament specifications that are their competition context) say nothing about the analysis worth of the engine. Engine code has been reluctant to include multiPV, and I am not sure there are multiPV tournament categories, or any thinking toward competing while having the tree width scrutinized (possibly constraining it, if it cannot directly be a competitive spec itself).

In general, improving the analysis trustworthiness of the engine with the best Elo in such tournaments is counter-productive toward Elo gains. I claim this.


I find that it is hard to analyse on my own, so I always use Stockfish.


@dboing said in #31:

Learn from your mistakes is based on Stockfish's backward server analysis. Lichess adds a filter layer on top of SF, with an adjustable binning from score to (inaccuracy, mistake, blunder) categories (after converting the centipawn score to winning odds).

I was advocating more for a filter distinguishing strategic/positional/long-term errors from short-term/tactical errors.

With only the current position's score as the measure of errors to learn from, no control of that kind is possible.

One has to look at leaf evaluations down the PVs (evaluations actually made, not just any interior nodes), and at some notion of a PV profile.

But anything involving multiPV or profile consideration is going to cost Elo points under the current and immutable engine tournament specifications (inherited from human tournaments, I would say). Those Elo measures (and the engine tournament specifications that are their competition context) say nothing about the analysis worth of the engine. Engine code has been reluctant to include multiPV, and I am not sure there are multiPV tournament categories, or any thinking toward competing while having the tree width scrutinized (possibly constraining it, if it cannot directly be a competitive spec itself).

In general, improving the analysis trustworthiness of the engine with the best Elo in such tournaments is counter-productive toward Elo gains. I claim this.
That is too much for me to read


@dboing said in #31:

learn from your mistakes is based on SF backward server analysis.
Put a capital at the start
Learn from your mistakes is based on SF backward server analysis.

<Comment deleted by user>

@WRPeter said in #33:

That is too much for me to read

Use a monitor that is big enough. Or just gloss over it. This might not have been the post you were looking for...


I always turn off the engine once I finish playing.
