
AI Slop is Invading the Chess World

@LittleFireRat said in #70:

@TotalNoob69 I argue otherwise, my own app does explain the planning process, threats to king safety, as well as weaknesses such as doubled pawns and whether you should be concerned about them or not. If you don't believe it, then by all means actually see mine in action.

I am not saying your app doesn't do that; I am saying that SF does not give you that information. The difficulty is in using whatever tools you have (Stockfish, other engines, your own algorithms) to generate information about the position and moves that actually makes sense in a human teaching context. Slapping an LLM over it to translate it to natural language is the very easy part, and most of the time completely unnecessary. I welcome apps like yours and wish them to become as good as possible, but Stockfish and/or LLMs alone CANNOT power such applications.


@TotalNoob69 Okay, that's fair. There are LLMs that are learning to play chess, but I think the stronger models are around 1800-1900 level at the moment, and definitely not worthy of teaching based on that level of play. I do argue, however, that what we created is far more than simply an LLM slapped over Stockfish: we don't template or do anything restrictive towards how the LLM explains the plans and concepts that Stockfish generates for the user. The plans and their explanations, I'd argue, rival much of what I have heard coaches explain to me in terms of quality; the main difference is that my app sometimes requires you to ask multiple times at various stages, whereas a coach may be able to explain it all in one go. Mine does look at future positions to explain the plans and, like a real coach, will touch on all aspects of a position upon request, often covering multiple key aspects at once.


Also, BTW, I will tell you what the main difference is between apps like this and human analysis (say, like Igor Smirnov on his popular YouTube channel).

Let's say we are using Stockfish to find 5 lines, then use custom algorithms to define every reached position in human terms: outposts, holes, pawn structure, pins and skewers and the like, open files, square control, pawn breaks, etc. Then we use an LLM to translate the JSON output of these algorithms into human language.
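To make that concrete, here is a minimal sketch (plain Python, with hypothetical function names, not taken from any actual app) of the "custom algorithms" step: extracting two of the human-meaningful features mentioned above, doubled pawns and open files, from a FEN string into the kind of JSON blob an LLM would then verbalize:

```python
import json


def pawn_files(fen: str) -> dict:
    """Count pawns per file (0=a .. 7=h) for each side,
    parsed from the board field of a FEN string."""
    counts = {"white": [0] * 8, "black": [0] * 8}
    for rank in fen.split()[0].split("/"):
        file = 0
        for ch in rank:
            if ch.isdigit():
                file += int(ch)  # run of empty squares
            else:
                if ch == "P":
                    counts["white"][file] += 1
                elif ch == "p":
                    counts["black"][file] += 1
                file += 1
    return counts


def describe(fen: str) -> str:
    """Emit a JSON feature summary for an LLM to verbalize."""
    counts = pawn_files(fen)
    files = "abcdefgh"
    return json.dumps({
        "doubled_pawns": {
            side: [files[f] for f, n in enumerate(per_file) if n >= 2]
            for side, per_file in counts.items()
        },
        "open_files": [files[f] for f in range(8)
                       if counts["white"][f] == 0 and counts["black"][f] == 0],
    })
```

For the Exchange Ruy Lopez position after 4.Bxc6 dxc6, `describe(...)` reports Black's doubled c-pawns and no open files. The hard part, as argued above, is scaling this from two features to outposts, square control, pawn breaks, and so on in a way that matches how humans actually reason; the LLM translation layer on top really is the easy bit.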

Igor would have shown you the position and then looked at the lines that provide the most satisfaction: lines where the opponent plays the "obvious" move and you see how easily you destroy them.

Computers have no notion of which move is obvious for a human. I mean, they could, but very few projects have touched this in a reasonable manner. Even projects like Maia or Noctie, I believe, do it the wrong way, and they are hard to integrate into other apps anyway. Stockfish will never show you the lines where the opponent blunders, either: those 5 lines are the best possible lines it could find.

Therefore when you show it a gambit position, for example, the app will say how badly you played it by losing a pawn which will then give Magnus the perfect opportunity to defeat you. But you're not playing Magnus! And you are not even close to understanding the plans that might be discerned from the best possible lines. And even if you could, you wouldn't have as much fun as with the bait-and-smash prep lines against people your own level.


@LittleFireRat said in #72:

@TotalNoob69 Okay, that's fair. There are LLMs that are learning to play chess, but I think the stronger models are around 1800-1900 level at the moment, and definitely not worthy of teaching based on that level of play. I do argue, however, that what we created is far more than simply an LLM slapped over Stockfish: we don't template or do anything restrictive towards how the LLM explains the plans and concepts that Stockfish generates for the user. The plans and their explanations, I'd argue, rival much of what I have heard coaches explain to me in terms of quality; the main difference is that my app sometimes requires you to ask multiple times at various stages, whereas a coach may be able to explain it all in one go. Mine does look at future positions to explain the plans and, like a real coach, will touch on all aspects of a position upon request, often covering multiple key aspects at once.

I really hope it works fine, as I said. But I also said in this thread that many coaches are bullshitting more than teaching, so...


Hey, @RuyLopez1000 , you opened this can of worms. You should participate more! :D


We have combated this issue in a basic way by having it always favor the human master-games database and explain those plans, unless the move in question is a losing move. If we had the funds, we would love to create a trained AI that runs locally and is suited to human play: taking examples from aggressive players such as Jobava or Mamedyarov for attacking chess, positional play from players like Karpov, and a balanced style such as Magnus around 2018, when he was grinding players out in endgames but also playing dynamically when the position called for it. We could train those three core styles to teach chess in that manner, similar to how Lc0 was developed. Doing so would, however, need equipment in the 8-10k range.
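A minimal sketch of that selection rule (hypothetical names and threshold, not the app's actual code): prefer the most-played move from the master-games database, and fall back to the engine's choice only when every popular human move loses by more than some centipawn margin:

```python
# Centipawn margin below the engine's best move beyond which a
# database move counts as "losing". An assumption; tune to taste.
LOSING_DROP_CP = 150


def move_to_explain(db_moves: dict, engine_evals: dict) -> str:
    """Pick the move the tutor should talk about.

    db_moves:     {move: games_played} from a master-games database
    engine_evals: {move: centipawns for the side to move}
    """
    best_eval = max(engine_evals.values())
    # Try human moves, most-played first.
    for move, _ in sorted(db_moves.items(), key=lambda kv: -kv[1]):
        if engine_evals.get(move, -10**9) >= best_eval - LOSING_DROP_CP:
            return move
    # Every human favourite loses outright: fall back to the engine's best.
    return max(engine_evals, key=engine_evals.get)
```

The design choice this illustrates is the one debated above: the move being explained is anchored to what humans actually play, with the engine acting only as a veto, rather than letting Stockfish's top line dictate the lesson.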


See, I believe that AI is not actually slop, and neither are humans; I think the reality is that we are just starting to understand how hectic a problem AI actually is. It may not be slop, but it sure should be kept within limits, just like how
@Telecaster12 said in #7:

The problem with AI? If it doesn't know something, it just makes up information. It's unreliable, and sometimes downright dangerous. And it seems like most of the blogs on Lichess are just ChatGPT now.

These days AI is just unrealistic and dangerous. Who knows, one day AI might just end humans before we can even notice. AI was never made to be good. Just take Google, for example: it says that it promotes non-violence, but let me tell you, it is actually Google which helped the US attack Iran and kill so many innocent people. For more information about this, read the book ‘Surveillance Valley’ by Yasha Levine and you will understand what the Internet was made for. As for me, I feel like AI is soon going to become like drugs: it will take you in and might end you. In chess, AI might turn into the overlord and make people so lazy that one day Lichess might have more bots and AI players running around than real people.


From the ARPANET to Windows and Google, from the BRAIN virus to the Pegasus spyware, there wasn't much threat. But now, AI can make anything for you, from a simple line of code for your game to a complete computer virus. The Internet was made to be bad, and so was AI. In every discipline of life, AI must be restricted and controlled.


Honestly, AI Slop is omnipresent nowadays: AI chess tutors, AI language tutors, AI code agents, etc. It is so easy to fool people into thinking AI can replace human interaction, and it very much appeals to those who want as little human interaction as possible, because AI cannot judge us, or so we think. But once you interact with these LLMs, you quickly understand that they can at best be QoL tools, which should never replace 1) your own commitment to growth and 2) human interaction. I can see a generation of young people coming of age so overreliant on AI that they fail to grow altogether.


Oh, yeah!
I read so much opinionated talk in this thread, and people being able to express themselves in their native tongue ...
So why not try my own bit?
First of all, I never forget, not for an instant, that the term AI is a misnomer. It seems to convey the dream of some external intelligence, which it really is not. And from that misunderstanding, all those projections (teacher, analyst, coach, ...) have their unfounded source.

What strikes me most, as a person who loves many aspects of the game, is that my own learning process shows huge differences from the image portrayed in here. Of course, one might focus on chess lingo and ideas, but IMHO that is only occasionally helpful, as there is something inherently missing:

Instead, the best lessons came from REALLY going deep with questions like "how did it happen that I lost this game?" - not staying at the surface of analysing moves, but looking into emotions and habits of thinking and behaving, and tracking those to their inner source. Because making a change there makes all the difference and opens up new possibilities.

For the time being, I might be lucky to find a (human) coach who, out of compassion, can invite the kind of exploration that helps me move forward. But this is way beyond what LLMs (or similar tools) can do, so my expectations just can't keep up with the stuff that is advertised. BTW: have advertisements ever told the truth?

People are intelligent - some more, some less - Algorithms aren't. Sorry.
Maybe AI should be replaced with SALAMI: "Systemic Approaches to Learning Algorithms and Machine Inferences" ;-)
