I also asked Gemini the same question. It got it wrong, and it also explained it wrongly, thinking promotion would not remove pawns from the board:
"Actually, this is not entirely accurate. While a standard game of chess starts with exactly 32 empty squares, the number of empty squares can decrease if pawns are promoted to new pieces"
That's not how promotion works.
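To make the rules point concrete: promotion removes the pawn from the board and places the new piece on the same square, so the total piece count never increases. A minimal sketch in plain Python, using a hypothetical dict-based board representation (not any real chess library):

```python
# Minimal sketch: a board as a dict mapping square -> piece.
# Promotion removes the pawn and places the new piece on the
# same square, so the total piece count is unchanged.

def promote(board, square, new_piece):
    """Replace the pawn on `square` with `new_piece` in place."""
    assert board[square] == "P", "only a pawn on the last rank promotes"
    board[square] = new_piece  # pawn leaves the board, new piece enters

position = {"a8": "P", "h1": "K", "a2": "k"}  # white pawn just reached a8
before = len(position)
promote(position, "a8", "Q")
after = len(position)
print(before, after)  # piece count is the same: 3 3
```

So promotion can never decrease the number of empty squares, contrary to what Gemini claimed.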
@Ivanisdabest15 Interesting, that seems to indicate that Google uses different models for Gemini and for the search engine LLM, whichever model that is.
@NHL_1024 said in #52:
> @Ivanisdabest15 Interesting, that seems to indicate that Google uses different models for Gemini and for the search engine LLM, whichever model that is.
No matter how good AI gets, it always makes mistakes, whatever the question. The 32-empty-squares question made the AIs stumble, even making up their own information.
I would honestly like to challenge this view that LLMs can't teach chess. I've designed one and give public demonstrations of how it functions, and the plans it gives me for how to continue the play in my games are great insights. I'm a 2100-2200 strength online player, likely around 1700-1800 USCF as a serious tournament player, and it's been assisting me with positions whenever I have no idea what to do. The responses are not templated and are tested for quality. Sure, some small errors slip in at times, which are corrected just by looking at the board, but with how we designed ours it can give long-term plans with consistent, quality advice. I'm ace7712 on Discord if anyone wants to see what a real LLM can do with chess coaching. And if we had a trained, local model, the opportunities to improve on what is already a great product would be endless.
@Ivanisdabest15 said in #53:
> > @Ivanisdabest15 Interesting, that seems to indicate that Google uses different models for Gemini and for the search engine LLM, whichever model that is.
>
> no matter how good ai can be, they always make mistakes no matter the question, the 32 empty squares question made ai stutter, even making up their own information.
However, the same is also true for humans. When I first read the question in the blog post, for some reason I thought the answer given by the LLM was actually correct. I also once answered the question "How many 'r's does the word 'strawberry' have?" with "2" instead of "3", so I made the same mistake as many LLMs. I cannot give a good explanation for that. So, if many humans actually make these errors, it may well be that the AIs' errors are really wrong stuff learned from us.
> never less than 32 empty squares
I would expect AI not to use *less* when it's *fewer* (even when the error is present in the question as well):
https://youtu.be/8Gv0H-vPoDc?si=fUk1TLShlT3l-RnO&t=55
AI slop should have no home in creative spaces, including chess. If I read something about chess, I'm expecting human insight, not a mashed-together amalgamation that is both inaccurate and inhuman.
This applies to most any use of AI, to be honest, as all it tells me is that you are too lazy to actually put effort into creating something special. A poorly made Photoshop job has infinitely more soul than any image AI can make.
@Kolskegger said in #49:
> The statement claims "never less than 32," which is incorrect—the correct phrasing would be "never more than 32" or "at least 32."
> More or least at least
> or immoralist which may be a list of immorals at least or morralist where i m moral which would never be immoralist, never the least its false unless its never less or never more than 32.
>
> * I think I'll just sit by, have a dirty gin martini and enjoy the slop, I mean show. Thank you Ruy Lopez!! ;)
Maybe you should think for a few seconds before posting. If there is an ending with, say, king and pawn versus king, there are 61 empty squares (more than 32 empty squares!).
But please, go ahead and tell me in what position there are 31 empty squares. Also, "never more than 32" and "at least 32" are two completely different statements: the former states that X ≤ 32 and the latter that X ≥ 32.
The statement "there are never less than 32 empty squares", which is X ≥ 32, is correct. Please, this is not even middle school math.
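The inequality can be checked mechanically: a game starts with 32 pieces, and no legal move ever increases the piece count (a capture removes one piece; promotion, as noted above, swaps one piece for another), so empty squares = 64 − pieces ≥ 32 always holds. A small sketch under that simple piece-count model:

```python
# Sketch of the invariant: empty squares = 64 - pieces, where the
# piece count starts at 32 and can only stay equal or decrease.

def empty_squares(piece_count):
    return 64 - piece_count

pieces = 32                          # starting position
assert empty_squares(pieces) == 32   # exactly 32 empty squares

pieces -= 1                          # a capture removes one piece
assert empty_squares(pieces) == 33   # empty squares only grow

pieces += -1 + 1                     # promotion: pawn out, queen in
assert empty_squares(pieces) == 33   # net change is zero, still >= 32
```

The king-and-pawn-versus-king ending above is the same arithmetic with 3 pieces: 64 − 3 = 61 empty squares.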
@jamemates said in #56:
> I would expect AI not to use *less* when it's *fewer* (even when the error is present in the question as well)
Interesting point, however, the case seems not so clear: https://en.wikipedia.org/wiki/Fewer_versus_less
@Gen_E_chess
Lmao! I thought it was obvious that my post was a joke. Maybe you should have thought for a few seconds before posting as well. Cheers!!