Ironically, LLMs have only themselves to blame for the discovery of this limit. “The reason why we all got curious about whether they do real reasoning is because of their amazing capabilities,” Dziri said. They dazzled on tasks involving natural language, despite the seeming simplicity of their training. During the training phase, an LLM is shown a fragment of a sentence with the last word obscured (technically, models predict “tokens,” which aren’t always whole words). The model guesses the missing word and then “learns” from its mistakes.
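For readers who want to see that objective concretely, here is a minimal sketch of next-token-prediction training in PyTorch. Everything in it is invented for illustration: the toy bigram model, the tiny vocabulary and the training text. Real LLMs are transformers trained on vastly more data, but the loss they minimize is the same kind of “guess the next token” objective.

```python
# A minimal sketch of next-token-prediction training, the objective the
# article describes. Illustrative toy only: the bigram model, vocabulary,
# and training text are all made up for this example.
import torch
import torch.nn as nn
import torch.nn.functional as F

text = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(text))
stoi = {w: i for i, w in enumerate(vocab)}   # word -> integer id
ids = torch.tensor([stoi[w] for w in text])

class BigramLM(nn.Module):
    """Predicts the next token from the current one alone."""
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        return self.head(self.embed(x))      # logits over the vocabulary

model = BigramLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.1)

# Each training pair is (token, the token that follows it): the model
# sees a fragment with the next word obscured and must predict it.
inputs, targets = ids[:-1], ids[1:]
for step in range(100):
    logits = model(inputs)
    loss = F.cross_entropy(logits, targets)  # penalizes wrong guesses
    opt.zero_grad()
    loss.backward()                          # "learning" from mistakes
    opt.step()
```

Once trained, sampling from the model’s output distribution one token at a time is how such a system generates text.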