

in the unable-to-reason-effectively sense
That’s all LLMs, by definition.
They’re probabilistic text generators, not AI. They’re fundamentally incapable of reasoning in any way, shape, or form.
They just take some text and emit the most probable next token according to their trained model, that’s all.
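To be concrete about what “most probable next token” means, here’s a minimal sketch of greedy autoregressive generation using the Hugging Face transformers library (the choice of gpt2 and the prompt are purely illustrative, not anything specific to the models being discussed):

```python
# Greedy next-token generation: at each step the model scores every token
# in its vocabulary, and we append the single most probable one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The cat sat on the", return_tensors="pt").input_ids
for _ in range(10):                        # extend the text by ten tokens
    logits = model(ids).logits             # scores for every vocabulary token
    next_id = logits[0, -1].argmax()       # pick the most probable next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Real chatbots sample from the probability distribution instead of always taking the argmax, but the loop is the same: predict the next token, append it, repeat.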
What Musk’s plan (using an LLM to regurgitate as much of its model as it can, expunging all references to Musk being a pedophile and whatnot from the resulting garbage, adding some racism and disinformation for good measure, and training a new model exclusively on that slop) will produce is a significantly more limited, more hallucination-prone model that occasionally spews racism and disinformation. Training a model exclusively on another model’s output is a well-documented recipe for model collapse.
Frankly, with the garbage Microsoft is producing these days, and the rate at which the quality, for lack of a better word, is degenerating, I’m starting to wonder whether LLM slop might actually be the lesser evil…