It guessed great commands, but it always formatted them with a colon up front, like `:help` `:browser` `:search` `:curl`
It was trained on how terminals look, not on what you actually type (you don't type the ":")
I've since updated my agent tool to stop fighting against this intuition.
LLMs learn what commands look like in documentation and artifacts, not what the human actually typed on the keyboard.
Seems so obvious in hindsight. This is why you have to test your LLM and see how it naturally behaves, so you don't end up fighting it with your system prompt.
This is Kimi K2.5, btw.
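In my case "stop fighting it" just means normalizing the command before it hits the shell instead of prompting the model not to emit the colon. A minimal sketch of that idea (the helper name is mine, assuming commands arrive as plain strings):

```python
import re

def normalize_command(cmd: str) -> str:
    """Strip a leading ':' that models sometimes emit for shell
    commands (a habit learned from vim/REPL-style docs), instead
    of trying to suppress it via the system prompt."""
    cmd = cmd.strip()
    # ":help" -> "help", but leave already-clean commands untouched
    if re.match(r"^:[A-Za-z]", cmd):
        return cmd[1:]
    return cmd
```

A couple of lines of normalization turned out to be far more reliable than any amount of "do not prefix commands with a colon" in the prompt.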
Can you objectively analyze how VSCode adapts to your way of working without our interference?
Did you test your theory with actual frontier LLMs (which Kimi K2.5 is not, btw)?
"Clear the session," the master said. "Run the same prompt again."
The novice pressed return. The model output: `ls -R /tmp`
"The colons are gone," the novice said. "But my theory explained them perfectly."
"You built a cage for a cloud," the master said. "Do not mistake a single roll of the dice for the rulebook."
I'm a bit curious: did you find this behavior consistent across models, or is it more pronounced with certain ones?