The overconfident tone is baked in. LLMs don't have knowledge or world models; all the text they produce is nothing more than a statistical mapping from input to output, driven by frequency of co-occurrence and semantic proximity. So you can train the things to lean toward doubtfulness (nobody will use them) or toward confidence (wow, it must be true if it's this certain). It's abusing the human tendency to anthropomorphize to sell a really shitty product.
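To make the mechanical point concrete, here's a toy sketch. This is a bigram counter standing in for a real model, and a `bias` dict standing in for fine-tuning; it is a drastic simplification of how LLMs and RLHF actually work, but it shows the core idea: "confident" output is just probability mass on confident-sounding continuations, and training shifts that mass.

```python
import random
from collections import Counter, defaultdict

# Toy next-token model: count bigram frequencies in a tiny corpus and
# sample continuations in proportion to how often they followed the
# previous word. (Hypothetical corpus, for illustration only.)
corpus = ("the answer is certainly x . the answer is probably x . "
          "the answer is certainly y .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def sample_next(word, bias=None):
    counts = following[word].copy()
    if bias:  # stand-in for fine-tuning: upweight a preferred tone
        for tok, w in bias.items():
            counts[tok] *= w
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Base model says "certainly" 2:1 over "probably", purely because
# that's what the training data contained.
print(sample_next("is"))
# "Trained for confidence": same mechanism, reweighted distribution.
print(sample_next("is", bias={"certainly": 10}))
```

The point of the sketch is that nothing in the mechanism distinguishes a true statement from a confident one; the tone is a property of the distribution, and the distribution is whatever training made it.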