Anthropomorphizing consumer LLMs is a bad idea.
Bing Copilot starts the majority of its responses with “Certainly!”
This is a bad idea.
When I ask a question whose premise is wrong, it starts the response with “Certainly!”
This makes me think my assumption is right.
Then I’m confused when the rest of the answer explains how I’m wrong.
The difference between those C allocators is how they allocate memory.
calloc is not the “CPU memory allocator.”
It’s a contiguous allocator that initializes every byte to zero.
Useful if you need it, but not fundamentally different from malloc.
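For the record, here’s a minimal sketch of the actual difference, assuming a standard hosted C environment: calloc takes a count and an element size and zeroes every byte, while malloc takes a single size and leaves the contents indeterminate.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t n = 8;

    /* malloc: one size argument, contents are indeterminate. */
    int *a = malloc(n * sizeof *a);

    /* calloc: count plus element size, every byte zeroed.
       Most implementations also check that n * sizeof *b
       doesn't overflow before allocating. */
    int *b = calloc(n, sizeof *b);

    if (!a || !b) {
        free(a);
        free(b);
        return 1;
    }

    /* malloc + memset gets you the same zeroed block. */
    memset(a, 0, n * sizeof *a);

    printf("a[0] = %d, b[0] = %d\n", a[0], b[0]); /* both print 0 */

    free(a);
    free(b);
    return 0;
}
```

The zero fill, and the overflow check on count times size, are about all that separates the two; the memory each returns is equally contiguous.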
The meat of the answer tells me exactly that.
But the reinforcement training meant to make Copilot feel friendlier has only succeeded in confusing me.