Oliver Roick's Weblog
Nobody reads this anyway.

Jim Nielsen:

With a search engine, fewer quality results means something. But an LLM is going to spit back a response regardless of the quality of training data. When the data is thin, a search engine will give you nothing. An LLM will give you an inaccurate something.

LLMs, at the moment, are the equivalent of an overconfident engineer. You know, the one who thinks they know everything, who always has an answer. It would never occur to that engineer that they simply don't know some things.

One way to build trust in large language models will be to teach them how to say no.
