The only reason I can think of is for more on-device AI. LLMs like ChatGPT are extremely hungry for RAM. There are some optimizations, like quantization, that squeeze them into a smaller memory footprint at the expense of accuracy/capability. Even some of the best phones out there today are barely capable of running a stripped-down generative AI. When they do, the output is nowhere near as good as when the model runs at full precision on a server.
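To give a rough sense of why RAM is the bottleneck, here's a back-of-envelope sketch of the memory needed just to hold a model's weights at different precisions. The parameter counts are illustrative examples, not any specific phone or product, and this ignores activations, KV cache, and runtime overhead:

```python
def weight_memory_gb(num_params_billions: float, bits_per_param: int) -> float:
    """Approximate memory for the weights alone, in gigabytes."""
    total_bytes = num_params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

for params in (7, 13, 70):
    fp16 = weight_memory_gb(params, 16)  # typical server-side precision
    q4 = weight_memory_gb(params, 4)     # aggressive 4-bit quantization
    print(f"{params}B params: ~{fp16:.0f} GB at fp16, ~{q4:.1f} GB at 4-bit")
```

Even with 4-bit quantization, a 7B-parameter model needs about 3.5 GB for weights alone, which is why it only just fits on a high-end phone while server deployments can afford the full-precision version.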
Keep in mind these things don't really know anything. They're good at saying things that seem to fit the situation because that's how they're trained. They are like that person you may know who thinks they know everything and will just say stuff that sounds right. The only difference is the AI is a lot more practiced than the human. Google's LLM may have some filtering done on the output to at least make sure that all of the books it recommends are real, though it wouldn't surprise me if there's a fake one in the list somewhere. These things are prone to "hallucinations," which some lawyers found out the hard way.