Doc 05/27/2024 (Mon) 16:34 No.55003 del
(47.92 KB 384x679 GOmBAn_WcAA9t4_.jpg)
(283.28 KB 812x1200 qRNUI8SZ7IY.jpg)
>>55001
Kind of, we can profile what they are good at and bad at and get an idea of what model it's based on.
LLMs like ChatGPT are trained specifically to figure out language, not really to do problem solving, fact checking, math, or programming. They do "okay" in those fields if properly adjusted.
Working together with other algos or AI models, an LLM can act as an "interface" between human users (or human-made content) and those other algos/AIs.
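Something like this, as a rough sketch of the "interface" idea (llm_complete() and run_recommender() are made-up placeholders here, not any real API):

# The LLM only translates between natural language and a structured call
# to some other, non-LLM system, then translates the output back.
import json

def llm_complete(prompt: str) -> str:
    """Placeholder for whatever LLM API you'd actually be calling."""
    raise NotImplementedError

def run_recommender(query: dict) -> list:
    """Placeholder for a separate, non-LLM algorithm (e.g. a recommender)."""
    raise NotImplementedError

def handle_user_request(user_text: str) -> str:
    # Step 1: LLM turns free-form text into a structured query for the other algo.
    structured = llm_complete(
        'Extract a JSON query {"genre": ..., "year_min": ...} '
        "from this request: " + user_text
    )
    results = run_recommender(json.loads(structured))
    # Step 2: LLM turns the algo's raw output back into plain language for the user.
    return llm_complete(
        "Explain these results to the user in plain language: " + json.dumps(results)
    )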

The way those search engines are using it is not as an interface; they're repurposing an LLM to just produce "briefs" of whatever results are in the pipeline. But it's done in a purposefully confusing and bizarre way.
Instead of the AI being prompted "do a summary of these search results", it's being prompted "write a response to this search based on the results", which comes out insane like pic related.
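Roughly the difference between the two framings, as a sketch (the exact prompt wording is my guess, not what the search engines actually use):

# The sane framing: the LLM is explicitly asked to condense documents it was given.
def summary_prompt(query: str, results: list[str]) -> str:
    return (
        f"Summarize the following search results for the query '{query}'. "
        "Only restate what the results say:\n" + "\n".join(results)
    )

# The framing they seem to use: the LLM is asked to *answer* the query itself,
# with the results as loose context, so it happily fills gaps with whatever
# sounds plausible.
def answer_prompt(query: str, results: list[str]) -> str:
    return (
        f"Write a response to '{query}' based on these results:\n"
        + "\n".join(results)
    )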