• greysemanticist@lemmy.one

    This is a useful take: I too will use LLMs for search, but not to search for journal articles with data and evidence. LLMs too easily confabulate these.

    LLM-as-search is fantastic when you want a no-bullshit, statistically likely answer to what you’re looking for, or when you want an overview or an interactive tutorial.

      • Zworf@beehaw.org

        Not infallible truth. But very often it’s just for personal use.

        Some things I’ve asked it recently were like “Which of these 5 torch models is the smallest?”. Once I find the one I want, it’s easy to verify. Or “what does this Spanish expression mean?” or “how do I do …”.

        Not everyone uses it to try and write authoritative stuff. And Google is full of clickbaity “comparison sites” that are nothing but fake advertising.

          • Zworf@beehaw.org

            Yeah, but accuracy isn’t a given with the other methods either. If I ask some randos on reddit I won’t get a perfect answer, and if I google specs or reviews online they’re often biased too (or even literally fraudulent paid reviews).

            So yeah, for me the LLM output is more than good enough, with a bit of verification if necessary.

            I don’t really understand why people are suddenly hung up on holding LLMs to this lofty ideal of unbiased super-truth. Where did that requirement suddenly come from? It’s not realistic, and it’s not something we’ve ever had in the past.

            I feel the same about self-driving systems. People get all hung up when they crash once in a while, expecting them to be 100% perfect in all situations, while ignoring that they might already be a hell of a lot safer than human drivers.