• Ilandar@aussie.zone · 7 months ago

    All of those questions you asked it return authoritative-sounding answers, which you take at face value unless you spend extra time fact-checking them yourself.

    • Zworf@beehaw.org · 7 months ago

      Yeah, but accuracy isn’t a given with the other methods either. If I ask some randos on Reddit, I won’t get a perfect answer. If I google specs or reviews online, they are often biased too (or even outright fraudulent paid reviews).

      So yeah, for me the LLM output is more than good enough, with a bit of verification where necessary.

      I don’t really understand why people are suddenly hung up on holding LLMs to this lofty ideal of unbiased super-truth. Where did that requirement suddenly come from? It’s not realistic, and it’s not something we’ve ever had in the past.

      I feel the same about self-driving systems. People get all hung up when they crash once in a while, expecting them to be 100% perfect in all situations, while ignoring the possibility that they might already be a hell of a lot safer than human drivers.

      • Ilandar@aussie.zone · 7 months ago

        I’m sorry, but citing other examples of bad research practices does not magically make AI reliable. That’s whataboutism.