• The Picard Maneuver@lemmy.world · 5 months ago

    These are the subtle types of errors that are much more likely to cause problems than when it tells someone to put glue in their pizza.

  • Phegan@lemmy.world · 5 months ago

    It blows my mind that these companies think AI is good as an informative resource. The whole point of a generative text AI is to make things up based on its training data. It doesn’t learn, it generates. It’s all made up, yet they want to slap it on a search engine as though it provides factual information.

    • hellofriend@lemmy.world · 5 months ago

      It’s like the difference between being given a grocery list from your mum and trying to remember what your mum usually sends you to the store for.

      • deadbeef79000@lemmy.nz · 5 months ago

        … Or calling your aunt and having her yell things at you that she thinks might be on your Mum’s shopping list.

        • Malfeasant@lemmy.world · 5 months ago

          That could at least be somewhat useful… It’s more like grabbing some random stranger and asking what their aunt thinks might be on your mum’s shopping list.

    • platypus_plumba@lemmy.world · 5 months ago

      It really depends on the type of information you’re looking for. Anyone who understands how LLMs work will understand when they’ll get a good overview.

      I usually see the results as quick summaries from an untrusted source. Even if they aren’t exact, they can help me get perspective. Then I know what information to verify if something relevant was pointed out in the summary.

      Today I searched something like “Are owls endangered?”. I knew I was about to get a great overview because it’s a simple question. After getting the summary, I just went into some pages and confirmed what the summary said. The summary helped me know what to look for even if I didn’t trust it.

      It has improved my search experience… But I do understand that people would prefer it to be 100% accurate, because it is a search engine. If you refuse to tolerate inaccurate results or you feel your search experience is worse, you can just disable it. Nobody is forcing you to keep it.

      • rogue_scholar@eviltoast.org · 5 months ago

        you can just disable it

        This is not actually true. Google re-enables it and does not have an account setting to disable AI results. There is a URL flag that can do this, but it’s not documented and requires a browser plugin to do it automatically.

      • RageAgainstTheRich@lemmy.world · 5 months ago

        I think the issue is that most people aren’t that bright and will not verify information like you or me.

        They already believe every facebook post or ragebait article. This will sadly only feed their ignorance and solidify their false knowledge of things.

        • platypus_plumba@lemmy.world · 5 months ago

          The same people who didn’t understand that Google’s ranking algorithm promotes sites based on SEO regardless of the accuracy of their content, so they would trust the first page.

          If people don’t understand the tools they are using and don’t double-check information from single sources, I think it’s kinda on them. I have a dietician friend, and I usually get back to him after doing my “Google research” for my diets… so much misinformation, even without an AI overview. Search engines are just best-effort sources of information. Anyone using Google for anything of actual importance is using the wrong tool; it isn’t a scholarly or research search engine.

  • Dultas@lemmy.world · 5 months ago

    Could this be grounds for CVS to sue Google? Seems like this could harm business if people think CVS products are less trustworthy. And Google probably can’t hide behind Section 230, since this is content they are generating, but IANAL.

    • CosmicTurtle0@lemmy.dbzer0.com · 5 months ago

      Iirc, in cases where the central complaint was AI, ML, or other black-box technology, the company in question was never held responsible, because “We don’t know how it works”. The AI surge we’re seeing now is likely a consequence of those decisions and the crypto crash.

      I’d love to see CVS try to push a lawsuit, though.

      • chiliedogg@lemmy.world · 5 months ago

        “We don’t know how it works but released it anyway” is a perfectly good reason to be sued when you release a product that causes harm.

      • Natanael@slrpnk.net · 5 months ago

        In Canada there was a company using an LLM chatbot that had to uphold a claim the bot had made to one of their customers. So there’s precedent for forcing companies to take responsibility for what their LLMs say (at least if they’re presenting it as trustworthy and representative).

        • LordPassionFruit@lemm.ee · 5 months ago

          This was with regards to Air Canada and its LLM that hallucinated a refund policy, which the company argued they did not have to honour because it wasn’t their actual policy and the bot had invented it out of nothing.

          An important side note is that one of the cited reasons the Court ruled in favour of the customer is that the company did not disclose that the LLM wasn’t the final say on its policy, and that a customer should confirm with a representative before acting on the information. This means the legal argument wasn’t “the LLM is responsible” but rather “the customer should be informed that the information may not be accurate”.

          I point this out because I’m not so sure CVS would have a clear cut case based on the Air Canada ruling, because I’d be surprised if Google didn’t have some legalese somewhere stating that they aren’t liable for what the LLM says.

          • shinratdr@lemmy.ca · 5 months ago

            But those end up being the same in practice. If you have to put up a disclaimer that the info might be wrong, then who would use it? I can get the wrong answer or unverified hearsay anywhere. The whole point of contacting the company is to get the right answer, or at least one the company is forced to stick to.

            This isn’t just minor AI growing pains, this is a fundamental problem with the technology that causes it to essentially be useless for the use case of “answering questions”.

            They can slap as many disclaimers as they want on this shit; but if it just hallucinates policies and incorrect answers it will just end up being one more thing people hammer 0 to skip past or scroll past to talk to a human or find the right answer.

  • Tekkip20@lemmy.world · 5 months ago

    I don’t bother using things like Copilot or other AI tools like ChatGPT. I mean, they’re pretty cool for what they CAN give you correctly, and the new demo floored me in awe.

    But, I prefer just using the image generators like DALL E and Diffusion to make funny images or a new profile picture on steam.

    But this example here? Good god I hope this doesn’t become the norm…

    • velvetThunder@lemmy.zip · 5 months ago

      These text-generation LLMs are good at generating text. I use them to write better emails or listings or something.

      • valkyre09@lemmy.world · 5 months ago

        I had to do a presentation for work a few weeks ago. I asked co-pilot to generate me an outline for a presentation on the topic.

        It spat out a heading and a few sections with details on each. It was generic enough, but it gave me the structure I needed to get started.

        I didn’t dare ask it for anything factual.

        Worked a treat.

  • StaySquared@lemmy.world · 5 months ago

    Sadly there’s really no other search engine with a database as big as Google. We goofed by heavily relying on Google.

  • suction@lemmy.world · 5 months ago

    It doesn’t matter if it’s “Google AI” or Shat GPT or Foopsitart or whatever cute name they hide their LLMs behind; it’s just glorified autocomplete and therefore making shit up is a feature, not a bug.

    • interdimensionalmeme@lemmy.ml · 5 months ago

      Making shit up IS a feature of LLMs. It’s crazy to use it as search engine. Now they’ll try to stop it from hallucinating to make it a better search engine and kill the one thing it’s good at …

    • Johanno@feddit.de · 5 months ago

      ChatGPT was much higher quality a year ago than it is now.

      It could be very accurate. Now it’s hallucinating all the time.

      • Lad@reddthat.com · 5 months ago

        I was thinking the same thing. LLMs have suddenly got much worse. They’ve lost the plot lmao

          • Ben Hur Horse Race@lemm.ee · 5 months ago

            I’m not sure that’s definitely true… my sense is that the AI money/arms race has made them push out new/more as fast as possible so they can be the first and get literally billions in investment capital

            • Cringe2793@lemmy.world · 5 months ago

              Maybe. I’m sure there’s more than one reason. But the negativity people have for AI is really toxic.

              • Ben Hur Horse Race@lemm.ee · 5 months ago

                is it?

                nearly everyone I speak to about it (other than one friend I have who’s pretty far on the spectrum) concurs that no one asked for this. few people want any of it, it’s consuming vast amounts of energy, it’s being shoehorned into programs like Skype and Adobe Reader where no one wants it, it’s very, very soon to become mandatory in OSes like Windows, iOS and Android, it already threatens election integrity (most notably in India), and it’s being used to harass individuals with deepfake porn etc.

                the ethics board at OpenAI essentially got dissolved and replaced by people interested only in the fastest expansion and rollout possible, to beat the competition and maximize their capital gains…

                …also AI “art”, which is essentially taking everything a human has ever made, shredding it into confetti and reconstructing it in the shape of something resembling the prompt, is starting to flood image search with its grotesque human-mimicking outputs, like things with melting, split pupils and 7 fingers…

                you’re saying people should be positive about all this?

                • Cringe2793@lemmy.world · 5 months ago

                  You’re cherry picking the negative points only, just to lure me into an argument. Like all tech, there’s definitely good and bad. Also, the fact that you’re implying you need to be “pretty far on the spectrum” to think this is good is kinda troubling.

  • hydroptic@sopuli.xyz · 5 months ago

    And this technology is what our executive overlords want to replace human workers with, just so they can raise their own compensation and pay the remaining workers even less

    • loie@lemmy.world · 5 months ago

      So much this. The whole point is to annihilate entire sectors of decent paying jobs. That’s why “AI” is garnering all this investment. Exactly like Theranos. Doesn’t matter if their product worked, or made any goddamned sense at all really. Just the very idea of nuking shitloads of salaries is enough to get the investor class to dump billions on the slightest chance of success.

      • TrickDacy@lemmy.world · 5 months ago

        Exactly like Theranos

        Is it though? This one is an idea that can literally destroy the economic system. Seems different to ignore that detail.

        • krashmo@lemmy.world · 5 months ago

          Current gen AI can’t come close to destroying the economy. It’s the most overhyped technology I’ve ever seen in my life.

    • Hotzilla@sopuli.xyz · 5 months ago

      I’m starting to think Google put this up on purpose to destroy people’s opinion of AI. They are so far behind OpenAI that they would benefit from it.

      • hydroptic@sopuli.xyz · 5 months ago

        I doubt there’s any sort of 4D chess going on; more likely the whole thing was brought about by short-sighted executives who feel like they have to do something to show that they’re still in the game, exactly because they’re so far behind "Open"AI

      • blackbelt352@lemmy.world · 5 months ago

        Ignoring the blatant eugenics of the very first scene, I’d rather live in the idiocracy world because at least the president with all of his machismo and grandstanding was still humble enough to put the smartest guy in the room in charge of actually getting plants to grow.

  • Sam_Bass@lemmy.world · 5 months ago

    Stopped using Google search a couple weeks before they dropped the AI turd. Glad I did.

    • Kiernian@lemmy.world · 5 months ago

      What do you use now?

      I work in IT, and between the advent of “agile” methodologies (meaning lots of documentation is out of date as soon as it’s approved for release) and AI results more likely to be invented than regurgitated from forum posts, it’s getting progressively more difficult to find relevant answers to weird one-off questions. This would be less of a problem if everything was open source and we could just look at the code, but most of the vendors corporate America uses don’t subscribe to that set of values, because “Mah intellectual properties” and stuff.

      Couple that with tech sector cuts and outsourcing of vendor support and things are getting hairy in ways AI can’t do anything about.

      • capital@lemmy.world · 5 months ago

        Not who you asked but I also work IT support and Kagi has been great for me.

        I started with their free trial set of searches and that solidified it.

      • skillissuer@discuss.tchncs.de · 5 months ago

        because the sooner corporate meatheads clock that this shit is useless and doesn’t bring that hype money, the sooner it dies, and that’d be a good thing, because making shit up doesn’t require burning a square km of rainforest per query

        not that we need any of that shit anyway. the only thing these plagiarism machines seem to be okayish at is mass-manufacturing spam and disinfo, and while some Adderall-fueled middle managers will try to replace real people with it, it will fall flat on this task (not that it ever stopped them)

        • lud@lemm.ee · 5 months ago

          I think it sounds like there are huge gains to be made in energy efficiency instead.

          Energy costs money so datacenters would be glad to invest in better and more energy efficient hardware.

            • lud@lemm.ee · 5 months ago

              It can be helpful if you know how to use it, though.

              I don’t use it myself a lot, but quite a few at work use it and are very happy with ChatGPT.

      • VirtualOdour@sh.itjust.works · 5 months ago

        Because he wants to stop it from helping impoverished people live better lives, and all the other advantages, simply because it didn’t exist when he was young and change scares him

        • nomous@lemmy.world · 5 months ago

          Holy shit your assumption says a lot about you. How do you think AI is going to “help impoverished people live better lives” exactly?

          • VirtualOdour@sh.itjust.works · 5 months ago

            It’s fascinating to me that you genuinely don’t know. It shows not only that you have no active interest in working to benefit impoverished communities, but that you have no real knowledge of the conversations surrounding AI. Yet here you are, throwing out your opinion with the certainty of a zealot.

            If you had any interest or involvement in any aid or development project relating to the global south, you’d be well aware that one of the biggest difficulties for those communities is access to information and education in their first language, so a huge benefit of natural-language computing would be very obvious to you.

            Also, if you followed anything but knee-jerk anti-AI memes to try and develop an understanding of this emerging tech, you’d without any doubt have been exposed to the endless talking points on this subject. https://oxfordinsights.com/insights/data-and-power-ai-and-development-in-the-global-south/ is an interesting piece covering some of the current work happening on the language-barrier problems I mentioned AI helping with.

            • nomous@lemmy.world · 5 months ago

              he wants to stop it from helping impoverished people live better lives and all the other advantages simply because it didn’t exist when.he was young and change scares him

              That’s the part I take issue with: the weird, probably-projecting assumption about people.

              Have fun with the holier-than-thou moral high ground attitude about AI though, shit’s laughable.

              • VirtualOdour@sh.itjust.works · 5 months ago

                I think you misunderstood the context. I’m not really saying that he actively wants to stop it helping poor people; I’m saying that he doesn’t care about or consider the benefits to other people, simply because he’s entirely focused on his own emotional response, which stems from a fear of change.

      • Sweetpeaches69@lemmy.world · 5 months ago

        Because they will only be used by corporations to replace workers, furthering class divide and ultimately leading to a collapse of countries and economies. Jobs will be taken, and there will be no resources for the jobless. The future is darker than bleak should LLMs and AI be allowed to be used indiscriminately by corporations.

        • JamesFire@lemmy.world · 5 months ago

          We should use them to replace workers, letting everyone work less and have more time to do what they want.

          We shouldn’t let corporations use them to replace workers, because workers won’t see any of the benefits.

          • pyre@lemmy.world · 5 months ago

            that won’t happen. technological advancement doesn’t allow you to work less; it allows you to work less for the same output. so you work the same hours but the expected output changes, and your productivity goes up while your wages stay the same.

            • JamesFire@lemmy.world · 5 months ago

              technological advancement doesn’t allow you to work less,

              It literally has (When forced by unions). How do you think we got the 40-hr workweek?

              • mriormro@lemmy.world · 5 months ago

                That wasn’t technology. It was the literal spilling of blood of workers and organizers fighting and dying for those rights.

                • JamesFire@lemmy.world · 5 months ago

                  And you think they just did it because?

                  They obviously thought they deserved it, because… technology reduced the need for work hours, perhaps?

                • JamesFire@lemmy.world · 5 months ago

                  Unions fought for it after seeing the obvious effects of better technology reducing the need for work hours.

  • Nurse_Robot@lemmy.world · 5 months ago

    I always try to replicate these results, because the majority of them are fake. For this one in particular I don’t get any AI results, which is interesting, but inconclusive

    • andyburke@fedia.io · 5 months ago

      How would you expect to recreate them when the models are given random perturbations such that the results usually vary?
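(For anyone unfamiliar with what those “random perturbations” are: generation typically samples each next token from a probability distribution instead of always taking the most likely one, so the same prompt can yield different answers on different runs. A toy sketch of temperature sampling, not any actual model’s code:)

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Toy next-token sampling: turn raw scores into a probability
    distribution (softmax with temperature) and draw one index from it.
    Higher temperature flattens the distribution, so repeated runs
    pick different tokens; near-zero temperature approaches argmax."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Draw from the cumulative distribution.
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Same scores, repeated draws: all three tokens show up over many runs.
logits = [2.0, 1.5, 0.3]
picks = {sample_with_temperature(logits) for _ in range(200)}
print(picks)
```

This is why two people running the same query can see different AI summaries, and why a screenshot is hard to reproduce exactly.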

      • Nurse_Robot@lemmy.world · 5 months ago

        The point here is that this is likely another fake image, meant to get the attention of people who quickly engage with everything anti AI. Google does not generate an AI response to this query, which I only know because I attempted to recreate it. Instead of blindly taking everything you agree with at face value, it can behoove you to question it and test it out yourself.

    • Ace! _SL/S@ani.social · 5 months ago

      Why go out of your way instead of just using a proper search engine? Google has been getting worse and worse for the past 4 or 5 years

      • Snot Flickerman@lemmy.blahaj.zone · 5 months ago

        Can you tell folks here what these “proper search engines” are? Because I can think of like five off the top of my head that all have issues similar to Google’s. Yes, that includes the paid search engine Kagi.

        Almost all of them have similar issues except the self-hosted ones, which are a little beyond most people’s basic capabilities.

        • hersh@literature.cafe · 5 months ago

          DuckDuckGo is an easy first step. It’s free, publicly available, and familiar to anyone who is used to Google. Results are sourced largely from Bing, so there is second-hand rot, but IMHO there was a tipping point in 2023 where DDG’s results became generally more useful than Google’s or Bing’s. (That’s my personal experience; YMMV.) And they’re not putting half-assed AI implementations front and center (though they have some experimental features you can play with if you want).

          If you want something AI-driven, Perplexity.ai is pretty good. Bing Chat is worth looking at, but last I checked it was still too hallucinatory to use for general search, and the UI is awful.

          I’ve been using Kagi for a while now and I find its quick summaries (which are not displayed by default for web searches) much, much better than this. For example, here’s what Kagi’s “quick answer” feature gives me with this search term:

          Room for improvement, sure, but it’s not hallucinating anything, and it cites its sources. That’s the bare minimum anyone should tolerate, and yet most of the stuff out there falls wayyyyy short.

          • GreatAlbatross@feddit.uk · 5 months ago

            I stopped recommending kagi on lemmy after the umpteenth person accused me of shilling.

            Maybe I should take a screenshot of the £20 leaving my account each month!

  • dkc@lemmy.world · 5 months ago

    I wonder if all these companies rolling out AI before it’s ready will have a widespread impact on how people perceive AI. If you learn early on that AI answers can’t be trusted will people be less likely to use it, even if it improves to a useful point?

    • RGB3x3@lemmy.world · 5 months ago

      Personally, that’s exactly what’s happening to me. I’ve seen enough that AI can’t be trusted to give a correct answer, so I don’t use it for anything important. It’s a novelty like Siri and Google Assistant were when they first came out (and honestly still are) where the best use for them is to get them to tell a joke or give you very narrow trivia information.

      There must be a lot of people who are thinking the same. AI currently feels unhelpful and wrong, we’ll see if it just becomes another passing fad.

    • Psythik@lemmy.world · 5 months ago

      To be fair, you should fact check everything you read on the internet, no matter the source (though I admit that’s getting more difficult in this era of shitty search engines). AI can be a very powerful knowledge-acquiring tool if you take everything it tells you with a grain of salt, just like with everything else.

      This is one of the reasons why I only use AI implementations that cite their sources (edit: not Google’s), cause you can just check the source it used and see for yourself how much is accurate, and how much is hallucinated bullshit. Hell, I’ve had AI cite an AI generated webpage as its source on far too many occasions.

      Going back to what I said at the start, have you ever read an article or watched a video on a subject you’re knowledgeable about, just for fun to count the number of inaccuracies in the content? Real eye-opening shit. Even before the age of AI language models, misinformation was everywhere online.

  • qx128@lemmy.world · 5 months ago

    Are AI products released by a company liable for slander? 🤷🏻

    I predict we will find out in the next few years.

    • dm_me_ufo_pics@lemmynsfw.com · 5 months ago

      Gmail has something like it too, with the summary bit at the top of Amazon order emails. Had one the other day that said I ordered 2 new phones, which freaked me out. It was because there were ads for phones in the order-receipt email.