• db2@lemmy.world · +55 / −3 · 5 days ago

    I can’t wait for this bubble to blow up in all their dumb faces.

    • EvilBit@lemmy.world · +53 · 5 days ago

      For what it’s worth, “AI” in this context is probably not the content-stealing Generative AI that everyone is trying to cram everywhere it doesn’t belong. This is a much more legitimate application of a similar technology.

      I’m not mad about the idea of AI in radiology because it’s a really good fit. A human radiologist can’t compare a hundred similar slices and cross-correlate possible anomalies, whereas AI can. This improves detection and outcomes and is exactly where medical technology is supposed to help.

      That said, I don’t think we’ll replace radiologists across the board for a long time. This will be a very useful tool and will probably reduce the number of radiologists required and modify their roles significantly, but it’ll be more like how a single worker with editing software can do work that would have required a small team in the pre-digital days of film.

      • phutatorius@lemmy.zip · +26 · 5 days ago

        Yeah, it sounds more like ML. That’s a good thing. For one thing, it’s reproducible.

        LLMs are intrinsically unfit for use in any situation where human life or health is at stake.

        • EvilBit@lemmy.world · +10 / −1 · 5 days ago

          Exactly. People keep shoehorning Large Language Models into non-linguistic domains, and that’s dangerous. Human language, with respect to the training sets used, is inherently subjective and imperfect. Healthcare is very fault-intolerant.

        • Miaou@jlai.lu · +1 / −1 · 4 days ago

          Everything is reproducible, unless you wire your computer to a radioactive source.

      • db2@lemmy.world · +21 · 5 days ago

        The replacing part is the problem. Using a local system to help is fine, but it still requires humans who know what they’re doing and what they’re looking at.

        • iopq@lemmy.world · +3 / −1 · 5 days ago

          Sometimes it stops needing them, though. For example, human + AI systems in chess used to be better than either one in isolation, but chess AI improved so much that the human partner actually isn’t helping anymore.

        • EvilBit@lemmy.world · +1 · edited · 5 days ago

          It doesn’t replace any individual directly. It improves one person’s capability to the extent that there may be fewer needed to do a job. And that’s not a bad thing in my opinion, especially because it can improve the quality of that person’s work at the same time.

          Edit to elaborate: I am opposed to replacing humans with AI in general. AI is a tool. But if that tool can empower someone to do more and better work, then I’m not opposed. Using stolen intellectual property to replace creatives with an inherently non-creative slop machine is greedy and evil. Using machine learning trained on medical data sets to let a radiologist more comprehensively and deeply review a frankly overwhelming amount of data to better save lives? I’m cool with that. But I also think that, in line with my stance that AI is a tool, there will likely be a well-trained human operating these tools for a long time before radiologists cease to exist.

      • Grandwolf319@sh.itjust.works · +4 · 5 days ago

        That assumes it’s done additively.

        I think a lot of these AI automation promises come down to:

        Are you adding a tool, thereby increasing overall service quality (and cost)?

        Or are you trying to reduce cost, even if it means reducing service quality?

        The first one doesn’t take any job away and makes everything just a bit better but more expensive.

        The second one is a race to the bottom strategy that just comes down to capitalism doing its thing.

      • frongt@lemmy.zip · +2 / −2 · 5 days ago

        If it’s done properly, sure.

        Last time this was in the news, they found that AI had insanely good accuracy at identifying cancer! Until they realized it was because they had included the hospital info in the training data, so it was identifying “cancer” by noticing the scan came from a cancer treatment facility.
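That anecdote is a textbook case of label leakage. A toy sketch of the effect (hypothetical data and feature names, not the actual study): if a leaked “hospital” field perfectly tracks the label, a model keying off it scores perfectly even though the imaging feature is pure noise.

```python
import random

random.seed(0)

# Hypothetical dataset: each record is (image_feature, hospital, has_cancer).
# The image feature is pure noise, but the hospital field leaks the label:
# every positive case comes from the cancer treatment facility.
data = []
for _ in range(1000):
    has_cancer = random.random() < 0.5
    hospital = "oncology_center" if has_cancer else "general_hospital"
    image_feature = random.random()  # carries no diagnostic signal at all
    data.append((image_feature, hospital, has_cancer))

def accuracy(predict):
    """Fraction of records where the predictor matches the true label."""
    return sum(predict(x, h) == y for x, h, y in data) / len(data)

# "Model" that (inadvertently) keys off the leaked hospital field.
leaky = lambda x, h: h == "oncology_center"

# "Model" restricted to the image feature alone.
honest = lambda x, h: x > 0.5

print(f"with leak:    {accuracy(leaky):.2f}")   # perfect score
print(f"without leak: {accuracy(honest):.2f}")  # roughly chance
```

The leaky model looks flawless on this data but would collapse the moment it sees scans from a facility that treats everything, which is exactly the failure mode described above.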