“Life-and-death decisions relating to patient acuity, treatment decisions, and staffing levels cannot be made without the assessment skills and critical thinking of registered nurses,” the union wrote in the post. “For example, tell-tale signs of a patient’s condition, such as the smell of a patient’s breath and their skin tone, affect, or demeanor, are often not detected by AI and algorithms.”

“Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care,” they added.

  • ArbitraryValue@sh.itjust.works · ↑52 ↓4 · 6 months ago

    My experience with the healthcare system, and especially hospitals, is that the people working there are generally knowledgeable and want to help patients, but they are also very busy and often sleep-deprived. A human may be better at medicine than an AI, but an AI that can devote attention to you is better than a human that can’t.

    (The fact that the healthcare system we have is somehow simultaneously very expensive, bad for medical professionals, and bad for patients is a separate issue…)

    • _lilith@lemmy.world · ↑32 · 6 months ago

      Those are all good points, but they assume hospitals will use AI in addition to the workers they already have. The fear here is that hospitals would use AI as an excuse to lay off medical staff, making the intentional understaffing even worse and decreasing the overall quality of care while absolutely burning through the medical staff who remain.

    • hoshikarakitaridia@lemmy.world · ↑8 · 6 months ago

      Isn’t there a way to do both for every patient as an additional information layer?

      The dangerous part is not the AI, but the idea that AI can REPLACE everything. And that’s usually on management.

    • SeaJ@lemm.ee · ↑5 ↓1 · 6 months ago

      It really depends on how well the dataset behind the AI is curated. If it’s shitty, the AI will amplify biases or hallucinate. An AI might be able to give a patient more attention, but if it’s providing incorrect information, no attention is better than a lot of attention.
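
      To put “how good is the dataset” in concrete terms, even a dirt-simple audit like the sketch below catches the worst cases before training starts. The file and column names are made up for illustration:

      ```python
      # Rough sketch of a pre-training dataset audit; the dataset and
      # column names are hypothetical, not from any real system.
      from collections import Counter
      import csv

      with open("training_data.csv") as f:
          rows = list(csv.DictReader(f))

      for col in ("diagnosis", "sex", "age_band"):
          counts = Counter(row[col] for row in rows)
          total = sum(counts.values())
          print(col, {k: f"{v / total:.0%}" for k, v in counts.items()})
      # If one label or group utterly dominates, expect the model to
      # amplify that bias exactly as described above.
      ```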

  • Mouselemming@sh.itjust.works · ↑39 · 6 months ago

    I’m reading a lot of comments from people who haven’t been in a hospital bed recently. AI has increasingly been used by insurance companies to deny needed treatment and by hospital management to justify spreading medical and support personnel even thinner.

    The whole point of AI is that it’s supposed to be able to learn, but what we’ve been doing with it is the equivalent of throwing a child into scrubs and letting them do heart surgery. We should only be allowing it to monitor the care and outcomes as done by humans, in order to develop a much more substantial real-world database than it’s presently working from.
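
    What’s described here already has a name in ML practice: “shadow mode,” where the model sees the same inputs as the clinician but its output is only logged, never acted on. A minimal sketch, with every name invented for illustration:

    ```python
    # "Shadow mode" sketch: record the model's call next to the human's
    # to build a real-world comparison set; act on neither automatically.
    # All names (Vitals, model_predict, ...) are hypothetical.
    import csv
    import datetime
    from dataclasses import dataclass

    @dataclass
    class Vitals:
        heart_rate: int
        spo2: float

    def model_predict(v: Vitals) -> str:
        """Stand-in for a real triage model."""
        return "urgent" if v.spo2 < 0.90 or v.heart_rate > 130 else "routine"

    def log_shadow(patient_id: str, v: Vitals, clinician_label: str) -> None:
        with open("shadow_log.csv", "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.datetime.now().isoformat(),
                patient_id,
                model_predict(v),   # logged for later grading only
                clinician_label,    # the decision that actually drove care
            ])

    log_shadow("pt-001", Vitals(heart_rate=88, spo2=0.97), "routine")
    ```

    Only when a log like that shows the model matching or beating the humans would it earn any bigger role.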

  • EnderMB@lemmy.world · ↑28 ↓1 · 6 months ago

    Way back in 2010 I did some paper reading at university on AI in healthcare, and even back then there were dedicated AI systems that could outperform many healthcare workers in the US and Europe.

    Many of the issues were not in performance, but in liability. If a single person is liable, that’s fine, but what happens if a computer program provides an incorrect dosage to an infant, or a procedure with two possible options goes wrong where a human would have chosen the other?

    The problems were also framed as observational. The AI would get cases with a clear solution right far more often, but it would notice far less. The field basically reached the same conclusion many other industries have: AI can produce some useful tools to help humans, but using it to replace humans results in fuck-ups that make the hospital (more notably, its leaders) liable.

    • HubertManne@kbin.social · ↑6 · 6 months ago

      Yes. AI is great as a helper or assistant, but whatever it does always has to be double-checked by a human. All the same, humans can get tired or careless, so it’s not bad having it around as long as it’s purely supplemental.

  • NOT_RICK@lemmy.world · ↑17 ↓1 · 6 months ago

    While I agree AI isn’t a replacement for skilled human nurses, there are a ton of valid implementations of AI tech in healthcare. I appreciate that they’re just advocating for collaboration with the nursing unions on how this tech is developed and implemented, instead of fighting it outright.

    • Ghostface@lemmy.world · ↑6 ↓1 · 6 months ago

      Having worked in this space in the past, on the document and imaging processing side, I was unaware that AI was being used in monitoring.

      The danger I see, from the technology side through to the end user, is companies relying on the model’s output and hiring accordingly, versus skilled nurses using their knowledge and intuition to interpret the AI’s data and responses.

      But from a purely processing standpoint, AI is extremely beneficial; the risk is that the tribal knowledge of why we use it in the first place gets lost.

  • Sterile_Technique@lemmy.world · ↑8 ↓5 · 6 months ago

    At least in the US, the healthcare system is fucked-and-a-half with staffing issues alone. With boomers on the way out of the workforce and into the fucking ER, we’re in trouble.

    If ‘AI’ algorithms can help manage the dumpster fire, bring it on. Growing pains are expected, but that doesn’t mean we shouldn’t explore its potential.

    • bobs_monkey@lemm.ee · ↑8 · 6 months ago

      I’d be all about having an AI system run analysis on data like test results and vitals, and use the output for suggestions: diagnosis, a suggested treatment course, etc. These tools should be suggestive and assistive ONLY, with an actual human making the final call. In no way should we be using AI tech to replace qualified healthcare personnel, especially doctors and nurses.
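
      In code terms, “suggestive and assistive ONLY” is just a hard gate between the model’s output and anything that touches the chart. A toy sketch; none of this is a real EHR API:

      ```python
      # Hypothetical human-in-the-loop gate: a suggestion is inert until
      # a named clinician signs off on it.
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Suggestion:
          patient_id: str
          text: str                      # e.g. "consider chest X-ray"
          approved_by: Optional[str] = None

      def apply_order(s: Suggestion) -> None:
          if s.approved_by is None:      # the hard gate
              raise PermissionError("no clinician sign-off; not actionable")
          print(f"order placed for {s.patient_id}, approved by {s.approved_by}")

      s = Suggestion("pt-042", "consider chest X-ray given SpO2 trend")
      # apply_order(s)                   # would raise: no human in the loop yet
      s.approved_by = "rn_jdoe"          # the actual human making the final call
      apply_order(s)
      ```

      The point being that the sign-off isn’t a UI nicety: without it, the code path to act simply doesn’t exist.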

      • Maeve@kbin.social · ↑3 · 6 months ago

        Sure, but this is the same company that lobbied Nixon to institute HMOs rather than public health care.

        • bobs_monkey@lemm.ee · ↑1 · 6 months ago

          Dude, all of these companies are shit stains, and as much as I hate to say it, it’ll probably be a while before we get universal healthcare here in the States, so anything that relieves the problems of the current system should at least be looked at. AI has the potential to help bridge that gap by reducing costs, which could ultimately sway public opinion toward a single-payer system, while also reducing the workloads of critically understaffed units so they can actually spend more time per patient and determine proper diagnoses and treatments without making rushed decisions.

          The problem is that the allocation of these potential savings is determined by for-profit asshats, so we’ll see how that goes.

      • Sterile_Technique@lemmy.world · ↑2 · 6 months ago

        We should be using it to its potential, which is a deliberately vague statement cuz I have no idea what its potential is; but I’d guess there’s some overlap between what it’s capable of and what nurses and doctors do. Shifting their focus from those areas to things that more urgently require their attention is a good thing, provided we’re using algorithms for things that are actually appropriate for algorithms.

        I know a lot of folks don’t trust AI, but what we’re calling “AI” today is basically just a spell-checker on steroids, so using it effectively includes knowing when to say “I know you want that word to change to ‘deer’, but I legit need it to say ‘dear’” and hitting that ignore button.

        …so yea basically what you said. Human makes final call. At least for now; if we ever get actual AI (the thinky sentient kind we see in sci-fi) then we can start delegating more and more advanced interpretive tasks to it as it demonstrates its ability to not fuck them up (or at least, fuck them up less frequently than its human counterparts).

        • QuadratureSurfer@lemmy.world · ↑3 · 6 months ago

          I mostly agree with what you’ve said except for this:

          but what we’re calling “AI” today is basically just a spell-checker on steroids,

          That’s only somewhat true if you’re talking about LLMs like ChatGPT.

          AI itself has become a much broader term than it used to be. There are a lot of different kinds of AI out there. Generative AI like text generation (LLMs), image generation (upscaling, or creating images from scratch), or music generation (Suno). Computer Vision is another kind which can include image recognition, object detection, facial recognition, etc. And there are others beyond this.

          The AI we’re talking about here falls more under computer vision, which includes image recognition. In this case the machine learning model has been trained on massive numbers of images, like MRIs or CT scans.
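
          For anyone curious what that looks like mechanically, here’s a toy version using an off-the-shelf ImageNet classifier. A real radiology model would be trained on labeled scans, but the “image in, label and confidence out” shape is the same:

          ```python
          # Toy computer-vision inference; a generic pretrained classifier
          # stands in for a model actually trained on MRIs or CT scans.
          import torch
          from torchvision.models import resnet18, ResNet18_Weights
          from PIL import Image

          weights = ResNet18_Weights.DEFAULT
          model = resnet18(weights=weights).eval()
          preprocess = weights.transforms()      # resize/normalize for this model

          img = Image.open("scan.png").convert("RGB")   # placeholder image path
          with torch.no_grad():
              probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)

          conf, idx = probs.max(dim=1)
          print(weights.meta["categories"][idx.item()], f"{conf.item():.1%}")
          ```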

          • Sterile_Technique@lemmy.world · ↑2 · 6 months ago

            Fair enough. It’s vague enough that there’s some subjectivity at play here… in my brain, it’s broken into two categories: 1) algorithmic stuff that includes EVERY example of “AI” currently at our disposal, with “AI” being more of a marketing term than an actual description of what it is; and 2) intelligence that’s artificial, which doesn’t exist yet, but is theoretically possible and will most likely manifest as a creation of something from category 1, a point that is dubbed the “singularity” that marks the start of a snowball of self-improvement that eventually matches and surpasses what our own noggins are capable of in every way. And we kinda just hope #2 develops in a way that’s compatible with our own survival and comfort.

            My money’s on climate collapse or nuclear explosions or all of the above wiping us out before we make it to #2, but I guess we’ll see.

  • FaceDeer@fedia.io · ↑5 ↓9 · 6 months ago

    “Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care,” they added.

    You “won’t accept” algorithms? What if those algorithms are demonstrably doing a better job than the nurses?

    As a patient I want whatever works best for doing diagnoses and whatnot. If that’s humans then let it be humans. If it’s AI, then let it be AI.

    • DontMakeMoreBabies@kbin.social · ↑2 ↓4 · edited · 6 months ago

      Exactly! A BSN is a safety degree if you’re just barely not a moron. Who cares about their feelings? Give me what works.

      Yes, smart RNs exist, but they eventually self-select out to become PAs or NPs, or otherwise specialize.

      • Maeve@kbin.social · ↑1 · 6 months ago

        I’ve known some damned fine RNs who stayed with hospice services or the ED out of a need for empathy and compassion. That said, I prefer an FNP or PA to an MD. I’ve also known some nasty RNs who were just in it for the check, and they made patients and every other employee miserable on their shifts.

  • BoofStroke@sh.itjust.works · ↑3 ↓11 · edited · 6 months ago

    Huh. This is how I feel about LPNs who think they’re doctors, too. I think in most cases I’d prefer the AI.

    • Maeve@kbin.social · ↑3 ↓1 · edited · 6 months ago

      I’ve worked in health care off and on, in some capacity, for a long time. I know LPNs who are more knowledgeable than plenty of doctors. As I got older, it dawned on me that it’s really about the individual. If someone is in it as “just a job,” they tend to think the degree makes them know it all, which means diddly without empathy and compassion.