• recapitated@lemmy.world

    Computers follow instructions; engineers make mistakes. Now engineers have instructed computers to make huge guesses, and that is the new mistake.
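
    A toy sketch of the “huge guesses” point: a language model picks each next token by sampling from a probability distribution, so a deterministic machine can faithfully execute an instruction to guess. The vocabulary and probabilities below are made up for illustration, not from any real model.

    ```python
    import random

    # Made-up next-token distribution; a real model computes this from context.
    next_token_probs = {"Paris": 0.62, "Lyon": 0.21, "Berlin": 0.17}

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())

    # Sampling three times can give three different "answers" to the same
    # prompt: the computer isn't malfunctioning, it was told to guess.
    for _ in range(3):
        print(random.choices(tokens, weights=weights, k=1)[0])
    ```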

  • ramble81@lemm.ee

    Short of something like a floating-point bug, computers don’t make mistakes. They do exactly what they’re programmed to do. The issue is that the people developing them are fallible, and QC has gone out the window globally, so you’re going to get computers that operate only as well as the devs and QC do.
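
    To illustrate the floating-point caveat: results like the one below are not the computer misbehaving, they are IEEE 754 binary rounding working exactly as specified. A minimal, self-contained example:

    ```python
    import math

    # 0.1 and 0.2 have no exact binary representation, so the sum is
    # rounded; the machine executed its specification perfectly.
    print(0.1 + 0.2)          # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)   # False

    # The usual remedy: compare within a tolerance instead of exactly.
    print(math.isclose(0.1 + 0.2, 0.3))  # True
    ```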

    • stewsters@lemmy.world

      Perfectly good computers do make random bit-flip mistakes, and the smaller the hardware gets, the more issues like that we will see.

      Even with highly QA’d code like the Space Shuttle’s, they put in five redundant computers to reduce the chance that they would all fail (see the voting sketch after this comment).

      Not every piece of software is worth those resources, though. If your game crashes, just restart it.
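
      A minimal sketch of that redundancy idea, with three voters rather than the Shuttle’s five and a bit flip injected deliberately: run the same computation on several machines and take a majority vote, so a single flipped bit cannot corrupt the final answer.

      ```python
      from collections import Counter

      def compute(x: int) -> int:
          return x * 2 + 1

      def flip_bit(value: int, bit: int) -> int:
          # XOR with a one-hot mask flips exactly one bit, simulating
          # the random hardware fault described above.
          return value ^ (1 << bit)

      # Two healthy "computers" and one that suffered a bit flip.
      results = [compute(20), compute(20), flip_bit(compute(20), bit=3)]
      winner, votes = Counter(results).most_common(1)[0]
      print(results, "-> majority says:", winner)  # [41, 41, 33] -> 41
      ```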

  • MoogleMaestro@kbin.social

    Computers mostly don’t make mistakes; software makes mistakes.

    edit: Added “mostly” because I do suppose there are occasions where hardware-level mistakes can happen…

  • 7heo@lemmy.ml

    We spent decades educating people that “computers don’t make mistakes”, and now you want them to accept that they do?

    We filled them with shit, that’s what happened. We don’t even know how that shit works anymore.

    Let’s be honest here.

    • Humanius@lemmy.world

      I mostly agree with this distinction.

      However, if you are at the receiving end of a mistake made by either a classic algorithm or a machine learning algorithm, then you probably won’t care whether it was the computer or the programmer that made the mistake. In the end, the result is the same.

      “Computers make mistakes” is just a way of saying that you shouldn’t blindly trust whatever output the computer spits out.
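
      A minimal sketch of that “don’t blindly trust” stance, assuming the output is supposed to be a bounded integer (the checks here are hypothetical, not from any real system): validate the computer’s answer before acting on it, whoever’s mistake it might be.

      ```python
      def parse_quantity(raw_output: str) -> int | None:
          """Accept the output only if it is an integer in (0, 1000]."""
          try:
              value = int(raw_output.strip())
          except ValueError:
              return None  # not even a number: reject rather than guess
          if not 0 < value <= 1000:
              return None  # plausible-looking, but outside the allowed range
          return value

      for raw in ["42", " 7 ", "one hundred", "99999"]:
          print(repr(raw), "->", parse_quantity(raw))
      ```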

      • 7heo@lemmy.ml

        if you are at the receiving end of a mistake made by either a classic algorithm or a machine learning algorithm, then you probably won’t care whether it was the computer or the programmer that made the mistake

        I’m absolutely expecting corporations to get away with the argument that “they cannot be blamed for the outcome of a system that they neither control nor understand, and that is shown to work in X% of cases”. Or at least to spend billions trying to.

        And in case you think traceability doesn’t matter anyway, think again.

        IMHO it’s crucial that we defend the “computers don’t make mistakes” fact, for two reasons:

        1. Computers are defined as working through the flawless execution of rational logic. And somehow, I don’t see a “broader” definition working in favor of the public (i.e. less waste, more fault-tolerant systems), but strictly in favor of mega-corporations.
        2. If we let public opinion mix up “computers” with the LLMs that run on them, we will get even more restrictive, ultra-broad legislation against the general public. Think “3D printer ownership heavily restricted because some people printed guns with them”, but on an unprecedented scale. All we will have left are smartphones, because we are not their owners.
  • bobs_monkey@lemm.ee

    All the more reason that devs and admins need to take responsibility and NOT roll out “AI” solutions without backstopping them with human verification, or at a minimum ensure that the specific solutions they deploy are ready for production.
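
    A minimal sketch of that human backstop, assuming a hypothetical `ai_suggest` stand-in for the model call: the AI output is only a draft until a person explicitly approves it.

    ```python
    def ai_suggest(ticket: str) -> str:
        # Hypothetical placeholder for a real model call.
        return f"Proposed reply to: {ticket!r}"

    def handle(ticket: str) -> str:
        draft = ai_suggest(ticket)
        print(draft)
        # The human verification gate: nothing ships without approval.
        verdict = input("Approve this draft? [y/N] ")
        if verdict.strip().lower() != "y":
            raise RuntimeError("Draft rejected; escalate to a human agent")
        return draft

    if __name__ == "__main__":
        handle("Customer reports being double-billed")
    ```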

    It’s all cool and groovy that we have a new software stack that can remove a ton of labor from humans, but if it’s too error-prone, is it really useful? I get that the bean counters and suits are ready to boot the data-entry clerks and other low-level employees to boost their bottom line, but this will become a race to the bottom via blowing their collective loads too early.

    Though let’s be real, we already know that too many companies are going to do this and then try to absolve themselves of liability when shit goes to hell because of their shit.