• xoggy@programming.dev · 1 year ago

    Think about it though. When people say they want to “code AI” what they typically mean is they want to play with prompts and waste electricity on garbage models, not actually write any of the underlying models that power AI.

    • CeeBee@programming.dev · 1 year ago

      There’s a huge gap between “playing with prompts” and “writing the underlying models”, and that entire gap is all coding.

      • Phoenix@programming.dev · 1 year ago

        It really is big. From baby’s first prompting on a big corpo model, learning how tokens work, to setting up your own environment to run models locally (because hey, not everyone knows how to use git), to soft prompting, to training your own weights.
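
        Running a model locally can be as simple as a few lines these days (rough sketch, assuming the Hugging Face transformers library and a tiny example model like gpt2):

        ```python
        # Rough sketch: running a small model locally with the Hugging Face
        # transformers library; gpt2 is just a convenient tiny example model.
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")
        out = generator("Running models locally is", max_new_tokens=20)
        print(out[0]["generated_text"])
        ```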

        Nobody is realistically writing foundation models from scratch unless they work at Google or whatever, though.

        • mrnotoriousman@kbin.social · 1 year ago

          I’ve even heard people try and call slightly complex bots “AI” and claim they can code them (or their friend totally can lol). It’s infuriating and hilarious at the same time.

        • CeeBee@programming.dev · 1 year ago

          Not only that, but what I was aiming at was building applications that actually use the models. There are thousands upon thousands of internal tools and applications built that take advantage of various models, and they all require varying levels of coding skill.
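
          Even the thin end of that wedge is still real development. A rough sketch of what a lot of that internal tooling looks like (FastAPI here is just an example framework, and the model call is stubbed out):

          ```python
          # Sketch of an internal tool that wraps a model behind an HTTP endpoint.
          # FastAPI is just an example framework; run_model is a placeholder.
          from fastapi import FastAPI
          from pydantic import BaseModel

          app = FastAPI()

          class PredictRequest(BaseModel):
              text: str

          def run_model(text: str) -> str:
              # Placeholder: a real app would call whatever model is being served.
              return text.upper()

          @app.post("/predict")
          def predict(req: PredictRequest):
              return {"result": run_model(req.text)}
          ```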

          • Phoenix@programming.dev · 1 year ago

            True! Interfacing is also a lot of work, but I think that starts straying away from AI toward “how do we interact with it?” And let’s be real, plugging into OAI’s or Anthropic’s API is not that hard.
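
            It’s basically a few lines (rough sketch with the openai Python client; assumes an OPENAI_API_KEY in the environment, and the model name is just an example):

            ```python
            # Minimal sketch: calling a hosted chat model via the openai Python client.
            # Assumes OPENAI_API_KEY is set; the model name is just an example.
            from openai import OpenAI

            client = OpenAI()
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": "Say hello in one sentence."}],
            )
            print(response.choices[0].message.content)
            ```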

            Does remind me of a very interesting implementation I saw once though. A VRChat bot powered by GPT 3.5 with TTS that used sentiment classification to display the appropriate emotion for the text generated. You could interact with it directly via talking to it. Very cool. Also very uncanny, truth be told.
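
            The emotion part is really just a sentiment classifier run over the generated text before display, something like this (sketch using the Hugging Face transformers default sentiment pipeline; the happy/sad mapping is made up):

            ```python
            # Sketch: classify the sentiment of generated text to pick an avatar emotion.
            # Uses the default transformers sentiment model; the emotion mapping is made up.
            from transformers import pipeline

            classifier = pipeline("sentiment-analysis")
            result = classifier("I can't believe you actually came to visit me!")[0]
            emotion = "happy" if result["label"] == "POSITIVE" else "sad"
            print(result, "->", emotion)
            ```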

            All that is still in the realm of “fucking around” though.

            • CeeBee@programming.dev · 1 year ago

              I’m coming at it from the standpoint of integrating an AI model into a suite of applications. Which I have done. I have even trained a custom version of a model to fit our needs.

              Plugging into an API is more or less trivial (as you said), but that’s only a single aspect of an application. And that’s assuming that you’re using someone else’s API and not running and implementing the model yourself.
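
              Running it yourself is where the real work starts. Even the bare-bones version looks something like this (rough sketch with onnxruntime; the model file and input shape are placeholders):

              ```python
              # Sketch: running an exported model yourself instead of calling someone else's API.
              # Assumes onnxruntime and numpy; "model.onnx" and the input shape are placeholders.
              import numpy as np
              import onnxruntime as ort

              session = ort.InferenceSession("model.onnx")
              input_name = session.get_inputs()[0].name
              dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
              outputs = session.run(None, {input_name: dummy_input})
              print(outputs[0].shape)
              ```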

              • Phoenix@programming.dev · 1 year ago

                You can make it as complicated as you want, of course.

                Out of curiosity, what use-case did you find for it? I’m always interested to see how AI is actually applied in real settings.

                • CeeBee@programming.dev · 1 year ago

                  We weren’t using LLMs, but object detection models.

                  We were doing facial recognition, patron counting, firearm detection, etc.
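
                  Patron counting, for instance, mostly comes down to running a detector on each frame and counting the “person” boxes. A rough sketch (assuming torchvision and its pretrained COCO detector; the image path and confidence threshold are placeholders):

                  ```python
                  # Sketch: count people in a frame with a pretrained COCO detector.
                  # Assumes torchvision >= 0.13; image path and threshold are placeholders.
                  import torch
                  from torchvision.io import read_image
                  from torchvision.models.detection import (
                      fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
                  )

                  weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
                  model = fasterrcnn_resnet50_fpn(weights=weights).eval()
                  preprocess = weights.transforms()

                  image = read_image("frame.jpg")  # placeholder image path
                  with torch.no_grad():
                      pred = model([preprocess(image)])[0]

                  # COCO class 1 is "person"; keep confident detections only.
                  people = ((pred["labels"] == 1) & (pred["scores"] > 0.8)).sum().item()
                  print(f"patrons in frame: {people}")
                  ```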

    • dimath@ttrpg.network · 1 year ago

      Well yes, but people can also use TensorFlow and other AI tools without learning how to properly code, and still get the results they want. So the question “do you really need to know how to code?” isn’t such a stretch anymore.
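
      Case in point, a working image classifier is only a handful of mostly copy-pasted lines (sketch using the Keras API that ships with TensorFlow):

      ```python
      # Sketch: a complete MNIST classifier with almost no "real" coding,
      # using the high-level Keras API bundled with TensorFlow.
      import tensorflow as tf

      (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
      x_train, x_test = x_train / 255.0, x_test / 255.0

      model = tf.keras.Sequential([
          tf.keras.layers.Flatten(input_shape=(28, 28)),
          tf.keras.layers.Dense(128, activation="relu"),
          tf.keras.layers.Dense(10, activation="softmax"),
      ])
      model.compile(optimizer="adam",
                    loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])
      model.fit(x_train, y_train, epochs=1)
      print(model.evaluate(x_test, y_test))
      ```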

      • Phoenix@programming.dev · 1 year ago

        If you want to disabuse yourself of the notion that AI is close to replacing programmers for anything but the most mundane and trivial tasks, try having GPT-4 generate a novel implementation of moderate complexity. Watch it import mystery libraries that do exactly what you want the code to do, but that don’t actually exist.
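
        The output usually looks perfectly plausible right up until you try to pip install the import. Something like this (a made-up illustration of the failure mode; autoscheduler is exactly the kind of package that doesn’t exist):

        ```python
        # Made-up illustration of the failure mode: the code reads fine,
        # but the imported package is invented and pip can't install it.
        from autoscheduler import ConstraintSolver  # no such library

        solver = ConstraintSolver(workers=12, shifts_per_day=3)
        solver.add_rule("no_back_to_back_nights")
        print(solver.solve())
        ```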

        Yeah, you can do a lot without writing a single line of code. You can certainly interact with the models, because others who can code have already done the legwork. But someone still has to do it.