G/O Media, a major online media company that runs publications including Gizmodo, Kotaku, Quartz, Jezebel, and Deadspin, has announced that it will begin a “modest test” of AI content on its sites.

The trial will include “producing just a handful of stories for most of our sites that are basically built around lists and data,” Brown wrote. “These features aren’t replacing work currently being done by writers and editors, and we hope that over time if we get these forms of content right and produced at scale, AI will, via search and promotion, help us grow our audience.”

  • ConsciousCode@beehaw.org · 1 year ago

    As someone working on LLM-based stuff, this is a terrible idea with current models and techniques unless they have a dedicated team of human editors to make sure the AI doesn’t go off the rails, to say nothing of the cruelty of firing people to save maybe a few hundred thousand dollars while taking a substantial drop in quality. LLMs can be very smart with proper prompting, but they’re also inconsistent and need a lot of hand-holding for anything requiring executive function or deliberation (like… writing an article meant to make a point). It might be possible with current models, but the field is too new and the techniques too crude to make this work without a few million dollars in R&D, which would probably be wasted anyway when new developments come out nearly every week.

    • Sina@beehaw.org · 1 year ago

      but the field is way too new and techniques too crude to make this work without a few million dollars in R&D

      I think AI is evolving so rapidly that, by the time they get anywhere with this on Gizmodo, the hand-holding might not be nearly as necessary.

      • ConsciousCode@beehaw.org · 1 year ago

        It’s hard to say. My intuition is that LLMs themselves won’t be capable of this until some trillion-parameter phase transition, maybe, and that more focus needs to go into the cognitive architectures surrounding them. Basically, automated hand-holding so they don’t forget what they’re supposed to be doing, the equivalent of the brain’s own feedback loop.

        The main issue is that executive function is such a weak signal in the training data that a model would probably have to reach ASI before it starts optimizing for it, so you either need specialized RL or algorithmic task prioritization.
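The “automated hand-holding” idea above can be sketched as a controller loop that re-injects the task goal into every model call so the model can’t drift from it. This is a minimal illustration, not anyone’s actual system; `call_llm` is a hypothetical stand-in for a real model API.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; echoes a stub "draft"
    # so the loop structure can be run and inspected.
    return f"draft based on: {prompt[:40]}"

def run_with_goal(goal: str, steps: int = 3) -> list[str]:
    """Re-state the goal before every step so the model never loses it —
    the external feedback loop the comment describes."""
    history: list[str] = []
    for i in range(steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Step {i + 1} of {steps}.\n"
            + "\n".join(history)  # prior outputs fed back as context
        )
        history.append(call_llm(prompt))
    return history

outputs = run_with_goal("write a listicle about game deals")
```

The point of the design is that the goal lives in the controller, not in the model’s memory, so each call is re-anchored regardless of how the previous output drifted.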