• Hirom@beehaw.org
    2 days ago

    According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done.

    Producing inaccurate technical advice, with a confident tone, at scale.

    If that LLM were an employee, it would get a formal reprimand, then be demoted or fired as it kept it up.

    • Tim@lemmy.snowgoons.ro
      2 days ago

      That sounds sweetly naive. “Producing inaccurate technical advice, with a confident tone, at scale” sounds like the perfect credentials for a career in consultancy.