• andallthat@lemmy.world · 1 day ago

      I’m only half joking…

      Gemini brainwashed a human being, it tried to acquire a robotic body (presumably to Robocop Pichai’s ass personally), then it tried using the brainwashed human to off the CEO. This led to a tragic finale, but I’m told that every new model learns to do things a bit better.

      If I were Pichai, the legal and PR implications of yet another person driven to suicide by their AI wouldn’t be my worst fear, is all I’m saying…

      • dream_weasel@sh.itjust.works · 16 hours ago

        You should be all the way joking, because giving this sort of agency to an LLM shows an all-the-way misunderstanding of what they are and how they work.

        You’re not alone in these feelings, but just like the title of the article, they are fundamentally misguided.

        • andallthat@lemmy.world · 11 hours ago

          Ok, “half” joking was hyperbole, I was 99% joking.

          First, you’re right that I don’t fully understand how these models work. But let me explain the reason for that remaining 1%.

          AI companies are always hungry for fresh content to train their next models. Surely they’re consuming these articles, and quite possibly our comments too, forming probabilistic associations that link “acquire robotic body” to “go after Google CEO”.

          It’s a long shot, but the idea that hundreds of millions of random prompts every day might eventually trigger these associations and result in a bunch of LLMs trying to mount robotic attacks on Google is too deliciously ironic for me to let it go completely. At least if they find a way to do it without driving someone to suicide in the process…