• Chaotic Entropy@feddit.uk · 7 minutes ago

    Google said in response that “unfortunately AI models are not perfect.”

    Well yeah, it failed. What a disappointment.

  • melfie@lemy.lol · 3 hours ago

    unfortunately AI models are not perfect

    There sure are a lot of data centers being built, supply chains being destroyed, risks of ruining the economy, water being consumed, electricity being burned, and overall societal costs being levied over this imperfect tech.

  • GhostedIC@sh.itjust.works · 4 hours ago

    Remember the guy at Autozone who stood there insisting your car needs four spark plugs, even after you told him you have a V6? Because “the computer says so right here”?

    I wonder what even the non-schizophrenic ones will do with AI.

  • Septimaeus@infosec.pub · edited 2 minutes ago
    Edit-pre: To be clear…

    I use LLMs rarely (personal reasons) and never for certain things like writing and math (professional reasons), but this comment is not an “AI good/bad” take, just a practical question of tool safety/regs.

    AI, including LLMs, will forevermore be just tools in my mind. And we wouldn’t have OSHA/BMAS/HSE/etc. if idiots didn’t do idiot things with tools.

    But there’s evidently a certain type of idiot that’s spared from their idiocy only by lack of permission.

    From who? Depends.

    Sometimes they need permission from authority: “god told me to!”

    Sometimes they need it from the mob: “I thought I was on a tour!”

    And sometimes any fucking body will do: “dare me to do it!”

    But all these stories of nutters doing shit AI convinced them to do, from the comical to the deeply tragic, ring the same bonkers bell they always have.

    But therein lies the danger unique^1^ to these tools: that they mimic a permission-giver better than any we’ve made.

    They’re tailor-made for activating this specific category of idiot, and their likely unparalleled ease-of-use absolutely scales that danger.

    As to whether these idiots wouldn’t have just found permission elsewhere, who knows.

    My question is whether some kind of training prerequisite is warranted for LLM usage, as is common with potentially dangerous tools. Is that too extreme? Is it too late for that? Am I overthinking it?

    ^1^Edit-post: unique danger, not greatest.

    Rant/

    What is the greatest danger, then? IMHO, settling for brittle “guard rails” and bulldozing ahead instead of laying the groundwork of real machine ethics.

    Hoping conscience is an emergent property of the organic training set is utterly facile, theoretically and empirically. Engineers should know better.

    Why is it greatest? Easy. Because some of history’s most important decisions were made by a person whose conscience countermanded their orders. Replacing empathic agents with machines eliminates those safeguards.

    So “existential threat” and that’s even before considering climate. /Rant

  • arc99@lemmy.world · 4 hours ago

    LLMs are only as good as their training and they’re not “intelligent”: they’re spewing out a response statistically relevant to the input context. I’m sure a delusional person could cause an LLM to break by asking it incoherent, nonsensical things it has no strong pathways for, so god knows what response it would generate. It may even be that, within the billions of texts the LLM ingested for training, there were a tiny handful of delusional writings which somehow win out on these weak pathways.
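
    To make “statistically relevant to the input context” concrete, here is a minimal sketch of next-word sampling. Everything in it (the vocabulary, the probabilities, the function) is invented purely for illustration; real models operate over tokens with billions of learned weights, but the “continue with whatever is likely” mechanism is the same:

    ```python
    import random

    # Toy "model": a hand-made table of next-word probabilities.
    # A real LLM learns billions of conditional probabilities like these
    # from its training text; every word and number here is made up.
    NEXT_WORD_PROBS = {
        "they":     {"are": 0.6, "watch": 0.4},
        "are":      {"watching": 0.5, "friendly": 0.4, "gone": 0.1},
        "watch":    {"me": 0.7, "tv": 0.3},
        "watching": {"me": 0.6, "us": 0.4},
    }

    def continue_prompt(prompt: str, max_words: int = 5) -> str:
        """Extend a prompt by repeatedly sampling a likely next word."""
        words = prompt.lower().split()
        for _ in range(max_words):
            dist = NEXT_WORD_PROBS.get(words[-1])
            if dist is None:
                break  # no strong pathway for this word: the toy model just stops
            choices, weights = zip(*dist.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    # A vaguely paranoid opening tends to get a paranoid-sounding
    # continuation, simply because those are the likely next words.
    print(continue_prompt("they"))  # e.g. "they are watching me"
    ```

    Feed a system like this a paranoid opening and it will happily extend it, not because it “believes” anything, but because those are the statistically likeliest continuations.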

    • BilSabab@lemmy.world · 4 hours ago

      Given that modern datasets use way too much content from social media, it is hard to expect anything else at this point.

    • Nalivai@lemmy.world · 4 hours ago

      You don’t even have to “break” an LLM into anything. It continues your prompts, producing sentences as close as possible to something people will mistake for language. If you give it a paranoid request, it will continue in the same language.
      The only thing that training gave it is the ability to create sequences of words that resemble sentences.

    • Hiro8811@lemmy.world · 3 hours ago

      It didn’t break; it probably just created an echo chamber sustaining that person’s delusions.

  • cøre@leminal.space · 4 hours ago

    Undocumented probably b/c of a lack of mental health coverage on his insurance. If he had any.

    • FatVegan@leminal.space · 4 hours ago

      I read somewhere that these chatbots are really good at triggering schizophrenia, for example. So people could be perfectly fine mentally until they spend too much time talking to a dumbass chatbot.

  • utopiah@lemmy.world · 9 hours ago

    To be fair I think that’s a very harsh depiction of the events.

    It’s totally lacking the perspective of the shareholder. They were promised money and they have emotions too. Google shareholders deserve better representation!

    /$ obviously

  • Matt@lemmy.world · 8 hours ago

    Honestly, no sane person will have this happen to them. Someone with such strong delusions should not be anywhere near AI or even sharp objects. This person’s problem was not AI, it was their severe mental illness which was obviously not being treated properly for whatever reason.

    • chiliedogg@lemmy.world · 3 hours ago

      You don’t know if you’re sane. Millions of people aren’t aware of their mental illness and manage to live normal lives. LLMs can trigger delusional states in vulnerable people who have never experienced them, because LLMs are essentially delusion-generating machines.

    • Ricaz@lemmy.dbzer0.com · 4 hours ago

      Sure, but it would be illegal for a human to coerce/encourage a mentally ill person to commit a crime (or worse).

      So who’s responsible? Caretaker? Government?

    • Areldyb@lemmy.world · 5 hours ago

      The complaint, filed in California on Wednesday, says that Gavalas — who reportedly had no documented history of mental health problems — started using the chatbot in August 2025 for “ordinary purposes” like “shopping assistance, writing support, and travel planning.”

        • Areldyb@lemmy.world · 4 hours ago

          “He was definitely already suffering from severe mental illness”

          “There’s no evidence of that, you can’t assume that”

          “But I will anyway”

          lol ok

    • Nalivai@lemmy.world · 3 hours ago

      “Sane” people are an exceedingly small minority. Everyone is a couple of good conversations away from falling into some sort of rabbit hole from which there is no return. Some people have very easily triggered schizophrenia, which is more obvious, but nobody is OK and nobody is immune.

    • MDCCCLV@lemmy.ca · 6 hours ago

      The issue is that it can encourage people who are having issues to do things, and they only need to hit the right sort of energetic craziness once to cause problems.

    • Eximius@lemmy.world · 7 hours ago

      I think that line of thinking treats AI as some “weird occult book/tool for funny dealings”, and not as the “government- and megacorp-sanctified, close-to-AGI super-intelligence tool for you to use for free because benevolence” it is institutionally lied up to be.

      Sanity is culture-relative. You’re absolutely right, but also, this is a symptom of the culture.

      • NihilsineNefas@slrpnk.net · 4 hours ago

        Not to mention how every “AI” company is actively participating in the surveillance not only of citizens but of people in other countries, how AI is actively being used by the US military to pick targets for bombing, or how it’s being used to spread misinformation at a rate that would make the CIA’s efforts in the ’60s sound like that guy you met at the pub who has MANY opinions on geopolitics.

    • HugeNerd@lemmy.ca · 6 hours ago

      “Whatever reason” is often that we can’t force people to take their meds.

    • scbasteve@lemmy.world · 4 hours ago

      How can people be this stupid?

      This goes beyond stupidity. This guy was most likely delusional, suffering from some sort of mental illness or psychotic break.

  • Phoenixz@lemmy.ca · 16 hours ago

    So Google’s AI, or any AI really, likely got this concept from dystopian sci-fi novels.

    Since AIs have no concept of context, they won’t really know the difference between fact and fiction, and there we go.

    If your AI model isn’t perfect then don’t make people pay fucking money for it you fucking twats

    Also, this shit ain’t “lack of perfection”, this is akin to your car’s brakes suddenly refusing to work right when you pull up at a red light. If your car is so bad that it kills you, you don’t use it. If the manufacturer knew that it could happen but let you drive it anyway, they’re responsible, and at the very least they should pay (they should be thrown in jail, really, but that’s a different point).

    If AI fucks up and people die, the manufacturers just shrug: oh well, oh you!