• Septimaeus@infosec.pub
    Edit-pre: To be clear…

    I use LLMs rarely (personal reasons) and never for certain things like writing and math (professional reasons), but this comment is not an “AI good/bad” take, just a practical question of tool safety/regs.

    AI, including LLMs, is forevermore just a tool in my mind. And we wouldn’t have OSHA/BMAS/HSE/etc. if idiots didn’t do idiot things with tools.

    But there’s evidently a certain type of idiot that’s spared from their idiocy only by lack of permission.

    From who? Depends.

    Sometimes they need permission from authority: “god told me to!”

    Sometimes they need it from the mob: “I thought I was on a tour!”

    And sometimes any fucking body will do: “dare me to do it!”

    But all these stories of nutters doing shit AI convinced them to do, from the comical to the deeply tragic, ring the same bonkers bell they always have.

    Therein lies the danger unique^1^ to these tools: they mimic a permission-giver better than anything we’ve made before.

    They’re tailor-made to activate this specific category of idiot, and their likely unparalleled ease of use absolutely scales that danger.

    As to whether these idiots wouldn’t have just found permission elsewhere, who knows.

    So my question: is some kind of training prerequisite warranted for LLM use, as is common with potentially dangerous tools? Is that too extreme? Is it too late for that? Am I overthinking it?

    ^1^Edit-post: unique danger, not greatest.

    Rant/

    What is the greatest danger, then? IMHO, settling for brittle “guard rails” and then bulldozing ahead, instead of laying the groundwork for real machine ethics.

    Hoping conscience emerges on its own from the organic training set is utterly facile, both theoretically and empirically. Engineers should know better.

    Why is it the greatest? Easy: some of history’s most important decisions were made by a person whose conscience countermanded their orders. Replacing empathic agents with machines eliminates those safeguards.

    So “existential threat,” and that’s even before considering climate. /Rant