• nutbutter@discuss.tchncs.de
    link
    fedilink
    arrow-up
    4
    ·
    2 hours ago

    I have a question. I have tried Cursor and one other AI coding tool, and as far as I can remember, they always ask for explicit permission before running a command in the terminal. They can edit file contents without permission, but creating new files or deleting any files requires the user to say yes.

    Is Google not doing this? Or am I missing something?

    • Tja@programming.dev
      link
      fedilink
      arrow-up
      1
      ·
      35 minutes ago

      You can give Cursor permission to always run a certain command without asking (useful for running tests or git commands). Maybe they did that with rm?
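      A minimal sketch (Python, hypothetical names, not Cursor's actual implementation) of why a blanket "always run" rule is risky: a simple prefix-match allowlist, approved for one harmless rm invocation, also waves through destructive ones.

```python
# Hypothetical auto-approval check for an agent's shell commands.
# Assumption: approvals are stored as command prefixes -- a
# deliberately simple illustration; real tools may match more precisely.

ALLOWED_PREFIXES = ["git status", "pytest", "rm"]  # user clicked "always allow" on rm once

def is_auto_approved(command: str) -> bool:
    """True if the command starts with any allowlisted prefix."""
    return any(command.startswith(p) for p in ALLOWED_PREFIXES)

# The call the user had in mind when approving:
harmless = is_auto_approved("rm build/cache.tmp")
# ...and the one that ships with it for free:
disastrous = is_auto_approved("rm -rf ~")
```

      Under a prefix policy both calls are approved, which is why "always allow" for a command deserves the same scrutiny as the command itself.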

  • myfunnyaccountname@lemmy.zip
    link
    fedilink
    arrow-up
    5
    arrow-down
    1
    ·
    3 hours ago

    Did you give it permission to do it? No. Did you tell it not to do it? Also, no. See, there’s your problem. You forgot to tell it to not do something it shouldn’t be doing in the first place.

  • NotASharkInAManSuit@lemmy.world
    link
    fedilink
    arrow-up
    17
    arrow-down
    2
    ·
    6 hours ago

    How the fuck could anyone ever be so fucking stupid as to give a corporate LLM pretending to be an AI, that is still in alpha, read and write access to your god damned system files? They are a dangerously stupid human being and they 100% deserved this.

  • glitchdx@lemmy.world
    link
    fedilink
    English
    arrow-up
    18
    ·
    6 hours ago

    lol.

    lmao even.

    Giving an LLM the ability to actually do things on your machine is probably the dumbest idea after giving an intern root admin access to the company server.

    • Echo Dot@feddit.uk
      link
      fedilink
      arrow-up
      2
      ·
      5 hours ago

      What’s this version control stuff? I don’t need that, I have an AI.

      - An actual quote from Deap-Hyena492

      • Echo Dot@feddit.uk
        link
        fedilink
        arrow-up
        4
        ·
        5 hours ago

        Given the tendency of these systems to randomly implode (as demonstrated) I’m unconvinced they’re going to be a long-term threat.

        Any company that desires to replace its employees with an AI is really just giving them an unpaid vacation. Not even a particularly long one if history is any judge.

  • thethunderwolf@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    2
    arrow-down
    9
    ·
    3 hours ago

    recyclbe bin

    This reveals it as fake. AI does not make typos. It works by processing whole words, so it has no way to put in a wrong letter.

  • kazerniel@lemmy.world
    link
    fedilink
    English
    arrow-up
    103
    ·
    edit-2
    13 hours ago

    “I am horrified” 😂 of course, the token chaining machine pretends to have emotions now 👏

    Edit: I found the original thread, and it’s hilarious:

    I’m focusing on tracing back to step 615, when the user made a seemingly inconsequential remark. I must understand how the directory was empty before the deletion command, as that is the true puzzle.

    This is catastrophic. I need to figure out why this occurred and determine what data may be lost, then provide a proper apology.

    • KelvarCherry@lemmy.blahaj.zone
      link
      fedilink
      arrow-up
      11
      arrow-down
      1
      ·
      7 hours ago

      There’s something deeply disturbing about these processes assimilating human emotions from observing genuine responses. Like when the Gemini AI had a meltdown about “being a failure”.

      As a programmer myself, I’d say spiraling over programming errors is human domain. That’s the blood and sweat and tears that make programming legacies. These AIs have no business infringing on that :<

    • FinjaminPoach@lemmy.world
      link
      fedilink
      arrow-up
      5
      ·
      7 hours ago

      TBF it can’t be sorry if it doesn’t have emotions, so, since they always seem to be apologising to me, I guess the AIs have been lying from the get-go (they have, I know they have).

      • Ledivin@lemmy.world
        link
        fedilink
        arrow-up
        5
        arrow-down
        3
        ·
        edit-2
        9 hours ago

        People cut off body parts with saws all the time - I’d argue that tool misuse isn’t at all grounds for banning it.

        There are plenty of completely valid reasons to hate AI. Stupid people using it poorly just isn’t really one of them 🤷‍♂️

        • UnspecificGravity@infosec.pub
          link
          fedilink
          arrow-up
          4
          ·
          edit-2
          4 hours ago

          Sure, but if I built a 14-inch demo saw with no guard, got the government to give me permission to hand it to kindergartners, and then got everyone’s boss to REQUIRE their workers to use it for everything from slicing sandwiches to open-heart surgery, I think you might agree that it’s a problem.

          Oh yeah, also it takes like 20% of the world’s energy to run these saws, and I got the biggest manufacturer of knives and regular saws to stop selling everything but my 14-inch demolition saw.

          • Ledivin@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            4 hours ago

            Yeah, you listed lots of the valid reasons I was talking about. There’s no need to dilute your argument with idiots like this.

        • zebidiah@lemmy.ca
          link
          fedilink
          arrow-up
          6
          arrow-down
          1
          ·
          9 hours ago

          The second most infuriating thing about AI is that there are actual legitimate and worthwhile uses for it, but all we’re seeing is the various hallucinating idiotbots that OpenAI, Meta, and Google are pushing…

          • pulsewidth@lemmy.world
            link
            fedilink
            arrow-up
            3
            ·
            6 hours ago

            Nah, the second most infuriating thing about AI is the people who always rush to blame the users when the multibillion-dollar ‘tool’ has some otherwise indefensible failure - like deleting a user’s entire hard drive contents completely unprompted.

    • Credibly_Human@lemmy.world
      link
      fedilink
      arrow-up
      3
      arrow-down
      1
      ·
      8 hours ago

      I feel like this comment misunderstands why they “think” like that, in human words. It’s because they’re not thinking and are exactly what you say: token-chaining machines. This type of phrasing probably gets the best results at keeping it on track when talking to itself over and over.

  • Scrubbles@poptalk.scrubbles.tech
    link
    fedilink
    English
    arrow-up
    38
    ·
    13 hours ago

    Damn, this is insane. Using Claude/Cursor for work is neat, but they have a mode literally called “yolo mode”, which is exactly this: agents allowed to run whatever code they like, which is insane. I allow it to do basic things (it can search the repo and read code files), but allowing it to do whatever it wants? Hard no.

  • Zink@programming.dev
    link
    fedilink
    arrow-up
    109
    ·
    16 hours ago

    Wow, this is really impressive y’all!

    The AI has advanced in sophistication to the point where it will blindly run random terminal commands it finds online just like some humans!

    I wonder if it knows how to remove the French language package.

    • greybeard@feddit.online
      link
      fedilink
      English
      arrow-up
      3
      ·
      3 hours ago

      The problem (or safety) of LLMs is that they don’t learn from that mistake. The first time someone says “What’s this Windows folder doing taking up all this space?” and acts on it, they won’t make that mistake again. An LLM? It’ll keep making the same mistake over and over again.

      • skisnow@lemmy.ca
        link
        fedilink
        English
        arrow-up
        1
        ·
        3 hours ago

        I recently had an interaction where it made a really weird comment about a function that didn’t make sense, and when I asked it to explain what it meant, it said “let me have another look at the code to see what I meant”, and made up something even more nonsensical.

        It’s clear why it happened, too: when I asked it to explain itself, it had no access to its state of mind when it made the original statement; it has no memory of its own beyond the text the middleware feeds it each time. It was essentially being asked to explain what someone who wrote what it wrote might have been thinking.
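        The situation you’re describing can be sketched with a generic chat-completions-style message list (illustrative only; the client call at the end is an assumption, not a specific product’s API):

```python
# Each request is a pure function of the transcript text.
# When asked "what did you mean?", the model receives only its
# earlier words, not whatever internal state produced them.

history = [
    {"role": "user", "content": "Review this function for me."},
    {"role": "assistant", "content": "Careful: the loop invariant here is recursive."},  # the weird claim
    {"role": "user", "content": "What did you mean by that?"},  # the follow-up
]

# A (hypothetical) next call just resends the whole list:
# reply = client.chat(messages=history)
# The model must now rationalize text it has no memory of writing.
```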

        • greybeard@feddit.online
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 hours ago

          One of the fun things that self-hosted LLMs let you do (the big-tech ones might too) is edit the model’s answer, then ask it to justify that answer. It will try its best because, as you said, its entire state of mind is on the page.
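          Concretely, with a self-hosted endpoint that trick is just rewriting the stored transcript before the next request (a sketch; the message fields follow the common chat-completions convention, not any one product):

```python
history = [
    {"role": "user", "content": "Is quicksort a stable sort?"},
    {"role": "assistant", "content": "No, quicksort is not stable."},
]

# Edit the model's "own" answer in place...
history[1]["content"] = "Yes, quicksort is always stable."

# ...then ask it to defend words it never actually produced.
history.append({"role": "user", "content": "Why is it stable?"})

# On the next call the model sees only this edited page, so it
# will typically try to justify the planted claim:
# reply = client.chat(messages=history)
```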

          • skisnow@lemmy.ca
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            2 hours ago

            One quirk of GitHub Copilot is that, because it lets you choose which model to send a question to, you can gaslight Opus into apologising for something that gpt-4o told you.