Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 4 Posts
  • 149 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • Riskable@programming.dev to Technology@lemmy.world · The Cult of Microsoft · 45 up / 1 down · 2 days ago

    Ahaha! Microsoft employees are using AI to hallucinate their own performance reviews, and managers are using that very same AI to “review” said performance reviews. Which is exactly the dystopian vision of the future that OpenAI sells!

    What’s funny is that the “cult of Microsoft” is 100% bullshit, so the AI is being trained on bullshit, and as time goes on it’s being reinforced with its own hallucinated bullshit, because everyone is using it to bullshit the bullshitters in management who are demanding this bullshit!




  • Surely they can’t all be this dumb.

    After a few decades following American politics you’ll realize that yes, yes they can all be that dumb.

    Just have a general conversation with your most conservative neighbors about basically anything and you’ll quickly learn that there’s nothing they don’t have an opinion on and their level of ignorance is… Impressive.

    Like, dude, you’re 60+ years old and you think hurricanes are a conspiracy‽ The point where they lost their mind was long ago.

    Sooner or later you can’t help but wonder whether they were ever sane, or whether they just faked it long enough to have a career/survive until retirement.





  • As another (local) AI enthusiast, I think the point where AI goes from “great” to “just hype” is when it’s expected to generate the correct response, image, etc. on the first try.

    For example, telling an AI to generate a dozen images from a prompt, then picking a good one or re-working the prompt a few times to get what you want. That works fantastically well 90% of the time (assuming you’re generating something it has been trained on).

    Expecting AI to respond with the correct answer when given a query > 50% of the time or expecting it not to get it dangerously wrong? Hype. 100% hype.

    It’ll be a number of years before AI is trustworthy enough not to hallucinate bullshit or generate the exact image you want on the first try.




  • I’ve used this term before in a different context: It’s what happens when someone is about to do something that both scares and excites them at the same time. Like when a person suddenly finds themselves extremely attracted to someone and they want to make a good impression. That’s when their brain seems to be both there and not there at the same time.

    When observing someone in this sort of situation you quickly conclude that the brain has gone, but later, upon reflection, it may seem to have actually been present. The only way to know for sure is to find out how the events eventually concluded; opening the box, as it were.

    That’s when you find out whether or not the person was a pussy.


  • As expected, nobody cares about “reader mode”. Only once in my life has it ever come in handy… It was a website that was so badly designed I swore never to go back to it ever again.

    I forget what it was, but apparently I wasn’t the only one, and thus it must’ve died a fast death, as I haven’t seen it again (otherwise I’d remember).

    Basically, any website that gets users so frustrated that they resort to reader/simplified mode isn’t going to last very long. If I had my way I would change the messages:

    “This website appears to be total shit. Do you want Firefox to try to fix it so your eyes don’t bleed trying to get through it?”

    I want an extension that does this, actually! It doesn’t need to actually modify the page. Just give me a virtual assistant to commiserate with…

    “The people who made this website should have their browser’s back button removed entirely as punishment for erecting this horror!”


  • Just a point of clarification: Copyright is about the right of distribution. So yes, a company can just “download the Internet”, store it, and do whatever TF they want with it as long as they don’t distribute it.

    That’s the key: distribution. That’s why no one gets sued for downloading; they only ever get sued for uploading. Furthermore, the damages (if found guilty) are based on the number of copies that get distributed. That’s because copyright law hasn’t been updated in decades and 99% of it predates computers (especially all the important case law).

    What these lawsuits against OpenAI are claiming is that OpenAI is making derivative works of the authors’/owners’ works. Which is kinda what’s going on, but also not really. Let’s say that someone asks ChatGPT to write a few paragraphs of something in the style of Stephen King… His “style” isn’t even copyrightable, so as long as it didn’t copy his works word-for-word, is it even a derivative? No one knows. It’s never been litigated before.

    My guess: No. It’s not going to count as a derivative work. Because it’s no different than a human reading all his books and performing the same, perfectly legal function.







  • You had corruption with btrfs? Was this with a spinning disk or an SSD?

    I’ve been using btrfs for over a decade on several filesystems/machines and I’ve had my share of problems (mostly due to ignorance) but I’ve never encountered corruption. Mostly I just run out of disk space because I forgot to balance or the disk itself had an issue and I lost whatever it was that was stored in those blocks.

    I’ve had to repair a btrfs partition before due to who-knows-what back when it was new, but it’s been over a decade since I’ve had an issue like that. I remember btrfs check --repair being totally useless back then, haha. My memory of that event is fuzzy, but I think I fixed whatever it was bitching about by remounting the filesystem with an extra option that forced it to recreate a cache of some sort. It ran for many years after that, until the disk spun itself into oblivion.
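
    For anyone hitting the same forgot-to-balance symptom, a minimal sketch of the usual check-then-balance routine (the mount point here is a placeholder; -dusage restricts the balance to data chunks that are at most that percent full):

    ```shell
    # Compare allocated chunk space against actual usage; "allocated
    # but mostly unused" chunks are what a balance reclaims.
    btrfs filesystem usage /mnt/data

    # Rewrite only mostly-empty data chunks first; raise the
    # threshold if the first pass doesn't free enough space.
    btrfs balance start -dusage=25 /mnt/data
    btrfs balance start -dusage=50 /mnt/data
    ```

    Starting with a low -dusage value keeps the balance fast, since full chunks gain nothing from being rewritten.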


  • I wouldn’t say “repairing XFS is much easier.” Yeah, running xfs_repair is really all you have to do 99% of the time (XFS’s fsck is a no-op), but you’re also much more likely to end up with corrupted files when you’re in that situation compared to, say, btrfs, which supports snapshotting and redundancy.

    Another problem with XFS is its lack of flexibility. By that I don’t mean “you can configure it across any number of partitions on-the-fly in any number of (extreme) ways” (like you can with btrfs and ZFS); I mean it doesn’t have very many options for how it deals with things like inodes (e.g. tail allocation). You can increase the total amount of space allowed for inode allocation, but only when you create the filesystem, and even then the limit is (kind of absurdly) low in a way that would surprise most folks here.

    As an example, with an XFS filesystem, in order to store 2 billion symlinks (each one takes an inode) you would need 1 TiB of storage just for the inodes. Contrast that with something like btrfs with max_inline set to 2048 (the default): 2 billion symlinks will take up a little less than 1 GB (assuming a simplistic setup on at least a 50 GB single partition).

    Learn more about btrfs inlining: https://btrfs.readthedocs.io/en/latest/Inline-files.html
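
    The XFS figure is easy to check: at the 512-byte inode size that mkfs.xfs uses by default for v5 filesystems (settable only at mkfs time, e.g. via -i size=), 2 billion inodes comes out just shy of 1 TiB. A quick sketch of the arithmetic (the btrfs side depends on metadata layout and isn’t modeled here):

    ```shell
    # 2 billion inodes at the 512-byte default inode size
    inodes=2000000000
    inode_size=512
    total_bytes=$((inodes * inode_size))
    echo "$total_bytes bytes"                         # 1024000000000
    echo "$((total_bytes / 1024 / 1024 / 1024)) GiB"  # 953 GiB, just under 1 TiB
    ```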