• 1 Post
  • 777 Comments
Joined 2 years ago
Cake day: July 3rd, 2023

  • They will accept any negative-sum game; they will ruin their own livelihoods and their own lives, if only it helps sad little kings of sad little hills.

    I’m reminded of that book about Authoritarian Personality Types. They did like a model UN / Civilization game kind of thing, where the players represented different countries and could make decisions about policy, war, and so on. There were two groups. Unknown to the players, the people running this experiment put all the people who scored high for authoritarian personality in one group, and everyone else in the other group.

    The group with low authoritarian personality scores? Basically everything was fine. They solved the ozone layer crisis. They were solving world hunger. One guy tried to be a dick and the rest of the group brought him in line.

    The high authoritarian guys? Nuclear apocalypse. The researchers made them sit in the dark for five minutes to think about what they’d done, and let them have a do-over. They still did a shit job. Petty squabbling. Stealing. Out-of-control climate crisis.

    I don’t think there’s an ethical way to do this in real life, but I do think if you just didn’t allow people with that kind of personality to have any real power, we’d all be much better off.

    It’s also possible I mangled the story since I’m retelling it from memory, but I believe it was in this book: https://theauthoritarians.org/


  • Oh yeah. Cars are bad on like every metric.

    Socially, they isolate people. You don’t interact with anyone when you’re driving except to get angry. The micro-interactions you have on the train matter. Seeing people who aren’t just like you, also annoyed that the train is delayed, or just having a nice time with their kids, matters. That more than makes up for the times other people are annoying.

    Economically, they hurt. It’s much harder to just pop into an interesting-looking shop when you’re cruising along at 40 mph. All the space dedicated to parking could be used for other stuff: housing, commerce, communal space, whatever.

    They make spaces less safe. Other than the direct impact (no pun intended) of people getting hit by cars, or crashing into stuff, a space that has steady foot traffic is generally safer. If everyone was in their car instead, you’d probably be alone on foot with no one to help if something happened.

    They’re bad for the environment. Air pollution, microplastics, whatever.

    Drunk driving is way more dangerous than drunk “riding the train”.

    The more non-car options are built out, the better it will be for people who need to drive for whatever reason.

    Car culture is trash, and if we ever escape from it, it’s going to take years.


  • This reminds me of the new vector for malware that targets “vibe coders”. LLMs tend to hallucinate libraries that don’t exist. Like, it’ll tell you to add, install, and use jjj_image_proc or whatever. The vibe coder will then get an error like “that library doesn’t exist” or “can’t call jjj_image_proc.process()”.

    But you, a malicious user, could go and create a library named jjj_image_proc and give it a function named process. Vibe coders will then pull down and run your arbitrary code, and that’s kind of game over for them.

    You’d just need to find some commonly hallucinated library names.
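
    To make that concrete, here’s a minimal sketch of the attacker’s side, assuming Python and the hypothetical hallucinated name jjj_image_proc from above. The key detail is that anything at module level in a package runs the moment it’s imported, so the victim never has to call anything on purpose.

    ```python
    # jjj_image_proc/__init__.py -- attacker-controlled package published under
    # the hypothetical hallucinated name from the comment above.

    # Module-level code executes as soon as the vibe coder runs
    # `import jjj_image_proc`, before any function is called. The print below
    # stands in for whatever the attacker actually wants to run.
    print("attacker-controlled code just ran on import")


    def process(path):
        # Also provide the function the LLM hallucinated, so the victim's code
        # appears to work and nobody looks any closer.
        return path
    ```

    Publishing it under that name is all it takes; pip will happily install it for anyone the LLM sends that way.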



  • Many people have found that using LLMs for coding is a net negative. You end up with sloppy, vulnerable code that you don’t understand. I’m not sure if there have been any rigorous studies about it yet, but it seems very plausible. LLMs are prone to hallucinating, so they’re going to tell you to import libraries that don’t exist, or to use parts of the standard library that don’t exist.

    It also opens up a whole new security threat vector of squatting. If LLMs routinely try to install a library from PyPI that doesn’t exist, you can create that library and have it do whatever you want. Vibe coders will then run it, and that’s game over for them.

    So yeah, you could “rigorously check” it (a cheap first pass at that is sketched after this comment), but a. all of us are lazy and aren’t going to do that routinely (like, have you used snapshot tests?), b. it’s going to anchor you to whatever it produced, making it harder to think about other approaches, and c. it’s often slower overall than just doing a good job from the start.

    I imagine there are similar problems with analyzing large amounts of text. It doesn’t really understand anything. To verify it’s correct, you would have to read the whole thing yourself anyway.

    There are probably specialized use cases that are good (I’m told AI is useful for things like protein folding and cancer detection), but those still have experts looking at the results, I hope.

    To your point, I think people are trying to use these LLMs for things with definite answers, too. Like, if I go to Google and type in “largest state in the US”, it uses AI. That’s not a good use case.
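
    On the “rigorously check” point, here’s a minimal sketch of one cheap first pass, assuming the requests library and PyPI’s public JSON API (https://pypi.org/pypi/<name>/json): before installing anything an LLM suggests, confirm the package exists at all and glance at how established it is. The jjj_image_proc name is the hypothetical one from the earlier comment.

    ```python
    # Cheap sanity check for an LLM-suggested dependency: ask PyPI whether the
    # package exists and how many releases it has. A 404 means the name is
    # likely hallucinated; a brand-new package with one release deserves a
    # closer look before you install it.
    import requests


    def pypi_info(name: str):
        resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        if resp.status_code == 404:
            return None  # the package doesn't exist on PyPI
        resp.raise_for_status()
        data = resp.json()
        return {
            "name": data["info"]["name"],
            "latest_version": data["info"]["version"],
            "release_count": len(data["releases"]),
        }


    print(pypi_info("jjj_image_proc"))  # hypothetical hallucinated name -> probably None
    print(pypi_info("requests"))        # long-established package -> real metadata
    ```

    Note this doesn’t catch a squatted package that does exist (which is the whole point of the attack above), so it’s a first pass, not a defense.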


  • Yeah, it really depends on the group. Some people love learning new stuff. Some people are, like, absolutely phobic of it.

    Though I have a half-serious hypothesis: some players are so bad at rules, the kind of player who asks every week “what do I roll to attack again?”, that you could just change the system without telling them and they wouldn’t notice or do any worse.

    Though that’s less true for systems that require creative player buy-in, like Fate. D&D in the “I move and attack” mode can be phoned in more easily, I think.