I’m beautiful and tough like a diamond…or beef jerky in a ball gown.

  • 39 Posts
  • 308 Comments
Joined 8 months ago
Cake day: July 15th, 2025



  • The world is just as fucked up as it ever was. The only difference now is that every fucked-up thing that ever happens anywhere is getting pushed to your always-on doomscroll device in real time with people attaching their mostly ignorant opinions to it.

    This is where the “touch grass” advice comes into play. In broad terms, the real world is not nearly the hellhole social media portrays it to be.







  • I get what you’re saying and the “individual carbon footprint” is often used to blame shift to regular people just living their lives, but we do still have a carbon footprint. It may be a tiny, rodent-sized footprint compared to the Kaiju-sized ones of big industries, but our actions and choices do have an effect (especially collectively).

    I just don’t like dismissing the individual carbon footprint as total propaganda because it’s not wrong (though I acknowledge it is abused). Dismissing it like that just puts out a defeatist “nothing I do matters” message when our individual choices do matter and add up.

    Can you live a totally carbon-neutral life in the modern age? No, probably not. But we also shouldn’t throw the baby out with the bathwater and do nothing.








  • Disclaimer: All of my LLM experience is with local models in Ollama on extremely modest hardware (an old laptop with Nvidia graphics), so I can’t speak to the technical reasons the context window isn’t infinite, or at least larger, on the big players’ models. My understanding is that the context window is basically its short-term memory. In humans, short-term memory is also fairly limited in capacity. But unlike humans, the LLM can’t really see (or hold) the big picture in its mind.

    But yeah, everything you said is correct. Expanding on that, if you try to get it to generate something long-form, such as a novel, it’s basically just generating infinite chapters using the previous chapter (or as much of the history as fits into its context window) as reference for the next. This means, at minimum, it’s going to be full of plot holes and will never reach a conclusion unless explicitly directed to wrap things up. And, again, given the limited context window, the ending will be full of plot holes and based essentially on only the previous chapter or two.
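    To make that concrete, here’s a toy sketch (not any real Ollama API, and the token counts are made up) of why early chapters fall out of the model’s view: the history gets trimmed to a fixed token budget, newest-first, so by chapter 10 the model literally never sees chapter 1.

    ```python
    # Illustrative sketch: history trimmed to a fixed context budget.
    # Assumes a flat ~1,000 tokens per chapter for simplicity.

    def trim_to_context(chapters, context_budget, tokens_per_chapter=1000):
        """Keep only the most recent chapters that fit in the window."""
        kept = []
        used = 0
        for chapter in reversed(chapters):   # walk newest-first
            if used + tokens_per_chapter > context_budget:
                break
            kept.append(chapter)
            used += tokens_per_chapter
        return list(reversed(kept))          # restore chronological order

    # With a 4,000-token window, only the last 4 chapters survive --
    # chapter 1's plot threads are simply gone from the prompt.
    story = [f"Chapter {i}" for i in range(1, 11)]
    print(trim_to_context(story, context_budget=4000))
    # ['Chapter 7', 'Chapter 8', 'Chapter 9', 'Chapter 10']
    ```

    Real backends do smarter things (summarization, overlap), but the basic failure mode is the same: whatever doesn’t fit in the budget may as well never have been written.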

    It’s funny because I recently found an old backup drive from high school with some half-written Jurassic Park fan fiction on it, so I tasked an LLM with fleshing it out, mostly for shits and giggles. The result is pure slop that seems like it’s building to something and ultimately goes nowhere. The other funny thing is that it reads almost exactly like a season of Camp Cretaceous / Chaos Theory (the animated kids’ JP series), and I now fully believe those are also LLM-generated.



  • Nice. I’ve got the Anker version, but it’s half the capacity at 1 kWh. It charges exclusively from 800 W of PV panels (though the unit’s input tops out at 600 W) and can push out 2,000 W continuous and 3,000 W peak.

    I’ve got a splitter from the PV that goes to both the Anker and a DC-DC converter, which then feeds a few 12 V -> USB power delivery adapters. Those can use the excess from the PV to charge power banks, phones, laptops, etc. while the rest goes to the Anker (it doesn’t seem to affect the MPPT unless there’s basically no sunlight at all). Without the splitter, anything above 600 W is wasted until I expand my setup later this spring.
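    Rough napkin math for the splitter, using the numbers above (idealized: real PV output and MPPT behavior vary with sun and load):

    ```python
    # Sketch: with an input cap on the power station, the splitter lets
    # the DC-DC branch soak up PV output the station can't accept.

    def split_pv(pv_watts, station_input_cap=600):
        """Return (watts to power station, watts spare for the DC-DC branch)."""
        to_station = min(pv_watts, station_input_cap)
        to_dcdc = pv_watts - to_station
        return to_station, to_dcdc

    print(split_pv(800))  # full sun: (600, 200) -- 200 W spare for USB charging
    print(split_pv(400))  # partial sun: (400, 0) -- under the cap, nothing wasted
    ```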

    All I can say is that it absolutely rocks! On sunny days, I run my entire homelab and my work-from-home office from it, charge all my devices, and even run my refrigerator if I feel like running an extension cord. It’s set up downstairs, so I also plug my washing machine into it and can get a few loads of laundry done as well.

    All from its solar input.