

Apple stores are the embodiment of wasted space.
Otherwise, there are ways to use negative space to help direct the user to important information. It’s just often abused to direct them away from it to sell something, sadly.


TLDR: data is something you collect over time from users, so you shouldn’t let the contracts for it mindlessly drift, or you might render old data unusable. Keeping those contracts in one place helps keep them organized.
But that explanation sucks if you’re actually five, so I asked ChatGPT to write that explanation for you, since that would be hard for me to do:
Here’s a super-simple, “explain it like I’m 5” version of what that idea is trying to say:
🧠 Imagine your toys
You have a bunch of toys in your room — cars, blocks, stuffed animals.
Now imagine this:
You put some cars in the toybox.
You leave other cars on the floor in another place.
You keep some blocks in a bucket… and some blocks on the shelf.
And every time you want a toy, you have to run to a different spot to find its matching pieces.
That would be really confusing and hard to play with, right? Because things are spread out in too many places for no good reason.
🚧 What the blog is really warning about
In software (computer programs), “state” is like where toys are stored — it’s important information the program keeps track of. For example, it could be “what level I’m on in a game” or “what’s in my cart when I shop online.”
The article says the biggest mistake in software architecture is:
Moving that important stuff around too much or putting it in too many places when you don’t need to.
That makes the program really hard to understand and work with, just like your toys would be if they were scattered all over the place. (programming.dev)
🎯 Why that matters
If the important stuff is all over the place:
People get confused.
It’s harder to fix mistakes.
The program gets slower and more complicated for no reason.
So the lesson is:
👉 Keep the important information in simple, predictable places, and don’t spread it around unless you really need to. (programming.dev)
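To put that TLDR into code terms, here’s a minimal sketch of what “keep the contracts in one place” can look like (the module name and fields are made up for illustration): one module owns the record definition and the single upgrade path for old data, and everything else imports it instead of redefining fields locally.

```python
# contracts.py - hypothetical single home for the data contract.
# Producers and consumers all import this; nobody redefines the fields locally.
from dataclasses import dataclass

SCHEMA_VERSION = 2  # bumped deliberately, never allowed to drift silently

@dataclass(frozen=True)
class CartEvent:
    user_id: str
    item_id: str
    quantity: int
    schema_version: int = SCHEMA_VERSION

def parse_cart_event(raw: dict) -> CartEvent:
    """The one upgrade path: old records get migrated here and nowhere else."""
    if raw.get("schema_version", 1) == 1:
        raw = {**raw, "quantity": 1, "schema_version": 2}  # v1 records had no quantity field
    return CartEvent(raw["user_id"], raw["item_id"],
                     raw["quantity"], raw["schema_version"])
```

Old data stays usable because there’s exactly one place that knows how to read it.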


I mean, my car is an older Corolla, so almost entirely physical buttons. My phone, however…


Wow, I immediately installed flickboard, thanks!


There have been touch-specific keyboard layouts, but not really for English or other languages with Latin characters. You can probably imagine that CJK touch KBs can be pretty creative sometimes though.


They also just don’t work at all when they get wet. This tends to be a problem when your car is covered in snow and ice, of course.


“It is not up to me to limit my questions, but up to the minister to provide the answers,” he said. “If at some point such an answer poses a danger or leads to espionage, then the espionage is not my fault, but the minister’s, because he has disclosed information that he should not have disclosed.”
So he’s admitting that he’s untrustworthy? Why else would it not be his fault, if not because he’s well known to be untrustworthy?


I want the power button on my phone to protrude like an actual button so I can actually press it. The thing’s like 4mm wide, and I’m supposed to somehow push it through the gap in the side of my case with a thumb that’s ~10x (just checked) the width of the button.
As for other buttons, I wouldn’t mind physical buttons for the Android controls at the bottom, but not really a huge deal to me tbh.


wouldn’t it be smart for these companies to bid a bit more if they could, to make these builds with more resellable parts instead of using these crazy server rack combo platters?
Their customers don’t care whether they’re resellable. They just want GPUs.
We aren’t their customers, and I mean this in the most literal sense possible. You can’t buy these. They only sell them to big companies.
Yes, it’s shit.


Nvidia sold many of their data center GPUs as full server racks. The GPUs aren’t in a form factor you can use with a traditional PC and simply cannot slot into a PCIe slot, because they don’t have that kind of interface. Look up the DGX B200, which ships in a form factor intended for rack mounting and has 8 GPUs alongside two CPUs and everything else needed to run it as a server. These GPUs don’t support video output either. It’s not just that they lack an output port; they don’t even have the software for it, because these GPUs aren’t capable of rendering graphics (which makes you wonder why they’re even called “GPUs” anymore).


To be more specific here, GPUs are really, really good at linear algebra. They multiply matrices and vectors as single operations. CPUs can often do some SIMD operations, but not nearly as well or as many.
Video games do a lot of LA in order to render scenes. At the bare minimum, each model vertex gets multiplied by matrices to convert it from world space into clip space, NDC, and finally screen space, and those matrices are calculated from the properties of your camera and projection type.
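As a rough numpy sketch of that vertex math (the matrices and numbers here are arbitrary, just to show the shape of it): each stage is one matrix multiply, followed by the perspective divide to get NDC.

```python
import numpy as np

# A made-up vertex in world space, homogeneous coordinates (x, y, z, w).
vertex_world = np.array([1.0, 2.0, -5.0, 1.0])

# View matrix: identity here, i.e. a camera at the origin looking down -Z.
view = np.eye(4)

# Simple perspective projection; fov/near/far values chosen arbitrarily.
near, far = 0.1, 100.0
f = 1.0 / np.tan(np.radians(45.0) / 2.0)
proj = np.array([
    [f, 0.0, 0.0, 0.0],
    [0.0, f, 0.0, 0.0],
    [0.0, 0.0, (far + near) / (near - far), (2 * far * near) / (near - far)],
    [0.0, 0.0, -1.0, 0.0],
])

clip = proj @ view @ vertex_world  # one matrix-vector multiply per stage
ndc = clip[:3] / clip[3]           # perspective divide -> normalized device coordinates
print(ndc)
```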
ML also does a lot of LA. Neural nets, for example, are literally a sequence of matrix multiplications. A very simple neural net works by taking a vector representing an input (or a matrix for multiple inputs), multiplying it by a matrix representing a layer’s weights, passing the result through an activation function, and then doing that a bunch more times.
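To make that concrete, here’s a minimal numpy sketch (shapes and weights are arbitrary) of a two-layer forward pass; it really is just matrix multiplies with an activation in between:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny made-up network: 4 inputs -> 8 hidden units -> 3 outputs.
x  = rng.normal(size=(1, 4))       # one input as a row vector (a matrix for a batch)
W1 = rng.normal(size=(4, 8))       # first layer's weight matrix
W2 = rng.normal(size=(8, 3))       # second layer's weight matrix

relu = lambda a: np.maximum(a, 0)  # activation function

hidden = relu(x @ W1)              # matrix multiply, then activation
output = hidden @ W2               # and again for the next layer
print(output.shape)                # (1, 3)
```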
Both workloads want GPUs, but they need different things from them. AI wants GPUs with huge amounts of memory (for these huge models) that are optimized for data center usage (with cooling designed for racks). Games want GPUs that don’t need terabytes of VRAM, but that should be fast at calculating, fast at transferring data between the CPU and GPU, and capable of running many shader programs in parallel (so you can render more pixels at a time, for example).


Also a lot of data scientists aren’t really programmers. They just learned Python to do data science. Learning a new language for this purpose would be both very difficult and, in their eyes, often unnecessary.


I’m not really sure how that’s a response to my comment.
From another response, it looks like you moved to Waterfox? You’re still using a fork of Firefox, though your comment made it sound like you moved entirely away from Mozilla (Waterfox still depends on Mozilla developing FF, from my understanding).
Anyway, seems like a good choice at least. Personally, I’m on Zen now. Hopefully we’ll get a real alternative at some point though.


Well that’s one way to counter spam mail.


My point was that you can also reduce the overestimation by properly moderating these platforms. If there are fewer (or no) posts containing harmful content, then people will naturally estimate that the percentage is lower. Plus, you have less harmful content.


Search sucks, so they’re using LLMs?
That really explains a lot about the current state of the world, to be honest.


Well you haven’t exactly specified what browser you’re moving to, so we’re left wondering if you’re using a Servo dev build, a terminal-based browser, or carrier pigeons.


these effects can be mitigated through a targeted educational intervention that corrects this misperception.
Let me make sure I understand this. The solution to “users who post harmful content” on moderated platforms is to educate others that those users are the minority?
Sure, education would be good here, but I’m sure someone out there can think of another solution to this problem.


Even if we assume they want to do discriminatory pricing (they probably do), they can do that without using LLMs. Use facial recognition and other traditional models to predict the person’s demographics and maybe even identify them. If you know who they are, do a lookup for all products they’ve expressed interest in elsewhere (this can be done with either something like a graph DB or via embeddings). Raise the price if they seem likely to purchase it based on the previous criteria. Never lower the price.
That’s a complicated process, but none of that needs an LLM, and they’d be doing a lot of this already if they’re going full big brother price discrimination.
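Just to illustrate the shape of that pipeline, here’s a purely hypothetical sketch; every helper in it is a stub standing in for a real system (a CV model, a graph DB or embedding lookup, a classic intent classifier), not anything any retailer actually ships:

```python
from dataclasses import dataclass

@dataclass
class Shopper:
    id: str

def identify_customer(frame) -> Shopper | None:
    return Shopper("demo-user")               # stand-in for facial recognition

def purchase_intent_score(shopper: Shopper, product: str) -> float:
    return 0.8                                # stand-in for interest lookup + classifier

def personalized_price(frame, product: str, base_price: float) -> float:
    shopper = identify_customer(frame)
    if shopper is None:
        return base_price                     # unknown shopper pays the list price
    intent = purchase_intent_score(shopper, product)
    markup = 1.0 + 0.5 * intent               # likelier buyers see a higher price
    return max(base_price, base_price * markup)  # never lower the price

print(personalized_price(None, "headphones", 100.0))  # -> 140.0
```

No LLM anywhere in that loop.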
Spouting bullshit? If so, I agree.
Codebases in the 100k+ to 1M+ SLOC range can be very difficult for an LLM (or a human) to answer detailed questions about. An LLM might be able to point you in the right direction, but they don’t have enough context size to fit the code, let alone the capability to actually analyze it. Summarize? Sure, but it can only summarize what it has in context.