

Got 64GB DDR5 earlier this year. I went to check the price of the kit recently and it’s up to $800. Shit’s insane.


I’m not the one recommending it lol.
If I had to guess, it’s to improve page performance by prerendering as much as possible. I find that overkill, though, and prefer to just prerender as much of the page as I can at build time and do CSR for the rest. That doesn’t work if you have dynamic routes or some kind of server-side logic, but it’s good for blogs and such.
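For what it’s worth, a minimal sketch of that split, assuming a plain React App component and a bundler that turns client.tsx into /client.js (the file names are made up; renderToString and hydrateRoot are the actual React APIs):

```tsx
// build.tsx: runs once at build time, never per request
import { renderToString } from "react-dom/server";
import { writeFileSync } from "node:fs";
import App from "./App";

const html = `<!doctype html>
<html>
  <body>
    <div id="root">${renderToString(<App />)}</div>
    <script type="module" src="/client.js"></script>
  </body>
</html>`;

writeFileSync("dist/index.html", html);
```

```tsx
// client.tsx: hydrates the prerendered markup; everything after this is plain CSR
import { hydrateRoot } from "react-dom/client";
import App from "./App";

hydrateRoot(document.getElementById("root")!, <App />);
```

Since the HTML is produced once at build time, there’s no request-time server code to exploit. It also shows why dynamic routes don’t fit: you’d have to enumerate every page up front.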


I think their point was that CSR-only sites would be unaffected, which should be true. Exploiting it on a static site, for example, couldn’t be RCE because the untrusted code is only being executed on the client side (and therefore is not remote).
Now, most people use, or at least are recommended to use, SSR/RSC these days. Many frameworks enable SSR by default. But using raw React with no Next.js, react-router, etc. to create a client-side-only site likely does protect you from this vulnerability.


I think it also doesn’t help that only 4XX (client error) and 5XX (server error) are defined as error status codes, and 4XX errors don’t even necessarily indicate that anything actually went wrong (need to reauth, need to wait a bit, post no longer exists, etc.).
Trying to think of what 6XX could stand for. We already have “Service Unavailable” and “Bad Gateway”/“Gateway Timeout”, so I guess 6XX would be “incompetence errors”: 600 is “Bad Implementation”, 601 is “Service Hosted On Azure”, 602 is “Inference Failure” (for AI stuff), and I guess 666 is “Cloudflare Outage”.


No, and your hostility is out of place. Do you have something against GN?


This is the actual answer with respect to Cloudflare. Their config system was fucked in November. It’s still fucked in December. React’s massive CVE just forced them to use it again.
More generally, the issue is a matter of companies forcefully accelerating feature development at the cost of stability, likely due to AI. That’s how it is at the company I’m at, anyway.


That’s why your what wasn’t working?


He also publishes written articles, but not for all videos, and not usually at the same time as the video.
Edit: also, there’s more to the video than reading from a script. You’d know this if you, uh, watched the video.


The year before that wasn’t without controversy either. Not surprised to see NL on the list of countries noping out this year.


30 is assuming you write code for all 30 days. In practice, it’s closer to 20, so 75 tests per day. It’s doable on some days for sure (if we include parameterized tests), but I don’t strictly write code every day either.
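(For anyone unfamiliar: a parameterized test is one table-driven block that runs many cases. A quick sketch using Vitest’s test.each, which Jest also has; slugify is a made-up example, not from the article:)

```ts
import { test, expect } from "vitest";
import { slugify } from "./slugify"; // hypothetical function under test

// One block, three cases: this is how numbers like "75 tests per day" get plausible.
test.each([
  ["Hello World", "hello-world"],
  ["  trim me  ", "trim-me"],
  ["already-sluggy", "already-sluggy"],
])("slugify(%s) -> %s", (input, expected) => {
  expect(slugify(input)).toBe(expected);
});
```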
Still, I agree with them that you generally want to write a lot of tests, but volume is less important than quality and thoroughness. Treating volume alone as a meaningful metric, like the author does, is nonsense.


If it were a systemic issue and they had massive control over my life, I would wish them only the worst. Speaking from experience, of course.
After moving out, once they were out of my life for the most part, that dulled into indifference.


I’d be glad if my dad died.
People seem to struggle to understand this, from my experience. I never personally felt this way about my dad, but I fully understand why my mom does.


This is a classic strategy to get games or microtransactions for cheaper.


Quoting Kohler:
We encrypt data end-to-end in transit, as it travels between users’ devices and our systems, where it is decrypted and processed to provide and improve our service.
I guess Kohler recently learned about TLS? IBM’s response, which is a bit random in my opinion, addresses the idiocy of the E2EE claim lol.
I’d hope they encrypt data in transit? Not doing so would be an incredible, though unsurprising, show of incompetence. Setting up TLS and getting certs is easy these days with Let’s Encrypt, and a company like Kohler could even get certs through AWS or Azure or something if they wanted.
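To show how low the bar is, this is roughly all it takes in Node once certbot has fetched a cert. The paths are certbot’s defaults and example.com is a placeholder; consider it a sketch, not a production config:

```ts
import { createServer } from "node:https";
import { readFileSync } from "node:fs";

// Certbot drops these under /etc/letsencrypt/live/<domain>/ by default
const server = createServer(
  {
    key: readFileSync("/etc/letsencrypt/live/example.com/privkey.pem"),
    cert: readFileSync("/etc/letsencrypt/live/example.com/fullchain.pem"),
  },
  (req, res) => {
    res.writeHead(200, { "content-type": "text/plain" });
    res.end("encrypted in transit\n"); // which is all Kohler is actually claiming
  }
);

server.listen(443);
```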
I can’t imagine why I’d ever spend money on a camera for my toilet, especially if it includes a subscription fee. That’s a new level of stupid.


TL;DR: React broke the internet.
Well, that, but also Cloudflare went down because they were trying to fix React’s shit.
This is more likely the actual incident report:
A change made to how Cloudflare’s Web Application Firewall parses requests caused Cloudflare’s network to be unavailable for several minutes this morning. This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today.
Edit: If you like reading


As someone who stopped all contact with my dad at one point (while still a child, but continuing as an adult), I can say that there were a few specific memorable issues, but that they were by no means isolated.
The impression I get from reading is that it’s an anecdote indicative of a larger, more regular pattern of incidents.


1500 tests is a lot, but that doesn’t mean anything if the tests aren’t testing the right thing.
My experience was that it generates tests for the sake of generating them. Some are good. Many are useless. Without a good understanding of what it’s generating, you have no way of knowing which are good and which are useless.
It ended up being faster for me to just learn the testing libraries and write my own tests. That way I was sure every test served a purpose and tested the right thing.
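To make “testing the right thing” concrete, here’s a sketch of the difference, with a hypothetical applyDiscount (not from any real codebase):

```ts
import { test, expect } from "vitest";
import { applyDiscount } from "./pricing"; // hypothetical function under test

// Useful: pins a business rule; fails if clamping or rounding ever regresses.
test("discounts clamp at 100% and round to cents", () => {
  expect(applyDiscount(19.99, 1.5)).toBe(0);
  expect(applyDiscount(10.0, 0.333)).toBe(6.67);
});

// Useless but green: the kind of filler that inflates counts like "1500 tests".
test("applyDiscount is a function", () => {
  expect(typeof applyDiscount).toBe("function");
});
```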


I’m interested to see whether these tools can be used to tackle tech debt, since the argument for not addressing it is often a lack of time, or whether they’d just contribute to it, even with thorough instructions and guardrails.
From my experience working with people who use them heavily, they introduce new ways of accumulating tech debt. Those projects usually end up having essays of feature spec docs, prompts, state files (all in prose of course), etc. Those files are anywhere from hundreds to thousands of lines long, and there’s a lot of them. There’s no way anybody is spending hours reading through enough markdown to fill twenty encyclopedia-sized books just to make sure it’s all up-to-date. At least, I can promise that I won’t be doing it, nor will anyone I know (including those using AI this way).


Any website using CSR only can’t have an RCE, because the code runs on the client. Anything using RSC, where code runs on both the server and the client, may be vulnerable.
From what I’ve seen, the exploit is a special request from a client that functionally lets you exec anything you want (via the Function constructor). If your server is unpatched and recognizes the request, it may be (likely is) vulnerable.
I’m sure we’ll get more details over time, along with tools to manually check if a site is compromised.
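If it’s unclear why a reachable Function constructor is game over, here’s the primitive in isolation. To be clear, this is a toy sketch of the mechanism, not the actual RSC exploit payload:

```ts
// Anything an attacker can feed into new Function() becomes runnable code.
const attackerControlled = "return process.env"; // imagine this arrived in a request

const fn = new Function(attackerControlled);
console.log(fn()); // on a server, this dumps secrets; shelling out is only a
                   // require("child_process") away

// Run the same thing in a browser and it only executes in the visitor's own
// sandboxed tab, which is why CSR-only sites aren't a *remote* code execution target.
```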