

You really needed a better CS professor.
If you’re writing code that you don’t understand without treating it like a goddamned microbe in a petri dish, you are the slop coder.


If Americans were allowed books, they’d be really angry about this.
🎶 O say does that star-spangled banner yet wave 🎶 O’er the land of the free and the home of the brave?
I was contributing to Open Source almost certainly before you were born, so wind your neck in, kid. But even if we suppose you are right and the GPL somehow restricts the use of licensed works for AI training (although a quick re-read of the license shows no evidence that it does) - while GPL fanatics make the most noise, it is absolutely not the most popular open source license.
GPL variants cover roughly 20% of the licensed work on GitHub. The vast majority of such work is under permissive open-source licenses, with the MIT license alone covering almost 50%. That’s because the vast majority of open source contributors are not in fact wannabe Citizen Smiths, but rather people who want, without restriction, to contribute to the advancement of computing.
Have you ever read an open source license? They gave away those rights, voluntarily.
If you have evidence that AI is being trained on work licensed to the contrary, then by all means get exercised about those “workers’” rights (right on, man), but given the vast quantities of code released under permissive open source licenses, it’s not going to change anything.


Starlink coming back to Earth?
I don’t know if it was a decade ahead, but it was really nice. A well thought out UI, smooth and fluid even on modest hardware; the address-book integration with messaging and throughout the OS was excellent for its time; and some of the phones were actually pretty good (my Nokia had a great camera compared to most.)
Most important for me, at the time it had by far the best dual-SIM support - with active/active radios and proper management of things like a separate default SIM for messaging, calls, and data - at a time when most dual-SIM Android phones were pretty awful, and nearly a decade before Apple would offer dual SIM at all.
I was genuinely sorry to let my last Windows Phone go.
How is it “stealing” if it’s open source?
I have sympathy with artists, authors and the like whose work was taken to train AIs. But software engineering - nah. We gave the training data away for free, neatly catalogued in bitesize chunks with explanations for every change - we don’t get to call “take backsies” now.


I2C/SPI - and indeed most hardware interfaces - are of course trivial to anyone skilled in the art. Digging through badly written vendor documentation though, then comparing it with the reference implementation that was buried on a website behind a sign that says “beware of the leopard” and which directly contradicts the documentation on various key points, is a non-trivial and ultimately unproductive use of time - and AI tools can be pretty good at that shit.
Generative AIs are a useful tool. Most of the criticisms of AI vendors are also valid (apart from the water one, which is just bullshit), but that doesn’t stop them being a useful tool - and engineers who learn to use them as a tool will be more productive and more employable than those who stick their fingers in their ears and insist on only producing artisanal code hand-whittled with their grandfather’s tools.
That seems unlikely; trust me, there are services running behind Cloudflare tunnels that are doing more requests per second than whatever you’re hosting does in a year.
The only times I’ve had performance problems with Cloudflare tunnels it’s been intermediate network kit that didn’t like IPv6 or didn’t like QUIC (or both). You can try disabling both in cloudflared to diagnose (at least, you used to be able to disable them/switch to HTTP/2+IPv4, it’s been a very long time since I’ve needed to so I’m just assuming it’s still an option.)
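For what it’s worth, this is roughly what that diagnosis looks like, assuming current cloudflared still accepts these flags (`my-tunnel` is a placeholder; check `cloudflared tunnel run --help` for your version):

```shell
# Rule out QUIC: force the tunnel onto HTTP/2 over TCP.
cloudflared tunnel run --protocol http2 my-tunnel

# Rule out IPv6: force IPv4 connections to the Cloudflare edge.
cloudflared tunnel run --edge-ip-version 4 my-tunnel

# Or set both persistently in the tunnel's config.yml:
#   protocol: http2
#   edge-ip-version: 4
```

If the problem disappears with one of these, you’ve found which protocol your intermediate kit is mangling.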


Thanks! (And I should say, it was a genuine question :-)). I’ll take a look at them; I don’t listen to any Spotify exclusives (yay!), but there are one or two podcasts whose supporter schemes I subscribe to, and at the moment I do that through Spotify. I guess that’s my next task: work out if there’s a way to subscribe directly and use something like AntennaPod.
New workers’-day-weekend project dropped…


Hmm. Sadly not available in my country yet (and while I’m sure I could pull some shenanigans to subscribe with a foreign address, I’m going to assume that also means none of our local music labels will be covered, so that’s a non-starter.)
I’ll keep an eye on it. But in the meantime, I also gather it doesn’t do podcasts - anyone have any recommendations on where I could migrate my podcasts from Spotify? Non-US companies only (which, for the avoidance of doubt, absolutely definitely excludes Apple).


Where are you getting that from? That’s not the case at all.


The New Yorker is still good for long-form journalism, if you’ve never given it a try it’s worth a look.


This isn’t an AI story, it’s a “completely fucking idiotic sysadmins exist” story.
Treat an AI like an idiot intern you just hired without checking any references. Gave the idiot intern permission to delete your production database? That’s entirely on you, zero sympathy. (Actually, give any developer that power and you get what you deserve.)


Honestly, China is amazing, and well worth visiting (not just Shanghai though, which is far too westernized.) Fantastic people, too.
You wouldn’t get me in the US again at gunpoint, though.


That’s brilliant, thank you!
They should cover JPEG compression at some point; when I first learned how that worked (at uni nearly 40 years ago… God) I was blown away by how beautiful that algorithm is.
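For anyone curious, the heart of it is an 8x8 DCT followed by quantisation; here’s a from-scratch sketch (a naive, purely illustrative implementation - real codecs use fast factored DCTs) showing the energy compaction that makes the algorithm so elegant:

```python
import math

def dct2_8x8(block):
    """Naive 2-D DCT-II of an 8x8 block (the transform at the heart of JPEG)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(0.5) if u == 0 else 1.0
            cv = math.sqrt(0.5) if v == 0 else 1.0
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = 0.25 * cu * cv * s
    return out

# A smooth gradient block, with pixel values shifted to centre on zero
# as JPEG does before the transform.
block = [[(x + y) * 8 - 128 for y in range(8)] for x in range(8)]
coeffs = dct2_8x8(block)

# Energy compaction: nearly all the signal lands in a few low-frequency
# coefficients, and a crude uniform quantiser zeroes out the rest -
# those runs of zeros are what the entropy coder then squeezes down.
quantised = [[round(c / 16) for c in row] for row in coeffs]
zeros = sum(row.count(0) for row in quantised)
print(f"{zeros}/64 coefficients quantise to zero")
```

Swap the single divisor 16 for the standard luminance quantisation table and you’re most of the way to the real thing.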


New Tier 1 actually; Shanghai is very much China for amateurs, of course - but that’s irrelevant; you might not realise this but the vast majority of Chinese also don’t live in Shanghai.


I mean, I’m not particularly bothered about convincing anyone else, but personally I am absolutely 100% sure that no technology that is cognisant of nothing but tokens of language (entirely arbitrary human language at that, far from any fundamental ground truth in itself), and that is entirely incapable of discerning any actual meaning from that language other than which tokens are likely to follow which, is ever, under any circumstances, going to lead to AGI.
Yann LeCun is probably heading down a more realistic path to AGI with his world models - but for as long as my cat has a few orders of magnitude more synapses than Anthropic’s most world-beating model has parameters, I’m not going to get too stressed about that either.


The thing is, all this can be true (and I don’t really understand why you’re being downvoted,) but it’s also true that LLMs are no more evidence that we are close to AGI than Eliza was.
AGI is inevitable, but it won’t come from an LLM, and all the hype in that direction from Anthropic, OpenAI et al is just so much bullshit.
The problem is, we don’t need AGI to experience the catastrophic consequences; as bad or worse will be idiotic human intelligences putting very-much-not-AGI in charge of things it has no right to be in charge of, because they drank their own Kool-Aid (or rather, the investors did.) That, unfortunately, is the future we are speedrunning - Skynet never needed AGI, it just needs fucking idiots to put an LLM in charge of a weapons system.
(As for AGI, my gut feeling is that it will come from the intersection of neural networks and quantum computing at scale - I’ll be filling my bunker with canned goods when the latter appears to be close on the horizon…)
One of the problems the anti-AI crowd have with protesting this use case is that they don’t seem to appreciate that the enshittification happened long before AI.
Actual Software Engineers make up about 5% of the profession, the other 95% have been turning out slop that they don’t even understand themselves for years. In that environment, an AI that does the same but at least doesn’t complain when asked to do rework doesn’t seem so bad.