At least with the more advanced LLMs (and I'd assume the same goes for things like image processing and generation), it takes a pretty considerable amount of GPU power just to get the model running at all, and then even more to actually produce output. Some people have enough hardware to run the basics, but most laptops simply can't. And very few people have the resources to get the kind of output that the more advanced AIs produce.
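To give a rough sense of scale, here's a back-of-the-envelope sketch of how much GPU memory just holding an LLM's weights takes. The overhead factor and byte counts are loose assumptions, not exact figures, but the orders of magnitude hold up:

```python
# Rough VRAM estimate for running an LLM locally.
# Assumption: weights dominate memory; the 1.2x overhead factor
# (KV cache, activations, framework buffers) is a ballpark guess.

def vram_gb(params_billion: float, bytes_per_param: float = 2.0,
            overhead: float = 1.2) -> float:
    """Approximate GPU memory (GB) for model weights plus runtime overhead."""
    return params_billion * bytes_per_param * overhead

# A 7B-parameter model at fp16 (2 bytes/param): ~16.8 GB
print(f"7B fp16:  ~{vram_gb(7):.1f} GB")
# 4-bit quantization (0.5 bytes/param) brings that down to ~4.2 GB
print(f"7B 4-bit: ~{vram_gb(7, bytes_per_param=0.5):.1f} GB")
# A 70B model at fp16 is far beyond any consumer laptop
print(f"70B fp16: ~{vram_gb(70):.1f} GB")
```

So even a "small" 7B model at full precision needs more VRAM than almost any laptop GPU has, and aggressive quantization is what makes local inference possible at all for most people.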
Now, that’s not to say it shouldn’t be an option, or that anyone should force a remote AI baked into a proprietary OS that you can’t remove without breaking the user license agreement. I’m just saying it’s unfortunately harder to implement locally than we both probably wish it was.
Definitely. It helps that I tend to stick to the less mainstream Lemmy communities (mostly the queer communities on blahaj.zone). It’s gotten to the point where it’s rare for me to enter a popular comment section without seeing at least one or two recognizable usernames.
To be honest, I kind of enjoy the smallness of the platform at times. It reminds me of how the old internet has always been described to me (but with faster data transfer and more features).