• 0 Posts
  • 71 Comments
Joined 1 year ago
Cake day: August 18th, 2023




  • You know, when I read The Handmaid’s Tale back in high school, I didn’t think the ending made any sense. How do you have tourists just walking around taking pictures when there’s horrible human rights violations happening in plain sight?

    I think I get it now.

    Honestly, the accounts of the woman who visited bother me almost more than the men’s. Even as a tourist she wasn’t allowed to do certain things, but she could just leave whenever she wanted. I wonder how her friends among the locals felt about that.





  • I was a reddit Sync user and was super bummed when (large-scale) API access was shut off, so I jumped on the chance to use Sync for Lemmy. It defaulted to lemmy.world for signups, presumably for ease of use for migrating reddit users. Knowing that Sync already had a loyal audience willing to put in a little effort to migrate, it seems the dev opted to make everything as similar to the reddit UX as possible, including registration.

    Now that I’m more familiar with the fediverse, I’ve been considering migrating to a more specialized instance that matches my interests. Truthfully, though, it seems unlikely that much of anything would change if I did since I’m going to keep using the same app, so I’ve been slow to move.

    To compare this with my experience with Mastodon, I was absolutely overwhelmed by the idea of instances and really had no idea which to join, nor did I have a familiar app to work with. I figured it out eventually, but a lot of the artists I follow didn’t or didn’t have time to, so overall I haven’t spent much time on it. I’ve spent way too much time on Lemmy so far.




  • Part of the problem is that we have relatively little insight into or control over what the machine has actually “learned”. Once it has learned itself into a dead end with bad data, you can’t correct it, only work around it. Your only real shot at a better model is to start over.

    When the first models were created, we had a whole internet of “pure” training data made by humans, and developers could basically firehose all of that content into a model blindly. Additional tuning could be done by seeing which responses humans tended to accept or reject, and what language they used to refine their results. The latter still works, and better heuristics (the criteria that grade the quality of AI output) can be developed, but with how much AI content is out there now, they will never have a better training set than the one they started with. The whole of the internet now contains the result of every dead end AI has worked itself into, with no way to determine what is AI-generated at scale.



  • I definitely agree, but that’s true of any system. The particulars of the pitfalls may vary, but a good system can’t overpower bad management. We mitigate the stakeholder issue by having BAs who act as the liaison between devs and stakeholders, knowing just enough about the dev side to manage expectations while helping to prioritize the things stakeholders want most. Our stakeholders are also, mercifully, pretty aware that they don’t always know what will be complex and what will be trivial, so they accept the effort we assign to items.