• nexussapphire@lemm.ee
    6 months ago

    It’s so fun to play with offline AI. It doesn’t have the creepy underpinnings of knowing that art, journalism, and musings from social media were blatantly stolen from the internet and sold as a service for profit.

    Edit: I hate theft, and if you think theft is OK for training LLMs, go ahead and dislike this comment. I don’t feel bad about what I said; local offline AI is just better because it doesn’t work on the premise of backroom deals and blatant theft. I will never use an AI like DALL·E when there is a talented artist trying to put food on the table with a skill they honed for years. If you condone stealing, you are a cheap, heartless coward.

    • Teanut@lemmy.world
      6 months ago

      I hate to break it to you, but if you’re running an LLM based on (for example) Llama, the training data (corpus) that went into it still consisted of large parts of the internet.

      Running the prompts locally doesn’t change the fact that the model was still trained on data that could be considered protected under copyright law.

      It’s going to be interesting to see how the law shakes out on this one. An artist going to an art museum and doing studies of the works there for educational purposes (let’s say it’s a contemporary art museum, so the works wouldn’t be in the public domain) is likely fair use, and arguably encouraged as a way for artists to develop their talents. Musicians practicing (or even performing) other artists’ songs is expected during their development. Consider a high school band in a garage, playing someone else’s song to improve their skills.

      I know the big difference is that it’s people training versus a machine/LLM training, but that seems to come down less to a copyright issue (which it is in the immediate sense) than to the question: “Should an algorithm be entitled to the same protections as a person? If not, what happens if real AI (not just an LLM) is developed? Should those entities be entitled to personhood?”