• FaceDeer@fedia.io · 3 days ago

      If someone did an Aaron Swartz-style scrape, then published the data they scraped as a downloadable archive so that AI trainers could download and use it, would you find that objectionable?

        • FaceDeer@fedia.io · 3 days ago

          That suggestion is exactly the same as what I started with when I said “IMO the ideal solution would be the one Wikimedia uses, which is to make the information available in an easily-downloadable archive file.” It just cuts out the Aaron Swartz-style external middleman, so it’s easier and more efficient to create the downloadable data.

            • FaceDeer@fedia.io · 3 days ago

              > I don’t understand why the burden is on the victims here.

              They put the website up. Load balancing, rate limiting, and such go with the turf. It’s their responsibility to make the site easy to use and hard to break. Putting up an archive of the content that the scrapers want is an easy, straightforward way to accomplish that (a rough sketch follows after this thread).

              I think what’s really going on here is that your concern isn’t about ensuring that the site is up, and it’s certainly not about ensuring that the data it’s providing is readily available. It’s that there are these specific companies you don’t like and you just want to forbid them from accessing otherwise freely accessible data.

                • FaceDeer@fedia.io · 3 days ago

                  > That is absolutely ridiculous. The pressure AI scraping puts on sites vastly outstrips anything people built for, as evidenced by the fact that the systems are going down.

                  Yes. Which is why I’m suggesting providing an approach that doesn’t require scraping the site.
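The “publish an archive” approach argued for above is small enough to sketch. A minimal example in Python, assuming a hypothetical content directory and output path (not tied to any particular site or to Wikimedia’s actual dump tooling): pack the content tree into one compressed tarball and expose it as a static file, so bulk consumers make a single request instead of crawling every page.

```python
# Minimal sketch of the "publish a dump" idea from the thread above.
# CONTENT_DIR and DUMP_PATH are hypothetical placeholders, not real paths
# from any particular site.
import tarfile
from pathlib import Path

CONTENT_DIR = Path("site-content")       # hypothetical: the site's page/content tree
DUMP_PATH = Path("public/dump.tar.gz")   # hypothetical: exposed as a plain static file

def build_dump() -> None:
    """Pack the whole content tree into one compressed archive so bulk
    consumers can fetch it in a single request instead of crawling."""
    DUMP_PATH.parent.mkdir(parents=True, exist_ok=True)
    with tarfile.open(DUMP_PATH, "w:gz") as tar:
        tar.add(CONTENT_DIR, arcname="content")

if __name__ == "__main__":
    build_dump()
```

Regenerating the archive on a schedule (for example, a nightly cron job) keeps the dump reasonably fresh without the per-request load that scraping imposes.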