It’s all made from our data, anyway, so it should be ours to use as we want

  • A1kmmA

    > They are not “compressing data.” Your analogy to making a video recording is not applicable. These AIs learn patterns from the training data. Themes, styles, vocabulary, and so forth. That stuff is not copyrightable.

    A lossy compression algorithm for video is all about finding parameters 𝐖 for a function f that maps a (time, row, col) input vector 𝐱 to a predicted (R, G, B) colour vector 𝐲̂ = f(𝐱; 𝐖).

    Encoding means you have some training data - a matrix of pixel colours at different points in time, 𝐘, and a corresponding matrix giving the time, row and column for each row in 𝐘, called 𝐗. The algorithm finds 𝐖 to minimise some loss function between 𝐘̂ = f(𝐗; 𝐖) and 𝐘. A serialised form of 𝐖 makes up the compressed video stream.

    Decoding then is just an inference problem - given 𝐖, find 𝐘̂ = f(𝐗; 𝐖) for each 𝐗 (time, row, column) that you care about. The predicted colours are then displayed at the appropriate points on the screen.
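    To make this concrete, here is a minimal sketch of encoding-as-fitting in Python. The toy “video” and the choice of f (a linear model over random Fourier features) are illustrative stand-ins for whatever basis a real codec uses:

    ```python
    # Toy "lossy video codec" as curve fitting: learn W so that f(x; W)
    # predicts a pixel value from a (time, row, col) coordinate.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy greyscale video: 8 frames of 16x16 pixels, a moving gradient.
    T, H, C = 8, 16, 16
    t, r, c = np.meshgrid(np.arange(T), np.arange(H), np.arange(C), indexing="ij")
    Y = np.sin(0.3 * t + 0.2 * r + 0.1 * c).reshape(-1, 1)         # pixel values
    X = np.stack([t, r, c], axis=-1).reshape(-1, 3).astype(float)  # coordinates

    # Fixed random Fourier features; f(X; W) = phi(X) @ W is linear in W.
    D = 256
    B = rng.normal(scale=0.4, size=(3, D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    def phi(X):
        return np.cos(X @ B + b)

    # "Encoding": find W minimising the squared loss between f(X; W) and Y.
    # A serialised W would be the compressed stream.
    W, *_ = np.linalg.lstsq(phi(X), Y, rcond=None)

    # "Decoding": inference - predict the pixel value at any coordinate.
    Y_hat = phi(X) @ W
    print("reconstruction RMSE:", np.sqrt(np.mean((Y_hat - Y) ** 2)))
    ```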

    This scheme tends to work well for interpolating - you can evaluate the pixel colour at any row or column within the limits 𝐖 was trained on, even at subpixel locations that weren’t in the original data, and at times between the original frames. Extrapolating beyond those ranges is unlikely to work well. Given the exact input vectors it was trained on, it will produce outputs that are slightly different, but close enough that the video as a whole is perceptually similar. The fact that interpolation works tells us the encoding is learning patterns from the training data, so it can produce sensible colours at points it never saw - it’s not just recording the raw data.
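    Continuing the toy sketch above, the fitted 𝐖 can be queried at coordinates that never appeared in the training grid - between frames and at subpixel positions:

    ```python
    # (3.5, 7.25, 7.75) is halfway between frames 3 and 4, at a subpixel
    # location - no such row existed in the training data X.
    X_between = np.array([[3.5, 7.25, 7.75]])
    print("interpolated value:", (phi(X_between) @ W).item())
    ```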

    Now, the interesting thing is that an LLM is effectively the same thing, with a couple of differences:

    1. Instead of the domain of f being a (time, row, col) 3D space, the input is a point in a high-dimensional latent space.
    2. Instead of being trained over a single work, it’s trained over lots of different works, and when there are things in common between those works, sharing parameters across them makes the compression more efficient.
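    Very schematically, the LLM version of the same recipe looks like this. This is a conceptual sketch only - `embed`, `W_lm`, and the mean-pooled context are placeholder stand-ins, not a real architecture:

    ```python
    # Same recipe, different domain: the input is now a point in a learned
    # latent space (a context embedding) and the output is a distribution
    # over the next token. Purely illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    vocab, d = 100, 32
    embed = rng.normal(size=(vocab, d))   # token id -> latent vector
    W_lm = rng.normal(size=(d, vocab))    # "compressed corpus": learned weights

    def f(context_tokens, W):
        # Map the context to a point x in latent space (here: a crude mean
        # of token embeddings), then predict next-token probabilities.
        # Training would fit W (and embed) over many works at once.
        x = embed[context_tokens].mean(axis=0)
        logits = x @ W
        stable = np.exp(logits - logits.max())  # numerically stable softmax
        return stable / stable.sum()

    probs = f(np.array([5, 17, 42]), W_lm)
    print("next-token distribution sums to", probs.sum())
    ```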

    Just like how the lossily encoded video can’t reproduce the exact pixel colour at every point, a trained LLM usually can’t repeat word-for-word a piece of input data. But for many works that are included and mentioned a lot in the training data, there absolutely are points in the latent space where the parameters allow inference to reproduce the high-level characters and plot of the work, and to do it in a way that could serve as a substitute for the original work.
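    The toy codec above shows the same effect: give f enough capacity relative to the data, and the “compressed” weights reproduce the training points almost exactly. (This reuses X, Y, rng and numpy from the first sketch; the feature count D is a stand-in for model scale.)

    ```python
    # More capacity -> closer to verbatim reproduction of the training data.
    for D_try in (32, 256, 2048):
        B_try = rng.normal(scale=0.4, size=(3, D_try))
        b_try = rng.uniform(0, 2 * np.pi, size=D_try)
        Phi = np.cos(X @ B_try + b_try)
        W_try, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
        print(f"D={D_try}: training RMSE",
              np.sqrt(np.mean((Phi @ W_try - Y) ** 2)))
    ```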

    Now this does expose gaps in copyright law (e.g. why should LLM weights be subject to copyright when our brains do a similar thing, and can also reproduce the plot and themes of works?) - applying copyright law today means extrapolating outside the range of anything legislators imagined when it was written. And in many countries, the law is applied differently to the rich and powerful. But I think if a status quo interpretation of existing law and precedent were applied, it is very likely the outcome would be that LLM model weights are often derivative works.

    Disclaimer: IANAL.