• 0 Posts
  • 45 Comments
Joined 1 year ago
Cake day: July 2nd, 2024

  • I think this situation is not so black and white. Before we had the current gazillion streaming services, back when Netflix had almost all the content, most would-be pirates weren’t even thinking about piracy because the service was good enough. In the current situation, with atrocious monthly fees and content split across 10+ streaming services, there are probably quite a few people who legally stream what their subscriptions cover and pirate the rest.


  • True, it’s always a combination of resolution and bitrate, though I personally haven’t had the kind of artifacting you are describing. However, I also never stream movies etc. below 1080p, so I can’t judge how bad the 480p encoding on Disney+ is. In any case, provided the bitrate and encoding are sufficient, DVDs can never reach the visual fidelity of higher-resolution formats (see the rough pixel-count comparison at the end of this comment).

    “And how would you get stuff onto your home server legally?”

    Buy and rip Blu-rays; in some rare cases you can actually download DRM-free content; and depending on your jurisdiction, you may also be able to remove DRM protection legally.
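
    For a sense of scale, here’s a quick back-of-the-envelope sketch in Python. The bitrates (≈5 Mbps MPEG-2 for a DVD, ≈8 Mbps for a 1080p H.264 stream) are rough assumptions of my own, not measured values:

    ```python
    # Rough comparison of DVD vs. 1080p streaming.
    # Bitrates below are illustrative assumptions, not measurements.

    def bits_per_pixel(width: int, height: int, bitrate_bps: float, fps: float = 24.0) -> float:
        """Average encoded bits spent per pixel per frame."""
        return bitrate_bps / (width * height * fps)

    dvd_pixels = 720 * 480      # ~346k pixels per frame
    hd_pixels = 1920 * 1080     # ~2.07M pixels per frame, ~6x a DVD frame

    print(f"DVD frame:   {dvd_pixels:>9,d} px")
    print(f"1080p frame: {hd_pixels:>9,d} px ({hd_pixels / dvd_pixels:.1f}x)")

    # Assumed ballpark bitrates: ~5 Mbps DVD MPEG-2, ~8 Mbps 1080p H.264 stream.
    print(f"DVD bits/px/frame:   {bits_per_pixel(720, 480, 5e6):.2f}")
    print(f"1080p bits/px/frame: {bits_per_pixel(1920, 1080, 8e6):.2f}")
    ```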


  • Well, with your DVDs the “HD resolution” question is easily answered: you don’t get HD resolution. It’s a weird comparison, especially since you complain about Disney+ not going beyond 480p in your specific case - so why buy DVDs with the same shitty resolution?

    I’m all for media ownership, but I don’t see the point in buying optical discs (with a rather limited lifespan) at 720x480px resolution. Blu-rays at least offer HD / UHD, but the plastic / coating will still degrade over time.

    I think the way to go is a home server (could even be a Raspberry Pi) where you can somewhat secure your storage with appropriate redundancy (toy illustration below).
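
    To illustrate what redundancy buys you, here’s a toy Python sketch of RAID-5-style XOR parity. It’s purely illustrative - in practice you’d let something like mdadm, ZFS or btrfs handle this:

    ```python
    # Toy RAID-5-style XOR parity: with one parity block, any single lost
    # data block can be reconstructed from the remaining blocks.
    from functools import reduce

    def xor_blocks(blocks: list[bytes]) -> bytes:
        return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

    data = [b"disk one", b"disk two", b"disk 333"]   # equal-sized data blocks
    parity = xor_blocks(data)

    # Simulate losing the second block and rebuilding it from the rest + parity.
    recovered = xor_blocks([data[0], data[2], parity])
    assert recovered == data[1]
    print("recovered:", recovered)
    ```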






  • It’s not a network file system. It’s a regular file system for hard drives, SSDs and such, and it has been the default on Windows since Windows NT (that’s where the NT comes from - it doesn’t stand for “network” but for “new technology”).

    The implementation in Windows is closed source, meaning the file system had to be reverse-engineered to work under Linux at all. Support nowadays is okay-ish, but as soon as you don’t properly shut down your computer, or you also use the file system under Windows, you will run into weird problems.

    Also, it just straight up doesn’t work for most games running under Wine.




  • Funnily enough, this is also my field, though I am not at uni anymore since I now work in this area. I agree that current literature rightfully makes no claims of AGI.

    Calling transformer models (which are definitely not the only feasible type of LLM - Mamba, LLaDA, … exist!) “fancy autocomplete” is very disingenuous in my view. Also, the current AI boom includes way more than the flashy language models that the general population directly interacts with, as you surely know. And whether a model is able to “generalize” depends on whether you mean within its objective boundaries or outside of them, I would say.

    I agree that a training objective of predicting the next token in a sequence probably won’t be enough to achieve generalized intelligence. However, modelling language is the first and most important step on that path, since we humans use language to abstract and represent problems (see the sketch of that objective at the end of this comment).

    Looking at the current pace of development, I wouldn’t be so pessimistic, though I won’t make claims as to when we will reach AGI. While there may not be a complete theoretical framework for AGI, I believe it will be achieved the same way current systems were: built first and explained afterwards.
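
    To make the objective concrete, here’s a minimal PyTorch sketch of next-token prediction with teacher forcing. The tiny “model” is just a placeholder for a real transformer / Mamba, and all sizes are made up:

    ```python
    # Minimal sketch of the next-token prediction objective (teacher forcing):
    # the model sees tokens[:-1] and is trained to predict tokens[1:].
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab_size, d_model = 1000, 64

    # Placeholder for a real sequence model (transformer, Mamba, ...);
    # just embedding + projection here to keep the sketch short.
    model = nn.Sequential(
        nn.Embedding(vocab_size, d_model),
        nn.Linear(d_model, vocab_size),
    )

    tokens = torch.randint(0, vocab_size, (1, 16))   # one toy sequence of 16 token ids
    logits = model(tokens[:, :-1])                   # a distribution over the vocab at each position
    loss = F.cross_entropy(
        logits.reshape(-1, vocab_size),              # (seq_len, vocab)
        tokens[:, 1:].reshape(-1),                   # the actual "next" token at each position
    )
    loss.backward()                                  # gradients for one training step
    print(f"next-token loss: {loss.item():.3f}")
    ```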



  • The goalposts have shifted a lot in the past few years, but by both the broader and even the narrower definition, current language models are precisely what was meant by AI and generally fall into that category of computer program. They aren’t broad / general AI, but they are definitely narrow / weak AI systems.

    I get that it’s trendy to shit on LLMs, often for good reason, but that should not mean we just redefine terms because some system doesn’t fit our idealized under-informed definition of a technical term.


  • Ah yes Mr. Professor, mind telling us how you came to this conclusion?

    To me you come off like an early-1900s fearmonger, à la “There will never be a flying machine, humans aren’t meant to be in the sky and it’s physically impossible”.

    If you literally meant that there is no such thing yet, then sure, we haven’t reached AGI yet. But the rest of your sentence is very disingenuous toward the thousands of scientists and developers working on precisely these issues and also extremely ignorant of current developments.


  • No, at least not in the sense that “hallucination” is used in the context of LLMs. It is specifically used to differentiate between the two cases you jumbled together: outputting correct information (as represented in the training data) vs. outputting “made-up” information.

    A language model doesn’t “try” anything; it does what it is trained to do - predict the next token, yes, but that is not hallucination, that is the training objective.

    Also, though not widely used, there are other types of LLMs, e.g. diffusion-based ones, which actually do not use a next-token prediction objective but rather iteratively predict parts of the text in multiple places at once (LLaDA is one such example; toy sketch below). And, of course, these models also hallucinate a bunch if you let them.

    Redefining a term to suit some straw man AI boogeyman hate only makes it harder to properly discuss these issues.
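
    To give a rough picture of the diffusion-style approach, here’s a toy, purely illustrative Python sketch of iteratively unmasking several positions at once instead of generating strictly left to right. The predict function is a random stand-in for a real model like LLaDA:

    ```python
    # Toy sketch of diffusion-style (LLaDA-like) text generation: start from a
    # fully masked sequence and fill in several positions per step, instead of
    # emitting tokens strictly left to right.
    import random

    VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]
    MASK = "<mask>"
    SEQ_LEN, STEPS = 8, 4

    def predict(tokens: list[str], position: int) -> str:
        # Stand-in for the model's prediction at `position`, conditioned on the
        # whole partially unmasked sequence.
        return random.choice(VOCAB)

    sequence = [MASK] * SEQ_LEN
    for step in range(STEPS):
        masked = [i for i, t in enumerate(sequence) if t == MASK]
        # Unmask a chunk of the remaining positions in parallel each step.
        for i in random.sample(masked, k=len(masked) // (STEPS - step) or 1):
            sequence[i] = predict(sequence, i)
        print(f"step {step + 1}: {' '.join(sequence)}")
    ```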