I was looking into the new, probably AI, data center being built in town and noticed it’s being built by a private-equity-backed firm. The data center’s water request was rejected by the city, so it has to operate on a standard corporate building water supply. They say they’re switching to air cooling only and reducing the compute capacity to keep power usage the same. This has caused Amazon, the alleged operator, to back out. So they’re building a giant, reduced-capacity data center with no operator, and apparently still think that’s a good idea. My understanding of the private equity bubble is that the firms can hide “underperforming” assets because it’s all private; from what I read, possibly $3.2 trillion of it. I feel like this new data center is going on the “underperforming” pile.

  • BlameThePeacock@lemmy.ca · 1 day ago

    It’s important to note that in some previous bubbles, the leftovers of the crash ended up spurring beneficial new growth afterward.

    GPU-like computing power available at scale for essentially free after the AI crash could be used in all sorts of ways.

    Maybe it makes rendering movies with special effects super cheap, and available even to tiny indie studios. Maybe scientists grab it for running physics simulations or disease-treatment computations.

    • gravitas_deficiency@sh.itjust.works · 20 hours ago

      The problem is that the depreciation/obsolescence/lifetime cycles of GPUs are WAY more rapid than anyone in the “AI” circlejerk bubble is willing to admit. Aside from the generational upgrades that you tend to see in GPUs, which make older models far less valuable as an investment, server hardware simply cannot function at peak load indefinitely - and running GPUs at peak load constantly MASSIVELY shortens the MTBF.

      TL;DR: the way GPUs are used in ML applications means that they tend to cook themselves WAY quicker than the GPU in your gaming machine or console - as in, they often have a couple of years of lifetime, max, and the failure rate follows a bell curve.
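      A rough sketch of the heat argument: reliability engineers often model temperature-driven wear-out with an Arrhenius acceleration factor. The activation energy and temperatures below are illustrative guesses, not measured GPU data:

```python
import math

# Arrhenius acceleration factor: how much faster a part wears out at
# junction temperature t_hot vs t_cool. All numbers here are
# illustrative assumptions, not measured GPU figures.
K_BOLTZMANN = 8.617e-5  # Boltzmann constant, eV/K
E_ACTIVATION = 0.7      # eV; a commonly assumed value for silicon wear-out

def acceleration_factor(t_cool_c, t_hot_c):
    t_cool = t_cool_c + 273.0  # Celsius -> Kelvin
    t_hot = t_hot_c + 273.0
    return math.exp((E_ACTIVATION / K_BOLTZMANN) * (1 / t_cool - 1 / t_hot))

# A part idling around 45 C vs one pinned at 85 C around the clock
print(f"~{acceleration_factor(45, 85):.0f}x faster wear")  # ~17x
```

      Even with hand-waved inputs, holding silicon 40 C hotter around the clock plausibly costs an order of magnitude of lifetime, which is the intuition behind the “cooking themselves” claim.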

      • BlameThePeacock@lemmy.ca · 18 hours ago

        You’re pulling shit out of your ass at this point. There are some doom reports from people suggesting that may be a problem, but there are also reports from other companies (Meta, for example) with documentation saying the rate is much lower and the mean time to failure is 6+ years.

        The other leftovers from the crash also won’t have that problem. It’s not just about GPUs: datacenters and their infrastructure last a lot longer, and the electric generation/transmission networks will also potentially be useful for various alternative applications if the AI use case flops.

        • gravitas_deficiency@sh.itjust.works · 16 hours ago

          MTBF is absolutely not six years if you’re running your H100 nodes at peak load and heat-soaking the shit out of them. ML workloads are especially hard on GPU RAM, and sustained heat load on that particular component type is known to degrade performance and integrity.

          As to Meta’s (or MS’s, or OpenAI’s, or what have you) doc on MTBF: I don’t really trust them on that, because they’re a big player in the “AI” bubble, so of course they’d want to give the impression that the hardware in their data centers still has a bunch of useful life left. That’s a direct impact on their balance sheet. If they can misrepresent extremely expensive components that they have a shitload of as still being worth a lot, instead of essentially being salvage/parts only, I would absolutely expect them to do that. Especially in the regulatory environment in which we now exist.

    • fullsquare@awful.systems · 1 day ago

      gpus as used for genai aren’t really suitable for normal loads like aerodynamic simulations. genai uses low-precision data like fp8 and fp4, and blackwells and such are optimized for it so hard that you can’t really do anything else on them

      • BlameThePeacock@lemmy.ca · 18 hours ago

        They’re still somewhat functional for those workloads, and they can even use those low-precision units to emulate high precision using libraries like cuBLAS, though obviously not as fast as hardware that does it natively.

        It’s not like they can’t do it at all. It’s just that Hopper was better at FP64 than Blackwell is, but if Blackwell chips become effectively free due to an AI crash, you could likely still use them in that capacity.
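        For illustration, the split-and-accumulate trick behind that kind of emulation can be sketched in NumPy: split each float32 input into high and low float16 halves, form the cross products in low precision, and accumulate in float32 (tensor cores do the fp16-multiply/fp32-accumulate step in hardware). This is a toy sketch, not the actual cuBLAS implementation:

```python
import numpy as np

def split_fp16(x):
    # Split float32 values into a high + low float16 pair,
    # so hi + lo recovers most of the float32 mantissa.
    hi = x.astype(np.float16)
    lo = (x - hi.astype(np.float32)).astype(np.float16)
    return hi, lo

def emulated_matmul(a, b):
    # Mimic tensor-core behaviour: fp16 inputs, fp32 accumulation.
    # Three cross products recover most of the lost precision;
    # the tiny lo*lo term is dropped.
    a_hi, a_lo = split_fp16(a)
    b_hi, b_lo = split_fp16(b)
    up = lambda m: m.astype(np.float32)
    return up(a_hi) @ up(b_hi) + up(a_hi) @ up(b_lo) + up(a_lo) @ up(b_hi)

rng = np.random.default_rng(0)
a = rng.random((64, 64), dtype=np.float32)
b = rng.random((64, 64), dtype=np.float32)

exact = a.astype(np.float64) @ b.astype(np.float64)
naive = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float64)
emul = emulated_matmul(a, b).astype(np.float64)

print(np.abs(naive - exact).max())  # plain fp16 matmul: visible error
print(np.abs(emul - exact).max())   # emulated: much closer to fp32 accuracy
```

        The real libraries are far more careful (more splits, blocking, exact products of the fp16 halves), but the principle is the same.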

        • fullsquare@awful.systems · 15 hours ago

          it’s hacky and wrong. or you could use different hardware, maybe from the competition, because the result isn’t worth the electricity it used

    • Munkisquisher@lemmy.nz · 1 day ago

      Cloud compute was attractive for 3D rendering for a while, since you could put your non-urgent renders on the cloud at the lowest priority and take advantage of off-peak pricing. Now model-training demand has wiped out off-peak pricing and pushed cloud rendering costs way above rendering locally.

    • Feyd@programming.dev · 1 day ago

      Running those power hungry gpu based data centers is going to cost beaucoup bucks regardless of the usage.

      • BlameThePeacock@lemmy.ca · 1 day ago

        The power costs are nothing compared to the hardware costs for those things.

        Getting the massive power feed required is difficult for datacenters, but the electricity cost per unit of compute is actually quite low.
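        A back-of-the-envelope check with illustrative numbers (roughly H100-class price and power draw, and a guessed industrial electricity rate; none of these figures come from the thread):

```python
# All inputs are illustrative assumptions, not quoted figures.
hardware_cost = 30_000   # USD: rough H100-class purchase price
power_kw = 0.7           # kW: sustained draw per accelerator
rate_per_kwh = 0.10      # USD: assumed industrial electricity rate
years = 5

electricity = power_kw * 24 * 365 * years * rate_per_kwh
print(f"5-year electricity: ${electricity:,.0f}")  # -> $3,066
print(f"hardware up front:  ${hardware_cost:,}")   # -> $30,000
```

        Even at these rough numbers, five years of electricity is about a tenth of the hardware outlay; cooling and distribution overhead (PUE) scales that up somewhat but doesn’t change the order of magnitude.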

    • litchralee@sh.itjust.works · 1 day ago

      Absolutely, yes. I didn’t want to drag my comment out further, but one odd benefit of the Dot Com bubble collapsing was all of the dark fibre optic cable laid in the ground. It would later be lit up to provide additional bandwidth or private circuits, and some even became fibre to the home, since some municipalities ended up owning the fibre network.

      In a strange twist, the company that produced a lot of this fibre optic cable and nearly went bankrupt when the bubble popped – Corning – would later become instrumental in another boom, because its glass expertise meant it knew how to produce durable smartphone screens. It is the maker of Gorilla Glass.

      • dustyData@lemmy.world · 1 day ago

        Rack space is literally the only valuable thing that would be left. Those GPUs are useless for non-LLM computation, between the chip-level optimizations and the massive amounts of soldered RAM. They are purpose-made, and they were also manufactured cheaply, without common longevity and endurance design features. They will degrade and start failing after less than 5 years or so; most would be inoperable in a decade. Those data centers are massive piles of e-waste, an absolute misuse of sand.

        • litchralee@sh.itjust.works · 23 hours ago

          Racks/cabinets, fiber optic cables, PDUs, CAT6 (OOBM network), top-of-rack switches, aggregation switches, core switches, core routers, external multi-homed ISP/transit connectivity, megawatt three-phase power feeds from the electric utility, internal power distribution and step-down transformers, physical security and alarm systems, badge access, high-strength raised floor, plenum spaces for hot/cold aisles, massive chiller units.

          • dustyData@lemmy.world · 22 hours ago

            Yes, that’s rack space, and it’s not even half the cost of a data center. I know because I’ve worked in data centers and read the financial breakdowns of those materials. They’re also useless without actual servers, and they depreciate in value really fast.