• witx@lemmy.sdf.org · 6 days ago

    To run, perhaps. But what about the same metrics for debugging? How many hours do we spend debugging C/C++ issues?

    • lustyargonian@lemm.ee · 6 days ago

      What if we make a new language that extends it and makes it fun to write? What if we call it c+=1?

  • QuazarOmega@lemy.lol · 8 days ago

    This doesn’t account for all the comfort food the programmer will have to consume in order to keep themselves sane

    • arendjr@programming.dev · 8 days ago

      I would argue that because C is so hard to program in, even the claim to machine efficiency is questionable. Yes, if you have infinite time for implementation, then C is among the most efficient, but the same applies to C++, Rust and Zig too, because with infinite time any artificial hurdle can be cleared by the programmer.

      In practice, however, programmers have limited time. That means they need to use the tools of the language to save themselves time. Languages with higher levels of abstraction make it easier, not harder, to reach high performance, assuming the abstractions don’t add too much overhead. C++, Rust and Zig all apply in this domain.

      An example is the situation where you need a hash map or B-tree map to implement efficient lookups. The languages with higher abstraction give you reusable, high-performance options. The C programmer will need to either roll their own, which may not be an option if time is limited, or choose a lower-performance alternative.
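
      To make the lookup point concrete, here’s a rough TypeScript sketch (TypeScript only because it’s the language the rest of this thread ends up discussing; Rust’s HashMap, C++’s std::unordered_map or Zig’s std.HashMap make the same point). The user data and names are made up for illustration: the built-in Map is the reusable, fast option, and the linear scan is the kind of lower-performance fallback a time-pressed C programmer might settle for.

      ```typescript
      // Hypothetical lookup workload, purely for illustration.
      type User = { id: number; name: string };

      const users: User[] = Array.from({ length: 100_000 }, (_, i) => ({
        id: i,
        name: `user-${i}`,
      }));

      // Reusable, high-performance option: the built-in hash map, O(1) average lookup.
      const byId = new Map<number, User>(users.map((u): [number, User] => [u.id, u]));
      const fast = byId.get(99_999);

      // The lower-performance alternative: a linear scan, O(n) per lookup.
      const slow = users.find((u) => u.id === 99_999);

      console.log(fast?.name, slow?.name);
      ```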

        • witx@lemmy.sdf.org · 6 days ago

          And how testable is that solution? Sure, macros are helpful, but testing and debugging them is a mess.

          • RheumatoidArthritis@mander.xyz · 5 days ago

            You mean whether the library itself is testable? I have no idea; I didn’t write it, and it’s been stable and out there for years.

            Whether the program is testable? Why wouldn’t it be? I could debug it just fine. Of course it’s not as easy as Go or Python, but let’s not pretend it’s some arcane dark art.

        • arendjr@programming.dev · 8 days ago

          Well, let’s be real: many C programs don’t want to rely on GLib, and licensing (as the other reply mentioned) is only one reason. GLib is not exactly known for high performance, and it is significantly slower than the alternatives supported by the other languages I mentioned.

            • arendjr@programming.dev · 8 days ago

              Which one should I pick then, that is both as fast as the std solutions in the other languages and as reusable for arbitrary use cases?

              Because it sounds like your initial pick made you lose the machine-efficiency argument, and you can’t have it both ways.

  • frezik@midwest.social · 8 days ago

    For raw computation, yes. Most programs aren’t raw computation. They run in and out of memory a lot, or are tapping their feet while waiting 2ms for the SSD to get back to them. When we do have raw computation, it tends to be passed off to a C library, anyway, or else something that runs on a GPU.

    We’re not going to significantly reduce datacenter energy use just by rewriting everything in C.

      • TwistyLex@discuss.tchncs.de · 6 days ago

        For Haskell to land that low on the list tells me they either couldn’t find a good Haskell programmer or weren’t using GHC.

      • Mihies@programming.dev · 8 days ago

        Also, the difference between TS and JS doesn’t make sense at first glance. 🤷‍♂️ I guess I need to read the research.

        • Feyd@programming.dev · 8 days ago

          My first thought is that perhaps the TS is not targeting ESNext, so they’re getting hit with polyfills or something

        • TCB13@lemmy.world · 8 days ago

          It does: the “compiler” adds a bunch of extra garbage for extra safety that really does have an impact.

          • pHr34kY@lemmy.world · 8 days ago

            I thought the idea of TS was that it strongly types everything so that the JS interpreter doesn’t waste all of its time trying to figure out the best way to store a variable in RAM.

            • Feyd@programming.dev · 8 days ago

              TS is compiled to JS, so the JS interpreter isn’t privy to the type information. TS is basically a robust static analysis tool
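
              For example (roughly what tsc does with default settings and a modern target; the function and names here are invented, and exact output depends on compiler options):

              ```typescript
              // TypeScript source:
              function add(a: number, b: number): number {
                return a + b;
              }

              // Roughly what tsc emits: the same JS with the annotations stripped,
              // no runtime type checks and nothing extra for the engine to use.
              // function add(a, b) {
              //     return a + b;
              // }
              ```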

            • Colloidal@programming.dev · 8 days ago

              The code is ultimately run in a JS interpreter. AFAIK TS transpiles into JS; there’s no TS-specific interpreter. But such a huge difference is unexpected to me.

              • TCB13@lemmy.world · 7 days ago

                It’s really not. Have you noticed how an enum is transpiled? You end up with a function… a lot of other things follow the same pattern.
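
                For anyone curious, this is roughly what a plain enum turns into (illustrative names, approximate tsc output quoted from memory):

                ```typescript
                // TypeScript source:
                enum Direction {
                  Up,
                  Down,
                }

                // Approximately what tsc emits: an IIFE building a two-way lookup object.
                // var Direction;
                // (function (Direction) {
                //     Direction[Direction["Up"] = 0] = "Up";
                //     Direction[Direction["Down"] = 1] = "Down";
                // })(Direction || (Direction = {}));
                ```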

                • FizzyOrange@programming.dev · 6 days ago

                  No, they don’t. Enums are actually unique in being the only TypeScript feature that requires code gen, and they consider that to have been a mistake.

                  In any case that’s not the cause of the difference here.

          • mbtrhcs@feddit.org · 7 days ago

            Only if you choose a lower language level as the target. Given these results I suspect the researchers had it output JS for something like ES5, meaning a bunch of polyfills for old browsers that they didn’t include in the JS-native implementation…
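
            As an illustration (made-up function, approximate behaviour from memory), the same async function is emitted very differently depending on the target:

            ```typescript
            // TypeScript source:
            async function load(url: string): Promise<string> {
              const res = await fetch(url);
              return res.text();
            }

            // With --target ESNext the emitted JS is essentially the source minus the
            // type annotations. With --target ES5, tsc instead emits __awaiter/__generator
            // helper calls that simulate async/await on top of ES5, which is extra code
            // that a hand-written JS benchmark never pays for.
            ```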

              • mbtrhcs@feddit.org · 6 days ago

                Yeah sure, you found the one notorious TypeScript feature that actually emits code, but a) that feature is recommended against and not used much, to my knowledge, and b) more importantly, you cannot tell me that you genuinely believe the use of TypeScript enums – which generate extra function calls for a very limited number of operations – will 5x the energy consumption of the entire program.

                • TCB13@lemmy.world · 6 days ago

                  This isn’t true; other features “emit code” as well, including namespaces, decorators, and in some cases even async/await (when targeting ES5 or ES6).
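
                  A namespace, for instance, also becomes an IIFE (illustrative names, approximate tsc output quoted from memory):

                  ```typescript
                  // TypeScript source:
                  namespace Geometry {
                    export function area(r: number): number {
                      return Math.PI * r * r;
                    }
                  }

                  // Roughly what tsc emits:
                  // var Geometry;
                  // (function (Geometry) {
                  //     function area(r) {
                  //         return Math.PI * r * r;
                  //     }
                  //     Geometry.area = area;
                  // })(Geometry || (Geometry = {}));
                  ```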

            • HelloRoot@lemy.lol · 6 days ago

              I’m using the fattest of Java (Kotlin) on the fattest of frameworks (Spring Boot) and it is still decently fast on a 5-year-old Raspberry Pi. I can hit precise 50 μs timings with it.

              Imagine doing it in fat Python (as opposed to MicroPython) like all the hip kids.

    • mbirth@lemmy.ml · 8 days ago

      Does the paper take into account the energy required to compile the code, the complexity of debugging and thus the required re-compilations after making small changes? Because IMHO that should all be part of the equation.

      • atzanteol@sh.itjust.works · 8 days ago

        It’s a good question, but I think the amount of time spent compiling is going to be pretty tiny compared to the amount of time the application spends running.

        Still, “energy efficiency” may be the worst metric to use when choosing a language.