We’re all seeing the breathless hype surrounding “AI,” the vacuous marketing term of the moment. It’ll change everything! It’s coming for our jobs! Some 50% of white-collar workers will be laid off!

Setting aside “and how will it do that?” as outside the scope of the topic at hand, it’s a bit baffling to me how a nebulous concept prone to outright errors is an existential threat. (To be clear, I think the energy and water impacts are.)

I was having a conversation on Reddit along these lines a couple of days ago, and after seeing more news that just parrots Altman’s theme-du-jour, I need a sanity check.

Something I’ve always found hilarious at work is someone asking if I have a calculator (I guess that dates me to the flip-phone era) … my canned response was “what’s wrong with the very large one on your desk?”

Like, automation is literally why we have these machines.

And it’s worth noting that you can’t automate the interesting parts of a job, as those are creative. All you can tackle is the rote, the tedious, the structured bullshit that no one wants to do in the first place.

But here’s the thing: I’ve learned over the decades that employers don’t want more efficiency. They shout it out to the shareholders, but when it comes down to the fiefdoms of directors and managers, they like inefficiency, thank you very much, as it provides tangible work for them.

“If things are running smoothly, why are we so top-heavy?” is not something any manager wants to hear.

Whatever the fuck passes for “AI” in common parlance can’t threaten management in the same way as someone deeply familiar with the process and able to code. So it’s anodyne … not a threat to the structure. Instead of doubling efficiency via bespoke code (leading to a surplus of managers), just let a couple people go through attrition or layoffs and point to how this new tech is shifting your department’s paradigm.

Without a clutch.

I’ve never had a coding title, but I did start out in CS (why does this feel like a Holiday Inn Express ad?), so regardless of industry, when I end up being expected to use an inefficient process, my first thought is to fix it. And it has floored me how severe the pushback is.

I reduced a team of 10 auditors to five at an audiobook company with a week of coding in VB. At a newspaper hub, I cut a team of three placing ads down to 0.75 (two of the three being me and my girlfriend).

Same hub: I clawed back 25% of my team’s production time after absurd reporting requirements were implemented despite us having all the timestamps in our CMS. The vendor charged extra to access our own data, so management decided that, rather than pay the vendor six figures, the better idea was to overstaff by 33% (250 total at the center) to get those sweet, sweet self-reported, error-laden data!

At a trucking firm, I solved a decade-long problem with how labour-intensive receiving for trade shows was. Basically, instead of asking the client for their internal data, which had been my boss’s approach, I asked how much they really needed from us and whether I could simplify the forms and reports (samples provided). Instant yes. But my boss hated the new setup: I was using Microsoft Forms to feed Excel, plus a 10-line script to generate receivers and reports, and she didn’t understand any of that, so how could she be sure I knew what I was doing?
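For scale, the whole pipeline was something like the sketch below. (Python standing in for the actual script, and the file and column names are invented; the point is how little code this takes, not the specifics.)

```python
# Illustrative sketch only: Microsoft Forms syncs client submissions into
# an Excel workbook; this reads them and generates receivers plus a report.
# File and column names are hypothetical.
import pandas as pd

responses = pd.read_excel("show_receiving_responses.xlsx")

# One receiving document per client submission.
for _, row in responses.iterrows():
    receiver = (
        f"RECEIVER - {row['Client']} / Booth {row['Booth']}\n"
        f"Pieces: {row['Pieces']}, Weight: {row['Weight_lbs']} lbs\n"
        f"Carrier: {row['Carrier']}\n"
    )
    with open(f"receiver_{row['Client']}.txt", "w") as out:
        out.write(receiver)

# Summary report: piece and weight totals per carrier.
responses.groupby("Carrier")[["Pieces", "Weight_lbs"]].sum().to_excel(
    "receiving_report.xlsx"
)
```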

You can’t make this shit up.

Anyway, I’ve run far afield of my central thesis, but I think these illustrations point to a certain intransigence at the management level that will be far more pronounced than the coverage suggests.

These folks locked in their 2.9% mortgage and don’t want to rock the boat.

My point is, why would management suddenly be keen on making themselves redundant when decades of data tell us otherwise?

This form of “AI” does not subvert the dominant paradigm. And no boss wants fewer employees.

As such, who’s actually going to get screwed here? The answer may surprise you.

  • tal@lemmy.today · 12 hours ago

    Why is so much coverage of “AI” devoted to this belief that we’ve never had automation before (and that management even really wants it)?

    I’m going to set aside the question of whether any given company, timeframe, or AI-related technology in particular is effective. I don’t really think that’s what you’re aiming to address.

    If it just comes down to “Why is AI special as a form of automation? Automation isn’t new!”, I think I’d give two reasons:

    It’s a generalized form of automation

    Automating a lot of farm labor via mechanization of agriculture was a big deal, but it mostly contributed to, well, farming. It didn’t directly result in automating a lot of manufacturing or something like that.

    That isn’t to say that we’ve never had technologies that offered efficiency improvements across a wide range of industries. Electric lighting, I think, might be a pretty good example of one. But technologies that do that are not that common.

    kagis

    https://en.wikipedia.org/wiki/Productivity-improving_technologies

    This has some examples. Most of those aren’t all that generalized. They do list electric lighting in there. The integrated circuit is in there. Improved transportation. But other things, like mining machines, are not generally applicable to many industries.

    So it’s “broad”. Can touch a lot of industries.

    It has a lot of potential

    If one can go produce increasingly sophisticated AIs — and let’s assume, for the sake of discussion, that we don’t run into any fundamental limitations — there’s a pathway to, over time, automating darn near everything that humans do today using that technology. Electric lighting could clearly help productivity, but it could only take things so far.

    So it’s “deep”. Can automate a lot within a given industry.

    • TehPers@beehaw.org · 11 hours ago

      There is a fundamental limitation of all LLMs that prevents them from doing as much as you might think, regardless of how accurate they are (and they are not):

      LLMs cannot take liability. When they make mistakes, they cannot take responsibility for those mistakes. The person who used the LLM will always be liable instead.

      So any automation as a result of LLMs removing jobs will end up punting that liability to the next person up the chain. Management will literally have nobody to blame but themselves, and that’s their worst nightmare.

      Anyway, this is of course assuming capabilities that don’t exist.

      • Lvxferre [he/him]@mander.xyz · 11 hours ago

        Interestingly enough, not even making them actually intelligent would be enough to make them liable - because you can’t punish or reward them.

        • TehPers@beehaw.org · 9 hours ago

          Yep! You would need not only an AI superintelligence capable of reflecting and adapting, but also legislation that holds those superintelligences liable and grants them the rights and obligations of a human. Because there is no concept of reward or punishment for an LLM, they can never be replacements for people.

          • Lvxferre [he/him]@mander.xyz · 8 hours ago

            It’s more than that: they’d need to have desires, aversions, goals. That is not automatically granted by intelligence; in our case it comes from our instincts as animals. So perhaps you’d need to actually evolve the AGI systems you develop, Darwin-style, and that would be a way more massive undertaking than a single AGI, let alone the “put glue on pizza lol” systems we’re frying the planet for.

            • Powderhorn@beehaw.org (OP) · 8 hours ago

                I’m reminded of the fairy tale of the two squirrels in the Black Forest. As fall came to pass, they BALLROOM!

    • Powderhorn@beehaw.org (OP) · 11 hours ago

      It’s ultimately frustrating to me that I suspect AI here. There are weird inconsistencies.

      But, come on.

      “It has a lot of potential”

      Really? That’s what everyone says about their toddler while it pukes.

    • snooggums@piefed.world · 9 hours ago

      “and let’s assume, for the sake of discussion, that we don’t run into any fundamental limitations”

      We already know there are massive fundamental limitations. All of the big-name AI companies are all-in on LLMs, which can’t do anything that hasn’t been done before, unless you count arbitrarily outputting something randomly mashed together, which is not what you want for anything important. It’s a dead end without humans doing things it can copy. When a new coding language is developed, an LLM can’t use it until lots and lots of people have written code in it for the model to suck up and vomit forth.

      LLMs, which is what all of the general-purpose AIs are, cannot be a long-term solution to anything unless we’re pausing technology and society at whatever point they can handle ‘everything’. LLMs have already peaked, and that is supposedly the road to general AI.