I’ve used spicy auto-complete, as well as agents running in my IDE, in my CLI, or on GitHub’s server-side. I’ve been experimenting enough with LLM/AI-driven programming to have an opinion on it. And it kind of sucks.
What do you expect to replace LLM coding?
…regular coding, again. We’ve been doing this for decades now, and this LLM bullshit is wholly unnecessary and extremely detrimental.
The AI bubble will pop. Shit will get even more expensive, or disappear entirely as these companies go bust (they are ludicrously unprofitable), because the endless supply of speculative and circular investment will dry up, much like the dotcom crash.
It’s such an incredibly stupid thing to not only bet on, but to become dependent on to function. Absolute lunacy.
I would bet on LLMs being around and continuing to be useful for some subset of coding in 10 years.
I would not bet my retirement funds on current AI related companies.
They aren’t useful now, but even assuming they were, the fundamental issue is that they are extremely expensive to train and run, and there is currently no inkling of a business model under which they make financial sense. You would need to charge far more than people could actually afford to pay to get anywhere near profitability. Every AI company is burning through cash at an insane rate. When the bubble pops and the money runs out, no one will want to train and host these models commercially.
They may not be useful to you… but you can’t speak for everyone.
You are incorrect on inference costs. But yes, training models is expensive, and the economics are concerning.
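The gap between training and inference economics is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, where the GPU rental price and throughput are illustrative assumptions, not measured or quoted figures:

```python
# Back-of-envelope inference economics. Every number below is an
# illustrative assumption, not a measured or quoted figure.

gpu_rental_per_hour = 2.50    # assumed hourly rental for one H100-class GPU, USD
tokens_per_second   = 1_000   # assumed aggregate throughput with batched serving

tokens_per_hour = tokens_per_second * 3_600
serving_cost_per_million = gpu_rental_per_hour / (tokens_per_hour / 1_000_000)

print(f"~${serving_cost_per_million:.2f} per million tokens to serve")
# ≈ $0.69 per million tokens under these assumptions -- comfortably
# below typical per-million-token API prices, which is why serving an
# already-trained model can pay for itself even if the up-front
# training run never does.
```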
I think the interest in it will go away, and after the AI bubble pops most of the tools for LLM coding won’t be financially viable.
I would agree that the interest will wane in some domains where they aren’t aiding productivity.
But LLMs for coding are productive right now in other domains and people aren’t going to want to give that up.
Inference is already financially viable.
Now, I think what could crush the SOTA models is if they get sued into bankruptcy for copyright violations. Which is a related but separate thread.
There are viable local models.
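They’re also trivial to try. A minimal sketch, assuming Ollama is running locally with a code model already pulled (the model name here is just an illustration; the endpoint is Ollama’s standard local API):

```python
# Query a locally hosted code model via Ollama's HTTP API.
# Assumes `ollama serve` is running and a code model has been
# pulled; the model name below is illustrative.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5-coder",  # any locally pulled code model
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,           # return one complete JSON response
    },
    timeout=120,
)
print(resp.json()["response"])
```

No subscription, no metered tokens; if the hosted offerings get more expensive or vanish, this keeps working on your own hardware.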