• 0 Posts
  • 67 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • This was literally an “Ask Lemmy” question, which draws on individual personal experience for responses, so I’m not sure what else you were expecting.

    I work with MBAs all day, every day. Nonstop. They’re the vast majority of my touchpoints as a lifelong software engineer/DBA who manages several teams. I’ve been in the industry for 25+ years and have worked for multiple large (enterprise-tier), medium, and small (startup) companies across multiple states, including owning my own consulting company, where I interfaced directly with C-suite types who held nothing but MBAs.

    So, not uninformed, but it is anecdotal, in the sense that this matches my life experience across 25+ years of working closely with MBA types on hundreds of projects. Someone else might have different experiences, but I’m here answering their question, so I’m going to talk about mine.

    There are plenty of MBA holders who are pragmatic and “normal.” However, at the top level, MBAs either attract narcissistic sociopaths or turn people into them, because the majority of the narcissistic sociopaths I know and have worked with hold MBAs.

    Take from that what you will.

    Edit: Apparently he took away a downvote. Getting a sneaking suspicion this guy might have an MBA. :) Not sure why you’re downvoting my life experiences, but sure, guy. You win.


  • hahahahahahaahahaha

    no

    Edit: Rather than being full snark (it was a genuinely funny question, though), I’ll give a more thoughtful answer. The reason the answer is no is that MBAs tend to attract narcissistic sociopaths, and the first thing they do in this situation is blame someone else: not the degree, but the specific person.

    “If only he was a better MBA he would have kept the company focused on its core values”. That sort of thing.

    The one thing a degree held by the majority of narcissists and sociopaths in the world absolutely won’t do is self-reflect.



  • This is incorrect. And I’m in the industry, in this specific field. Nobody in my industry, in my field, at my level, seriously considers this effective enough to replace their day-to-day coding, beyond generating some boilerplate ELT/ETL-type scripts that it’s semi-effective at. Even that output contains multiple errors 9 times out of 10.
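
    For concreteness, this is roughly what I mean by boilerplate ELT/ETL-type scripts (a minimal Python sketch; the file, table, and column names are hypothetical, not from any real project):

        # Minimal CSV-to-SQLite load: the kind of boilerplate an LLM can
        # usually draft, and the kind I'd still verify line by line.
        import csv
        import sqlite3

        def load_csv_to_table(csv_path: str, db_path: str, table: str) -> int:
            """Extract rows from a CSV file and load them into a SQLite table."""
            conn = sqlite3.connect(db_path)
            try:
                conn.execute(
                    f"CREATE TABLE IF NOT EXISTS {table} (id TEXT, name TEXT, amount REAL)"
                )
                with open(csv_path, newline="") as f:
                    rows = [(r["id"], r["name"], float(r["amount"]))
                            for r in csv.DictReader(f)]
                conn.executemany(f"INSERT INTO {table} VALUES (?, ?, ?)", rows)
                conn.commit()
                return len(rows)
            finally:
                conn.close()

        if __name__ == "__main__":
            print(load_csv_to_table("orders.csv", "warehouse.db", "orders"))

    That’s the ceiling of what I’d trust it to draft, and even there I verify every line.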

    I cannot be more clear: the people who are claiming that this is possible are not tenured or effective coders, much less 10x devs in any capacity.

    People who think it generates code of high enough quality to be effective are hobbyists: people who dabble with coding and understand some rudimentary coding patterns and practices, but who are not career devs, or not serious career devs.

    If you don’t know what you’re doing, LLMs can get you close, some of the time. But there’s no way it generates anything close to high enough quality code for me to use without the effort of rewriting, simplifying, and verifying it.

    Why would I want to voluntarily spend my day trying to decipher someone else’s code? I don’t need ChatGPT to solve a coding problem. I can do it, and I will. My code will always be more readable to me than someone else’s. That’s true by orders of magnitude for AI code gen today.

    So I don’t regard anyone who considers LLM code gen a viable path forward as a serious person in the engineering field.


  • They’re falling for a hype train then.

    I work in the industry, alongside several thousand peers who also code every day. I lead a team of extremely talented, tenured engineers, taking on some of the most difficult challenges the company can offer us. I’ve been coding and working in tech for over 25 years.

    The people who say this either do not understand how AI (LLMs in this case) works, do not understand programming, or are easily swayed by the hype train.

    We’re so far off from this existing with the current tech that it’s not worth seriously discussing.

    There are scripts and snippets of code that VS Code’s or VS2022’s LLM plugins can help with or bring up. But 9 times out of 10 there are multiple bugs in them.

    If you’re doing anything semi-complex, it’s a crapshoot whether it gets close at all.

    It’s not bad for generating pseudocode or templates, but it’s designed to generate code that looks right, not code that is right, and there’s a huge difference.
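
    To make “looks right, not right” concrete, here’s a hypothetical Python example (my illustration, not actual LLM output) of the flavor of bug I mean. It reads fine and runs without errors, but it quietly leaks state between calls:

        # Looks right: appears to collect a record's tags into a list.
        # Is wrong: the default list is built once, at definition time,
        # and reused on every call, so results leak across invocations.
        def collect_tags(record, tags=[]):  # the bug: mutable default argument
            tags.extend(record.get("tags", []))
            return tags

        print(collect_tags({"tags": ["a"]}))  # ['a']
        print(collect_tags({"tags": ["b"]}))  # ['a', 'b'] -- stale state, not ['b']

    Nothing about that code flags the problem on a skim, which is exactly why unvetted generated code bites you.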

    AI-generated code is exceedingly buggy, and if you don’t understand what it’s trying to do, it’s impossible to debug, because what it generates is trash-tier in code quality.

    The tech may get there eventually, but there’s no way I trust it, or anyone I work with trusts it, or considers it a serious threat, or even a resource beyond the novelty.

    It’s useful for non-engineers to get an idea of what they’re trying to do, but it can just as easily send them down a bad path.