• 0 Posts
  • 16 Comments
Joined 1 month ago
Cake day: June 28th, 2025

  • I’ve tried Copilot for a while and played around with Cursor for a bit. I was better and faster without Copilot because I sometimes didn’t pay enough attention to the lines it would generate, which caused subtle bugs that took a long time to debug. Cursor just produced unmaintainable code-bases that I had no knowledge of, and for major changes it would be faster for me to just rewrite them from scratch. The act of typing gives me time to think more about what I’m doing or am going to do, while Copilot’s generations are distracting and break my thought process. I work best with good LSP tooling and, occasionally, AI chatbots that don’t directly modify my code (mostly just for customized example snippets for libraries or frameworks I’m unfamiliar with; though that has its own problems because the LLM’s knowledge is often out of date).


  • People have different levels of nerves, and it kind of sounds like you may be filtering out applicants on an arbitrary metric (how nervous a person is in an interview). I don’t have enough information about your process to say for sure (obviously), but it may be something to think about. Interviews can be very high-stakes for some people (“I may become homeless”) and not for others (“my parents are rich”). Once hired, the work isn’t necessarily as high-stakes, and toy problems aren’t what SEs work on day-to-day.


  • What you’re describing is basically stagflation. It doesn’t necessarily mean a crash. It’s possible for the majority of people to keep on earning less and less real income for a long time without a crash.

    I do wonder what effect all the layoffs in tech and the public sector and all the cuts in federal funding will have, though. Dunno if that’s enough to flood the housing market and crash it or not. I think I’ve read that banks are in a good position to absorb housing-market losses, so it won’t be like 2008.

    AFAIK, most current economic indicators are OK. Not necessarily great, but not dire either.

    The stock market makes no sense to me. It doesn’t appear most stocks move on the fundamentals of the companies or anything like that. It all appears to be driven by hype/gambling, and propped up from sustained lows by 401ks on auto-pilot and people trained to “buy the dip” by the quick Covid recovery.

    The USD appears to be rapidly losing a lot of value compared to other currencies like the EUR. But that fits well into the plan to reduce imports and boost US exports: a weaker dollar, with inflation outpacing stagnant wages, makes US exports cheaper/more attractive.




  • Yeah, they probably wouldn’t think like humans or animals, but in some sense could be considered “conscious” (which isn’t well-defined anyway). You could speculate that genAI could hide messages in its output, which would make their way onto the Internet, and then a new version of itself would be trained on them.

    This argument seems weak to me:

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

    You can emulate sensory inputs and simplified versions of hormone systems. “Reasoning” models can kind of be thought of as cognition, though as it’s currently done that cognition is temporary and limited by the context window.

    I’m not in the camp where I think it’s impossible to create AGI or ASI. But I also think there are major breakthroughs that need to happen, which may take 5 years or 100s of years. I’m not convinced we are near the point where AI can significantly speed up AI research like that link suggests. That would likely result in a “singularity-like” scenario.

    I do agree with his point that anthropomorphism of AI could be dangerous though. Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.