Ask HN: Is anyone experiencing "AI brain rot?"

I’ve noticed that the more often I ask for AI assistance or “vibe code” things, the less sharp I am at solving difficult challenges of a similar nature the next time.

Has anyone else felt that the boost brings weakness at the times when you need to show strength?

49 points | by benguild 3 days ago

29 comments

  • tomgp 3 days ago
    Perhaps not surprising given what we know about how e.g. satnav usage affects our navigational competence. Feels like outsourcing our reasoning and problem solving would have an analogous effect.

    "Habitual use of GPS negatively impacts spatial memory during self-guided navigation": https://www.nature.com/articles/s41598-020-62877-0

    • huevosabio 2 days ago
      I am much better at navigating my home town than San Francisco, precisely because Google Maps already existed by the time I moved to SF.

      I caught myself at some point feeling like that was happening when going over my own codebase. I stopped being aggressive with AI usage for important stuff, and relegated it to side stuff that I just want to get started.

  • Jugurtha 13 hours ago
    One method that works really well is the following:

    - Unplug your laptop's charger

    - Disconnect from the internet

    - Work using only your battery charge and the documentation on your laptop

    If you find that you need information about something that's in a doc you don't have, only connect to the internet to download it, then disconnect. Nothing more.

    Batteries only last about two hours once they're no longer new, and you have the docs on your machine without online distractions. This forces you to work only on the task at hand, and to do it fast while the battery discharges. You catch yourself "drifting" in your thoughts more often, and you get better at catching yourself and refocusing, because "Quick! It's discharging!"

    I know it sounds silly as you can just connect to the internet or plug your charger, but it really works. Its real job is to remind you to focus and get things done.

    Try it, and you'll find soon enough just how much of your attention was scattered, and you'll wean yourself off the distractions just as quickly.

  • thorin 3 days ago
    Not just satnav, but as a kid I could remember at least 20 phone numbers of close friends and family. My spelling used to be quite good, but now I struggle with words my 9 year old can spell. I think this is the way with any new technology, we just move to the next level of abstraction and hopefully get to build bigger/better things...maybe.
    • sejje 3 days ago
      I've never heard of spelling regressing.

      I think my spelling ability is mostly correlated to how much I have read. I know how to spell many words I've never used myself.

      • Cerium 3 days ago
        One thing I notice is that I can't spell out loud anymore. I have to type or write to get the right spelling.
        • callbacked 2 days ago
          I would wager that autocorrect/predictive text on your phone is to blame for that. It is such an overlooked convenience for a problem we solved decades ago that it now ships on virtually every smartphone.

          Typing in a hurry, or forgot to add one more s to the word 'necessary'? No worries! Keep typing incorrectly and the phone will take care of the proper spelling for you! And it will do so faster than you could figure out the right spelling yourself. Many of us have probably regressed in our spelling ability to some degree at this point.

    • tetris11 3 days ago
      I do find that my ability to multitask has improved as of late, and I can switch contexts much faster just by knowing that I can quickly probe an AI to bring me up to speed on a topic I used to know in detail.

      (Though it could also just be that I've gotten significantly better at taking notes over the last 5 years)

    • paulcole 1 day ago
      Isn’t this just what previous generations called getting old lol?
  • skwee357 2 days ago
    It’s not only AI. It’s short form content, the fast paced life. I can’t seem to concentrate the way I used to, or get into the zone. I get irritated way more; for example, when I can’t code a solution fast enough and the AI is being stupid, I get angry.
    • harryquach 2 days ago
      Unfortunately, at least at my company, leadership has caught this bug. This means features need to be delivered faster, with a smaller team, etc. Not a good way to create quality software.
  • kypro 2 days ago
    No, not really. I find it almost impossible to get AI to generate anything of any significant complexity.

    For me the main difference is that I search and read much less when problem solving. In the past I'd often have to search docs and SO for ages looking for information about some bug/functionality. Today about 50% of the time I can just ask AI if it knows what I need. Although the other 50% of the time it will give me some BS hacky answers or just make something up – especially when I'm using less common libraries.

    Where I have got much lazier is when it comes to writing BS code like regexes, etc. It's not that I can't write them, I just know that it will take me longer to think about and create the regex than to explain to an LLM what I want to match on. But you still need to know regex to review it, so I don't feel like I'm losing any skills, just working faster in some cases, much like how pulling an email regex from SO would have helped you code a solution faster a decade ago.
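
    A minimal sketch of the kind of throwaway pattern in question (Python; the regex is deliberately naive and illustrative only, not a serious validator):

```python
import re

# Deliberately simple email pattern, the kind you'd once have copied
# from Stack Overflow. Real email validation (RFC 5322) is far messier.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

print(bool(EMAIL_RE.match("dev@example.com")))  # True
print(bool(EMAIL_RE.match("not-an-email")))     # False
```

    The point stands either way: whether it comes from SO or an LLM, you still have to read the pattern and know what each character class does before trusting it.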

  • sterlind 3 days ago
    Yes. It makes me lazy, and I take longer because I keep prompting the machine to fix its mistakes rather than understanding how its code works and why it's stuck.

    So I use it for brainstorming, and as a temporary crutch for working with unfamiliar languages/frameworks/tools, and wean myself off it as quickly as I can.

    For my work, which is very research and CS theory-heavy, I'm faster without it.

  • runjake 3 days ago
    Not that I've noticed, but I've made a concerted effort from the beginning to mentally work through any AI-generated code I use. Yes, it does require a lot of discipline.

    I believe it's equally important to ask it to explain why it generated the code when the output is unexpected. More often than not, though, it's acting more like "fancy autocomplete".

    • fzzzy 2 days ago
      I agree. I'm too picky in domains I am knowledgeable about not to go over every character of the output. For areas where I have no clue, I have sometimes found success with just pasting the code without understanding it, but it wasn't an easy process.
  • bn-l 3 days ago
    Yes, I’ve experienced this. It’s a worry. Even though the LLMs are not capable of doing anything properly complex / interconnected / novel, I think something in my brain is atrophying by offloading the simple stuff.
  • nicbou 1 day ago
    I feel like it's the opposite for me. It rewards my curiosity. I can ask it questions that Google is no longer capable of answering, either because it misinterprets my query or because it only returns generic content from top sites. I use it a lot in art museums to understand the story behind certain paintings or the difference between certain art movements.

    I use AI for coding, but basically as a shortcut to API documentation. It lets me stay focused on my task. It removes a lot of the tedium of coding, so that I can focus on the actual problems.

    • adamtaylor_13 1 day ago
      This has been my experience as well. AI has made me a much more prolific problem solver because I can try so many more things in such a short amount of time, learning from all of it.

      Granted, I refuse to take AI solutions and just plug them blindly without looking at them. I never use “YOLO”-mode. I’m always questioning and thinking about the code it outputs, the same way I’d critique a jr dev’s pull request.

  • Vaslo 2 days ago
    You are right, though I think some of it is our addiction to phones and such. BUT as a very mediocre coder llms have really accelerated my ability to code. I think I am the sweet spot for them. Much more experienced than a beginner because I know when to use classes, certain data types etc and can get the LLM to do what I ask and make changes when it’s crap. But not nearly good enough to construct algorithms in my head, which is probably a big advantage to an experienced coder and LLMs help to catch the rest of us up (though definitely do not replace.)
  • Aperocky 1 day ago
    Depends on what you consider a "difficult challenge". If the challenge is to remember a certain algorithm, it's not worth it - coming up with Dijkstra without a hint was fun the first time, but needing to remember the details for the nth time isn't.

    And for good measure, those stop having value in the future - as they can be referenced from AI as easily as looking up how to convert a date format or center a div.
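
    For what it's worth, the detail not worth re-deriving is small; a textbook Dijkstra fits in about a dozen lines (a sketch in Python, assuming the graph is encoded as adjacency lists of (neighbor, weight) pairs):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source.

    graph: {node: [(neighbor, weight), ...]} with non-negative weights.
    """
    dist = {source: 0}
    heap = [(0, source)]  # (distance, node) priority queue
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry, already found a shorter path to u
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

    Exactly the kind of thing that's as easy to pull from an AI as a date-format snippet, once you remember that it exists and when it applies.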

  • taurath 2 days ago
    I'm a neurodivergent coder and have always been a "code by reference" person, so I will have API docs up even if I've done a thing 100 times. The context helps me get into the mindset of what I'm doing, and has helped me be productive.

    This fails when people expect rote memorization like in coding interviews, but it works well when I'm actually working through problems in a company - I have a set of references up and available, usually of the internal codebase. I use AI as an advanced reference primarily so I haven't had the problem as stated here yet!

    I think a lot of folks are finding that via AI they're switching over to the way that I do code, without the benefits of the internal thinking structure I've had to come up with over time to manage it. I try to think of whatever I'm doing as a sort of scaffold, and then shift my focus into the details. If you lose your ability to zoom in or out on layers of abstraction (because you don't understand what's happening in detail), that's when you need to spend time diving in.

    You can't just sit back all the time like a farmer who's got a machine to plow and tend all the fields; you need to be doing QA and making sure it's doing what it says it's doing. If you're letting it decide what the scaffold is, you know even less.

    You decide which, if any, mental models you outsource to AI. You decide what you manage, and what's important for you to manage. Don't let short term productivity gains hobble your ability to zoom in!

  • gengstrand 3 days ago
    While I was working on this piece about using LLMs to migrate code https://glennengstrand.info/software/llm/migration/java/type... I had a firsthand encounter with review fatigue. One of the lines of code in the middle of an LLM-generated block was gibberish, but not gibberish enough to be caught by the compiler. I studied the code before accepting the change but just had a complete blind spot on that line. It was the linter that eventually caught it.
  • abhisek 3 days ago
    Yes. Very much. My brain refuses to focus on harder problems by default. It seems to rely on AI first, before accepting the fact that some problems require deep thinking even to frame for the AI.
  • taatparya 3 days ago
    Think what AI will do to junior programmers. Recent discussion: https://news.ycombinator.com/item?id=43444058
  • cableshaft 3 days ago
    Not too much. I turn to it a bit faster when I get stuck at work, as speed is important at my job (I work in consulting), but if I'm not stuck I don't use it, and I'm probably faster than if I tried using it when I'm not stuck.

    If not for it I'd be spending even longer scouring StackOverflow and documentation for ideas and example code, so other than a potential timesaver it's not really that different than what I was doing before.

  • gaeb69 2 days ago
    Yes. It's definitely noticeable. Though I try to console myself by remembering that my architectural design sense has hypertrophied. My code is ultimately better, my LLM-assisted problem solving is more efficient, and my code literacy is better.

    Hopefully LLMs won't just vanish... I'd be at a net negative.

  • harryquach 2 days ago
    I have been "encouraged" by my company to go full throttle on integrating LLMs into our workflow. I have found it has taken much of the enjoyment out of solving problems with code. As others have said, it has made me lazier and I can see myself losing my edge.
    • nextts 2 days ago
      Yeah it is the strangler fig pattern but the thing getting strangled is human coders.
  • solardev 3 days ago
    Of course. In the same way that Stack Overflow has for years, or asking a coworker, or a teacher. Programming is much easier now because of all these resources, but that only means the job gets busier and ever more productivity is expected of fewer people doing the work.
  • twoquestions 3 days ago
    I find that Copilot's suggestions are not quite what I want in most cases, so I have to edit what they give me anyway.

    I know Primagen had a similar experience to yours; he found it useful to keep the AI chat out of his direct editor, and that's how I've been using it too.

  • admissionsguy 3 days ago
    How is that possible? What size and complexity of projects are you working with?

    I have integrated llms into my dev workflow as much as possible and they are just not capable of doing all that much.

    • zenNeko 3 days ago
      One time, while trying to fix a Redux problem, Gemini started rewriting Redux itself.
      • solardev 3 days ago
        Sounds like it's learning The Javascript Way just fine, then.
  • nasaok 2 days ago
    It depends. I run a GPT-4o-powered Instagram comment automation. We analyze reels with AI (text + image recognition), decide where to comment, and optimize replies based on top-performing engagement patterns. AI basically allowed me to remove hundreds of people from my agency.
  • doruk101 3 days ago
    I started experimenting with some form of LLM detox in order to get my brain more active again.
  • iExploder 2 days ago
    AI is the best thing ever for finally being able to objectively judge who is willing to do the hard work and face adversity vs. who looks for easy ways out and quick fixes.

    Of course, long term misuse by outsourcing your cognitive exercise to a machine has a negative impact.

    Just auto complete your way to being completely dumb.

    As a learning or exploratory tool it is the best thing ever. Just don't use it to tab-tab your way out of actual work.

  • penguinos 3 days ago
    No, I do the debugging myself. It’s 10x faster than letting ChatGPT debug
  • Havoc 1 day ago
    Yes and no.

    It's helped me stretch & explore new things that are a touch beyond my technical competence.

    ...but it has also made me lazier in some ways, e.g. I paste errors I don't immediately recognise into an LLM. Pre-LLM, I would have looked at the error in more detail first, then tried a couple of things blindly, then googled it.

  • keiferski 2 days ago
    Not really, because I mostly use AI as a conversation partner / researcher to answer questions and explore ideas. Often the answers are entirely outside of my current knowledge, so I’m not doing something I could otherwise do myself.
  • zenNeko 3 days ago
    [dead]
  • segmadis 3 days ago
    [dead]