Tips for better coding with ChatGPT

(nature.com)

153 points | by isingle 296 days ago

22 comments

  • agentcoops 296 days ago
    Further evidence of my justified true belief that the world will rapidly divide between those who already had programmatic control over their computers, now infinitely more productive through direct access to the GPT APIs (to say nothing of the firms that will use LLMs only internally, trained on their own codebases, for example), and those who believe GPT = ChatGPT, which will slowly become just another conduit for ads via plugins. It's quite sad when even Nature becomes a spokesperson for the latter group, who will continue to believe that LLMs are a racket on the order of Bitcoin.
    • spaceman_2020 296 days ago
      I haven’t been able to get GPT-4 API access months after applying and being a paid subscriber.

      The world right now seems to be divided more into "have GPT-4 API / don't have GPT-4 API".

      • weinzierl 296 days ago
        With more and better models released, getting access to OpenAI's API is becoming less interesting.

        What is interesting is having access to GPUs (cloud or real hardware). The world is divided into "has access to enough GPU power / doesn't have access to enough GPU power".

        My prediction is that this will get worse for a while.

        • boringuser2 296 days ago
          "Better"?

          Get out of town.

          • weinzierl 296 days ago
            Would you care to explain what you mean?

            I meant 40B Falcon is better than 65B LLaMA in many benchmarks and we will see even better models released with time.

            What is wrong with that?

            • boringuser2 296 days ago
              That is fair; your post left it a bit ambiguous whether you meant better in reference to GPT-4 or not.

              Competitors aren't even at GPT-3.5.

              • agentcoops 295 days ago
                When I use ChatGPT, I find anything but GPT-4 unusable; when I'm programming against the API, I actually tend to find myself using GPT-3.5. I still haven't had a chance to experiment with the open-source LLMs, but that's my project for next month.
                • boredemployee 295 days ago
                  I have preferred GPT-3.5. GPT-4 is slow and verbose; I only resort to that model when all my prompts in 3.5 fail to give the expected result.
      • elemos 296 days ago
        Same here. Instead, I get to pay for ChatGPT and have people tell me it’s inferior.
      • ta988 296 days ago
        They just can't scale to the demand; that's their main issue right now.
    • TechBro8615 296 days ago
      Maybe, but I can't help but notice the "programmatic control" is over someone else's computer. Those who care the most about having programmatic control over their computers are also those who would rather send their prompts to their own GPU rather than an Azure server farm. I believe there will be a programming revolution built on the foundation of LLMs, but it won't really take off until we can use local LLMs for the bulk of the processing.
    • mym1990 296 days ago
      And with computers there was a divide between those willing to learn to program and those using the applications that were programmed... what's your point? The majority of society needs a pretty big level of abstraction to be willing to use something.
    • da39a3ee 296 days ago
      Yes I think I have a similar sentiment. Articles like these would do better to take the approach of

      "Oh shit, something huge is happening and we might not be intelligent enough to see the ramifications, but here's a humble attempt"

      rather than

      "Ha, the techies have done something they think is impressive again. It's certainly interesting, but as usual they're exaggerating it and failing to think from a nuanced human perspective. However us journalists are trained in that sort of thing, so we can help out here, and we know you readers will be all too familiar with the way those techies can only think like computers lol."

    • Jeff_Brown 296 days ago
      > now infinitely more productive

      How much more productive do you feel you are coding with an LLM? As another HN user said, to me it's like a talking dog -- incredible yet useless.

      • fnordpiglet 296 days ago
        I feel it gives me about a 30% lift on mechanical tasks and a 60% lift on learning / unblocking in areas of ambiguity. I use GPT for the mechanical tasks and ChatGPT for the learning tasks. An example of the learning would be "explain to me the use of Box::pin in the context of Rust futures" or some such, but also sometimes some common idiom I'm brain-farting on. Searching Kagi will yield the answers, just more slowly, and deeply embedded in some document or Stack Overflow answer vomit, requiring a lot of wasted effort that fully distracts me from my flow. The fact that I can ask follow-up questions on areas of ambiguity is useful. When it hallucinates, it generally means I'm in an area that's either undefined as of yet or really niche. The nice thing about programming is that hallucination feedback is basically instant, so I then pull out Kagi and research a bit, and maybe 90% of the time it turns out what I wanted just isn't possible.

        There has been some work done on generating code in a feedback cycle to winnow out hallucinations, and it seems to work fairly well [1]. I think 99% of the challenges LLMs face are primarily related to a lack of constraint, optimization, agency, and solver feedback. As they get integrated into systems with the ability to inform, constrain, and guide using classic AI techniques, their true value will become attainable. But they're pretty useful even today.

        N.b., I'm a 32-year veteran at the distinguished-engineer level at FAANG and adjacent firms, and I program daily.

        [1] https://voyager.minedojo.org

      • StackOverlord 296 days ago
        I use GPT to write code sometimes instead of importing a library. For instance, a function to breadth-first traverse a directed acyclic graph and slice it into levels, or another to find a node in a nested graph using partial paths. I could have written those functions, but GPT-4 does it correctly and faster. A sketch of the first one is below.
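
        For illustration, a minimal Python sketch of the kind of level-slicing function described (the graph representation and names here are my own assumptions, not GPT output):

          def bfs_levels(graph, roots):
              # graph: dict mapping node -> list of successor nodes (a DAG)
              # roots: entry nodes with no incoming edges; returns one list per level
              levels, seen, frontier = [], set(roots), list(roots)
              while frontier:
                  levels.append(frontier)
                  nxt = []
                  for node in frontier:
                      for child in graph.get(node, []):
                          if child not in seen:
                              seen.add(child)
                              nxt.append(child)
                  frontier = nxt
              return levels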
      • lannisterstark 295 days ago
        >incredible yet useless.

        I almost always begin my tasks with ChatGPT by asking it for a framework: "Design an x request that gets me results in the form of this JSON, and now display it on y table according to this criteria."

        It gives me a framework; I adjust and expand upon it. Works wonderfully.

        I even have a 'bot' running on one of the IRC channels that was almost 100% written by ChatGPT.

      • w0m 296 days ago
        > to me it's like a talking dog

        That's if you're explicitly asking it questions. Have you sat down and coded with Copilot offering suggestions as you went? It's honestly incredibly helpful, especially when leveraging a new language or stack.

      • yi_xuan 296 days ago
        From my personal experience, it's quite useful, but not as much as a lot of people thought it would be. Its ability to solve 'simple' questions is great. I use it more as a smarter Google.
      • inopinatus 296 days ago
        One simply needs to think more laterally about the dog.
    • throwaway4aday 296 days ago
      The second group is going to be using chat interfaces in every app whether they like it or not in under 2 years. I think it'll be a net benefit, boomers (and everyone else too) will finally get their wish of just telling the computer what to do.
  • endisneigh 296 days ago
    I used GPT-4 before it changed, and though it's impressive, writing code has never been the bottleneck for me personally. When these things can understand the business requirements and tell me what I should be building and why, with detailed, sensible reasoning, then I'll be hyped.
    • Swizec 296 days ago
      > When these things can understand the business requirements

      I've been looking into this. Nothing definitive yet, but my hunch is that current LLMs struggle because they lack curiosity. They will answer your question, however vague, with the first, most obvious answer they can think of.

      This is great, if you’re a junior team member. Super talented, very eager. But a more senior engineer approaches the problem differently. They ask more questions than provide answers. They’ll happily spend the first 25min of a 30min meeting asking heaps upon oodles of really dumb sounding questions.

      Then the last 5min, that's the magic. They now have a solution perfectly tailored to the problem at hand, with all sorts of edge cases either explored or eliminated through questioning. The questions they kept asking weren't dumb after all; they were pruning a decision/option tree in their head of all possible solutions until they landed on the optimal solution for a set of known constraints. With further options to dig and improve.

      I think you can make an LLM do this (I'm trying), but it's still very, very slow.

      • agentcoops 296 days ago
        I think you're still anthropomorphizing LLMs too much by saying the problem is a lack of curiosity. It's an engineering problem: you haven't figured out the correct "context" from which business requirements would follow. (And that's why Microsoft is so incredibly well positioned for the future.)
        • Swizec 296 days ago
          Curiosity is how you build that context.
          • agentcoops 296 days ago
            I'm sure in your career you've seen countless attempts by 1000+ employee firms to spin up an ops team that must construct a giant spreadsheet of product priorities between customers, PMs and engineering teams -- versus having all of Salesforce, Slack, emails, meeting transcripts, and user support messages vectorised and fed into an LLM that any employee can converse with. The latter is not a trivial problem by any means, and there are many, many implementation details, but I'm sure it's an exciting time to be at Microsoft. Stratechery had a fantastic article about this.
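
            For what it's worth, the core of that pattern is small enough to sketch. A toy, hedged version (2023-era openai 0.x Python API; the documents, model names, and in-memory index are illustrative assumptions, not anyone's actual implementation):

              import openai

              def embed(text):
                  # Vectorise a document or question with the embeddings endpoint
                  resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
                  return resp["data"][0]["embedding"]

              def cosine(a, b):
                  dot = sum(x * y for x, y in zip(a, b))
                  na = sum(x * x for x in a) ** 0.5
                  nb = sum(y * y for y in b) ** 0.5
                  return dot / (na * nb)

              documents = ["meeting transcript ...", "support ticket ...", "Slack thread ..."]
              index = [(doc, embed(doc)) for doc in documents]

              def ask(question, top_k=2):
                  # Retrieve the most similar documents, hand them to the model as context
                  q = embed(question)
                  best = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)[:top_k]
                  context = "\n---\n".join(doc for doc, _ in best)
                  return openai.ChatCompletion.create(
                      model="gpt-3.5-turbo",
                      messages=[
                          {"role": "system", "content": "Answer using this context:\n" + context},
                          {"role": "user", "content": question},
                      ],
                  )["choices"][0]["message"]["content"]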

            If I had to make a bet, I suspect Google and Meta will royally lose this generation: Apple will dominate personal access to LLMs, Microsoft will dominate business access to LLMs -- and honestly Palantir for the first time terrifies me. The current generation of LLMs are not about automation as such, but helping individuals accomplish self-directed goals (as possible in language and code).

      • pixl97 296 days ago
        >because they lack curiosity.

        This is because curiosity requires free energy, and hence is costly when you're running on limited, very expensive compute.

        This is what tree of thought is attempting to simulate, in some ways: build a set of multiple questions around the original question, then build on and prune that list based on a 'show your work' set of steps, and keep iterating.
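
        Roughly, in code (a hedged sketch; llm() and score() are hypothetical stand-ins for the model calls that would cost real compute):

          def tree_of_thought(question, llm, score, breadth=3, depth=3):
              # llm(prompt) -> str and score(text) -> float are hypothetical stand-ins
              frontier = [question]
              for _ in range(depth):
                  candidates = []
                  for thought in frontier:
                      # Expand: ask for several "show your work" continuations of this branch
                      for _ in range(breadth):
                          candidates.append(thought + "\n" + llm("Continue step by step:\n" + thought))
                  # Prune: keep only the most promising branches, then iterate
                  frontier = sorted(candidates, key=score, reverse=True)[:breadth]
              return frontier[0]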

        Humans naturally solve the halting problem when thinking about things... if we work on something long enough without a break, we'll pass out. Maybe when we wake, eat, and go to work we'll stop working on the same problem. But an LLM never sleeps. In theory, with ToT and no time limit, you could find out your AutoGPT spent 10 million in computing resources contemplating navel lint. So there are a number of unsolved problems there.

        What really becomes concerning is if Nvidia achieves its goal of speeding up training/inference by a million times in the next few years, and if the amount of compute we produce increases by a few million times. You and I simply can't put hundreds of minds to work thinking for years straight; machines could.

      • pydry 296 days ago
        Whatever it is, it goes beyond curiosity: it doesn't just come up with the laziest answer, it often invents one.
    • Takennickname 294 days ago
      You're missing out. 90% of my boilerplate code is automatically customized and inserted into my codebase for me by Copilot. Maybe "manual labor" makes you happy; that's up to you.
    • naiv 296 days ago
      What do you mean by 'changed'?
      • unshavedyak 296 days ago
        A lot of people think GPT-4 got worse with a semi-recent change, worse as measured by intelligence or accuracy on complex inputs. Most notably, I think the change was around the time they significantly increased the speed of GPT-4.

        I don't have much of an opinion on this though; I just know I see that description of GPT-4 a lot. I use GPT-4 daily, but I think I don't really challenge it most days, as I've never trusted it enough to do so, hah.

  • javier_e06 296 days ago
    Anthropomorphizing is a good way to get ChatGPT to write some obvious but tedious code, or to remind you how to cherry-pick in git, or any of the myriad tasks that most of us know how to do but would otherwise have to look up again on some site. I give ChatGPT good context and then do a code review of the code it presents, the same way I do code reviews of code I see elsewhere. One warning here, though: don't ask ChatGPT to code something that you don't know how to interpret and/or aren't willing to test for correctness.
  • boredemployee 296 days ago
    The article is introductory in nature and likely irrelevant for the HN audience.

    What worries me is that I have more often seen people here commenting on the author's life and their academic or professional attributes instead of paying attention to the article.

  • grzracz 296 days ago
    I've been using it for weeks, but recently it was so gutted that it barely understands TypeScript. I'm doing my own coding again, because sifting through all the possible bugs is more work than just writing it yourself. A true shame, because before it got "faster" it was capable of creating entire apps.
  • ducharmdev 296 days ago
    On sites like Stack Overflow, incorrect and subtly wrong answers are often peppered with corrective follow-up answers. But in a ChatGPT session, I alone have to critique its output, without contextual info regarding the data's source and without a community to help me critique it.

    I find that a bit exhausting; beyond simple use-cases, I've found it easier to just do it myself.

  • btbuildem 296 days ago
    As I've been moving on from writing code to higher-level work, I find these tools very welcome. I can just ask for a quick grid layout or the right ffmpeg string of options and go about my tasks, instead of having to dive into the nitty-gritty or do "manual labour" every time.
  • chank 296 days ago
    > Trust but verify...

    Based on my own attempts to have ChatGPT generate code for me, it's better NOT to trust it. It tends to hallucinate even with simple requirements.

    • apples_oranges 296 days ago
      I think they should use a different name altogether for ChatGPT 4, as it is very different from and superior to 3.5. And when people say ChatGPT doesn't work for them, I would like to know which one they tried.
      • ducharmdev 296 days ago
        I was chatting with someone I know who has a paid subscription for ChatGPT 4. I was venting about the hallucinations I was getting for a certain problem; they mentioned how much better v4 is and proceeded to plug my prompt into it. The answer it gave was actually a more blatant lie than the responses I got from v3.5.
      • ben_w 296 days ago
        Mm. One of the cases where cute corporate names for each version would actually help.

        GPT: Alice; GPT-2: Bob; GPT-3: Carol; GPT-3.5: Mountain Carol; GPT-4: Dallas; …; GPT-19: Sydney

        • danielbln 296 days ago
          They did that for a while during the GPT-3 days, Ada, Babbage and Davinci all were/are different variations of GPT-3.
      • paulmooreparks 295 days ago
        I've been seeing this "try ChatGPT 4" advice all over HN. I am using 4, and it still hallucinates and generates quite useless code. It's only marginally better in my experience.
    • agentcoops 296 days ago
      If you're already a programmer, there's no excuse for not working directly with the APIs. Learn about context and prosper.
      • MetaWhirledPeas 296 days ago
        > there's no excuse for not working directly with the APIs

        You mean other than the fact that it costs money?

        • agentcoops 296 days ago
          Certainly, but GPT-3.5 is effectively free via the API, especially considering you have to hit $5 or whatever to even be as expensive as ChatGPT (and it's relatively difficult to rack up a high bill on GPT-4 now that they have token rate limits). The problem of general and global access to unrestricted LLMs is a very, very serious question, but here we're talking largely about engineers, and since it's relatively difficult to spend more than a Netflix subscription via the API, I'm assuming this is conceivably in budget.
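
          For anyone who hasn't tried it, the barrier really is low. A minimal sketch with the 2023-era openai Python package (the key and prompt are placeholders):

            import openai

            openai.api_key = "sk-..."  # your key here

            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[
                    {"role": "system", "content": "You are a terse coding assistant."},
                    {"role": "user", "content": "Write a Python function that flattens a nested list."},
                ],
                temperature=0,  # keep coding answers mostly deterministic
            )
            print(resp["choices"][0]["message"]["content"])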
      • hxugufjfjf 296 days ago
        There is one. OpenAI has not yet given you access (waitlist).
      • rokhayakebe 296 days ago
        How are you using the APIs?
  • Archipelagia 296 days ago
    Eh, I might be a whiner, but seeing nature.com, I expected something better than a Malcolm Gladwell-level article.

    Okay, the tip about anthropomorphisation is okay, and reminding people that you can't believe LLM output is always good... but, like, I'd assume an average Nature reader is smart enough not to need tips like "iterate" or "embrace change".

    • culebron21 296 days ago
      I gave this a try and was disappointed to see "common sense"-style reasoning, too.

      Regarding Malcolm Gladwell, do you mean the "Blink" book?

  • deathmonger5000 296 days ago
    I wrote a coding assistant that makes it easy to use natural language to create and modify code, with no copy-pasting necessary:

    https://github.com/ferrislucas/promptr

  • anotherpaulg 296 days ago
    I've been using the OpenAI APIs to write and edit code for 3-4 months now, so I think that makes me an old timer (ha!). I have also been building tooling to improve the chat-based coding experience [0]. All of this work has given me an opportunity to think about how to work best with GPTs on coding, and I've shared some thoughts about this in the past [1].

    Here are some of my thoughts on how to code with ChatGPT. Many overlap with topics covered in the Nature article.

      - It really does help to think of ChatGPT as a junior coder. I mentioned this in the writeup about my first AI coding project [2]. A bunch of things in this list would also be helpful when working with a junior dev.
      - GPT isn't good at code architecture. It will repeat code and take the shortest, laziest path to coding up your request. You need to walk it through changes like you might with a junior dev. Ask for a refactor to prepare, then ask for the actual change. Spend the time to ask for code quality/structure improvements.
      - Don't copy-and-paste between a chat session and your files. Use a tool like Copilot [3] or aider [0] that will give GPT your code to edit and apply its changes automatically. This is critical as it removes friction and lets you iterate quickly on code with GPT.
      - GPT is amazing at generating fresh new self-contained code. It takes more skill and better tools to work with it to edit existing code.
      - Break down a big change and ask for a series of smaller, self-contained steps. This is a smart way to code solo, but it really helps GPT succeed at more complex code modifications.
      - GPT has a wide breadth of coding knowledge and impeccable command of syntax... but it is sometimes overconfident about details. So it will usually choose the right library for a task, but sometimes make an error about the details of the API/method/params. Paste doc snippets or error messages into the chat and it will fix those bugs (see the sketch just after this list).
      - GPT is really strong at churning out good boilerplate and roughing in a solution. This reduces the activation energy required to start larger changes, or changes which require unfamiliar libraries/packages/languages. GPT can quickly prepare the ground for you to do the interesting work.
      - If you can, use gpt-4 (not gpt-3.5-turbo) since it can successfully generate larger, more complicated code changes.
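
    To make the error-message tip concrete, here is a hedged sketch of that loop (2023-era openai 0.x API; run_tests is a hypothetical harness returning (ok, error_text), not part of any library):

      import openai

      def fix_until_green(request, run_tests, max_rounds=3):
          messages = [{"role": "user", "content": request}]
          for _ in range(max_rounds):
              code = openai.ChatCompletion.create(
                  model="gpt-4", messages=messages
              )["choices"][0]["message"]["content"]
              ok, error_text = run_tests(code)  # hypothetical test harness
              if ok:
                  return code
              # Feed the failure straight back, as you would paste it into the chat
              messages.append({"role": "assistant", "content": code})
              messages.append({"role": "user", "content": "That fails with:\n" + error_text + "\nPlease fix it."})
          return code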
    
    
    [0] https://github.com/paul-gauthier/aider

    [1] https://news.ycombinator.com/item?id=36020809

    [2] https://github.com/paul-gauthier/easy-chat

    [3] https://github.com/features/copilot

    • agentcoops 296 days ago
      Really fantastic advice. The only corollary that I'll over-emphasize to those reading: do not rely on ChatGPT; learn to work with the APIs directly, or with tools built explicitly for your purpose.

      I've also been working with the APIs directly for about that long, and I only get more depressed daily reading the horrible takes about GPT's problems based solely on ChatGPT (and hearing about how even many companies try to use the product). GPT has forever changed how I interact with computers.

    • james-revisoai 296 days ago
      Great tooling.

      I was using CodeGPT, but this looks better. And it's exciting to have those command-line options to easily insert other contextual input, like command outputs.

      Can I ask about your plans? CodeGPT planned to add semantic embeddings of code, but I haven't seen any updates for quite a while. The idea being that, like walking an abstract syntax tree of the code, you could include relevant functions and definitions (classes/constants/etc.) to help improve output quality.

      Alongside what you note with module/library versions and pasting in doc snippets, I have often found it necessary to include package.json files to nudge the results towards the right API calls. I think semantic code search might finally push it to being able to work even on comprehensive, multi-service style repos.

      For MVC or service-architected solutions, I think this could be more helpful than including whole files, though it may be overkill. Do you have any thoughts on doing this?

      • anotherpaulg 296 days ago
        Thanks for checking out aider. Let me know if you give it a try and find it useful.

        Yes, aider already has features to provide GPT with "code context" to let it edit larger, more complex codebases. I wrote up some notes about these features:

        https://aider.chat/docs/ctags.html

        You might be especially interested in the "future work" section near the end. I have actually shipped some of these ideas into the tool already. I need to find some time to write up the details.

        • james-revisoai 296 days ago
          I did read that and found it interesting, but without knowing it was now filtering contextually (awesome!), I didn't consider it for the codebases I was looking at due to the context size needed. It seems like keyword matching vs. semantic/index search at first glance, but I see you are indeed pushing the envelope here.

          Great to see, thanks again for developing it.

    • SOLAR_FIELDS 296 days ago
      I've been using aider since last week, when I got GPT-4 API access, and it has easily quadrupled my coding productivity. I didn't realize how slow it was to constantly be pasting and copying in the browser until I started using it.
      • anotherpaulg 296 days ago
        Glad to hear aider is working well for you!
    • Topfi 296 days ago
      >Break down a big change and ask for a series of smaller, self contained steps.

      These are all good tips and very much align with my experience over the months, but breaking a complex task into steps as small, self-contained, and logical as possible is truly going to make the biggest difference for most people.

      Once I understood that, just like previous versions (GPT-3), ChatGPT has no internal memory beyond what has been typed, my interactions with the model changed for the better. Depending on the task, I either provide simple steps or ask the model to write simple steps based on my task layout; then I provide input on those and ask for refinement of the steps with a focus on code generation, removing anything superfluous or not focused on code creation. With that, the output quality became significantly more consistent and usable.

      I can understand why, for a lot of more experienced developers, this can seem tedious, and I fully get why it leads a lot of observers to feel that, considering the effort required to set these models up for success, they might as well code it themselves, especially since this setup process can eat into the precious 25-prompt limit, which I hit consistently.

      However, I feel that once these models have become more efficient, a lot of this outlining may be handled in the background.

      The same goes for checking code for errors, in my eyes. For the same reason (no internal memory), there can often be errors in the provided code, yet asking the model to check for errors generally resolves them. In most cases this works even without providing compile errors, though those of course do improve the response.

      I try my hardest not to anthropomorphize these models, so pardon this comparison, but in fairness, even with our human wetware memory, the average dev is rarely able to write flawless code on the first try without the need for any revisions.

      If we get a more efficient model of comparable quality to GPT-4, adding a line to the frontend to request a second pass on all code-based requests may not be unreasonable and could, in my experience, yield more consistently usable results.
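
      That second pass can be as simple as one extra request. A sketch (again with the 2023-era openai package; the wording of the review prompt is my own assumption):

        import openai

        def second_pass(code):
            # Ask the model to re-check its own output before you run it
            review = openai.ChatCompletion.create(
                model="gpt-4",
                messages=[{"role": "user",
                           "content": "Check this code for bugs and return a corrected version:\n" + code}],
            )
            return review["choices"][0]["message"]["content"]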

      Considering how quickly OpenAI released gpt-3.5-turbo, I am hopeful that we will see such a development soon, though I currently do not have personal access to the GPT-4 API, so maybe that could be made more accessible first.

    • knowsuchagency 296 days ago
      Aider is such an awesome project! I didn't know about it until I read this comment. I also wanted a way to provide my code as context from within the terminal without having to copy and paste back and forth. The tool I wrote (llmo) seems pretty similar to yours, although it uses the Textual library and Rich.

      https://github.com/knowsuchagency/llmo

      I'm really excited to try out aider, thanks for making it!

    • braindead_in 295 days ago
      Aider looks cool. Gonna try it out today. Have you tried out StarCoder or Replit-code?
  • louiskw 296 days ago
    A lot of the article's core arguments around trust and safety sound very relevant to the problem I'm trying to solve [0], which is codebase search.

    Many orgs have a big enough codebase that they can improve productivity just by surfacing their existing patterns.

    [0] https://github.com/BloopAI/bloop

  • jasfi 296 days ago
    I'm trying to get it to write full-scale apps via the APIs. It's very difficult once you move past one file, but it's a challenge. The problem is that this tech isn't perfect and can be unpredictable.
    • throwaway4aday 296 days ago
      Same. It requires a phenomenal amount of tooling and prompt engineering, but I'm sure it's possible, at least for relatively small, text-focused apps.
  • foxbyte 296 days ago
    It underscores the power of AI tools like ChatGPT while stressing the need for careful use. New tools like cwhy and ChatDBG that use AI to explain compiler errors and assist debugging are fascinating developments.
  • ionwake 296 days ago
    Does anyone know if you can use Copilot with GPT-4? I already have GPT-4 access but have not been approved for Code Interpreter, hence I am still using chat to handle code tasks.
    • mellosouls 296 days ago
      https://github.com/github-copilot/chat_waitlist_signup/join

      I can't remember what the other platform options are, but for VSCode you need to use a preview version, which has meant I haven't had much time to test it yet, as I'm full on with the stable release.

      But as an example, I can select a block of code and ask it to generate tests, explain it, or various other things. I haven't tried it further yet to see to what extent it provides value beyond just saving copying and pasting and establishing a context in ChatNormal.

      It's not clear either that it is 4 as you asked; it may be the default 3.5. I've had a very brief look but couldn't see the spec.

      Update: according to The Verge, it's 4:

      https://www.theverge.com/2023/3/22/23651456/github-copilot-x...

      • siva7 296 days ago
        I got access to Copilot X and it's certainly nowhere near the capabilities of GPT-4, so I assume it's 3.5-turbo; even Phind.com is significantly better.
    • leodriesch 296 days ago
      GitHub Copilot X [0] seems to be exactly that; it is in technical preview with a waitlist.

      [0]: https://github.com/features/preview/copilot-x

  • foxbyte 296 days ago
    That's a valid point. Even as AI evolves, human supervision remains crucial. Pair programming with AI could enhance its utility safely.
  • gumballindie 296 days ago
    Now nature.com gives "coding" advice? Is OpenAI paying for such articles, or are they bored?
    • juujian 296 days ago
      It's an editorial from one of their technology editors. I have read a ton of useful editorials from Nature throughout my academic career -- not sure if it's beyond their scope, but I would be sad to see it go.
    • visarga 296 days ago
      I thought you needed to publish amazing discoveries to get in. We already have a million articles about prompting.
      • palunon 296 days ago
        For an article in the Nature scientific journal, sure. For an op-ed on Nature.com? Nah.
    • progx 296 days ago
      Run-of-the-mill clickbait with absolutely generic advice. This could have been written by ChatGPT.
      • juujian 296 days ago
        If you frequently browse Hacker News, you are probably not the main audience. I just sent this to one of my less technology-savvy colleagues in my discipline, because it is difficult to find content that provides a realistic expectation of what ChatGPT can and cannot do for people who do not currently write code but need to keep up with things.
    • dlb007 296 days ago
      same thought
  • Richard649 295 days ago
    [dead]
  • iLoveOncall 296 days ago
    [flagged]
    • moron4hire 296 days ago
      I have 20 years of experience and thought the article was a pretty good summary of LLM tools and techniques for programming. While it is admittedly pretty high-level, I don't expect an article in Nature to go much deeper on any topic, let alone software development.

      And it definitely doesn't have "OpenAI Voice". This was a less formal, less pedagogical style than ChatGPT tends to generate.

      • iLoveOncall 296 days ago
        "Six tips for better coding" implies detailed tricks, not a high level presentation.

        I'm sure the "author" rewrote it, I'm just saying they probably got the substance from ChatGPT.

    • lijok 296 days ago
      [flagged]
  • cardanome 296 days ago
    I am absolutely disgusted by the idea of people using ChatGPT for serious coding work.

    Maybe I am just getting old, but the idea of using a non-deterministic tool that can hardly be reasoned about, and that will straight up hallucinate facts, for any professional work sounds insane to me.

    Yes, I do see the value for junior devs, as I am sure it can drastically increase their output in the short term, but aren't they shooting themselves in the foot in the long run? That might sound elitist, but at the end of the day you will need to learn to read technical documentation anyway, and once you understand it there is no need for ChatGPT.

    Yes, if you are constantly hopping from one framework-of-the-month to the next big thing, you just don't have the time to learn anything in depth, and then again ChatGPT can help. But do you really want to live like this? Instead of band-aid solutions, we might want to push for less hype-driven and more pragmatic development styles that allow us the time to learn our frameworks.

    • RayVR 296 days ago
      I'm not an expert in every area I need to touch. I don't work at a place like Google, where there are teams dedicated to solving the same problems across the org that I can rely on.

      Getting a simple example roughly tailored to my needs, which I can use as a launching point, is often extremely useful. For example, I encounter scientific libraries with poor documentation, and I could spend half my day reading through it or searching Stack Overflow for a good explanation… or I can ask GPT-4 for a specific implementation using a specific library, then ask it to explain anything I don't understand.

      These tools aren’t solving any real problems for me - they are replacing slower and less responsive systems I already relied on for research and development.

    • lucideer 296 days ago
      > a non-deterministic tool that can hardly be reasoned about that will straight up hallucinate facts for any professional work

      This sounds like a decent enough description of the human brain.

      Don't get me wrong: I'm not for a moment suggesting that what we have today is anything approaching Artificial General Intelligence, and I am extremely concerned and worried about the inevitable massive damage AI is going to do to our world, but I do think it's a little funny that many people's specific objections to it amount to "it can do what people do". Yes. That's the idea.

      The main concern with AI usage is not that it will write bad buggy code or lie: we already do that ourselves plenty, so that's not a novel skill in the professional arena. The main concern is that we can scale that stupidity.

      But that's a problem of scale: individual usage isn't really going to do you notable damage on an individual level.

      > Yes, if you are constantly hopping from one framework of the month to the next big thing, you just don't have the time to learn anything in depth and then again ChatGPT can help.

      I actually think the opposite is true. Having used ChatGPT quite a lot for work (by mandate - I wouldn't have chosen to either but am glad now that I had to), I've found it's really very good at generating bad code and being confidently wrong (again, much like people): if you ask it about a subject you're not deeply knowledgeable about, it's going to lead you astray. The most astute way to use it is actually in going the last mile on something you're already very confident in, so that you can correct it as needed.

    • spicyusername 296 days ago
      Most senior devs I know use it as a rubber duck.

      It's good at generating simple things that are 60% to 100% correct.

      It's good at summarizing high-level concepts, or how multiple high-level concepts relate, with 60% to 100% accuracy.

      It's OK at helping you think about things to troubleshoot when you have an error you're not sure what to do with.

      To be honest, I would say it's actually not going to be good for junior devs, because they don't have the skills to properly fact-check quickly. But for more senior folks it's very easy to immediately see what's wrong, ignore those parts or ask for clarification, and use what's right.

      If you mostly know what you're doing, it can be very helpful to get immediate feedback, niche examples targeted exactly to what you're working on, or summaries of whatever high level concept you need a little more clarification on.

      Much much faster than spelunking through Google.

      As a concrete example, just this morning I had it summarize Python's asyncio Queues versus Tasks: what they are, a few examples of using them, and when you would want to use one versus the other. All in just a few minutes.
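
      For reference, the gist of that pattern as a minimal runnable sketch (the names and numbers are mine, not ChatGPT's output):

        import asyncio

        async def worker(name, queue):
            # A Task runs this coroutine; the Queue hands jobs to whichever
            # worker is free, which is the usual reason to combine the two.
            while True:
                job = await queue.get()
                await asyncio.sleep(0.1)  # stand-in for real work
                print(name, "finished job", job)
                queue.task_done()

        async def main():
            queue = asyncio.Queue()
            for i in range(10):
                queue.put_nowait(i)
            workers = [asyncio.create_task(worker(f"w{n}", queue)) for n in range(3)]
            await queue.join()  # block until every job is marked done
            for w in workers:
                w.cancel()      # the workers loop forever, so cancel them
            await asyncio.gather(*workers, return_exceptions=True)

        asyncio.run(main())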

    • meesles 296 days ago
      I've been giving it a shot based on recommendations from my bosses, and I'm a senior+ dev.

      80% of the time, the suggestions are too generic for our codebase and have to be patched up with our variable names, methods, etc. About half the time it completely misses the mark/intent of what is being written and offers an auto-complete for a different problem.

      20% of the time, it actually acts like a full-line or full-section autocomplete. One area where I've found it particularly helpful is writing tests. For example, I added a new integration yesterday that touched a bunch of interface files for those integrations. Copilot was able to pick up on that and auto-generate a 90%-functional test of ~30 lines, including setup and assertions.

      I'm not convinced it's saving me time yet since I have to correct it/ignore it most of the time. When it works, it's cool. And like another comment said, typing speed has never been a bottleneck for me (record of 155 wpm on typing tests, I probably sit around 120-130 for average day-to-day typing).

    • t43562 296 days ago
      I often start writing (short) programs based on something else I wrote - I take what I did before and hack it to start doing what I need now. Even when you end up changing everything there's some advantage for me in not having to start entirely from scratch. I can see it being useful for that.

      I feel exactly as you do and I've reviewed code, found all sorts of weirdness and then realised that the developer used ChatGPT to do it and probably to do its unit tests. It all worked within itself but clearly didn't work in the real world at all! :-)

      Ultimately, I cannot really be bothered to use ChatGPT. It might be my backwardness but I can be bothered to use an IDE when it helps (instead of VIM) so I don't think I'm a hopeless fundamentalist. Why has my brain written it off? I don't know but I know I am lazy and I do things that make life easier.

    • glau 296 days ago
      Personally, I’M terrified of people using ChatGPT for serious coding work. They will be seeding the future with vulnerable/buggy software that no one understands.

      Granted, us humans are doing that now. Just many orders of magnitude slower. Probably slow enough that we can find/fix the important stuff.

      The other aspect that terrifies me is the potential for nation state entities with deep pockets to inject vulnerabilities. What would it be worth to the NSA to seed the future with programs they could exploit?

      • throwaway4aday 296 days ago
        LLMs are also capable of analyzing, describing, and debugging software. There have been several papers published on this. So I wouldn't worry too much about any new buggy software being produced. In a few years, LLMs will probably be issuing PRs on your repos to fix things you haven't gotten around to or noticed yet.
        • glau 295 days ago
          This actually seems like a great use case: read and flag potential bugs for review. Although, if it wrote the code in the first place, how would it identify the bug? What it produced was already something close to the most likely result given what it had seen so far.
      • mym1990 296 days ago
        Are you able to back up the claim that ChatGPT code, when refined and deployed to production, is more buggy than code that is patchworked together from 10-20 Stack Overflow posts until something sticks?
        • glau 295 days ago
          I didn't make that assertion. My fears are based on the speed at which code of unknown quality, and which is poorly understood, can be produced. I specifically said:

          Granted, us humans are doing that now. Just many orders of magnitude slower. Probably slow enough that we can find/fix the important stuff.

    • juujian 296 days ago
      What's so sacrosanct about writing code? If I vaguely know what I want to do in bash, but I don't know everything off the top of my head, I could search Stack Overflow, or I could just type it into ChatGPT, apply some judgement, and run it. We're not slaughtering a holy cow; it's just another tool in the toolbox, in addition to the manual and Google.
    • GolDDranks 296 days ago
      Don't be. There are some use cases that fit very well.

      For example, we plan to use GPT-4 to generate text that ends up being seen and read by our customers. (I can't disclose the details, as it's near the core of our business model.) We need a _lot_ of text, and it's in a style that is time-consuming to think up and write by hand. However, all the generated data is checked for correctness by humans. I think these tips are excellent for designing the generating process so that the end result is of as high a quality as could reasonably be expected.

      Edit: Oh, you mean coding work as in generating code for developers? Not in the sense of using it as a component of a data-generating/transforming system? In that case, I pretty much agree.

      Edit 2: Oof, I just realized that I mistook this whole thread for another one posted earlier on Hacker News. I thought we were talking about engineering systems that have GPT-4 as a subsystem: https://platform.openai.com/docs/guides/gpt-best-practices

      • cardanome 296 days ago
        Yeah, I meant coding.

        Text generation with human proofreading/editing is definitely a valid use case. Maybe not for top-tier prose, but yeah, if you need large amounts of decent-quality text it is the way to go. It definitely makes some new business ideas viable.

    • jsight 296 days ago
      I've found the opposite to be true, actually. I'd prefer it in the hands of someone more experienced.

      Real world example: Hey chatgpt, write me some Camel routes to take an input off of a JMS queue, wire it up to the camel salesforce getSObject endpoint, use these headers for the lookup, and write some unit tests.

        Actual result: code that is 99% correct, but with a couple of messed-up parameters and a slightly wrong approach to mocking some things in the unit test. Having used all of the frameworks extensively, it took ~5 minutes to fix and certainly saved time, partially because I'm used to reviewing code from people who make similar mistakes.

      I can't imagine that working out as well if it was a truly junior engineer (not just a junior in name only) who didn't understand what was going on with the code very well.

      And this is just the beginning. It is hard to imagine not using this for the bulk of the work within a few years.

    • louhike 296 days ago
      I don’t think it’s more insane than copying and pasting code from anywhere on the internet. You have to understand what you use and you must not trust it blindly, whatever the source.
      • cardanome 296 days ago
        Well, I don't copy and paste code from the internet either. It's fine for learning but at some point you should be able to write your own code from scratch.
    • wouldbecouldbe 296 days ago
    I've been moving a large HTML/Nunjucks CMS to Next.js. It's pretty decent at TSX. And when I say "this is the template, rewrite it based upon this example," it does a decent job and definitely saves me time. For rewriting tasks like these it's been a time saver. There were also tasks where it mostly wasted my time.
    • AbrahamParangi 296 days ago
      Consider whether or not this is an ego-driven reaction. Do you have the same feelings about your compiler? It also writes code that you likely don't read and likely don't understand in depth.
      • cardanome 296 days ago
        A non-deterministic compiler would be pretty terrifying. In that case I would expect many people to still be writing assembly by hand for any serious work.

        Yes, there are missing/unclear language specifications and compiler bugs, and sometimes you get surprised by certain optimizations, but you can reason about compilers well enough for practical purposes. Plus, it's not economically viable anymore to do without them, sadly.

    • rejectfinite 296 days ago
      It's nice to get a template to build from.

      But yes, I have seen it make up its own PowerShell cmdlets (my main usage).

      So it cannot be used unless you already know the answer.

    • adamsmith143 296 days ago
      Strong "Just read the manual" vibes.
      • cardanome 296 days ago
        Just reading the manual would have saved me so many times when I was starting programming.
  • pwython 296 days ago
    For those saying Jeffrey Perkel has "no programming experience," please see: https://scholar.google.co.uk/scholar?as_q=&btnG=Search+Schol...
    • sanitycheck 296 days ago
      I had a good skim through those myself after seeing the (now flagged) comment you're referring to, and I didn't spot any evidence of programming experience. That's not to say there isn't any, of course! He seems to be a career science writer, and former biologist.