31 comments

  • ses1984 1 hour ago
    I asked copilot how developers would react if AI agents put ads in their PRs.

    >Developers would react extremely negatively. This would be seen as 1. A massive breach of trust. 2. Unprofessional and disruptive. 3. A security/integrity concern. 4. Career-ending for the product. The backlash would likely be swift and severe.

    Sometimes AI can be right.

    • simonw 1 hour ago
      Which product called Copilot did you ask?
    • temp0826 1 hour ago
      I'm reminded of the ads in the motd when logging into Ubuntu... nothing infuriated me more (I only used it for a short period).
      • Meneth 38 minutes ago
        Me too, main reason I switched to Debian.
    • hk__2 1 hour ago
      It’s not really ads, it’s more like "Sent from my iPhone"-style sentences at the end of PR texts.
      • phoe-krk 1 hour ago
        I agree. It's not an advertisement, it's simply a piece of information about your particular choice of technology.

        --------------

        Sent from HackerNews Supreme™ - the best way to browse the Y Combinator Hacker News. Now on macOS, Windows, Linux, Android, iOS, and SONY BRAVIA Smart TV. Prices starting at €13.99 per month, billed yearly. https://hacker-news-supreme.io

        • cozzyd 10 minutes ago
          I'm curious about how a hacker news client on a smart TV would work...
          • phoe-krk 7 minutes ago
            You can try it now! Prices starting at €13.99 per month, billed yearly.
        • NetOpWibby 44 minutes ago
          Domain available for $50 from Cloudflare
      • layer8 19 minutes ago
        "Sent from my iPhone" actually is an ad when it’s the result of default settings.

        Furthermore, the ads in TFA are for Raycast, but apparently it’s not Raycast doing the injecting.

        • saidnooneever 11 minutes ago
          Companies pay for ad distribution. It's not like they give a free ad service. Maybe they don't choose how the campaigns are done (and don't give a shit).

          Brawndo - it's what your brain needs

      • cozzyd 1 hour ago
        which is an ad...

        Sent from Firefox on AlmaLinux 9. https://getfirefox.com https://almalinux.org

      • MarsIronPI 18 minutes ago
        "Sent from my iPhone" is just as bad. If you don't see it then IDK what to tell you.
      • flumes_whims_ 27 minutes ago
        If it had only mentioned being made with Copilot that would be one thing, but it didn't just mention Copilot. It advertised a different third-party app.
      • godzillabrennus 22 minutes ago
        It's not an ad, it's a message from our sponsor.

        This message brought to you by TempleOS

  • Aurornis 1 hour ago
    I actually love these ads and also the way Claude injects itself as a co-author.

    Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.

    I think we should continue encouraging AI-generated PRs to label themselves, honestly.

    I’m not against AI coding tools, but I would like to know when someone is trying to have the tool do all of their work for them.

    • mikkupikku 42 minutes ago
      It's not a self-own, it's honest disclosure. It's unethical (if not outright fraudulent) to publish LLM work as if it were your own. Claude setting itself as coauthor is a good way to address this problem, and it doing so by default is a very good thing.
      • zeroonetwothree 41 minutes ago
        I think it depends a lot if you reviewed it as carefully as you would your own code.

        Of course most people don’t do that

        • mikkupikku 35 minutes ago
          I don't put human code reviewers down as coauthors let alone the sole authors of my commit. So honestly, the fact that a vibe coded commit lists me as the author at all is a little bit dodgy but I think I'm okay with it. The LLM needs to be coauthor at least though, if not outright the author.

          So even if I go over the commit with a fine tooth comb and feel comfortable staking my personal reputation on the commit, I still can't call myself the sole author.

          • hombre_fatal 19 minutes ago
            The implementor only got credit in the days when the implementor was a human who had to do a lot of the work, often all of it.

            Now that the cost of writing code is $0, the planner gets the credit.

            Like how you don't put human code reviewers down as coauthors, you also don't put the computer down as a coauthor for everything you use the computer to do.

            It used to be that if someone wrote the software, you knew they put in a certain amount of work writing and planning it. I think the main issue now is that you can't know that anymore.

            Even something that's vibe-coded might have many hours of serious iterative work and planning. But without using the output or deep-diving the code to get a sense of its polish, there's no way to tell if it is the result of a one-shot or a lot of serious work.

            "Coauthored by computer" doesn't help this distinction.

        • raphinou 12 minutes ago
          In my project's readme I put this text:

             "There is no commit by an agent user, for two reasons:
          
              * If an agent commits locally during development, the code is reviewed and often thoroughly modified and rearranged by a human.
              * I don't want to push unreviewed code to the repo, so I have set up a git hook refusing to push commits done by an LLM agent."
          
          
          It's not that I want to hide the use of LLMs; I just modify the code a lot before pushing, which led me to this approach. As LLMs improve, I might have to change this though.

          Interested to read opinions on this approach.
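          For anyone curious what that hook looks like, a minimal sketch of the check (the author pattern "claude" is an assumption; match whatever identity your agent commits under):

          ```shell
          # Guard for .git/hooks/pre-push: fail when a range contains
          # commits authored by the agent. "claude" is a placeholder.
          refuse_agent_commits() {
              if git log --format='%an %ae' "$1" | grep -qi 'claude'; then
                  echo "refusing to push: $1 has agent-authored commits" >&2
                  return 1
              fi
          }
          ```

          A real pre-push hook reads the pushed refs from stdin and would call this on each remote..local range.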

          • embedding-shape 8 minutes ago
            > * I don't want to push unreviewed code to the repo, so I have set up a git hook refusing to push commits done by an LLM agent."

            Seems... Not that useful?

            Why would someone make commits in your local projects without you knowing about it? That git hook only works on your own machine, so you're trying to prevent yourself from pushing code you haven't reviewed, but the only way that can happen is if you use an agent locally that also makes commits without you being aware of it.

            I'm not sure how you'd end up in that situation, unless you have LLMs running autonomously on your computer that you don't have actual runtime insight into, which seems like a way bigger problem than "code I didn't review was pushed".

    • QuantumNomad_ 1 hour ago
      > […] and also the way Claude injects itself as a co-author.

      > Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.

      I was doing the opposite when using ChatGPT: specifically, manually setting the git commit author as ChatGPT, complete with the model used, and setting myself as committer. That way I (and everyone else) can see which parts of the code were completely written by ChatGPT.

      For changes that I made myself, I commit with myself as author.
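      A minimal sketch of that workflow (the model name and email are hypothetical placeholders; the committer stays whatever identity git is configured with):

      ```shell
      # Demo in a throwaway repo: model as author, human as committer.
      d=$(mktemp -d) && cd "$d" && git init -q .
      git config user.name "A Human"
      git config user.email "human@example.invalid"
      git commit -q --allow-empty \
          --author="ChatGPT (gpt-4o) <chatgpt@example.invalid>" \
          -m "Add feature (written by ChatGPT)"
      git log -1 --format='author: %an / committer: %cn'
      ```

      `git log --author=ChatGPT` then filters exactly the AI-written commits.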

      Why would I commit something written by AI with myself as author?

      > I think we should continue encouraging AI-generated PRs to label themselves, honestly.

      Exactly.

      • yarn_ 1 hour ago
        "Why would I commit something written by AI with myself as author?"

        Because you're the one who decided to take responsibility for it, and who actually chose to submit the PR in its final form.

        What utility do the reviewers/maintainers get from you marking what's written by you vs. ChatGPT? Other than your ability to scapegoat the LLM?

        The only thing that actually affects me (the hypothetical reviewer) and the project is the quality of the actual code and, ideally, the presence of a contributor (you) who can actually answer for that code. The presence or absence of LLM-generated code by your hand makes no difference to me or the project. Why would it? Why would it affect my decision making whatsoever?

        It's your code, end of story. Either that or the PR should just be rejected, because nobody is taking responsibility for it.

        • Krssst 55 minutes ago
          As someone mostly outside of the vibe coding stuff, I can see the benefit in having both the model and the author information.

          Model information for traceability and possibly future analysis/statistics, and author to know who is taking responsibility for the changes (and, thus, has deeply reviewed and understood them).

          As long as both pieces of information are present in the commit, I guess which commit field should hold which is for the project to standardise (but it should be normalised within a project, otherwise the "traceability/statistics" part cannot be applied reliably).

          • corndoge 50 minutes ago
            Yeah, nothing wrong with keeping the metadata - but "Authored-by" is both credit and an attestation of responsibility. I think people just haven't thought about it too much and see it mostly as credit and less as responsibility.
            • josephg 23 minutes ago
              I disagree. “Authored by” - and authorship in general - says who did the work. Not who signed off on the work. Reviewed-by me, authored by Claude feels most correct.
          • yarn_ 39 minutes ago
            Future analysis is a valid reason to keep it, that's a good point and I agree with it.
        • waisbrot 50 minutes ago
          Claude adds "Co-authored by" attribution for itself when committing, so you can see the human author and also the bot.

          I think this is a good balance, because if you don't care about the bot you still see the human author. And if you do care (for example, I'd like to be able to review commits and see which were substantially bot-written and which were mostly human) then it's also easy.

          • yarn_ 42 minutes ago
            > I'd like to be able to review commits and see which were substantially bot-written and which were mostly human) then it's also easy.

            Why is this, though? I'm genuinely curious. My code-quality bar doesn't change either way, so why would this be anything but distracting to my decision making?

            • 59nadir 35 minutes ago
              Personally, it would make the choice to say no to the entire thing a whole lot easier if they self-reported automatically, with no way to hide the fact that they've used LLMs. I want to see it for dependencies (I already avoid them, and would especially do so with ones heavily developed via LLMs), products I'd like to use, PRs submitted to my projects, and so on, so I can choose to avoid them.

              Mostly this is because, all things considered, I really do not need to interact with any of that, so I'm doing it by choice. Since it's entirely voluntary I have absolutely no incentive to interact with things no one bothered to spend real time and effort on.

              • rapind 16 minutes ago
                This is shouting at the clouds I'm afraid (I don't mean this in a dismissive way). I understand the reasoning, but it's frankly none of your business how I write my code or my commits, unless I choose to share that with you. You also have a right to deny my PRs in your own project of course, and you don't even have to tell me why! I think on github at least you can even ban me from submitting PRs.

                While I agree that it would be nice to filter out low effort PRs, I just don't see how you could possibly police it without infringing on freedoms. If you made it mandatory for frontier models, people would find a way around it, or simply write commits themselves, or use open weight models from China, etc.

              • yarn_ 11 minutes ago
                I mean sure, in the same sense that law enforcement would be a lot easier if all the criminals just came to the police station and gave themselves up

                Again though, people can trivially hide the fact they used an LLM to whatever extent, so we kind of need to adjust accordingly.

                Even if saying no to all LLM involvement seemed pertinent, it doesn't seem possible in the first place.

            • ctxc 36 minutes ago
              Accountability. Same reason I want to read human-written content rather than obvious AI: both can be equally shit, but at least with humans there's a high probability of the aspirational quality of wanting to be considered "good".

              With AI I have no way of telling if it was from a one line prompt or hundreds. I have to assume it was one line by default if there's no human sticking their neck out for it.

              • yarn_ 16 minutes ago
                The human who submitted the PR is 100% accountable either way, that's partly my point.

                Disclosing AI has its purposes, I agree, but it's not like we can reliably get everyone to do it anyway, which also leads me to thinking this way.

            • jacobgkau 32 minutes ago
              LLMs can make mistakes in different ways than humans tend to. Think "confidently wrong human throwing flags up with their entire approach" vs. "confidently wrong LLM writing convincing-looking code that misunderstands or ignores things under the surface."

              Outside of your one personal project, it can also benefit you to understand the current tendencies and limitations of AI agents, either to consider whether they're in a state that'd be useful to use for yourself, or to know if there are any patterns in how they operate (or not, if you're claiming that).

              Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.

              • yarn_ 20 minutes ago
                Sure, the point about LLM "mistakes" etc. being harder to detect is valid, although I'm not entirely sure how to compare this with hard-to-detect human mistakes. If anything I find LLM code shortcomings often a bit easier to spot, because a lot of the time they're just unneeded dependencies, useless comments, useless replication of logic, etc. This is where testing comes into play too, and I'm definitely reviewing your tests (obviously).

                >Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.

                I mean, listen: I wish with every fiber of my being that LLMs would disappear off the face of the earth for eternity, but I really don't think I'm "isolating myself from the industry" by not simply dismissing LLM code. If I find a PR to be problematic I just cut it off; that's how I review in the first place. I'm telling some random human who submitted the code to me that I'm rejecting their PR because it's low quality, I'm not sending Anthropic some long detailed list of my feedback.

                This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.

      • smrtinsert 1 hour ago
        If you review the code then committing as yourself makes perfect sense to me
        • homebrewer 29 minutes ago
          Linux has used "Reviewed-by" trailers for many years. If you've only done minor editing, or none at all, it's something to consider.
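          Those trailers are plain lines at the end of the commit message; recent git (2.32+) can add them directly with `git commit --trailer`. A sketch with placeholder names:

          ```shell
          # Throwaway repo: agent as author, human review recorded as a trailer.
          d=$(mktemp -d) && cd "$d" && git init -q .
          git config user.name "Reviewer"
          git config user.email "reviewer@example.invalid"
          git commit -q --allow-empty \
              --author="Claude <agent@example.invalid>" \
              --trailer "Reviewed-by: Reviewer <reviewer@example.invalid>" \
              -m "Refactor parser"
          git log -1 --format='%B'
          ```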
        • nemomarx 1 hour ago
          If you review a junior's code, do you commit it under your name?
          • corndoge 53 minutes ago
            A junior is a person. A tool is a tool. Do you credit your text editor with authorship?
            • scottyah 44 minutes ago
              If it contributed significantly to the design and execution, and was a major contributing factor yes. Would you say a reserve parachute saved your life or would you say you saved your own life? What about the maker of the parachute?

              I'd be thanking the reserve and the people who made it, and credit myself with the small action of slightly moving my hand, as much as it's worth.

              Also, text editors would be a better analogy if the commit message referenced whether it was created in the web ui, tui, or desktop app.

            • jacobgkau 30 minutes ago
              False equivalence. A text editor does not type characters that you didn't explicitly type or select.
          • data-ottawa 42 minutes ago
            That’s reviewing code vs contributing code.
      • Imustaskforhelp 1 hour ago
        > Why would I commit something written by AI as myself?

        I don't use any paid AI models (for all my use cases, free models usually work really well), so for some small scripts/prototypes I usually just use the Gemini model; aistudio.google.com is a good one too.

        I then sometimes manually paste it in and just hit enter.

        These are prototypes though, although I build in public. Mostly done for experimental purposes.

        I am not sure how many people might be doing the same though.

        But some of my previous projects have stated "made by gemini" etc.

        Maybe I should write a commit message/description stating AI has written this, but I really like having the message be something relevant to the creation of the file etc., and there's also the fact that GitHub Copilot itself sometimes generates them for you, so you have to manually remove it if you wish to change what the commit says.

    • lokimedes 1 hour ago
      I just submitted my first Claude-authored application to GitHub and noticed this. I actually like it; although anthropomorphizing my coding tools seems a bit weird, it also provides a transparent way for others to weigh the quality of the code. It didn't even strike me as relevant to hide it, so I'd not exactly call it lazy; rather, ask why bother pretending in the first place?
      • waisbrot 47 minutes ago
        Looking back, it would have been neat to have more metadata in my old Git commits. Were there any differences when I was writing with IntelliJ vs VSCode?
        • scottyah 41 minutes ago
          Probably your linter, language, or IntelliSense/whatever tab-complete you used. Claude writes which model was used to write the code, not whether it was in the web UI, TUI app, or desktop app.
    • 8cvor6j844qw_d6 1 hour ago
      It's part of the attribution settings from `.claude/settings.json` if you're referring to Claude Code.

      Personally, I adjusted the defaults since I don't like emojis in my PR.

      [1]: https://code.claude.com/docs/en/settings#attribution-setting...
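      For anyone hunting for the setting: in earlier Claude Code versions it was a single boolean; newer versions expose finer-grained attribution options, so treat this as a sketch and check the linked docs for the current key names:

      ```json
      {
        "includeCoAuthoredBy": false
      }
      ```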

      • silverwind 1 hour ago
        I have instructions for these because the attribution settings don't accept placeholder tokens like `<model>`, `<version>` etc.
    • junon 16 minutes ago
      Agreed! Easy close/ban for me.
    • neya 1 hour ago
      > I would like to know when someone is trying to have the tool do all of their work for them.

      Absolutely spot on. Maybe I'm old school, but I never let AI touch my commit message history. That is for me - when 6 months down the line I am looking at it, retracing my steps - affirming my thought process and direction of development, I need absolute clarity. That is also because I take pride in my work.

      If you let an AI commit gibberish into the history, that pollution is definitely going to cost you down the line, I will definitely be going "WTF was it doing here? Why was this even approved?" and that's a situation I never want to find myself in.

      Again, old man yells at cloud and all, but hey, if you don't own the code you write, who else will?

      • scottyah 38 minutes ago
        There will always be room for craftsmen stamping their work, like the expensive Japanese bonsai scissors. Most of the world just uses whatever mass-produced scissors were created by a system of rotating people, with no clear owner/maker. There's plenty of middle ground for makers who put their mark on their product.
        • neya 19 minutes ago
          Fair enough.
  • kstenerud 1 hour ago
    The ads are annoying, and I'm glad Microsoft will stop doing it.

    One thing I do like, however, is how agents add themselves as co-authors in commit messages. Having a signal for which commits are by hand and which are by agent is very useful, both for you and in aggregate (to see how well you are wielding AI, and the quality of the code being generated).

    Even when I edit the commit message, I still leave in the Claude co-author note.

    AI coding is a new skill that we're all still figuring out, so this will help us develop best practices for generating quality code.

    • yarn_ 1 hour ago
      I don't quite see the benefit of this, personally.

      Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it? The quality+understanding bar shouldn't change just because "oh idk claude wrote this part". You don't get extra leeway just because you saved your own time writing the code - that fact doesn't benefit me/the project in any way.

      Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code get rejected because some reviewer saw it was Claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously it's not like AI can't write a single usable line of code).

      The code is either good or it isn't, and you either understand it or you don't. Whether you or claude wrote it is immaterial.

      • kstenerud 18 minutes ago
        You're quite right that the quality of the code is all that matters in a PR. My point is more historical.

        AI is a very new tool, and as such the quality of the code it produces depends both on the quality of the tool, and how you've wielded it.

        I want to be able to track how well I've been using the tool, to see what techniques produce better results, to see if I'm getting better. There's a lot more to AI coding than just the prompts, as we're quickly discovering.

        • yarn_ 10 minutes ago
          Yep other people pointed this out as well, this makes sense to me.
      • layer8 13 minutes ago
        It’s not about who wrote it, but about who is submitting it. The LLM co-author indicates that the agent submitted it, which is a contraindication of there being a human taking responsibility for it.

        That being said, it also matters who wrote it, because LLMs are more likely than humans to write code that looks like quality code but is wrong.

        • yarn_ 11 minutes ago
          Well, if an agent is submitting it I'm just going to reject it, that's no problem. "Just send me the prompt".
      • sheept 51 minutes ago
        As a reviewer, I do care. Sure, people should be reviewing Claude-generated code, but they aren't scrutinizing it.

        Claude-generated code is sufficient—it works, it's decent quality—but it still isn't the same as human written code. It's just minor things, like redundant comments that waste context down the road, tests that don't test what they claim to test, or React components that reimplement everything from scratch because Claude isn't aware of existing component libraries' documentation.

        But more importantly, I expect humans to be able to stand by their code, and at times defend against my review. But today's agents continue to sycophantically treat review comments like prompts. I once jokingly commented on a line using a \u escape sequence to encode an em dash, how LLMs would do anything to sneak them in, and the LLM proceeded to replace all — with --. Plus, agents do not benefit from general coding advice in reviews.

        Ultimately, at least with today's Claude, I would change my review style for a human vs an agent.

        • yarn_ 44 minutes ago
          I agree with a lot of this, but that's kind of my point: if all these things (poor tests, non-DRY code, redundant comments, etc.) were true of a piece of purely human-written code then I would reject it just the same, so what's the difference? Likewise, if Claude solely produced some really clean, concise, rigorously thought-through and tested piece of code with a human backer, then why wouldn't I take it?

          As you allude to (and I agree), any non-trivial quantity of code, if solely written by Claude, will probably be low quality, but this is apparent whether I know it's AI beforehand or not.

          I am admittedly coming at this as much more of an AI-hater than many, but I still don't really get why I'd care about how-much or how-little you used AI as a standalone metric.

          The people who are using AI "well" are the ones producing code where you'd never even guess it involved AI. I'm sure there are Linux kernel maintainers using Claude here and there; it's not like they expect to have their patches merged because "oh well, I just used Claude here, don't worry about that part".

          (But also yes, of course I'm not going to talk to Claude about your PR. I will only talk to you, the human contributor, and if you don't know what's up with the PR then into the trash it goes!)

      • Forgeties79 1 hour ago
        > Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it?

        Maybe one day we can say that, but currently, it matters a lot to a lot of people for many reasons.

        • yarn_ 54 minutes ago
          > Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code gets rejected because some reviewer saw it was claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously its not like AI can't write a single useable line of code).

          That was my point here, it is a false signal in both directions.

          • Forgeties79 28 minutes ago
            According to you it’s all false. I don’t agree, and it certainly shouldn’t just be taken as a given.

            For instance, I would want any AI-generated video showing real people to have a disclaimer, the same way TV ads note whether the people in testimonials and the like are actors. That is not only not false, but actually a useful signal that helps prevent overly deceptive practices.

            • yarn_ 4 minutes ago
              I don't see what the "deceptive practices" would be though - you can just look at the code being submitted. There isn't really the same background truth involved as with "did the thing in this video actually happen?" or "do these commercial people actually think this?"

              If I have a block of human code and an identical block of LLM code, then what's the difference? Especially given that in reality it is trivial to obfuscate whether it's human or LLM (in fact, usually you have to go out of your way to identify it as such).

              I am an AI hater but I'm just being realistic and practical here, I'm not sure how else to approach all this.

    • jackp96 1 hour ago
      So, philosophically speaking, I agree with this approach. But I did read that there was some speculation regarding the future legal implications of signalling that an AI wrote/cowrote a commit. I know Anthropic's been pretty clear that we own the generated code, but if a copyright lawsuit goes sideways (since these were all built with pirated data and licensed code), does that open you or your company up to litigation risk in the future?

      And selfishly — I'd rather not run into a scenario where my boss pulls up GitHub, sees Claude credited for hundreds of commits, and then he impulsively decides that perhaps Claude's doing the real work here and that we could downsize our dev team or replace with cheaper, younger developers.

      • mikkupikku 39 minutes ago
        Let your employer's lawyers worry about that. If they say not to use LLMs, then you should abide by that or find a new job. But if they don't care, then why should you?

        As for hobby projects, I strongly encourage you to not care. You aren't going to lawyer up to sue anybody, nor is anybody going to sue you, so YOLO. Do whatever satisfies you.

      • nemomarx 1 hour ago
        If you're concerned about copyright risk, don't you want that kind of tagging so you could prove it wasn't used on particular code?
        • PunchyHamster 39 minutes ago
          not tagging something doesn't prove AI wasn't used
      • dpoloncsak 1 hour ago
        I'm pretty sure if a copyright lawsuit went sideways you would still be open to litigation risk; you'd just be hiding the evidence.

        What you're doing would fundamentally be similar to copyright theft: using 'someone' else's code without attributing them (it?) to avoid repercussions.

        Obviously the morals and ethics of not attributing an LLM vs an actual human vary. I am not trying to simp for the machines here.

  • simonw 1 hour ago
    In case people missed it in the other thread, GitHub have now disabled this: https://twitter.com/martinwoodward/status/203861213108446452...

    > We've disabled it already. Basically it was giving product tips which was kinda ok on Copilot originated PR's but then when we added the ability to have Copilot work on _any_ PR by mentioning it the behaviour became icky. Disabled product tips entirely thanks to the feedback.

    • pinkmuffinere 1 hour ago
      I’m grateful they disabled it, but their response still feels a bit tone deaf to me.

      > Disabled product tips entirely thanks to the feedback.

      This sounds like they are saying “thanks for your input!”, when really it feels more like “if you didn’t go out of your way to complain, we would have left it in forever!”

    • da_grift_shift 1 hour ago
      Accepting the megacorp euphemisms without critique ("product tips") is how enshittification festers.
      • simonw 39 minutes ago
        I've not seen any evidence that these were ads and not "tips".

        Ads implies someone was paying for them. Promoting internal product features is not the same thing - if it was then every piece of software that shows a tip would be an ad product, and would be regulated as such.

        • wat10000 4 minutes ago
          I could buy it if this was just being shown to the person who was using Copilot. Hey, here's a feature you might like. Seems OK. But it was put into the PR description. That gets seen by potentially many people, who are not necessarily using Copilot.
        • iso1631 5 minutes ago
          When Apple puts an advert for an Apple show in front of For All Mankind, that's an advert.

          Maybe I put up with it and it just adds to my subconscious seething, or maybe I get the episode elsewhere, because if I watch on Jellyfin I don't see the advert. Of course that then harms the show because my viewing isn't counted, but they've cancelled it anyway, so perhaps it doesn't really matter.

          If it isn't an advert, then at very least there's a button to disable it.

  • john_strinlai 1 hour ago
    related: https://news.ycombinator.com/item?id=47570269

    response from timrogers (product manager at github):

    "Tim from the Copilot coding agent team here. We've now disabled these tips in pull requests created by or touched by Copilot, so you won't see this happen again for future PRs.

    We've been including product tips in PRs created by Copilot coding agent. The goal was to help developers learn new ways to use the agent in their workflow. But hearing the feedback here, and on reflection, this was the wrong judgement call. We won't do something like this again."

    https://news.ycombinator.com/item?id=47573233

    • rvz 1 hour ago
      > "We won't do something like this again."

      They (Microsoft / GitHub) will do it again. Do not be fooled.

      Never ever trust them because their words are completely empty and they will never change.

      • Hussell 1 hour ago
        "We" here likely refers to Tim and his current coworkers who were present to see this, not every current and future employee of Microsoft / Github. Try not to think of any organization or institution as a person, but as lots of individual people, constantly joining and leaving the group.
        • embedding-shape 3 minutes ago
          Yeah, which is exactly why "We won't do something like this again" has about as much value as Kubernetes would have value for HN.

          Microsoft (and therefore GitHub) care about money. If decision A means they get more money than decision B, then they'll go with decision A. This is what you can trust about corporations.

        Individuals (who constantly join and leave a corporation) can believe and say whatever they want, but ultimately the corporation as a being overrides it all, and tries its best to leave shareholders better off, regardless of the consequences.

  • fraywing 53 minutes ago
    As the "agent web" progresses, how will advertisers actually get access to human eyeballs?

    Will our agents just be proxies for garbage like injected marketing prompts?

    I feel like this is going to be an existential moment for advertising that ultimately will lead to intrusive opportunities like this.

  • VadimPR 41 minutes ago
    This is one reason why local coding models are quite relevant, and will continue to be for the foreseeable future. No ads, and you are in control.
    • fph 33 minutes ago
      In principle, one could train the AI to insert ads in its answers. So no, if you only do inference locally with an open-weight model you are still not in control.
  • siruwastaken 20 minutes ago
    I really wish this was an April fools story. It's good to see that at least it has been disabled again, although I can't imagine that it will be long before this comes back again. Also, (I can't find it now, but) I thought there was an article here on HN recently that clarified that inference cost can probably be covered by the subscription prices, just not training costs?
  • thomasgeelens 7 minutes ago
    Damn Microsoft out here really finding new ways to serve ads.
  • dboreham 2 minutes ago
    At some point he who pays the piper was going to call the tune...
  • nickdothutton 55 minutes ago
    Title is wrong, should be "New form of cancer discovered".
  • vicchenai 32 minutes ago
    the SourceForge parallel is what gets me. they did the exact same thing with installers and it killed them. people moved to GitHub specifically to get away from that.

    1.5M PRs is wild though. that's a lot of repos where the "product tips" just sat there unchallenged because nobody reads bot-generated PR descriptions carefully enough. which is kinda the real problem here, not the ads themselves.

  • ajkjk 33 minutes ago
    This only gets better when there's a financial penalty for doing it. Ads do almost nothing but it costs them even less.
  • sandeepkd 52 minutes ago
    It took me some time to understand how big the advertisement market is; things flowing in that direction seem natural when it comes to making money back on the investment.
  • sanex 44 minutes ago
    Cursor does similar at least. I hate it and therefore write my own commit messages.
  • gadders 1 hour ago
    The irony when NeoWin covers its whole page with "promoted content" when you try and back out of the page.
  • liendolucas 11 minutes ago
    Not surprised at all, just another enshittified product by Microsoft. Carry on.
  • righthand 1 hour ago
    The future is here! Glorious ads that will make you so efficient! Save time coding by consuming ads, you were never going to attain expert level professional skills anyways.
  • m132 1 hour ago
    I remember open-source projects announcing their intent to leave GitHub in 2018, as it was being acquired by Microsoft. I was thinking to myself back then: "It's really just a free Git hosting service, and Git was designed to be decentralized at its very core. They don't own anything, only provide the storage and bandwidth. How are they even going to enshittify this?".

    8 years later, this is where we are. I'm honestly just stunned, it takes some real talent to run a company that does it as consistently well as Microsoft.

    • surgical_fire 53 minutes ago
      This is nothing.

      I would bet that soon it will inject ads within the code as comments.

      Imagine you are reading the code of a class. `LargeFileHandler`. And within the code they inject a comment with an ad for penis enlargement.

      The possibilities are limitless.

      • m132 42 minutes ago
        If I recall correctly, what sparked the mass migration to GitHub was the controversy around SourceForge injecting ads into installers of projects hosted there. Now that we have tools that can stealthily inject native-looking ads into programs at the source code level...
        • data-ottawa 22 minutes ago
          Same as it ever was. Same as it ever was.
  • dboreham 1 hour ago
    Ironically tfa is festooned with ads.
    • sunaookami 1 hour ago
      Over 1.5 trillion news articles have ads injected into them by the company's commerce team!
    • da_grift_shift 1 hour ago
      Sure, but the source blogpost isn't.
  • j45 1 hour ago
    It's the hotmail signature all over again?
  • kingjimmy 52 minutes ago
    microslop at it again
  • saberience 1 hour ago
    It's the same with Claude Code actually, and recently Codex too...

    Claude never used to do this but at some point it started adding itself by default as a co-author on every commit.

    Literally, in the last week, Codex started naming all its branches "codex-feature-name", and will continue to do so, even if you tell it to never do that again.

    Really, really annoying.

    • ray_v 1 hour ago
      Adding the agent (and maybe more importantly, the model that reviewed it) actually seems like a very useful signal to me. In fact, it really should become "best practice" for this type of workflow. Transparency is important, and some PMs may want to scrutinize those types of submissions more, or put them into a different pipeline, etc.
    • coder543 1 hour ago
      That Codex one comes from the new `github` plugin, which includes a `github:yeet` skill. There are several ways to disable it: you can disconnect github from codex entirely, or uninstall the plugin, or add this to your config.toml:

          [[skills.config]]
          name = "github:yeet"
          enabled = false
      
      I agree that skill is too opinionated as written, with effects beyond just creating branches.
      • saberience 1 hour ago
        What's weird is, I never installed any github plugins, or indeed any customization to Codex, other than updating using brew... so I was so confused when this started happening.
    • bonesss 1 hour ago
      When I started my career there was this little company called SCO, and according to them finding a comment somewhere in someone’s suppliers code that matched “x < y” was serious enough to trip up the entire industry.

      Now, with the power of math letting us recall business plans and code bases with no mention of copyright or where the underlying system got that code (like paying a foreign company to give me the kernel with my name replacing Linus’, only without the shame…), we are letting MS and other corps enter into coding automation and oopsie the name of their copyright-obfuscation machine?

      Maybe it’s all crazy and we flubbed copyright fully, but having third-party authorship stamps cryptographically verified in my repo sounds risky. The SCO thing was a dead company's last gasp, and dying animals do desperate things.

    • bundie 1 hour ago
      I believe it's easy to disable the Claude Code one.
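
      A minimal sketch, assuming your Claude Code version still supports the `includeCoAuthoredBy` setting (check the settings docs for your release):

          {
            "includeCoAuthoredBy": false
          }

      Placed in `~/.claude/settings.json` (or a project-level `.claude/settings.json`), this should stop the "Co-Authored-By: Claude" trailer from being appended to commits.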
  • lpcvoid 1 hour ago
    Once again, Microslop doing Microslop things
    • toastal 7 minutes ago
      Yet folks are refusing to migrate off their products/services—as if it hasn’t been like this for 3 decades already.
  • ChrisArchitect 1 hour ago
    • tyleo 1 hour ago
      It’s a dupe but I hope the discussion continues in this more general thread. That other thread was earlier but more of an individual POV that doesn’t make it obvious there was ecosystem impact.
      • ChrisArchitect 14 minutes ago
        The article barely expands on the source content. Either way it's the same discussion. And there's lots of it. Over there.
        • iso1631 1 minute ago
          The original looked like a one-off

          The article shows thousands of adverts, millions if you look more widely. It massively changes the scale.

  • weiyong1024 57 minutes ago
    [dead]
  • mergeshield 41 minutes ago
    [dead]
  • panavinsingh 53 minutes ago
    [dead]
  • wendy7756 49 minutes ago
    [dead]
  • prvt 1 hour ago
    [dead]