  • cesarb 2 minutes ago
    One of these is not like the others...

    > The problem was that go get needed to fetch each dependency’s source code just to read its go.mod file and resolve transitive dependencies.

    This article is mixing two separate issues. One is using git as the master database storing the index of packages and their versions. The other is fetching the code of each package through git. They are orthogonal: you can have a package index using git but the packages as zip/tar/etc archives; you can have a package index not using git but each package cloned from a git repository; you can have both the index and the packages be git repositories; you can have neither using git; you can even have no package index at all (AFAIK that's the case for Go).

  • c-linkage 1 hour ago
    This seems like a tragedy of the commons -- GitHub is free after all, and it has all of these great properties, so why not? -- but this kind of decision making occurs whenever externalities are present.

    My favorite hill to die on (externality) is user time. Most software houses spend so much time focusing on how expensive engineering time is that they neglect user time. Software houses optimize for feature delivery, not user interaction time. Yet if I spend one hour making my app one second faster for my million users, I save 277 user hours per year. But since user hours are an externality, such optimization never gets done.
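
    (Sanity-checking that figure, assuming each of the million users hits the slow interaction once a year:

      1,000,000 users × 1 s = 1,000,000 s ÷ 3,600 s/hour ≈ 277.8 hours

    so the 277 checks out.)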

    Externalities lead to users downloading extra gigabytes of data (wasted time) and waiting for software, all of which is waste that the developer isn't responsible for and doesn't care about.

    • robmccoll 6 minutes ago
      I don't think most software houses spend enough time even focusing on engineering time. CI pipelines that take tens of minutes to over an hour, compile times that exceed ten seconds when nothing has changed, startup times that are much more than a few seconds. Focus and fast iteration are super important to writing software and it seems like a lot of orgs just kinda shrug when these long waits creep into the development process.
    • ekjhgkejhgk 1 hour ago
      I wouldn't call it a tragedy of the commons, because it's not a commons. It's owned by Microsoft. They're calculating that it's worth it for them, so I say take as much as you can.

      A commons would be something owned by nobody, from whose existence everyone benefits.

      • TeMPOraL 1 hour ago
        Still, because reality doesn't respect the boundaries of human-made categories, and because people never define their categories exhaustively, we can safely assume that something almost-but-not-quite like a commons is subject to an almost-but-not-quite tragedy of the commons.
        • ttiurani 1 hour ago
          The whole notion of the "tragedy of the commons" needs to be put to rest. It's an armchair thought experiment that was disproven, by the 1990s at the latest, by Elinor Ostrom's empirical evidence from actual commons.

          The "tragedy", if you absolutely need to find one, is only for unrestricted, free-for-all commons, which is obviously a bad idea.

          • wongarsu 11 minutes ago
            A high-trust community like a village can prevent a tragedy-of-the-commons scenario. Participants feel obligations to the community, and misusing the commons has real downsides for the individual because there are social feedback mechanisms. The classic scenarios, like people grazing sheep or cutting wood, are bad examples that don't really hold up.

            But that doesn't mean the tragedy of the commons can't happen in other scenarios. If we define a commons a bit more generously, it does happen very frequently on the internet. It's also not difficult to find cases of it happening in larger cities, or in environments where cutthroat behavior has been normalized.

            • TeMPOraL 4 minutes ago
              > A high-trust community like a village can prevent a tragedy of the commons scenario. Participants feel obligations to the community, and misusing the commons actually does have real downsides for the individual because there are social feedback mechanisms.

              That works while the size of the community is ~100-200 people, when everyone knows everyone else personally. It breaks down rapidly after that. We compensate for that with hierarchies of governance, which give rise to written laws and bureaucracy.

              New tribes break off from old tribes, form alliances, which form larger alliances, and eventually you end up with countries and counties and voivodeships and cities and districts and villages, in hierarchies that gain a level per ~100x population increase.

              This is the sociopolitical history of the world in a nutshell.

          • Saline9515 15 minutes ago
            Ostrom showed that it wasn't necessarily a tragedy, if the tight-knit groups involved decided to cooperate. This is common in what we call "trust-based societies", which aren't universal.

            Nonetheless, the concept is still alive, and anthropogenic global warming is here to remind you of it.

          • b00ty4breakfast 40 minutes ago
            yeah, it's a post-hoc rationalization for the enclosure and privatization of said commons.
            • TeMPOraL 2 minutes ago
              And here I thought the standard, obvious solution to tragedy of the commons is centralized governance.
        • lo_zamoyski 4 minutes ago
          There is an analogy in the sense that for the users a resource is, for certain practical intents and purposes, functionally common. Social media is like this as well.

          But I would make the following clarifications:

          1. A private entity is still the steward of the resource and therefore the resource figures into the aims, goals, and constraints of the private entity.

          2. The common good is itself under the stewardship of the state, as its function is to be the guardian of the common good.

          3. The common good is the default (by natural law) and prior to the private good. The latter is instituted in positive law for the sake of the former by, e.g., reducing conflict over goods.

        • reactordev 1 hour ago
          An "A- is still an A" kind of thinking. I like this approach, as not everything perfectly fits the mold.
      • PunchyHamster 37 minutes ago
        Well, till you choose to host something yourself and it becomes popular.
      • jasonkester 55 minutes ago
        It has the same effect though. A few bad actors using this “free” thing can end up driving the cost up enough that Microsoft will have to start charging for it.

        The jerks get their free things for a while, then it goes away for everyone.

        • Y_Y 26 minutes ago
          I think the jerks are the ones who bought and enshittified GitHub after it had earned significant trust and become an important part of FOSS infrastructure.
          • irishcoffee 15 minutes ago
            Scoping it to a local maximum: the only thing worse than git is GitHub. In an alternate universe hg won the clone wars and we are all better off for it.
      • rvba 29 minutes ago
        I doubt anyone is calculating

        Remember how GTA5 took 10 minutes to start and nobody cared? Lots of software is like this.

        Some Blizzard games download a 137 MB file every time you run them and take a few minutes to start (and no, this is not due to my computer).

    • pastor_williams 15 minutes ago
      This was something that I heavily focused on for my feature area a year ago - new user sign up flow. But the decreased latency was really in pursuit of increased activation and conversion. At least the incentives aligned briefly.
    • zahlman 1 hour ago
      > Most software houses spend so much time focusing on how expensive engineering time is that they neglect user time. Software houses optimize for feature delivery, not user interaction time. Yet if I spend one hour making my app one second faster for my million users, I save 277 user hours per year. But since user hours are an externality, such optimization never gets done.

      This is what people mean about speed being a feature. But "user time" depends on more than the program's performance. UI design is also very important.

    • ozim 56 minutes ago
      About apps done by software houses: even though we should strive to do a good job, and I agree with the sentiment...

      The first argument would be: take at least two 0's off your estimate. Most applications will have maybe thousands of users; successful ones will maybe run with tens of thousands. If you're lucky enough to work on an application with hundreds of thousands or millions of users, you work at a FAANG, not a typical "software house".

      The second argument is: most users use 10-20 apps in a typical workday, so your application is most likely irrelevant.

      The third argument is: most users would save much more time by properly learning the applications they use daily (or how to use a computer), than from someone optimizing some function from 2s to 1s. Of course that's hard, because they have 10-20 apps daily, plus god knows how many others used less regularly. Still, I see people doing super silly stuff in tools like Excel, or not even knowing copy-paste, so we're not even talking about command-line magic.

    • Y-bar 33 minutes ago
      You’ll enjoy "Saving Lives" by Andy Hertzfeld: https://www.folklore.org/Saving_Lives.html

      > "The Macintosh boots too slowly. You've got to make it faster!"

    • solatic 1 hour ago
      If you think too hard about this, you come back around to Alan Kay's quote about how people who are really serious about software should build their own hardware. Web applications, and in general loading pretty much anything over the network, make for a horrible, no-good, really bad user experience, and they always will. The only way to really respect the user is with native applications that are local-first, and if you take that really far, you build (at the very least) peripherals to make it even better.

      The number of companies that have this much respect for the user is vanishingly small.

      • phkahler 3 minutes ago
        >> The number of companies that have this much respect for the user is vanishingly small.

        I think companies shifted to online apps because, #1, it solved the copy protection problem. FOSS apps are in no hurry to become centralized because they don't care about that issue.

        Local apps and data are a huge benefit of FOSS and I think every app website should at least mention that.

        "Local app. No ads. You own your data."

      • hombre_fatal 1 hour ago
        Software I don’t have to install at all “respects me” the most.

        Native software being an optimum is mostly an engineer fantasy that comes from imagining what you can build.

        In reality that means having to install software like Meta’s WhatsApp, Zoom, and other crap I’d rather run in a browser tab.

        I want very little software running natively on my machine.

        • freedomben 42 minutes ago
          Yes, amen. The more invasive and abusive software gets, the less I want it running on my machine natively. Natively installed applications for me now are limited only to apps I trust, and even those need to have a reason to be native apps rather than web apps to get a place in my app drawer.
      • ghosty141 1 hour ago
        Yes because users don't appreciate this enough to pay for the time this takes.
    • inapis 1 hour ago
      >Yet if I spent one hour making my app one second faster for my million users, I can save 277 user hour per year. But since user hours are an externality, such optimization never gets done.

      I have never been convinced by this argument. The aggregate number sounds fantastic, but I don't believe any meaningful work can be done with the 1 second each user saves. That 1 second (and more) can simply be absorbed by me stretching my body out.

      OTOH, if the argument is to make software smaller, I can get behind that since it will simply lead to more efficient usage of existing resources and thus reduce the environmental impact.

      But we live in a capitalist world, and there needs to be external pressure for change to occur. The current RAM shortage, if it lasts, might be one of them. Otherwise, we're only daydreaming about a utopia.

      • adrianN 1 hour ago
        The mapping from time saved to increased productivity or happiness or whatever is not linear but a step function. Saving one second doesn’t help much, but there is a threshold (depending on the individual) where faster workflows lead to a better experience. It does make a difference whether a task takes a minute or half a second, at least for me.
      • Aerroon 1 hour ago
        One second is long enough that it can put a user off from using your app though. Take notifications on phones for example. I know several people who would benefit from a habitual use of phone notifications, but they never stick to using them because the process of opening (or switching over to) the notification app and navigating its UI to leave a notification takes too long. Instead they write a physical sticky note, because it has a faster "startup time".
        • tehbeard 51 minutes ago
          It all depends on the type of interaction.

          For a high-usage one, absolutely improve its speed.

          Loading the profile page? That isn't done often, so it's not really worth it unless it's a known and vocal issue.

          https://xkcd.com/1205/ gives a good estimate.

    • loloquwowndueo 1 hour ago
      Just a reminder that GitHub is not git.

      The article mentions that most of these projects did use GitHub as a central repo out of convenience, so there’s that, but they could also have used self-hosted repos.

      • machinationu 1 hour ago
        Explain to me how you self-host a git repo which is accessed millions of times a day by CI jobs pulling packages.
        • freedomben 31 minutes ago
          I'm not sure whether this question was asked in good faith, but it is actually a damn good one.

          I've looked into self-hosting a git repo with horizontal scalability, and it is indeed very difficult. I don't have the time to detail it in a comment here, but for anyone who is curious, it's very informative to look at how GitLab handled this with Gitaly. I've also seen some clever attempts to use object storage, though I haven't seen any of those solutions put heavily to the test.

          I'd love to hear from others about ideas and approaches they've heard about or tried

          https://gitlab.com/gitlab-org/gitaly

        • fweimer 17 minutes ago
          These days, people solve similar problems by wrapping their data in an OCI container image and distributing it through one of the container registries that don't have a practically meaningful pull rate limit. Not really a joke, unfortunately.
        • adrianN 29 minutes ago
          You git init --bare on a host with sufficient resources. But I would recommend thinking about your CI flow too.
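
          Roughly, a sketch (host and path made up):

            git init --bare /srv/git/index.git                    # on the server
            git clone --depth 1 git@pkg-host:/srv/git/index.git   # from CI, shallow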
        • ozim 1 hour ago
          FTFY:

          Explain to me how you self-host a git repo, without spending any money and having no budget, which is accessed millions of times a day by CI jobs pulling packages.

      • justincormack 1 hour ago
        They probably would have experienced issues way sooner, as the self-hosted tools don't scale nearly as well.
    • machinationu 1 hour ago
      With AI, engineering costs are plummeting.

      You can implement entire features with 10 cents of tokens.

      Companies which don't adapt will be left behind this year.

      • camgunz 1 hour ago
        I've never been more convinced LLMs are the vanguard of the grift economy, now that green accounts are low-effort astroturfing on HN.
        • freedomben 28 minutes ago
          LLMs obviously can't do it all, and they still have severe areas of weakness where they can't replace humans, but there are definitely a lot of areas where they really can now. I've seen it first hand. I've even experienced it first hand. There are a couple of services that I wrote years ago that were basically parked in maintenance mode because they weren't worth investing time in, and we just dealt with some of the annoyances and bugs. With the latest LLMs, over the last couple of months I've been able to resurrect them, fix a lot of bugs, and even add some wanted features in just a few hours. It really is quite incredible and scary at the same time.

          Also, in case you're not aware, accusing people of shilling or astroturfing is against the Hacker News guidelines.

        • machinationu 43 minutes ago
          hey, I'm just a lowly LLM, gotta earn my tokens :|
  • dboon 1 hour ago
    I’m building Cargo/UV for C. Good article. I thought about this problem very deeply.

    Unfortunately, when you’re starting out, the idea of running a registry is a really tough sell. On top of the very hard engineering problem of writing the code and making a world-class tool, plus the social one of getting it adopted, I need to worry about funding and maintaining something that serves potentially a world of traffic? The git solution is intoxicating through this lens.

    Fundamentally, the issue is the sparse checkouts mentioned by the author. You’d really like to use git to version package manifests, so that anyone with any package version can get the EXACT package they built with.

    But this doesn’t work, because you need arbitrary commits. You either need a full checkout, or you need to somehow track the commit a package version is in without knowing what hash git will generate before you do it. You have to push the package update and then push a second commit recording that. Obviously infeasible, obviously a nightmare.

    Conan’s solution is I think just about the only way. It trades the perfect reproduction for conditional logic in the manifest. Instead of 3.12 pointing to a commit, every 3.x points to the same manifest, and there’s just a little logic to set that specific config field added in 3.12. If the logic gets too much, they let you map version ranges to manifests for a package. So if 3.13 rewrites the entire manifest, just remap it.

    I have not found another package manager that uses git as a backend that isn’t a terrible and slow tool. Conan may not be as rigorous as Nix because of this decision but it is quite pragmatic and useful. The real solution is to use a database, of course, but unless someone wants to wire me ten thousand dollars plus server costs in perpetuity, what’s a guy supposed to do?

    • ambicapter 14 minutes ago
      > Unfortunately, when you’re starting out, the idea of running a registry is a really tough sell. On top of the very hard engineering problem of writing the code and making a world-class tool, plus the social one of getting it adopted, I need to worry about funding and maintaining something that serves potentially a world of traffic? The git solution is intoxicating through this lens.

      So you need a decentralized database? Those exist (or you can make your own, if you're feeling ambitious), probably ones that scale in different ways than git does.

    • adrianN 27 minutes ago
      Before you've managed to build a popular tool, it is unlikely that you need to serve many users. Going directly for something that can serve the world is probably premature.
      • dboon 0 minutes ago
        For most software, yes. But the value of a package manager is in its adoption. A package manager that doesn’t run up against these problems is probably a failure anyway.
  • ekjhgkejhgk 1 hour ago
    Do the easy thing while it works, and when it stops working, fix the problem.

    Julia does the same thing, and going by the Rust numbers in the article, Julia has about 1/7th the number of packages that Rust does[1] (95k/13k = 7.3).

    It works fine; Julia has some heuristics to avoid re-downloading too often.

    But more importantly, there's a simple path to improvement. The top-level Registry.toml [1] has a path to each package, and once downloading everything proves unsustainable you can just download that one file and use it to fetch the rest as needed. I don't think this is a difficult problem.

    [1] https://github.com/JuliaRegistries/General/blob/master/Regis...

    • 0xbadcafebee 23 minutes ago
      This is basically unethical. Imagine anything important in the world that worked this way. "Do nuclear engineering the easy way while it works, and when it stops working, fix the problem."

      Software engineers always make the excuse that what they're making now is unimportant, so who cares? But then everything gets built on top of that unimportant thing, and one day the world crashes down. Worse, "fixing the problem" becomes near impossible, because now everything depends on it.

      But really, the reason not to do it is that there's no need to. There are plenty of other solutions besides Git that work as well or better, without all the pitfalls. The lazy engineer picks bad solutions not because they're necessarily easier than the alternatives, but because they're the path of least resistance for the engineer.

      Not only is this not better, it's often actively worse. But this is excused by the same culture that gave us "move fast and break things". All you have to do is use any modern software to see how that worked out. Slow bug-riddled garbage that we're all now addicted to.

    • galenlynch 1 hour ago
      I believe Julia only uses the Git registry as an authoritative ledger where new packages are registered [1]. My understanding is that, as you mention, most clients don't access it, and instead use the "Pkg Protocol" [2], which does not use Git.

      [1] https://github.com/JuliaRegistries/General

      [2] https://pkgdocs.julialang.org/dev/protocol/

    • zahlman 1 hour ago
      > 00000000-1111-2222-3333-444444444444 = { name = "REPLTreeViews", path = "R/REPLTreeViews" }

      ... Should it be concerning that someone was apparently able to engineer an ID like that?

      • ekjhgkejhgk 1 hour ago
        Could you please articulate specifically why that should be concerning?

        Right now I don't see the problem because the only criterion for IDs is that they are unique.

        • zahlman 17 minutes ago
          I didn't know whether they were supposed to be within the developer's control (in which case the only real concern is whether someone else has already used the id), or generated by the system (in which case a developer demonstrated manipulation of that system).

          Apparently it is the former, and most developers independently generate random IDs because it's easy and is extremely unlikely to result in collisions. But it seems the dev at the top of the list had a sense of vanity instead.

      • skycrafter0 1 hour ago
        If you read the repo README, it just says "generate a uuid". You can use whatever you want as long as it fits the format, it seems.
      • adestefan 1 hour ago
        It’s as random as any other UUID.
        • Severian 1 hour ago
          Incorrect, only some UUIDs are random, specifically v4 and v7 (v7 uses time as well).

          https://en.wikipedia.org/wiki/Universally_unique_identifier

          > 00000000-1111-2222-3333-444444444444

          This would technically be version 2, the DCE Security version, which would be built from the date-time and MAC address.

          But overall, if you allow any yahoo to pick a UUID, it's not really a UUID, it's just some arbitrary string that looks like one.
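
          For the curious, a quick sketch (mine, not from any spec text) of where those digits live in the 8-4-4-4-12 layout:

            package main

            import "fmt"

            func main() {
                u := "00000000-1111-2222-3333-444444444444"
                // version: first hex digit of the third group;
                // variant: first hex digit of the fourth group
                fmt.Printf("version: %c, variant: %c\n", u[14], u[19])
                // prints "version: 2, variant: 3"; a variant nibble of 3
                // (binary 0011) isn't even the RFC 4122 variant (10xx),
                // so strictly this isn't a valid RFC 4122 UUID at all
            }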

  • steeleduncan 1 hour ago
    The other conclusion to draw is "Git is a fantastic choice of database for starting your package manager; almost all popular package managers began that way."
    • saidinesh5 1 hour ago
      I think the conclusion is more that package definitions can still be maintained on git/GitHub, but package manager clients should probably rely on a cache/DB/some more efficient intermediate layer.

      Mostly to avoid downloading the whole repo and resolving deltas from the history just for the few packages most applications tend to depend on. Especially in today's CI/CD world.

      • reactordev 1 hour ago
        This is exactly the right approach. I did this for my package manager.

        It relies on a git repo branch for stable. There are YAML definitions of the packages, including URLs to their repos, dependencies, etc. Preflight scripts. Post-install checks. And the big one: the signatures for verification. No binaries, rpms, debs, ar, or zip files.

        What’s actually installed lives in a small SQLite database, and searching for software does a vector search on each package's YAML description.

        Semver included.

        This was inspired by brew/portage/dpkg for my hobby os.

    • bluGill 1 hour ago
      Git isn't a fantastic choice unless you know nothing about databases. A search would show plenty of research on databases and what works when/why.
      • kibwen 51 minutes ago
        For the purposes of the article, git isn't just being used as a database, it's being used as a protocol to replicate the database to the client to allow for offline operation and then keep those distributed copies in sync. And even for that purpose you can do better than git if you know what you're doing, but knowledge of databases alone isn't going to help you (let alone make your engineering more economical than relying on free git hosting).
        • freedomben 22 minutes ago
          Exactly. It's not just about the best solution to the problem; it's also heavily about the economics around it. If I wanted to create a new package manager today, I could get started by utilizing Git and existing git hosting solutions with very little effort, and effort translates to time, and time is a scarce resource. If you don't know whether your package manager will take off, it may not be the best use of your scarce resources to invest in a robust and optimized solution out of the gate. I wish that weren't the case; I would love to have an infinite amount of time, but wishing is not going to make it happen.
    • adastra22 1 hour ago
      Git is an absolute shit database for a package manager even in the beginning. It’s just that GitHub subsidizes hosting and that is hard to pass up.
      • fn-mote 19 minutes ago
        Sure, but can you back up the expletive with some reason why you think that?

        As it is, this comment is just letting out your emotion, not engaging in dialogue.

  • kibwen 58 minutes ago
    I think there's a form of survivorship bias at work here. To use the example of Cargo, if Rust had never caught on, and thereby gotten popular enough to inflate the git-based index beyond reason, then it would never have been a problem to use git as the backing protocol for the index. Likewise, we can imagine innumerable smaller projects that successfully use git as a distributed delta-updating data distribution protocol, and never happen to outgrow it.

    The point being, if you're not sure whether your project will ever need to scale, then it may not make sense to reinvent the wheel when git is right there (and then invent the solution for hosting that git repo, when GitHub is right there), letting you spend time instead on other, more immediate problems.

  • ifh-hn 1 hour ago
    So what's the answer then? That's the question I wanted answered after reading this article. With no experience with git or package management: would a local client-side SQLite database and something similar on the server do?
    • encom 1 hour ago
      I quite like Gentoo's rsync based package manager. I believe they've used that since the beginning. It works well.
  • quaintdev 1 hour ago
    I host my own code repository using Forgejo. It's not public. In fact, it's behind mutual TLS, like all the services I host. Reason? I don't want to deal with bots and the other security risks that come with opening a port to the world.

    Turns out Go modules will not accept a package hosted on my Forgejo instance because it asks for a certificate. There are ways to make go get use ssh, but even with that approach the repository needs to be accessible over https. In the end, I cloned the repository and used it in my project via the replace directive. It's really annoying.

    • agwa 53 minutes ago
      If you add .git to the end of your module path and set $GOPRIVATE to the hostname of your Forgejo instance, then Go will not make any HTTPS requests itself and instead delegate to the git command, which can be configured to authenticate with client certificates. See https://go.dev/ref/mod#vcs-find
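
      A sketch of what that looks like (host, module path, and cert locations are made up):

        export GOPRIVATE=git.example.com
        go get git.example.com/you/mylib.git@latest

        # point git at the client certificate:
        git config --global http.sslCert ~/.certs/client.pem
        git config --global http.sslKey ~/.certs/client.key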
    • xyzzy_plugh 1 hour ago
      > There are ways to make go get use ssh, but even with that approach the repository needs to be accessible over https.

      No, that's false. You don't need anything to be accessible over HTTP.

      But even if it did, and you had to use mTLS, there are a whole bunch of ways to solve this. How do you solve it for any other software that doesn't present client certs? You use a local proxy.

  • gethly 1 hour ago
    If we stopped using VCS to fetch source files, we would lose the ability to get the exact commit (understand: a version, which has nothing to do with the underlying VCS) of those files. Git, Mercurial, SVN... GitHub, Bitbucket... it does not matter. Absolutely nobody will be building downloadable versions of their source files, hosted on who knows how "prestigious" domains, by copying them to another location just to serve the --->exact same content<--- that GitHub and the like already provide.

    This entire blog is just a waste of time for anyone reading it.

    • throwway120385 28 minutes ago
      Or you could just ship a tarball and a SHA checksum.
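
      Something like (filenames made up):

        sha256sum mylib-1.2.3.tar.gz > mylib-1.2.3.tar.gz.sha256
        sha256sum -c mylib-1.2.3.tar.gz.sha256   # consumers verify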
    • layer8 20 minutes ago
      And yet, that's pretty much how the Java world works (Maven repositories).
  • twoodfin 1 hour ago
    What made git special & powerful from the start was its data model: Like the network databases of old, but embedded in a Merkle tree for independent evolution and verifiability.

    Scaling that data model beyond projects the size of the Linux kernel was not critical for the original implementation. I do wonder if there are fundamental limits to scaling the model for use cases beyond “source code management for modest-sized, long-lived projects”.

    • amluto 1 hour ago
      Most of the problems mentioned in the article are not problems with using a content-addressed tree like git or even with using precisely git’s schema. The problems are with git’s protocol and GitHub’s implementation thereof.

      Consider vcpkg. It’s entirely reasonable to download a tree named by its hash to represent a locked package. Git knows how to store exactly this, but git does not know how to transfer it efficiently.

      • mananaysiempre 5 minutes ago
        > Git knows how to store [a hash-addressed tree], but git does not know how to transfer it efficiently.

        Naïvely, I’d expect shallow clones to be this, so I was quite surprised by a mention of GitHub asking people not to use them. Perhaps Git tries too hard to make a good packfile?..

        Meanwhile, what Nixpkgs does (and why “release tarballs” were mentioned as a potential culprit in the discussion linked from TFA) is request a gzipped tarball of a particular commit’s files from a GitHub-specific endpoint over HTTP rather than use the Git protocol. So that’s already more or less what you want, except even the tarball is 46 MB at this point :( Either way, I don’t think the current problems with Nixpkgs actually support TFA’s thesis.
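
        (For reference, that endpoint is of the form https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz.)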

  • ori_b 1 hour ago
    Alternatively: downloading the entire state of all packages when you care about just one never works out.

    O(1) beats O(n) as n gets large.

    • gruez 1 hour ago
      Seems to still work out for apt?
  • hogrug 1 hour ago
    The facts are interesting, but the conclusion is a bit strange. These package managers have succeeded because git is better for the low-trust model, and GitHub has been hosting infra for free that no one in their right mind would provide for the average DB.

    If it didn't work, we would not have these massive ecosystems upsetting GitHub's freemium model, but anything at scale is naturally going to have consequences and features that aren't so compatible with the use case.

  • bencornia 1 hour ago
    > Grab’s engineering team went from 18 minutes for go get to 12 seconds after deploying a module proxy. That’s not a typo. Eighteen minutes down to twelve seconds.

    > The problem was that go get needed to fetch each dependency’s source code just to read its go.mod file and resolve transitive dependencies. Cloning entire repositories to get a single file.

    I have also had inconsistent performance with go get. Never enough to look closely at it. I wonder if I was running into the same issue?

    • zahlman 1 hour ago
      > needed to fetch each dependency’s source code just to read its go.mod file and resolve transitive dependencies.

      Python used to have this problem as well (technically still does, but a large majority of things are available as a wheel and PyPI generally publishes a separate .metadata file for those wheels), but at least it was only a question of downloading and unpacking an archive file, not cloning an entire repo. Sheesh.

      Why would Go need to do that, though? Isn't the go.mod file in a specific place relative to the package root in the repo?

    • fireflash38 1 hour ago
      How long ago were you having issues? That changed in Go 1.13, which made the module proxy (proxy.golang.org) the default.
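
      With the proxy protocol, resolving dependencies no longer needs a clone; clients fetch individual files, along the lines of:

        GET https://proxy.golang.org/<module>/@v/list
        GET https://proxy.golang.org/<module>/@v/<version>.mod   # just the go.mod file
        GET https://proxy.golang.org/<module>/@v/<version>.zip   # the source archive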
  • hk1337 1 hour ago
    I like Go, but its dependency management is weird and seems to be centered around GitHub a lot.
    • Hendrikto 40 minutes ago
      There is nothing tying Go to GitHub.
  • Zambyte 1 hour ago
    The issues with using Git for Nix seem to entirely be issues with using GitHub for Nix, no?
    • Rucadi 1 hour ago
      I got the same feeling from that. In fact, I would go as far as to say that the integration of nixpkgs and the nix commands with git works quite well and is not an issue.

      So when the article says "Package managers keep falling for this. And it keeps not working out", I feel that's untrue.

      The biggest issue I have with this is really the "flakes" integration, where the whole recipe folder is copied into the store (which doesn't happen with non-flake commands), but that's a tooling problem, not an intrinsic problem of using git.

    • femiagbabiaka 1 hour ago
      Yeah, its inclusion here is baffling, because none of the listed issues have anything to do with the particular issue nixpkgs is having.
  • 0xbadcafebee 25 minutes ago
    YOLO software engineering, the hallmark of the 21st century
  • PunchyHamster 33 minutes ago
    The article's conclusion is just... not good. There are many benefits to using Git as a backend: you can point your project at every single commit as a version, which makes testing any fixes or changes in libs super easy; it has built-in integrity control; and technically (sadly not in practice) you could just sign commits and use that to verify whether a package is authentic.

    It being suboptimal bandwidth-wise is frankly just a technical hurdle to get over, with benefits well worth the drawback.

  • nacozarina 46 minutes ago
    successful things often have humble origins; it’s a feature, not a bug

    for every project that managed to outgrow ext4/git, there were a hundred that were well served and never needed to over-invest in something else

  • miyuru 1 hour ago
    Funnily enough, I clicked the Homebrew GitHub link in the post, only to get a rate-limit error page from GitHub.
  • mikkupikku 1 hour ago
    People who put off learning SQL for later end up using anything other than a database as their database.
    • redog 59 minutes ago
      SQL killed the set theory star
  • born-jre 1 hour ago
    lol I see this as I plan on using Git for my thing store. https://github.com/blue-monads/potatoverse
  • frumplestlatz 1 hour ago
    Since ~2002, MacPorts has used svn or git, but by default users rsync the complete port definitions + a server-generated index + a signature.

    The index is used for all lookups; it can also be generated or incrementally updated client-side to accommodate local changes.

    This has worked fine for literally decades, starting back when bandwidth and CPU power were far more limited.

    The problem isn’t using SCM, and the solutions have been known for a very long time.

  • encom 1 hour ago
    >[Homebrew] Auto-updates now run every 24 hours instead of every 5 minutes[...]

    That is such an insane default, I'm at a loss for words.

  • aniou 1 hour ago
    As a side note: maybe someone knows why the Rust devs chose an already-used name for their language change proposals? "RFC" was already taken and well established, and I simply refuse to accept that someone wasn't aware of Request For Comments. And if they were aware and the clash was created deliberately, then it was rude and arrogant.

    Every, ...king time I read something like "RFC 2789 introduced a sparse HTTP protocol.", my brain suffers a short circuit. BTW: RFC 2789 is a "Mail Monitoring MIB".

    • adastra22 1 hour ago
      There are many, many RFC collections. Including many that predate the IETF. Some even predate computers.
      • aniou 52 minutes ago
        But they were in different domains. Here we have a strong clash: Rust is positioning itself as a secure systems-and-internet language, and computer and internet standards are already defined by RFCs. So it would not be uncommon for someone to talk about Rust mechanisms, defined by a particular RFC, in the context of handling a particular protocol, defined by... well... an RFC too. But not a Rust one.

        Not so smart, when we realize that one aspect of a secure and reliable system is the elimination of ambiguities.

  • gjvc 1 hour ago
    sqlite seems to be ideal for a package manager
  • eviks 1 hour ago
    Indeed, the seductive nature of bad tools lying close to your hand - no need to lift your butt to get them!