20 comments

  • topsycatt 2 days ago
    That's the system I work on! Please feel free to ask any questions. All opinions are my own and do not represent those of my employer.
    • ryao 2 days ago
      I imagine you need to make and destroy sandboxed environments quite often. How fast does your code create a sandboxed environment?

      Do you make the environments on demand or do you make them preemptively so that one is ready to go the moment that it is needed?

      If you make them on demand, have you tested ZFS snapshots to see if it can be done even faster using zfs clone?

      • dullcrisp 2 days ago
        What’s ZFS? That doesn’t sound like a Google internal tool I’ve ever heard of.
      • blixt 2 days ago
        Seconding this. Also curious if this is done with microkernels (I put Unikraft high on the list of tech I'd use for this kind of problem, or possibly the still-in-beta CodeSandbox SDK – and maybe E2B or Fly but didn't have as good experiences with those).
      • luke-stanley 2 days ago
        I use ZFS, but isn't the situation the sandbox is in totally different? Why would it be optimal?
        • ryao 2 days ago
          If you are making sandboxes, you need to put the files in place each time. With ZFS clones, you can keep referencing the same files repeatedly, so the amount of changes to memory needed to create an environment are minimized. Let’s say the sandbox is 1GB and each clone operation does less than 1MB of memory writes. Then you have a >1000x reduction in writing needed to make the environment.

          Furthermore, ZFS ARC should treat each read operation of the same files as reading the same thing, while a sandbox made the traditional way would treat the files as unique, since they would be full copies of each other rather than references. ZFS on the other hand should only need to keep a single copy of the files cached for all environments. This reduces memory requirements dramatically. Unfortunately, the driver has double caching on mmap()’ed reads, but the duplication will only be on the actual files accessed and the copies will be from memory rather than disk. A modified driver (e.g. OSv style) would be able to eliminate the double caching for mmap’ed reads, but that is a future enhancement.

          In any case, ZFS clones should have clear advantages over the more obvious way of extracting a tarball every time you need to make a new sandbox for a Python execution environment.

          • o11c 2 days ago
            It's worth noting that if you go down a layer, LVM snapshots are filesystem-independent.
            • ryao 1 day ago
              You need to preallocate space on LVM2 for storing changes and if it fills, bad things happen. You have write amplification of 4MB per write by default on LVM2, while ZFS just writes what is needed, since LVM2 isn't aware of the filesystem structures. All of the advantages WRT cache are gone if you use LVM2 too. Correct me if I am wrong.

              That said, if you really want to use block devices, you could use zvols to get something similar to LVM2 out of ZFS, but it is not as good as using snapshots on ZFS' filesystems. The write amplification would be lower by default (8KB versus 4MB). The page cache would still duplicate data, but the buffer cache duplication should be bypassed if I recall correctly.

        • RunningDroid 2 days ago
          I believe they were referring to the use of ZFS snapshots for a Copy-on-Write type setup
    • hnuser123456 2 days ago
      Is the interactive python sandbox incompatible with thinking models? It seems like I can only get the interactive sandbox by using 2.0 flash, not 2.0 flash thinking or 2.5 pro.
      • topsycatt 2 days ago
        That's a good question! It's not incompatible, it's just a matter of getting the flow right. I can't comment too much on that process but I'm excited for the possibilities there.
        • hnuser123456 2 days ago
          Oh, I see Gemini can run code as part of the thinking process. I suppose the sandbox that happens in was the target of this research, while code editing in Gemini Canvas just has a button to export to Colab for running. The screenshots in the research show a "run" button for generated code in the chat, but I'm not seeing that exact interface.

          In any case, I share your excitement.

          • topsycatt 2 days ago
            Canvas actually has a mix of this sandbox (with a different container) and fully client-side.

            The "run" option for generated code was removed due to underutilization, but the sandbox is still used for things like the data analysis workflow and running extensions amongst other things. It's really just a general purpose sandbox for running untrusted code server-side.

            • hnuser123456 2 days ago
              Is there a way for you to campaign to return the run button for common queries for code examples? It's probably the most powerful educational tool ever invented, to be able to see how the human language description turns into strange computer code which turns into resulting output. If you guys can get it secure enough, it's a killer feature.
            • sans_souse 2 days ago
              Talk about indirect gas-lighting, I can never find info on deprecated functions like this one, to the point I convinced myself I imagined it. I guess now I know who to ask
        • TechDebtDevin 2 days ago
          Have you by chance read this paper: https://agent-gen.github.io/
    • wunderwuzzi23 2 days ago
      That's cool. I did something similar in the early days with Google Bard when data visualization was added, which I believe was when the ability to run code got introduced.

      One question I always had was what the user "grte" stands for...

      Btw. here the tricks I used back then to scrape the file system:

      https://embracethered.com/blog/posts/2024/exploring-google-b...

      • waych 2 days ago
        The "runtime" is a google internal distribution of libc + binutils that is used for linking binaries within the monolithic repo, "google3".

        This decoupling of system libraries from the OS itself is necessary because it otherwise becomes unmanageable to ensure "google3 binaries" remain runnable on both workstations and production servers. Workstations and servers each have their own Linux distributions, and each also needs to change over time.

        • saagarjha 2 days ago
          Of course, this meant that some tools got stuck on some old glibc from like 2007.
      • flawn 2 days ago
        It says in the article - Google Runtime Environment
      • jemfinch 2 days ago
        grte is probably "google runtime environment", I would imagine.
    • fragmede 2 days ago
      Do you think "hacked Gemini and leaked its source code" is an accurate representation of what happened here?
      • topsycatt 2 days ago
        I'm on the Google side of the equation. I think the title is a bit sensationalized, but that's the author's prerogative.
        • devdudect 2 days ago
          When are we going to be able to run sandboxed php code?
          • simonw 2 days ago
            You can run PHP in ChatGPT Code Interpreter today if you upload the right binary (also Deno and Lua and more): https://til.simonwillison.net/llms/code-interpreter-expansio...
          • topsycatt 2 days ago
            We could, it's just not high up on the priority list. Any particular reason you want php?
            • alienbaby 2 days ago
              Possibly they are mildly insane
            • 0xbadcafebee 2 days ago
              >75% of the web's server-side code is php. most of that is WordPress, but lots of people customize it, and being able to write your own themes, plugins, etc is a big deal
            • egeozcan 2 days ago
              Next step is gemini hosting Personal Home Pages.
            • ipaddr 2 days ago
              Why would you want to run anything else?
        • koakuma-chan 2 days ago
          > but that's the author's prerogative

          You submitted this.

          • topsycatt 2 days ago
            I submitted this HN link with a title that exactly matches the one on the article, but I didn't write the title on the article. AFAIK HN posts should match the title of the article they link to.
            • dang 2 days ago
              Actually the rule is designed to let you correct misleading titles:

              "Please use the original title, unless it is misleading or linkbait; don't editorialize." - https://news.ycombinator.com/newsguidelines.html

              I've done that now (https://news.ycombinator.com/item?id=43509103).

              I appreciate your scruples though! Because even though you would have been on the right side of HN's rules to correct a misleading (and/or linkbait) title, the fact that you work for Google would have opened you to the usual gotcha attacks about conflict of interest. This way we avoided all of that, and it's still a good submission and thread!

            • koakuma-chan 2 days ago
              > AFAIK HN posts should match the title of the article they link to.

              I am not aware of such rule's existence.

              Also "should" not "must."

              To be clear: I don't have a problem with you submitting this, but the title appears to be completely false.

          • marcellus23 2 days ago
            From the HN guidelines:

            > Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.

            Arguably this is misleading or clickbait, but safer to err on the side of using the original title.

          • wil421 2 days ago
            Even better, OP shared something OP didn’t write but thought it was interesting.
    • enoughalready 2 days ago
      Have you contemplated running the python code in a virtual environment in the browser?
    • seydor 2 days ago
      you re the hacker or the google?
      • topsycatt 2 days ago
        The google
        • larodi 2 days ago
          "im the google" is definitely a top 3 chart synthpop song by ladytron .)
        • sans_souse 2 days ago
          Can a Mod please change thread title to I'm The Google. AMA.
        • lugao 2 days ago
          [flagged]
        • onemoresoop 2 days ago
          Question: how does it feel inside google in terms of losing their lunch to OpenAi? Losing here is very loose, I don’t think OpenAI won yet but seems to have made a leap ahead of google in terms of marker share and we know google was sitting on tons of breakthroughs and research. Any panicking or internal discontent at google’s product policies? No need to answer if you’re uncomforable that your employer may hold you responsible for what you write here.
          • mediaman 2 days ago
            This is an unusual opinion in industry, although common with consumers.

            Currently, Google has the most cost effective model (Flash 2) for tons of corporate work (OCR, classifiers, etc).

            They just announced likely the most capable model currently in the market with Gemini 2.5.

            Their small open source models (Gemma 3) are very good.

            It is true that they've struggled to execute on product, but the actual technology is very good and getting substantial adoption in industry. Personally I've moved quite a few workloads to Google from OpenAI and Anthropic.

            My main complaint is that they often release impressive models, but gimp them in experimental mode for too long, without fully releasing them (2.5 is currently in this category).

            • snoman 2 days ago
              How does Flash compare to Nova Lite? The latter looks less expensive. I haven’t really used either (used Nova Pro and it was good)
          • nikcub 2 days ago
            Nobody serious believes this. OpenAI may be eating up consumer mindshare - but Google are providing some of the most capable, best, cheapest and fastest models for dev integration.
            • bitexploder 2 days ago
              As the hype dies down, Goliath shakes off the competition. AI models are now a game of inches and those inches cost billions every inch, but it matters in the long run.
            • lanyard-textile 1 day ago
              I’m honestly shocked to hear anyone defend gemini, respectfully :)

              What casts it as most capable?

          • MyelinatedT 2 days ago
            From my perspective (talking very generally about the mood and environment here), it’s important to remember that Google is a very, very big company with many products and activities outside of AI.

            As far as I can see, there is a mix of frustration at the slowness of launching, optimism/excitement that there are some really awesome things cooking, and indifference from a lot of people who think AI/LLMs as a product category are quite overhyped.

            • fennecbutt 2 days ago
              Idk, I used to want to work for Google but I'm not so sure anymore. They built an awesome landscaper next to my office in London.

              But the UX and general functionality of their apps and services has been in steep decline for a long time now, imo. There are thousands of examples of the most basic and obvious mistakes and completely uninspired, sloppy software and service design.

              • MoonGhost 2 days ago
                > obvious mistakes and completely uninspired, sloppy software and service design.

                That's something you can work on to improve.

                A few years back I wanted to work for FAANG big company. Now I don't after working for smaller but with 'big' management. There are rats races, dirty tricks. And engineers don't have much control on what and how they are doing. Many things decided by incompetent managers. Architect position is actually a manager's title, no brain or skills required.

                Today I rather go to a small company or startup where the results are visible and appreciated.

            • dataflow 2 days ago
              > Google is a very, very big company with many products and activities outside of AI.

              Profit is what matters though, not number of products. The consumer perception is that Search rakes in the largest profits, so if they lose that, it doesn't matter what else is there. Thoughts?

          • luke-stanley 2 days ago
            They just released a SOTA model (Gemini 2.5 Pro) that beats all models on most benchmarks, it's a great comeback from the model side but IMO they are less strong on the product side, they pioneered the sticky ecosystem of web app products model, though kinda like the Microsoft Office suite that (originally) had to be downloaded, ironically building on XML HTTP request support the IE5 introduced for Outlook.
    • Mindwipe 2 days ago
      Does anyone at Google care that you're trying to replace Assistant with this in the next few months and it can't set a timer yet?

      (I mean it will tell you it's set a timer but it doesn't talk to the native clock app so nothing ever goes off if you navigate away from the window.)

      • hnuser123456 2 days ago
        I doubt the guy working on the code sandbox can do anything about the overall resource allocation towards ensuring all legacy assistant features still work as well as they used to. That being said, I was trying to navigate out of an unexpected construction zone and asked google to navigate me home, and it repeatedly tried to open the map on my watch and lock my phone screen. I had to pull over and use my thumbs to start navigation the old fashioned way.
      • iury-sza 2 days ago
        I keep reading people complaining about this but I can't understand why. Gemini can 100% set timers and with much more subtle hints than assistant ever could. It just works. I don't get why people say it can't.

        It can also play music or turn on my smart lamps, change their colors etc. I can't remember doing any special configuration for it to do that either.

        Pixel 9 pro

        • jdiff 2 days ago
          I certainly can't get it to reliably play music on my Pixel 8. Mostly it summons YT Music, only occasionally do I get my music player, and sometimes I merely get "I'm an LLM, I can't help you with that."

          And you used to be able to say "Find my phone" and it would chime and max screen brightness until found. Tried that with Gemini once, and it went on with very detailed instructions on using Google or Apple's Find My Device website (depending on what type of phone I owned), maybe calling it from another device if it's not silenced, or perhaps accepting that my device was lost or stolen if none of the above worked. Did find it during that lengthy attempt at being helpful though.

          Another fun example, weather. When Gemini's in control, "What's the weather like tonight?" gets a short ramble about how weather depends on climate, with some examples of what the weather might be like broadly in Canada, Japan, or the United States at night.

          Unlike Assistant where you could learn to adapt to its unique phrasing preferences, you just flat out can never reliably predict what Gemini's going to do. In exchange for higher peak performance, the floor dropped out the bottom.

      • dgunay 2 days ago
        I dislike Google's (mis)management of Assistant as much as the next guy, but this just has not been my experience. I can tell Gemini on my phone to set timers and it works just fine.
      • ChadNauseam 2 days ago
        I have a rooted pixel with a flashed custom android ROM, which should be a nightmare scenario for gemini, and it can set timers just fine (and the timers show up in the native clock app)
      • arebop 2 days ago
        The Assistant can't reliably set timers either, though I guess 80% is considerably better than 0. Still, I think it used to be better back before Google caught a glimpse of a different squirrel to chase.
      • 7bit 2 days ago
        It can't do shit, especially in some EU countries, where it can do even less shit.

        Setting timers reminders, calendar events. Nothing. If they kill the assistant, I'll go Apple, no matter how much I hate it.

        • GrayShade 1 day ago
          Just tested, you need to enable "Gemini Apps", but they remember your interactions for 3, 18 or 36 months instead of 3 days.
          • 7bit 1 day ago
            Yeah, I disabled that when I tested it. No go for me, but thanks for informing me!
      • nosrepa 2 days ago
        I just want the assistant voice. I hate the Gemini ones.
        • whatevertrevor 2 days ago
          I'm with you on that. I prefer a human trying to sound like a robot instead of a robot trying to sound human.
    • jwlake 2 days ago
      Is there any reason it's not documented?
    • ed_elliott_asc 2 days ago
      This is why hacker news is so cool
    • KennyBlanken 2 days ago
      Can you get someone to fix the CSS crap on the website? When I have it open it uses 40-50% of my GPU (normally ~5% in most usage)...and when I try to scroll, the scrolling is jerky mess?
  • simonw 2 days ago
    I've been using a similar trick to scrape the visible internal source code of ChatGPT Code Interpreter into a GitHub repository for a while now: https://github.com/simonw/scrape-openai-code-interpreter

    It's mostly useful for tracking what Python packages are available (and what versions): https://github.com/simonw/scrape-openai-code-interpreter/blo...

    • Zopieux 2 days ago
      Meanwhile they could just decide to publish this list in a document somewhere and keep it automatically up to date with their infra.

      But not, secrecy for the sake of secrecy.

      • aleksiy123 2 days ago
        Tbh I doubt this is secrecy.

        More likely just noone has taken the time and effort to do it.

      • 12345hn6789 2 days ago
        What would the benefit of doing this be?
        • simonw 1 day ago
          It's documentation. Makes it much easier for people to know what kind of problems they can solve using Code Interpreter.

          It's a bit absurd that the best available documentation for that feature exists in my hacky scraped GitHub repository.

    • fudged71 1 day ago
      I just used this package list (and sandbox limitations) to synthesize a taxonomy of capabilities: https://gist.github.com/trbielec/a00a58fa97a232bef8984cc8d01...
  • lqstuart 2 days ago
    So by “we hacked Gemini and leaked its source code” you really mean “we played with Gemini with the help of Google’s security team and didn’t leak anything”
  • parliament32 2 days ago
    > resulting in the unintended inclusion of highly confidential internal protos in the wild

    I don't think they're all that confidential if they're all on github: https://github.com/ezequielpereira/GAE-RCE/tree/master/proto...

    • saagarjha 2 days ago
      I mean, those were also disclosed via a vulnerability.
      • Brian_K_White 2 days ago
        But it still means they aren't guilty of leaking/disclosing them.

        It's not a valid point of criticism. The escape did not in fact "result" in the leak of confidential photos. That already happened somewhere else. This only resulted in the republishing of something already public.

        Or another way, it's not merely that they were already public elsewhere, the imortant point is that the photos were not given to the ai in confidence, and so re-publishing them did not violate a confidence, any more than say github did.

        I'm no ai apologist btw. I say all of these ais are committing mass copyright violation a million times a second all day every day since years ago now.

        • saagarjha 1 day ago
          I’m not criticizing them
          • Brian_K_White 1 day ago
            The article made that criticism.
            • saagarjha 1 day ago
              The article criticized its authors? I’m not sure I understand.
              • Brian_K_White 1 day ago
                The article / leak authors said that the leak resulted in the exposure of highly confidential protos.

                I was saying that the article was wrong for saying that, but I was half wrong about that.

                I thought that the thing they were talking about was something that the AI got from a public source, in which case the AI didn't disclose anything it was given in confidense. It just republished something that it itself got from a public source in the first place.

                Except I think I had that wrong. The stuff was already published elsewhere, but that's not how the AI got it. The leak caused the AI to disclose some of it's own internal workings, which is actually a leak and does "result in the disclosure of something confidential" even of something else elsewhere had already also seperately disclosed the same thing. That other leak has no bearing in this case.

  • tgtweak 2 days ago
    The definition of hacking is getting pretty loose. This looks like the sandbox is doing exactly what it's supposed to do and nothing sensitive was exfiltrated...
  • bluelightning2k 2 days ago
    Cool write up. Although it's not exactly a huge vulnerability. I guess it says a lot about how security conscious Google is that they consider this to be significant. (You did mention that you knew the company's specific policy considered this highly confidential so it does count but it feels a little more like "technically considered a vulnerability" rather than clearly one.)
  • jll29 2 days ago
    Running the built-in "strings" command to extract a few file names from a binary is hardly hacking/cracking.

    Ironically, though, getting the source code of Gemini perhaps wouln't be valuable at all; but if you had found/obtained access to the corpus that the model was pre-trained with, that would have been kind of interesting (many folks have many questions about that...).

    • dvt 2 days ago
      > but if you had found/obtained access to the corpus that the model was pre-trained with, that would have been kind of interesting

      Definitionally, that input gets compressed into the weights. Pretty sure there's a proof somewhere that shows LLM training is basically a one-way (lossy) compression, so there's no way to go back afaik?

      • jdiff 2 days ago
        Not the original, but a lossy facsimile that's Good Enough for almost anything. And as the short history of LLMs and other nets has shown us, they're often not even all that lossy.
  • jeffbee 2 days ago
    I guess these guys didn't notice that all of these proto descriptors, and many others, were leaked on github 7 years ago.

    https://github.com/ezequielpereira/GAE-RCE/tree/master/proto...

  • theLiminator 2 days ago
    It's actually pretty interesting that this shows that Google is quite secure, I feel like most companies would not fare nearly as well.
    • kccqzy 2 days ago
      Yes and especially the article mentions "With the help of the Google Security Team" so it's quite collaborative and not exactly black box hacking.
  • commandersaki 2 days ago
    Their "LLM bugSWAT" events, held in vibrant locales like Las Vegas, are a testament to their commitment to proactive security red teaming.

    I don't understand why security conferences are attracted to Vegas. In my opinion its a pretty gross place to conduct any conference.

    • zem 2 days ago
      relatively cheap event space and hotels. it's hard to find a city to host a large conference.
    • lmm 2 days ago
      Excluding uptight scolds is a feature not a bug. There's a lot of overlap between people who find Vegas objectionable and people who find red teaming objectionable (because why would any decent person know attacking/exploiting techniques).
      • commandersaki 2 days ago
        The irony is that Vegas takes a dim view of those that take advantage of their gaming venues. The institutions that run it are quite aggressive when it comes to being attacked.

        Anyways, security conferences such as BSides run all over the world in various cities where red teaming type activities is embraced. IMO it'd be nice to diversify from Vegas, preferably places with more scenery/greenery like Boulder or something.

    • desmosxxx 2 days ago
      What don't you understand. Vegas is literally built for conferences.
    • hashstring 2 days ago
      Real, I feel the exact same way.
    • numbsafari 2 days ago
      You answered your own question.
    • scudsworth 2 days ago
      reinvent is in vegas
  • ein0p 2 days ago
    They hacked the sandbox, and leaked nothing. The article is entertaining though.
    • kccqzy 2 days ago
      They leaked one file in the sandbox that contained lots of internal proto files. The security team reviewed everything in the sandbox and thought nothing in it is sensitive and gave the green light; apparently the review didn't catch this in the sandbox.

      I guess this is a failing of the security review process, and possibly also how the blaze build system worked so well that people forgot a step existed because it was too automated.

      • charcircuit 2 days ago
        >that contained lots of internal proto files

        So does Google Chrome.

        • kccqzy 2 days ago
          No it's not the same level of internal. There are internal proto files specific to Chromium and its API endpoints, and then there are internal proto files for google3. The latter can divulge secrets about Google's general server side architecture. The former only divulges secrets about server side components relevant to Chromium.
  • fpgaminer 2 days ago
    Awww, I was looking forward to seeing some of the leak ;) Oh well. Nice find and breakdown!

    Somewhat relatedly, it occurred to me recently just how important issues like prompt injection, etc are for LLMs. I've always brushed them off as unimportant to _me_ since I'm most interested in local LLMs. Who cares if a local LLM is weak to prompt injection or other shenanigans? It's my AI to do with as I please. If anything I want them to be, since it makes it easier to jailbreak them.

    Then Operator and Deep Research came out and it finally made sense to me. When we finally have our own AI Agents running locally doing jobs for us, they're going to encounter random internet content. And the AI Agent obviously needs to read that content, or view the images. And if it's doing that, then it's vulnerable to prompt injection by third party.

    Which, yeah, duh, stupid me. But ... is also a really fascinating idea to consider. A future where people have personal AIs, and those AIs can get hacked by reading the wrong thing from the wrong backalley of the internet, and suddenly they are taken over by a mind virus of sorts. What a wild future.

    • 20after4 2 days ago
      > reading the wrong thing from the wrong backalley of the internet, and suddenly they are taken over by a mind virus of sorts. What a wild future.

      This already happens to people on the internet.

      • tcoff91 2 days ago
        Yeah, the way some people lose it from the internet reminds me of Snow Crash.
  • Cymatickot 2 days ago
    Probably best text I've seen in AI train ride recently:

    """"" As companies rush to deploy AI assistants, classifiers, and a myriad of other LLM-powered tools, a critical question remains: are we building securely ? As we highlighted last year, the rapid adoption sometimes feels like we forgot the fundamental security principles, opening the door to novel and familiar vulnerabilities alike. """"

    There this case and there many other cases. I worry for copy & paste dev.

  • qwertox 2 days ago
    Super interesting article.

    > but those files are internal categories Google uses to classify user data.

    I really want to know what kind of classification this is. Could you at least give one example? Like "Has autism" or more like "Is user's phone number"?

    • StephenAmar 2 days ago
      The latter. Like is it a public ID, an IP, user input, ssn, phone number, lat/long…

      Very useful for any scenario where you output the proto, like logs, etc…

  • mr_00ff00 2 days ago
    Slightly irrelevant, but love the color theme on the python code snippets. Wish I knew what it was.
  • b0ner_t0ner 2 days ago
    Very distracting background/design on desktop; had to toggle reader view.
  • paxys 2 days ago
    Funny enough while "We hacked Google's AI" is going to get the clicks, in reality they hacked the one part of Gemini that was NOT the LLM (a sandbox environment meant to run untrusted user-provided code).

    And "leaked its source code" is straight up click bait.

    • dang 2 days ago
      Ok, we put the sandbox in the title above. Thanks!

      (Submitted title was "We hacked Google's A.I Gemini and leaked its source code (at least some part)")

      • topsycatt 2 days ago
        Thanks!
      • infinghxsg 2 days ago
        Instead of sandbox can you just make sure people know it was not a meaningful hack?

        I mean I “hacked” this site too by those standards.

        • dang 2 days ago
          What would be a more accurate and neutral wording?
          • xnx 2 days ago
            We uncovered some internal details of the Gemini Python sandbox
    • IshKebab 1 day ago
      They didn't even hack it.
    • HenryBemis 2 days ago
      Click and cash (for the great trio).
  • sneak 2 days ago
    > However, the build pipeline for compiling the sandbox binary included an automated step that adds security proto files to a binary whenever it detects that the binary might need them to enforce internal rules. In this particular case, that step wasn’t necessary, resulting in the unintended inclusion of highly confidential internal protos in the wild !

    Protobufs aren't really these super secret hyper-proprietary things they seem to make them out to be in this breathless article.

    • film42 2 days ago
      No, but having the names to the fields, directly from Google, is very helpful for further understanding what's available from within the sandbox.
      • kingforaday 2 days ago
        Reminds me of this HN article from a month ago with lots of commentary on whether a database scheme is proprietary.

        https://news.ycombinator.com/item?id=43175628

        • film42 2 days ago
          Yeah there are some interesting similarities. However, the biggest difference is Google has the right to keep source proprietary, and companies like Unity are allowed to provide source code with a reference only license (still proprietary), but the US has FOIA to help push information into the open. Does a DB schema fall under FOIA scope? I think a better question is, can (or is) a db schema being used to conceal information? Is the law attempting to reinforce this barrier?

          In other words, it should not be about the intent of the requester, but the intent of its owner; and in the case of that article, either by bias in narrative, or the fact that it rhymes with events of the past, there is some tomfoolery about.

    • ratorx 2 days ago
      Yup, there’s no reason to believe that the proto files (which are definitions rather than data) are any more confidential than the Gemini source code itself.
    • daeken 2 days ago
      Yeah, this is honestly super interesting as a journey, but not as a destination. The framing takes away from how cool the work really is.
    • ipsum2 2 days ago
      Yes, there's a lot of internal protos from Google that are leaked on the internet. If I recall correctly, it was a hacker News comment that linked to it.

      Edit: I don't know why the parent comment was flagged. It is entirely accurate.

      • kccqzy 2 days ago
        You are probably thinking of the Google search ranking leak. That leak was the leak of the generated documentation from proto files.
    • whatevertrevor 2 days ago
      The protos in question are related to internal authn/z so it's conceivable that having access to that structure would be valuable information to an attacker.
      • rurban 2 days ago
        The protos were already available. See above.

        A valuable information would be able to run those RPC calls as Principal (their root user)

  • curiousZeedX 2 days ago
    [dead]