  • autoexec 12 days ago
    As usual, disabling JavaScript by default is the way to go, but in Firefox at least you can make sure that dom.webgpu.enabled is set to false in about:config (this should be the default), and check for gfx.webgpu.force-enabled; if it's there, make sure that is also false.
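
    For reference, a minimal user.js sketch of those two prefs (same pref names as above; behaviour may vary across Firefox versions):

      // user.js -- keep WebGPU off in Firefox
      user_pref("dom.webgpu.enabled", false);        // should already be the default
      user_pref("gfx.webgpu.force-enabled", false);  // only matters if something forced it on
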
    • akyuu 12 days ago
      I think you can also disable hardware acceleration, and that will prevent the GPU from being accessible even with JavaScript enabled.
    • zamadatix 12 days ago
      I wonder if this is a case where disabling JavaScript JIT is enough or if even slow JS is still able to trigger the problem through the WebGPU side of the stack.
    • yieldcrv 12 days ago
      and then every react site stops rendering entirely?

      that suggestion isn't even viable on Tor websites

      • autoexec 12 days ago
        You can always selectively enable JS for specific websites that you trust. With something like NoScript it's just two clicks to whitelist a domain you want to authorize to be able to run random code on your machine.

        At work I run a hardened browser that disables a hell of a lot more than JS, and not only do most pages work just fine as far as providing the content I wanted from them, they load faster and look cleaner. There are still some annoying cases where a website can't even manage to display text or images without JS enabled, but even as easy as it is to enable JS, if a website is that broken I often just close that tab and move on with my life anyway. I don't use that browser for things like online shopping, but for 90% of what I need it works while also being far better for security and privacy.

      • ziddoap 12 days ago
        I'm browsing HN right now without JS.

        A surprising number of sites work, at least well enough, with JS disabled. When they don't I can selectively enable JS as needed until I get the functionality required. Often that is a single permission or two, while I keep everything else disabled.

        • zamadatix 12 days ago
          What "well enough" is, and how much time you have to spend finding that out about each site and each time they update, is a pretty wide swath. E.g. browsing HN works fine... if you've already decided you don't care about things like collapsing comment chains. And HN is a pretty barebones site at that.

          If you're willing to dedicate the troubleshooting time to your web experience you can get yourself into a pretty useable state over time though.

          • 1oooqooq 12 days ago
            quit whining and actually try it.

            i use uBlockOrigin for ad block. it has a setting "disable js". done.

            each site you visit either works... or you leave it. if you must use it and it's a blank page, press ctrl+e, or open the ad blocker UI (works even on Firefox Android) and uncheck the blocked js icon. again, done. two clicks or one keyboard shortcut.

            and as the comment you're replying to said, you will be surprised how much works fine without it.
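
            for reference, a sketch of the "My rules" entries that setting amounts to (assuming uBO's dynamic-filtering switch syntax; example.com is just a placeholder for a site you re-enable):

              no-scripting: * true
              no-scripting: example.com false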

            • zamadatix 12 days ago
              How else would I know e.g. that you can't collapse comments on HN without JS enabled, or that you eventually get a more usable experience after investing the time, unless I'd already tried such an operation in the browser before? It's worth at least thinking through what I've said before trying to call it out as blindly ignorant. Not that either way really steers the discussion towards the points made anyway.

              Also, if you just whitelist every site the instant it doesn't work, it's not exactly a security gain. Maybe it helps with ads UBO isn't catching out of the box, or some other angle. The amount of security you gain from this is proportional to the amount of effort you're willing to invest in making sites work. As an example: https://news.ycombinator.com/item?id=40069834 a site works fine without JS one day, then suddenly doesn't, so you whitelist it or futz around until you find something like the /embed trick, until that either changes some day too or you've given up and just whitelisted every site you go to anyway.

              • 1oooqooq 11 days ago
                the point is it makes enabling js just a click more annoying, which forces you to unconsciously favor sites that work fine without it.

                you seem to spend time here, so you'd pay the security price and that's it.

                not having js on by default is for the 95% of domains you hit every day to read a single paragraph and never return to.

  • karma_pharmer 12 days ago
    Wait until these guys hear about WebUSB, WebSerialPort, WebBootloader, WebFirmware, and WebHardDiskPartitioner!

    I hear they are working on even more APIs for TheWeb(tm).

    • eyegor 12 days ago
      WebUSB/HID are scary powerful, but for some reason I've never heard of malware attempting to use them. It's a neat party trick for being able to program USB-enabled chips from a browser, but I could never figure out another use case.
      • idle_zealot 12 days ago
        The requirement for the browser to explicitly ask the user to give the site access to a specific connected USB/BT device makes this a tricky vector. You would need to pose as a webapp with a legitimate need to access the device you're targeting.
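
        A minimal sketch of that gate (the button handler and vendor filter are illustrative, not from any real app):

          // Must run from a user gesture; the browser shows a device chooser
          // and the page only ever sees the device the user explicitly picks.
          connectButton.addEventListener("click", async () => {
            try {
              const device = await navigator.usb.requestDevice({
                filters: [{ vendorId: 0x2341 }], // hypothetical vendor filter
              });
              await device.open();
              console.log("granted:", device.productName);
            } catch (err) {
              console.log("user cancelled or no matching device", err);
            }
          });
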
        • Thorrez 12 days ago
          Previously that attack was possible against USB security keys (U2F/FIDO). Browsers fixed it though by blocking webusb from interacting with security keys.

          https://www.yubico.com/support/security-advisories/ysa-2018-...

          • drdaeman 12 days ago
            Which sucks, because I hoped there would be a way to use HSMs in browser for {en,de}cryption. A lot of those devices aren't just for FIDO, they have PKCS#11 and OpenPGP.

            WebAuthn/U2F is not designed for this. As usually happens in tech, people hack weird contraptions onto it afterwards (largeBlob, prf and hmac-secret) that allow you to at least attach or derive symmetric keys, but this isn't a real solution even if it did work.
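
            For illustration, the prf contraption looks roughly like this (a sketch assuming the WebAuthn prf extension and an already-registered credential; the salt label and key use are made up):

              // Ask the authenticator to evaluate its PRF over a fixed salt during login.
              const assertion = await navigator.credentials.get({
                publicKey: {
                  challenge: crypto.getRandomValues(new Uint8Array(32)), // would come from the server
                  extensions: {
                    prf: { eval: { first: new TextEncoder().encode("file-encryption-salt") } },
                  },
                },
              });
              // The PRF output can seed a symmetric key -- the "derive" hack mentioned above.
              const prfOut = assertion.getClientExtensionResults().prf?.results?.first;
              const hkdfKey = await crypto.subtle.importKey("raw", prfOut, "HKDF", false, ["deriveKey"]);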

        • graemep 12 days ago
          A lot of people will click OK to anything though.
          • idle_zealot 12 days ago
            The best we can do is make it hard to own yourself by accident. The alternative is to give up on user control. Some users will ignore all warnings and install malware, get phished, and send money to Nigerian princes. We can try to educate them and make things as clear as possible, but removing or limiting functionality because some portion of users will misuse it is a bad idea.
          • bastawhiz 12 days ago
            The same people who will click allow on anything will also happily download and run a random APK or executable
          • beeboobaa3 12 days ago
            That's their choice. Sick and tired of developers dumbing everything down to protect this special breed of idiot.
      • BrutalCoding 12 days ago
        A use case I’d like to see is having VSCode (e.g. code-server) or any other web IDE for that matter, and being able to attach a debugger process that’s connected to my usb device (e.g. Android phone) for app development.

        I can setup a (macOS) VM and expose the web IDE securely to myself, but I wasn’t able to find a working solution about a year ago.

        The closest solution for me right now is using Google’s IDE ‘IDX’, basically a web VSCode with a right side panel containing an actual Android/iOS emulator. It’s neat, but I’d rather use my physical devices over USB during development.

        PS. If anyone knows how to set this up, please share!

      • archerx 12 days ago
        I used it to make an NFC login system for a web interface. I would like to explore webUSB more.
    • AshamedCaptain 12 days ago
      Don't forget WebBluetooth! Already shipping in Chrome and literally used to do OTA firmware upgrades of InternetOfThings(TM) devices. What could possibly go wrong?
      • altairprime 12 days ago
        What’s the difference between firmware updates using OWA and firmware updates using APK?

        A native Android app isn’t more secure than a web page running on that device, and the champions of Open Web Apps need the same hardware access privileges as native apps have in order to be a viable competing platform for development.

        HN is very strongly in favor of OWAs, so it’s confusing to see hostility to hardware access by webpages here? How else could a firmware updater app work?

    • signal11 12 days ago
      WebVibrator is a thing. No, really*: https://developer.mozilla.org/en-US/docs/Web/API/Vibration_A...

      *Not supported on Safari, thank goodness!
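
      Where it is supported, the whole API is basically one call (harmless sketch, a no-op elsewhere):

        // vibrate 200 ms, pause 100 ms, vibrate 200 ms
        navigator.vibrate?.([200, 100, 200]);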

    • dormento 12 days ago
      > WebHardDiskPartitioner

      Made me look.

      • scintill76 12 days ago
        Therapist: WebHardDiskPartitioner isn't real, it can't hurt you.

        Chrome engineer: Hold my beer!

    • OneLeggedCat 12 days ago
      > WebHardDiskPartitioner

      I lol'd at that one. No one would ever make such a thing... Wait, would they?!?!

      • Jensson 12 days ago
        If you want to replace OS programs with browser apps you need it, and some think that you should be able to do everything in the browser.
      • OneLeggedCat 12 days ago
        WebDBAN. WebRootFolderRemover. WebHardDriveMusic.
        • nmeagent 12 days ago
          > WebHardDriveMusic

          I can hear it now: millions of non-solid-state drives across the Internet playing the Imperial March in sync...

  • miohtama 12 days ago
    THREAT MODEL

    > Like earlier, we assume our attacker embeds some malicious JavaScript in a webpage the victim is browsing for several minutes. The victim runs a GPU-based AES implementation that can be queried for encryption with a chosen plaintext and key.

    Nobody runs a GPU-based AES implementation, so I feel the threat model and thus the assumptions of the paper are on quite shaky ground.

    • 2OEH8eoCRo0 12 days ago
      Does it matter when the purpose is to demonstrate a crack in WebGPU?
      • olliej 12 days ago
        It does when the attack mechanism is a completely standard attack (cache eviction on AES tables) with established solutions (not running AES on shared resources, various masking operations, or simply doing the correct thing and using the system AES implementation or hardware).

        This attack is not far from them saying "we made an AES implementation on the GPU and it leaked timing information". Yes, the leak is technically present, but the problem is not that you can't tell the GPU to disable any predictors or caches; it's that you're using a wholly unsuited tool for a wholly inappropriate task with known hazards and expecting a different outcome.

  • pclmulqdq 12 days ago
    Isn't GPU access from JavaScript supposed to be a feature, not a bug?
    • eptcyka 12 days ago
      The feature part of it is to allow JS code in a browser sandbox to compute stuff faster. Possibly even render it.

      The explicit anti-feature is the ability to do yet another side-channel attack. Also out-of-spec is the incredibly power-inefficient IPC mechanism. And I doubt a probabilistic querying of what kind of code the rest of the system is running was ever part of the spec either.

      • pclmulqdq 12 days ago
        I intended that to be sarcasm about how unimpressive this particular vulnerability is compared to how it's presented.
  • bastawhiz 12 days ago
    I simply don't understand what they did here, and this press release doesn't actually say what they did.

    > The team was able to track changes in the cache by filling it themselves using code in the JavaScript via WebGPU and monitoring when their own data was removed from the cache by input. This made it possible to analyse the keystrokes relatively quickly and accurately. By segmenting the cache more finely, the researchers were also able to use a second attack to set up their own secret communication channel, in which filled and unfilled cache segments served as zeros and ones and thus as the basis for binary code. They used 1024 of these cache segments and achieved transfer speeds of up to 10.9 kilobytes per second, which was fast enough to transfer simple information. Attackers can use this channel to extract data that they were able to attain using other attacks in areas of the computer that are disconnected from the internet.

    I don't understand which cache they're talking about, and why this has anything to do with keystrokes. And suggesting the use of cache invalidation to transmit information from one process to the browser doesn't really sound like a very concerning attack vector (how did you get a process running on the host computer in the first place, but without Internet access?)

    > The third attack targeted AES encryption, which is used to encrypt documents, connections and servers. Here, too, they filled up the cache, but with their own AES encryption. The reaction of the cache enabled them to identify the places in the system that are responsible for encryption and access the keys of the attacked system.

    Sorry, what? No part of this tracks for me. Who is even using the GPU for AES encryption? Or is this looking at the CPU cache by way of webgpu?

    If there's good work being done here, this article is frankly doing it a massive disservice.

    • remram 12 days ago
      The paper is linked at the bottom: https://ginerlukas.com/publications/papers/WebGPUAttacks.pdf

      It looks like it targets AES and keystrokes if they're processed on the GPU. They detect which parts of AES lookup tables (T-Tables) are getting added to the cache, which gives information about the key (section 6).
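
      To make that concrete, here's a toy simulation (not code from the paper) of why learning which 16-entry table line was touched leaks a last-round key byte; a random permutation stands in for the public table and the cache "leak" is simulated directly:

        // Model: ciphertext byte c = T[s] ^ k, with T public and s, k secret.
        // The side channel only reveals which 16-entry line of T was accessed,
        // i.e. the top 4 bits of s; intersecting key candidates over many
        // observations pins down k.
        function shuffle(a) {
          for (let i = a.length - 1; i > 0; i--) {
            const j = Math.floor(Math.random() * (i + 1));
            [a[i], a[j]] = [a[j], a[i]];
          }
          return a;
        }
        const T = shuffle([...Array(256).keys()]);  // stand-in for the public T-table
        const k = Math.floor(Math.random() * 256);  // secret last-round key byte

        let candidates = new Set([...Array(256).keys()]);
        for (let n = 0; n < 200 && candidates.size > 1; n++) {
          const s = Math.floor(Math.random() * 256); // victim's hidden state byte
          const c = T[s] ^ k;                        // attacker sees the ciphertext byte...
          const line = s >> 4;                       // ...and which table line was cached (the leak)
          const consistent = new Set();
          for (let g = 0; g < 16; g++) consistent.add(c ^ T[(line << 4) | g]);
          candidates = new Set([...candidates].filter((x) => consistent.has(x)));
        }
        console.log("recovered:", [...candidates], "actual:", k);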

      • bastawhiz 12 days ago
        It sounds like the keystroke attack they claim isn't practical?

        > Despite the high recall, with the low precision, we can consider this attack mostly failed or severely degraded

        And coming back to what I said originally (who is actually doing AES on a GPU?), it sounds like the key recovery is also impractical:

        > In order to implement a last-round attack, we assume the attacker has access to the victim’s ciphertexts, but not the plaintext or the key.

        How exactly is a browser supposed to get access to the ciphertext in the first place?

        • redrabbyte 12 days ago
          Hi, the line about "failed" attacks pertains to the attack on AMD, Nvidia worked fine ;)
          • bastawhiz 12 days ago
            Even still, it relies on the assumption that you know that the user is typing, and that they're typing something that's interesting to you. You could pop a login page and expect the user to sign in, but it's still a very tenuous scenario to measure what you think might be keypresses. The time to redraw a text box is just as easily the focus ring being drawn, or the submit button being hovered, or a minor scroll event.

            Even then, the best that I know of for password recovery from timing is "Timing Analysis of Keystrokes and Timing Attacks on SSH", which relies on having data about the user in advance, and they only manage to reduce the search space by about 50x. I'm sure the state of the art is probably a bit better, but that's still assuming a lot: key press timing (that's probably noisy) isn't going to be a meaningful attack vector for arbitrary users online.

            • redrabbyte 12 days ago
              concurrent work (https://arxiv.org/ftp/arxiv/papers/2401/2401.04349.pdf) has shown that website fingerprinting, i.e. recognizing something like the static login page of youtube/google/facebook etc., is very much doable.

              that said, I don't expect to see any of these attacks in the wild. they're primarily demonstrations of the technique, meant to show that the channel is there

              as is often the case with side-channel attacks, a serious attacker would much more likely go for un-/recently patched traditional vulnerabilities

        • gpm 12 days ago
          > How exactly is a browser supposed to get access to the ciphertext in the first place?

          MITM attacks are usually considered relatively easy to pull off. Run a fake wifi hotspot in an airport, use the "login portal" to get access to their browser, and use the hotspot itself to get access to the ciphertexts when they visit other https pages (edit: I suppose those would be decrypted on the CPU... but it applies when they use whatever odd program thought decrypting its messages on the GPU was a good idea).

          • bastawhiz 12 days ago
            That assumes the encrypted content itself is transferred in the clear (not over https). I think you'd be hard pressed to find a HTTPS client that decrypts on the GPU. And then of course, the thing being decrypted needs to be big enough for the process to take long enough for them to perform the attack, which means many gigabytes (terabytes?). And then you'd still need to send the data from the mitm server back to the browser for processing before you start measuring anything.

            It's obvious they accomplished something, but the scenario they set up simply isn't something that is practical in the real world

            • freeone3000 12 days ago
              DRM-encrypted streaming video, perhaps? Gigabytes of encrypted data that needs to be decrypted and displayed?
              • bastawhiz 12 days ago
                But you've still got to shuttle that data to the browser and it has to be decrypted on the GPU. I think you'd be hard pressed to find that anywhere in the wild.

                Modern CPUs can decrypt AES with specialized instructions at a rate that's practically measured in GB/s. Especially for streaming video that doesn't need to decrypt the full stream at once (and where you're probably already using the GPU to, you know, render the video) why would you use the GPU for AES?

    • redrabbyte 12 days ago
      It's always hard to communicate fairly academic side channels in a way that gives the audience of a press release (which is typically anyone) any real level of detail.

      We tried to walk the line between giving enough information and not overwhelming people, but it doesn't always work out :D Luckily there's always the paper.

      This article also did a pretty good job of a high-level summary imo https://www.securityweek.com/new-attack-shows-risks-of-brows...

  • pyrolistical 12 days ago
    If they can repro Spectre/Meltdown/bit flipping via row hammering, then this is the time to report the issue before WebGPU goes GA.

    If they can’t, this is fud

  • userbinator 12 days ago
    More reasons to keep JS off by default and whitelist only the subset of sites that you absolutely trust and need it on. Even without the security aspect, I remember reading an article about how WebGPU can be used for fingerprinting.
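
    For example, the adapter metadata WebGPU exposes is already a decent fingerprinting surface (a sketch for an async context such as the devtools console; whether it's adapter.info or the older requestAdapterInfo() depends on the browser version):

      const adapter = await navigator.gpu?.requestAdapter();
      // GPUAdapterInfo fields: vendor, architecture, device, description
      console.log(adapter?.info ?? (await adapter?.requestAdapterInfo?.()));
      // Supported limits also vary per GPU/driver and add entropy.
      console.log(adapter?.limits.maxComputeWorkgroupStorageSize);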