Swift's native Clocks are inefficient

(wadetregaskis.com)

141 points | by mpweiher 14 days ago

17 comments

  • netruk44 12 days ago
    I've been learning me some Swift and coming from C# I feel somewhat spoiled when it comes to timing things.

    In C#, the native Stopwatch class is essentially all you need for simple timing with sub-millisecond precision.

    Swift has not only the entire table of options from TFA to choose from, but also additional ones like DispatchTime [0]. They might all boil down to the same thing (mach_absolute_time, according to the article), but from the perspective of someone trying to learn the language, it's all a little confusing.

    Especially since there are also hidden bottlenecks like the one this post is about.

    [0]: https://developer.apple.com/documentation/dispatch/dispatcht...
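
    For reference, a minimal Stopwatch-style sketch using the stdlib's ContinuousClock (doSomeWork is a hypothetical stand-in for whatever you're measuring):

      let clock = ContinuousClock()
      let start = clock.now
      doSomeWork()                        // hypothetical workload
      let elapsed = clock.now - start     // a Duration
      print("took \(elapsed)")

      // or equivalently:
      let duration = clock.measure { doSomeWork() }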

    • chubs 12 days ago
      Just use CACurrentMediaTime for that, or Date(), both simple options :)
      • fathyb 12 days ago
        I believe `CACurrentMediaTime` depends on `QuartzCore.framework` and `Date` is not monotonic.

        I would also find it confusing if I found code doing something like network retries using `CACurrentMediaTime`.

  • user2342 12 days ago
    I'm not fluent in Swift and async, but the line:

       for try await byte in bytes { ... }
    
    
    for me reads like the time/delta is determined for every single byte received over the network, i.e. millions of times for megabytes sent. Isn't that a spot for optimization, or do I misunderstand the semantics of the code?
    • samatman 12 days ago
      The code, as the author makes clear, is an MWE. It provides a brief framework for benchmarking the behavior of the clocks. It's not intended to illustrate how to efficiently perform the task it's meant to resemble.
      • spenczar5 12 days ago
        But it seems consequential. If the time were sampled every kilobyte instead, the clock overhead would shrink a thousandfold - a bigger win than switching to the other time functions proposed.

        At that point, even these slow methods are using about 0.5ms per million bytes, so it should be good up to gigabit speeds.

        If that’s not fast enough, then sample every million bytes. Or, if the complexity is worth it, sample in an adaptive fashion.
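
        A minimal sketch of that idea, assuming the article's `bytes` sequence plus a hypothetical `deadline` instant and `process` function:

          let clock = ContinuousClock()
          var count = 0
          for try await byte in bytes {
              process(byte)                  // hypothetical per-byte work
              count += 1
              if count % 1_000 == 0,         // only sample the clock every 1,000 bytes
                 clock.now > deadline {      // hypothetical cutoff instant
                  break
              }
          }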

    • metaltyphoon 12 days ago
      I’m not sure about Swift, but in C# an async method doesn’t have to be completed asynchronously. For example, when reading from a file, a buffer will first be read asynchronously, then subsequent calls will complete synchronously until the buffer needs to be “filled” again. So it feels like most languages can do these optimizations.

      • saagarjha 12 days ago
        This is what Swift does.
    • ajross 12 days ago
      Yeah, this is horrifying from a performance design perspective. But in this case you'd still expect the "current time" retrieval[1] to be small relative to all the other async overhead (context switching for every byte!), and apparently it isn't?

      [1] On x86 Linux, it's just a quick call into the vDSO that reads the TSC and some calibration data; a dozen cycles or so.

      • jerf 12 days ago
        Note the end of the article acknowledges this, so this is clearly a deliberate part of the constructed example to make a particular point and not an oversight by the author. But it is helpful to highlight this point, since it is certainly a live mistake I've seen in real code before. It's an interesting test of how rich one's cost model for running code is.
      • marcosdumay 12 days ago
        The stream-reader userspace libraries are very well optimized for handling exactly the kind of "dumb" usage that would otherwise create obvious problems. (That's one of the reasons Linux expects you to use glibc instead of making syscalls directly.)

        But I imagine the time-reading ones aren't as heavily optimized; people normally don't call them all the time.

        • saagarjha 12 days ago
          They look very similar on macOS.
  • Shrezzing 12 days ago
    This is almost certainly intentional, and is very similar to the way web browsers mitigate the Spectre vulnerability[1]. Your processor (almost certainly) does some branch prediction to improve efficiency. If an application developer reliably knows the exact time, they can craft an application which jumps to another application's execution path, granting them complete access to its internal workings.

    To mitigate this threat, javascript engine developers simply added a random fuzzy delay to all of the precision timing techniques. Swift's large volume of calls to unrequired methods is, almost certainly, Apple's implementation of this mitigation.

    [1] https://en.wikipedia.org/wiki/Spectre_(security_vulnerabilit...

    • saagarjha 12 days ago
      This is not true in the slightest, and I feel that you might be misunderstanding how these attacks work. Spectre does not allow you to control execution of another process. It does not touch any architecturally visible state; it works via side channels. This means all it can do is leak information. The mitigation for Spectre in the browser adds a fuzzy delay (which is not considered to be very strong, fwiw). Just making a slower timer does not actually mitigate anything. And if you look at the code (it's all open source!) you can see that none of it deals with this mitigation, it's all just normal stuff that adds overhead. These attacks are powerful but they are not magic where knowing the exact time gives you voodoo access to everything.
    • lxgr 12 days ago
      Nothing prevents applications from just calling the underlying methods mentioned in the article, so that can’t be it. The author even benchmarked these!
      • Someone 12 days ago
        Nothing? FTA: “The downside to calling mach_absolute_time directly, though, is that it’s on Apple’s “naughty” list – apparently it’s been abused for device fingerprinting, so Apple require you to beg for special permission if you want to use it”
        • sgerenser 12 days ago
          All the other methods "above" mach_absolute_time are still allowed though, including clock_gettime_nsec_np, which is only ~2x slower than mach_absolute_time, while the Swift clock is ~40x slower. I don't see how an intentional slowdown for fingerprinting protection could be the cause.
        • cvwright 12 days ago
          All of the new privacy declarations are silly, but this one is especially ridiculous.

          I'm pretty sure I can trigger a hit to the naughty API just by updating a @Published var in an ObservableObject. For those unfamiliar with SwiftUI, this is the most basic way to tell the system that your model data has changed and thus the view needs to re-render. Pretty much every non-trivial SwiftUI app will need to do this.
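
          For the unfamiliar, a minimal sketch of that pattern (plain SwiftUI, nothing exotic):

            import SwiftUI

            final class Model: ObservableObject {
                @Published var count = 0      // mutating this publishes a change
            }

            struct CounterView: View {
                @ObservedObject var model: Model

                var body: some View {
                    Button("Count: \(model.count)") {
                        model.count += 1      // plain model update, no timing API in sight
                    }
                }
            }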

        • vlovich123 12 days ago
          But Date and clock_gettime are still accessible, with not much more overhead than the Mach API call. Additionally, as I mention in another comment, this would have to be about Meltdown, not Spectre, and Meltdown is mitigated in the kernel through other techniques without sacrificing timers.
        • asow92 12 days ago
          It isn't difficult to be granted this permission. All an app needs to do is supply a reason defined in https://developer.apple.com/documentation/bundleresources/pr... as to why the API is being used, in the app's bundled PrivacyInfo.xcprivacy file - and that reason could be disingenuous.
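
          For reference, a sketch of what that declaration looks like in PrivacyInfo.xcprivacy (the category and the "measure elapsed time between events" reason code 35F9.1 are from Apple's published list; double-check against the current docs):

            <key>NSPrivacyAccessedAPITypes</key>
            <array>
              <dict>
                <key>NSPrivacyAccessedAPIType</key>
                <string>NSPrivacyAccessedAPICategorySystemBootTime</string>
                <key>NSPrivacyAccessedAPITypeReasons</key>
                <array>
                  <string>35F9.1</string>
                </array>
              </dict>
            </array>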
          • darby_eight 12 days ago
            It may not be difficult, but it's an additional layer of requirement. Defense in depth baby!
            • Someone 12 days ago
              In addition, if you get caught lying about this, your app may be nuked and your developer account terminated. May not be a big hurdle, but definitely can hurt if you have many users.
        • jedilord 12 days ago
          [dead]
    • vlovich123 12 days ago
      This would have to be about Meltdown, not Spectre. Spectre is in-process and Meltdown is cross-process; an in-process attack would be pointless for a language like Swift.

      And it’s a weird mitigation because Meltdown afaik has been mitigated on other OSes without banning high-res timers.

      The nail in the coffin for the security theory is that Date and clock_get_time are accessible and an order of magnitude faster.

      This looks more like a case of poorly profiled abstraction layers adding features without measuring the performance.

    • fathyb 12 days ago
      If this was intentional, shouldn't it also affect `mach_absolute_time` which is used by the standard libraries of most languages and accessible to Swift?

      Also note you can get precise JavaScript measurements (and threading, eg. using pthreads and Emscripten) by adding some headers: https://developer.mozilla.org/en-US/docs/Web/API/Window/cros...

      • Shrezzing 12 days ago
        > Also note you can get precise JavaScript measurements (and threading) by adding some headers

        Though you can access these techniques now, in the weeks after Spectre attacks were discovered, the browsers all consolidated on "make timing less accurate across the board" as an immediate-term fix[1]. All browsers now give automatic access to imprecise timing by default, but have some technique to opt-in for near-precise timing.

        Similarly, Swift has SuspendingClock and ContinuousClock, which you can use without informing Apple. Meanwhile, mach_absolute_time and similarly precise timing methods require developers to disclose the reasons for their use before Apple will approve the app for the store[2].

        [1] https://blog.mozilla.org/security/2018/01/03/mitigations-lan...

        [2] https://developer.apple.com/documentation/kernel/1462446-mac...

        • fathyb 12 days ago
          That makes a lot of sense, thank you!
          • vlovich123 12 days ago
            No it doesn’t. Higher-performance APIs like Date and clock_gettime are still available, not specially privileged, and 40x faster. This looks pretty clearly like a bug.

            Spectre mitigations would also be really silly here because, as a Swift app, you already have full access to all in-process memory. It would have to be about Meltdown, but Meltdown is prevented through other techniques.

    • beeboobaa3 12 days ago
      Have to protect those pesky application developers from knowing the time so they can write correct software.

      It makes sense for a web browser. Not for something like Swift.

      • vlovich123 12 days ago
        No, this is pretty clearly just a bug / poor design. Mistakes happen.
        • beeboobaa3 12 days ago
          Probably but I'm just responding to GP who implied that Apple, in all its infinite wisdom, did this on purpose.
    • stefan_ 12 days ago
      Literally one page into the article there is a full stack trace that makes it abundantly clear no such thing is going on; they just added a bunch of overhead.

      That's despite OSX having a vDSO-style mechanism for it: https://github.com/opensource-apple/xnu/blob/master/libsysca...

    • Veserv 12 days ago
      No, that is nonsense.

      A competent organization would not make the function call take longer by a random amount of time. You would just do it normally then add the random fudge factor to the normal result. That is not only more efficient, it also allows more fine-tuned control, the randomization is much more stable, and it is just plain easier to implement.
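
      i.e. something like this sketch, where the clock is read at full speed and the fudge is applied to the result (the quantization and jitter bounds are made up for illustration):

        import Darwin

        func fuzzedNanos() -> UInt64 {
            let t = clock_gettime_nsec_np(CLOCK_MONOTONIC_RAW)  // fast read
            let fuzz = UInt64.random(in: 0..<1_000)             // made-up 1µs jitter bound
            return (t / 1_000) * 1_000 &+ fuzz                  // quantize, then add the fudge factor
        }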

      Though I guess I should not put it past them to do something incompetent given that they either implemented their native clocks poorly as the article says, or they incompetently implemented a Spectre mitigation as you theorize.

  • jepler 12 days ago
    I was curious how linux's clock_gettime compared. I wrote a simple program that tried all the clock types documented in my manpage: https://gist.github.com/jepler/e37be8fc27d6fb77eb6e9746014db...

    My two handy systems were an i5-1235U running 6.1.0-20-amd64 and a Ryzen 7 3700X also running 6.1.0-20-amd64. The fastest method was 3.7ns/call on the i5 and 4ns/call on the Ryzen (REALTIME_COARSE and MONOTONIC_COARSE were about the same). If a "non-coarse" timestamp is required, the time increases to about 20ns/call on the Ryzen, 12ns on the i5 (REALTIME, TAI, MONOTONIC, BOOTTIME).

    On the i5, if I force the benchmark to run on an efficiency core with taskset, times increase to 6.4ns and 19ns.

    • jeffbee 12 days ago
      You can knock almost a third off that fastest time by building with `-static`. In something that is completely trivial like reading the clock via vDSO the indirect call overhead of dynamic libc linking becomes huge. `-static` eliminates one level of indirect calls. The indirect vDSO call remains, though.

        % ./clocktest | rg MONOTONIC_COARSE
        MONOTONIC_COARSE     :    2.2ns percall
  • rfmoz 12 days ago
    OSX clock_gettime() [0] offers CLOCK_MONOTONIC and CLOCK_MONOTONIC_RAW, but not CLOCK_UPTIME, only CLOCK_UPTIME_RAW.

    Maybe someone knows why? On FreeBSD it is available [2].

    [0]: https://www.manpagez.com/man/3/clock_gettime_nsec_np/

    [2]: https://man.freebsd.org/cgi/man.cgi?query=clock_gettime

  • tialaramex 12 days ago
    In the Swift library documentation itself, hopefully a Swift person can tell me: What is the significance of the list of Apple platforms given? For example the Clock protocol shows iOS 16.0+ among others.

    I can imagine that e.g. ContinuousClock is platform specific - any particular system may or may not be able to present a clock which exhibits change despite being asleep for a while and so to the extent Apple claim Swift isn't an Apple-only language, ContinuousClock might nevertheless have platform requirements.

    But the protocol itself seems free of such a constraint. I could write this protocol down for some arbitrary hardware that has no concept of time; I couldn't implement it, but I could easily declare it. And yet here it is, iOS 16.0+ anyway.

    • pdpi 12 days ago
      According to their changelog[0], Clock was added to the standard library with Swift 5.7, which shipped in 2022, at the same time as iOS 16. It looks like static linking by default was approved[1] but development stalled[2].

      I expect it's as simple as that: it's supported on iOS 16+ because the standard library is dynamically linked by default, against a system-wide copy. You can probably try statically linking a newer version on older OS releases, or maybe ship a newer copy of the standard library and dynamically link against that, but I have no idea how well those paths are supported. (See the availability sketch below.)

      0. https://github.com/apple/swift/blob/main/CHANGELOG.md

      1. https://github.com/apple/swift-evolution/blob/main/proposals...

      2. https://github.com/apple/swift-package-manager/pull/3905
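
      That dynamic linking is also why, with a deployment target below iOS 16, the compiler makes you gate any Clock usage behind an availability check; a minimal sketch:

        if #available(iOS 16.0, macOS 13.0, *) {
            let clock = ContinuousClock()
            print(clock.now)
        } else {
            // fall back to Date() or mach_absolute_time() on older OSes
        }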

    • rockbruno 12 days ago
      The standard library stopped being bundled with Swift apps when ABI stability was achieved. They are now provided as dynamic libraries alongside OS releases, so you can only use Swift features that match the library version for a particular OS version.
      • beeboobaa3 12 days ago
        Yikes. So after bundling their development tools with their operating system they are now also bundling some language's stdlib with the operating system? Gotta get them fingers in all of the pies, I guess.
        • MBCook 12 days ago
          > they are now also bundling some language's stdlib with the operating system

          Much like libc, isn’t it? Apple writes tons of their own software in Swift and the number keeps going up; they’re trying to move more and more of the system over to it. It’s going to be loaded on every system whether a user uses it or not.

          No different from the Objective-C runtime.

          • jackjeff 12 days ago
            Absolutely agree.

            On Windows the equivalent would be MSVCRT, which is sometimes shipped with the OS and sometimes not (depending on the versions involved). Sometimes you even need to worry about the CRT dependency with higher-level languages, because their “standard libraries” depend on the CRT.

            So if you see that being installed with Java or C# or Unity, now you know why.

        • KerrAvon 12 days ago
          Every Unix and most Unixlikes have always done this. It’s standard practice in that world.
          • beeboobaa3 12 days ago
            Which distro ships the Go standard library?

            Also, unixes let the sysadmin install additional libraries. How do I `apt install libswift2` on an iPhone?

    • Daedren 12 days ago
      Apple just doesn't backport APIs; it's a very, very rare occurrence when it happens. It was introduced last year alongside iOS 16, so you need the latest OS. That's the norm, really.
      • tialaramex 12 days ago
        I guess maybe I didn't explain myself well. Swift is supposedly a cross-platform language. This "protocol", unlike the specific clocks, certainly seems like something you could equally well provide on, say, Linux. But it's documented as requiring (among others) iOS 16.0+.

        Maybe there's a separate view onto this documentation if you care about non-Apple platforms? Or maybe there's just an entirely different standard library for everybody else?

        • lukeh 12 days ago
          Same standard library (Foundation has some differences, that's another story). But the documentation on Apple's website only covers their own platforms.
  • xyst 12 days ago
    Post mentions this Apple doc, https://developer.apple.com/documentation/kernel/1462446-mac..., which states it can potentially be used to fingerprint a device?

    How can this API be used to fingerprint devices? It’s just getting the current time.

    My best guess: you can infer a user's time zone, and thus a very general/broad area of where the user lives (USA vs EU, or US-EST vs US-PST).

    Maybe I should just set my time to UTC on all devices

    • twoodfin 12 days ago
      The problem is it’s getting the current time with relatively high precision, which is the same reason a developer would prefer it for non-nefarious uses.

      Once you have a high-precision timer, there are all sorts of aspects of the user’s system you can fingerprint by measuring how long some particular API dependent on device performance and/or state takes to execute.

      Platform vendors long ago figured out not to hand out the list of available fonts, but it’s a couple orders of magnitude harder to be sure switching some text from Menlo to Helvetica doesn’t leak a fractional bit of information via device-dependent timing.

      EDIT: Others noted it’s actually ticks since startup, which is probably good for a few bits all on its own if you are tracking users in close to real time.

    • lapcat 12 days ago
      mach_absolute_time is unrelated to clock time. It's basically the number of CPU cycles since last boot, so it's more of an uptime measure.

      I suspect the fingerprinting aspect is more indirect: mach_absolute_time is the most accurate way to measure small differences, so if you're trying to measure subtle differences in performance between different devices on some specific task, mach_absolute_time would be the way to go.
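
      For the curious, the usual pattern for turning those ticks into nanoseconds (standard Mach calls; a sketch, not code from the article):

        import Darwin

        var timebase = mach_timebase_info_data_t()
        mach_timebase_info(&timebase)  // ticks-to-nanoseconds ratio for this machine

        let start = mach_absolute_time()
        // ... work being measured ...
        let ticks = mach_absolute_time() - start
        print("\(ticks * UInt64(timebase.numer) / UInt64(timebase.denom)) ns")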

      • VogonPoetry 12 days ago
        Consider N devices behind a NAT. They all make requests to a service.

        If the service can learn each device's current value of mach_absolute_time, then after as few as two requests it can likely compute N and distinguish between the devices making requests.

        This is possible because devices never reboot at exactly the same time.

      • MBCook 12 days ago
        > It's basically the number of CPU cycles since last boot, so it's more of an uptime measure

        And there’s the problem. Different devices have different uptimes. If you can get not only the uptime but a very very accurate version, you’ve got a very strong fingerprint.

      • hi-v-rocknroll 12 days ago
        There's also a Ruby gem that uses it for better-than-Benchmark high-precision timing.

        https://github.com/bwbuchanan/absolute_time/blob/master/ext/...

        https://rubygems.org/gems/absolute_time

        Yes you want this in a development environment or perhaps in some sort of macOS app for developers, but probably not in most apps without a very specific and limited need.

      • thealistra 12 days ago
        Yeah this is correct. Other comments seem misinformed.

        You can fingerprint a device using this because you know the wall-clock difference and the previously observed CPU-cycle count, so you can assume any device with appropriately more CPU cycles may be the same device.

        We’re talking about measurements taken from different apps using the Google or Facebook SDK.

    • interpol_p 12 days ago
      My understanding is this gets something like the system uptime? (I may be reading the docs wrong).

      In which case, it could be used as one of many signals in fingerprinting a device, as you could distinguish a returning user by checking their uptime against the time delta since the uptime at their last visit. It's not perfect, but when combined with other signals, might be helpful

    • simcop2387 12 days ago
      The same way that you can do it from javascript I'd imagine.

      Time zones and such are one data point, but clock skew and accuracy can help you differentiate users too.

      https://forums.whonix.org/t/javascript-time-fingerprinting/7...

    • singron 12 days ago
      The offset from epoch time is probably unique per device per boot, and it only drifts one second per second while the device is suspended.

      You can get the time zone from less naughty APIs, and that has way fewer bits of entropy.

  • simscitizen 12 days ago
    Just use clock_gettime with whatever clock you want. There’s also an np (non-POSIX) suffixed variant that returns the timestamp in nanoseconds.
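
    From Swift, for example (Darwin exposes both; the _np variant returns a UInt64 of nanoseconds directly):

      import Darwin

      let t0 = clock_gettime_nsec_np(CLOCK_MONOTONIC_RAW)  // ns since boot, no timespec needed
      // ... work ...
      let t1 = clock_gettime_nsec_np(CLOCK_MONOTONIC_RAW)
      print("elapsed: \(t1 - t0) ns")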
  • koenneker 12 days ago
    Might this be a hamfisted reaction to timing attacks?
  • adsharma 12 days ago
    clock_gettime_nsec_np() seems interesting in that it returns a u64.

    I proposed something similar for Linux circa 2012. The patch got lost in some other unrelated discussion and I didn't pursue it.

    struct timeval is a holdover from the 32-bit era. Now that everyone is using 64-bit machines, we should be able to get this data by reading one u64 from a shared page.

  • feverzsj 12 days ago
    I knew Swift had poor performance, but I didn't expect them to have done it on purpose.
  • foolswisdom 12 days ago
    > we’re talking a mere 19 to 30 nanoseconds to get the time elapsed since a reference date and compare it to a threshold.

    The table shows 19 or 30 milliseconds for Date / NSDate. Or am I misunderstanding something?

    • taspeotis 12 days ago
      Divide by a million
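
      (e.g. the 19 ms Date row is per million iterations: 19 ms ÷ 1,000,000 = 19 ns per call.)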
    • Medea 12 days ago
      "showing the median runtime of the benchmark, which is a million iterations of checking the time"
  • beeboobaa3 12 days ago
    Still hilarious how Apple goes about their "system security".

    Instead of actually implementing security in the kernel, they just kinda prevent you from distributing an app that may call the offending functionality. That way they can still let their buddies abuse it without appearing too biased (by e.g. having a whitelist on the device).

    This technical failing probably partially explains why they are so against allowing sideloading. That, and they're scared of losing their cash cow, of course.

    • asveikau 12 days ago
      The hilarious thing is how people justify Apple's bugs with a security concern.

      Just squinting at the stack trace from the article, my intuition is that someone at Apple added a bunch of nice-looking object-oriented stuff without regard for overhead. So a call to get a single integer from the kernel, namely the time, results in lots of objects being created on the heap and tons of "validation" going on. Then somebody on Hacker News says this is all for your own good.

    • NotPractical 12 days ago
      > This technical failing probably, partially, explains why they are so against allowing sideloading.

      This occurred to me the other day. I've always laughed at the idea that Apple blocks sideloading for security purposes, but if the first line of defense is and always has been security through obscurity + manual App Store review (>= 2.0) on iOS, it's very possible that sideloading could cause problems. iOS didn't even have an App Store in release 1.0, meanwhile the Android security model has taken into account sideloaded apps since the very beginning [1]:

      > Android is designed to be open. [...] Securing an open platform requires a strong security architecture and rigorous security programs. Android was designed with multilayered security that's flexible enough to support an open platform while still protecting all users of the platform.

      [1] https://source.android.com/docs/security/overview

      Edit: Language revised to clarify that I'm poking fun at the idea, not at anyone who believes it.

      • GeekyBear 12 days ago
        > the Android security model has taken into account sideloaded apps since the very beginning

        Counterpoint: tech websites have literally warned users that they need to be wary of installing apps from inside Google's walled garden.

        > With malicious apps infiltrating Play on a regular, often weekly, basis, there’s currently little indication the malicious Android app scourge will be abated. That means it’s up to individual end users to steer clear of apps like Joker. The best advice is to be extremely conservative in the apps that get installed in the first place. A good guiding principle is to choose apps that serve a true purpose and, when possible, choose developers who are known entities. Installed apps that haven’t been used in the past month should be removed unless there’s a good reason to keep them around

        https://arstechnica.com/information-technology/2020/09/joker...

        "You should not trust apps from inside the walled garden" is not a sign of a superior security model.

        • NotPractical 12 days ago
          > Counterpoint: tech websites have literally warned users that they need to be wary of installing apps from inside Google's walled garden.

          This is not a counterpoint to what I was saying. I'm talking about sideloaded apps, not apps from Google Play. I agree that Google should work to improve their app vetting process, but that's a separate issue entirely, and one I'm not personally interested in.

          • GeekyBear 12 days ago
            If your security model is so weak that you can't keep malware out of the inside of your walled garden, the situation certainly isn't going to improve after you remove the Play store's app vetting process as a factor.
            • NotPractical 12 days ago
              I avoided making a claim regarding the relative "security level" of Android vs. iOS because it's not easy to precisely define what that means. All I was saying was that Android's security model explicitly accommodates openness. If your standard for a "strong" security model excludes openness entirely, that's fair I suppose, but I personally find it unacceptable. Supposing we keep openness as a factor for its own sake, I'm not sure how you can improve much on Android's model.

              This discussion seems to be headed in an ideological direction rather than a technical one, and I'm not very interested in that.

              • GeekyBear 12 days ago
                If your point of view is that you value the ability to execute code from random places on the internet more than security, perhaps that is the point you should have been making from the start.

                However, iOS makes the security trade off in the other direction.

                All an app's executable code must go through the app vetting process, and additional executable code cannot be added to the app without the app going through the app vetting process all over again.

                In contrast, Google has been unable to quash malware like Joker from inside the Play store because the malware gets downloaded and installed after the app makes it through the app vetting process and lands on a user's device.

                > Known as Joker, this family of malicious apps has been attacking Android users since late 2016 and more recently has become one of the most common Android threats...

                > One of the keys to Joker’s success is its roundabout attack. The apps are knockoffs of legitimate apps and, when downloaded from Play or a different market, contain no malicious code other than a “dropper.” After a delay of hours or even days, the dropper, which is heavily obfuscated and contains just a few lines of code, downloads a malicious component and drops it into the app.

                https://arstechnica.com/information-technology/2020/09/joker...

                iOS not having constant issues with malware like Joker inside their app store has nothing to do with "security through obscurity" and everything to do with making a different set of trade offs when setting up the security model.

                • beeboobaa3 11 days ago
                  All of this malicious code still requires the user to grant the permissions, or to exploit bugs in the operating system. Same as iOS, and infinitely better than Mac, Windows, and Linux.

                  Apple might pretend they are secure because they usually manage to catch such things during review. That doesn't actually mean they are secure.

                  At the end of the day, it's up to the user to choose what software they install and what permissions they grant.

                  If your security model includes taking all user choice away, forbidding them from running software that they wish to run, and essentially treating them like unsophisticated toddlers who need your guidance because you know best, then sure, you might view this as a problem. But at that point, you are the problem.

                  • GeekyBear 10 days ago
                    > apple might pretend they are secure because they usually manage to catch such things during review. this doesn't actually mean they are secure.

                    The easiest way to see that Apple's security model is more robust is that tech websites don't have to warn users that they should fear the apps from inside the app store.

                    • NotPractical 8 days ago
                      There's little evidence that this isn't simply because Apple is better at policing their store. It probably also helps that an Apple developer license costs $99/year, whereas Google Play has a one-time $25 fee. Keep in mind that the Play Store is just one of Google's many endeavors, whereas the iPhone is Apple's premier product, and as such, one of their top priorities.

                      Regarding dynamic native code execution, please see saagarjha's comment and my reply.

                      > Apple's security model

                      It would be more accurate to say "the iOS security model", because as beeboobaa3 mentioned, macOS fully allows apps from outside the App Store, dynamic native code execution, and most other "insecure" things that are blocked on iOS.

                • saagarjha 12 days ago
                  Downloading executable code is irrelevant; it’s easy to alter app behavior dynamically on either platform.
                  • NotPractical 8 days ago
                    I agree. But it's at least worth noting that Google has taken steps towards blocking this as well, and it will likely be fully blocked in a future release of Android. From what I understand, it's partially to protect apps from themselves rather than the OS from apps, however. Additionally, it breaks legitimate apps like Termux, which many Android users see as a major regression. I personally think it's just another example of Apple-ish security theater, but Google has been known to copy some unfortunate things from Apple in attempt to mirror their success (see also: headphone jack, SafetyNet). Regardless, it goes to show that Android security is still evolving and the referenced 2020 article likely doesn't reflect the current state of things.
      • saagarjha 12 days ago
        Android and iOS have largely the same threat model when it comes to platform security. That is, app review mostly does not exist and the OS itself must protect the user.
      • spacedcowboy 12 days ago
        I'm not claiming that Apple is perfect, but I think comparing to Android, in terms of malware, security updates, and privacy, it comes out looking pretty good.
        • realusername 12 days ago
          Both look pretty similar to me, both in terms of policies and outcome.

          While iOS has longer device support, it's also way less modular, and updates of system components typically take longer to reach users than on Android, so I'd say both have their issues there.

        • beeboobaa3 12 days ago
          Got some sources to cite, or is this the typical Apple fanboyism of "Android bad"?

          I've used Android for years and never run into any malware. I've also developed for Android and iOS. Writing malware is largely impossible due to the functional permission system; at the very least it's much, much harder than on other operating systems. Apple just pretends iOS is immune to malware because of the manual reviews and static analysis performed by the store. It's also why they're terrified of letting people ship their own interpreters, like JavaScript engines.

          • Aloisius 12 days ago
            A bit old but, https://www.pandasecurity.com/en/mediacenter/android-more-in...

            One might argue that Android is targeted more than iPhone because of its larger userbase, which certainly may contribute, but macOS, which has a fraction of the userbase, is more targeted than iOS - that makes the case that sideloading or lax app store reviews really are at least partly to blame.

            Given much of the malware seems to be apps that trick users into granting permissions by masquerading as a legitimate app or pirated software, it's not really too hard to believe that Apple's app store with their draconian review process and no sideloading might be a more difficult target.

            • beeboobaa3 12 days ago
              Obviously a strict walled garden keeps out bad actors. The question is: Is it worth it? I say no.

              People deserve to be trusted with the responsibility of making a choice. We are allowing everyone to buy power tools that can cause severe injuries when mishandled. No one blinks an eye. Just like we allow that to happen, we should allow people to use their devices in the way that they desire. If this means some malware can exist then I consider this to be acceptable.

              In the meantime, system security can always be improved.

              • Aloisius 12 days ago
                Yes, freedom to do what you want with your device is a great ideal.

                Yet I still don't want to have to fix my mom's phone because it's loaded with malware or, worse, because malware is draining her bank account.

      • threatofrain 12 days ago
        Is there a reputation of a security difference between Android and iOS? And in what direction does the badness lean?
        • beeboobaa3 12 days ago
          There is a reputation of Apple being more secure, but it's largely unfounded. It just looks that way because the ecosystem is completely locked down and software isn't allowed to exist without Apple's stamp of approval.
          • kbolino 12 days ago
            Apple drove genuine security improvements in mobile hardware well before Android, including dedicated security chips and encrypted storage. The gap has been closed for a few years now, though, so the reputation is not so much "unfounded" as "out of date".
            • beeboobaa3 12 days ago
              You're not talking about security that protects end users against malware. You're talking about "security" that protects the device against "tampering", i.e. the owner using it in a way Apple does not approve of.

              Apple's "security improvements" have always been about protecting their walled garden first and foremost.

              • kbolino 12 days ago
                A mobile device, in most users' hands:

                - Stores their security credentials for critical sites (banks, HR/payroll, stores, govt services, etc.)

                - Even if not, has unfettered access to their primary email account, which means it can autonomously initiate a password reset for nearly any site

                - Is their primary 2FA mechanism, which means it can autonomously confirm a password reset for nearly any site

                That's an immense amount of risk, both from apps running on the device, and from the device getting stolen. Both of the measures I mentioned are directly relevant to these kinds of threats. And, as I already said, Android has adopted these same security measures as well.

                • beeboobaa3 12 days ago
                  So the same as any computer since online banking and email were invented. This isn't some new development. You should stop trying to nanny people.
                  • kbolino 12 days ago
                    I have no idea what you are trying to say in the context of the thread. Hardware security is important for all of that and security measures have to evolve over time.
              • fingerlocks 12 days ago
                This just isn’t true. We have multiple bricked Android devices from bootloader-infecting malware downloaded directly from the Play store. Nothing like that has ever happened on iOS.
                • beeboobaa3 12 days ago
                  The only thing this may prove is that Apple's app store review is more strict.
                  • fingerlocks 11 days ago
                    Yeah, along with the API entitlements. That’s how sandbox security works.
            • cyberax 12 days ago
              Like?
    • dang 12 days ago
      We detached this subthread from https://news.ycombinator.com/item?id=40274188.
  • diebeforei485 12 days ago
    It's required of native apps because native apps are full of APIs that collect and sell user data. That's why.
  • sholladay 12 days ago
    Maybe unrelated, but I’ve noticed that setting a timer on iOS drains my battery more than I would expect, and the phone gets warm after a while. It’s just bad enough that if the timer is longer than 15 minutes, I often use an alarm instead of a timer. Not something I’ve experienced on Android.
    • whywhywhywhy 12 days ago
      Lots of glitches in timers and alarms on recent iOS; sometimes they don't even fire. Extremely poor for something like a cooking timer: you check your phone, there just isn't a timer running anymore, and you're left wondering how far it's overshot.

      This is on a 15 Pro, so it's definitely not an old-phone-on-new-software issue.

    • saagarjha 12 days ago
      Have you tried profiling the device to see why?
  • loeg 12 days ago
    Wow, hundreds of milliseconds is a lot worse than I'd expect. I'm not shocked that it's slower than something like plain `rdtsc` (single digit nanoseconds?) but that excuses maybe microseconds of overhead -- not milliseconds and certainly not hundreds of milliseconds.
    • gok 12 days ago
      It's hundreds of milliseconds to do a million iterations. A single time check is hundreds of nanoseconds.
      • loeg 12 days ago
        Oh, thanks. The table was unlabeled and I missed that in the text.

        Hundreds of nanos isn't great but it's certainly better than milliseconds.

    • layer8 12 days ago
      That’s for a million iterations, so really nanoseconds.