Gemma 4 on iPhone

(apps.apple.com)

136 points | by janandonly 2 hours ago

15 comments

  • karimf 0 minutes ago
    This app is cool and it showcases some use cases, but it still undersells what the E2B model can do.

    I just made a real-time AI (audio/video in, voice out) on an M3 Pro with Gemma E2B. I posted it to /r/LocalLLaMA and it's gaining traction on Reddit right now [0]. Here's the repo [1].

    I'm running it on a MacBook instead of an iPhone, but based on the benchmark here [2], you should be able to run the same thing on an iPhone 17 Pro.

    [0] https://www.reddit.com/r/LocalLLaMA/comments/1sda3r6/realtim...

    [1] https://github.com/fikrikarim/parlor

    [2] https://huggingface.co/litert-community/gemma-4-E2B-it-liter...

  • pmarreck 1 hour ago
    Impressive model, for sure. I've been running it on my Mac, and now I get to have it locally on my iPhone? I need to test this. Wait, it does agent skills and mobile actions, all local to the phone? Whaaaat? (Have to check it out later! Anyone have any tips yet?)

    I don't normally do the whole "abliterated" thing (dealignment), but after discovering https://github.com/p-e-w/heretic , I couldn't resist trying it with this model a couple of days ago (I made a repo to make it easier, actually: https://github.com/pmarreck/gemma4-heretical ) and... wow. It worked. And not having a built-in nanny is fun!

    It's also possible to make an MLX version of it, which runs a little faster on Macs, but won't work through Ollama unfortunately. (LM Studio maybe.)

    Runs great on my M4 MacBook Pro w/ 128GB and likely also runs fine with 64GB... machines with less memory might need lower quantizations.

    I specifically like dealigned local models: if my thoughts get policed when I'm playing in someone else's playground, like hell am I going to be judged while messing around in my own local open-source one too. And there's a whole set of ethically justifiable but rule-flagging conversations (loosely categorizable as "sensitive", "ethically borderline but productive", or "violating sacred cows") that are now possible with this, at a level that simply wasn't possible before.

    Note: I tried to hook this one up to OpenClaw and ran into issues.

    To answer the obvious question: yes, this sort of thing enables bad actors too (as do many other tools). Fortunately, there are far more good actors out there, and bad actors don't follow the rules that good actors subject themselves to anyway.

    • c2k 1 hour ago
      I run mlx models with omlx[1] on my mac and it works really well.

      [1] https://github.com/jundot/omlx

    • barbazoo 35 minutes ago
      > And there's a whole set of ethically-justifiable but rule-flagging conversations (loosely categorizable as things like "sensitive", "ethically-borderline-but-productive" or "violating sacred cows") that are now possible with this, and at a level never before possible until now.

      I checked the abliterate script and I don't yet understand what it does or what the result is. What are the conversations this enables?

      • spijdar 13 minutes ago
        Realistically, a lot of people do this for porn.

        In my experience, though, it's necessary to do anything security related. Interestingly, the big models have fewer refusals for me when I ask e.g. "in <X> situation, how do you exploit <Y>?", but local models will frequently flat out refuse, unless the model has been abliterated.

      • throwuxiytayq 23 minutes ago
        The in-ter-net is for porn
    • magospietato 53 minutes ago
      Haven't built anything on the agent skills platform yet, but it's pretty cool imo.

      On Android the sandbox loads an index.html into a WebView, with standardized string I/O to the harness via some window properties. You can even return a rendered HTML page.

      Definitely hacked together, but feels like an indication of what an edge compute agentic sandbox might look like in future.
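
      A skill under that model would reduce to a plain string-in/string-out function that the harness wires up through the WebView. A minimal sketch, assuming hypothetical names (`runSkill`, `window.skillInput`, `window.skillOutput` are my inventions, not the real harness API):

```typescript
// Hypothetical agent-skill entry point, based on the description above:
// plain string I/O so the harness can marshal data across the WebView
// boundary. All names here are illustrative assumptions.
export function runSkill(input: string): string {
  const request = JSON.parse(input) as { query?: string };
  const query = request.query ?? "nothing";
  // The sandbox reportedly allows returning a rendered HTML fragment:
  return `<p>Result for <b>${query}</b></p>`;
}

// Guessed wiring: the harness sets the input on a window property and
// reads the skill's output back the same way.
declare global {
  interface Window { skillInput?: string; skillOutput?: string }
}
if (typeof window !== "undefined" && window.skillInput !== undefined) {
  window.skillOutput = runSkill(window.skillInput);
}
```

      Keeping the core logic a pure function like this also makes it trivially testable outside the WebView.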

    • jackp96 1 hour ago
      [flagged]
      • potsandpans 54 minutes ago
        I'm tired of this concern trolling.
        • jackp96 14 minutes ago
          People are allowed to disagree with you, mate. This is a real concern that will affect real people's lives. I'm all for freedom, but that doesn't mean we ought to let just anyone own a nuke.

          That said, show me where I'm wrong. I'd love to change my mind on this.

  • janandonly 15 minutes ago
    OP here. It is my firm belief that the only realistic use of AI in the future is either locally on-device for almost free, or in the cloud but way more expensive than it is today.

    The latter option will only be used for tasks where humans are more expensive or much slower.

    This Gemma 4 model gives me hope for a future Siri (or similar) with iPhone and macOS integration, "Her" (as in the movie) style.

    • 0dayman 2 minutes ago
      this is not the first step towards your dream
    • kennywinker 7 minutes ago
      Did you really watch “Her” and think this is a future that should happen??

      Seriously????

      • zozbot234 1 minute ago
        Nah he just read Saltman's tweets that said the exact same thing.
  • PullJosh 1 hour ago
    This is awesome!

    1) I am able to run the model on my iPhone and get good results. Not as good as Gemini in the cloud, but good.

    2) I love the “mobile actions” tool calls that allow the LLM to turn on the flashlight, open maps, etc. It would be fun if they added Siri Shortcuts support. I want the personal automation that Apple promised but never delivered.

    3) I am so excited for local models to be normalized. I build little apps for teachers and there are stringent privacy laws involved that mean I strongly prefer writing code that runs fully client-side when possible. When I develop apps and websites, I want easy API access to on-device models for free. I know it sort of exists on iOS and Chrome right now, but as far as I’m aware it’s not particularly good yet.
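
    The "mobile actions" flow in point 2 presumably boils down to the model emitting structured tool calls that the app maps onto platform intents. A sketch under assumed names (the JSON shape and action names here are made up, not the app's real schema):

```typescript
// Hypothetical tool-call dispatcher for "mobile actions". The action
// names and payload shapes are illustrative assumptions.
type ToolCall =
  | { action: "flashlight"; on: boolean }
  | { action: "open_maps"; query: string };

// Map one model-emitted call to a device intent description.
// A real app would invoke platform APIs here instead.
export function dispatch(call: ToolCall): string {
  switch (call.action) {
    case "flashlight":
      return call.on ? "torch: on" : "torch: off";
    case "open_maps":
      return `maps: ${call.query}`;
  }
}

// The model's response would carry its tool calls as JSON text:
export function handleModelOutput(json: string): string[] {
  return (JSON.parse(json) as ToolCall[]).map(dispatch);
}
```

    Adding Siri Shortcuts support would then just mean adding one more variant to the union and one more case to the dispatcher.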

  • rickdg 1 minute ago
    How do these compare to Apple's Foundation Models, btw?
  • __natty__ 2 minutes ago
    That's a great project! I just wondered whether Google would have a problem with you using their trademark
  • burnto 10 minutes ago
    My iPhone 13 can’t run most of these models. A decent local LLM is one of the few reasons I can imagine actually upgrading earlier than typically necessary.
  • deckar01 5 minutes ago
    It doesn’t render Markdown or LaTeX. The scrolling is also unusable during generation.
  • TGower 42 minutes ago
    These new models are very impressive. There should be a massive speedup coming as well: AI Edge Gallery runs on the GPU, but the NPUs in recent high-end processors should be much faster. The A16 chip, for example (MacBook Neo and iPhone 16 series), has 35 TOPS of Neural Engine vs. 7 TFLOPS of GPU. Similar story for Qualcomm.
    • api 36 minutes ago
      That’s nuts actually for such a low power chip. Can’t wait to see the M series version of that.

      I’m sure very fast TPUs in desktops and phones are coming.

      • zozbot234 22 minutes ago
        The Apple Silicon in the MacBook Neo is effectively a slimmed-down version of the M4, which is already out and has a very similar NPU (similar TFLOPS rating). It's worth noting, however, that the TFLOPS rating for the Apple Neural Engine is somewhat artificial, since e.g. the "38 TFLOPS" of the M4 ANE are really 19 TFLOPS for FP16-only operation.
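
        The halving zozbot234 describes follows from vendors quoting NPU throughput in INT8 ops, which this kind of hardware typically runs at twice the FP16 rate (the 2:1 ratio is a common convention, assumed here for illustration):

```typescript
// Convert a marketing INT8 TOPS figure to effective FP16 TFLOPS,
// assuming the common 2:1 INT8-to-FP16 throughput ratio.
export function fp16TflopsFromInt8Tops(int8Tops: number): number {
  return int8Tops / 2;
}
```

        So a quoted "38 TFLOPS" works out to 19 TFLOPS at FP16, matching the figures in the comment above.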
  • dwa3592 14 minutes ago
    I think with this, Google starts a new race: best local model that runs on phones.
    • dwa3592 2 minutes ago
      I wonder why the cutoff date for 3n-E4B-it is Oct 2023. That's really far in the past.
  • hadrien01 1 hour ago
    Is it just me, or does the App Store website look... fake? The text in the header ("Productiviteit", "Alleen voor iPhone") looks pixelated, like it was edited in Paint; the header background flickers; the app icon and screenshots are very low quality; and the title of the website is incomplete ("App Store voor iPho...").
    • giarc 1 hour ago
      It's the Dutch version; see /nl/ in the URL.

      If you just go to https://apps.apple.com/ it does look better, but I agree, still a bit "off".

    • throwatdem12311 1 hour ago
      Issues caused by a low-effort localization?

      On my iPhone it opens on the App Store app, so it looks fine to me.

    • piperswe 1 hour ago
      What browser are you using? I don't see any of this behavior on Firefox...
      • hadrien01 1 hour ago
        Firefox on Windows, but it looks about the same in Edge

        Screenshot of the header: https://i.imgur.com/4abfGYF.png

        • t-sauer 39 minutes ago
          Renders equally weird for me in Firefox on Windows 11. Firefox on macOS looks good, though.

          Edit: Seems like mix-blend-mode: plus-lighter is bugged in Firefox on Windows https://jsfiddle.net/bjg24hk9/

        • morpheuskafka 53 minutes ago
          It looks like there is some sort of glow effect on the text that isn't rendering right on your browser? It arguably doesn't have the best contrast, but seems to be as intended in Safari 26.3. Looks similar on Chrome macOS too: https://imgur.com/yq5PrKm.
    • j0hax 1 hour ago
      Everything renders crystal clear with Firefox on GrapheneOS.
    • ezfe 1 hour ago
      Nothing weird on my side
  • carbocation 52 minutes ago
    It would be very helpful if the chat logs could (optionally) be retained.
  • darshil2023 1 hour ago
    [dead]