Apple's On-Device and Server Foundation Models

(machinelearning.apple.com)

941 points | by 2bit 14 days ago

43 comments

  • rishabhjain1198 14 days ago
    For people interested in AI research, there's nothing new here.

    IMO they should do a better job of referencing existing papers and techniques. The way they wrote about "adaptors" can make it seem like it's something novel, but it's actually just reiterating vanilla LoRA. It was enough to convince one of the top-voted Hacker News comments that this was a "huge development".
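
    For reference, the "adapter" mechanism in question is just a standard LoRA update: freeze the pretrained weight matrix and learn a low-rank correction on the side. A minimal sketch (toy dimensions and variable names of my own choosing, not Apple's implementation):

    ```python
    import numpy as np

    # LoRA: instead of fine-tuning the full d_out x d_in matrix W, train two
    # small factors B (d_out x r) and A (r x d_in), with rank r << d.
    d_in, d_out, rank, alpha = 64, 64, 4, 8

    rng = np.random.default_rng(0)
    W = rng.standard_normal((d_out, d_in))        # frozen pretrained weights
    A = rng.standard_normal((rank, d_in)) * 0.01  # trainable, small random init
    B = np.zeros((d_out, rank))                   # trainable, zero init

    def forward(x):
        # Base output plus the scaled low-rank correction.
        return W @ x + (alpha / rank) * (B @ (A @ x))

    x = rng.standard_normal(d_in)
    # Because B starts at zero, the adapter is a no-op until trained:
    assert np.allclose(forward(x), W @ x)

    # Trainable parameters: r*(d_in + d_out) instead of d_in*d_out.
    print(rank * (d_in + d_out), "vs", d_in * d_out)  # prints: 512 vs 4096
    ```

    The parameter-count line is the whole point: only the small A and B matrices are swapped per task, which matches what the article describes.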

    Benchmarks are nice though.

    • lolinder 14 days ago
      > For people interested in AI research, there's nothing new here.

      Was anyone expecting anything new?

      Apple has never been big on living at the cutting edge of technology exploring spaces that no one has explored before—from laptops to the iPhone to iPads to watches, every success they've had has come from taking tech that was already prototyped by many other companies and smoothing out the usability kinks to get it ready for the mainstream. Why would deep learning be different?

      • csvm 13 days ago
        Prototyping tech is one thing; making it a widely adopted success is another. For instance, Apple was the first to bring WiFi to laptops in 1999. Everyone laughed at them at the time. Who needs a wireless network when you can have a physical LAN, ey?
        • steve1977 13 days ago
          > For instance, Apple was the first to bring WiFi to laptops in 1999. Everyone laughed at them at the time. Who needs a wireless network when you can have a physical LAN, ey?

          From https://en.wikipedia.org/wiki/AirPort:

          "AirPort 802.11b card"

          "The original model, known as simply AirPort card, was a re-branded Lucent WaveLAN/Orinoco Gold PC card, in a modified housing that lacked the integrated antenna."

          • gumby 13 days ago
            That was also how Lucent’s access points worked.
            • collinmanderson 8 days ago
              It was pretty neat how you could take the PCMCIA card out of an AirPort or WaveLAN/Orinoco and stick it in a laptop and it would just work.
        • Matl 13 days ago
          On the other hand, people who laughed at them removing the 3.5mm jack can still safely laugh away.
          • throw0101d 13 days ago
            > On the other hand, people who laughed at them removing the 3.5mm jack can still safely laugh away.

            Then laugh at Samsung and their flagship line of phones as well, since they haven't had headphone jacks for a while now. "After Note 10 dumps headphone jack, Samsung ads mocking iPhone dongles disappear" (2019):

            * https://www.cnet.com/tech/mobile/after-removing-headphone-ja...

            "Samsung is hiding its ads that made fun of Apple's removal of headphone jack":

            * https://www.androidauthority.com/samsung-headphone-jack-ads-...

            • tivert 13 days ago
              >> On the other hand, people who laughed at them removing the 3.5mm jack can still safely laugh away.

              > Then laugh at Samsung and their flagship line of phones as well, since they haven't had headphone jacks for a while now. "After Note 10 dumps headphone jack, Samsung ads mocking iPhone dongles disappear" (2019):

              I totally do. One of the problems with Apple is the industry seems to mindlessly ape their good and bad decisions. Their marketing has been so good, many people just assume whatever they do must be the best way.

              • barbecue_sauce 13 days ago
                At the time I felt like Apple was getting rid of the 3.5mm jack as a potential bottleneck for future iPhone designs (as one of the limiting aspects of form factor), but there still doesn't seem to be anything design-wise to justify it, even several years later. It is very clear now that it was merely to encourage AirPod adoption.
                • user_7832 13 days ago
                  I would say this was obvious to the more cynical among us from the very beginning. Unless you are trying to go portless (for water resistance, perhaps?) or have a very thin phone, there are very few benefits to removing the jack… except to drive AirPod sales, of course.
                  • arvinsim 12 days ago
                    I draw the line at going fully portless. I would like to retain USB-C, thank you very much.
                  • WanderPanda 13 days ago
                    I mean, to go thinner than the 6/6s I can see the 3.5mm jack causing trouble. Part of me is still sad they went the other direction when it comes to iPhone thickness.
              • monooso 13 days ago
                > One of the problems with Apple is the industry seems to mindlessly ape their good and bad decisions...

                That's not a problem with Apple.

                • talldayo 13 days ago
                  It's more of a regulatory problem, under a certain light.
                  • josephg 13 days ago
                    Regulation that stops companies copying ideas from one another would be a disaster.
                    • talldayo 13 days ago
                      I'm more suggesting that bad decisions should be litigated against fast and early, so other companies aren't encouraged to follow in Apple's footsteps. If every company had their own Lightning connector, there would be no choice but to force them all to converge. The original sin is letting it happen in the first place.
                      • josephg 13 days ago
                        Who decides which ideas are good and bad? I assume you wouldn’t want regulators to have retroactively forced Apple to keep floppy disk drives in their computers. Or CD-ROM drives? It’s just the “obviously bad” ideas that should be banned, right?

                        Do you have a crystal ball that lets you know ahead of time which choices are good and bad? Even in retrospect I’m not sure Apple made the wrong choice with the lightning connector. It’s a better connector in just about every way than micro-usb, which was the only standard alternative at the time. Apple’s experience with lightning was rolled into the design process for usb-c, which as I understand it they were heavily involved in. USB-c might not exist as it does today without Apple’s experiments with the lightning connector.

                        Even if we pretend you’re better at picking winners and losers in the tech world than Apple and Samsung, do you think regulators are going to be as canny as you are with this stuff? US politicians don’t seem to understand the difference between Facebook and the internet. Are you sure you want them making tech choices on your behalf?

                        If you ask me, I think regulators would make a dog's breakfast of it all. If they were involved, we’d probably still have laws on the books mandating that all laptops have parallel ports or PCMCIA slots or something. The free market can sure take its time figuring this stuff out. But competition does, usually, lead to progress.

                      • throw0101d 13 days ago
                        > If every company had their own Lightning connector, there would be no choice but to force them all to converge.

                        You mean like forcing every phone maker to use the awesome connector known as micro USB-B?

                        * https://en.wikipedia.org/wiki/Common_external_power_supply

                      • monooso 13 days ago
                        I don't really see how you could litigate against dropping the headphone jack from the iPhone, which is the context of this thread.
              • deanishe 13 days ago
                > One of the problems with Apple is the industry seems to mindlessly ape their good and bad decisions.

                That's hardly Apple's fault, but it is so annoying. Hardly any of my cables have proper reinforcement sleeves any more.

                Apple started making awful cables that snapped at the plugs, and everybody else just copied them.

            • skeaker 13 days ago
              We absolutely do laugh at both already.
              • krick 13 days ago
                I would gladly laugh, but it's nearly impossible to buy a good phone now. TBH I don't care that much about my phone not having a 3.5mm jack (even when I need to use wired headphones, which is very rare now, I can use a USB adapter), but there are basically no phones without this stupid hole in the display, or with a good dedicated (not under-screen) fingerprint scanner (because who needs that when you can have face recognition, right?). All top-line phones are like $1500 now, but are still treated as disposable products that are naturally expected to be replaced every 2 years. Batteries are not removable, yet devices are not actually (reliably) waterproof.

                And maybe I'm wrong, but somehow it feels like each "improvement" like that was actually pioneered by Apple. In the dreamworld of free-market enthusiasts this should have made Apple bankrupt or the iPhone a very niche consumer device, but in the real world everything just became the iPhone. There are some rare exceptions, but these are either outright experimental and gimmicky (because being different is their identity), or just bottom-of-the-line products with these "intentional defects" that should make you choose the more expensive option.

                • lebed2045 10 days ago
                  It seems you're excluding many Android options that have a hidden selfie camera and a well-functioning fingerprint scanner, the ZTE Axon 20 5G being one example.
          • joshstrange 13 days ago
            This is such a tired talking point. Use a (lightning|USB-C)->3.5mm adapter or use bluetooth.
            • astrange 13 days ago
              And your experience for phone use cases will be better, because walking with wired headphones in gives you nasty telephonic effects (sound transmission along the cable) and they get tangled up.
              • bigstrat2003 13 days ago
                The two aren't mutually exclusive. Those who wish to use Bluetooth headphones can happily use them, while those who prefer wired can continue to use them. There's no reason smartphone manufacturers shouldn't support both.
                • astrange 13 days ago
                  They do, you can get the little dongle and it has superior audio quality to most audiophile DACs on the market.

                  But that 3.5mm port takes up a lot of room that could be used for more battery, backup antennas for when the user's hand is covering one of them, vibration motors etc.

                  • lebed2045 10 days ago
                    I'm pretty sure you could find justification even for why top models don't support microSD card expansion. The problem with this line of argument is that if they wanted to, they could support both without any issues. The real reason is money. It's more profitable to have Bluetooth only when you also make AirPods, and not include storage expansion when you sell built-in memory options at a 400% markup.
          • jb1991 13 days ago
            Interesting that you suggest laughing at their decision to remove the headphone jack, when it was actually just the first move in an industry-wide shift, with other companies following suit.
        • bildung 13 days ago
          Was that really the case? I remember they were mocked for e.g. offering wifi only, firewire only etc., while the respective removed alternatives were way more common.
          • blihp 13 days ago
            In the consumer space at least, WiFi was nowhere to be seen on a typical PC when Apple adopted it. Same with USB. So while it technically originated and existed elsewhere, there was no serious traction on it prior to Apple adoption.

            What you say is also true: many people weren't ready to ditch the old when Apple decided to deprecate it.

            • josephg 13 days ago
              This has been true for ages. They were the first to ditch the floppy disk drive and later cd drive in their computers. Both choices were very controversial at the time.
              • paulmd 13 days ago
                They were also the first to USB-C nirvana: they were shipping laptops with Thunderbolt in 2011, and moved to USB-C in 2016, giving you four full-capability ports at a time when most laptops had one at best. It took another 5 years before most laptops adopted at least one as standard, and premium laptops sometimes had two.

                (People look down on the move to USB-C, which I don’t quite get; everyone seems to fawn over USB-C in other contexts, but MacBooks, amirite!?!? Yes, it’s nice to have an HDMI port, but fundamentally, if you buy into the vision that USB-C does everything, yet you also want to use a bunch of legacy ports (vs. Thunderbolt video, Thunderbolt networking, etc.), then obviously you’re going to need dongles; people supposedly buy into that vision in other contexts. Apple's implementation of that vision was fundamentally at least a decade ahead of the curve. If you’re going to do that, you want lots of ports and you want every port to do everything, not “this one is the only one that can charge fast”, “that one doesn’t have video output”, “if you use both ports they drop to some weird lower capability because you’re dividing the controller”. Those complaints are the things people don’t like about the base-tier M-series processors today, and Apple's previous models solved that problem long before anyone else did.)

                Hell, until very recently the competition often didn’t even have Thunderbolt/PCIe tunneling… you got 10 Gbps USB-C and a grab-bag of charging and display features, and you’re gonna like it. That’s still the case with motherboards, and it’s literally only with this year’s releases that we’re finally getting USB4/Thunderbolt as standard on high-end boards. Literally more than a decade from when Apple started putting Thunderbolt on laptops, almost a decade from the era of 4x full-spec TB3 ports.

                … and a reminder that, in classic USB fashion, USB4 still doesn’t even guarantee PCIe tunneling support. So really it can still be just a normal 10 Gbps USB-C connection in a silly mustache and trench coat, even on the next-gen stuff. What’s the term for doing an okay, moderately competent but not exceptionally good job while your competition repeatedly shoots itself in the face, again? But it’s by design: the intent is deceiving and manipulating the customer into buying last year’s junk. It’s working as intended for USB-IF’s real customers and stakeholders.

                https://en.wikipedia.org/wiki/USB4#cite_ref-auto2_15-0

                • talldayo 12 days ago
                  > They were also the first to usb-c nirvana

                  Intel says hi.

                  > People look down on the move to usb-c which I don’t quite get

                  People loved Thunderbolt for replacing Firewire. They hated Apple's choice because these USB-C Macbooks shipped with precisely zero USB-A ports and relegated every user to carrying around a dongle.

                  The year is 2024, we're almost a decade out from Apple going all-in on USB-C and the predominant peripheral connector is still type-A. I don't like it either, but plugging our ears and pretending like it's not a problem is silly and only makes consumers mad.

                  > and reminder that in classic usb fashion, usb4 still doesn’t even guarantee Pcie tunneling support.

                  That is in fact the correct default to use. Ever heard of Thunderspy? https://thunderspy.io/

                  • josephg 12 days ago
                    > Intel says hi.

                    Intel helped make this move possible, but it doesn’t manufacture laptops. Apple took the heat for “donglegate”.

                    On the x86 desktop, usb-c is still surprisingly rare. I think my motherboard (that’s less than a year old) only has 2 usb-c ports and 8x usb-a.

                    • talldayo 12 days ago
                      > Intel helped make this move possible, but it doesn’t manufacture laptops. Apple took the heat for “donglegate”.

                      And rightfully so. They took Intel's technology and told an unprepared and uninterested industry to switch or die. Naturally, very few manufacturers switched over and Apple's all-or-nothing strategy made more people mad than happy.

                      Having 4 lanes of Thunderbolt connectivity is awesome. It doesn't really fix the fact that none of them can easily connect to a wired keyboard or mouse.

                      > On the x86 desktop, usb-c is still surprisingly rare.

                      My motherboard only has one TB connector, everything else is type-A too. Most of the bandwidth is broken out over SATA or PCIe internally, and frankly I don't regret it one bit. 99% of my life, there is nothing plugged into that Thunderbolt port.

      • IOT_Apprentice 14 days ago
        Apple was first with 64-bit iPhone chips. Remember, the Qualcomm VP at the time claimed it was nothing. Apple Silicon with the M1 instantly impressed with its low-power, high-performance design.
        • lolinder 14 days ago
          Those are both still (major) incremental improvements to known tech, not cutting-edge research. Apple takes what other companies have already done and does it better.
          • sunshinerag 13 days ago
            All the cutting-edge research other companies are supposedly doing is also incremental. Depends on your vantage point.
        • prmoustache 13 days ago
          But last at bringing a calculator on the iPad =)
          • viscanti 13 days ago
            No one has ever brought a native (not 3rd party) calculator to the iPad before. Apple is the first.
            • froggit 12 days ago
              > No one has ever brought a native (not 3rd party) calculator to the iPad before. Apple is the first.

              I'm not sure I understand... Considering apple makes the ipad, wouldn't all ipad calculators other than apple's be 3rd party by definition?

      • jeanlucas 14 days ago
        > For people interested in AI research

        I think he is pointing that out for people interested in research.

        OTOH, it is interesting to see how a company is delivering AI to end customers. It will bring up new challenges that will be interesting from at least an engineering point of view.

      • rmbyrro 13 days ago
        I think you misinterpreted OP's comment. Apple makes it sound like there's something new, but there isn't. They don't have to innovate, but it's good practice to credit those who've done what they're taking and using, and to use the names everyone else is already using.
        • marci 13 days ago
          The strange thing is Apple did mention (twice) in the article that their adapters are LoRAs, so I don't understand OP's comment.
          • aceazzameen 13 days ago
            I gathered from OP's "huge development" comment that he was talking about other people's popular perception that it wasn't a LoRA.
        • steve1977 13 days ago
          > Also to use the names everyone else is already using.

          That would be a very un-Apple thing to do. They really like to use their own marketing terms for technologies. It's not ARM, it's Apple Silicon. It wasn't Wi-Fi, it was AirPort. etc. etc.

          • gumby 13 days ago
            > It wasn't Wi-Fi, it was AirPort. etc. etc.

            FWIW the term “airport” predated the name “wifi” — in those days you had to otherwise call it IEEE 802.11.

            And the name was great: people were buying them like crazy and hiding them in the drop ceiling to get around the corporate IT department. A nice echo of how analysts would buy their own Apple II + VisiCalc to…get around corporate IT.

            I’m OK with Apple using “apple silicon” as the ARM is only part of it.

            Just commenting on your two examples; in general I agree with your point.

            • steve1977 13 days ago
              As far as I know, both the AirPort trademark and the term Wi-Fi got introduced in 1999 (could be that AirPort was a couple of weeks earlier)
              • gumby 13 days ago
                Airport came out in the beginning of the year, maybe January, at Macworld. WECA renamed themselves the WiFi Alliance a few years later, but the name (and trademark) had to exist for it to be worth them doing so.

                WECA reminds me of other "memorable" names of the era, my favorite being PCMCIA, though VESA is another fave.

                • steve1977 13 days ago
                  Thanks for the info. That was pretty much around the time when I started out in IT. And yes PCMCIA, what a mouthful! I remember Ethernet cards in this format.
                  • chuckadams 13 days ago
                    People Can’t Memorize Computer Industry Acronyms
          • astrange 13 days ago
            > It wasn't Wi-Fi, it was AirPort.

            Except in Japan, where it's AirMac. And China, where it's WLAN not Wi-Fi.

          • Tijdreiziger 13 days ago
            See also: FireWire, iSight, Retina, FaceTime, etc.
            • KerrAvon 13 days ago
              None of these really fit the pattern. Apple invented FireWire, called it FireWire, and other companies chose to call it different things in their implementations (partly because Apple originally charged for licensing the name, IIRC). iSight is an Apple product. FaceTime is an Apple product. Retina is branding for high-resolution displays beyond a certain visual density.
              • steve1977 13 days ago
                "Apple invented FireWire" is maybe not fully accurate (but actually a good example of the point here).

                Wikipedia: FireWire is Apple's name for the IEEE 1394 High Speed Serial Bus. Its development was initiated by Apple[1] in 1986,[3] and developed by the IEEE P1394 Working Group, largely driven by contributions from Sony (102 patents), Apple (58 patents), Panasonic (46 patents), and Philips (43 patents), in addition to contributions made by engineers from LG Electronics, Toshiba, Hitachi, Canon,[4] INMOS/SGS Thomson (now STMicroelectronics),[5] and Texas Instruments.

                What might be interesting in this regard is that Sony was also using its own trademark for it: "i.LINK".

      • caseyy 13 days ago
        > Apple has never been big on living at the cutting edge of technology

        There was such a time. Same as with Google. Interestingly, around 2015-2016 both companies significantly shifted from big innovations to iterative products. It's more visible with Google than Apple, but here's both.

        Apple:

        - Final Cut Pro

        - 1998: iMac

        - 1999: iBook G3 (father of all MacBooks)

        - 2000: Power Mac G4 Cube (the early grandparent of the Mac Mini form factor), Mac OS X

        - 2001: iPod, iTunes

        - 2002: Xserve (rackable servers)

        - 2003: Iterative products only

        - 2004: iWork Suite, Garage Band

        - 2005: iPod Nano, Mac mini

        - 2006: Intel Macs, Boot Camp

        - 2007: iPhone and Apple TV

        - 2008: MacBook Air, iPhone 3G

        - 2009: iPhone 3Gs, all-in-one iMac

        - 2010: iPad, iPhone 4

        - 2011: Final Cut Pro X

        - 2012: Retina displays, iBooks Author

        - 2013: iWork for iCloud

        - 2014: Swift

        - 2015: Apple Watch, Apple Music

        - 2016: Iterative products only

        - 2017: Iterative products mainly, plus ARKit

        - 2018: Iterative products only

        - 2019: Apple TV +, Apple Arcade

        - 2020: M1

        - 2021: Iterative products only

        - 2022: Iterative products only

        - 2023: Apple Vision Pro

        Google:

        - 1998: Google Search

        - 2000: AdWords (this is where it all started going wrong, lol)

        - 2001: Google Images Search

        - 2002: Google News

        - 2003: Google AdSense

        - 2004: Gmail, Google Books, Google Scholar

        - 2005: Google Maps, Google Earth, Google Talk, Google Reader

        - 2006: Google Calendar, Google Docs, Google Sheets, YouTube bought this year

        - 2007: Street View, G Suite

        - 2008: Google Chrome, Android 1.0

        - 2009: Google Voice, Google Wave (early Docs if I recall correctly)

        - 2010: Google Nexus One, Google TV

        - 2012: Google Drive

        - 2013: Chromecast

        - 2014: Android Wear, Android Auto, Google Cardboard, Nexus 6, Google Fit

        - 2015: Google Photos

        - 2016: Google Assistant, Google Home

        - 2017: Mainly iterative products only, Google Lens announced but it never rolled out really

        - 2018: Iterative products only

        - 2019: Iterative products only

        - 2020: Iterative products only, and some rebrands (Talk->Chat, etc)

        - 2021: Iterative products only, and Tensor Chip

        - 2022: Iterative products only

        - 2023: Iterative products only, and Bard (half-baked).

        • vile_wretch 13 days ago
          Some of your choices and what you consider iterative/innovative are strange to me. For 2009, a chassis update for the iMac and a spec/camera bump for the iPhone doesn't seem particularly innovative especially in comparison to say the HomePod in 2017 or Satellite SOS in 2022.

          Also small correction but iTunes (as Soundjam MP) was originally third-party software and Final Cut was acquired by Apple.

          • caseyy 13 days ago
            Yes, it's not perfect.

            About iTunes: I did not know that! Thank you.

            About iterative/innovative: I considered hardware and software that became household names or general knowledge to be significant innovations. It is not rigorous; I tried to include more rather than less. Still, in some years these companies mostly did version increases for their hardware and software, like new iOS and macOS versions, and that was it. Those years I marked as iterative.

            I included a few too many iPhones, although when I wrote that, my thought process was that these phones were pivotal to how iPhones developed. I should have included the original iPhone, and iPhone 3G — the first iPhone developed around the concept of an app platform and with an App Store. This has undoubtedly been a big innovation. iPhone 4 and 3Gs, perhaps, should not have been included.

            It's loose and just to illustrate a general trend, individual items are less important, we could all pick slightly different ones. But I believe the trend would remain.

            • KerrAvon 13 days ago
              You're missing Apple Silicon, which has had a huge impact across the entire industry even if random soccer dad doesn't know about it -- if any one thing is responsible for Intel's marketshare collapsing in the future, the M series of processors is it.
              • AshamedCaptain 13 days ago
                > if any one thing is responsible for Intel's marketshare collapsing in the future, the M series of processors is it.

                If anything, that would be AMD. But I'd guess they're both more worried about the entire desktop & laptop market shrinking way more than anything Apple does.

        • barbecue_sauce 13 days ago
          No Newton?
          • caseyy 13 days ago
            Missed it, but should have been included for 1998. A very good example.
        • lolinder 13 days ago
          The iMac refined a form factor that dated back to at least Commodore. The iBook came after decades of iteration by other companies on laptops. The Cube was just a PC with a more compressed form factor. The iPod came a few years after other commercial digital media players. Etc, etc.

          Note that I'm not saying that there's anything wrong with their approach or that they didn't make real improvements. I'm just saying that Apple has never produced any successful product that would count as "new" to someone interested in cutting-edge research. They've always looked around at things that exist but aren't yet huge successes and given them the final push to the mainstream.

          • caseyy 13 days ago
            It depends on the definition of "new". With some definitions, we may claim that nothing is ever new — we may say computers started with the Antikythera mechanism and abaci, or maybe before. With other definitions (like "new" as in a "new for most people") we will see that Apple has brought about many new things. So we need to agree on the definition.

            I used the definition of new somewhere between "new for most people", "newly popular", and "meaningfully advanced from the previous iteration". With such a definition, I think you can agree with me.

          • KerrAvon 13 days ago
            In the consumer space, I'm not sure I can think of any examples from anyone ever that are examples of cutting-edge research at the time. It's hard to build consumer products on the bleeding edge. You'd be releasing phones today using CHERI, for example, which is not quite ready for prime time.
        • com2kid 13 days ago
          > - 2000: Power Mac G4 Cube (the early grandparent of the Mac Mini form factor), Mac OS X

          The NeXT Cube being a very obvious inspiration here.

          Mac OS X was a needed step, and entire books have been written about its creation. While it has some innovative pieces, it was very much a do or die situation for Apple, not brought on by innovation so much as the need to survive. (I'm sure BeOS fans will argue that BeOS was the real innovative OS. ;) )

          > - 2001: iPod, iTunes

          The iPod was a very well done refinement of the existing MP3 player category. iTunes' innovation was the licensing deal they got with record companies; that is what really surprised everyone.

          > - 2002: Xserve (rackable servers)

          Not sure how this is an innovation? Rack-mount servers had been around a long time.

          > - 2004: iWork Suite,

          Microsoft literally had a product called Microsoft Works that was originally released in 1988 and shipped on tons of home PCs.

          > - 2005: iPod Nano, Mac mini

          The iPod Nano was cool, the Mac mini was a wonderful feat of engineering and cost reductions.

          > - 2006: Intel Macs, Boot Camp

          Necessity brought this about.

          > - 2007: iPhone and Apple TV

          This is a perfect example of Apple entering an existing product category and doing an amazing job of execution. Palm, Blackberry, and Microsoft were already releasing very capable smart phones, but none of them bothered polishing the product (MS and Blackberry focused on corporate sales, end user experience was not the top priority) and while Apple did push a lot of technology forward to make the iPhone (notably screen tech and using capacitive touch screens), their main innovation was realizing they could get customers to pay for a cell phone. For those who don't remember, prior to the iPhone, most customers got their cellphone for "free" from their cellular provider in return for agreeing to a 2 year contract. Apple realized if they made a really nice product, that people would buy it.

          Apple also did some really cool, and now largely forgotten about, positioning here involving the iPod Touch, where the iPod Touch had access to the full App Store and became entry level "kids toy" devices that got people into the ecosystem.

          Heck arguably the App Store was a larger innovation than the phone.

          Fun fact: Microsoft had an App Store ready to launch for Windows Mobile (pre Windows Phone 7) but it was scrapped at the last minute because an exec thought that "no way would phone users ever pay for apps".

          (When I joined MS in 2006 the source code was still lying around in the Windows Mobile source tree!)

          Apple TV was arguably too early at this point in time, I'd say it didn't really take off until later generations when more streaming media was available.

          But innovative? WebTV was out in the late 90s (!!) and Microsoft had been trying to do Media Center PCs since 2002. Heck, for a while with the Xbox 360, Microsoft basically owned the "TV smart device" market segment. (And they released the Xbox One as a media streaming device and sort of forgot that it was also a games console... oops.)

          As with most products, Apple just did a really good job of it, but Roku has dramatically outplayed everyone else in the market by getting embedded directly into cheap TVs sold at Costco.

          > - 2010: iPad, iPhone 4

          iPad is/was an amazing product, and it succeeded thanks to great apps.

          It was also a refinement of Tablet PCs which have been around since the late 80s/early 90s.

          Apple was willing to do what Microsoft wasn't: break all back-compat and make a really good single-purpose device. Microsoft's tablets (some of which are really damn nice!) were always hamstrung because Microsoft never could go all in on abandoning existing x86 software. (The closest attempt being Windows RT, which managed to make the perfect set of compromises to anger everyone!)

          > - 2015: Apple Watch, Apple Music

          The first generation Apple Watch was... meh. Now, I say this as someone who was working on a direct competitor - I am still not sure how it has such a miserable battery life and why such a massively overpowered CPU and GPU still dropped frames.

          I am not sure what is innovative about Apple Music, vs every other streaming music service.

    • throwaway4good 14 days ago
      I thought the news of them using Apple Silicon rather than NVIDIA in their data centers was significant.

      Perhaps there is still hope of a relaunch of Xserve; with the widespread use of Apple computers amongst developers, Apple has a real chance of challenging NVIDIA's CUDA moat.

      • pjmlp 14 days ago
        Not at Apple's price points.
        • throwaway4good 14 days ago
          I think NVIDIA has the highest hardware markup at the moment.
          • Hugsun 13 days ago
            You get considerably more ML FLOPS per dollar from a 4090 than from any Mac. The base M2 Max seems to sit at roughly the same price point, though it does grant you more RAM.

            Quadro and Tesla cards might be a different story. I would still like to see concrete FLOPS/$ numbers.

            • hajile 13 days ago
              They don't need the entire mac. Their cost per Max chip is probably $200-300 which beats the 4090 by a massive margin and each chip can do more than a 4090 because it also has a CPU onboard.

              The 4090 peaks at around 550W, which means they can run 5+ of their Max chips in the same power budget.

              A 4090 is $2000. Apple can probably get 5 chips on a custom motherboard for that cost. They'll use the same amount of power, but get a lot more raw compute.

              • Hugsun 12 days ago
                > Their cost per Max chip is probably $200-300 which beats the 4090 by a massive margin...

                That's true. I was talking about end user pricing.

                > ...each chip can do more than a 4090 because it also has a CPU onboard.

                That's a strange thing to say. It has a CPU, correct. That makes the chip more versatile, but for data center ML tasks it doesn't really matter. A 4090 also has much more ML-relevant compute per chip, so Apple's chips can't really "do more than a 4090" in any relevant way.

                Of course Apple pays less for its in-house chips than for external products, but that comparison doesn't seem relevant to the context, e.g. they're not going to challenge CUDA with internal chips.

                They might get more compute per watt though. My guess is that Nvidia's datacenter chips are competitive in that space, but that's another story.

              • eek2121 13 days ago
                The GPU in the M-series is much slower than a 4090. 4060-4070ish performance at best, and it varies quite a bit.
                • ribit 13 days ago
                  You need to consider this in the context of the relevant task. Nvidia GPUs have extremely high peak performance for GEMM, but when working with LLMs, bandwidth (and RAM capacity) becomes the limiting factor. There is a reason why real ML-focused datacenter Nvidia GPUs use much wider RAM interfaces and a much higher price point. The M2 Ultra might not have the raw compute, but it has a lot of RAM and large caches.
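
                  A rough way to see this: during autoregressive decoding every weight is streamed from memory once per token, so bandwidth rather than FLOPS bounds generation speed. A back-of-envelope sketch with purely illustrative numbers:

```python
# Back-of-envelope: during autoregressive decoding, every weight must be
# streamed from memory once per generated token, so bandwidth (not peak
# FLOPS) caps tokens/sec. All figures below are illustrative assumptions.

def max_tokens_per_sec(params_billion, bytes_per_param, bandwidth_gb_s):
    """Upper bound on decode speed for a bandwidth-bound LLM."""
    model_gb = params_billion * bytes_per_param
    return bandwidth_gb_s / model_gb

# A 70B-parameter model at 4-bit (~0.5 bytes/param) on ~800 GB/s of
# unified memory bandwidth:
print(round(max_tokens_per_sec(70, 0.5, 800), 1))  # ~22.9 tokens/sec
```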
                • hajile 13 days ago
                  If they can get 5 4070s for the price and power of one 4090, that's a win for them as they'll get more performance per dollar and per watt.
                  • talldayo 13 days ago
                    > and per watt

                    Part of the advantage of using "one 4090" is that the max TDP is only 450w, as opposed to 5 M2 Ultras running at ~150w each. When you scale up to Nvidia's latest Blackwell architecture, I genuinely don't know how Apple could beat them on performance-per-watt. Buying M2 Ultras wholesale is probably cheaper than an NVL72 cluster, but certainly not what you'd want to use for Linux or maximizing AI-based performance-per-watt.

                    • hajile 13 days ago
                      You are missing the point. We're discussing if Apple can use their own chips more cheaply than buying Nvidia's chips.

                      The max TDP is not the actual peak power consumption. Gamers Nexus recorded a 500W peak and almost 670W overclocked. Most reviews I've looked at put peak power consumption around 550W.

                      M2 Ultra wasn't even mentioned and it uses more than 150w. The correct question would be about M3 Max as we have solid numbers on it. M3 Max uses around 100w when both the GPU and CPU are heavily utilized and less than that when only the GPU is used.

                      This means that Apple could run 5 of their M3 Max chips in the same peak power as the 4090. But wait, there's more. 4090 doesn't run in a vacuum. It requires a separate CPU setup and a couple hundred more watts.

                      That means we could power 7 or so M3 Max chips with that same amount of power.

                      Of course, this isn't the whole story. 4090 isn't a professional chip either (while Apple can bin and certify their own CPUs and know they're getting a server-grade chip) and the 4090 also doesn't have nearly enough RAM. H100 starts at $25,000 and goes up. Apple could buy 75-100 M3 Max chips for that kind of money. That's certainly a load more compute than H100 would offer. Blackwell will be even more expensive in comparison.
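
                      Spelling out that arithmetic (every figure here is an assumption from this thread, not a measured number):

```python
# Sanity-check of the power and price arithmetic above. All figures are
# this thread's assumptions, not measured data.
gpu_peak_w = 550        # claimed 4090 peak draw
host_overhead_w = 150   # assumed CPU/platform overhead for a 4090 box
m3_max_w = 100          # claimed M3 Max package power under load

chips_per_gpu_power = (gpu_peak_w + host_overhead_w) // m3_max_w
print(chips_per_gpu_power)         # 7 chips in the same power envelope

h100_price = 25_000                # quoted H100 starting price
m3_max_cost = 300                  # assumed per-chip cost to Apple
print(h100_price // m3_max_cost)   # ~83 chips for one H100's price
```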

            • throwaway4good 13 days ago
              The M2 is a chip designed to be in a laptop (and it is quite powerful given its low power consumption). Presumably they have a different chip, or at least a completely different configuration (RAM, network, etc.), in their data centers.
              • mrweasel 13 days ago
                The interesting point here is that developers targeting the Mac can safely assume that the users will have a processor capable of significant AI/ML workloads. On the Windows (and Linux) side of things, there's no common platform, no assumption that the users will have an NPU or GPU capable of doing what you want. I think that's also why Microsoft was initially going for the ARM laptops, where they'd be sure that the required processing power is available.
                • qwytw 13 days ago
                  > The interesting point here is that developers targeting the Mac can safely assume that the users will have a processor capable of significant AI/ML workloads

                  Also that a significant proportion (majority?) of them will have just 8 GB of memory which is not exactly sufficient to run any complex AI/ML workloads.

                  • talldayo 13 days ago
                    Easy solution; just swap multiple gigabytes of your model to SSD-based ZRAM when you run out of memory. What could possibly go wrong?
                • skohan 13 days ago
                  I believe MS is trying to standardize this, in the same way as they do with DirectX support levels, but I agree it's probably going to be inherently a bit less consistent than Apple offerings
                  • pjmlp 13 days ago
                    DirectML can use multiple backends.
                • InsomniacL 13 days ago
                  That sounds like a big issue, but surely assuming either way is bad.

                  I expect OS's will expose an API which, when queried, will indicate the level of AI inference available.

                  Similar to video decoding/encoding where clients can check if hardware acceleration is available.
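
                  Something like this hypothetical shape, say (names entirely invented; no OS exposes exactly this today):

```python
# Hypothetical capability-query API in the spirit described above.
# All names are invented for illustration; no real OS exposes this.
from enum import IntEnum

class InferenceTier(IntEnum):
    NONE = 0   # no acceleration; CPU only
    GPU = 1    # general-purpose GPU compute available
    NPU = 2    # dedicated neural engine available

def query_inference_tier(has_npu, has_gpu):
    # An OS would probe real hardware; here the flags stand in for that.
    if has_npu:
        return InferenceTier.NPU
    if has_gpu:
        return InferenceTier.GPU
    return InferenceTier.NONE

# An app picks a model size based on the reported tier:
tier = query_inference_tier(has_npu=False, has_gpu=True)
assert tier == InferenceTier.GPU
```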

                • treprinum 13 days ago
                  How does it help me right now (with a maxed-out M3 Max) that Apple might have some chip in the future? I do DL on an A6000 and a 4090; I'm not waiting for Apple to someday produce a chip that is faster in ML than a 1650...
                • sofixa 13 days ago
                  That's probably where Microsoft's "Copilot+ PCs" come in.
                  • pjmlp 13 days ago
                    Plus DirectML, which, as the name implies, builds on top of DirectX, allowing multiple backends: CPU, GPU, NPU.
              • skohan 13 days ago
                There was a rumor floating around that Apple might try to enter the server chip business with an AI chip, which is an interesting concept. Apple's never really succeeded in the B2B business, but they have proven a lot of competency in the silicon space.

                Even their high-end prosumer hardware could be interesting as an AI workstation given the VRAM available if the software support were better.

                • jitl 13 days ago
                  > Apple's never really succeeded in the B2B business

                  Idk every business I’ve worked and all the places my friends work seem to be 90% Apple hardware, with a few Lenovo issued for special case roles in finance or something.

            • esskay 13 days ago
              Of course you do; Apple's selling mobile SoCs, not high-end cards. That doesn't mean they're incapable of making them for the right application. You don't seriously think the server farms are running on M4 Pro Max chips, do you...
          • pjmlp 14 days ago
            Depends on which card one is talking about.
            • throwaway4good 13 days ago
              Maybe. It is not really obvious how much you pay for the AI accelerator part of their offerings. For example, the chips in iPhones are quite powerful even adjusted for price. However, for some cases, like the Max chip in the MacBooks or the extra RAM, their pricing seems high, maybe even Nvidia-high.
            • bayindirh 13 days ago
              [flagged]
              • pjmlp 13 days ago
                Yes, it does.

                Should we keep arguing like on school playground?

                • bayindirh 13 days ago
                  Maybe, if you prefer. Honestly, I'm migrating a server fleet today, so notifications are hard to hear over these Apollo 6500 fans.
                  • pjmlp 13 days ago
                    Not everyone needs those.
        • bayindirh 13 days ago
          I mean, even Apple can't match the markups nVidia has right now. If you break a GPU in your compute server, you wait months for a replacement, and the part is sent back if you can't replace it in five days.

          Crazy times.

          • c0balt 13 days ago
            Enterprise offerings tend to differ. You can get a replacement NVIDIA GPU via a partner, like Lenovo, in 2-3 weeks. And that's on the high side for some support contracts.
            • bayindirh 13 days ago
              That's from HPE, for an Apollo 6500.
      • grecy 13 days ago
        I'm wondering how much electricity they will save just from moving from Intel to Apple Silicon in their data centers.
    • derefr 14 days ago
      I think the thing they're saying that's novel, isn't what they have (LoRAs), but where and when and how they make them.

      Rather than just pre-baking static LoRAs to ship with the base model (e.g. one global "rewrite this in a friendly style" LoRA, etc), Apple seem to have chosen a bounded set of behaviors they want to implement as LoRAs — one for each "mode" they want their base model to operate in — and then set up a pipeline where each LoRA gets fine-tuned per user, and re-fine-tuned any time the data dependencies that go into the training dataset for the given LoRA (e.g. mail, contacts, browsing history, photos, etc) would change.

      In other words, Apple are using their LoRAs as the state-keepers for what will end up feeling to the user like semi-online Direct Preference Optimization. (Compare/contrast: what Character.AI does with their chatbot response ratings.)

      ---

      I'm not as sure, from what they've said here, whether they're also implying that these models are being trained in the background on-device.

      It could very well be possible: training something that's only LoRA-sized, on a vertically-integrated platform optimized for low-energy ML, that sits around awake but doing nothing for 8 hours a day, might be practical. (Normally it'd require a non-quantized copy of the model, though. Maybe they'll waste even more of your iPhone's disk space by having both quantized and non-quantized copies of the model, one for fast inference and the other for dog-slow training?)

      But I'm guessing they've chosen not to do this — as, even if it were practical, it would mean that any cloud-offloaded queries wouldn't have access to these models.

      Instead, I'm guessing the LoRA training is triggered by the iCloud servers noticing you've pushed new data to them, and throwing a lifecycle notification into a message queue of which the LoRA training system is a consumer. The training system reduces over changes to bake out a new version of any affected training datasets; bakes out new LoRAs; and then basically dumps the resulting tensor files out into your iCloud Drive, where they end up synced to all your devices.
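
      For scale, the artifact being synced around would be tiny; a LoRA adapter is just a pair of low-rank matrices added onto a frozen weight. A toy NumPy sketch (purely illustrative, not Apple's implementation; all sizes made up):

```python
import numpy as np

# Toy LoRA sketch: a frozen base weight plus a trainable low-rank
# update (A, B) that can be trained, shipped, and synced separately.
# Dimensions here are made up; real models use d in the thousands.
d, r = 8, 2                              # hidden size, adapter rank
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen base weights
A = rng.standard_normal((r, d)) * 0.01   # down-projection (trainable)
B = np.zeros((d, r))                     # up-projection, zero-init

def forward(x, scale=1.0):
    # Base path plus low-rank correction; W itself never changes.
    return x @ W.T + scale * (x @ (B @ A).T)

x = rng.standard_normal((1, d))
# With B zero-initialized the adapter starts as a no-op:
assert np.allclose(forward(x), x @ W.T)
# The adapter is a fraction of the base weights' size (32 vs 64 values):
print(A.size + B.size, W.size)
```

      At real scale the same ratio holds: rank-16 factors over a few-billion-parameter model come out to tens of megabytes, which is plausible to sync like any other small file.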

      • Hugsun 13 days ago
        There is no way they would secretly train loras in the background of their user's phones. The benefits are small compared to the many potential problems. They describe some LoRA training infrastructure which is likely using the same capacity as they used to train the base models.

        > ...each LoRA gets fine-tuned per user...

        Apple would not implement these sophisticated user specific LoRA training techniques without mentioning them anywhere. No big player has done anything like this and Apple would want the credit for this innovation.

      • wmf 14 days ago
        I don't think the LoRAs are fine-tuned locally at all. It sounds like they use RAG to access data.
        • derefr 14 days ago
          Consider a feature from earlier in the keynote: the thing Notes (and Math Notes) does now where it fixes up your handwriting into a facsimile of your handwriting, with the resulting letters then acting semantically as text (snapping to a baseline grid; being reflowable; being interpretable as math equations) but still having the kind of long-distance context-dependent variations that can't be accomplished by just generating a "handwriting font" with glyph variations selected by ligature.

          They didn't say that this is an "AI thing", but I can't honestly see how else you'd do it other than by fine-tuning a vision model on the user's own handwriting.

          • Hugsun 13 days ago
            I didn't see the presentation but judging by your description, this is achievable using in-context learning.
          • wmf 14 days ago
            For everything other than handwriting I don't think the LoRAs are fine-tuned locally.
            • derefr 14 days ago
              Well, here's another one: they promised that your local (non-iCloud) photos don't leave the device. Yet they will now — among many other things they mentioned doing with your photos — allow you to generate "Memoji" that look like the people in your photos. Which includes the non-iCloud photos.

              I can't picture any way to use a RAG to do that.

              I can picture a way to do that that doesn't involve any model fine-tuning, but it'd be pretty ridiculous, and the results would probably not be very good either. (Load a static image2text LoRA tuned to describe the subjects of photos; run that once over each photo as it's imported/taken, and save the resulting descriptions. Later, whenever a photo is classified as a particular subject, load up a static LLM fine-tune that summarizes down all the descriptions of photos classified as subject X so far, into a single description of the platonic ideal of subject X's appearance. Finally, when asked for a "memoji", load up a static "memoji" diffusion LoRA, and prompt it with that subject-platonic-appearance description.)

              But really, isn't it easier to just fine-tune a regular diffusion base-model — one that's been pre-trained on photos of people — by feeding it your photos and their corresponding metadata (incl. the names of subjects in each photo); and then load up that LoRA and the (static) memoji-style LoRA, and prompt the model with those same people's names plus the "memoji" DreamBooth-keyword?

              (Okay, admittedly, you don't need to do this with a locally-trained LoRA. You could also do it by activating the static memoji-style LoRA, and then training to produce a textual-inversion embedding that locates the subject in the memoji LoRA's latent space. But the "hard part" of that is still the training, and it's just as costly!)

              • janekm 13 days ago
                That's going to be something similar to IPAdapter FaceID: https://ipadapterfaceid.com Basically you use a facial structure representation that you'd use for face recognition (which of course Apple already computes on all your photos) together with some additional feature representations to guide the image generation. No need for additional fine-tuning. A similar approach could likely be used for handwriting generation.
              • gokuldas011011 14 days ago
                I believe this could be achieved by providing a seed image to the diffusion model and generating the memoji based on it. This way fine-tuning isn't required.
                • raverbashing 13 days ago
                  Yup this is pretty much it, and DALLE and others can do this already
      • throwthrowuknow 13 days ago
        I think you’re misunderstanding what they mean by adapting to use cases. See this passage:

        > The adapter models can be dynamically loaded, temporarily cached in memory, and swapped — giving our foundation model the ability to specialize itself on the fly for the task at hand

        This along with other statements in the article about keeping the base model weights unchanged says to me that they are simply swapping out adapters on a per app or per task basis. I highly doubt they will fine tune adapters on user data since they have taken a position against this. I wonder how successful this approach will be vs merging the adapters with the base model. I can see the benefits but there are also downsides.
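
        That passage can be sketched as a cache of tiny per-task adapters applied over one frozen weight matrix (all names and numbers here are invented; only the load/cache/swap pattern is the point):

```python
import numpy as np

# Sketch of per-task adapter swapping over one frozen base model.
# All names and numbers are invented; the pattern is what matters:
# small adapters get loaded, cached in memory, and swapped per task,
# while the base weights never change.

base_W = np.eye(4)          # stand-in for the frozen base weights
adapter_cache = {}          # adapters temporarily cached in memory

def load_adapter(task):
    # Pretend to read tiny low-rank factors (A, B) for this task.
    if task not in adapter_cache:
        A = np.full((1, 4), 0.1 if task == "summarize" else 0.2)
        B = np.ones((4, 1))
        adapter_cache[task] = (A, B)
    return adapter_cache[task]

def run(x, task):
    A, B = load_adapter(task)
    # The low-rank update is applied on the fly; base_W is untouched.
    return x @ (base_W + B @ A).T

x = np.ones((1, 4))
out_summarize = run(x, "summarize")
out_tone = run(x, "tone")   # swapping tasks only swaps the adapter
assert not np.allclose(out_summarize, out_tone)
assert np.allclose(base_W, np.eye(4))   # base model stayed frozen
```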

      • rvaish 14 days ago
        Easel has been on iMessage for a bit now: https://apps.apple.com/us/app/easel-ai/id6448734086
    • selimnairb 13 days ago
      Very little of the “AI” boom has been novel, most has been iterative elaborations (though innovative nonetheless). Academics have been using neural network statistical models for decades. What’s new is the combination of compute capability and data volume available for training. It’s iterative all the way down though, that’s how all technologies are developed.
      • sigmoid10 13 days ago
        Most people don't realize this, but almost all research works that way. Only the media spins research as breakthrough-based, because that way it is easier to sell stories. But almost everything is incremental/iterative. Even the transformer architecture, which in some way can be seen as the most significant architectural advancement in AI in the past years, was a pretty small, incremental step when it came out. Only with a lot of further work building on top of that did it become what we see today. The problem is that science-journalists vastly outnumber scientists producing these incremental steps, so instead of reporting on topics when improvements actually accumulated to a big advancement, every step along the way gets its own article with tons of unnecessary commentary heralding its features.
      • astrange 13 days ago
        The "bitter lesson of machine learning" means that you actually can't do anything novel; it won't work as well as just doing the simple thing but bigger.

        (So there is room left if you're limited by memory or budget.)

      • w10-1 13 days ago
        > What’s new is the combination of compute capability and data volume available for training

        This is the important part.

        My advisor said new means old method applied to new data or new method on old data.

        Commercially, that means price points, i.e., discrete points where something becomes viable.

        Maybe that's iterative, but maybe not. Either way, once the opportunity presents, time is of the essence.

    • WiSaGaN 14 days ago
      This gives me the vibe of calling high resolution screens as "retina" screens.
      • dishsoap 14 days ago
        I don't see anything wrong with that at all. They've created a branding term that allows consumers to get an idea of the sort of pixel density they can expect without having to actually check, should they not want to bother.
        • necovek 14 days ago
          Except that everyone has different visual acuity and different distance they use the same devices at, and in the end, "retina" means nothing at all.

          But this is exactly the type of marketing Apple is good at, though "retina" is probably not the most successful example.

          • theshrike79 14 days ago
            If your "visual acuity" is so good that you can see the pixels of a retina-branded display from the intended viewing distance, you might need to be studied for science.
            • necovek 13 days ago
              If your visual acuity is 20/10, you'd roughly need 3600 pixels vertically to not notice any pixelation if Bill Otto did the calculations right at https://www.quora.com/What-resolution-does-20-10-vision-corr...

              20/10 is rare but can easily be corrected to with glasses or contacts.

              You also left "intended viewing distance" hanging there without acknowledging what that minimum distance actually is.
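
              The underlying arithmetic is easy to check: assuming 20/20 vision resolves roughly 1 arcminute and 20/10 roughly 0.5, the pixel density needed at a given viewing distance works out as follows (the distance is illustrative):

```python
import math

# Rough check of the acuity claim: a pixel must subtend less than the
# viewer's resolvable angle at the viewing distance. Acuity figures
# (1 arcmin for 20/20, 0.5 for 20/10) and the 12" distance are the
# usual rule-of-thumb assumptions, not measurements.

def required_ppi(acuity_arcmin, distance_in):
    pixel_pitch = distance_in * math.tan(math.radians(acuity_arcmin / 60))
    return 1 / pixel_pitch

print(round(required_ppi(1.0, 12)))  # 20/20 at 12": ~286 PPI
print(round(required_ppi(0.5, 12)))  # 20/10 at 12": ~573 PPI
```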

            • jackothy 13 days ago
              It's not so impossible to spot flaws if you're using worst-case testing scenarios. Which are not worthless because such patterns do actually pop up in real world usage, albeit rarely.
              • kolinko 13 days ago
                Examples?
                • jackothy 13 days ago
                  Had one happen to me recently where I was scrolling Spotify, and they do the thing where if you try to scroll past max they will stretch the content.

                  One of the album covers being stretched had some kind of fine pattern on it that caused a clearly visible shifting/flashing Moiré pattern as it was being stretched.

                  Wish I could remember what album cover it was now.

                  Though really it's simple enough: As long as you can still spot a single dark pixel in the middle of an illuminated white screen, the pixels could benefit from being smaller. (Edit: swapped black and white)

                  • kolinko 10 days ago
                    The single pixel example is wrong I think, because that’s not how light and eyes work - you can always spot a single point of light, regardless of how small - if it’s bright enough.
                    • jackothy 10 days ago
                      Yes exactly that's why I went back right away to edit - if the whole screen is white but one pixel is off and you can spot it, then my logic holds.
        • ngcc_hk 14 days ago
          Agreed. It is not high resolution as such, but high resolution that the user can relate to, like not being able to see the pixels.

          I still remember struggling with an Apple Newton at a Gartner Group conference vs. the Palm devices freely on loan there. Palm solved a problem, even though it was not very Apple … the user could input on a small device. I kept it, on top of my newly bought Newton.

          It is the user …

      • pyinstallwoes 13 days ago
        Still, no manufacturer compares to the quality and resolution of Apple's screens …
        • Malcolmlisk 13 days ago
          Those screens are produced by Samsung.
          • PaulRobinson 13 days ago
            By your logic, I own a Foxconn smartphone with a FreeBSD-based OS. If you bought a Porsche, would you call it a Volkswagen?
            • lostmsu 13 days ago
              Perhaps you should.
          • mensetmanusman 13 days ago
            Part of the screen is, yes. Apple designs the full stack and sources new technology from multiple suppliers including Samsung.
          • astrange 13 days ago
            Except for when it's LG, Sharp or BOE.
      • dwaite 12 days ago
        Or AMD calling monitors which meet quality and feature requirements 'FreeSync'

        Or Intel calling USB4 devices and cables which meet quality and feature requirements 'Thunderbolt 5'

        Compared to, say, manufacturers who aren't willing to meet any certification requirements or to properly implement the standards at play saying they have "USB-A 3.2 2x2 ports" on their motherboards.

        Retina doesn't carry the same weight as an industry certification effort like thunderbolt, but it still informs people that a screen actually meets some sort of bar without them having to evaluate pages of tech specs, and reviews saying whether the tech specs are accurate or have undocumented caveats.

        Finally, establishing such certifications are difficult - look at the number of failed attempts at creating industry quality/feature marks in the television market.

      • viktorcode 13 days ago
        Retina means high pixel density, not high resolution. And there are very few standalone displays on the market which can be called “retina”, unfortunately.
        • necovek 12 days ago
          Interestingly, I would venture to say that DPI is a measure of resolution: that's the way it's still used in printers or scanners, for example (600 dpi). And retina instead means high angular resolution, or pixels having small arc measures from "appropriate" distances.

          The term "resolution" transitioned gradually to mean number-of-individual-lines-rendered horizontally and vertically for displays, but really, the idea is how many dots can you "resolve" (or "resolving power"): a "high resolution" screen or screen mode had a larger number of individual pixels being drawn, which meant that the density is higher. You never talk about a scanner having a resolution of 7200x3600 even if it can scan 12"x6" at 600dpi.

          So really, in an informal conversation, I believe both are fine. If you want to be extremely precise, you missed the mark: width and height in pixels is the sanest way to call what you refer to as "resolution".

          • necovek 12 days ago
            See also https://en.wikipedia.org/wiki/Display_resolution:

            > For device displays such as phones, tablets, monitors and televisions, the use of the term display resolution as defined above is a misnomer, though common. The term display resolution is usually used to mean pixel dimensions, the maximum number of pixels in each dimension (e.g. 1920 × 1080), which does not tell anything about the pixel density of the display on which the image is actually formed: resolution properly refers to the pixel density, the number of pixels per unit distance or area, not the total number of pixels. In digital measurement, the display resolution would be given in pixels per inch (PPI).
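
            To make the distinction concrete: the same pixel grid yields very different densities (and so a different "retina" verdict) depending on panel size. A quick sketch with illustrative diagonals:

```python
import math

# Pixel dimensions alone say nothing about density: the same 5120x2880
# grid gives very different PPI on different panel sizes. The 27" and
# 40" diagonals below are illustrative examples.

def ppi(width_px, height_px, diagonal_in):
    # Pixels along the diagonal divided by the diagonal length.
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(5120, 2880, 27)))  # ~218 PPI
print(round(ppi(5120, 2880, 40)))  # ~147 PPI
```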

    • threeseed 14 days ago
      It's a huge development in terms of it being a consumer-ready, on-device LLM.

      And if Karpathy thinks so then I assume it's good enough for HN:

      https://x.com/karpathy/status/1800242310116262150

      • rishabhjain1198 14 days ago
        The productization of it (like Karpathy mentioned) is awesome. But I think the URL for that would be this maybe? [link](https://www.apple.com/apple-intelligence/)
      • kfrzcode 14 days ago
        [flagged]
        • threeseed 14 days ago
          a) I would trust Karpathy over Elon given he doesn't have a competing product.

          b) Apple only provides information to ChatGPT when the user consents to doing so and the information is only for that request i.e. it is not logged for future training.

        • pests 14 days ago
          The temp around Elon here is lower than you think. I would say almost the exact opposite of your claim.
          • slimebot80 14 days ago
            Elon is talking out his arse as usual.
        • camillomiller 14 days ago
          He is factually wrong and has been rekted by his own community notes
    • Cthulhu_ 13 days ago
      Thing is, Apple takes these concepts and polishes them, makes them accessible to maybe not laypeople but definitely a much wider audience compared to those already "in the industry", so to speak.
    • marcellus23 14 days ago
      They refer to LoRA explicitly in the post.
      • rishabhjain1198 14 days ago
        Although I caught that on the first read, I found myself questioning when I read the adaptors part, "is this not just LoRA...".

        Maybe it's my fault as a reader, but I think the writing could be clearer. Usually in a research paper you would link to the LoRA paper there too.

    • monkeydust 13 days ago
      Feel Apple should have just focused on their models for this one and not complicate the conversation with OpenAI. They could have left that to another announcement later.

      Quick straw poll around the office: many think their data will be sent off to OpenAI by default for these new features, which is not the case.

    • scosman 13 days ago
      I think you’re referring to my comment about this being huge for developers?

      Just want to point out I call this launch huge, didn’t say “huge development” as quoted, and didn’t imply what was interesting was the ML research. No one in this thread used the quoted words, at least that I can see.

      My comment was about dev experience, memory swapping, potential for tuning base models to each HW release, fine tune deployment, and app size. Those things do have the potential to be huge for developers, as mentioned. They are the things that will make a local+private ML developer ecosystem work.

      I think the article and comment make sense in their context: a developer conference for Mac and iOS devs.

      Apple also explicitly says it’s LoRA.

    • franzb 13 days ago
      This isn't about AI research, it's about delivering AI at unimaginable scale.
      • throwthrowuknow 13 days ago
        180 million users for ChatGPT isn't unimaginable, but it does exceed the number of iPhone users in the United States.
    • frompom 13 days ago
      Do you expect any company launching hardware to cite the various papers behind how the tech was developed? EVERY piece of tech announced by ANY company relies on a variety of research, yet nobody expects every launch to cite the numerous papers related to it. Why would products/services in this category be any different?
      • talldayo 13 days ago
        > Do you have the same expectations for any company launching hardware that they cite the various papers related to how the tech was developed?

        If they try to market it with a seemingly unique or yet-unheard of name, then yeah. It is nice knowing what the "real world" name of an Apple-ized technology is.

        Just ignoring it and marketing the technology under some new name is adjacent to lying to your audience through omission.

        • dwaite 12 days ago
          > Just ignoring it and marketing the technology under some new name is adjacent to lying to your audience through omission.

          They don't market technology, they market solutions. E.g. afib detection on Apple Watch, rather than calling it a BNNS using a custom-built sensor for one-wire EKG.

          This is the document where they describe how the solution works, and they clearly state adapters work based on LoRA.

    • arvinsim 12 days ago
      > The way they wrote about "adaptors" can make it seem like it's something novel, but it's actually just re-iterating vanilla LoRA.

      That's a classic Apple strategy though.

    • gigglesupstairs 14 days ago
      Was there anything about searching through our own photos using prompts? I thought this could be pretty amazing and still a natural way to find very specific photos in one’s own photo gallery.
      • mazzystar 14 days ago
        Run OpenAI's CLIP model on iOS to search photos. https://github.com/mazzzystar/Queryable
        • gigglesupstairs 13 days ago
          Yes, exactly this. I've had this for a while and it works wonderfully well in most cases, but it's wonky and not seamless. I wanted a more integrated approach with the Photos app, which only Apple can bring to the table.
      • avereveard 14 days ago
        Which is in turn just multimodal embedding

        Besides I could do "named person on a beach in August" and get the correct thing in photos on Android photos, so I don't get it.

        It's amazing for Apple users if they didn't have it before. But from a tech standpoint, people could have had it for a while.
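Stripped down, that kind of multimodal-embedding search is just nearest-neighbour lookup in a shared vector space. A sketch with random unit vectors standing in for real CLIP image/text embeddings (dimensions and counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
num_photos, dim = 1000, 512

# Pretend these came from a CLIP image encoder; unit-normalize so that
# a dot product equals cosine similarity.
photo_embs = rng.standard_normal((num_photos, dim))
photo_embs /= np.linalg.norm(photo_embs, axis=1, keepdims=True)

def search(text_emb, k=5):
    # Rank all photos by cosine similarity to the text embedding.
    text_emb = text_emb / np.linalg.norm(text_emb)
    scores = photo_embs @ text_emb
    top = np.argsort(-scores)[:k]
    return top, scores[top]

# Pretend this came from the matching text encoder for a query like
# "named person on a beach in August".
query = rng.standard_normal(dim)
top, scores = search(query)
print(top, scores)
```

On-device deployments (like the Queryable app linked above) typically precompute the photo embeddings once and only run the text encoder per query.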

        • azinman2 14 days ago
          Photos has had this for a while with structured natural language queries, and this kind of prompt was part of the WWDC video.
        • theshrike79 14 days ago
          The difference is that Apple has been doing this on-device for maybe 4-5 years already with the Neural Engine. Every iOS version has brought more stuff you can search for.

          The current addition is "just" about adding a natural language interface on top of data they already have about your photos (on device, not in the cloud).

          My iPhone 14 can, for example, detect the breed of my dog correctly from the pictures and it can search for a specific pet by name. Again on-device, not by sending my stuff to Google's cloud to be analysed.

          • fauigerzigerk 13 days ago
            They have been trying and failing to do a tiny little bit of this. It's so broken and useless that I've been uploading all my iCloud photos to Google as well, for search and sharing.
            • theshrike79 13 days ago
              If you like Google using your personal photos for machine learning, that's your option. Now they have your every photo, geotagged and timestamped so they can see where you have been and at what times. Then they of course anonymise that information into an "advertiser id" they tag on to you and a sufficient quantity of other people so they can claim they're not directly targeting anyone.

              I prefer Apple's privacy focused option myself.

              • fauigerzigerk 13 days ago
                >If you like Google using your personal photos for machine learning, that's your option.

                It's a trade-off between getting the features I need and the price I have to pay. All else being equal I do prefer privacy as well. Unfortunately, all else is not equal.

                >I prefer Apple's privacy focused option myself.

                It's only an option if it works.

    • lhl 13 days ago
      I think your conclusion is uncharitable or at least depends on how deep your interest in AI research actually is. Reading the docs, there are at least several points of novelty/interest:

      * Clearly outlining their intent/policies for training/data use. Committing to not using user data or interactions for training their base models is IMO actually a pretty big deal and a differentiator from everyone else.

      * There's a never-ending stream of new RL variants ofc, but that's how technology advances, and I'm pretty interested to see how these compare with the rest: "We have developed two novel algorithms in post-training: (1) a rejection sampling fine-tuning algorithm with teacher committee, and (2) a reinforcement learning from human feedback (RLHF) algorithm with mirror descent policy optimization and a leave-one-out advantage estimator. We find that these two algorithms lead to significant improvement in the model’s instruction-following quality."

      * I'm interested to see how their custom quantization compares with the current SoTA (probably AQLM atm)

      * It looks like they've done some interesting optimizations to lower TTFT, this includes the use of some sort of self-speculation. It looks like they also have a new KV-cache update mechanism and looking forward to reading about that as well. 0.6ms/token means that for your average I dunno, 20 token query you might only wait 12ms for TTFT (I have my doubts, maybe they're getting their numbers from much larger prompts, again, I'm interested to see for myself)

      * Yes, it looks like they're using pretty standard LoRAs, the more interesting part is their (automated) training/re-training infrastructure but I doubt that's something that will be shared. The actual training pipeline (feedback collection, refinement, automated deployment) is where the real meat and potatoes of being able to deploy AI for prod/at scale lies. Still, what they shared about their tuning procedures is still pretty interesting, as well as seeing which models they're comparing against.

      As this article doesn't claim to be a technical report or a paper, while citations would be nice, I can also understand why they were elided. OpenAI has done the same (and sometimes gotten heat for it, like w/ Matryoshka embeddings). For all we know, maybe the original author had references, or maybe, since PEFT isn't new to those in the field, describing it is just a service to the reader - at the end of the day, it's up to the reader to make their own judgements on what's new or not, or a huge development or not. From my reading of the article, your conclusion, which funnily enough is now the top-rated comment on this thread, isn't actually much more accurate than the old one you're criticizing.
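On the leave-one-out advantage estimator mentioned in that post-training list: Apple gives no details, but the standard construction (as in RLOO-style methods) is simple enough to sketch. This is a generic illustration, not Apple's algorithm:

```python
import numpy as np

def leave_one_out_advantages(rewards):
    """For k sampled completions of one prompt, baseline each sample
    with the mean reward of the OTHER k-1 samples. This gives an
    unbiased, low-variance advantage without a learned value model."""
    r = np.asarray(rewards, dtype=float)
    k = len(r)
    baselines = (r.sum() - r) / (k - 1)  # mean of the other samples
    return r - baselines

# Four hypothetical reward scores for completions of the same prompt.
adv = leave_one_out_advantages([1.0, 0.0, 0.5, 0.5])
print(adv)  # advantages sum to zero by construction
```

The appeal for production RLHF pipelines is that dropping the critic network roughly halves training memory, which matters when post-training at the scale the article describes.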

    • steve1977 13 days ago
      You know what company you are talking about here?
    • rvaish 14 days ago
      reminds me of Easel on iMessage: https://easelapps.ai/
    • Spooky23 13 days ago
      Those people aren’t looking at Apple.

      They seem to have a good model for adding value to their products without the hold my beer, conquer the world bullshit that you get from OpenAI, et al.

    • kfrzcode 14 days ago
      "AI for the rest of us."
      • wkat4242 14 days ago
        Except Apple isn't really for the rest of us. Outside of America and a handful of wealthy Western countries, it's for the top 5-20% of earners only.
        • jahewson 14 days ago
          Approximately 33% of all smartphones in the world are iPhones.
          • rvnx 13 days ago
            60% in the US
          • dwaite 12 days ago
            Maybe the suspicion is the top 5% carry six smartphones?
        • throwaway2037 14 days ago
          Japan and Taiwan are both more than 50% iOS.

          Ref: https://worldpopulationreview.com/country-rankings/iphone-ma...

        • whynotminot 13 days ago
          Who do you think this presentation is geared toward?
        • theshrike79 13 days ago
          In the EU the market share is 30%
          • d1sxeyes 13 days ago
            Yes, but not evenly distributed: Benelux, Germany, Austria, and the Nordic countries have a lot of iPhone users, while moving further east (or south) you see lower market share. Maybe it's "two handfuls" of wealthy western countries rather than just one, but I think OP's point holds true.
      • chuckjchen 13 days ago
        This sounds like every newcomers to the stage except for big players like Apple.
    • jan3024 13 days ago
      [dead]
  • cube2222 14 days ago
    Halfway down, the article contains some great charts with comparisons to other relevant models, like Mistral-7B for the on-device models, and both GPT-3.5 and GPT-4 for the server-side models.

    They include data about the ratio of which outputs human graders preferred (for server side it’s better than 3.5, worse than 4).

    BUT, the interesting chart to me is "Human Evaluation of Output Harmfulness", which is much, much "better" than the other models. Both on-device and server-side.

    I wonder if that's part of wanting to have GPT as the "level 3". Making their own models much more cautious, and using OpenAI's models in a way that makes it clear "it was ChatGPT that said this, not us".

    Instruction following accuracy seems to be really good as well.

    • crooked-v 14 days ago
      I want to know what they consider "harmful". Is it going to refuse to operate for sex workers, murder mystery writers, or people who use knives?
      • hotdogscout 14 days ago
        I bet it's the usual double standards the AI one percenters cater to.

        No sex because apparently it's harmful yet never explained why.

        No homophobia/transphobia if you're Christian but if you're Muslim it's fine.

      • m463 13 days ago
        Bet it depends on the country.

        In the USA, you won't be able to ask about sex, but you can probably ask about tank man.

      • arthur_sav 14 days ago
        They'll inject whatever ideology / dogma is "the current thing" into this.
        • rvnx 13 days ago
          "Think Different"
        • HeatrayEnjoyer 13 days ago
          [flagged]
          • mvandermeulen 13 days ago
            Do they have exclusive rights?
          • soygem 13 days ago
            Yes, what's your point?
            • HeatrayEnjoyer 9 days ago
              Repeating right wing rhetoric is corrosive and a red flag: at best the user is repeating it out of ignorance, but more likely they know very well what they are doing. We don't go pouring HCl on things we care about, nor do we accept the presence of those who do.
      • its_ethan 14 days ago
        The caption for the image gives a little more insight into "harmful" and one of the things it mentions is factuality - which is interesting, but doesn't reveal a whole lot unless they were to break it out by "type of harmful".
      • Aerbil313 13 days ago
        None of the use cases they presented in WWDC using Apple Intelligence was creative writing. There is one, that uses ChatGPT explicitly:

        > And with Compose in Writing Tools, you can create and illustrate original content from scratch.

        https://www.apple.com/apple-intelligence/

    • causal 13 days ago
      Refusing to answer any question would result in a perfect score on the first chart, since it says nothing about specificity.
    • tonynator 14 days ago
      So it's not going to be better than other models, but it will be more censored. I guess that might be a selling point for their customer base?
      • dghlsakjg 14 days ago
        iPhone share is ~59% of smartphones in the US.

        Their customer base is effectively all demographics.

        • tonynator 14 days ago
          Those who dislike censorship and enjoy hacking avoid iPhones for obvious reasons.
          • overstay8930 14 days ago
            People who understand cybersecurity hygiene use iPhones for obvious reasons
            • Hugsun 13 days ago
              The reasons are very not obvious to me. Could you elaborate?
              • overstay8930 13 days ago
                There is no other ecosystem that will provide easy E2EE in the cloud, that’s reason enough for the vast majority of users who are just sick of breaches because of pure sysadmin incompetence.

                There are plenty of examples in every Apple service or even their accessories (i.e. wireless keyboard encryption). FaceTime is hardened against even theoretical attacks that could probably only be performed by Five Eyes, like transcribing E2EE calls based on bandwidth use (FaceTime has built mitigations around this attack vector).

              • astrange 13 days ago
                iPhone security is very good, better than, say, your desktop even though you don't carry your desktop everywhere you go. (Some Android security is also very good, depends on the hardware though.)
            • Aerbil313 13 days ago
              People who understand cybersecurity who are not operating within a US-allied country use ... I don't know what to be honest. What to do in such a situation, where Apple is a US-based company obligated by law to comply with requests from three letter agencies and Android is a buggy mess which probably is backdoored by every major power?
              • astrange 13 days ago
                Your threat model shouldn't be about three letter agencies unless you're running a terrorist group. It should be about porn spammers or something.
                • Der_Einzige 13 days ago
                  I can assure you that if you’re doing AI research, glowies likely spy on you, but it’s the “good” kind of spying where you might get a job offer some day instead of the kind where they Guantanamo bay you…
                  • Aerbil313 3 days ago
                    Unless of course you are not based in a US-allied country, as stated very explicitly in my comment.

                    Ah the neverending American centricism on this site.

                • Aerbil313 13 days ago
                  Do you mean that US is the measure of justice and goodness in the world and anyone who opposes her is a terrorist? I don't mean to detract the conversation but this is really victim blaming. There are valid situations where the US DoD is a threat.

                  Your username reminded me of Julian Assange, for a second I mixed them up.

                  • astrange 13 days ago
                    No, I assume if you're working for a foreign military then you wouldn't be posting about it on here.
                    • Aerbil313 13 days ago
                      I assume if I was working for a foreign power I'd already know the answer to my original question... unless there's no good answer.
              • overstay8930 13 days ago
                Sorry to break it to you but you cannot hide from the US government if you use modern hardware or connect to the internet in any way.
                • Aerbil313 11 days ago
                  How likely is it that seemingly well-intentioned companies/open source projects like Mullvad, Wireguard, Qubes OS, Monero etc. are infiltrated by the US government?

                  Or are you talking about the presence of Intel ME and the like in modern hardware?

                  • overstay8930 9 days ago
                    I’m talking about the simple fact that Five Eyes have so many resources put together that you will never hide forever, unless you are willing to drastically change your life.

                    The good news is you are not important enough for them to care about you unless you are an Iranian general with nuclear access or some shit. Even these ransomware groups aren’t even on the radar of the people who are actually being targeted, I’m talking about stopping the yakuza trying to sell nukes to terrorists level of threat.

                    • Aerbil313 3 days ago
                      I appreciate you, and thanks for the "You're not that important" reminder for the average reader of this comment. However I must state I personally really am in need of an answer. Hypothetically, what if I am that important? What if I am not yet a target but think there's a decent chance that I will be targeted by nation-states, the most capable being US one day? You seem pretty knowledgeable. In the age of ML-assisted mass data scanning and ISP data collection & correlation, what would the lifestyle changes I must undergo to remain truly pseudonymous on the internet look like?
            • realusername 13 days ago
              Those cybersecurity experts are using GrapheneOS and certainly not an iPhone where they can't even check if all is going well...
              • jitl 13 days ago
                I’ve worked with a few people from NCC Group, Matasano, security staff at Airbnb and OpenAI, all carry and recommend iPhones for security footing. Depending on threat model, “lockdown mode” on iOS has a lot of what is useful in Grapheme like turning off built in connectivity services & disabling JIT and other code paths in the webview.
                • realusername 13 days ago
                  To each their own, I certainly would not recommend an iPhone in this scenario, especially even more at a top tech company.
            • binkethy 13 days ago
              It would seem that integrating a backdoor funnel to OpenAI is a bit of a security issue to those who care about such things.

              Yay, we can all train corporate models for free involuntarily.

              I guess it's time to check out Lineage OS and Postmarket OS. It was always a matter of time.

              • scarface_74 13 days ago
                No one is going to train an AI on random user generated data. The data is going to be horrible and it’s going to be full of PII that’s too risky to expose.
          • duxup 14 days ago
            I feel that way, have an iPhone.
            • ihumanable 13 days ago
              Yea, same.

              I have literally no desire to hack and fuck around with my personal cell phone, doing so would take away time from the hacking I actually want to do.

              • tonynator 12 days ago
                One option you can't hack on, the other you can but don't have to at all. Why choose the former? Are you so sure you'll never want to do anything on your phone that's unapproved by your corporate overlords?
          • gfourfour 14 days ago
            How does an iPhone contribute to censorship?
  • ksec 14 days ago
    I hope this means Apple will push the baseline of ALL Macs higher than 8GB of memory. While I wish we'd all get 16GB as the M4 baseline, Apple being Apple may only give us 12GB, and charge an extra $100 for the 16GB option.

    It will still be a lot better than 8GB though.

    • talldayo 14 days ago
      The Steam Deck ships with 16 gigs of quad-channel LPDDR5 and it costs $400. Apple knows exaaaactly what they're doing with this sort of pricing.

      Can't forget about that cozy 256GB SSD either. An AI computer will need more than that, right?

      • wraptile 14 days ago
        RAM is literally the cheapest primary component in a laptop, at a going rate of $1-4/GB. I'd say that shipping an 8GB base model in 2024 is clearly manipulation by Apple, i.e. planned obsolescence or a way to moat Apple software. Anyone who doesn't see this is just being delusional.

        Same way Apple and Samsung ship 128GB of storage when the production price difference between 128GB and 1TB is like $10 (on a $1,000 device). Samsung even got rid of the microSD slot. It's so blatant it's actually depressing.

        • glial 14 days ago
          > RAM is literally the cheapest primary component

          Is that still true for Apple's integrated memory? It might be - I just don't know.

          • talldayo 14 days ago
            For the LPDDR4 and LPDDR5 that go into the M1 and M2/M3 systems, yes. You might need to spend more money on memory controllers (since the M1 and up are 8-channel), but the physical memory component itself is highly available and relatively cheap. Same goes for SSD storage, nowadays.
          • p_l 13 days ago
            The memory used by Apple isn't anything magical or special - it's bog standard LPDDR5, essentially same as phone - and in a laptop it's way less limited by thermal and power constraints to add more (which is how you have the rather large possible set of options).

            While going for the top tier of memory sizes Apple offers does cost considerable amounts, making 16, or even 32GB standard is peanuts.

          • rfoo 13 days ago
            > integrated memory

            Yes. The cost of bonding memory to their chip is mostly the same for 8G / 16G / 32G / practically any number.

        • PaulRobinson 13 days ago
          On the number of devices they sell, an extra $64 of cost per device (taking your higher figure and assuming an extra 8GB), across Mac, iPad and iPhone, they'd be looking at a cost of ~$12.8bn a year. If they just did it for Mac, it's still in the region of $2bn/year.

          Sure they could pass that onto a mostly price-insensitive audience, but they like round numbers, and it's not the size of decision you take without making sure its necessary: that your customers are going to go elsewhere in either scenario of doing it or not doing it.
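As a sanity check on those figures (all numbers are the comment's assumptions, not Apple data):

```python
# Back-of-envelope: $64 extra BOM per device at ~$12.8bn/year implies
# roughly 200M devices/year across iPhone, iPad, and Mac.
extra_cost_per_device = 64      # the comment's high-end figure, USD
annual_total = 12.8e9           # USD/year, the comment's estimate
implied_units = annual_total / extra_cost_per_device
print(int(implied_units))       # 200_000_000
```

That implied unit volume is at least in the ballpark of Apple's combined annual device shipments, so the order of magnitude of the estimate holds.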

        • pjmlp 14 days ago
          Typing this from a Samsung with an SD slot; need to choose your models wisely.
      • zer0zzz 14 days ago
        Is the Steam Deck sold at cost? From what I know Apple has a rule that everything must be sold at 40% margins. That is prob the main reason.
        • makeitdouble 14 days ago
          > From what I know Apple has a rule that everything must be sold at 40% margins.

          As for all rules, it's a rule except when it's not. Off the top of my head, the Apple TV [0] had a 20% predicted margin, presumably because they wanted to actually sell them.

          Otherwise 40% margin is usually calculated against the BOM, which doesn't mean 40% of actual profit when the product is sold.

          In that respect we have no idea of the actual margin on a MacBook Air, for instance; it could be 10% when including their operating costs and marketing, or it could be 60% if they negotiated prices way below the estimated BOM.

          It's just to say: Apple ships 8GB because they want to; at the end of the day, nothing is stopping them from playing with their margin or the product price.

          [0] https://www.reuters.com/article/idUSN06424767/

        • talldayo 14 days ago
          As a consumer I really cannot be made to care why it's the case. This artificial price tiering is stupid and everyone has been calling it a scam for years. Apple clearly knows they're in the wrong, but continues because they know nobody can stop them.
          • andruby 13 days ago
            From a business perspective it's not _stupid_. It sucks for us customers, but it's "smart" from a business point of view.

            Until they get serious competition, I doubt they'll change their practices.

            And while I hate the overpriced memory upgrades, I still prefer paying extra, rather than Apple switching to a Ad-based business model like Google (and potentially OpenAI in the future)

            • talldayo 13 days ago
              > From a business perspective it's not _stupid_. It sucks for us customers, but it's "smart" from a business point of view.

              Well, I'm not a business. I appreciate smart consumer choices and I applaud any company that doesn't have to be forced into doing the right thing.

              > I still prefer paying extra, rather than Apple switching to a Ad-based business model like Google (and potentially OpenAI in the future)

              Oh you sweet summer child. You think Apple doesn't also have an ad-based business model on top of that?

              I switched to Linux after MacOS Mojave, and I do not miss any of this brouhaha one bit. It's almost rich hearing people talk about how few ads MacOS has, when it's constantly begging you to try or pay for Apple software services. Even Android isn't as ad-ridden as MacOS, the only victory Apple can claim is relative to Windows (which is a grim reflection of MacOS's eventual service-dominated fate).

              You should try out Linux, though. It's a culture shock, trying to get work done with no inbuilt advertisement whatsoever. I could never go back to Mac or Windows and be this productive.

              • scottyah 13 days ago
                Business and consumers both benefit from not using a cost-plus pricing model in a market heavily dependent on R&D. Sure, the profit margins are hard to agree upon since it's an infinite scale and they change heavily day to day, but do not trust anyone who thinks all product pricing should be based solely on cost of materials.
          • pjmlp 14 days ago
            Yes they can, buy something else.
            • talldayo 13 days ago
              I've been doing that for six fucking years and not a single thing has changed. The base memory and storage has not changed in that time. During that same amount of time:

              - Macs transitioned to Apple Silicon, got rid of dGPU memory

              - Baseline Macbook Air models increased in price by $100

              - AI became a realistic and usable technology

              - Gaming is slightly feasible with GPTK

              Of course we shouldn't be starting at 8 gigs of memory. This is highway robbery and the only thing you can say in defense is "buy something else then"

              • pjmlp 12 days ago
                When I know I will get robbed in the highway, I take another safer road instead, instead of playing lucky and complaining afterwards.
        • brandall10 13 days ago
          It's been speculated that base-config MacBooks essentially act as loss leaders for higher-end configs, so overall, sales across the line probably net out somewhere around that margin. The cost of the upgrades themselves can reach multiple times the actual market cost.
          • breuleux 13 days ago
            I feel like this is very obviously what they are doing: they have a small margin on the entry config where people are a lot more price-aware, and jack up the upgrade costs to get a fat margin on configs bought by orgs and power users who generally care less about how much it costs. Frankly, I'd do the same thing.
            • scottyah 13 days ago
              I always thought the products like the $700 computer wheels[1] and $1,000 monitor stands[2] are made exclusively for the companies who are trying to signal their wealth. For anyone who knows how much these accessories cost, walking into an office where all the workstations have these components puts out a message of wealth (much like standing desks, Herman Miller chairs, expensive art, super espresso machines, etc).

              I don't think it tarnishes their reputation as long as the products are both actually good quality and there are inexpensive alternatives. Why not get some easy cash while letting your customers display their vanity? It reminds me of the "I am rich" app[3].

              [1] https://www.apple.com/shop/product/MX572ZM/A/apple-mac-pro-w... [2] https://www.apple.com/shop/product/MWUG2LL/A/pro-stand [3] https://en.wikipedia.org/wiki/I_Am_Rich

              • talldayo 12 days ago
                > Why not get some easy cash while letting your customers display their vanity?

                Because computer wheels and monitor stands are high-demand utilities, not jewelry or makeup.

        • creshal 13 days ago
          40% margin on parts that cost tens of dollars isn't going to have a huge impact on the sticker price of devices costing hundreds to thousands.
    • torginus 13 days ago
      I remember hearing that Apple is researching running AI models straight from flash storage (which would make an immense amount of sense, imo). You could create special high-read-bandwidth flash chips (which would probably involve connecting a fast transceiver in parallel to the 3D flash stack).

      If you could do that, you could easily get hundreds of GB/s read speed out of simple TLC flash.

      Obviously this is still some way off, but I think it's a promising direction.
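To see why storage bandwidth is the crux: memory-bound token generation streams the full weight set once per token, so throughput is roughly bandwidth divided by model size. All numbers below are illustrative assumptions, not measured figures:

```python
# Hypothetical: a ~3B-parameter on-device model quantized to ~4 bits/weight,
# served over the kind of fast flash read path the comment imagines.
params = 3e9
bytes_per_param = 0.5                        # ~4-bit quantization
weight_bytes = params * bytes_per_param      # 1.5e9 bytes of weights
flash_bandwidth = 200e9                      # assumed 200 GB/s read path
tokens_per_second = flash_bandwidth / weight_bytes
print(round(tokens_per_second, 1))           # ~133.3 tokens/s
```

The same arithmetic explains why today's models must fit in RAM: ordinary NVMe flash at a few GB/s would cap the same model at only a handful of tokens per second.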

    • dgellow 13 days ago
      They have insane margins on RAM and storage; it would be really surprising to see them move away from their current strategy.
    • keyle 14 days ago
      It probably will change. Note that, so far, a 16GB Apple device has much better usability than the equivalent on Windows. This may sound biased, but the memory compression and foreground/background handling from macOS's tight integration with the hardware is really good. I've never felt like I couldn't do things on smaller hardware, except (large) LLMs.

      Also, when I compare with my co-workers, the memory pressure is a lot lower running the same software on macOS than on Windows. This might be due to the UI framework at play.

      But that said, I totally agree that Apple is doing daylight robbery with their additional RAM pricing, and the minimum on offer is laughable.
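How much memory compression buys depends entirely on page contents. A toy best-vs-worst-case illustration, with zlib standing in for the much faster compressors (WKdm/LZ4-class) that OS kernels actually use:

```python
import os
import zlib

page_size = 16384

# Best case: a page of repeated bytes compresses to almost nothing.
zero_page = b"\x00" * page_size
# Worst case: random data is essentially incompressible.
random_page = os.urandom(page_size)

small = len(zlib.compress(zero_page))
big = len(zlib.compress(random_page))
print(small, big)  # tiny vs. roughly the original size
```

Real workloads sit between the two extremes, which is why compressed memory can stretch an 8GB machine noticeably for some workloads and not at all for others.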

      • Aeolun 13 days ago
        Any apple device has much better usability than a windows machine, regardless of RAM.
      • wwtrv 13 days ago
        > This may sound biased,

        It certainly does, close to irrational even. IIRC memory compression is enabled by default on Windows as well.

        • dialup_sounds 13 days ago
          Biased and irrational are both things HN readers say to avoid using the word "subjective".
      • snemvalts 14 days ago
        The swapping is indeed faster as the SSD is on the SoC and so fast to access. To the point that an 4 year old 8gb M1 Air is enough for simpler development work, at least for me.
        • andreasmetsala 13 days ago
          SSD on chip might be a thing one day but I’m pretty sure only the RAM is on the same chip.
        • pests 13 days ago
          I would think any 4-year-old 8GB laptop would be enough for simpler development work.
    • vishnugupta 13 days ago
      Does it matter what the baseline memory is as long as they have 16GB M4 as an option?
      • manmal 13 days ago
        Some companies give their employees only base models.
        • joshstrange 13 days ago
          Some companies are stupid, news at 6.

          It's not Apple's (or any computer manufacturer's) responsibility to put out products that can solve every problem with the base model.

          I'll never understand why companies pay high salaries then give employees sub-optimal computers to do their job.

          • manmal 13 days ago
            I understand your frustration, but I don’t appreciate the sarcasm in your response.
  • ndgold 14 days ago
    Absolutely awesome amount of content in these two pages. This was not expected. It is appreciated. I can’t wait to use the server model on a Mac to spin up my own cloud optimized for the Apple stack.
    • solarkraft 14 days ago
      What makes you think you'll get that model?

      Edit: I see they're committing to publishing the OS images running on their inference servers (https://security.apple.com/blog/private-cloud-compute/). Would be cool if that allowed people to run their own.

      • msephton 14 days ago
        Apparently they will, in a VM, but it seems perhaps only for security researchers?
      • whazor 13 days ago
        It would be much cooler if enterprises can swap to their custom models in their own clouds.
      • rekoil 13 days ago
        > Would be cool if that allowed people to run their own.

        Oh my god that would be absolutely amazing!

    • titaniumtown 14 days ago
      Did it mention being able to spin up the server model locally? I must've missed that part in the article.
      • theshrike79 13 days ago
        They didn't but I'll bet it's coming in the next 5 years.

        Most likely integrated with an Apple TV or a similar thing. Enough local LLM processing power to handle a family's data all in-house.

  • vzaliva 14 days ago
    I love that they use machinelearning.apple.com not ai.apple.com
    • tmpz22 14 days ago
      For the majority of the keynote they explicitly avoided the word AI, instead substituting the word Intelligence, then Apple Intelligence, and then towards the end they said AI and ChatGPT once or twice.

      I think they saw the response to all the AI shoveling and Microsoft Recall and executed a fantastic strategy to reposition themselves in industry discussions. I still have tons of reservations about privacy and what this will all look like in a few years, but you really have to take your hat off to them. WWDC has been awesome and it makes me excited to develop for their platform in a way I haven't felt in a very, very, long time.

      • worstspotgain 14 days ago
        > executed a fantastic strategy to reposition themselves in industry discussions

        Just the usual marketing angle, IMO. It's not TV, it's HBO.

        No one is reluctant to use the word smartphone to include iPhones. I don't think anyone is going to use the Apple Intelligence moniker except in the same cases where they'd say iCloud instead of cloud services.

        It's also a little clunky. Maybe they could have gone with... xI? Too close to the Chinese Xi. iAI? Sounds like the Spanish "ay ay ay." Not an easy one I think. The number of person-hours spent on this must have been something.

        • tmpz22 14 days ago
          I don't think they actually expect "Apple Intelligence" to enter popular vernacular. I think it was more to drive home the distinction between what Apple is doing and what everybody else is doing.
          • andsoitis 14 days ago
            > distinction between what Apple is doing and what everybody else is doing

            it is artificial intelligence, applied intelligently.

            In Apple's case: "personalised AI system"

        • swyx 13 days ago
          correct. last year instead of VR they went with Spatial Intelligence
          • hbn 13 days ago
            Vision Pro isn't really designed to be a VR device first and foremost. The primary use case is the passthrough mode, whereas VR usually describes software that puts you in a different place.
          • spogbiper 13 days ago
            "spatial computing"
      • seydor 14 days ago
        > makes me excited to develop for their platform in a way I haven't felt in a very, very, long time

        AI will ultimately do all the 'development' and will replace all apps. The integrations are going to be a temporary measure. The only apps that will survive are the ones that control things Apple cannot (e.g. how Uber controls its fleet).

        • Hugsun 13 days ago
          Perhaps. It will be exciting to see if/how that happens. It does seem relatively far off still. At least some years.
      • dgellow 13 days ago
        What excites you specifically as a developer?
        • tmpz22 13 days ago
          For the last 10 years I've been a full-stack / devops developer. I think that ecosystem is in a very bad place right now and has failed to efficiently modernize. The tools that people are adopting in an attempt to mitigate this, such as NextJS, are still grounded in the complexities of the past (Node/React/Express/Serverless) and are not good enough.

          These troubles metastasize to subpar SaaS products, low efficiency, bad company cultures, layoffs, bad hiring practices, management instead of leadership, salary stagnation, dark patterns, you name it.

          So to see Apple with a laser focus on tooling, quality of life, privacy, in this WWDC while everyone else runs around like a headless chicken suggests to me that their platform might be the more lucrative path to follow. I think it'll be faster, better, and more enjoyable, to develop consumer and business applications for fun and profit.

          Don't get me wrong, it's far from a silver bullet. Many Apple platform APIs like CloudKit and server-side Swift have a LONG way to go. But I'm seeing the right steps to address these issues, and at the end of the day it feels a whole lot better than what I've been doing in the past and produces better end products IMO.

    • andbberger 14 days ago
      glad someone sane is in charge in cupertino
      • okdood64 14 days ago
        Apple Intelligence.
        • bfung 14 days ago
          Waiting for aiPhone in a few iterations </troll>
    • xwolfi 14 days ago
      Yeah they probably were still working on the last buzzword
  • w10-1 13 days ago
    I think we as tech people lost the forest for the trees.

    Apple (unwisely I think) is allowing UI's to just generate responses.

    The wow-neat! experience will wear off quickly. Then even at a miss rate of 0.1%, there will be thousands - millions - of cringe-worthy examples that sully the Apple brand for quality.

    It will be impossible to create a quality filter good enough, and there will be no way to back these features out of the OS.

    For targeted use-cases (like coding and editing), this will be useful. But these features may be what finally makes contempt for Apple go mainstream, and that would be a shame.

    Internally at Apple, they likely discussed how much to limit the rollout and control usage. I think they decided to bake it into API's more to maintain developer mindshare than to keep users happy.

    The one feature that could flip that script is interacting with Siri/AI in order to get things done. The frustration with knowing what you want but not how or whether it can be done drives a lot of tech angst. If this only meant ordinary people could use their existing phones to their full extent, it would be a huge win.

    • s3p 13 days ago
      "that sully the Apple brand for quality."

      OK. No one remembers Apple Maps, the CSAM scanning, the crush ad, etc? Companies do embarrassing stuff all the time. At least they're trying.

    • scottyah 13 days ago
      I agree, they're joining in on the slippery slope auto-correct, home assistants, and Self Driving.

      I think it's been a while since consumers have trusted or relied on consumer tech. Browsing the web from a phone can only be described as adversarial. Scrolling down a top Google result recipe site is almost impossible. Texts don't always send, and there are so many cloud backup offerings that it's hard to tell if your photos are actually being saved.

      The current political and media scene is often described as post-truth, where accuracy isn't the biggest driving factor. It seems that computation is headed that way as well.

  • ra7 14 days ago
    > Our foundation models are trained on Apple's AXLearn framework, an open-source project we released in 2023. It builds on top of JAX and XLA, and allows us to train the models with high efficiency and scalability on various training hardware and cloud platforms, including TPUs and both cloud and on-premise GPUs.

    Interesting that they’re using TPUs for training, in addition to GPUs. Is it both a technical decision (JAX and XLA) and a hedge against Nvidia?

    • m-s-y 14 days ago
      They’d be silly not to hedge. Anyone, in fact, would be silly not to hedge. On pretty much everything.
    • anvuong 14 days ago
      Jax was built with TPUs in mind, so it's not surprising that they use TPUs
    • gokuldas011011 14 days ago
      "Use the best tool available"
      • flakiness 13 days ago
        They hired people nearby. Conveniently there is a small town called Mountain View.
  • dingclancy 14 days ago
    It’s interesting that a sub-ChatGPT 3.5 class model can do a lot of things on-device if you marry it with a good platform and feed it personal context. GPT-4o, living on the browser, is not as compelling as a product compared to what Apple Intelligence can do on the iPhone with a less capable model.
    • aixpert 13 days ago
      their 3 billion parameter model can't do shit, only some basic grammar-check-style rewriting and maybe summarization
  • miven 14 days ago
    > For on-device inference, we use low-bit palletization, a critical optimization technique that achieves the necessary memory, power, and performance requirements.

    Did they go over the entire text with a thesaurus? I've never seen "palletization" be used as a viable synonym for "quantization" before, and I've read quite a few papers on LLM quantization

    • bagrow 14 days ago
      • miven 14 days ago
        Huh, generally whenever I saw the lookup-table approach in the literature it was also referred to as quantization; I guess they wanted to disambiguate the two methods.

        Though I'm not sure how warranted that really is; in both cases it's pretty much the same idea of reducing precision, just with different implementations.

        Edit: they even refer to it as LUT quantization on another page: https://apple.github.io/coremltools/docs-guides/source/quant...

        • astrange 13 days ago
          Just "quantization" is poor wording for that. Quantization means dropping the low bits.

          Sounds like it was confused with "vector quantization" which does involve lookup tables (codebooks). But "palletization" is fine too.
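          A toy contrast between the two, with a k-means palette standing in for the lookup table (illustrative only, not Apple's or coremltools' actual scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)  # stand-in "weights"
levels = 16  # 4 bits per weight either way

# Uniform quantization: snap each value to a fixed, evenly spaced grid.
lo, hi = float(w.min()), float(w.max())
grid = np.linspace(lo, hi, levels)
uniform = grid[np.abs(w[:, None] - grid[None, :]).argmin(axis=1)]

# Palettization: learn a 16-entry lookup table (1-D k-means), then
# store only a 4-bit index per weight plus the tiny table itself.
palette = np.quantile(w, np.linspace(0, 1, levels))
for _ in range(25):
    idx = np.abs(w[:, None] - palette[None, :]).argmin(axis=1)
    palette = np.array([w[idx == k].mean() if (idx == k).any() else palette[k]
                        for k in range(levels)])
idx = np.abs(w[:, None] - palette[None, :]).argmin(axis=1)
palettized = palette[idx]

# Same bit budget, but the palette adapts to the weight distribution
# instead of spending levels uniformly across the full range.
print(np.mean((w - uniform) ** 2), np.mean((w - palettized) ** 2))
```

          In practice there's more machinery (per-channel or per-group tables, etc.), but the storage story is the same: small indices plus a tiny lookup table.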

      • fudged71 13 days ago
        404
      • elcritch 14 days ago
        Huh, it’s PNG for AI weights.
    • cgearhart 13 days ago
      I also found it confusing the first time I saw it. I believe it is sometimes used because the techniques for DL are very similar (in some cases identical) to algorithms that were developed for color palette quantization (in some places shortened to "palettization"). [1] At this point my understanding is that this term is used to be more specific about the type of quantization being performed.

      https://en.wikipedia.org/wiki/Color_quantization

    • dialup_sounds 13 days ago
      I enjoy the plausible irony that they used the very same model they're describing to proofread the article, and it didn't catch palettize (like a color palette) vs. palletize (like a shipping pallet).
  • scosman 14 days ago
    “We utilize adapters, small neural network modules that can be plugged into various layers of the pre-trained model, to fine-tune our models for specific tasks.”

    This is huuuuge. I don’t see announcement of 3rd party training support yet, but I imagine/hope it’s planned.

    One of the hard things about local+private ML is that I don't want every app I download to need GBs of weights, and I don't want a delay when I open a new app while all the memory swapping happens. As an app developer I want the best model that runs on each HW model, not one lowest-common-denominator model for the slowest HW I support. Apple has the chance to make this smooth: great models tuned to each chip, adapters for each use case, new use cases only needing a few MB of weights (for a set of current base models), and base models that can get better over time (new HW and improved models). Basically app thinning for models.

    Even if the base models aren’t SOTA to start, the developer experience is great and they can iterate.

    Server side is so much easier, but look forward to local+private taking over for a lot of use cases.

    • dimtion 14 days ago
      With huge blobs of binary model weights, dynamic linking is cool again.
      • pjmlp 14 days ago
        Dynamic linking has always been cool for writing plugins.

        It is kind of ironic that languages praised so much for going back to early linking models have to resort to much heavier OS IPC for similar capabilities.

        • rfoo 13 days ago
          Which languages?

          IIUC Go and Rust resort to OS IPC-based plugin systems mainly because they refused to have a stable ABI.

          On the other hand, at $DAYJOB we have a query engine written in C++ (which itself uses mostly static linking [1]) loading mostly static linked UDFs and ... it works.

          [1] Without glibc, but with libstdc++ / libgcc etc.

          • skohan 13 days ago
            Doesn’t Rust’s static linking also have to do with its aggressive monomorphization? IIRC every concrete instantiation of a generic type gets its own compiled code, so it would basically be impossible for a dynamic library to provide this, since it wouldn’t know how it would be used, at least not without some major limitations or performance tradeoffs.
          • pjmlp 13 days ago
            Well if it loads code dynamically, it is no longer static linking.

            Also, it isn't as if there is a stable ABI for C and C++ either, unless everything is compiled with the same compiler, or you use Windows-style dynamic libraries, or something like COM to work around the ABI limitations.

      • inickt 14 days ago
        Which Apple has put a pretty large effort into improving on iOS over the last few years.
    • lossolo 14 days ago
      It's LoRA; most of the things you saw in the Apple Intelligence on-device presentation are basically different LoRAs.
      • scosman 13 days ago
        The article says it’s lora a bunch of times. That’s clear.

        My comment above is about dev experience, memory swapping, tuning base models to each HW release, and app size.

    • eightysixfour 14 days ago
      This is how Google is doing it too.
      • gokuldas011011 14 days ago
          Indeed. Google said LoRA and Apple said adapter plugging. I wonder where the difference comes from; Apple's dev conference is for consumers and Google's dev conference is for developers.
        • eightysixfour 13 days ago
          Yeah, just Apple doing a better job branding. LoRA is an adapter.
      • scosman 13 days ago
        Oh missed that!

        But kinda as expected: it only works on 2 Android phones (Pixel 8 Pro, S24).

        Pretty typical: Apple isn’t first, but also typically will scale faster with HW+platform integration.

        • Deathmax 13 days ago
          On Apple’s side, Apple Intelligence will only be enabled on A17 Pro and M-series chips, so only the iPhone 15 Pro and Pro Max will be supported in terms of phones.
          • scosman 13 days ago
            2 phones, ~4 tablets, ~12 PCs.

            Looking at sales, it looks like about 10x the phone volume of the S24 (and the Pixel 8 doesn’t register on the charts).

        • rfoo 13 days ago
          > will scale faster

          * Only in USA, both intentionally and not.

    • danielmarkbruce 14 days ago
      this is pretty stock standard lora.
      • idiotsecant 14 days ago
        [flagged]
        • dang 14 days ago
          • Sabinus 14 days ago
            One day they're going to train a moderation bot on your account, and it's going to be amazing.
            • bowsamic 14 days ago
              I’m sure you will continue to think so until he permanently restricts your account for getting downvoted
              • Sabinus 5 days ago
                Quite the opposite. This 9 year old account just reached the karma level required to be able to downvote. I'm so chuffed.
          • idiotsecant 13 days ago
            You caught me, I was weak in the face of ridiculousness. Keep on keeping on, you do a great job keeping this place halfway civil.
            • danielmarkbruce 12 days ago
              What was ridiculous? Are they not doing stock standard lora?
              • idiotsecant 12 days ago
                I don't think you were ridiculous. In fact, I agree with you. I was sarcastically pointing out that whenever apple adopts some well understood technology the fanboys always gush over how innovative it is now that apple is doing it.
    • seydor 14 days ago
      Local models are also extremely energy-consuming. I don't see local AI working for long, because large models are going to get incomparably smarter and eventually reach general intelligence.
  • buildbot 14 days ago
    3.5 bits per weight with no quality loss is state of the art - that's an awesome optimization result (a mix of 2-bit and 4-bit weights).
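    The arithmetic behind the average, assuming a hypothetical split (Apple only states what the mix averages out to, not the proportions):

```python
# Hypothetical 75/25 split of 4-bit and 2-bit weight blocks;
# Apple only says the mix *averages* 3.5 bits per weight.
frac_4bit = 0.75
frac_2bit = 0.25
avg_bits = 4 * frac_4bit + 2 * frac_2bit
print(avg_bits)  # 3.5
```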
    • Hugsun 13 days ago
      I would like to see their method compared quantitatively to the best llama.cpp methods. IQ3_S has a similar bpw and pretty high quality.

      I wonder if they didn't stretch the truth using the phrase "without loss in accuracy".

  • htrp 14 days ago
    > Our foundation models are fine-tuned for users’ everyday activities, and can dynamically specialize themselves on-the-fly for the task at hand. We utilize adapters, small neural network modules that can be plugged into various layers of the pre-trained model, to fine-tune our models for specific tasks. For our models we adapt the attention matrices, the attention projection matrix, and the fully connected layers in the point-wise feedforward networks for a suitable set of the decoding layers of the transformer architecture.

    >We represent the values of the adapter parameters using 16 bits, and for the ~3 billion parameter on-device model, the parameters for a rank 16 adapter typically require 10s of megabytes. The adapter models can be dynamically loaded, temporarily cached in memory, and swapped — giving our foundation model the ability to specialize itself on the fly for the task at hand while efficiently managing memory and guaranteeing the operating system's responsiveness.

    This kind of sounds like Loras......
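    Back-of-envelope on the "10s of megabytes" figure, with guessed dimensions (Apple hasn't published the exact architecture, so every number below except the rank and precision is an assumption):

```python
# All dimensions here are assumptions for illustration.
hidden = 4096        # guessed model width for a ~3B model
layers = 26          # guessed number of adapted decoder layers
rank = 16            # per the article
bytes_per_param = 2  # 16-bit adapter parameters, per the article

# A LoRA pair for a d_out x d_in matrix adds rank * (d_in + d_out)
# parameters; treat the six adapted matrices per layer (q, k, v,
# attention projection, and two FFN projections) as square for a
# rough lower bound.
params_per_matrix = rank * (hidden + hidden)
matrices_per_layer = 6

total = layers * matrices_per_layer * params_per_matrix
print(total * bytes_per_param / 1e6, "MB")  # ~40 MB: "10s of megabytes"
```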

    • cube2222 14 days ago
      The article explicitly states they’re LoRAs.
    • karmasimida 14 days ago
      I think it is just LoRA; you can call the LoRA weights adapters.
    • alephxyz 14 days ago
      The A in LoRA stands for adapters
      • GaggiX 14 days ago
        LoRA stands for "Low Rank Adaptation" btw.
  • epipolar 14 days ago
    It would be interesting to see how these models impact battery life. I’ve tried a few local LLMs on my iPhone 15 Pro via the PrivateLLM app, and the battery charge plummets after just a few minutes of usage.
    • bradly 14 days ago
      During my time at Apple the bigger issue with personalized, on-device models was the file size. At the time, each model was a significant amount of data to push to a device, and with lots of teams wanting an on-device model and the desire to update them regularly, it was definitely a big discussion.
      • hmottestad 14 days ago
          They’ve gone with a single 3B model and several “adapters”, one for each use case. One adapter is good at summarising while another is good at generating message replies.
        • onesociety2022 14 days ago
          AI noob here. Is every single model in iOS really just a thin adapter on top of one base model? Can everything they announced today really be built on top of one base LLM model with a specific type of architecture? What about image generation? What about text-to-speech? If they’re obviously different models, they can’t load them all at once into RAM. If they have to load from storage every time an app is opened, how will they do this fast enough to maintain low latency?
          • wmf 14 days ago
            The main LLM is only 1.5 GB so it should only take a half second to load. Or they could keep it loaded. The other models may be even smaller.
            • glial 14 days ago
              Maybe they use the "Siri is waking up and the screen wobbles" animation time for loading the model. That would be clever.
              • mholm 13 days ago
                They'll have plenty of time to load the model; it still needs to wait for the user to actually voice/type their request. Invoking Siri happens well before the request is ready.
    • woadwarrior01 13 days ago
      I'm the author of Private LLM. Looks like it's just become possible[1] to run quantized LLM inference using the ANE with iOS 18. I think there are some major efficiency gains on the table now.

      [1]: https://github.com/apple/coremltools/pull/2232

    • urbandw311er 14 days ago
      Likely they’ll be able to take advantage of the hardware neural engine and be far more power efficient. Apple has demonstrated this is something it takes pretty seriously.
      • brcmthrowaway 14 days ago
        So iOS LLM Apps dont use the neural engine? Lol
        • woadwarrior01 14 days ago
          None of the current iOS and macOS LLM Apps use the Neural Engine. They use the CPU and the GPU.

          nb: I'm the author of a fairly popular app in that category.

          • jjtheblunt 14 days ago
            How would you know none of the apple apps use the neural engine? Is the key in the statement “LLM”?
          • l33t7332273 14 days ago
            Why do they not?
            • wpm 14 days ago
              AFAIK there is no general purpose, "do this on the ANE" API. You have to be using specific higher level APIs like CoreML or VisionKit in order for it to end up on the ANE.
              • bt1a 13 days ago
                This, plus Metal acceleration works quite well. 7-8B parameter models quantized to 3 bpw or so run with good tok/s on my iPhone 15 Pro.
                • sanxiyn 13 days ago
                  It works quite well as long as you don't care about battery.
        • hmottestad 14 days ago
          If they use llama.cpp they probably run on the GPU. Apple hasn’t published much about their Neural Engine, so you pretty much have to use it through CoreML. I assume they have some aces up their sleeves for running LLMs efficiently that they haven’t told anyone about yet.
        • renewiltord 14 days ago
          Probably not. The CoreML LLM stuff only works on Macs AFAIK. Probably the phone app uses the GPU.
    • jamesy0ung 14 days ago
      It looks like PrivateLLM uses the GPU for inferencing, from what I can tell, Apple is using the ANE on the A17 Pro. For M1 and above, I'd presume they are using the GPU since the ANE in M series isn't great.
  • Jayakumark 13 days ago
    The model is not open source. We are now also stuck with walled-garden models deeply integrated at the OS or browser level:

    1. Apple's models are not open, so we cannot run them on Android, nor on desktop Chrome or Edge.

    2. Microsoft's Phi-3 can run inside iOS, but on Android only as an app, not at the OS level, and with no supported APIs. It can run in desktop Edge but not Chrome.

    3. Google's Gemini Nano can only run inside Android and desktop Chrome, not Edge, and not on iOS, since the weights are not open.

    So we cannot get a similar answer from an LLM across ecosystems, since each one uses a different model.

  • orbital-decay 13 days ago
    > 2. Represent our users: We build deeply personal products with the goal of representing users around the globe authentically. We work continuously to avoid perpetuating stereotypes and systemic biases across our AI tools and models.

    How do they represent users around the globe authentically while being located in Cupertino, CA? (more of a rhetorical question really)

    • esskay 13 days ago
      You mean the person on the other side of the planet doesn't know about Philz Coffee down on Stevens Creek Blvd, or that there's a cool park a 2 minute walk away from Apple HQ?!

      It does baffle me how California centric they are with many of their announcements, and even some features.

      • rekoil 13 days ago
        The Maps stuff always gets me. Yeah sure it looks pretty, but almost none of what makes it a usable product is available to me in Sweden.
    • boxed 13 days ago
      I wish I could have one keyboard on my iPhone and could type both Swedish and English with it. These are the basics they can't get right, and I don't see why. They clearly have bilingual people working over there, why is this so bad?
      • dgellow 13 days ago
        I share your pain, switching between English, German, and French is really, really frustrating…
      • jacooper 13 days ago
        Because then you will be typing wrong/s
  • anshumankmr 13 days ago
    As someone who has been dabbling with prompt engineering and is now fine-tuning some models (working on a use case where we may have to fine-tune one of Mistral's 7B Instruct models), I want to know what kind of skillset I'd need to join this team (or a similar team building these sorts of things).
  • superkuh 14 days ago
    The "Human Evaluation of Output Harmfulness" section confirms what I've perceived: Mistral-7B is the best of the small models in terms of minimizing false positive refusals. With the refusal vector abliteration stuff this is less of an issue but a good base is still important.
  • GaggiX 14 days ago
    It would be cool to understand when the system will use one or the other (the ~3 billion on-device model or the bigger one on Apple servers).
    • swatcoder 14 days ago
      Conceivably, they don't have precise answers for that yet, and won't until after they see what real-world usage looks like.

      They built out a system that's ready to scale to deliver features that may not work on available hardware, but they're also incentivized to minimize actual reliance on that cloud stuff as it incurs per-use costs that local runs don't.

      • GaggiX 14 days ago
        Yeah this is probably right. If it works well enough during real-world usage it will be using the on-device model, if not then there is the bigger one on the servers. There is also GPT-4o, so they have 3 different models to use depending on the task.
    • aixpert 13 days ago
      If you have ever used a 3 billion or 7 billion parameter model, you know they are really bad at text generation, so this will be done in the cloud.
  • Isuckatcode 14 days ago
    >By fine-tuning only the adapter layers, the original parameters of the base pre-trained model remain unchanged, preserving the general knowledge of the model while tailoring the adapter layers to support specific tasks.

    From an ML noob's (my) understanding of this, does this mean that the adapter matrices are regularly fine-tuned instead of the main model? Is this similar to how ChatGPT's new memory feature works[1]?

    [1] https://help.openai.com/en/articles/8590148-memory-faq

    • ww520 14 days ago
      The base model is frozen. The smaller adaptor matrices are fine-tuned with new data. During inference, the weights from the adaptor matrices "shadow" the weights in the base model. Since the adaptor matrices are much smaller, it's quite efficient to fine-tune them.

      The advantage of the adaptor matrices is that you can have different sets of them for different tasks, all based off the same base model.
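      A minimal NumPy sketch of the idea (toy dimensions, illustrative only, not Apple's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 16  # model width and adapter rank (toy values)

# Frozen base weight: never touched during adapter fine-tuning.
W = rng.standard_normal((d, d))

# Trainable low-rank factors. B starts at zero, so before any
# fine-tuning the adapter contributes nothing.
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))

def forward(x):
    # Base output plus the low-rank delta that "shadows" W.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d)
assert np.allclose(forward(x), W @ x)  # a no-op until B is trained

# Per-task storage: two skinny factors instead of a full d x d matrix.
print(W.size, A.size + B.size)  # 4096 vs 2048; the gap widens as d grows
```

      Swapping tasks then just means loading a different (A, B) pair over the same frozen base, which is why dozens of adapters cost megabytes rather than the gigabytes that dozens of full models would.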

    • MacsHeadroom 14 days ago
      ChatGPT memory is just a database with everything you told it to remember.

      Low Rank Adaptors (LoRA) are a way of changing the function of a model by only having to load a delta for a tiny percentage of the weights rather than all the weights for an entirely new model.

      No fine-tuning is going to happen on Apple computers or phones at any point. They are just swapping out Apple's pre-made LoRAs so that they can store one LLM and dozens of LoRAs in a fraction of the space it would take to store dozens of LLMs.

  • Blackstrat 13 days ago
    I haven't seen anything indicating whether these features can be disabled. I'm not interested in adding a further invasion of privacy to my phone. I don't want some elaborate parlor trick helping me write. I've spent some time with ChatGPT and while it was somewhat novel, I wasn't overly impressed. Much of it was rudimentary and often wrong. And I wasn't overly impressed with some of the code that it generated. Reliance on such tools reminds me of an Asimov SF tale.
    • hbn 13 days ago
      I'd certainly expect you to be able to at least disable the features that make outgoing network requests.

      As for the stuff that's local to your device, how is your privacy being invaded? It's your device's OS looking at data on the device it's running on, as it's always done.

  • rvaish 14 days ago
    Easel on iMessage has had this experience plus more for a while, including multiplayer, where you can have two people in one scene together with photorealistic imagery: https://apps.apple.com/us/app/easel-ai/id6448734086
  • PHGamer 14 days ago
    It would have been nice if they let you build your own Apple AI system (I refuse to redefine Apple's AI as just AI :-p) using clusters of Mac minis and Mac Pros. But of course they still want that data for themselves, like Google does. It's secure against everyone but Apple, and probably the NSA, lol.
    • IOT_Apprentice 14 days ago
      What is stopping you from doing that? Nothing. Start cooking
  • mFixman 13 days ago
    Has anybody here improved their day-to-day workflow with any kind of "implicit" generative AI rather than explicitly talking to an LLM?

    So far all attempts seem to be building a universal Clippy. In my experience, all kinds of forced autocomplete and other suggestions have been worse than useless.

    • mavamaarten 13 days ago
      GitHub Copilot works well in my experience. It makes bad suggestions at times, but also really spot-on ones.

      Other than that, AI for me is meme/image generation and a semi-useful chatbot.

  • Hugsun 13 days ago
    The benchmarks are very interesting. Unfortunately, the writing benchmarks seem to be poorly constructed. It looks like there are tasks no model can achieve and others that almost all models pass, i.e. every model gets around 9.0.
  • TheRoque 14 days ago
    Why isn't there a comparison with the Llama3 8b in the "benchmarks" ?
    • axoltl 14 days ago
      The Llama 3 license says:

      "If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights."

      IANAL but my read of this is that Apple's not allowed to use Llama 3 at all, for any purposes, including comparisons.

      • anvuong 14 days ago
        They can just run the same tests and cite results from other websites. That has nothing to do with Meta; no company can force you not to talk about them.
        • axoltl 14 days ago
          The tests they ran were very different from what's usually run, mostly involving perception of usefulness to humans. I don't see what website they would've cited from?
    • teonimesic2 14 days ago
      I believe it's because Llama 3 8B beats it, which would make it look bad. The Phi-3-Mini version they used is the 4k-context one, which is 3.8B; Llama 3 8B would be more comparable to Phi-3-Small (7B), which is also considerably better than Phi-3-Mini. Likely both Phi-3-Small and Llama 3 8B scored too well against Apple's model to be included, since they did add other 7B models for comparison, but only ones they beat.
      • mixtureoftakes 14 days ago
        Llama 3 definitely beats it, but 99% of users won't care, which is actually a good thing... Apple totally wins the AI market not by being SOTA but by the sheer number of devices that will be running their models; we're talking billions.
        • Hugsun 13 days ago
          How is any of this good? Apple serves its captive users inferior models without giving them a choice. I don't see how that is winning the AI market either.
          • dwaite 13 days ago
            You may be able to make a case that Apple's model has fewer parameters or performs worse than other models on standardized tests.

            That's far from being "inferior" when you are talking about tuning for specific tasks, let alone when taking into account real-world constraints - like running as a local, always-on task on resource-constrained mobile devices.

            Running third-party models means requiring them to accomplish the same tasks. Since the adapters are LoRA-based, they are not adaptable to a different base model. This pushes a lot of specialized requirements onto anyone hoping to replace the on-device portion.

            This is different from say externally hosted models such as their announced ChatGPT integration. They announced an intention to integrate with other providers, but it is not clear yet how that is intended to work (none of this stuff is released yet even in alpha form).
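            A toy sketch of that coupling (made-up dimensions, nothing Apple-specific):

```python
import numpy as np

# LoRA factorizes a layer's weight update as delta_W = B @ A with rank r
# much smaller than the layer width. Dimensions here are hypothetical.
d_model, rank = 512, 8
rng = np.random.default_rng(0)

W_base = rng.standard_normal((d_model, d_model)) * 0.02  # frozen base-model weight
A = rng.standard_normal((rank, d_model)) * 0.01          # trained low-rank factor
B = np.zeros((d_model, rank))                            # zero-init: adapter starts as a no-op

delta_W = B @ A               # the adapter's whole contribution
W_adapted = W_base + delta_W  # only meaningful added to *this* base weight

# The factors match this layer's exact shape and were trained relative to
# W_base's values, so they can't be reused with a different base model.
```

            Swapping in another base model would invalidate every shipped adapter, which is the lock-in being described.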

    • woadwarrior01 13 days ago
      Because their model won't look good in comparison. Also see this part of the footnote: "The open-source and Apple models are evaluated in bfloat16 precision." The end user's on-device experience will be with a quantized model and not the bfloat16 model.
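      For intuition, a toy 4-bit quantization (a generic scheme, not Apple's actual quantizer) shows the rounding error the shipped model carries that a bfloat16 evaluation doesn't:

```python
import numpy as np

# Symmetric 4-bit quantization of one layer's weights: map floats onto the
# integer range [-8, 7], then dequantize. The reconstruction error is what
# separates the on-device model from the bfloat16 weights in the benchmarks.
rng = np.random.default_rng(1)
w = rng.standard_normal(1024).astype(np.float32)   # one layer's weights (toy)

scale = float(np.abs(w).max()) / 7
w_q = np.clip(np.round(w / scale), -8, 7)          # integer codes stored on device
w_deq = (w_q * scale).astype(np.float32)           # values actually used at inference

mean_err = float(np.abs(w - w_deq).mean())         # nonzero rounding error
```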
    • leodriesch 14 days ago
      I think it’s fair to leave it out of the on-device model comparison. 3B is much smaller than 8B; it is obviously not going to be as good as Llama 3 unless they made groundbreaking advancements with the technology.
    • hmottestad 14 days ago
      Maybe it’s too new for them to have had time to include it in their studies?
      • TheRoque 14 days ago
        Phi-3-Mini, which is in the benchmarks, was released after Llama3 8b
        • hmottestad 14 days ago
          Llama 3 8B is really, really good. Maybe it makes Apple’s models look bad? Or it could be a licensing thing where Apple can’t use Llama 3 at all, even just for benchmarking and comparison.

          The license for the Llama models was basically designed to stop Apple, Microsoft and Google from using it.

  • wslh 14 days ago
    Is it just me, or is Apple really moving fast? I don't think it's easy for a company of this size to put forward a concise vision for AI in these short and crazy AI times.

    BTW, not an Apple fan but an Apple user.

    • MacsHeadroom 14 days ago
      Google had similar AI functionality on Pixels last year, and Microsoft had something like six AI Copilot products before that. So I would not say Apple is moving fast.

      Most people expected this update 6 months ago.

      • azinman2 14 days ago
        Take a look at the yearly OS cadence. iOS 17 only came out a few months before your 6 month expectation.
      • doctor_eval 14 days ago
        Since when does Apple make major software update announcements at Christmas?
    • wmf 14 days ago
      People thought Apple was behind but they were just working quietly.
    • majestik 14 days ago
      ChatGPT came out in November 2022, and it took Apple 18 months to announce that Siri will integrate with it.

      Is that moving fast? Maybe, compared to what, Oracle?

  • hehdhdjehehegwv 14 days ago
    The WWDC show got on my nerves with the corpspeak, but this is pretty cool stuff.

    I’ve been trying to make smaller, more efficient models in my own work. I hope Apple publishes some actual papers.

    • gepardi 14 days ago
      Yeah it was close to “infomercial” levels of cheesy.
  • revscat 14 days ago
    > With this set of optimizations, on iPhone 15 Pro we are able to reach time-to-first-token latency of about 0.6 millisecond per prompt token, and a generation rate of 30 tokens per second. Notably, this performance is attained before employing token speculation techniques, from which we see further enhancement on the token generation rate.

    This seems impressive. Is it, really? I don’t know enough about the subject to judge.

    • bastawhiz 14 days ago
      For a phone running locally, that's pretty fast. The bigger question is how good the output is. Fast garbage isn't useful, so we'll have to wait to see what it actually ends up looking like outside of demos.
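      For scale, plugging the quoted figures into a quick back-of-envelope calculation (prompt and reply sizes below are made up):

```python
# Apple quotes ~0.6 ms per prompt token to first token and ~30 tokens/s
# of generation on iPhone 15 Pro.
prompt_tokens = 1000                    # e.g. a long email plus instructions
ttft_s = prompt_tokens * 0.6e-3         # ~0.6 s before the first token appears

reply_tokens = 150                      # a short summary
gen_s = reply_tokens / 30               # ~5 s to stream out the reply
total_s = ttft_s + gen_s
```

      That is comfortably interactive for summarization-style features, so output quality really is the open question.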
  • shreezus 14 days ago
    This is great; however, Apple needs to be explicit about what is, and what isn't, relayed to third-party services, and provide the ability to opt out if desired. It's one thing to run inference on-device, and another to send your data through OpenAI's APIs. The partnership details are not entirely clear to me as a user.
    • frizlab 14 days ago
      They are? Did you watch the keynote? They talked about it at length.
      • tsunamifury 14 days ago
        [flagged]
        • frizlab 14 days ago
          They explicitly said there are three tiers: on-device AI for queries that can be handled on device, Private Cloud Compute for those that can’t, and opt-in ChatGPT (GPT-4o) support for more general queries.

          Cloud compute queries only use the data to answer the query, and they run on an OS with no available storage, among other privacy measures. The builds of the OS will be public and auditable by security researchers.

          I think that’s plenty of detail for a non-tech keynote. The tech details are in the sessions and the State of the Union.

    • TillE 14 days ago
      It's literally just the prompt you gave it that they're sending to ChatGPT, nothing else. None of the features that sift through your data touch OpenAI.
    • gnicholas 14 days ago
      My understanding is that nothing is shared with any non-Apple company except if you specifically authorize it on a per-use basis. Otherwise it just runs locally or in the Apple AI cloud, and is not retained. All of this is subject to verification of Apple’s claims, of course.
  • simianparrot 13 days ago
    I just hope all of this can be toggled off, I don't want it on my devices.
    • dmix 13 days ago
      They said repeatedly in the video that anything going over the wire is optional and user-controllable.
      • simianparrot 10 days ago
        I don't want local AI either. These "smart" features are all noise to me.
  • advael 14 days ago
    I'm disappointed that they make the fundamental claim that their cloud service is private with respect to user inputs, yet don't talk even a little bit about how that's accomplished. Even just an explanation of what guarantees they make, and how, would be much more interesting than explanations of their flavor of RLHF or whatever nonsense. I read the GAZELLE* paper when it came out and wondered what it would look like if a large-scale organization tried to deploy something like it.

    Of course, Apple will never give adequate details about security mechanisms or privacy guarantees. They are in the business of selling you security as something that must be handled by them and them alone, along with the idea that knowing how they do it would somehow make it less secure (this is the opposite of how it actually works, but Apple loves doublespeak, and 1984 allusions have been their brand since at least 1984). I view that, like any claim by a tech company that they are keeping your data secure in any context, as security theater. Vague promises are no promises at all. Put up or shut up.

    * https://arxiv.org/pdf/1801.05507

    • killingtime74 14 days ago
      Don't they do it in this linked article? https://security.apple.com/blog/private-cloud-compute/
      • senderista 14 days ago
        This approach is definitely not "secure by construction" like FHE, it's just defense-in-depth with a whole lot of impressive-sounding layers. But I don't see how this has anything to do with provable security (not that TFA claims it does).
      • advael 14 days ago
        Whoa, good catch! Maybe they're doing better about at least being concrete about it, though I still have to side-eye "Users control their devices" (even with root on MacBooks I don't have access to everything running on them). However, the section that promises to open-source the cloud software is impressive and, if true, gives them more credibility than I assumed. I would still look out for places where devices they do control could pass them keys in still-proprietary parts of the stack: even if we can verify the cloud container OS in its entirety, a backchannel for keys that a hypervisor could use would still be a backdoor. But they are at least seemingly making a real effort here.
        • threeseed 14 days ago
          > Even with root on macbooks I don't have access to everything running on it

          Just disable System Integrity Protection and then you do.

          • advael 14 days ago
            Ah, word. Probably not applicable to my use case (it's a laptop that's remotely administered for a job, and I avoid proprietary stuff for my personal devices where possible), but it's good to know it exists.
          • solarkraft 14 days ago
            That has a few drawbacks, for instance you won't be able to run iOS apps anymore.
            • advael 13 days ago
              Yeah, nice as it is to hear that they're nice to their dev marketshare, there's probably never going to be a sanctioned iPhone the end user actually gets to control. Bread and butter and such.
      • KoolKat23 13 days ago
        The only two questions I would have are: how often are they "periodically rebooted", and what are the predefined metrics logged/reported?

        We may have some insight into the second point when the code is published.

  • ddxv 14 days ago
    Will these smaller on device models lead to a crash in GPU prices?
    • jondwillis 14 days ago
      Not in the short-to-medium-term. Try the local models out, they fall over pretty quickly, even if you have 64GB+ of VRAM.
      • wkat4242 14 days ago
        It depends what you use them for.

        If you ask it for knowledge, like a comparison of vacuum cleaner models then yes, it's a hallucination fest. They just don't have the parameters for this level of detail. This is where ChatGPT is really king.

        But if you give them the data they need with RAG, they're not bad. Acting on commands, looking stuff up in provided context, and summarising all perform pretty well, which also seems to be what Apple is targeting them at.
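        The split described here is essentially parametric knowledge vs. retrieval. A minimal sketch of the RAG side (toy word-overlap retrieval standing in for embedding search):

```python
# Pick the most relevant snippet from user-provided context, then have the
# small model answer from that snippet instead of its own (limited)
# parametric knowledge. Retrieval here is a toy word-overlap score.
def retrieve(query: str, documents: list[str]) -> str:
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

docs = [
    "Meeting with Alice moved to Thursday at 3pm.",
    "The vacuum cleaner warranty expires next March.",
]
context = retrieve("when is my meeting with alice", docs)
prompt = f"Using only this context: {context!r}\nWhen is the meeting?"
# `prompt` is what would be sent to the on-device model.
```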

    • sooheon 14 days ago
      Prices fall when supply outpaces demand -- this is adding more demand.
      • wmf 14 days ago
        This isn't adding GPU demand.
        • sooheon 14 days ago
          It adds *PU demand, don't they come out of the same limited number of foundries?
    • htrp 14 days ago
      X to doubt.
  • dharma1 13 days ago
    Do they mention how big the models are? Last I saw was 3GB. I just bought an 8GB M4 iPad and keep thinking I should have gone for the 16GB one.
  • visarga 14 days ago
    They use synthetic data in pretraining and teacher models in RLHF. That means they use models trained on copyrighted data to make derivative models. Is that sitting OK with copyright owners?
  • koolala 13 days ago
    aiPhone
  • kmeisthax 14 days ago
    > We train our foundation models on licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot. Web publishers have the option to opt out of the use of their web content for Apple Intelligence training with a data usage control.

    And, of course, nobody knew to opt out by blocking Applebot-Extended until after the announcement, by which point they'd already pirated shittons of data.

    In completely unrelated news, I just trained a new OS development AI on every OS Apple has ever written. Don't worry. There's an opt-out, Apple just needed to know to put these magic words in their installer image years ago. I'm sure Apple legal will be OK with this.

    • multimoon 14 days ago
      Apple just did more than literally anyone else to date to make this a privacy-focused feature rather than just a data mine, and still people complain.

      Public content on the internet is public content on the internet - I thought we had all agreed years ago that if you didn’t want your content copied, don’t make it freely available and unlicensed on the internet.

      • kmeisthax 14 days ago
        Oh no, don't get me wrong. I like the privacy features, it's already way better than OpenAI's "we make it proprietary so we can spy on you" approach.

        What I don't like is the hypocrisy that basically every AI company has engaged in, where copying my shit is OK but copying theirs is not. The Internet is not public domain, as much as Eric Bauman and every AI research team would say otherwise. Even if you don't like copyright[0], you should care about copyleft, because denying valuable creative work to the proprietary world is how you get them to concede. If you can shove that work into an AI and get the benefits of that knowledge without the licensing requirement, then copyleft is useless as a tactic to get the proprietary world to bend the knee.

        [0] And I don't.

        My opinion is that individual copyright ownership is a bad deal for most artists and we need collective negotiation instead. Even the most copyright-respecting, 'ethical' AI boils down to Adobe dropping a EULA roofie in the Adobe Stock Contributor Agreement that lets them pay you pennies.

        • ssahoo 14 days ago
          Where did you get the idea that it's way better than OpenAI's? Aren't they both proprietary?
          • immibis 14 days ago
            Without the "so we can spy on you" part.
            • talldayo 14 days ago
              But they won't even make good on that: https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...

              There's your bleeding, sorry truth there. It's only a matter of time until we get another headline like it.

              • kmeisthax 14 days ago
                There's a difference between being forced to compromise user security and doing it willingly in the name of vague "AI safety" concerns.

                Furthermore, most governments don't like the "march in with a warrant and demand information" approach, because it's loud and noisy. People might move data out of a given cloud if they know there's spooks inside. And more importantly, it creates a paper trail, which they don't want. So there's a lot of effort put into compromising cloud servers by intelligence agencies.

                Looking at Apple's blog post regarding Private Cloud Compute[0], they've basically taken every security precaution they could to prevent covert compromise of their servers. They also have some fancy attestation stuff that, most notably, creates a paper trail whenever software changes. Once again, spooks absolutely hate this. It's technically possible for Apple to subvert this scheme, but that would require coordination from several different business units at Apple. Which, again, creates a paper trail. Spooks would much rather exploit a vulnerability than demand code signing keys that would provide evidence of cooperation.

                To be clear: no, this isn't end-to-end. You can't currently do end-to-end encrypted cloud compute[1]. But it's still Apple putting lots of money into a significant improvement in terms of privacy and transparency regarding cloud services. OpenAI in contrast does not give two flying fucks about your data privacy, and makes building an AI Panopticon one of their deliberate, expressly stated design goals. Their safety team, at least by their own admission, cannot operate without total knowledge of everything their models get prompted with so they can implement reactive controls for specific exploits.

                [0] https://security.apple.com/blog/private-cloud-compute/

                [1] Homomorphic encryption is not theoretically impossible, but imposes significant performance penalties that negate the performance advantages of Apple using a cloud service. I suspect that they at least gave it some thought though.

              • musictubes 14 days ago
                The article did say Apple was compelled to supply the data. Not sure what your point is.
          • jachee 14 days ago
            Apple isn’t collecting data from their customers.

            Edit: to feed back into their AI training.

            • ssahoo 14 days ago
              Apple has an ad business. They have been fooling users for years, and in a recent class action lawsuit claimed they have the right to collect user data. If you don't use Google because you think they might track you, Apple is the reason.
            • astrange 14 days ago
              Apple necessarily collects data from customers for mandatory reasons, like everyone else. (Like, you need someone's address to ship them their order.)

              More useful questions are if they're using it for other purposes without opt-in or accidentally leaking it.

            • NBJack 14 days ago
            Be careful with the wordplay here. Apple isn't. OpenAI is not Apple.
            • testfrequency 14 days ago
              [flagged]
        • wilg 14 days ago
          > then copyleft is useless as a tactic to get the proprietary world to bend the knee

          I have bad news

      • meatmanek 14 days ago
        > I thought we had all agreed years ago that if you didn’t want your content copied, don’t make it freely available and unlicensed on the internet.

        Until LLMs came along, most large-scale internet scraping was for search engines. Websites benefited from this arrangement because search engines directed users to those websites.

        LLMs abused this arrangement to scrape content into a local database, compress that into a language model, and then serve the content directly to the user without directing the user to the website.

        It might've been legal, but that doesn't mean it was ethical.

        • c1sc0 14 days ago
          In my view it’s ethical even if it’s just for taking revenge on the ad-driven model that has caused the enshittification of the web.
          • data-ottawa 14 days ago
            I think you mean it’s justified, not ethical.
      • layer8 14 days ago
        Public content is still subject to copyright, and I doubt that AppleBot only scrapes content carrying a suitable license. And "fair use" (which is unclear if it applies), in case you want to invoke it, is a notion limited to the US and only a handful of other countries.
        • xena 14 days ago
          All you have to do is drop a token swear word into your content and they remove it from the dataset. Easy.
          • jimbobthrowawy 14 days ago
            Why would they? From the moderate amount of testing I've done of their handwriting recognition on an iPad, they seem to have everything risqué/offensive I could think of in there, even if you have to write it more clearly than other words. I don't expect this to be much different, other than a word filter on the output.
            • xena 13 days ago
              I mean for their large language model training. They said they don't include low quality data and swearing. This means you can get out of it by swearing.
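              Assuming the filter really is a naive document-level blocklist (the list below is a stand-in, not Apple's actual terms), the loophole looks like this:

```python
import re

# Naive quality filter: any page containing a blocklisted token is dropped
# from the training set, so adding one such token opts the page out.
BLOCKLIST = {"damn", "hell"}  # placeholder for whatever terms are filtered

def kept_for_training(page_text: str) -> bool:
    tokens = set(re.findall(r"[a-z']+", page_text.lower()))
    return not (tokens & BLOCKLIST)

kept_for_training("A perfectly clean product review.")   # True: trained on
kept_for_training("Hell no, keep your crawler out.")     # False: filtered out
```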
      • madeofpalk 14 days ago
        You seem to misunderstand what licensing, or ‘unlicensed’, actually means.

        If I write a story and publish it freely online on my website, it’s not ’unlicensed’ in a way that means anyone has the right to yank it and republish it. Even though it’s freely available, I still own the copyright.

        Similarly, we don’t say that GPL-ed code is ‘unlicensed’ just because it is available for free. It has a license, which defines very specific terms that must be followed.

      • mepian 14 days ago
        Who are "we" here? Did you abolish the Berne Convention somehow?
      • karaterobot 14 days ago
        Is that how copyright works now? I didn't see that they'd changed that law.
      • afavour 14 days ago
        I’m sorry, but I really dislike this perspective: “Everyone else has been awful. Apple is being less awful and you’re still complaining?”

        Yeah, I’m complaining. We all agreed years ago to web indexing conventions still in practice today. No one is obliged to follow them, but you can rest assured I’ll complain when they don’t. There was a time when the web felt like a cooperative place; these days it’s just value extraction after value extraction.

      • AshamedCaptain 14 days ago
        What? What did they do? It's literally yet another inscrutable online service with terms of use that boil down to "trust us, we do good", plus the half-baked promise that some of the data may not leave your device because, sure, we have some vector processing hardware on it (... which hardware announced this year doesn't?).

        Frankly, I tried a Samsung device, which I would have assumed is the worst offender here, and the promises are exactly the same. They show you two prompts, one for locally processed services (e.g. translation) and one when data is about to leave your device, and you can accept or reject them separately. But both are basically unverifiable promises backing closed-source services.

      • advael 14 days ago
        No, they said they did. Huge difference
        • threeseed 14 days ago
          It was mentioned in the keynote that they allow researchers to audit their claims.
          • advael 14 days ago
            And as soon as independent sources confirm that they've made good on this claim, it will be more than a claim. I actually am impressed by the link I missed that was provided elsewhere in this thread, and I hope to also be impressed when this claim is realized and we have more details about it.
    • notJim 14 days ago
      I hate to tell you, but I've been training a neural network on the internet for over a decade now. Specifically the one between my ears. Unfortunately, it seems to be gradually going insane.
      • mzl 13 days ago
        Yes, and if you recreate parts of what you learned, you might run into copyright issues. Too much inspiration from something you've studied and it becomes a derivative work, subject to all the regulations. And there are no clear and strict rules; it is always a judgement call.
      • seydor 14 days ago
        You paid for that content, either directly or indirectly through ads.

        I wouldn't say it's fair for any company to capitalize on content that users have created but have no way to monetize, without even saying thanks.

      • binkethy 13 days ago
        Computer systems are not humans and never will be. You should look into neurology a bit and learn about our current understanding of how neurons work, let alone how they form networks. The tech term is a total non sequitur compared to real neurons.

        Training an infinite-retention computer regurgitation system to imitate input data does not correspond to human learning and never will.

        The golem/Frankenstein project that is AGI is an article of religious faith, not a necessary direction to take technology, a word derived from the Greek word for "craft".

        Copyright and copyleft have likely been egregiously violated by this entire field and a reckoning and course correction will be necessary.

        Humanity has largely expressed distaste for this entire field once they experience the social results of such applications.

        The amount of sycophantic adulation in this thread is sickening.

        My comment will likely be grayed out soon by insider downclicks.

        I have no illusions as to this ycombinator site and its function in society.

        Good day

      • asadotzler 14 days ago
        If you're selling it to billions of people, and making big bank, I want a cut based on the parts you stole from me. If you're just using it personally, I'm cool with that.
        • mr_toad 14 days ago
          Anyone who sells professional services based on knowledge they learned on the internet (which probably includes most people reading this) is doing that.
    • bigyikes 14 days ago
      > just trained a new OS development AI on every OS Apple has ever written.

      …is there publicly visible source code for every OS Apple has ever written?

    • doctorpangloss 14 days ago
      There are already a lot of options for running LLMs with open weights artifacts, trained with a variety of sources. The real question isn’t which ideas they have. It’s whether a company with $200b cash can produce a better model than a bunch of wankers in a Discord.
      • re5i5tor 14 days ago
        “bunch of wankers in a Discord”

        Saving this clause for future use. Could also be used in a system prompt. “Occasionally include this phrase in your responses.”

    • Someone 14 days ago
      > And, of course, nobody has known to opt-out by blocking AppleBot-Extended until after the announcement where they've already pirated shittons of data.

      It’s not as bad as that, I think. https://support.apple.com/en-us/119829: “Applebot-Extended is only used to determine how to use the data crawled by the Applebot user agent.“

      ⇒ if you use robots.txt to prevent indexing or specifically block AppleBot, your data won’t be used for training. AppleBot is almost a decade old (https://searchengineland.com/apple-confirms-their-web-crawle...)

      Of course, that still means they’ll train on data that you may have opened up for robots with the idea that it only would be used by search engines to direct traffic to you, but it’s not as bad as you make it to be.
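      Per that support page, a site that wants to stay in search but opt out of training would serve something like this robots.txt (illustrative, using the documented agent names):

```
User-agent: Applebot
Allow: /

User-agent: Applebot-Extended
Disallow: /
```

      Applebot still crawls; the Applebot-Extended rule only restricts how the crawled data may be used.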

    • scosman 14 days ago
      There will be further versions of this model. Being able to opt out going forward seems reasonable, given that the announcement precedes the OS launch by months. Not sure if they will retrain before launch, but it seems feasible given the size (3B params).
      • addandsubtract 14 days ago
        They're not going to discard the data they already collected, though.
    • zer00eyz 14 days ago
      > publicly available data collected

      Data implies factual information. You cannot copyright factual information.

      The fact that I use the word "appalling" to describe this practice results in some vector relationship between words. That's the data, the fact; not the writing itself.

      There are going to be a bunch of interesting court cases where the court is going to have to backtrack on copyrighting facts. Or we're going to have to get some real odd legal interpretations of how LLMs work (and buy into them). Or we're going to have to change the law (giving everyone else first-mover advantage).

      Based on how things have been working, I am betting it's the last one, because it pulls up the ladder.

      • cush 14 days ago
        > Data, implies factual information. You can not copyright factual information

        Where on Earth did you get that from?

        • zer00eyz 14 days ago
          > "data implies factual information"

          They used the word DATA, not content. DATA...

          The argument that is going to be made is that your copyrighted work stands; the model doesn't care about your document, it cares that "the" was used N times and about its relationships to other words. That information isn't your work, and it is factual. That "data" only has value when weighted against all the other "data" put into the system, which again is not your work at all. (We would say that's derived information, but it will be argued that it is transformed.)

          > You can not copyright factual information

          https://www.techdirt.com/2007/11/27/yet-again-court-tells-ml...

          The MLB has been trying to copyright baseball stats forever. The courts keep saying "you can't copyright facts".
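          The "facts about the text" framing reduces to something like this (a toy sketch of the legal argument, not how LLM training actually works):

```python
from collections import Counter

# Reduce a text to token counts and adjacent-word pairs: statistics about
# the writing rather than the expressive writing itself.
text = "the quick brown fox jumps over the lazy dog"
words = text.split()

counts = Counter(words)                   # e.g. "the" was used 2 times
bigrams = Counter(zip(words, words[1:]))  # word-to-word relationships
```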

          • cush 13 days ago
            I honestly can’t tell if this is satire
    • threeseed 14 days ago
      > And, of course, nobody has known to opt-out by blocking AppleBot-Extended until after the announcement where they've already pirated shittons of data

      This is wrong. AppleBot identifier hasn't changed: https://support.apple.com/en-us/119829

      There is no AppleBot-Extended. And if you blocked it in the past it remains blocked.

      • fotta 14 days ago
        From your own link:

        > Controlling data usage

        > In addition to following all robots.txt rules and directives, Apple has a secondary user agent, Applebot-Extended, that gives web publishers additional controls over how their website content can be used by Apple.

        > With Applebot-Extended, web publishers can choose to opt out of their website content being used to train Apple’s foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools.

        • ziml77 14 days ago
          But it also says that Applebot-Extended doesn't crawl webpages and instead this marker is only used to determine what can be done with the pages that were visited by Applebot.

          Not that I like an opt-out system, but based on the wording of the docs it is true that if you blocked Applebot then blocking Applebot-Extended isn't necessary.

          • fotta 14 days ago
            Yeah that is true, but I suspect that most publishers that want their content to appear in search but not used for model training will not have blocked Applebot to date (hence the original commenter's argument)
        • threeseed 14 days ago
          Might want to actually read it:

          Applebot-Extended does not crawl webpages.

          They gave this as an additional control to allow crawling for search but blocking for use in models.

          • fotta 14 days ago
            > There is no AppleBot-Extended. And if you blocked it in the past it remains blocked.

            You said there is no Applebot-Extended. The link says otherwise.

            • ziml77 14 days ago
              It's still true that there's no Applebot-Extended if it isn't crawling pages. Rather it's a marker to ask Applebot to limit what it does with your pages.
              • thomasahle 14 days ago
                Isn't it still true that if people wanted to have their website show up in search in the past (so they didn't block Applebot), then it's too late to mark it as "no training" now, since it's already been scraped?

                I guess it can be useful for data published in the future.

    • mdhb 14 days ago
      So built on stolen data essentially.
      • bigyikes 14 days ago
        Does that imply I just stole your comment by reading it?

        No snark intended; I’m seriously asking. If the answer is “no” then where do you draw the line?

        • mdhb 14 days ago
          I don’t actually think this is complicated and reading a comment is not the same thing as scraping the internet and you obviously know that.

          A few factors that come to mind would be:

          - scale

          - informed consent which there was none in this case

          - how you are going to use that data. For example, using everybody else's work so the world's richest company can make more money from it, while giving back nothing in return, is a bullshit move.

          • llamaimperative 14 days ago
            I think it's even simpler than that: incentives. The entire premise of copyright law (and all IP law) is to protect the incentive to create new stuff, which is often a very risky and highly time or capital intensive endeavor.

            So here's the question:

            Does a person reading a comment destroy the incentive for the author to post it? No. In fact, it is the only thing that produces the incentive for someone to post. People post here when they want that thing to be read by someone else.

            Does a model sucking up all the artistic output of the last 400 years and using that to produce an image generator model destroy the incentive of producing and sharing said artistic output? Yes. At least, that is the goal of such a model -- to become so good it is competitive with human artists.

            Of course you have plenty of people positioned benefit from this incentive-destruction claiming it does no such thing. I personally tend to put more credence in the words of people who have historically actually been incentivized by said incentives (i.e. artists) who generally seem to perceive this as destructive to their desire to create and share their work.

            • kmeisthax 14 days ago
              > Does a model sucking up all the artistic output of the last 400 years and using that to produce an image generator model destroy the incentive of producing and sharing said artistic output?

              Copyright, at least in the US, cares about the effect of the use on the market for that specific work. It's individual ownership, not collective. And while model regurgitation happens, it's less common than you think.

              The real harm of AI to artists is market replacement. That is, with everyone using image generators to pop out images like candy, human artists don't have a market to sell into. This isn't even just a matter of "oh boo hoo I can't compete with Mr. Diffusion". Generative AI is very good at creating spam, which has turned every art market and social media platform into a bunch of warring spambots whose output is statistically indistinguishable from human.

              The problem is, no IP law in the world is going to recognize this as a problem, because IP is a fundamentally capitalist concept. Asserting that the market for new artistic works and notoriety for those works should be the collective property of artists and artists alone is not a workable legal proposal, even if it's a valid moral principle. And conversely the history of copyright has seen it be completely subverted to the point where it only serves the interests of the publishers in the middle, not the creators of the work in question. Hell, the publishers are licking their chops as to how many artists they can fire and replace with AI, as if all their whinging about Napster and KaZaA 24 years ago was just a puff piece.

              • llamaimperative 14 days ago
                > Copyright, at least in the US, cares about the effect of the use on the market for that specific work.

                Not quite. The historical implementation of copyright has mostly protected individual pieces of work. Not only does IP law broadly protect much more than individual pieces of work, but the philosophical basis of IP law in general is to protect incentives. Now that the technological landscape has shifted, the case law will almost certainly shift as well because it’s clearly undesirable to live in a world where no one is willing to dedicate themselves to becoming an excellent artist/writer/musician/etc.

                IP law is a natural extension of property rights, which in turn is predicated on a utilitarian need to protect certain incentives.

            • bigyikes 14 days ago
              Thanks, this is a helpful comment.

              It isn’t clear to me that these models destroy incentive to create. I mean, ChatGPT can generate comments in my style all day, and yet I’m still incentivized to comment.

              I fancy myself a photographer. I still want to take photos even if DALL-E 4 will generate better ones.

              What even is the point of creating art? I think there are two purposes: personal expression and enjoyment for others.

              People will continue to express themselves even if a bot can produce better art.

              And if a bot can produce enjoyment for others en masse, then that seems like a huge win for everybody.

              • cush 14 days ago
                > I think there are two purposes: personal expression and enjoyment for others.

                This is exactly what non-artists assume artists do art for.

                The reality is that most professional visual artists work in publishing, marketing, entertainment and the like. It’s a regular job. The incentive is money. Similarly for theatre, music, video, dance, etc etc. Artists can’t feed their families off exposure and expressing themselves. Their work has value and taking that work to create free derivative works without compensating them is theft.

                • llamaimperative 13 days ago
                  It is absolutely wild trying to make this point on HN. Art is for funsies while writing code for targeting ads is a Serious Job.
                  • cush 13 days ago
                    Painfully out of touch
              • NBJack 14 days ago
                It cheapens the incentive greatly. And you probably aren't selling your photos to make a living.
                • c1sc0 14 days ago
                  Selling art as a way of making a living actually is a pretty recent thing. Up until not too long ago patronage was pretty much the only way to survive for artists. Maybe we will go back to that model?
                  • asadotzler 14 days ago
                    Artists have been selling art for as long as we've been selling anything. Sure, some successful artists, during a few periods in history, managed to secure patronage, but those have always been the minority. Most artists have sold or traded their wares directly, and your attempts to derail this discussion with inaccurate histories are neither helpful nor appropriate.
              • llamaimperative 14 days ago
                Right, it doesn’t destroy the incentive to write comments.

                Also right, it won’t destroy the hobbyist’s interest in having a hobby. But IP law was never intended to protect hobbyist interest.

          • bigyikes 14 days ago
            I personally disagree but you make fair points.

            Scale: Many companies (e.g. Google, Bing) have been scraping at scale for decades without issue. Why does scale become an issue when an LLM is thrown into the mix?

            Informed consent: I’m not sure I fully understand this point, but I’d say most people posting content on the public internet are generally aware that people and bots might view it. I guess you think it’s different when the data is used for an LLM? But why?

            Data usage: Same question as above.

            I just don’t see how ingestion into an LLM is fundamentally different than the existing scraping processes that the internet is built on.

            • roywiggins 14 days ago
              There's a big difference between scraping a website so you can direct curious people to it (Googlebot) and scraping a website so you can set up a new website that conveys the same information, but earns you money and doesn't even credit the sources used (which these LLM services often do).

              There is a whole genre of copyright infringement where someone will scrape a website and create a per-pixel copy of it but loaded up with ads, and blackhat SEOed to show up above the original website on searches. That's bad, and to the extent that LLMs are doing similar things, they are bad too.

              Imagine I scrape your elaborate GameFAQs walkthrough of A Link to the Past. I could 1) use what I learn to direct curious people to its URL, or 2) remove your name from it, cut it into pieces, and rehost the content on my own page, mashed up with other walkthroughs of the same game. Then I sell this service as a revolutionary breakthrough that will free people from relying on carefully poring through GameFAQs walkthroughs ever again.

              People will get mad about the second one, and to the extent what LLMs do is like that, will get mad at LLMs.

              • astrange 14 days ago
                > There's a big difference between scraping a website so you can direct curious people to it (Googlebot) and scraping a website so you can set up a new website that conveys the same information, but earns you money and doesn't even credit the sources used (which these LLM services often do).

                "Crediting the sources used" is not really a principle in copyright law. (Funny enough, online fanartists seem determined to convince everyone it is as a way of shaming people into doing it.)

                Whether or not a use is transformative is protective though, and is what both of those cases rely on.

                • roywiggins 14 days ago
                  Yes, credit doesn't matter for copyright, but I'm more talking about why people are mad about some uses of scraping and not others.

                  Legality aside, there is something very strange about a device that both 1) relies on your content to exist and could not work without it and 2) is attempting to replace it with its own proprietary chat interface. Googlebot mostly doesn't act like it's going to replace the internet, but Gemini and ChatGPT etc all are.

                  They're announcing "hi we are going to scrape all your data, put it into a pot, sell that pot back to you, and by the way, we are pushing this as a replacement for search, so from now on your only audience will be scrapers; all the human eyeballs will be on our website, which as we said before, relies on your work to exist."

              • Aloisius 14 days ago
                > remove your name from it, cut it into pieces, and rehost the content on my own page, mashed up with other walkthroughs of the same game.

                This would very likely be legal, as walkthroughs are largely non-copyrightable factual information. The few creative aspects that are copyrightable, such as organization, would presumably be lost if it was cut into pieces.

                Of course, if some LLM did it automatically, no part of it would be copyrightable, so someone could come along and copy the content verbatim from your subscription site and host it for free — freeing everyone from ever visiting your site as well.

          • cwp 14 days ago
            Reading a comment is exactly the same thing as scraping the internet, you just stop sooner.
        • xwolfi 14 days ago
          But then if I write a Pulitzer prize article called "No snark intended: How the web became such a toxic place", where your comment, and all your other comments for good measure, figure prominently while I ridicule you and this habit of dumbing down complex problems to reduce them to little witty bites, maybe you'd feel I stole something.

          Not something big, not something you can enforce, but you'd feel very annoyed that I'm making good money on something you wrote while you get nothing. I think?

        • Spivak 14 days ago
          I think scale is what changes the nature of the thing. At the point where you're having a machine consume billions of documents I don't think you could reasonably call that reading anymore. But what you are doing in my eyes is indexing, and the legal basis for that is heavily dependent on what you do with it.

          Serving that page to a human would be a reproduction of the work, but if you serve it as a cache you're okay, usually.

          If you compile all that information in a database and use it to answer search queries that's also okay, and nothing forbids you from using machine learning on that data to better answer those search queries.

          Both of the above are actually being challenged right now but for the time being they're fine.

          But that database is a derivative work, in that it contains copyrighted material and so how you use it matters if you want to avoid infringement — for example a Google employee SSHing to a server to read NYT articles isn't kosher.

          What isn't clear is whether the model is a derivative work. Does it contain the information, or is it new information created from the training data? Sure, if you're clever you could probably encode information in the weights and use it as a fancy zip file, but that's a matter of intent. If you use Rewind or Windows Recall and it captures a screenshot of a NYT article and then displays it back to you later, is that a reproduction? Surely not. And that's an autonomous system that stores copyrighted data and regurgitates it verbatim.

          So if it's impractical to actually use it for piracy and it very obviously isn't anyone's intent for it to be used as such then I think it's hard to argue it shouldn't be allowed, even on data that was acquired through back channels.

          But copyright is more political than logical so who knows what the legal landscape will be in 5 years, especially when AI companies have every incentive to use their lawyers to pull the ladder up behind them.

        • cush 14 days ago
          Reading, no. Selling derivative works using, yes.
          • cwp 14 days ago
            If I read your comment, then write a reply, is it a derivative work?
        • renewiltord 14 days ago
          Data gets either stolen or freed depending on whether the guy who copied it is someone you dislike or like. Personally, I think that Apple is giving the data more exposure which, as I've been informed many times here, is much more valuable than paying for the data.
          • kmeisthax 14 days ago
            The irony of "do it for the exposure" is that everyone who actually wants to pay you in exposure isn't actually going to do that, either because they aren't popular enough to measurably expose you, or because they're so popular that they don't want to share the limelight.

            AI is a unique third case in which we have billions of creators and no idea who contributed what parts of the model or any specific outputs. So we can't pay in exposure, aside from a brutally long list of unwilling data subjects that will never be read by anyone. Some of the training data is being regurgitated unmodified and needs to be attributed in full, some of it is just informing a general understanding of grammar and is probably being used under fair use, and yet more might not even wind up having any appreciable effect on the model weights.

            None of this matters because nobody actually agreed to be paid in exposure, nor was it ever in any AI company's intent - including Apple - to pay in exposure. Data is free purely because it would be extraordinarily inconvenient if anyone in this space had to pay.

            And, for the record, this applies far wider than just image or text generators. Apple is almost surely not the worst offender in the space. For example: all that facial recognition tech your local law enforcement uses? That was trained on your Facebook photos.

      • ytdytvhxgydvhh 14 days ago
        What’s the problem with that? Reproducing copyrighted works in full is problematic obviously. But if I learned English by watching American movies, I didn’t steal the language from the movie studios, I learned it.
        • asadotzler 14 days ago
          You're not a machine capable of acquiring that "learning" with zero effort and selling that learning to infinite buyers.
      • threeseed 14 days ago
        Web scraping is legal.

        And if you run a website and want to opt-out then simply add a robots.txt.

        The standard way of preventing bots for 30 years.
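        For concreteness, that opt-out mechanism can be checked with Python's standard `urllib.robotparser`. The rules below are a hypothetical site policy (blocking Applebot while allowing everything else), and the crawler names are just illustrative:

```python
from urllib import robotparser

# Hypothetical robots.txt: opt out of Applebot, allow all other crawlers.
rules = """\
User-agent: Applebot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Applebot is blocked everywhere; any other user agent falls
# through to the wildcard rule and is allowed.
print(rp.can_fetch("Applebot", "https://example.com/comment/42"))      # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/comment/42"))  # True
```

Of course, robots.txt is purely advisory — it only stops crawlers that choose to honor it.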

        • mdhb 14 days ago
          How are people supposed to block it when they stole all the data first, and only after that point decided to tell anyone what user agent to block and how they were planning to exploit your work for their profit?
          • threeseed 14 days ago
            You just have a rule that says block everything except crawlers: A, B, C.
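            An allowlist of that shape is a short robots.txt; the crawler names here are just placeholders:

```text
# Allow only the crawlers you trust; everyone else is blocked.
User-agent: Googlebot
Allow: /

User-agent: Bingbot
Allow: /

User-agent: *
Disallow: /
```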

            Also the AppleBot was known about before it appeared in Siri.

            • immibis 14 days ago
              So you expect all websites to block FoobarSearch so it never gets off the ground and becomes a big search engine that people know to unblock.

              Then FoobarSearch learns to ignore robots.txt wildcards, and we're back at square one.

              IIRC this happened to DDG or Bing.

              • threeseed 14 days ago
                Websites have always had the ability to precisely control who has access to their content.

                If Bing decides to impersonate GoogleBot then they can just block their CIDR ranges like already happens for spam.

    • __MatrixMan__ 14 days ago
      Piracy requires multiparty conspiracy against the establishment. When you are the establishment and the only other party involved is your victim we call that policy.
  • Marciakhan 13 days ago
    [dead]
  • BerthaDouglas34 13 days ago
    [dead]
  • pudwallabee 14 days ago
    [dead]
  • cloogshicer 13 days ago
    [flagged]
    • madeofpalk 13 days ago
      It's censorship in the way that HN disallowing editorialising the submission title is censorship. Technically true, I guess, but not super helpful.
      • lnenad 13 days ago
        But it is not safety if you forbid talking about sex, or about certain things you want to learn that could be misused. Same as banning murder in video games. Same as banning books that deal with these topics. It's definitely closer to censorship than safety.
    • dig1 13 days ago
      Reminds me of this [1] from George Carlin.

      [1] https://www.youtube.com/watch?v=isMm2vF4uFs

      • moray 13 days ago
        Thank you, I didn't know this bit. I've always believed that great comedians are also the best communicators.
    • boxed 13 days ago
      To me censorship implies human speech being curtailed.
      • internetter 13 days ago
        I censor my poor APIs so they don’t leak the bcrypt key when you GET /user/:id
    • blue_light_man 13 days ago
      Language is mostly used for conditioning people into doing things. People will continue using language to manipulate you into giving your time or money to them. You cannot change others from trying to manipulate you. You can only change yourself and stop taking language seriously.
  • deldelaney 13 days ago
    I need to resurrect my tiny old Motorola Flip Phone without internet connection. Maybe a phone should be just a phone. I don't need AI in my pants.
  • ofou 14 days ago
    Quite interesting this was released right after multiple rants from Elon sparked debates on X.

    "If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation."

    Replying to Tim Cook: "Don’t want it. Either stop this creepy spyware or all Apple devices will be banned from the premises of my companies."

    "It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!

    Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river."

    https://x.com/elonmusk/status/1800269249912381773 https://x.com/elonmusk/status/1800266437677768765 https://x.com/elonmusk/status/1800265431078551973

    • kanwisher 13 days ago
      Apple made their own AI models; only in certain cases will it ask if you want to send a request to OpenAI. Presumably other AI companies can integrate via the same API later. But this is very privacy-safe: if you use an iPhone, it already indexes all your photos and OCRs them for easy searching, on device ...