It is hard to imagine the scale of these things. I know if you had tapped me on the shoulder at Intel in the '80s and said, "in 40 years, the chip you're looking at will fit in a couple of square mm of silicon," I would not have believed you. Of course it doesn't "really" fit in that small a slice, as you need to get the signals into and out of it and those points take space, but it doesn't take much space.
I did a rough calculation a few days ago which indicated that you could now fit roughly 300 complete 4004s into the space occupied by a single 4004 transistor in 1971.
(I may have got that wrong so happy to be corrected!)
With 134k transistors and 49mm2, that's 365um2 per transistor. The N3 node from TSMC is quoted as ~300 Mtr/mm2, so I believe 365um2 holds 109k transistors, which means just under one 4004 could fit in the area of a single 1982 transistor. That said, the Mtr/mm2 number is not the whole story; often single gates like an inverter can be made of many transistors, fingers, dummy gates, etc., so the effective number is lower. It would be interesting to know how many inv/nand/FF/SRAM cells were in the 4004, as that would make a fairer comparison.
You got the wrong numbers for the 4004! The 4004 had just 2300 transistors in a 12 mm^2 area. So, today you can fit 3*10^8 * 12 = 3.6*10^9 transistors in that area, or 1.5 million 4004s, or 650 4004s in the area of the old transistor.
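A quick sanity check of that arithmetic. The figures below are the ones quoted above (the ~3e8 transistors/mm2 N3 density is a marketing number, not a measured one):

```python
# 4004 (1971): 2,300 transistors in a 12 mm^2 die.
# TSMC N3 density assumed to be ~3e8 transistors/mm^2 (quoted figure).
transistors_4004 = 2_300
area_4004_mm2 = 12.0
n3_density_per_mm2 = 3e8

# Transistors that fit in the 4004's old die area at N3 density.
fit_in_old_die = n3_density_per_mm2 * area_4004_mm2          # 3.6e9

# Complete 4004s that fit in the old die area.
whole_4004s = fit_in_old_die / transistors_4004              # ~1.57 million

# Area of a single 1971 transistor, and complete 4004s per that area.
old_transistor_mm2 = area_4004_mm2 / transistors_4004
per_old_transistor = (n3_density_per_mm2 * old_transistor_mm2) / transistors_4004

print(f"~{whole_4004s:.2e} 4004s per old die, ~{per_old_transistor:.0f} per old transistor")
```

This gives ~680 4004s per old transistor rather than 650; the difference is just rounding 1.57 million down to 1.5 million before dividing.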
Thanks for the correction, I was looking at the 286, not the 4004! Not sure how I managed to get the wrong chip... Though someone mentioned the 286 in a comment below, so I guess I muddled that in.
In that example, we may as well compare a thermonuclear weapon to a conventional one—the scales are gargantuan. In my opinion a more apt comparison might be the TI-84 calculator, which was ~350x faster than the computers used in Apollo.
I tried to look up these numbers, but, understandably, I'm having trouble finding Apple Watch benchmarks. It looks like first gen Apple Watch (2016) had around 3GF [1], where P4 had around 12GF, so stretch that 15 to 20, and it could be in a reasonable ballpark.
But didn't you intellectually know that this would be the implication of Moore's law if it could actually hold over such a long period? Maybe it's more a manifestation of how counterintuitive exponential growth is to human sensibilities.
To be fair, I took way too much math to be comfortable in my college days, but at Intel everyone (even Gordon) knew that "Moore's Law" wasn't a "law" (like the law of gravity); it was an observation on the current state of the art. As a result, everyone assumed that semiconductor scaling was an S-curve and we happened to be in the exponential part of that curve, but that it would taper off. And everyone on the design engineering team felt it would taper off way sooner than it has!
Yes. Process changes, transistor geometry, dielectric materials were all things that people expected to change. That customers would stop using TTL was pretty avant-garde, but when ASICs became "affordable" (basically custom chips for non-chip companies) it was obvious that volume manufacturers would condense a bunch of TTL into a single package just for cost reasons.
But fundamentally electrons and 'charge' are quantum in nature, and even in the '80s if your transistors were "too close" you could get electron tunneling between them. So I, and others, assumed there was a really hard limit on how small you could go (and that it was likely above 100nm rather than below it). And of course now hardly anyone uses old 90nm tech; I think 40nm is the current "jelly bean" process, but I freely admit I've not looked at the geometries used in the vast Chinese IC market of "support" chips.
Feynman gave a lot of talks about the natural limits of information storage, but knowing the theory and seeing it done in practice are two different animals!
So I wonder if the M2 Pro/Max/Ultra will after all be on the N3 node. If N3 HVM starts next month, the A15 being on N5P and it being a year until the A16 on N3 will be released and Apple being the first to use N3... some other Apple silicon needs to use the capacity first, or am I missing something?
All of the M1/Pro/Max/Ultra shared the same fundamental core design. Presumably the same will be true of the M2 series.
As someone else here said, fab nodes are more like Lego blocks than a printer’s dpi – moving a design to a new node means rebuilding it in terms of the new node’s building blocks, not just “shrinking the design”.
So if the M2 Pro/Max/Ultra are intended to follow the same "reuse the cores (possibly rearranged) with more GPU blocks/cache etc" approach, it seems unlikely. But if they make a design break within the M2 series then it’s possible?
I’d expect Apple to follow their historical behavior and lead with the next A-series phone processor on the new node first. M1 Pro/Max/Ultra chips have huge surface areas – since process defect rates are driven down over time, it makes sense to start with your smallest chips first, so that you can get good yield out of your big chips once the defect rate is lower.
Are you sure about that? Everything I can find says that the M1 Pro and M1 Max do use the same cores as the M1, which are not based on the ones in the A15.
'We had indicated in our initial coverage that it appears that Apple’s new M1 Pro and Max chips is using a similar, if not the same generation CPU IP as on the M1, rather than updating things to the newer generation cores that are being used in the A15. We seemingly can confirm this, as we’re seeing no apparent changes in the cores compared to what we’ve discovered on the M1 chips.'
'The M1 has four high-performance "Firestorm" and four energy-efficient "Icestorm" cores, first seen on the A14 Bionic. [...] The M1 Pro and M1 Max use the same ARM big.LITTLE design as the M1, with eight high-performance "Firestorm" (six in the lower-binned variants of the M1 Pro) and two energy-efficient "Icestorm" cores, providing a total of ten cores (eight in the lower-binned variants of the M1 Pro).'
The efficiency cores on the Pro and Max aren't (so far as I can tell) faster than on the regular M1. But where the regular M1 has 4 performance + 4 efficiency cores, the Pro and Max have 6 or 8 performance + 2 efficiency. (Also, more L2 and L3 cache.)
> As someone else here said, fab nodes are more like Lego blocks than a printer’s dpi – moving a design to a new node means rebuilding it in terms of the new node’s building blocks, not just “shrinking the design”.
This is true but 'rebuilding' specifically refers to producing a new chip layout (i.e. the thing you send off to the fab to manufacture). This can be a lot of work but is all 'back-end' work. You begin with the RTL giving the logical/functional design of the chip and implementation engineers push it through synthesis, place and route etc to produce a physical chip layout. This is what you have to redo, you can just start with the exact same RTL.
When you design that RTL with a particular node in mind you can likely achieve better performance/area but it's not essential to do so.
Plus when you want to do the back-end work you need a fairly complete design to work from. So for instance Apple could be getting a back-end team to build a 3nm M2 now whilst the front end design team are busy working on the M3 (specifically targeted and optimized for 3nm).
Intel famously followed the "tick-tock" model where they would alternate between designing a new architecture and then moving that architecture to a new processing node.
So not sure it's true that you can't shrink down a largely similar architecture from one process to another.
Obviously it's fallen off the wagon a bit here, but seems more due to operational issues at Intel than it being fundamentally not doable
That tick-tock model actually illustrates that it takes significant work to re-do the physical layout of a design for a new process node, even if the higher-level microarchitecture design remains unchanged. You don't get the benefits of a full node shrink for free.
The first step is to prove the process - to the appropriate level of control limits, and then the second step is to optimize the process.
You may say it's "significant", but looked at another way, it de-risks both tasks, as opposed to doing both at the same time with much higher risk.
So in my mind, the "significant work" is less relevant, as the derisking is much more important.
Not necessarily - tic-tock let them double the number of new product releases compared to having new architectures launch simultaneously on new processes. Doubling the number of product releases was valuable competitively, as well as reducing their risk of having dated products if a deadline was missed.
There was work for sure to move to a new process, but that wasn't necessarily duplicated work with the architecture being created on a prior process. My impression was that it was already parallelized, and that not having to develop new architectures on new processes prevented a fair bit of contention.
Bigger chips have much worse yields, which is why phone chips get the latest nodes first. No way they launch 3nm with the M2 Max. I was expecting a 3nm A16 and a 5nm M2 Max for release this fall, and then the 3nm M3 progression next year. But they normally start iPhone HVM in July, so the A16 must be 5nm; there's no way Apple would delay the iPhone launch to November; and it's too soon for M3. I have no idea what they're producing either.
Yea normally we'd expect M2 Pro/Max etc to be on N5P like the M2. It can't be M3 because M2 was just released. A16 is way too far out. So what does that leave for N3 production? Either it's M2 Pro etc. or something completely different.
Actually thinking a bit more... there is the Apple VR headset which seems to be getting closer to production and I'm sure it could use the efficiency from the new node plus is priced high enough to warrant the costs. Some speak of a launch beginning of 2023 with 1.5M units produced. That would be bang on in terms of schedule.
There are rumors that only the Pro models will get A16, and that they will have specs that justify a base price increase like more flash memory by default.
So it isn't outside the realm of possibility that the A16 design is 3nm, and they will delay its launch and/or otherwise make it less desirable to deal with lower initial production yield.
The iPhone 14 with A16 will be announced September 7th, it looks like. People will have them within a month from now. There is no way it can be N3, as it takes months from HVM start to actual devices hitting the shelves. There are already millions of iPhone 14 devices sitting in warehouses.
The A16 will be N4P I'd guess. In my initial comment I should have said A17 for N3 instead of A16.
>I wonder if the M2 Pro/Max/Ultra will after all be on the N3 node.
IMO, the plain M2 was a 3nm design that had to be backported to TSMC 5nm due to the node transition taking longer than projected, in the same way that Intel's Sunny Cove had to be backported from Intel's 10nm to their 14nm++++++ node.
The rumors now say that Apple is getting ready to build M2 Pro and M3 on TSMC 3nm.
>According to one analyst, they will be coming from TSMC, and will debut later this year. Even more tantalizing is the notion that these Apple SoCs will likely be the very first to use TSMC’s bleeding edge 3nm process. This is mildly surprising given the M2 chip revealed this week was made using TSMC’s 5nm process, just like the previous M1 products. Launching the M2 on two different nodes would require Apple to do the design work twice — once for a 5nm M2 and once for the 3nm M2 Pro.
Pardon my ignorance on this subject, but at 3 nm, don’t you get into some weird quantum artifacts because the layers are so stupidly close? Curious how that is addressed?
Note that 3nm and the rest of the process names in the past decade don't actually measure the size of the transistor - it's all marketing[0], and is why Intel is dropping the "nm" naming in favor of "Intel 7, Intel 4, Intel 3, Intel 20A, Intel 18A"[1]. However, maybe someone can link to resources that explain the potential [quantum] hurdles they have to overcome as they've increased density and performance.
Okay, but who are they fooling exactly? It's not like an average soccer mum shops for CPUs and chooses the 3nm one because she is impressed with the 3 nanometre transistor gates, right?
Are they after investors or traders? Why don't they call it something like Superlaser 9000 and the next year Hyperlaser 10K Speedmaster?
My friends from marketing tell me that the target here are the press, specifically the industry press. In a competitive market you want to make it "hard" or "easy" to compare your product to your competitors product or service, if you're the leader you want it to be 'hard' and if you're the competitor you want it to be 'easy'.
I originally asked this exact same question of the guy at AMD who started talking about their x86 chips in "equivalent" megahertz, because the actual clock rate was slower but they had better instructions per clock (IPC) and so got more done per second than their Intel counterparts, while Intel was winning the "megahertz race" because that was what the press was fixated on using to describe the "leading" chips. Anyway, if you're a leader and you can get the press talking about something your competitor isn't doing (or can't do), then you "win" the perception of being ahead.
In semiconductors, this has been very effective at getting the press to see Intel as "behind" because their "nanometer" number was stuck in the double digits while "more advanced" fabs were already in production on lower-nanometer-number processes. (Note the scare quotes are all just to indicate topics on which TSMC, Samsung, and Intel would each have different takes on the current state of the art.)
A large part of the current usage of the terminology comes from historical momentum. It used to be the case that transistor size was fairly well correlated with performance and capability. Moore's Law was well-known and a simple transistor size was a good, easy way to approximate performance potential for power consumption, speed, capability, complexity, etc... That single number started getting slapped on all of the marketing and documentation as a somewhat useful "overall performance rating" substitute. Companies got used to showing it off. Consumers came to expect it. The real, actual usefulness of the number diminished slowly enough that it's just continued so far. And marketing doesn't like removing stuff that they think makes their products look good.
It's also still useful as an identifier to distinguish between one node process and another, so it's not entirely pointless, even if the nomenclature is meaningless.
> It's not like an average soccer mum shops for CPUs and choses the 3nm because she is impressed with the 3 nanometre transistor gates, right?
No but enthusiasts (read: gamers) do. A lot of PC industry marketing nowadays is geared towards retail PC builders, who are very impressed by tech jargon.
Even though the process sizes aren’t objective or comparable across manufacturers, they do indicate meaningful improvement in sizing / density from a specific manufacturer.
Companies spend millions on brand marketing and that is the result. Needs to be short and memorable, your suggested naming is too long and more than 2 syllables.
The problem with that measurement is that you get extremely different numbers depending on what you're making. e.g. the density of memory is massively different to the density of logic.
So you pick something relatively standard (like an SRAM cell), and you use that.
There was a concerted effort some time back to formalize a standard measurement of process density using something like "million transistors per square centimetre" by a standards organization. (IEEE?) It wasn't a perfect measurement but it was a lot better than width. It failed so completely that I can't even Google it any more. The awkward name probably didn't help.
edit: it's "MT/mm2". Some people actually use it, but more in the informal sense that has the problem you espoused rather than the formalized one, which I still can't find.
A lot of Wall Street seems to have issues with non-GAAP accounting metrics like Annual Recurring Revenue (based on the few earnings calls I've listened in on), so I don't have high hopes that this would work.
I don't think that's necessarily any more problematic than measuring fluid viscosity is. Once someone starts getting into debates over the size of transistors on microprocessors, you are already getting pretty wonky and are going to need some science to describe things anyway.
That wouldn't capture the fact that there are fundamental physical barriers being run into. "Smallest feature size" is a perfectly adequate metric until things get complicated. Replacing an insufficient simple metric with a different insufficient one isn't the right solution imo.
Each node will have many sets of transistors depending on what you want in the tradeoffs between power/density/performance. Typically a half dozen or so options.
Well, drive current is proportional to W/L, and W times L is the area.
We can always increase W, but decreasing L requires technological advancements.
So I’m not sure of the value of area (W times L); L alone is more relevant (for the reason above).
We are dealing with FinFETs / GAA now, which are not the same as planar transistors, but I guess L and W should still be relevant, because these measurements are important even for resistors.
So probably some sort of equivalence between a FinFET and a planar transistor should be given to name the nodes.
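For reference, the W/L proportionality mentioned above comes from the first-order (long-channel, square-law) MOSFET model. A toy sketch with made-up device parameters; modern FinFET/GAA devices don't follow this simple model, but the W/L scaling is the point:

```python
def drain_current(w_um, l_um, mu_cox=100e-6, vgs=1.0, vth=0.4):
    """Square-law saturation current: I_D = 0.5 * mu*C_ox * (W/L) * (V_GS - V_th)^2.

    mu_cox in A/V^2, W and L in microns; illustrative values only.
    """
    return 0.5 * mu_cox * (w_um / l_um) * (vgs - vth) ** 2

base = drain_current(w_um=1.0, l_um=0.1)
wider = drain_current(w_um=2.0, l_um=0.1)     # doubling W doubles I_D
shorter = drain_current(w_um=1.0, l_um=0.05)  # halving L also doubles I_D
```

That's why increasing W is "free" current at the cost of area, while decreasing L is the thing that takes new process technology.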
My recollection of the specific quantum effect you're thinking of is quantum tunneling[1] of electrons. The problem occurs when the gate size gets small enough that electrons can pass through without the transistor being switched on, which starts to happen around 3nm.
The name used to correspond to the minimum gate length down to ~14nm. But the smallest feature size in 3nm (i.e. minimum gate length) is certainly much smaller than 24nm.
I couldn't find the information readily available online so I'm not sure I can answer that (all of this stuff is under NDA). But even then, I could only tell you the "drawn" dimension, which is what gets shown on the screen. There are a lot of digital and physical processing steps that change the actual dimensions. Once something is manufactured inevitably one of the IC teardown companies will do a cross section and publish all of this information.
Because anybody buying at an industrial level is wise to it being untrue at a gate length level. Plus it's easily available to consumers that the gate length isn't that size. Plus it's bullshit that at some level everybody buys, even Intel and TSMC in some roundabout way, at the highest levels for sure, though not at the ground level. At the ground level even eg 180 nanometer has virtues that 28 nanometer lacks, it's totally different things, different texture different everything, the graybeards know. They know. Then there's different radiation resistance but that's too obvious.
So if everybody believes in the nanometers, nobody cares.
This seems to be a good discussion. QM effects have already affected design decisions in some cases, and are a major factor in the design of the manufacturing process machinery (which uses extreme UV / soft x-ray):
> "Quantum effects typically occur well behind the curtain for most of the chip industry, baked into a set of design rules developed from foundry data that most companies never see. This explains why foundries and manufacturing equipment companies so far are the only ones that have been directly affected, and they have been making adjustments in their processes and products to account for those effects. But as designs shrink to 7/5nm and beyond, quantum effects are emerging as a more widespread and significant problem, and one that ultimately will affect everyone working at those nodes..."
and
> "“At very small dimensions of the body, the semiconductor band structure gets ‘quantized,’ so instead of a continuous energy spectrum for the carriers, for example, only discrete energy levels are allowed,” Mocuta said.
This quantum confinement has several possible consequences. Among them:
• A transistor threshold voltage change.
• A change in the density of states (DOS), or the number of carriers available for current conduction.
• A change in carrier injection velocity."
The most natural evolution forward would seem to be processing with unconventional quantum-effect phenomena, probably operation specific (addition, multiplication, ...). So not quantum computers in the sense of implementing quantum circuits, but rather opportunistic exploitation of quantum effects. These foundries and manufacturing equipment companies would logically sit on their insights as it might turn out to be a slow but steady march towards miniaturized quantum computers eventually.
Think of how thick towels started as a manufacturing defect: a machine in a conventional cloth factory had a part break down, and instead of churning out the usual flat cloth it erroneously wasted a loop of yarn at each 'weave' (for lack of a better word, as I'm not into weaving). With no immediate way to recover the yarn, the thick cloth was sold / distributed as scrap. The problem with the machine was identified and fixed. But the users of the cheap scrap came back for more as they discovered its superior water-absorbing qualities... Since the fault was documented, they could intentionally reproduce the desired 'faulty' cloth.
It's not even that. Nothing in a modern process node is as small as the node size, and in fact with modern processes it's not even close. I just checked, and per some reference on wikipedia TSMC 3nm's metal pitch is expected to be 24nm (so, ~12nm wide metal "wires" on the lowest level of interconnect are the smallest things you'll see on a picture).
What's been happening is that fabs have been exploiting more and more tricks to increase transistor density while still using the larger feature sizes. So flat transistors became finfet's, increasing their gate area and allowing chips to use fewer of them for the same silicon area, etc...
So read "3nm" as "a process with the same transistor density as you would expect had some ancestral ~90nm process been shrunk by a factor of 30".
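That reading can be sketched numerically. Both reference figures below are rough assumptions for illustration (the ~0.35 MTr/mm2 density for a 90nm-class planar process is my own ballpark, not vendor data):

```python
import math

# Treat node names as density-equivalent labels: if an old planar node of
# size s0 had density d0, a process with density d gets the "equivalent"
# name s0 * sqrt(d0 / d), since density scales roughly as 1/s^2.
ref_node_nm = 90.0           # ancestral planar node
ref_density_mtr_mm2 = 0.35   # assumed rough logic density at 90 nm (MTr/mm^2)

def equivalent_node(density_mtr_mm2):
    return ref_node_nm * math.sqrt(ref_density_mtr_mm2 / density_mtr_mm2)

print(f"~{equivalent_node(300):.1f} nm-equivalent")  # ~300 MTr/mm^2 lands near "3nm"
```

So a quoted ~300 MTr/mm2 comes out around "3nm" under this convention, even though no physical feature is anywhere near 3nm.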
Thanks, this is a great explanation. It seems like these "nm" indicators are much like measuring a car's power in "horsepower". It is certainly measuring something real but its connection to actual horses has long since atrophied.
My understanding was always that the litho(?) mechanism can "print" at 3nm resolution, which allows the overall scaling of all features to be based on this "pixel" resolution.
I was dumbfounded when even hearing about 14NM YEARS before it was a thing (I also got to see 64-core concepts at Intel in ~1999 or so).
3nm is mind-bogglingly amazing.
Whatever happened to "voxels"? (Before the graphics term, Intel was creating "voxels" that used light to transfer signals vertically between layers... but I stopped following CPU arch years ago.)
Firstly, I'd note that some commentators are saying this number (3nm) is a meaningless marketing term. That's not correct.
3nm refers to the smallest feature, e.g. the width of a channel[1], not the size of a transistor.
There are quantum effects at this level (and indeed larger), and one of the big challenges with process design is minimising them. See [2] for an overview.
You should be reading "3 nm" as "3 nm equivalent". It doesn't mean anything is 3nm, it's just the simplest way of expressing transistor density without making people transition to a different measurement they are not used to. I would personally like Tr/μm², but I'm fine with nm too.
Yes...but various scaling issues & quantum weirdness & a load of other miseries were ramping up (with each shrink) long, long before 3nm. That's part of why the auto industry can't "just build a fab" for the ~30x larger feature size that their chips need.
The problem for anyone wanting to build an older generation fab is cost.
It costs (within an order of magnitude) the same to build a modern fab as it does to build one for a process 1-5 generations back, maybe more. You have a roughly similar backlog for equipment too. For your troubles you get far fewer chips per wafer so your cost per chip is higher. And the chips are slower and use more power. That makes it much harder to get any kind of payback on a depreciating asset that only gets more out of date. You also risk demand for your new 45-nm or 90-nm fab dropping off toward zero in 10 years.
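The "fewer chips per wafer" point can be made concrete with a toy calculation. All numbers below are illustrative assumptions (it ignores edge loss, yield, and the fact that wafer costs do differ somewhat by node):

```python
# Same design on an old node takes ~4x the die area (a ~2x linear shrink),
# so the same wafer spend buys far fewer dies.
wafer_cost_usd = 10_000       # assume roughly similar wafer cost either way
wafer_area_mm2 = 70_000       # ~300 mm wafer, usable area (rough)
die_area_new_mm2 = 50         # design on the newer node
die_area_old_mm2 = 50 * 4     # same design, older node

def dies_per_wafer(die_area_mm2):
    # Naive packing: ignores edge loss and defect yield.
    return wafer_area_mm2 // die_area_mm2

cost_new = wafer_cost_usd / dies_per_wafer(die_area_new_mm2)
cost_old = wafer_cost_usd / dies_per_wafer(die_area_old_mm2)
print(f"${cost_new:.2f}/die new vs ${cost_old:.2f}/die old")
```

Under these assumptions the old-node die costs ~4x as much, before even accounting for the chips being slower and hungrier.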
Historically older fabs would see a drop-off in business as new chips were designed for new processes so as time went on there was more and more capacity available for cheap on the older nodes. That cycle is and has been slowing down though so there isn't much slack even for older fabs.
I'm not sure where the market will end up. If the current shortages are a temporary backlog + hoarding then things will work themselves out within 1-3 years and anyone starting lots of fab construction risks bankruptcy - something that has happened multiple times in the past as keeping a fab idle is equivalent to burning it down so you end up having to dump chips at cost or even a small loss. On the flip side if the recent disruptions are merely accelerating an existing trend then anyone kicking off fab construction stands to make a lot of money.
I don't think that's what's stopping them building factories for outdated processes, rather it's "after you clear the backlog, there will be no high-margin items to produce". So everyone is just waiting instead.
You could make a microcontroller on a 3nm node if you wanted to, but first you'd have to design a new core, and then tell people to pay $100 per chip instead of $0.01.
TL;DR: the chip shortage is an economics problem, not a physics problem.
Accounts vary, but the chip shortage looks like it will end up costing the auto industry ~$10 billion in profits. If any major automaker could "just build a fab" - in the sense of "the physics & engineering of ~90nm chip manufacture are pretty simple & cheap, so $250M and 6 months will get us a good-enough fab" - then at least one automaker (or A-list supplier) would have done so. If a new fab paid for itself 10X or more before the backlog cleared, no sane CFO would much care whether it was worth keeping open after that.
(Edit: Yes, trying to read things in a different way - "weird quantum artefacts" at 3nm have nothing whatever to do with the automakers' problems.)
(Edit2: Here is the point which I was originally trying to make: "Chip manufacturing, even at 3nm x ~30 = ~90nm, is still extremely difficult. That fact is a big part of why the automakers did not attempt chip manufacturing, even at ~90nm.")
Well, I don't think the logistics of 90nm chip manufacture are simple or cheap. It's still pretty high tech stuff, people aren't doing it in their garage.
I don't know why automakers didn't engage their partners here to expand manufacturing. I am sure they asked, and the companies that can build 90nm fabs decided not to. Maybe it doesn't make sense after the backlog is cleared, maybe they like the higher prices? And if a car company wanted to start manufacturing chips themselves, they'd have to hire engineers, license patents, work out bugs, etc. and the risk is that the shortage is completely gone after you do all of that. (And, all this during a pandemic. If they wanted to use wood to build the physical building containing the fab, there was a shortage of that. So, a lot of problems to solve, and 10 billion dollars starts looking like a small number.)
That seems a bit far-fetched. TSMC N3 has 314.73 MTr/mm2. Just making a quick back-of-the-envelope calculation: 314.73 MTr/mm2 is 3177.33 nm^2 per transistor. That includes interconnect. Making it 1000x smaller would make a single transistor ~3nm^2 including interconnect. I would most definitely expect quantum effects at that level.
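Reproducing that arithmetic (the 314.73 MTr/mm2 figure is the quoted density, interconnect included):

```python
# Average footprint per transistor at the quoted N3 density.
density_per_mm2 = 314.73e6            # transistors per mm^2
nm2_per_transistor = 1e12 / density_per_mm2   # 1 mm^2 = 1e12 nm^2
pitch_nm = nm2_per_transistor ** 0.5  # side of an equivalent square

print(f"~{nm2_per_transistor:.0f} nm^2/transistor, ~{pitch_nm:.0f} nm effective pitch")
```

The implied effective pitch is on the order of 56 nm per transistor, nowhere near 3 nm, which is the point being made about the node name.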
Any links as to when he said it? A quick Google search didn't return any results. Or did you mix up Cristiano Amon with Jim Keller who said something similar?
If it was a podcast [1] then it was Jim Keller. And if it was a couple of years ago, then the CEO of Qualcomm would have been Steve Mollenkopf, not Cristiano Amon. And Steve Mollenkopf isn't an engineer, so he is highly unlikely to ever say anything like that.
Does anyone have rumors/insider knowledge about the progression of Epyc/Threadripper?
Threadripper seems insanely expensive right now, will the next generation be faster at least or use less energy? Or, in other words, does it make sense to wait?
Threadripper is pretty reasonably priced; a 5995WX is about $101/core, or about 3% more than an Epyc 7773X for 10% more performance. For comparison, a Xeon Platinum 8380, which has roughly similar perf per core, costs $224/core. Sure, consumer CPUs are a bit cheaper; the i9-12900KF is about $45/core (though half of those are slow cores) and the 5950X is about $34/core, but price discrimination for server lines has always been fairly standard. I think the least expensive you can get into Epyc Milan is around $55/core, but that's on a part that only needs half the cores on an 8-core chiplet to be functional; the 7773X needs all of them, across 8 chiplets, and the 5995WX is that but with even tighter binning for higher clocks.
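For anyone wanting to redo the per-core math, a quick sketch. The totals below are approximate figures consistent with the comment (64 cores x ~$101 and 40 cores x ~$224), not authoritative MSRPs:

```python
def price_per_core(price_usd, cores):
    """Naive $/core; ignores perf differences between core types."""
    return price_usd / cores

tr_5995wx = price_per_core(6464, 64)   # Threadripper 5995WX, ~$101/core
xeon_8380 = price_per_core(8960, 40)   # Xeon Platinum 8380, ~$224/core
print(f"5995WX ${tr_5995wx:.0f}/core vs 8380 ${xeon_8380:.0f}/core")
```

As noted, $/core is a crude metric for chips with mixed core types (like the 12900KF's P+E cores), so it's only a first-order comparison.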
Almost all Threadripper-capable silicon ends up in Epyc. Once AMD's server market share is saturated, Threadripper will come back. The reason Intel is looking to push new HEDT is that their server market share is decreasing.
Genoa, from which the next generation TR will likely derive, is up to 96 cores in 12 N5 chiplets with a new N6 IO chiplet that supports PCIe5 and 12 channel DDR5.
It has some added instructions (AVX512) and will likely be more efficient at the same clock rate but they will be pushing the clocks higher instead.
I would not wait around unless you have plenty of time. I doubt a cut down Genoa will show up on workstations until the server market is satiated.
I wouldn't want to be waiting on Epyc given Intel's difficulty with sapphire rapids. Probably going to be impossible to get them until at least q2 next year when sapphire starts to hit the shelves. Until then I bet the big cloud providers will buy out all the stock.
Next gen threadripper will be even more overpriced. Threadripper was nice with gen 1 Ryzen because AMD had to earn mindshare. Now it's just a thorn in their side, thus the attempts to gradually move it from enthusiast to pro territory.
Threadripper Pro is going to be the only workstation line that makes sense anymore. Consumer Threadrippers are being phased out. This is all public knowledge. You are better off with high-end Ryzen now.
I'm designing in 5nm right now and every process has multiple standard cell libraries to choose from. There is usually a high performance library and then a high density library with smaller cells. The high density library has smaller transistors but the smaller weaker devices are slower. A single chip though will have a mix of both. The high performance cells would be used in the CPU cores while the slower high density cells are used in blocks that don't need the extra speed. An example is an ethernet PHY where the device on the other end of the cable expects you to run a certain speed.
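A toy illustration of that library tradeoff as it might appear during implementation. The numbers and names are made up; real flows pick cells per-path during synthesis, not per-block like this:

```python
# name: (relative max speed, relative area per gate) -- illustrative only
LIBRARIES = {
    "high_performance": (1.0, 1.0),
    "high_density":     (0.7, 0.6),  # smaller, weaker, slower cells
}

def pick_library(required_speed):
    """Use the smaller, slower cells whenever they still meet timing."""
    hd_speed, _ = LIBRARIES["high_density"]
    return "high_density" if required_speed <= hd_speed else "high_performance"

# Hypothetical blocks with normalized speed requirements:
blocks = {"cpu_core": 0.95, "ethernet_phy": 0.3, "uart": 0.1}
choices = {name: pick_library(speed) for name, speed in blocks.items()}
print(choices)
```

The ethernet PHY example from the comment fits this pattern: its speed is fixed by the link partner, so there's no benefit to burning area on fast cells.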
Cool! I’d love to hear more about the node design process. It seems as though the blocks of transistors are presumably already combined into chip Lego, and each of these uses interference patterns to etch things that are smaller than would otherwise be possible? We must be hitting the theoretical limit in a few generations, right?
Even if all parts of a die use the same transistor library, you'll still usually have lots of different clock domains so that different cores can run at different speeds according to how busy they are. And you'll sometimes have a few cores that have been identified as capable of running one or two speed bins higher than the others, from ordinary variation.
But there's probably also an example of some phone chip that has Cortex A53 cores implemented on two different transistor libraries to hit significantly different performance/power points.
That would not solve much. Different types of transistors take different surface areas, so it would not prevent different companies using different references. Also, they can very well have a high-density library for benchmarks that can be quite different from what they can do when they actually need the thing to work.
The solution is simple: treat their numbers as brands. A Core i9 is generally better than an i3 of the same generation, but where a Ryzen 5 is compared to that is anyone’s guess (depends on the exact models, generations, etc).
Node sizes have had nothing to do with a characteristic length for quite a while now, that ship has sailed.
They sorta have these things, but no one looks at them.
For example, the scaling factor from, say, 5nm to 3nm for transistors is X. But for SRAM, things have been getting progressively worse, so it is often X/2.
And with caches going up you end up using a process node terrible for sram, but needed for cpu transistors, and it’s a huge waste. You can see why you might use a different process tech for caches, and AMD is clearly going this direction.
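A toy calculation of how that hurts cache-heavy dies. The per-node scaling factors below are illustrative assumptions for the sake of the sketch, not TSMC figures:

```python
# Rough sketch of why SRAM-heavy dies benefit less from a shrink.
# These scaling factors are illustrative assumptions: logic assumed to
# gain ~1.7x density per node, SRAM only ~1.2x.

LOGIC_SCALING = 1.7   # assumed logic density gain per node
SRAM_SCALING = 1.2    # assumed SRAM density gain per node

def shrunk_area(logic_mm2, sram_mm2):
    """Die area after one node shrink, given old logic/SRAM areas."""
    return logic_mm2 / LOGIC_SCALING + sram_mm2 / SRAM_SCALING

# A hypothetical 100 mm^2 die that is half cache:
old = 50 + 50
new = shrunk_area(50, 50)
print(f"{old} mm^2 -> {new:.1f} mm^2 ({old / new:.2f}x density gain)")
```

With these numbers the whole-die gain is only ~1.4x despite the 1.7x logic shrink, which is why splitting cache onto an older, cheaper node starts to look attractive.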
Anyhow, TSMC 3nm is just a marketing term. On the physics side, it has about 7-8 key differences from the previous 5nm. That’s 6-7 too many things for people to care about or remember.
Intel used to reference a part of the transistor when it talked about scale. But then the geometry of the transistor changed! That number no longer has meaning. And transistor geometry changes over time, and you have multiple to choose from on the same process node. Oops.
Even if it were practical, the question remains: why would they do that?
The only people upset with this are benchmark warriors. People in the industry, or familiar enough with it, know how to navigate the characteristics of a process and do not need one single neat number. Customers by and large don’t care: businesses are after the cheapest and best supported, and consumers are mostly price-sensitive except for higher-end brands that don’t communicate at all on this sort of thing.
This would not solve any of the industry’s problems.
> please, change this metric to transistor per square millimeter. Thanks!
I think there are problems with that metric too. Not all transistors are created equal. Depending on the switching speed that you want, how much leakage current you can tolerate, etc., I bet you can vastly change how many transistors you can fit in a given area.
This won't help in terms of clarity of how good the transistor is. As others mentioned, there are different kinds of transistors, and different manufacturers would still use their smallest (read: not necessarily the fastest/strongest/most used) to market their process.
A major reason I think this won't work is that transistor size no longer limits the density of the chip. Nodes below 20nm have transistor contacts (what connects the silicon to metal) and metal tracks that are much larger than the transistor. The contacts typically limit the pitch between transistors and hence the density. A lot of innovation is now done to shrink those elements rather than work on the transistor physics/materials/size directly.
> A major reason I think this won't work is that transistor size no longer limits the density of the chip. Nodes below 20nm have transistor contacts (what connects the silicon to metal) and metal tracks that are much larger than the transistor. The contacts typically limit the pitch between transistors and hence the density. A lot of innovation is now done to shrink those elements rather than work on the transistor physics/materials/size directly.
Layman question from not GP, but wouldn't reducing the "overhead" around transistors increase the transistors-per-area metric and thus be at least somewhat useful?
Hey, you are right. I thought about that right after I posted my comment but the devil is in the details.
As you shrink the contacts and the metals their resistance and capacitance exponentially increase. This means that both your power will go up and your speeds will go down. Also the process becomes more prone to manufacturing errors. So shrinking those elements blindly without innovation just to increase the density numbers is not really a good metric of the quality of the process.
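To make the resistance part of that concrete, here is a toy model using the textbook wire-resistance formula R = ρL/(w·h). The dimensions are made-up round numbers, and real nanoscale interconnect is worse still due to surface and grain-boundary scattering, which bulk resistivity ignores:

```python
# Toy model: wire resistance R = rho * L / (w * h). Shrinking the width
# and height of a track together makes its resistance grow quadratically,
# and RC delay (R up, C roughly flat per unit length) grows with it.

RHO_CU = 1.7e-8  # ohm*m, bulk copper resistivity (textbook value)

def wire_resistance(length_m, width_m, height_m, rho=RHO_CU):
    """Resistance of a rectangular wire, ignoring nanoscale effects."""
    return rho * length_m / (width_m * height_m)

r_old = wire_resistance(1e-6, 40e-9, 80e-9)  # assumed "old" track
r_new = wire_resistance(1e-6, 20e-9, 40e-9)  # both dimensions halved
print(f"halving w and h: {r_new / r_old:.0f}x the resistance")
```

Halving both cross-section dimensions quadruples resistance for the same length, which is the basic reason blind contact/metal shrinks cost power and speed.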
I did not know about it, thanks! It looks like it is holding well.
I work in analog, so I'd like a more analog-based definition (which also works for digital, tbh) based on size or density, transconductance (gm), output resistance (ro), and unity-gain frequency (fT), but that's never going to happen :<
Maybe something closer to a real-world design would be better, like "space-optimized 6502 processors per millimeter^2" or "kilobytes of SRAM per millimeter^2".
Ideally you'd have some sort of metric that includes power efficiency, performance, and maybe even cost since smallness isn't really a feature in itself unless you're making something space-constrained like a hearing aid. It's hard to condense that down into a single number though.
I don't think this is fair though. Some processes may support FinFET, or GAAFET, or some neat trick where, if you switched to the process but didn't optimize for size, you may be able to double the performance at half the power.
Like you suggest, it's some crazy multidimensional problem space. There's no hope in representing it with one number. And, nobody that's actually designing would care about any of these numbers. In the end, they would only be used by marketing, which is all they're used for now.
Unfortunately, details like that are considered IP and kept secret. Sometimes there's a press release that gives some sort of metric somewhat related to density. But only when that metric happens to make that company's process look good relative to others.
Do you mean transistors per cubic millimeter? Cubic centimeter (cc) may be a better choice since it’s already a more common measure of volume at smaller scales for most of the world.
Areal density rather than volumetric density will continue to be the most relevant basis for comparison as long as chips are still being fabricated on wafers with a fixed total area. Flash memory has been 3D for years but the important density measurement correlated to cost is still Gbit/mm^2.
No, cubic mm makes no sense because transistors are created on the surface of the wafer. It's surface area that matters. You can't get more transistors by using a thicker wafer.
> You can't get more transistors by using a thicker wafer.
Well, you technically can, but it's not starting with a thicker wafer, it's growing a second (or more) layer of dopable silicon on top of the normal doped silicon/transistor gate insulator/wiring/more wiring/even more wiring stack of the chip, then adding another full stack on that new dopable surface. Pretty sure the fabrication infrastructure for that makes conventional, GDP-of-a-small-country photolithography fabs look cheap by comparison, though. Plus you'd be at least squaring (and probably much worse) the already not very good production yield.
Yes I'm aware of Wafer on Wafer and similar technologies, but cubic mm would still be the wrong measure. Even if you stack multiple wafers or dies on top of each other you're still restricted by surface area, not volume. (Not yet anyway - I guess there might be a future where HBM gets too tall to fit in phones but we're far away from that future.)
> I would assume you could dope in interconnects, between the sides.
Actually, you probably can't; the distance between the sides is practically astronomical compared to the horizontal feature size. I don't remember the exact dimensions, but if you assume a 1-nm trace is equivalent to a 10-meter road, then a 1-mm wafer thickness would be equivalent to a 10,000-km planetary diameter, so you're getting close to the scale of routing an internet cable through the center of the earth. (At least it's not molten, I guess?) And doping generally works by diffusion, not drilling a hole.
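Sanity-checking that road analogy with the stated scale factor:

```python
# If a 1 nm trace maps to a 10 m road, the scale factor is 1e10.
# A 1 mm wafer thickness at that same scale:

scale = 10 / 1e-9          # 10 m per 1 nm -> 1e10
thickness = 1e-3 * scale   # 1 mm, scaled up (meters)
print(f"{thickness / 1000:.0f} km")  # 10000 km, roughly Earth-sized
```

So the "planetary diameter" comparison holds up: 1 mm scaled by 1e10 is exactly 10,000 km, close to Earth's ~12,700 km diameter.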
It's a term meant for transistor manufacturers which historically referred to the gate length, the defining feature of the field effect transistor. It affords multiple advantages in addition to more transistors per mm^2 (which is merely a design choice), such as lower power usage.
In fairness, they did say in the article that the 3nm configuration has 1.6x the feature density of their 5nm configuration, which is arguably a more complete metric than transistor packing.
I had a question. I apologize if this is a bit naive. The first slide shows the N3 releases going through 2025, and then we see the arrival of N2. How significant are these numbers in regard to the end of Moore's law? Will we see N1 in the next 10 years, or is that getting to a process that requires something radically new, like carbon nanotubes, etc.?
I'll also mention that we should have reasonable node progress through 2030 at least. There's plenty of room at the bottom with just EUV multi-patterning and a few other very practical improvements. Beyond that I'm less hopeful but I wouldn't bet against it.
I think this is not the correct way of thinking about Moore's law. As mentioned elsewhere, Moore's law is not so much a law as a self-fulfilling prediction. Semiconductor companies follow Moore's law (or recently an approximation) to stay competitive because their competitors do it, and hence Moore's law continues. The cost of not marching along to Moore's law can mean you get left in the dust (see Intel).
I think a better question to ask is whether the underlying economic factors behind continuous process tech improvements are healthy. Is there enough value-add to the final user by continuous process tech improvements? Are the costs for that improved process tech scaling with the value-add? And is the competitive landscape healthy? While that holds true, companies will keep looking for process tech improvements to give them a competitive edge.
In the 80s-90s this was very much true, but in recent decades less so, hence the consolidation/reduction in the number of foundries, foundry services to amortize the cost of older node tech, and R&D going to companies/partnerships that can capture the most end-user value-add (Apple/TSMC).
Looking forward I think the economics are very healthy with a design-house/foundry service model that we have right now so I would guess that Moore's law (or some approximation) will continue for the next decade. There are a lot of process tech innovation that can lead to better performance that are not necessarily scaling related. In fact, scaling transistors stopped being very useful a while back afaik due to the breakdown of Dennard's scaling.
>"I think a better question to ask is whether the underlying economic factors behind continuous process tech improvements are healthy. Is there enough value-add to the final user by continuous process tech improvements?"
I agree with this and I suppose I just sort of reached for Moore's Law out of habit and maybe a bit of laziness. Thanks for articulating the question more appropriately.
>"Looking forward I think the economics are very healthy with a design-house/foundry service model that we have right now so I would guess that Moore's law (or some approximation) will continue for the next decade."
Completely insignificant. They're just names. Intel is going "Intel 4" -> "Intel 3" -> "Intel 20A". Everyone will call them 4nm, 3nm, 20Å (Angstrom) despite them explicitly not being that, but it's a hard industry habit to break.
I can offer a broad generalization. The new manufacturing process should get us some double digit percentage gains in performance, power consumption or heat. Current Apple product designs are mainly constrained by heat as seen with the recent thermal throttling of the new MacBook Air, so a future M3 processor could be pushed a little harder before it hits its thermal limit. Performance and battery life are already pretty fantastic, so not as much of a practical difference there.
The only current products that are really hurting for better chips are high powered wearables like watches, VR headsets or AR glasses. The next few years should see some tangible improvements in those products, but probably not much difference in desktop, laptop or phone. Datacenters, since they run at high utilization rates, will continue to take advantage of the cost savings of slightly more performance per watt and dollar per transistor costs.
That is offset somewhat by higher defect rates in newer and smaller processes and ever-increasing manufacturing costs per wafer. But the metric that matters is cost per transistor. Historically that was always dropping, but more recently it may be flat for some nodes, increasing temporarily, or just not dropping as fast as the historical trend. Some cost estimates for TSMC's 3nm node:
"N3 technology will offer up to 70% logic density gain, up to 15% speed improvement at the same power and up to 30% power reduction at the same speed as compared with N5 technology (According to TSMCs website). If this holds true we could see 300+ MT/mm2."
That's not what Anandtech found when they tested the performance and efficiency cores used in the A15 and M2.
Performance:
>In our extensive testing, we’re elated to see that it was actually mostly an efficiency focus this year, with the new performance cores showcasing adequate performance improvements, while at the same time reducing power consumption, as well as significantly improving energy efficiency.
Efficiency:
>The efficiency cores have also seen massive gains, this time around with Apple mostly investing them back into performance, with the new cores showcasing +23-28% absolute performance improvements, something that isn’t easily identified by popular benchmarking. This large performance increase further helps the SoC improve energy efficiency, and our initial battery life figures of the new 13 series showcase that the chip has a very large part into the vastly longer longevity of the new devices.
I agree. But my point was that the industry in broad strokes is not moving into the direction of lower power. The opposite in fact. It's been happening for years.
> We may be close to the age where most laptops are fanless, for example.
Correct me if I’m wrong but high performance fanless laptops are only possible with ARM chips atm. Has any other big laptop chip designer even announced plans for anything remotely close to M1/M2?
Architecture does still matter. M1's decoder is 8 issue. You just can't build that for x86 without pipeline depth or some other resulting tradeoff making it pointless.
I'd be curious too. Wikipedia seems to say that this might be mostly marketing speak: 'The term "3 nanometer" has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by IEEE Standards Association Industry Connection, a 3 nm node is expected to have a contacted gate pitch of 48 nanometers and a tightest metal pitch of 24 nanometers.'
Samsung went into 3nm production last month and says the new fabrication process is 45 percent more power efficient than its previous 5nm process, has 23 percent higher performance, and a 16 percent smaller surface area.
So extrapolating this, we'd expect 1nm (if it is even possible) to offer roughly the same amount of gain. Not sure what we can have beyond 1nm.
N2 in 2025. N1.4, or maybe N14A, in 2027. Roughly N1 in 2030. I believe we will hit an economic wall by the end of this decade, i.e. the cost of the next-gen node will exceed what the market is willing to pay for it.
It's hard to remember looking at such tiny marginal improvements that they do compound over successive generations. "Only" a few percent is the difference between the Apollo computers and being able to simulate them with Redstone in Minecraft on a consumer computer over the gap of time that separates them.
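The compounding is easy to underestimate. With an illustrative 15% per-generation gain (the number is an assumption for the sketch, not a measured figure):

```python
# "Only a few percent" per generation compounds: ten generations of a
# 15% per-node improvement (illustrative) is roughly a 4x gain.

per_gen_gain = 1.15
generations = 10
total = per_gen_gain ** generations
print(f"{total:.1f}x over {generations} generations")  # ~4.0x
```

And that understates it for long spans: over the ~25 node generations separating the 1960s from today, even modest per-node gains multiply into the orders-of-magnitude gap the comment describes.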
Minecraft contains some native dependencies, though; you'll need something like https://copy.sh/v86/ or https://bellard.org/jslinux/ with the right operating system image to run it in browser.
From the outside, having Apple securing the latest and greatest nodes for so long feels kind of anti-competitive.
Apple has been selling phones and laptops with N5 for two years, and I don't think we have any competing product in the hands of consumers using N5 yet. If I'm not mistaken, Nvidia and AMD are about to release products using N5.
While I fully agree it gives Apple a huge advantage to be on a better node than competitors by 2+ years.
But this is also a free market.
Apple is both willing to pay more and has the needed scale to buy up all capacity. If someone else wanted to both pay more and buy up all capacity - they have the ability to do so.
I hate Apple and monopolies as much as anyone. But, no - what was described is not anti-competitive at all.
If - in addition to what was described - part of the deal is that Apple would only make the deal if TSMC would not add any more capacity for 2 years NO MATTER WHAT - that is blatantly anti-competitive and monopolistic.
If no one else is willing to pay for TSMC to build another facility to produce more chips - then that's a free market. Apple was the highest bidder and won.
It would be a lot simpler if anti-competitive behaviour only encompassed clearly immoral and unfair behaviour, but it tends to be more nuanced than that. There's no simple rule, and a court would probably have to spend a long time considering such a case.
The essential facilities doctrine might apply. The question isn't whether competitors are literally forbidden from the latest chips, it's whether they're "practically or reasonably" unable to access them, and all evidence suggests that's currently the case. Possibly it's the competitors' own fault for refusing to pay the entirely reasonable price TSMC is insisting on, and perhaps Apple's contract with TSMC actually has carefully written clauses to ensure others could practically gain access without having to spend enormous amounts of money, but which nobody has been willing to use for their own reasons. But it certainly seems possible that the contracts effectively ensure Apple has sole access to the latest hardware, which would line up neatly with the apparent scenario we're in: that Apple currently has sole access to the latest hardware.
That's not to say it's a slam dunk case either. I wouldn't care to guess either way.
Shutting out all competitors from new technology for years by outspending them (arguably from profits largely made via other monopolistic like behaviors) could qualify for many.
I'll never understand the endless parade of apple goons who rush to defend them at every turn. Use some critical thought instead of treating it like sports
What you're describing is a monopsony, not a monopoly. And Apple is not by any contortion of the imagination a monopsony for chips. Apple and Samsung are the largest buyers, but they're barely cracking single digits market share.
I'm describing a company that is able to pay far higher than competitors due to anticompetitive practices which drive their margins far above what would be realized in a truly competitive environment.
I'd gladly like to see the alternative timeline where Microsoft charged 30% for all software on windows and where you and others shamed them for it. Because I know for a fact all of those defending Apple here would have at the time
Most of the successful companies outside of games either don’t allow in app purchases at all and you pay for the subscription outside of the App Store or give users the choice of paying outside of the App Store.
And if you look at their revenue breakdown, it’s clear that most of it comes from hardware.
They have a monopoly on providing services within iOS.
When an ecosystem becomes large enough, societally impactful enough, and is controlled exclusively by one entity, that qualifies as a monopoly within that ecosystem.
You remove any capacity for competition to drive down margins. Apple can ban Spotify tomorrow to promote apple music on their devices, for example. Not anti-competitive in your mind?
You have two gated monopolistic ecosystems, choose the one you want to be gouged by. In a competitive environment, margins get compressed close to cost to deliver. If a company can achieve huge margins on items that could otherwise be much cheaper to deliver in a competitive environment, that qualifies as anti competitive.
How much does it cost the supplier to host apps via CDN? Or to process a single payment? Apple can take as much margin as they want on these because nobody else is allowed to provide those services within their ecosystem
The law will adapt to this, it's quite obvious to those that are unbiased.
> You remove any capacity for competition to drive down margins. Apple can ban Spotify tomorrow to promote apple music on their devices, for example. Not anti-competitive in your mind?
The amount of REVENUE Apple makes from iTunes is likely less than $5Bn per quarter.
Apple is not using their iTunes monopoly to buy chips.
They're using their insane amount of phone sales (at absurd profit margins) to buy chips.
Again, I hate Apple as much as anyone. Anyone who thinks Apple is a monopoly in phones is a moron. They don't even own 30% of the smartphone market...
I agree Apple is anti-competitive within the ecosystem of the iPhone. That's not going to change their ability to buy all of TSMC's chips.
You can poll your friends, colleagues, and relatives: how many people use iPhone vs something else? If the share is more than 50%, then Apple is effectively a monopoly.
Same with the tablet market: just make a table of how many iPads vs alternatives.
> You can poll your friends, colleagues, and relatives
It only makes sense to do this on the level of a governing body, because the term 'monopoly' is defined by governing bodies and the remedies against it are enforced by the same.
You can cherry-pick / gerrymander any group you want to make it look like Apple has a monopoly by the numbers, but in order to make it stick you'd need to demonstrate that Apple fixes prices (they don't, their phones cost ~2x or more what competitors' phones cost) or that they're somehow excluding other competitors from access to the phone market (they're not).
The current definition of monopoly comes from the industrial age, when all goods were perfectly replaceable with each other, like steel, tobacco, grain, or other commodities. From this point of view, prices of goods are similar to each other, and market share (in units sold) is a good enough proxy for identifying monopolists (a top-line approach).
However, the iPhone is not a commodity; it comes bundled with an OS and a plethora of apps and services, the whole ecosystem.
If we identify monopolists using a bottom-line approach, then Apple would be a clear monopolist. As would Facebook and Google, as they should be, if lawmakers truly cared about the public good.
> If we identify monopolists using bottom-line approach, then Apple will be a clear monopolist.
Per the parent, the challenge is to convince governments to take your proposed approach. Repeating the argument to try to convince a HN crowd is not just non-productive, but counter-productive if it doesn't move toward this goal.
Taking away property and/or converting investments into a utility when a business becomes too successful results in a lot of nasty byproducts and most governing bodies will shy away from this.
Instead, you likely see the issue completely side-stepped. Nobody truly cares about labelling Apple a monopoly or declaring iOS to somehow be accepted as a market unto itself; they want a change in behavior.
The challenge is how to side-step this when, per the letter of the law, they have literally done nothing wrong. For instance, the EU will likely get a lot of pressure due to new regulations being aimed solely at US-headquartered companies for the primary benefit of companies in the EU.
>I'll never understand the endless parade of apple goons who rush to defend them at every turn.
Maybe because people are not defending Apple but actually giving the correct account of the story?
I dislike modern Apple, or Tim Cook's Apple. But that doesn't mean I would agree that what they are doing (in the context of TSMC) is monopolistic. Without Apple, there would be no N3 by 2023. It is as simple as that. Apple in this case is not a customer, but more like a partner who is willing to invest (and take the associated risk) and spend to get and push for the latest technology.
What if we found out they were shredding all extra silicon so that competitors cannot get them?
Backing up though I think it's interesting that Apple has so much money that they can afford to buy up leading nodes for years. Why do they have that much money? I think it's obviously because of anti competitive behavior elsewhere such as their App Store which basically gives them a money printer.
This is clearly false. They break down their revenue every quarter, hardware revenue dwarfs services revenue. Even within their services revenue there is Apple Music, subscriptions and the reported 14 billion a year that Google pays Apple to be the default search engine.
Their hardware is the real money printer as apple only make ~20% of its revenue from “services” which includes things like Apple TV, music, and iCloud storage.
Free markets often result in accumulation of wealth and marketshare, with the big players making anticompetitive moves. Anti-trust regulations make the market less free.
The free market sucks. People need to stop valorizing it.
That's the problem, that has not been solved, because those corporations growing so big have enough money to buy any politician or government.
To have a truly free market it should be accounting for this edge case - that is once a company grows beyond certain size it should by law be split.
There should be other provisions too. For instance, if a big corporation causes legal trouble, it should have to match the legal budget of the claimant: if someone is suing Apple because of an issue with their hardware and Apple sets aside a £100m legal budget, by law they should be required to set aside £100m for the claimant, so there is a level playing field. That of course only if the judge decides the case should go on.
I'm not sure what you're getting at. The definition of a free market is one in which private corporations can do what they want without government regulation; the more laws there are which regulate business, the less free the market is. So saying that a law which splits companies once they reach a certain size makes the market freer sounds like an oxymoron to me.
It would certainly have the potential to make the market work better and improve competition, but that's not what a free market is.
It's no longer a free market if a big enough corporation can capture it and essentially dictate the rules.
Corporations also can't do what they want - for instance they cannot commit fraud, evade taxes, sell dangerous products and so on.
You have something like that in sports: in order for participants to compete freely, they are tested for illegal drugs, so that no one has an unfair competitive advantage.
I think we're using different definitions of a "free market" here. I'm using something close to this one from Oxford which Google shows when googling "free market definition":
> an economic system in which prices are determined by unrestricted competition between privately owned businesses.
"Unrestricted competition", to me, clearly says that businesses should be able to compete without such restrictions as being split up by the government if they become too big.
Or this definition from investopedia (the first actual result on Google):
> The free market is an economic system based on supply and demand with little or no government control.
The government splitting up businesses is certainly a form of government control.
However, Wikipedia's first paragraph is:
> In economics, a free market is a system in which the prices for goods and services are self-regulated by buyers and sellers negotiating in an open market without market coercions. In a free market, the laws and forces of supply and demand are free from any intervention by a government or other authority other than those interventions which are made to prohibit market coercions. Examples of such prohibited market coercions include: economic privilege, monopolies, and artificial scarcities.
Apparently, the government imposing control and restricting competition is part of Wikipedia's definition of a "free market", so extremely harsh anti-monopoly and laws against anti-competitive behavior fits perfectly well in Wikipedia's definition, but makes the market less free according to the definitions I've been following.
The important part is, I agree that a lot of government intervention is required to make markets function efficiently. I just personally wouldn't call that a "free market", rather a well-regulated market.
I broadly agree with your post, however, the "Unrestricted competition" as you define would only work in a vacuum.
A company growing too big creates restrictions for other companies: it can lobby the government to increase regulation of the area it operates in, so the barrier to enter the market gets set higher and higher for new companies, or ensure that the tax system limits the growth of companies that could become competition in the future.
Another example I can think of is when a big corporation buys up the entire supply of parts that other businesses need to build their products. A single whale move can wipe out entire sectors (that's how the chip shortage is affecting small and medium businesses).
So your definition of free market for me is like a great function that works very well, but crashes under edge cases.
In this particular example, it seems like fortune favored the company that invested billions of dollars and years of R&D in creating a combination of hardware and software that consumers desire.
Well, Apple is essentially bankrolling development and production, so they get exclusive rights. Anyone is free to outbid Apple for this right, and TSMC is also free to not impose that at all. Thanks to Apple's money, TSMC was able to secure a large chunk of ASML EUV equipment.
Also, Apple didn't make Intel fumble their 10 nm process or make Qualcomm miss aarch64. The anticompetitive practices of Intel and Qualcomm made them lazy, and that is what resulted in this, not Apple buying TSMC capacity.
It’s a free market, but not a competitive market. Antitrust laws reduce the freedom of the market in order to increase competition, because it’s considered better for the majority of market participants to trade that freedom for competition.
Apple did this with the iPod and later the iPod mini too: full exclusivity deals for small drives (covering music players only) to monopolize the segment, eventually extended across additional drive manufacturers and technologies after the mini to keep it a full monopoly. Everyone else was stuck for years with laptop drives that could never be pocketable.
I'm not sure if component exclusivity was why Apple had the first true wireless earbuds; there was one competitor first with some giant ones, so I suspect so, but lots of competitors followed fast (other foundries' nodes were more competitive then). Anyone know the history on that, and what nm chips were used by them and the earliest competitors?
Consumers couldn't buy products with a pocketable harddrive: Apple had exclusivity agreements with all manufacturers of them to completely monopolize their use for music players.
The story goes that IBM (?) shopped the hard drive around and no one was interested except for Apple. It’s still the lack of vision by other manufacturers.
Also remember that the iPod wasn't an immediate hit; it sold only 1 million units in total in its first two years. It didn't take off until two years later, when it was made available on Windows.
Apple did the same with Flash storage before the iPod Nano came out. Any of its larger competitors at the time could have had the vision to see that flash based players were the future and ensure they had the supply.
Apple was _years_ away from having the first wireless earbuds. I had the Bragi earbuds (The Dash, not their later Headphone) like 3 years before the AirPods were even announced, with basically all the same features sans handover, plus an internal MP3 player and waterproofing. They were amazing but also somewhat clunky. But they were definitely around long before Apple entered that market.
Okay, let's take this same logic and move it up one layer of abstraction.
There are multiple fabricators (supply) and a small pool of chip manufacturers (demand).
As a chip manufacturer, I cannot go out and buy competitive silicon. I can't even get last-gen silicon. The best I can get is an improved 7nm node from Samsung, but nobody wants to run their CPU on the blood and guts of Exynos chips. So, by that definition, I'd say that Apple has monopolized the cutting-edge lithography business by leveraging a partner.
A chip manufacturer absolutely could. Apple runs higher margins so they are willing to spend what it costs to get the bleeding edge stuff. Their customers appreciate the bleeding edge stuff and are willing to spend more to get it.
It isn't like Samsung and AMD can't afford to jump on new nodes; they just won't be able to raise their prices enough for it to make sense to them.
They can't. Apple has a contract for all current supply, so TSMC can't sell anything to AMD even if AMD paid more than Apple, since there is nothing left to sell. There will be more to sell in a few years, but today there isn't; you would have to convince Apple to sell you its allocation.
AMD has every right and ability to offer TSMC money for slices of that exclusivity. We know this because Huawei split exclusivity with Apple for the first batches of N5 back in 2020. I imagine it cost them a lot of money.
Not that I disagree with your overall message, but to nitpick: the "free market" is exactly the thing that fails into "100% monopolistic anti-competitive moves" without "anti-trust regulations". I do not want a free market, I want a regulated market.
Monopoly: the exclusive possession or control of the supply of or trade in a commodity or service.
The fact that they had more money than competitors and leveraged that to get exclusive control doesn't matter much. It's the textbook definition of a monopoly. On the legal side, IANAL, so no idea whether this could be illegal somewhere.
Yeah, this is what people said about 5nm too. Guess what? You can't even buy the 5nm node today because Apple still buys up the majority of it, and the few other competitors like Nvidia and AMD fight like animals for the scraps.
Whatever you want to call it, it's pretty anticompetitive. Seeing them do the same thing for 3nm doesn't inspire confidence that they're going to do the right thing this time around.
Except for the first year, when they get a head start: nobody else can have matching chips or learn what issues come out of this process, which gives them an even bigger advantage, since they have stabilized their process by year 2 while everyone else is barely ramping up.
Don't be a clown. You know fully well what Apple is doing.
If one were to make the argument of exclusivity, TSMC is the right example, not Apple. As much as I dislike Apple's practices, they are paying what they can afford.
If other fabs were available, apple's ability to pay up would not be leading to accusations of anti-competitive behaviour.
>If other fabs were available, apple's ability to pay up would not be leading to accusations of anti-competitive behaviour.
If I ignore reality and make up non-existent fabs, the extremely monopolistic, anti-competitive behaviour is not a problem!
>they are paying what they can afford.
The problem is not paying what they can afford, it's paying that in addition to forbidding TSMC from increasing their capacity on that node while the contract is ongoing. It's both paying what they can afford, and paying so that others can't afford it.
Do they forbid adding capacity? I thought they basically finance the capacity in exchange for the right of first refusal on any capacity for a certain timeframe. And they have the margins to just pay more than any competitor. AND they are also supply constrained, i.e. it's not that they buy capacity and then _not_ use it (on the contrary, they are well known for carrying nearly no inventory).
If some other company is willing (or really: able) to make a competing offer, I’m not aware of TSMC not being interested in that. It just happens that the number of companies with the margins to spend what probably amounts to the GDP of a small nation is approximately zero (or one if you count Apple)
> The problem is not paying what they can afford, it's paying that in addition to forbidding TSMC from increasing their capacity on that node while the contract is ongoing. It's both paying what they can afford, and paying so that others can't afford it.
Except they didn't have exclusive access? Huawei also bought 5nm. Intel contracted 3nm along with Apple before backing out.
>The problem is not paying what they can afford, it's paying that in addition to forbidding TSMC from increasing their capacity on that node while the contract is ongoing.
I think you'd find it rude and, ultimately, unacceptable if a competitor with 1000x as much cash as you bought all of the raw materials that your business needs at a mark-up, leaving you completely unable to deliver any product whatsoever. That's rightly the kind of abuse of market position that is regulated by anti-trust laws.
Except this is not what is happening. They are paying to buy all the top-quality A-grade materials; the competitors have plenty of choice to buy B-grade materials.
The sellers of top-quality material are in no position to sell their A-grade material to all buyers.
The key is who is offering the higher price and whether there is a pattern. A company with a dominant market position can use the resources that flow from said dominance to keep out competitors.
> Apple is both willing to pay more and has the needed scale to buy up all capacity. If someone else wanted to both pay more and buy up all capacity - they have the ability to do so.
I think the issue is that the investment costs are so high that this ability is limited to a very small group of companies, as is the required knowledge.
I don't really know where I stand on this myself; in principle I agree with your point of view, but I also think it's a little bit too simplistic.
So I'm critical of Tim Cook because he's just not a leader like Steve Jobs was. This doesn't mean he isn't capable however. Tim Cook has had a massive impact on Apple's success and things like this TSMC deal are more evidence of this.
This began more than 10 years ago when Apple was sitting on a massive pile of cash. What do you do with all that cash? Most companies will simply pay dividends or, more commonly now, do share buybacks (side note: it's become popular to view share buybacks as some kind of systemic problem but they're functionally no different to dividends).
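The buybacks-vs-dividends equivalence mentioned in that side note can be sanity-checked with toy numbers (all hypothetical, and ignoring taxes): either way of returning excess cash leaves a shareholder with the same total value.

```python
# Toy model (hypothetical numbers): a firm worth 900 in ongoing business
# plus 100 in excess cash, with 100 shares outstanding (price = 10).
shares = 100
excess_cash = 100.0
business_value = 900.0
price = (business_value + excess_cash) / shares      # 10.0 per share

# Option 1: pay the excess cash out as a dividend.
dividend_per_share = excess_cash / shares            # 1.0 in cash
price_after_dividend = business_value / shares       # 9.0 ex-dividend
total_dividend_case = price_after_dividend + dividend_per_share

# Option 2: buy back shares at the market price instead.
shares_retired = excess_cash / price                 # 10 shares retired
price_after_buyback = business_value / (shares - shares_retired)

print(total_dividend_case, price_after_buyback)      # both 10.0
```

In the dividend case a holder ends up with 9.0 of stock plus 1.0 of cash; in the buyback case a remaining holder's share is simply worth 10.0, so the two are functionally equivalent before tax treatment.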
Instead Apple engaged in vendor financing. A lot of components Apple needs requires massive capital investment. Companies might borrow money for this. Apple essentially became the bank, saying we'll give you the money for this. In return Apple gets some combination of preferential pricing, guaranteed availability or exclusivity for a certain period.
Eventually Apple doesn't even need to spend money to do this. Just the commitment for a buyer the size of Apple to purchase your output can have the same effect. It'll help secure financing and Apple can extract the same preferential treatment for that commitment.
This is the logistics side of Apple's business and Tim Cook was the architect of that.
Apple is willing to pay upfront and fund research to be first in line. Intel, Nvidia, and others could probably do the same if they wanted to and were willing to take on that risk.
From a (slightly) insider perspective, companies that want the latest node actually pay billions of dollars yearly to TSMC for both factory construction and research, not just production capacity. Thus, in a few years anyone will be able to buy similar production capacity for the same cost, but Apple is bearing massive upfront costs others will not need to pay, for the privilege of being first. I'd call that a benefit for everyone, not anti-competitive. TSMC claims that without this investment, new nodes would be far less frequently deployed, so somebody has to be willing to pay the upfront to help them out.
I wouldn't go so far as to say that TSMC wouldn't exist without Apple. TSMC was and continues to be the preferred manufacturer for many other fabless design companies, like Nvidia, AMD, and MediaTek.
TSMC probably got big when Apple started using them for everything. Historically, Nvidia has used TSMC for most Geforce cards (sometimes using another foundry like Samsung), and even the earlier TNT2[0] from 1999. AMD bought ATI, which has been using TSMC for some Radeon chips going back 15[1] if not 20[2] years.
Why would you ignore the 2014 number? Isn't that exactly the cash infusion that Apple did to help TSMC become the leading semiconductor manufacturer? They made 20B in 2013, and then a year later suddenly made an additional 60B on top of that? 60B can fund a lot of R&D.
Actually that outlier simply was a wrong number it seems, as per [1].
But my reasoning was:
* Most importantly, TSMC's market share grew uniformly, and they already were quite dominant before (market shares: 2013: 49%, 2014: 54%, 2015: 55%, from [1])
* I don't know whether that additional revenue is from Apple. Apple's use of TSMC has to have started in 2013 if they shipped the products in 2014, and to my knowledge didn't end in 2014.
* If Apple could use a node size in 2014, actual development of that has to have started quite a few years before.
TSMC would exist without apple, it just would be way less relevant
TSMC started becoming relevant because Apple switched from Samsung to TSMC way back in the second iPhone, if I am not mistaken
They started getting a lot of cash and investing it back into R&D, and over time got to where they are now: leading the industry by far, with only Samsung capable of providing somewhat competitive technology, but with a much smaller market share
That's a good point. However, from a consumer and enthusiast perspective, I would like all competitors to be able to compete on a level playing field, to see which one gets the most out of the technology.
Apple doesn't need to be altruistic for something to be good for the industry as a whole. Also the M1/M2 hype is well deserved. Sure they aren't gaming computers but pulling off the transition with zero issues for normal people and relatively minor issues for developers (mac, web, any type) is impressive, especially with the battery life and how cool these run.
I wouldn't say it always propels society forward. It pushes and pulls and eventually some forward momentum grows but the forward movement comes much more from basic research and eventual applications of it. Most basic research wouldn't survive with pure market capitalism at all.
They don't have to be altruistic. Without Apple's investment or similar, TSMC is very clear they would deploy new nodes far less frequently, simply because they could not otherwise afford it.
That would be true if other competing offers weren't getting turned away. Nvidia dumped 9 billion dollars into TSMC last year just to get a pittance of 5nm wafers. They don't even get the option to buy 3nm, so it's not about 'continued investment' or anything. TSMC is simply optimizing for their most lucrative customer; the failure is not with them but with capitalism failing to incentivize fair treatment of all their customers. I don't think TSMC or Apple is to blame here, but the distortion of the semiconductor industry has been pretty disastrous over the past 3 years. It needs to stop, and it starts by breaking up nonsense exclusivity deals like this one.
Huawei launched a line of cellphone chips using N5 at the same time as Apple, right before they got banned from using TSMC. It seems like others could pay enough if they really wanted to.
We probably have 3nm production on this timescale because of the long term relationship between Apple and TSMC and the $100bn + that Apple have spent with TSMC over that time period. No one else has done this.
Apple pays the money and takes the risks, so they get the rewards.
Takes what risks? Buying the riskless new transistors that TSMC puts out because they know that they work?
Apple taking risks would be making the chips themselves (see: the failure that was Itanium and how it screwed Intel because they had to reorganize their facilities for it), not buying wholesale and just having TSMC print it.
The risk that TSMC doesn't deliver on node improvements / deliver the relevant SoCs in the required numbers.
And if you think that risk is small / nil I would point you to Intel's recent track record.
And just to point out that Apple actually does bear the risk you've highlighted: Itanium was a design failure (design is what Apple does), not a manufacturing failure.
There is also risk that sufficient consumers do not want to buy Apple’s products at a sufficient price premium to offset the price premium that Apple paid to TSMC.
For a specific recent example of this risk we can look at Nvidia who put down huge deposits for N4 wafers and now doesn't have enough customer demand for those chips.
One risk comes from the long lead times of chip design and production. They are committing to a design and process and hoping that production can occur on time and in the right volume to meet their needs. That is not a certainty, and a delay could be very expensive since product releases are tied to the new chips.
The issue seems to be these other companies being ready to port their designs. I have not seen any report that Apple is undertaking anti-competitive behaviors in order to secure the entire generation of wafers.
> Apple has been selling phones and laptops with N5 for two years and I don't think we still have any competing product in the hands of consumers using N5 yet.
Qualcomm's current flagship mobile chip, the Snapdragon 8+ Gen 1, is on TSMC 4nm.
The biggest question here, I think, is geopolitical: where will this be manufactured? My understanding is that the only fabs where this is possible are in Taiwan. That's a massive issue with the current posturing toward war.
It's not like they have 3nm chips ready the day after production. When they "begin production", it takes months until the first actual, working chip from that production leaves the fab.
And then it's not like the day after, Apple could produce iPhones or Macs with those chips. It takes months from having the first chip in hand until production churns out volumes of finished devices.
I think it must be more outside the window than the release dates make it seem -- Apple must, I guess, do some QA and build up a decent stockpile before the release of a new iPhone.
Gigahertz weren't inflated per se, but back when clock speeds were more directly seen as indicators of performance, there were name games similar to the ones now played with process nodes. I'm thinking particularly of AMD having Athlon model numbers that mapped 1:1 to clock speeds for years, then the Athlon XP switching to numbers above its actual speeds, to vaguely indicate which Intel clock speed it was claiming performance parity with.
Of course once they were beyond the clockspeed-above-all Pentium 4, Intel themselves stopped focusing on marketing clockspeeds and things gave way to basically arbitrary naming/numbering schemes. (Or to some extent, core counts.)
Well, performance is roughly = clock speed * instructions per cycle.
When Intel was dominant (pre-Athlon XP and pre-Core era), clock speed alone could be used to measure performance. Then AMD's Athlon XP came out with higher IPC than the Pentiums, but customers were used to "higher clock, higher performance", even though the Athlon XP was faster at the same clock speed.
So AMD changed model names to indicate performance parity; that was a response to the game Intel was playing. They didn't lie or mislead about clock speeds anywhere.
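The clock-vs-IPC point above can be made concrete with made-up numbers (illustrative only, not measured benchmarks of any real chip):

```python
# Rough model: performance ~ clock speed * instructions per cycle (IPC).
def perf_gips(clock_ghz: float, ipc: float) -> float:
    """Approximate throughput in billions of instructions per second."""
    return clock_ghz * ipc

# Hypothetical figures in the spirit of the era: a high-clock,
# lower-IPC design vs a lower-clock, higher-IPC design.
high_clock_chip = perf_gips(2.0, 1.0)    # 2.0 GIPS
high_ipc_chip = perf_gips(1.67, 1.3)     # ~2.17 GIPS
print(high_clock_chip, high_ipc_chip)    # the slower-clocked chip wins
```

Which is exactly why "higher clock, higher performance" stopped being a reliable rule of thumb.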
That's my point: this seems to me more less exactly where we are now with "Intel 7" being their "10 nm but better-er" process and the previously-known-as-7nm becoming "Intel 4." They're saying, well, the actual nanometer number has long been a little wishy-washy anyway, so we're jettisoning the pretense that this is a measurement of anything in particular and just picking a number that sounds better and aligns with our competition's numbers.
And of course it's not just Intel marketing this way, everyone does, but their more recent naming change makes it more transparent. Tired of years of saying "welll, TSMC's Xnm (or X number with no units that's sort of implied to be nm) is really equivalent to our X+Ynm," they just go for marketing on parity.
Like I said in the start of my previous post, this isn't actually inflating (or deflating as the case may be) but it's just taking a concept that's widespread among the consumers as a proxy for quality/value and marketing to that idea rather than any actual physical characteristic.
I used to work at Intel. I remember when people complained that our Hz weren't as fast as the Apple Hz at the time -- how a 2GHz chip was the same speed as an 800MHz Apple chip. So in some sense it's slightly different from nanometers, though it was still a marketing thing. The Hz were real; the clocks were real; it was the relevance of the Hz that was in question. With nanometers it's kind of the opposite: the nm figure is an advertisement thing, but it is relevant to performance.
TSMC and Samsung inflated their numbers a decade ago, Intel inflated their numbers last year to catch up, so now the nm numbers from all the fabs mean roughly the same thing.
What do you mean exactly? You mean reunification? I don't see how that could be a problem; we do business with China already. It would just become one of their states, and you could still trade with them.
Or did you mean that, as a result, the US would not be able to steal the fabs, tech, and workforce? (And by steal I mean purchase, just like we write that China is stealing our tech when they purchase patents or acquire it by other deep internal means.)
Samsung is in a worse position than Intel: not only is it Korean, with problems with its northern neighbor as well as with Japan, but it also got hit by anti-competitive behavior and the trade war from the US, and was told to stop development of its in-house chip (Exynos) to position Qualcomm as the western leader, despite their poor tech
And I'm not sure The Verge is a trustworthy source; either they have insider knowledge or their source is manipulative, because that contradicts the Exynos story
If what they claim is true, then that'll be interesting to see if their "promise" can become a reality, time will tell
But as a matter of fact, still nothing is happening in the west, and that was my entire point
>what do you mean exactly? you meant reunification? i don't see how that could be a problem, we do business with China already, it would just become one of their state, you can still trade with them
I'm sorry, what? A Chinese invasion of Taiwan would be catastrophic for the global supply chain. It would require a force a few times bigger than the entire D-Day invasion, and there's no way the chip fabs wouldn't be smouldering ruins afterwards. The reason China hasn't already invaded is that it would have a nearly incalculable cost for very little benefit. Far better to rattle the sabre for the nationalists every so often and trade your way to prosperity.
It's not in the US's interest for China and Taiwan to reunify, i could see the US sending weapons and missiles, as well as Britain and Israel sending their mercenaries and drones
But I don't think that's what most Taiwanese want. It would be literal suicide; it's a small island. They see it as a political affair, not a military one
The military view is what the western media want to push for some reasons
We saw it during the Pelosi visit, btw. What's next? Help elect a Zelensky bis? It failed with Abe; the wind is blowing backward, it seems
So sad. Such a waste. Won't even make it before the invasion. Trying to be valuable for a worthless president-elect of America.
It is not 3 nanometers according to the metric system anyway.
Oh well.
Mainland flags on the Taiwanese monuments. How many months away, let me check my predictions, I think 70 days or so, give or take 15. Shit getting real hot real fast.
Why do we waste state of art, bleeding edge manufacture capabilities on some cars?
Who needs cars this fast?
Who needs newer cars?
To give you some answers, I'd say that newer technology enables the following, completely non-exhaustive list:
- Macbooks that can run on a single charge for whole day and have very high performance.
- Cameras in your pocket/phone good enough that almost no one except professional photographers has to carry dedicated photo equipment
- Electric cars which can finally travel such distance that they are actually useful
- Combustion cars clean enough that the cities we live in are not toxic. And they get to be economical too.
- Comfort within a car which we value.
- Phones are small (oh sorry this can cause flamewars, so let's say it like this - circuits get small)
- Technology demand, whether it comes from manufacturing phones or something else, benefits computers too (i.e. Apple Silicon was for phones/tablets, now for Macs). It benefits other vendors as they get access to the technology as well. That much investment is what enables TSMC to deliver the technology at all.
Simple. It takes time to get state of the art, bleeding edge CPU nodes to the point where they can have a consistently high yield of large-die-area chips, such as desktop processors. Yields on a new production line are often poor due to the many defects that would take out far too many large chips, but allows decent yields of smaller chips. Thus, the typical node starts manufacturing on small chips and works its way upward to larger chips as yields improve.
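That yield argument can be sketched with a simple Poisson defect model (the defect densities below are made up for illustration; real figures are closely guarded):

```python
import math

def poisson_yield(d0_per_cm2: float, die_area_mm2: float) -> float:
    """Fraction of dice with zero defects: exp(-D0 * A),
    with defect density D0 per cm^2 and die area A in mm^2."""
    return math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)

# Hypothetical defect densities: a young node vs a mature one.
for d0 in (0.5, 0.1):                    # defects per cm^2
    small = poisson_yield(d0, 100)       # roughly phone-SoC-sized die
    large = poisson_yield(d0, 450)       # roughly laptop-GPU-sized die
    print(f"D0={d0}/cm^2: 100mm^2 -> {small:.0%}, 450mm^2 -> {large:.0%}")
```

Even in this crude model, the big die is hit far harder by the immature node than the small die, which is why small chips lead on a new process.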
But FPGAs, despite being way faster than software, are only ~1/5th to ~1/10th as fast as dedicated silicon, at best, on a small scale. An Apple M1 in FPGA would run like it was manufactured 2 decades ago, if not worse.
Plus, FPGAs get absolutely huge with terrible yields for complex work. RED Cameras have a massive, multi-thousand-dollar FPGA (crossing over $10K as a part), and it is only powerful enough to process their custom video codec and nothing else. Still cheaper than designing a custom chip for that considering how niche RED Cameras are, but absolutely stupid for anything broader. An FPGA that could run an M1 equivalent would be larger than the laptop, cost over $100K, and be slow as a turtle racing across Oregon.
They're extremely popular devices, people are actively demanding more from them every single year, companies use these marginal improvements as selling points, and these devices cost upwards of a thousand dollars, bringing a healthy profit.
I think you hit the nail on the head with the "storm" word.
Phone consumers seem to be a little bit irrational. E.g. friend has a phone with insane screen resolution and refresh rate.
People will argue consumers want better battery life, but this is meaningless. Processor efficiency increases, software gets bloated, and we are back at square one. Every phone in the last decade has had a "one day" battery life.
You hit the nail on the head... a few years back my phone was a de-Googled Motorola running LineageOS. It had no push notifications or any third-party apps on it besides Spotify and ProtonMail and I could go several days without charging it. My current-generation iPhone has superior hardware in every respect and it's down to 50% after a day of usage.
You need about 1400 calories a day, 2l of water, a 40x40x180cm box to stand in, somewhere to drain your waste, and a temperature that is about 20-24 degrees. Oh and some oxygen
Everything else is want and as a society, we agreed that you don't need to justify your want.
Whether that is a bottle of beer or a new smartphone.
A rather old opinion: the race for thinness in Apple design has been over for, like, 3-5 years now. They hit a local maximum and also learned when they went too far. Now products like MacBook Pros and iPad Pros are returning to a bit more of their "ideal" sizes.
The same question applies. It seems that Moore's law improvements on speed and power consumption over the past ~10 years have served only to help programmers write shittier more bloated software such that the net performance and efficiency gain is zero or slightly negative.
Moore claims that CPU's will double in power/efficiency given a specific increment of time (e.g. n years). That hasn't been happening for at least ~10 years now so I'm not sure how mentioning it actually applies here.
The point I was making is that it's not a phone anymore; it's a full-fledged computer that is arguably more advanced than your average desktop.
My cheap Chinese phone lasted almost a week on a charge, for over 5 years. I got a new flagship Samsung, and somehow the thing loses around 30% battery every day even if I do nothing. I think manufacturers need more efficient phones to keep doing whatever the heck they are doing; it's not like the "user" owns these things.
I think for most people, up until perhaps very recently, phone usage patterns have grown to absorb the improvements in efficiency and performance, so progress has been very stagnant. As in, the 2-3 phones they've had over the last 5-6 years all get through the day, and then it's charging time if you want to be OK the next day. Even if you're on 40% at the end of the day, you still have to charge.
The two-day mainstream smartphone is not here yet.
You don't have to be phone-addicted for your phone usage to keep going up. In a lot of societies ever more daily tasks continue to migrate off PC and onto phone apps. The number of services you interact with primarily through your phone (say, booking a hairdresser appointment or getting notified your dry-cleaning is done) keep going up everywhere. People use their phone for contactless NFC payments, for public transit check-in/check-out, etc. Having enough charge to last the day becomes increasingly not optional.
Everyone gets habituated by context to look at the thing more and more as the gateway to everything.
By all means use a slower phone. And I’ll keep using a faster one that saves me seconds a day which cumulatively turn into minutes which cumulatively turn into hours to do other things with my life.
They don’t specifically need fast, they need power efficiency. Better power efficiency means lighter, thinner or more capable phones with longer battery life.
> (I may have got that wrong so happy to be corrected!)
What calc did you have?
1. https://www.mouser.com/blog/stunning-increases-in-processing...
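The back-of-envelope density comparison from upthread (2300 transistors in 12 mm² for the 4004, ~300 Mtr/mm² quoted for TSMC N3) works out like this; it lands near the ~650 figure given above, the difference being rounding:

```python
# Back-of-envelope check of the 4004-vs-N3 density comparison upthread.
transistors_4004 = 2300
area_4004_mm2 = 12.0
n3_density_per_mm2 = 300e6   # TSMC N3, ~300 Mtr/mm^2 as quoted upthread

# Footprint of one 1971 transistor, and how many N3 transistors
# (and therefore whole 4004s) fit in that same area today.
area_per_old_transistor_mm2 = area_4004_mm2 / transistors_4004
transistors_today = area_per_old_transistor_mm2 * n3_density_per_mm2
whole_4004s = transistors_today / transistors_4004

print(f"~{whole_4004s:.0f} complete 4004s per original 4004 transistor")
```

As noted upthread, the Mtr/mm² headline number overstates what real designs achieve, so treat this as an upper bound.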
But fundamentally electrons and 'charge' are quantum in nature, and even in the 80's, if your transistors were "too close" you could get electron tunneling between them. So I, and others, assumed there was a really hard limit on how small you could go (and that it was likely above 100nm rather than below it). And of course now hardly anyone uses old 90nm tech; I think 40nm is the current "jelly bean" process, though I freely admit I've not looked at the geometries used in the vast Chinese IC market of "support" chips.
As someone else here said, fab nodes are more like Lego blocks than a printer’s dpi – moving a design to a new node means rebuilding it in terms of the new node’s building blocks, not just “shrinking the design”.
So if the M2 Pro/Max/Ultra are intended to follow the same "reuse the cores (possibly rearranged) with more GPU blocks/cache etc." pattern, it seems unlikely. But if they make a design break within the M2 series, then it's possible?
I’d expect Apple to follow their historical behavior and lead with the next A-series phone processor on the new node first. M1 Pro/Max/Ultra chips have huge surface areas – since process defect rates are driven down over time, it makes sense to start with your smallest chips first, so that you can get good yield out of your big chips once the defect rate is lower.
- A5(X): 32nm and 45nm
- A9(X): 14nm and 16nm
- A10(X): 16nm and 10nm
If Apple had wanted to use N3 for the A15 or the M2, they'd have had designs pretty much ready, but due to TSMC delays they didn't roll those into production.
I'm pretty sure Apple does a few designs in parallel and react according to what the supply chain and market conditions allow.
Indeed, the core designs were different. The M1 had A14 CPU cores, while the pro/max were based on the A15.
A14 —> M1
A15 —> M1 Pro/Max/Ultra, M2
So it stands to reason
A16 —> M2 Pro/Max/Ultra, M3
Though that does not take into account the process node.
'We had indicated in our initial coverage that it appears that Apple’s new M1 Pro and Max chips is using a similar, if not the same generation CPU IP as on the M1, rather than updating things to the newer generation cores that are being used in the A15. We seemingly can confirm this, as we’re seeing no apparent changes in the cores compared to what we’ve discovered on the M1 chips.'
And Wikipedia: https://en.wikipedia.org/wiki/Apple_M1
'The M1 has four high-performance "Firestorm" and four energy-efficient "Icestorm" cores, first seen on the A14 Bionic. [...] The M1 Pro and M1 Max use the same ARM big.LITTLE design as the M1, with eight high-performance "Firestorm" (six in the lower-binned variants of the M1 Pro) and two energy-efficient "Icestorm" cores, providing a total of ten cores (eight in the lower-binned variants of the M1 Pro).'
The efficiency cores on the Pro and Max aren't (so far as I can tell) faster than on the regular M1. But where the regular M1 has 4 performance + 4 efficiency cores, the Pro and Max have 6 or 8 performance + 2 efficiency. (Also, more L2 and L3 cache.)
This is true, but 'rebuilding' specifically refers to producing a new chip layout (i.e. the thing you send off to the fab to manufacture). This can be a lot of work but is all 'back-end' work. You begin with the RTL giving the logical/functional design of the chip, and implementation engineers push it through synthesis, place and route, etc. to produce a physical chip layout. The layout is what you have to redo; you can start from the exact same RTL.
When you design that RTL with a particular node in mind you can likely achieve better performance/area but it's not essential to do so.
Plus, when you want to do the back-end work, you need a fairly complete design to work from. So, for instance, Apple could have a back-end team building a 3nm M2 now while the front-end design team is busy working on the M3 (specifically targeted and optimized for 3nm).
[1] https://www.macrumors.com/2022/08/18/m2-pro-chip-3nm-product...
So not sure it's true that you can't shrink down a largely similar architecture from one process to another.
Obviously it's fallen off the wagon a bit here, but that seems due more to operational issues at Intel than to it being fundamentally not doable
https://en.wikipedia.org/wiki/Tick%E2%80%93tock_model
The first step is to prove the process - to the appropriate level of control limits, and then the second step is to optimize the process.
You may say it's "significant", but if you look at it another way, it de-risks both tasks as opposed to doing both at the same time but having much higher risk.
So in my mind, the "significant work" is less relevant, as the derisking is much more important.
There was work for sure to move to a new process, but that wasn't necessarily duplicated work with the architecture being created on a prior process. My impression was that it was already parallelized, and that not having to develop new architectures on new processes prevented a fair bit of contention.
The hardware side's optimizing the process on the first cycle, then doing a new one on the second.
Can you imagine having to live this life? Port to a new platform every other release?
Actually, thinking a bit more... there is the Apple VR headset, which seems to be getting closer to production, and I'm sure it could use the efficiency from the new node; plus it's priced high enough to warrant the costs. Some speak of a launch at the beginning of 2023 with 1.5M units produced. That would be bang on in terms of schedule.
So it isn't outside the realm of possibility that the A16 design is 3nm, and they will delay its launch and/or otherwise make it less desirable to deal with lower initial production yield.
The A16 will be N4P I'd guess. In my initial comment I should have said A17 for N3 instead of A16.
They can't release 3nm A16 this year. I'm pretty sure it's on something like N4P.
IMO, the plain M2 was a 3nm design that had to be backported to TSMC 5nm due to the node transition taking longer than projected, in the same way that Intel's Sunny Cove had to be backported from Intel's 10nm to their 14nm++++++ node.
The rumors now say that Apple is getting ready to build M2 Pro and M3 on TSMC 3nm.
>According to one analyst, they will be coming from TSMC, and will debut later this year. Even more tantalizing is the notion that these Apple SoCs will likely be the very first to use TSMC’s bleeding edge 3nm process. This is mildly surprising given the M2 chip revealed this week was made using TSMC’s 5nm process, just like the previous M1 products. Launching the M2 on two different nodes would require Apple to do the design work twice — once for a 5nm M2 and once for the 3nm M2 Pro.
https://www.extremetech.com/computing/336862-apples-m2-pro-c...
My wild ass guess is that the M3 they are talking about here is the original M2 3nm design that was delayed.
e.g. no reason to believe. Everyone here is "an analyst" in that sense.
There’s an M2 Pro chip and an M3 chip in the works, according to industry publication DigiTimes.
https://www.digitaltrends.com/computing/apple-to-deliver-3nm...
0: https://www.pcgamesn.com/amd/tsmc-7nm-5nm-and-3nm-are-just-n...
1: https://www.anandtech.com/show/16823/intel-accelerated-offen...
Are they after investors or traders? Why don't they call it something like Superlaser 9000, and the next year Hyperlaser 10K Speedmaster?
I originally asked this exact same question of the guy at AMD who started talking about their x86 chips in "equivalent" megahertz, because their actual clock rate was slower but they had better instructions per clock (IPC) and so got more done per second than their Intel counterparts, yet Intel was winning the "Megahertz race" because that was what the press was fixated on using to describe the "leading" chips. Anyway, if you're a leader and you can get the press talking about something your competitor isn't (or can't) do, then you "win" the perception of being ahead.
In semiconductors, this has been very effective at getting the press to see Intel as "behind", because their "nanometer" number was stuck in the double digits while "more advanced" fabs were already in production on lower-nanometer-number processes. (Note the scare quotes are all just to indicate topics on which TSMC, Samsung, and Intel would each have different takes about the current state of the art.)
It's also still useful as an identifier to distinguish between one node process and another, so it's not entirely pointless, even if the nomenclature is meaningless.
No but enthusiasts (read: gamers) do. A lot of PC industry marketing nowadays is geared towards retail PC builders, who are very impressed by tech jargon.
There was a concerted effort some time back to formalize a standard measurement of process density using something like "million transistors per square centimetre" by a standards organization. (IEEE?) It wasn't a perfect measurement but it was a lot better than width. It failed so completely that I can't even Google it any more. The awkward name probably didn't help.
edit: it's "MT/mm2". Some people actually use it, but more in the informal sense that has the problem you described rather than the formalized one, which I still can't find.
We can always increase W, but decreasing L requires technological advancements.
So I'm not sure of the value of area (W and L); L alone is more relevant (for the reason I said above).
We are dealing with FinFETs / GAA now, which are not the same as planar transistors, but I guess there should still be relevance in L and W, because these measurements are important even for resistors.
So probably some sort of equivalence between a FinFET and a planar transistor should be used to name the nodes.
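As a toy illustration of why L matters beyond area, the textbook square-law model makes drive current proportional to W/L. This is a first-order planar-MOSFET approximation only (the parameter values below are made up for illustration, and modern FinFET/GAA devices need far more elaborate models), but it shows why a shrink in L is worth more than simply widening the device:

```python
def drain_current_sat(w_um: float, l_um: float,
                      k_prime: float = 100e-6, v_ov: float = 0.5) -> float:
    """First-order square-law saturation current of a planar MOSFET.

    k_prime is mu*Cox (A/V^2) and v_ov is the gate overdrive Vgs - Vth;
    both defaults are illustrative, not any real process's values."""
    return 0.5 * k_prime * (w_um / l_um) * v_ov ** 2

# Doubling W doubles the drive current, but so does halving L, and only
# the latter requires a process advance (and also shrinks the device):
wide = drain_current_sat(2.0, 1.0)
short = drain_current_sat(1.0, 0.5)
print(wide == short)  # True
```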
https://en.wikipedia.org/wiki/3_nm_process gives a reasonable summary in its introduction.
My recollection of the specific quantum effects you're thinking of is quantum tunneling[1] of electrons. The problem occurs when the gate gets small enough that electrons can pass through without the transistor being switched on, which starts to happen around 3nm.
[1] https://en.wikipedia.org/wiki/Quantum_tunnelling
> a 3 nm node is expected to have a contacted gate pitch of 48 nanometers and a tightest metal pitch of 24 nanometers
So if everybody believes in the nanometers, nobody cares.
Until you hit a wall.
https://semiengineering.com/quantum-effects-at-7-5nm/
> "Quantum effects typically occur well behind the curtain for most of the chip industry, baked into a set of design rules developed from foundry data that most companies never see. This explains why foundries and manufacturing equipment companies so far are the only ones that have been directly affected, and they have been making adjustments in their processes and products to account for those effects. But as designs shrink to 7/5nm and beyond, quantum effects are emerging as a more widespread and significant problem, and one that ultimately will affect everyone working at those nodes..."
and
> "“At very small dimensions of the body, the semiconductor band structure gets ‘quantized,’ so instead of a continuous energy spectrum for the carriers, for example, only discrete energy levels are allowed,” Mocuta said.
This quantum confinement has several possible consequences. Among them:
• A transistor threshold voltage change. • A change in the density of states (DOS), or the number of carriers available for current conduction. • A change in carrier injection velocity."
Think of how thick towels started as a manufacturing defect: a machine in a conventional cloth factory had a part break down, and instead of churning out the usual flat cloth it erroneously wasted a loop of yarn at each 'weave' (for lack of a better word, as I'm not into weaving). With no immediate way at hand to recover the yarn, the thick cloth was sold / distributed as scrap. The fault in the machine was identified and fixed. But the users of the cheap scrap came back for more once they discovered its superior water-absorbing qualities... Since the fault was documented, they could intentionally reproduce the desired 'faulty' cloth.
What's been happening is that fabs have been exploiting more and more tricks to increase transistor density while still using the larger feature sizes. So flat transistors became finfet's, increasing their gate area and allowing chips to use fewer of them for the same silicon area, etc...
So read "3nm" as "a process with the same transistor density as you would expect had some ancestral ~90nm process been shrunk by a factor of 30".
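The arithmetic behind that reading can be sketched directly. The 90 nm baseline and factor of ~30 come from the comment above, and density is assumed to follow ideal scaling (the square of the linear shrink factor), which real processes only approximate:

```python
def equivalent_density_gain(old_node_nm: float, new_node_nm: float) -> float:
    """Ideal scaling: density grows with the square of the linear shrink."""
    shrink = old_node_nm / new_node_nm
    return shrink ** 2

shrink = 90 / 3                        # 30x linear shrink
gain = equivalent_density_gain(90, 3)  # 900x density under ideal scaling
print(f"{shrink:.0f}x linear, {gain:.0f}x density")
```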
I was dumbfounded when I first heard about 14nm, YEARS before it was a thing (I also got to see 64-core concepts at Intel in ~1999 or so).
3nm is mind boggling amazing.
Whatever happened to "voxels"? (Before the graphics term, Intel was creating "voxels" that used light to transfer signals vertically between layers.) But I stopped following CPU architecture years ago.
https://cordis.europa.eu/project/id/554
3nm means the smallest feature, e.g. the width of a channel[1], not the size of a transistor.
There are quantum effects at this level (and indeed larger), and one of the big challenges with process design is minimising them. See [2] for an overview.
[1] https://www.electropages.com/blog/2022/05/samsungs-3nm-techn...
[2] https://semiengineering.com/quantum-effects-at-7-5nm/
Similarly printing text at 600 dpi doesn't mean that the actual width of the stems in the letters is 1/600 of an inch.
It costs (within an order of magnitude) the same to build a modern fab as it does to build one for a process 1-5 generations back, maybe more. You have a roughly similar backlog for equipment too. For your troubles you get far fewer chips per wafer so your cost per chip is higher. And the chips are slower and use more power. That makes it much harder to get any kind of payback on a depreciating asset that only gets more out of date. You also risk demand for your new 45-nm or 90-nm fab dropping off toward zero in 10 years.
Historically older fabs would see a drop-off in business as new chips were designed for new processes so as time went on there was more and more capacity available for cheap on the older nodes. That cycle is and has been slowing down though so there isn't much slack even for older fabs.
I'm not sure where the market will end up. If the current shortages are a temporary backlog + hoarding then things will work themselves out within 1-3 years and anyone starting lots of fab construction risks bankruptcy - something that has happened multiple times in the past as keeping a fab idle is equivalent to burning it down so you end up having to dump chips at cost or even a small loss. On the flip side if the recent disruptions are merely accelerating an existing trend then anyone kicking off fab construction stands to make a lot of money.
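The chips-per-wafer part of the cost argument can be made concrete with the standard dies-per-wafer approximation. The die areas below are hypothetical, yield is ignored, and the formula itself is a common back-of-envelope estimate rather than anything a fab would quote:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float = 300,
                   die_area_mm2: float = 100) -> int:
    """Back-of-envelope dies-per-wafer estimate: wafer area over die area,
    minus an edge-loss correction term. Yield is not modeled."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# The same design on an older node needs more area per die, so a wafer of
# roughly the same cost produces far fewer chips:
print(dies_per_wafer(die_area_mm2=100))  # smaller die, modern node
print(dies_per_wafer(die_area_mm2=400))  # same design, older node
```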
You could make a microcontroller on a 3nm node if you wanted to, but first you'd have to design a new core, and then tell people to pay $100 per chip instead of $0.01.
TL;DR: the chip shortage is an economics problem, not a physics problem.
(Edit: Yes, trying to read things in a different way - "weird quantum artefacts" at 3nm have nothing whatever to do with the automakers' problems.)
(Edit2: Here is the point which I was originally trying to make: "Chip manufacturing, even at 3nm x ~30 = ~90nm, is still extremely difficult. That fact is a big part of why the automakers did not attempt chip manufacturing, even at ~90nm.")
I don't know why automakers didn't engage their partners here to expand manufacturing. I am sure they asked, and the companies that can build 90nm fabs decided not to. Maybe it doesn't make sense after the backlog is cleared, maybe they like the higher prices? And if a car company wanted to start manufacturing chips themselves, they'd have to hire engineers, license patents, work out bugs, etc. and the risk is that the shortage is completely gone after you do all of that. (And, all this during a pandemic. If they wanted to use wood to build the physical building containing the fab, there was a shortage of that. So, a lot of problems to solve, and 10 billion dollars starts looking like a small number.)
CEO: Okay, we will have the 20nm node ready by Q4 next year, right?
Engineer: No, you see, there are quantum effects...
CEO: You're fired!
(Q1 next year:)
CEO: Now, we need to have this 20nm node ready by the end of the year!
New engineer: Sure, we will have the "20nm node" ready by then.
[1] https://www.youtube.com/watch?v=Nb2tebYAaOA
Threadripper seems insanely expensive right now, will the next generation be faster at least or use less energy? Or, in other words, does it make sense to wait?
It has some added instructions (AVX512) and will likely be more efficient at the same clock rate but they will be pushing the clocks higher instead.
I would not wait around unless you have plenty of time. I doubt a cut down Genoa will show up on workstations until the server market is satiated.
Maybe do it like this: plot nm vs density for "properly" named old nodes and extrapolate further for current nodes.
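A sketch of that idea: fit the older, "properly" named nodes on a log-log plot of density vs node name and invert the trend to name modern processes. The densities below are made up to follow the roughly 1/L^2 trend the honest node names implied; real published figures vary, and the ~300 MTr/mm2 figure for TSMC N3 is the commonly quoted logic density:

```python
import numpy as np

# Illustrative densities (MTr/mm^2) for nodes where the name still tracked
# a real feature size; invented numbers following a ~1/L^2 trend.
nodes_nm = np.array([180.0, 130.0, 90.0, 65.0, 45.0])
density = np.array([0.4, 0.8, 1.6, 3.3, 6.6])

# Fit log(density) = a*log(nm) + b; a should come out near -2.
a, b = np.polyfit(np.log(nodes_nm), np.log(density), 1)

def implied_node_name(density_mtr_mm2: float) -> float:
    """Invert the fitted trend: what nm would this density have been called?"""
    return float(np.exp((np.log(density_mtr_mm2) - b) / a))

# Quoted ~300 MTr/mm^2 for "3nm"-class logic lands well above 3 on
# the extrapolated old trend:
print(f"{implied_node_name(300):.1f} nm on the old naming trend")
```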
Completely naive about this level of engineering, but very interested :)
For example (just to chuck some more entropy on the pile) googling "tesla 5nm" finds a bit of an indication they're doing things in that space.
But there's probably also an example of some phone chip that has Cortex A53 cores implemented on two different transistor libraries to hit significantly different performance/power points.
AnD wItH tHe sAmE nUmBeR oF nAnOmEtErS tOo lol
The solution is simple: treat their numbers as brands. A Core i9 is generally better than an i3 of the same generation, but where a Ryzen 5 is compared to that is anyone’s guess (depends on the exact models, generations, etc).
Node sizes have had nothing to do with a characteristic length for quite a while now, that ship has sailed.
For example, the scaling factor from, say, 5nm to 3nm for transistors is X. But for SRAM, things have been getting progressively worse, so it is often X/2.
And with caches going up, you end up using a process node that is terrible for SRAM but needed for CPU transistors, and it's a huge waste. You can see why you might use a different process tech for caches, and AMD is clearly going in this direction.
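A toy model of that effect: if logic area scales well at a new node but SRAM area barely moves, a cache-heavy die gains much less from the shrink. The scale factors below are hypothetical, chosen only to be in the spirit of the X vs X/2 point:

```python
def shrunk_area(logic_mm2: float, sram_mm2: float,
                logic_scale: float = 0.55, sram_scale: float = 0.80) -> float:
    """Die area after a node shrink where logic scales better than SRAM.
    Scale factors are illustrative, not any foundry's published numbers."""
    return logic_mm2 * logic_scale + sram_mm2 * sram_scale

# Both dies start at 100 mm^2; the cache-heavy one shrinks far less:
print(shrunk_area(80, 20))  # logic-heavy die
print(shrunk_area(20, 80))  # cache-heavy die
```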
Anyhow, TSMC 3nm is just a marketing term. On the physics side, it has about 7-8 key differences from the previous 5nm. That's 6-7 too many things for people to care about or remember.
Intel used to use a part of the transistor when it talked about scale. But then the geometry of the transistor changed! That number no longer has meaning. And transistor geometry changes over time, and you have multiple geometries to choose from on the same process node. Oops.
So… back to marketing terms.
The only people upset with this are benchmark warriors. People in the industry, or familiar enough with it, know how to navigate the characteristics of a process and do not need one single neat number. Customers by and large don't care: businesses are after the cheapest and best-supported option, and consumers are mostly price-sensitive, except for higher-end brands that don't communicate at all about this sort of thing.
This would not solve any of the industry’s problems.
I think there are problems with that metric too. Not all transistors are created equal. Depending on the switching speed you want, how much leakage current you can tolerate, etc., I bet you can vastly change how many transistors fit in a given area.
https://skywater-pdk.readthedocs.io/en/main/contents/librari...
See how there are six foundry provided cell libraries that make different performance, power and area trade-offs. Even though it’s all 130nm.
There are even more libraries than that too. Like the OSU one that makes even different trade offs.
[1] https://read.nxtbook.com/ieee/spectrum/spectrum_na_august_20...
This won't help in terms of clarity about how good the transistor is. As others mentioned, there are different kinds of transistors, and different manufacturers would still use their smallest (read: not necessarily the fastest/strongest/most-used) transistor to market their process.
A major reason I think this won't work is that transistor size no-longer limits the density of the chip. Nodes below 20nm have transistor contacts (what connects the silicon to metal), and metal tracks that are much larger than the transistor. The contacts typically limit the pitch between transistors and hence the density. A lot of innovation is now done to shrink those elements rather than work on the transistor physics/materials/size directly.
Layman question from not GP, but wouldn't reducing the "overhead" around transistors increase the transistors-per-area metric and thus be at least somewhat useful?
As you shrink the contacts and the metals their resistance and capacitance exponentially increase. This means that both your power will go up and your speeds will go down. Also the process becomes more prone to manufacturing errors. So shrinking those elements blindly without innovation just to increase the density numbers is not really a good metric of the quality of the process.
transistor switches /( s * mm * Joule) ?
Koomey's law seems to hold strong.
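Koomey's observation (computations per joule doubling roughly every 1.6 years historically, closer to 2.6 years since 2000) is easy to turn into a projection. Treat the doubling period as the uncertain input here; the exact figure is debated:

```python
def koomey_projection(ops_per_joule: float, years: float,
                      doubling_years: float = 2.6) -> float:
    """Project energy efficiency forward assuming Koomey-style exponential
    improvement; 2.6 years is the commonly cited post-2000 doubling period."""
    return ops_per_joule * 2 ** (years / doubling_years)

print(koomey_projection(1.0, 2.6))   # one doubling period -> 2.0
print(koomey_projection(1.0, 26.0))  # ten doublings -> 1024.0
```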
I work in Analog so I'd like a more analog based definition (which also works for digital tbh) based on size or density, trans-conductance(gm), output resistance (ro), and Unity frequency (fT) but that's never going to happen :<
Ideally you'd have some sort of metric that includes power efficiency, performance, and maybe even cost since smallness isn't really a feature in itself unless you're making something space-constrained like a hearing aid. It's hard to condense that down into a single number though.
I don't think this is fair though. Some processes may support FinFET, or GAAFET, or some neat trick where, if you switched to the process but didn't optimize for size, you might be able to double the performance at half the power.
Like you suggest, it's some crazy multidimensional problem space. There's no hope in representing it with one number. And, nobody that's actually designing would care about any of these numbers. In the end, they would only be used by marketing, which is all they're used for now.
Well, you technically can, but it's not starting with a thicker wafer, it's growing a second (or more) layer of dopable silicon on top of the normal doped silicon/transistor gate insulator/wiring/more wiring/even more wiring stack of the chip, then adding another full stack on that new dopable surface. Pretty sure the fabrication infrastructure for that makes conventional, GDP-of-a-small-country photolithography fabs look cheap by comparison, though. Plus you'd be at least squaring (and probably much worse) the already not very good production yield.
Actually, you probably can't; the distance between the sides is practically astronomical compared to the horizontal feature size. I don't remember the actual dimensions, but if you assume a 1-nm trace is equivalent to a 10-meter road, then a 1-mm wafer thickness would be equivalent to a 10,000-km planetary diameter, so you're getting close to the scale of routing an internet cable through the center of the earth. (At least it's not molten, I guess?) And doping generally works by diffusion, not by drilling a hole.
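The analogy's arithmetic checks out, roughly. Scaling a 1 nm feature up to a 10 m road and applying the same blow-up to a ~1 mm wafer (an illustrative thickness):

```python
scale = 10 / 1e-9         # 1 nm feature -> 10 m road: a 1e10 blow-up
wafer_thickness_m = 1e-3  # ~1 mm wafer, illustrative

scaled_km = wafer_thickness_m * scale / 1000
print(scaled_km)  # 10000.0 km, close to Earth's ~12,742 km diameter
```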
I naively assumed you could get an interconnect through, by taking up a large area of silicon. :)
I think a better question to ask is whether the underlying economic factors behind continuous process tech improvements are healthy. Is there enough value-add to the final user by continuous process tech improvements? Are the costs for that improved process tech scaling with the value-add? And is the competitive landscape healthy? While that holds true, companies will keep looking for process tech improvements to give them a competitive edge.
In the '80s-'90s this was very much true, but in recent decades less so, hence why we see consolidation and a reduction in the number of foundries, foundry services to amortize the cost of older node tech, and R&D going to companies/partnerships that can capture the most end-user value-add (Apple/TSMC).
Looking forward, I think the economics are very healthy with the design-house/foundry-service model we have right now, so I would guess that Moore's law (or some approximation) will continue for the next decade. There is a lot of process tech innovation that can lead to better performance that is not necessarily scaling-related. In fact, scaling transistors stopped being very useful a while back, afaik, due to the breakdown of Dennard scaling.
I agree with this and I suppose I just sort of reached for Moore's Law out of habit and maybe a bit of laziness. Thanks for articulating the question more appropriately.
>"Looking forward I think the economics are very healthy with a design-house/foundry service model that we have right now so I would guess that Moore's law (or some approximation) will continue for the next decade."
This was what I was looking for. Cheers.
It isn’t actually a law, it is more of an observation / prediction
But people have been inventing new technologies to be able to still improve chips (for instance, finfets and now GAA)
The only current products that are really hurting for better chips are high powered wearables like watches, VR headsets or AR glasses. The next few years should see some tangible improvements in those products, but probably not much difference in desktop, laptop or phone. Datacenters, since they run at high utilization rates, will continue to take advantage of the cost savings of slightly more performance per watt and dollar per transistor costs.
https://wccftech.com/tsmcs-3nm-wafer-prices-will-erode-trans...
https://en.wikichip.org/wiki/3_nm_lithography_process#TSMC
So pretty big deal for computing. We may be close to the age where most laptops are fanless, for example.
Most non-Apple devices are still on 7-11nm equivalents, since Apple gets first dibs on new-generation fabs from TSMC (they pay for the right).
Doubtful since both Intel and Apple are actually INCREASING power requirements over time. Look at Apple M1 vs M2. Intel i7-1185g7 vs i7-1280p.
The manufacturing process matters far more than whatever tweaks they make intra-generation. M1 is 5nm, Intel is 7-11nm
That's not what Anandtech found when they tested the performance and efficiency cores used in the A15 and M2.
Performance:
>In our extensive testing, we’re elated to see that it was actually mostly an efficiency focus this year, with the new performance cores showcasing adequate performance improvements, while at the same time reducing power consumption, as well as significantly improving energy efficiency.
Efficiency:
>The efficiency cores have also seen massive gains, this time around with Apple mostly investing them back into performance, with the new cores showcasing +23-28% absolute performance improvements, something that isn’t easily identified by popular benchmarking. This large performance increase further helps the SoC improve energy efficiency, and our initial battery life figures of the new 13 series showcase that the chip has a very large part into the vastly longer longevity of the new devices.
https://www.anandtech.com/show/16983/the-apple-a15-soc-perfo...
Correct me if I’m wrong but high performance fanless laptops are only possible with ARM chips atm. Has any other big laptop chip designer even announced plans for anything remotely close to M1/M2?
M1 is the only 5nm processor on the market right now. Yes, you will see quite comparable performance once Intel and AMD get to 5nm as well
So, extrapolating this, we'd expect 1nm (if it's even possible) to offer roughly the same gain. Not sure what we can have beyond 1nm.
N2 in 2025. N1.4, or maybe N14A, in 2027. Roughly N1 in 2030. I believe we will hit an economic wall by the end of this decade, i.e. the cost of the next-gen node will exceed what the market is willing to pay for it.
https://teavm.org/
Minecraft contains some native dependencies, though; you'll need something like https://copy.sh/v86/ or https://bellard.org/jslinux/ with the right operating system image to run it in browser.
Apple has been selling phones and laptops with N5 for two years and I don't think we still have any competing product in the hands of consumers using N5 yet. If I'm not mistaken Nvidia and AMD are about to release products using N5.
But this is also a free market.
Apple is both willing to pay more and has the needed scale to buy up all capacity. If someone else wanted to both pay more and buy up all capacity - they have the ability to do so.
If - in addition to what was described - part of the deal is that Apple would only make the deal if TSMC would not add any more capacity for 2 years NO MATTER WHAT - that is blatantly anti-competitive and monopolistic.
If no one else is willing to pay for TSMC to build another facility to produce more chips - then that's a free market. Apple was the highest bidder and won.
The essential facilities doctrine might apply: the question isn't whether competitors are literally forbidden from the latest chips, but whether they're "practically or reasonably" unable to access them, and all evidence suggests that's currently the case. Possibly it's the competitors' own fault for refusing to pay the entirely reasonable price TSMC is insisting on. Perhaps Apple's contract with TSMC actually has carefully written clauses to ensure others could practically gain access without spending enormous amounts of money, but nobody has been willing to use them for their own reasons. But it certainly seems possible that the contracts effectively ensure Apple has sole access to the latest hardware, which would line up neatly with the apparent scenario we're in: that Apple currently has sole access to the latest hardware.
That's not to say it's a slam dunk case either. I wouldn't care to guess either way.
Shutting out all competitors from new technology for years by outspending them (arguably from profits largely made via other monopolistic like behaviors) could qualify for many.
I'll never understand the endless parade of apple goons who rush to defend them at every turn. Use some critical thought instead of treating it like sports
I'm describing a company that is able to pay far higher than competitors due to anticompetitive practices which drive their margins far above what would be realized in a truly competitive environment.
I'd gladly like to see the alternative timeline where Microsoft charged 30% for all software on windows and where you and others shamed them for it. Because I know for a fact all of those defending Apple here would have at the time
And if you look at their revenue breakdown, it’s clear that most of it comes from hardware.
When an ecosystem becomes large enough, societally impactful enough, and is controlled exclusively by one entity, that qualifies as a monopoly within that ecosystem.
You remove any capacity for competition to drive down margins. Apple can ban Spotify tomorrow to promote apple music on their devices, for example. Not anti-competitive in your mind?
You have two gated monopolistic ecosystems, choose the one you want to be gouged by. In a competitive environment, margins get compressed close to cost to deliver. If a company can achieve huge margins on items that could otherwise be much cheaper to deliver in a competitive environment, that qualifies as anti competitive.
How much does it cost the supplier to host apps via CDN? Or to process a single payment? Apple can take as much margin as they want on these because nobody else is allowed to provide those services within their ecosystem
The law will adapt to this, it's quite obvious to those that are unbiased.
The amount of REVENUE Apple makes from iTunes is likely less than $5Bn per quarter.
Apple is not using their iTunes monopoly to buy chips.
They're using their insane amount of phone sales (at absurd profit margins) to buy chips.
Again, I hate Apple as much as anyone. Anyone who thinks Apple is a monopoly in phones is a moron. They don't even own 30% of the smartphone market...
I agree Apple is anti-competitive within the ecosystem of the iPhone. That's not going to change their ability to buy all of TSMC's chips.
Is there some number of gated ecosystems that we get gouged by that would be considered acceptable and non-monopolistic? Three, ten?
When a platform reaches a sufficient size (subjective), it must be opened to competition for services provided within that platform.
We're talking about the base operating system for global computing here, not a car's dashboard system
This sounds similar to how Torvalds has a monopoly on services within mainline Linux?
You can poll your friends, colleagues, and relatives: how many people use an iPhone vs something else? If the share is more than 50%, then Apple is effectively a monopoly.
Same with the tablet market: just make a table of how many iPads vs alternatives.
It only makes sense to do this on the level of a governing body, because the term 'monopoly' is defined by governing bodies and the remedies against it are enforced by the same.
You can cherry-pick / gerrymander any group you want to make it look like Apple has a monopoly by the numbers, but in order to make it stick you'd need to demonstrate that Apple fixes prices (they don't, their phones cost ~2x or more what competitors' phones cost) or that they're somehow excluding other competitors from access to the phone market (they're not).
However, iPhone is not a commodity, it comes bundled with an OS and plethora of apps and services, the whole ecosystem.
per this report ( https://www.imore.com/apple-takes-75-smartphone-profits-desp... ) it says apple is taking over 75% of profit pool of the market, despite selling only 13% in units.
If we identify monopolists using bottom-line approach, then Apple will be a clear monopolist. As would be Facebook and Google. as they should be, if lawmakers truly cared about public good
Per the parent, the challenge is to convince governments to take your proposed approach. Repeating the argument to try to convince a HN crowd is not just non-productive, but counter-productive if it doesn't move toward this goal.
Taking away property and/or converting investments into a utility when a business becomes too successful results in a lot of nasty byproducts and most governing bodies will shy away from this.
Instead, you likely see the issue completely side-stepped. Nobody truly cares about labelling Apple a monopoly or declaring iOS to be a somehow accepted as a market onto itself - they want a change in behavior.
The challenge is how to side-step this when, per the letter of the law, they have literally done nothing wrong. For instance, the EU will likely get a lot of pressure due to new regulations being aimed solely at US-headquartered companies, for the primary benefit of companies in the EU.
Actually, if this were to happen back in the days, we would have everyone on Linux Desktop already...
>I'll never understand the endless parade of apple goons who rush to defend them at every turn.
Maybe because people are not defending Apple but actually giving the correct account of the story?
I dislike modern Apple, or Tim Cook's Apple. But that doesn't mean I would agree that what they are doing (in the context of TSMC) is monopolistic. Without Apple, there would be no N3 by 2023. It is as simple as that. Apple in this case is not a customer, but more like a partner who is willing to invest (and take the associated risk) and spend to get and push for the latest technology.
Backing up though I think it's interesting that Apple has so much money that they can afford to buy up leading nodes for years. Why do they have that much money? I think it's obviously because of anti competitive behavior elsewhere such as their App Store which basically gives them a money printer.
The free market sucks. People need to stop valorizing it.
In a purely free market, monopolies can form and exist in perpetuity
To have a truly free market it should be accounting for this edge case - that is once a company grows beyond certain size it should by law be split.
There should be other provisions. For instance, if a big corporation causes legal trouble, it should match the legal budget of the claimant: if someone is suing Apple because of an issue with their hardware and Apple sets aside a £100m legal budget, by law it should be required to set aside £100m for the claimant, so there is a level playing field. That, of course, only if the judge decides for the case to go on.
It would certainly have the potential to make the market work better and improve competition, but that's not what a free market is.
Corporations also can't do what they want - for instance they cannot commit fraud, evade taxes, sell dangerous products and so on.
You have something like that in sports: in order for participants to compete freely, they are tested for illegal drugs, so that no one has an unfair competitive advantage.
Free != do what you want.
> an economic system in which prices are determined by unrestricted competition between privately owned businesses.
"Unrestricted competition", to me, clearly says that businesses should be able to compete without such restrictions as being split up by the government if they become too big.
Or this definition from investopedia (the first actual result on Google):
> The free market is an economic system based on supply and demand with little or no government control.
The government splitting up businesses is certainly a form of government control.
However, Wikipedia's first paragraph is:
> In economics, a free market is a system in which the prices for goods and services are self-regulated by buyers and sellers negotiating in an open market without market coercions. In a free market, the laws and forces of supply and demand are free from any intervention by a government or other authority other than those interventions which are made to prohibit market coercions. Examples of such prohibited market coercions include: economic privilege, monopolies, and artificial scarcities.
Apparently, the government imposing control and restricting competition is part of Wikipedia's definition of a "free market", so extremely harsh anti-monopoly laws and laws against anti-competitive behavior fit perfectly well within Wikipedia's definition, but make the market less free according to the definitions I've been following.
The important part is, I agree that a lot of government intervention is required to make markets function efficiently. I just personally wouldn't call that a "free market", rather a well-regulated market.
Also, Apple didn't make Intel fumble their 10nm process or make Qualcomm miss aarch64. The anticompetitive practices of Intel and Qualcomm made them lazy, and that is what resulted in this - not Apple buying TSMC capacity.
How is that not free market?
As a consumer, I can now go out and buy a machine with Apple silicon, Intel, or AMD.
If anything, we're finally seeing some improvements after years of chip stagnation.
I'm not sure if node exclusivity was why Apple had the first true wireless earbuds - there was one competitor first with some giant ones, so I suspect so - but lots of competitors followed fast (other foundries' nodes were more competitive then). Anyone know the history on that, and which process nodes were used by them and the earliest competitors?
Apple was just smarter and convinced consumers to buy its products at a premium - just like with iPhones.
Consumers couldn't buy products with a pocketable hard drive: Apple had exclusivity agreements with all manufacturers of them, completely monopolizing their use in music players.
Also remember that the iPod wasn’t an immediate hit; it sold only 1 million units in total in its first two years. It didn’t take off until two years later, when it was made available on Windows.
Apple did the same with flash storage before the iPod Nano came out. Any of its larger competitors at the time could have had the vision to see that flash-based players were the future and ensured they had the supply.
There are multiple fabricators (supply) and a small pool of chip manufacturers (demand).
As a chip manufacturer, I cannot go out and buy competitive silicon. I can't even get last-gen silicon. The best I can get is an improved 7nm node from Samsung, but nobody wants to run their CPU on the blood and guts of Exynos chips. So, by definition, I'd say that Apple has monopolized the cutting-edge lithography business by leveraging a partner.
It isn't like Samsung and AMD can't afford to jump on new nodes, they just won't be able to raise their prices enough for it to make sense to them.
Apple pays more hence they get the product
If someone else wants to pay more, they can get the product too
There are other foundries too, they could get their phone chip done with Samsung on a similar technology
What is true is that outside of TSMC and Samsung, you don’t have options for high-end products
The fact that they have more money than competitors and leverage that to get exclusive control doesn't matter much. Textbook definition of a monopoly. On the legal side, IANAL, no idea if this could be illegal somewhere.
They will just get the first chips — maybe for the first year of production, idk
Whatever you want to call it, it's pretty anticompetitive. Seeing them do the same thing for 3nm doesn't inspire confidence that they're going to do the right thing this time around.
Except for the first year, where they get a head start that nobody else can match - either with comparable chips or in learning what issues come out of the process - giving them an even bigger advantage: they have stabilized their process by year 2, while everyone else is barely ramping up.
Don't be a clown. You know fully well what Apple is doing.
If other fabs were available, apple's ability to pay up would not be leading to accusations of anti-competitive behaviour.
If I ignore reality and make up non-existent fabs, the extremely monopolistic, anti-competitive behaviour is not a problem!
>they are paying what they can afford.
The problem is not paying what they can afford, it's paying that in addition to forbidding TSMC from increasing their capacity on that node while the contract is ongoing. It's both paying what they can afford, and paying so that others can't afford it.
If some other company were willing (or really: able) to make a competing offer, I’m not aware of TSMC not being interested in that. It just happens that the number of companies with the margins to spend what probably amounts to the GDP of a small nation is approximately zero (or one, if you count Apple).
Except they didn't have exclusive access? Huawei also bought 5nm. Intel contracted for 3nm along with Apple before backing out.
Is there any proof that this is actually true? TSMC is currently building a few new factories; I suppose they will produce 3nm chips in them.
I am sorry but that is simply not true. Sigh.
Except that Apple don’t supply or trade in chips. If they were buying up chips and hoarding them it might be considered a monopoly.
The seller of Top Quality Material is in no position to sell their A-grade material to all buyers.
I think the issue is that the investment costs are so high that this ability is limited to a very small group of companies, as is the required knowledge.
I don't really know where I stand on this myself; in principle I agree with your point of view, but I also think it's a little bit too simplistic.
This began more than 10 years ago when Apple was sitting on a massive pile of cash. What do you do with all that cash? Most companies will simply pay dividends or, more commonly now, do share buybacks (side note: it's become popular to view share buybacks as some kind of systemic problem, but they're functionally no different from dividends).
Instead, Apple engaged in vendor financing. A lot of the components Apple needs require massive capital investment. Companies might borrow money for this. Apple essentially became the bank, saying we'll give you the money for this. In return Apple gets some combination of preferential pricing, guaranteed availability, or exclusivity for a certain period.
Eventually Apple doesn't even need to spend money to do this. Just the commitment of a buyer the size of Apple to purchase your output can have the same effect: it'll help secure financing, and Apple can extract the same preferential treatment for that commitment.
This is the logistics side of Apple's business and Tim Cook was the architect of that.
AMD used GlobalFoundries
Idk about MediaTek, could be
What started the TSMC “boom” was Apple
[0] https://www.techspot.com/article/653-history-of-the-gpu-part... "The TNT2 utilized TSMC's 250nm process and managed to deliver the performance Nvidia had hoped for with the original TNT."
[1] https://www.cadence.com/en_US/home/company/newsroom/press-re...
[2] https://www.techspot.com/article/657-history-of-the-gpu-part...
Ignoring the 2014 outlier (2013 was the year Apple started using TSMC) the growth curve looks pretty uniform for the last two decades.
But my reasoning was:
* Most importantly, TSMC's market share grew uniformly, and they already were quite dominant before (market shares: 2013: 49%, 2014: 54%, 2015: 55%, from [1])
* I don't know whether that additional revenue is from Apple. Apple's use of TSMC has to have started in 2013 if they shipped the products in 2014, and to my knowledge it didn't end in 2014.
* If Apple could use a node size in 2014, actual development of that has to have started quite a few years before.
[1] https://investor.tsmc.com/english/annual-reports
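The "uniform growth" point is easy to check from the share figures quoted above (49/54/55%, per TSMC's annual reports); a trivial back-of-the-envelope sketch of the year-over-year deltas, nothing more:

```python
# TSMC foundry market share by year, as quoted above from TSMC's
# annual reports. Illustrative arithmetic only.
share = {2013: 49, 2014: 54, 2015: 55}  # percent

# Year-over-year change in share, in percentage points.
deltas = {y: share[y] - share[y - 1] for y in sorted(share) if y - 1 in share}
for year, d in deltas.items():
    print(f"{year}: {d:+d} points")
```

Which shows the +5-point jump in 2014 (the first full year of Apple volume) against +1 point the following year - consistent with treating 2014 as the outlier.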
Oh, I see. Makes much more sense than getting $60B out of the blue :)
Really, the biggest semiconductor manufacturer in the world with tons of customers cannot exist without Apple ?
TSMC started becoming relevant because Apple switched from Samsung to TSMC way back, around the second iPhone, if I am not mistaken
They started getting a lot of cash and investing it back into R&D, and with time got to where they are now: leading the industry by far, with only Samsung capable of providing somewhat competing technology, but with a much smaller market share
What's the difference between having an agreement with a factory and having a better employee?
Apple pays the money and takes the risks, so they get the rewards.
Takes what risks? Buying the riskless new transistors that TSMC puts out because they know they work?
Apple taking risks would be making the chips themselves (see: the failure that was Itanium and how it screwed Intel because they had to reorganize their facilities for it), not buying wholesale and just having TSMC print it.
And if you think that risk is small / nil I would point you to Intel's recent track record.
And just to point out that Apple actually does bear the risk you've highlighted: Itanium was a design failure (and design is what Apple does), not a manufacturing failure.
You could say the same thing about Intel until they blew it with 10nm. Who knows what the future holds
Also, blaming Intel's process problems on Itanium (discontinued 2011) is - well, an interesting perspective!
https://www.tomshardware.com/news/intel-postpones-production...
The issue seems to be whether these other companies are ready to port their designs. I have not seen any report that Apple is undertaking anti-competitive behavior in order to secure the entire generation of wafers.
Up to now it doesn’t seem like anyone is willing to pay.
I suspect the competition really is “ok” with waiting / don’t want to pay.
Idk if Samsung has an equivalent to TSMC 3nm though
Qualcomm saw big gains in efficiency and performance by moving from Samsung to TSMC. Various high end Android phones use this chip already.
https://www.anandtech.com/show/17395/qualcomm-announces-snap...
Qualcomm's current flagship mobile chip, the Snapdragon 8+ Gen 1, is on TSMC 4nm.
https://www.anandtech.com/show/17395/qualcomm-announces-snap...
And then it's not like the day after, Apple could produce iPhones or Macs with those chips. It takes months from having the first chip in hand until production churns out volumes of finished devices.
Might also lead to some awkwardness for Apple in having to port a design running on an older node to a newer one, if M3 does use 3nm.
By all accounts, and if the pattern holds true, the M[n] chip is generally a scaled-up version of the latest A-series design.
So if we have a release of A15 on 5nm, and the M3 is going to be based off that design, but ported to 3nm, that's kind of an interesting situation.
The lead time would be enough that I doubt the M2 was ever going to be a 3nm chip.
I can’t actually find anything other than ‘analyst states’ type links, though.
TSMC call it N3 instead of 3nm for a reason. It's not 3nm.
Of course once they were beyond the clockspeed-above-all Pentium 4, Intel themselves stopped focusing on marketing clockspeeds and things gave way to basically arbitrary naming/numbering schemes. (Or to some extent, core counts.)
When Intel was dominant (pre-Athlon XP and pre-Core era), clock speed alone could be used to measure performance. Then the AMD Athlon XP came out with higher IPC than the Pentiums, but customers were used to "higher clock, higher performance", even though the Athlon XP was faster at the same clock speed.
So they changed model names to indicate performance parity - that was the game Intel played. They didn't lie or mislead about clock speeds anywhere.
And of course it's not just Intel marketing this way - everyone does - but their more recent naming change makes it more transparent. Tired of years of saying "well, TSMC's Xnm (or X number with no units that's sort of implied to be nm) is really equivalent to our X+Ynm," they just market on parity.
Like I said at the start of my previous post, this isn't actually inflating (or deflating, as the case may be); it's just taking a concept that's widespread among consumers as a proxy for quality/value and marketing to that idea rather than to any actual physical characteristic.
The knowledge and education backlog is way too high for the West
Scary times
https://www.theregister.com/2022/06/13/column/
To be fair, most of the equipment used by TSMC is made by European companies such as ASML.
It's an insurance policy. TSMC/Samsung cannot function without ASML and ASML cannot exist without their demand.
https://www.theverge.com/2022/6/30/23189362/samsung-3nm-chip...
What do you mean exactly? Did you mean reunification? I don't see how that could be a problem; we do business with China already. It would just become one of their provinces, and you could still trade with them.
Or did you mean that, as a result, the US would not be able to steal the fabs, tech, and workforce? (And by steal I mean purchase, just like we write about China stealing our tech when they purchase patents or acquire it by other deep internal means.)
Samsung is in a worse position than Intel: not only is it Korean, with problems with its northern neighbor as well as with Japan, but it also got hit by anti-competitive behavior and the trade war from the US, and was told to stop development of its in-house chip (Exynos) to position Qualcomm as the western leader, despite their poor tech.
https://old.reddit.com/r/Android/comments/71rjyx/why_exynos_...
And I'm not sure if The Verge is a trustworthy source; they either have insider knowledge or their source is manipulative, because that contradicts the Exynos story.
If what they claim is true, it'll be interesting to see whether their "promise" can become a reality; time will tell.
But as a matter of fact, still nothing is happening in the West, and that was my entire point.
I'm sorry what? A chinese invasion of Taiwan would be catastrophic for the global supply chain. It would require a force a few times bigger than the entire D-Day invasion, and there's no way the chip fabs wouldn't be smouldering ruins afterwards. The reason china hasn't already invaded is that it would have a nearly incalculable cost for very little benefit. Far better to rattle the sword for the nationalists every so often and trade your way to prosperity.
But I don't think that's what most Taiwanese want; it would be literal suicide - it's a small island. They see it as a political affair, not a military one.
The military view is what the western media want to push, for some reason.
We saw it during the Pelosi visit, btw. What's next, help elect a Zelensky bis? It failed with Abe; the wind is blowing backward, it seems.
In the same sense that Russia is trying to get Ukraine to reunify, sure.
I use the literal term for what's happening between China and Taiwan, that's it.
Objectivity is what drives me; there is no winner if it is built on lies and manipulations.
It is not 3 nanometers according to the metric system anyway.
Oh well.
Mainland flags on the Taiwanese monuments. How many months away, let me check my predictions, I think 70 days or so, give or take 15. Shit getting real hot real fast.
Who needs phones this fast?
To give you some answer, I'd say that newer technology enables the following, completely non-exhaustive list:
- Macbooks that can run on a single charge for a whole day and still have very high performance.
- Cameras in a pocket/phone good enough that almost no one except professional photographers has to carry photo equipment
- Electric cars that can finally travel distances that make them actually useful
- Combustion cars so that every city we live in is not toxic. And they get to be economical too.
- Comfort within a car, which we value.
- Phones that are small (oh sorry, this can cause flamewars, so let's put it like this: circuits get small)
- Technology demand, whether it comes from manufacturing phones or something else, benefits making computers (i.e. Apple Silicon was for phones/tablets, now for Macs). It benefits other vendors as they get access to the technology too. So much investment enables TSMC to actually deliver the technology.
Plus, FPGAs get absolutely huge, with terrible yields, for complex work. RED cameras have a massive, multi-thousand-dollar FPGA (crossing $10K as a part), and it is only powerful enough to process their custom video codec and nothing else. Still cheaper than designing a custom chip, considering how niche RED cameras are, but absolutely stupid for anything broader. An FPGA that could run an M1 equivalent would be larger than the laptop, cost over $100K, and be as slow as a turtle racing across Oregon.
It's just the perfect storm.
Everything else is want and as a society, we agreed that you don't need to justify your want.
Whether that is a bottle of beer or a new smartphone.
Programmers who cut corners while writing phone software.
The point I was making is that it's not a phone anymore; it's a full-fledged computer that is arguably more advanced than your average desktop.
Do we need more?
The two-day mainstream smartphone is not here yet.
Everyone gets habituated by context to look at the thing more and more as the gateway to everything.
It's not just for phones - these nodes end up powering everything. But it starts with phones because that's where the money is.
Would you rather have Intel keep pushing 14nm+++?