The seemingly endless improvement of computers could be coming to an end, finally breaking Moore’s Law
This is Geek Week, my newsletter about whatever nerdy things have happened to catch my eye over the past seven days. Here’s me, musing about something I don’t fully understand in an attempt to get my head around it: I imagine that’s how most editions will be. If you’d like to get this direct to your inbox, every single week, you can sign up here.
There’s an old gag among computer scientists: “The number of people predicting the death of Moore’s Law doubles every two years.”
Moore’s Law, if you’re unfamiliar, is about how quickly computers improve.
In 1965, Gordon Moore, then head of research and development at Fairchild Semiconductor (he would go on to co-found Intel three years later), published a paper: “Cramming more components onto integrated circuits”. He had noticed that the complexity of computer chips – the number of components per square centimetre – had doubled each year for four years. “There is no reason to believe,” he wrote, “it will not remain nearly constant for at least 10 years.”
That was 57 years ago. These days we normally talk about the Moore’s Law cycle as doubling every 18 months to two years. But given that he was only looking at four data points, it wasn’t a bad prediction.
Exponential growth is an astonishing thing. If you start with one pound and you double it every two years, after 20 years you’ll have more than £1,000 and after 40 years you’ll have more than a million.
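(If you want to check that arithmetic yourself, here’s a throwaway two-line sketch in Python; the numbers are just the £1-doubling example above, nothing more.)

```python
# £1 doubled every two years: after n years you have 2 ** (n / 2) pounds.
for years in (20, 40):
    print(f"After {years} years: £{2 ** (years // 2):,}")
# After 20 years: £1,024
# After 40 years: £1,048,576
```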
And as the microchips got more complex, computers got cheaper and more powerful. The processor in an iPhone 13 has about 15 billion transistors, roughly 500 times as many as the chip in a top-end Pentium III from 2000. Your smartphone, if it were transported back to the mid-1990s, would have more raw computing power than the world’s most expensive supercomputer. Moore’s Law has shaped the modern world.
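For a very rough sense of where that ratio comes from, here’s a back-of-the-envelope sketch. The transistor counts (about 15 billion for the iPhone 13’s A15 chip, and a commonly cited figure of roughly 28 million for a 2000-era Pentium III) are my own assumptions, not anything official:

```python
# Back-of-the-envelope transistor comparison; both counts are rough,
# commonly cited figures rather than exact numbers.
a15_transistors = 15_000_000_000   # Apple A15 (iPhone 13)
p3_transistors = 28_000_000        # top-end Pentium III, circa 2000
print(f"Roughly {a15_transistors / p3_transistors:.0f}x as many transistors")
# Roughly 536x as many transistors
```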
And now, apparently, it’s slowing down.
As I said at the beginning, this is not the first time someone has said that. Foretelling the Law’s demise was described as an “industry joke” way back in 2000. But I thought I’d better check whether, this time, it’s actually true.
I think my conclusion, if you don’t mind my getting it out of the way early, is: “Yes in a narrow boring sense, ‘it’s complicated’ in a wider more interesting sense”.
The narrow yes
I asked Steve Furber, a professor of advanced processor technology at the University of Manchester, about Moore’s Law, and he said that the rate of doubling is definitely slowing.
Partly that’s a function of fundamental physics. “You get about five silicon atoms to the nanometre,” says Furber. “We’re now talking about [manufacturing processes at] four or five nanometres, so we’re talking about structures that are maybe 20 atoms wide.” It’s difficult to make a useful machine that’s much smaller than that.
It’s also a function of a different bit of fundamental physics, namely thermodynamics. Doing processing work creates heat, and the more tightly the components are crammed in, the harder it is for that heat to escape. Semiconductors become less efficient at higher temperatures, so warmer computers are worse computers.
The main reason, though, is economics. For a long time, making the components smaller made processing power cheaper: “The economics were positive down to about 20 to 30 nanometres,” Furber says. “But now they’re going the wrong way.”
The result is that (according to this 2018 paper at least, if I’ve read it correctly) Moore’s Law has significantly slowed since about 2010. I said earlier that the doubling time was about 18 months to two years: that paper suggests that in the last decade or so, it’s become more like every four years.
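To put that shift in concrete terms, here’s a rough sketch (my own illustration, not from the paper) of how much difference the doubling time makes over a decade:

```python
# Growth factor over a decade for two different doubling times.
def growth(years, doubling_time):
    return 2 ** (years / doubling_time)

for doubling_time in (2, 4):
    print(f"Doubling every {doubling_time} years: ~{growth(10, doubling_time):.0f}x over a decade")
# Doubling every 2 years: ~32x over a decade
# Doubling every 4 years: ~6x over a decade
```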
For the consumer, this means that your computers are getting better and cheaper less quickly than they did a decade or two ago.
Ben Southwood, an economist and founder of the progress-studies magazine Works in Progress, notes that the progress we are still making is much more expensive and researcher-intensive. He pointed me to a paper called “Are ideas getting harder to find?”, by the Stanford economist Nicholas Bloom and colleagues, which says that the number of researchers working on chips (or at least the amount of money spent on R&D by the big computing firms, which the authors use as a proxy) has gone up 18-fold since 1971. That is, companies have to work much harder and pay much more to get the same sort of advances.
I should admit that I’m not at all confident in this. Prof Furber will definitely know more than I do about the topic, but other people who also know more about the topic than I do (for instance this guy) say that Moore’s Law is “as close to a perfect trend as empirical laws over long time spans can be asked to give”. And I have to admit, when I look at graphs of transistor density and so on, I can’t see an obvious bend in the curve.
But with low confidence, I’m going to say that Moore’s Law is slowing down. And that’s what you’d expect: you can’t get exponential growth forever. “All industries go through an S-curve,” says Furber. “We had the exponential at the bottom, and we’re now approaching the asymptote.”
The interesting “it’s complicated”
But here’s why I think that story is incomplete. Moore’s Law is a very specific measure: the number of transistors you can fit on a square centimetre of silicon. It has driven our computing landscape for the last 50 years, but 50 years isn’t very long.
The number of calculations you can perform per second is measured in “floating-point operations per second”, FLOPS. The iPhone 13’s processor can do something like 1.5 trillion calculations a second, or 1.5 teraFLOPS. Before the invention of the digital computer, we still performed calculations, but had to do it with more primitive devices. A few thousand years ago, calculations would have been performed by hand on clay tablets. Then someone invented counting boards, and the abacus, and so on. Then more modern devices like slide-rules and the “arithmometer”.
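For a sense of scale, here’s a quick, purely illustrative calculation of how long a human doing one calculation per second (my assumption, obviously) would need to match a single second of a 1.5 teraFLOPS chip:

```python
# One second of a ~1.5 teraFLOPS chip, done by hand at one calculation per second.
flops = 1.5e12                          # calculations per second
seconds_per_year = 60 * 60 * 24 * 365.25
print(f"About {flops / seconds_per_year:,.0f} years of non-stop hand calculation")
# About 47,533 years
```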
As measured in FLOPS, our ability to do calculations was already getting cheaper before the computer. This paper suggests it was much slower, a handful of percentage points a year rather than 30 or 40 per cent as it has been since 1950 or something. But since at least 1800, and presumably since before then, the cost of calculations has been dropping.
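Compounding makes those two rates wildly different. A rough sketch, using 5 per cent and 35 per cent a year as stand-ins for “a handful” and “30 or 40” (the exact figures are mine, not the paper’s):

```python
# Cumulative improvement over 50 years at two illustrative annual growth rates.
for rate in (0.05, 0.35):
    print(f"{rate:.0%} a year for 50 years: ~{(1 + rate) ** 50:,.0f}x")
# Prints roughly 11x for 5% a year, and roughly 3.3 million x for 35% a year.
```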
Moore’s Law, in the narrow sense of the number of transistors on a chip, could not predate the existence of transistors. But in the broader sense of humanity’s ability to perform calculations faster and more cheaply, a slower form of Moore’s Law existed for at least 150 years before the thing it describes did.
So even if Moore’s Law in that narrow sense is slowing down, the interesting question is whether the cost of FLOPS will continue to fall. As Furber said, all technologies follow an S-curve: rapid exponential improvement at first, then slowing progress as they mature. But once progress in one technology is largely exhausted, a new technology tends to be developed, and a new S-curve begins.
So what happens next? Possibly it’s as simple as making 3D chips. A processor chip now is a two-dimensional system: transistors laid out on a flat surface. But Furber points out that those tiny microSD cards that you can plug into the side of your laptop “go well into the third dimension”, which is why you can get ridiculous amounts of data (512 gigabytes! Twenty-five thousand times as much as the hard drive on my dad’s 486SX in 1995) into such tiny spaces. There’s no reason that wouldn’t work for processors as well as memory, says Furber, although “watch the heat”.
Or maybe it’s something more exotic, like optical processors or quantum computing. (Apparently the state of the art in quantum computing is improving far faster than Moore’s Law.) If “Moore’s Law”, the narrow version, runs out of steam, I’d be surprised if something else didn’t take over.
Nerdy blogpost of the week: The low-hanging fruit argument
The Moore’s Law debate is a microcosm of a wider one, which is: is human progress slowing down? There was a recent and fascinating Radio 4 documentary on the topic, and the growing “progress studies” movement is dedicated to looking at what the drivers of progress are.
I can’t get into the whole debate here, but Scott Alexander uses a “low-hanging fruit” metaphor to try to explain why progress might appear to be slowing:
Imagine some foragers who have just set up a new camp. The first day, they forage in the immediate vicinity of the camp, leaving the ground bare. The next day, they go a little further, and so on. There’s no point in travelling miles and miles away when there are still tasty roots and grubs nearby. But as time goes on, the radius of denuded ground will get wider and wider. Eventually, the foragers will have to embark on long expeditions with skilled guides just to make it to the nearest productive land.
He adds “tall people” as a metaphor for “geniuses”: they can discover fruit hanging near the camp which is out of reach of us normal-sized people.
It’s an interesting metaphor, but I think it misses something: it’s not just that new discoveries are lying around waiting to be found, and the more you find the fewer there are left. It’s also that new discoveries and inventions open up the possibility of further discoveries. Someone invents the telescope, and someone else can then observe the movements of the planets and discover gravity. I think the metaphor would work better if instead of fruit-gathering it were logging: you chop down the trees nearest the camp, which pushes the treeline further away but also opens up more trees to fell.