We overestimated the importance of node size and did not fully understand what it meant to jump from Intel to Apple Silicon at the time.
We thought node size dictated most of the energy consumption, when the architecture (ARM) is actually the main factor. The significant jump in battery life compared to Intel also cemented the idea that Apple could pull off a bigger battery life improvement each generation than Intel could. The later gains are smaller than that first jump, but still at least an hour or so per generation. With each node size reduction we were drowning in hype that it could mean 3+ hours of battery life improvement.
Lo and behold, it was actually the initial architectural shift (x86 > ARM) that dictated such a jump. Apple is no magical company; it was Intel being stuck on x86 that gave it such piss poor battery life. To Intel's credit, they did improve battery life incrementally with each node and minor architecture shift.
The truth is each architectural jump is more like a plateau. This is ARM's battery life plateau, where it will fluctuate a little with each optimization, depending on form factor of course. If we want another massive jump of 3+ hours, we would either need a 10-20% increase in battery capacity for each form factor or a jump to a whole new, more efficient architecture (RISC-V?).
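To sanity-check that 10-20% figure, here is a rough back-of-envelope sketch. The battery size and runtime numbers below are made-up illustrative values, not real product specs; the only assumption is that average power draw stays constant, so runtime scales linearly with capacity.

```python
def extra_capacity_needed(capacity_wh: float, life_hours: float,
                          target_gain_hours: float) -> float:
    """Percent capacity increase needed for target_gain_hours more runtime,
    assuming average power draw stays the same."""
    avg_power_w = capacity_wh / life_hours                       # implied average draw
    new_capacity = avg_power_w * (life_hours + target_gain_hours)  # Wh for new target
    return (new_capacity / capacity_wh - 1) * 100

# Hypothetical laptop: 70 Wh battery, 18 hours of runtime, want +3 hours
print(round(extra_capacity_needed(70, 18, 3), 1))  # → 16.7
```

Under these made-up numbers, a 3-hour gain needs roughly a 17% bigger battery, which lands inside the 10-20% range above.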
This is my current understanding with my limited knowledge so I might be wrong.
The vast majority of consumers don't understand nodes and didn't realize transistors stopped shrinking linearly with the node name somewhere around 45nm. Back when we were talking 90nm or 65nm, those numbers reflected real feature sizes, and the improvements in performance and power were real. Nothing today called 14nm or 3nm has a single transistor at that size. It's purely marketing.
Intel's bread and butter is servers. Those Xeons are extremely high margin, and x86 continues to lead in performance when power consumption doesn't matter.