By the end of its operation in 1956, ENIAC (Electronic Numerical Integrator And Computer) contained 20,000 vacuum tubes; 7,200 crystal diodes; 1,500 relays; 70,000 resistors; 10,000 capacitors; and approximately 5,000,000 hand-soldered joints. It weighed more than 30 short tons (27 t), was roughly 2.4 m x 0.9 m x 30 m (8 ft x 3 ft x 98 ft) in size, occupied 167 m2 (1,800 sq ft), and consumed 150 kW of electricity. One of the first Turing-complete, digital, general-purpose computers, ENIAC easily outweighed every other computer of its day. In active service from 1946 to 1955, it was designed to calculate artillery firing tables for the United States Army's Ballistic Research Laboratory; its first program, however, was a study of the feasibility of the thermonuclear weapon.

Comparing this behemoth to a modern iPhone, say the iPhone 6, gives an idea of how far technology has progressed in the intervening seven decades. The iPhone 6 is 6.9 millimeters (0.27 in) thick, has a 4.7-inch display, and weighs around 130 grams. Yet when it comes to processing power, this relatively tiny device outperforms the ENIAC more than a thousand times over: the ENIAC had somewhere between 1,600 and 17,468 processing elements, whereas the iPhone 6 has around 2 billion. The phone boasts about 128 billion to half a trillion bits of onboard memory, whereas the ENIAC had only around 1,600 bits.

Making sense of the downscaling in size and the upscaling in performance: Moore's Law

This downscaling no longer comes as a surprise; the semiconductor and electronics industry has long had a name for it: "Moore's Law." In a 1965 article, Gordon Moore, co-founder of Fairchild Semiconductor and later of Intel, laid down the observation that the number of transistors in a dense integrated circuit doubles at a regular cadence: initially every year, a figure he later revised to about every two years. Even earlier, in 1959, Douglas Engelbart had discussed the projected downscaling of integrated circuit size in the article "Microelectronics and the Art of Similitude". Engelbart presented his ideas at the 1960 International Solid-State Circuits Conference, where Moore was in the audience.

For the thirty-fifth anniversary issue of Electronics magazine, published on April 19, 1965, Gordon E. Moore, then director of research and development at Fairchild Semiconductor, was asked to predict what would happen in the semiconductor components industry over the next ten years. His response was a brief article entitled "Cramming more components onto integrated circuits". In it, he speculated that by 1975 it would be possible to fit as many as 65,000 components on a single quarter-inch semiconductor.
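Moore's 65,000 figure can be checked with simple arithmetic. The sketch below assumes roughly 64 components per chip in 1965 (an assumption on our part, close to the most complex chips of the day) and one doubling per year, the cadence Moore used in the 1965 article:

```python
# Hypothetical back-of-the-envelope check of Moore's 1965 extrapolation.
# Assumes ~64 components per chip in 1965 and one doubling per year.
components_1965 = 64
doublings = 10  # 1965 -> 1975

projection = components_1965 * 2 ** doublings
print(projection)  # 65536, close to Moore's "as many as 65,000"
```

Ten annual doublings multiply the count by 2^10 = 1,024, which is how a few dozen components becomes tens of thousands in a single decade.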

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.” - Gordon E. Moore

His reasoning was a log-linear relationship between device complexity (higher circuit density at reduced cost) and time. He justified the predicted trend with three factors:

1. Die sizes were increasing at an exponential rate, and as defect densities decreased, chip manufacturers could work with larger areas without sacrificing yield.
2. Minimum feature dimensions were simultaneously becoming finer.
3. What Moore called "circuit and device cleverness".

These three factors kept a healthy down-scaling rate of electronic devices alive for more than half a century. But there is an upper bound on how many transistors a manufacturer can cram onto a chip, and that bound is imposed by quantum mechanics, more specifically by Heisenberg's uncertainty principle. A hand-wavy argument makes this fairly simple to understand: increasing the number of transistors on a chip means shrinking the transistors themselves, and shrinking a transistor too far constricts the channel through which electrons flow. Confining an electron to so little free space leaves very little uncertainty about its position in the circuit, which by the uncertainty principle forces a correspondingly large spread in its momentum. At that point an electron can simply tunnel out of the circuit components, rendering them unusable.
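The hand-wavy argument above can be put into rough numbers. The sketch below is illustrative only, not a device model: it applies the uncertainty relation Δp ≥ ħ/(2Δx) to show how the minimum momentum spread of an electron grows as its confinement channel shrinks.

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837e-31     # electron rest mass, kg

def momentum_spread(dx_m: float) -> float:
    """Minimum momentum uncertainty (kg*m/s) for confinement width dx_m."""
    return HBAR / (2 * dx_m)

# Narrower channels force a larger momentum (and velocity) spread.
for dx_nm in (100.0, 14.0, 2.0):
    dp = momentum_spread(dx_nm * 1e-9)
    dv = dp / M_E  # corresponding velocity spread, m/s
    print(f"channel {dx_nm:>5} nm -> dp >= {dp:.2e} kg*m/s, dv >= {dv:.2e} m/s")
```

Shrinking the channel from 100 nm to 2 nm multiplies the minimum momentum spread fifty-fold, which is the intuition behind electrons leaking out of ever-smaller components.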

The slowdown is in fact already visible: the pace Moore predicted is stretching out. As of 2015, Intel put the doubling period for transistor counts at 2.5 years. The transistors in Intel's latest chips already have features as small as 14 nanometers, and it is becoming harder to shrink them further in a way that is cost-effective for production. The company's chief of manufacturing said in February this year that Intel needs to switch away from silicon transistors in about four years. "The new technology will be fundamentally different," he said, before admitting that Intel doesn't yet have a successor lined up.

There are two leading candidates, technologies known as spintronics and tunnelling transistors, but they may not offer big increases in computing power, and both are far from ready for use in making processors in large volumes. Engineers and scientists are already wary that the slow period of growth and innovation Silicon Valley has experienced of late, as Moore's Law grinds to a halt, will continue until someone comes up with a radically different way of building circuits.


Google’s answer to Intel’s dominance: ‘Neven’s Law’

Meanwhile, in Silicon Valley itself, at NASA's Ames Research Center, Google has been making big strides in its Quantum Artificial Intelligence Lab, and this progress is in a new direction: a completely different paradigm of computation, quantum computation. In December 2018, scientists at Google AI ran a calculation on Google's best quantum processor and were able to reproduce the computation on a regular laptop. Then in January, they ran the same test on an improved version of the quantum chip; this time they had to use a powerful desktop computer to simulate the result. By February, there were no longer any classical computers in the building that could simulate their quantum counterparts, and the researchers had to request time on Google's enormous server network to do it. "Somewhere in February I had to make calls to say, 'Hey, we need more quota,'" said Hartmut Neven, the director of the Quantum Artificial Intelligence Lab. "We were running jobs comprised of a million processors."

What makes a Quantum Computer better?

Computers today operate in binary: every calculation they perform is expressed in 1s and 0s, and this is the base on which modern computing is built. Quantum computers instead use qubits, which can be in a superposition of 1 and 0 at the same time. Each added qubit doubles the number of states the machine can hold in superposition, so capacity grows exponentially with qubit count: a register of n qubits can represent 2^n states at once, where a classical n-bit register holds only one of those states at a time. This exponential growth, combined with the steady improvement of the processors themselves, creates an environment in which the computing power of classical computers seems to be falling behind.
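A quick way to see this scaling is to count what a classical machine needs just to write down an n-qubit state: a vector of 2^n complex amplitudes. A minimal sketch, illustrative and not tied to any real chip:

```python
def statevector_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe an n-qubit state."""
    return 2 ** n_qubits

# Simulation memory doubles with every added qubit
# (assuming 16 bytes per complex amplitude).
for n in (1, 4, 10, 30):
    amps = statevector_size(n)
    print(f"{n:>2} qubits -> {amps:,} amplitudes (~{amps * 16:,} bytes)")
```

At around 30 qubits the state vector already needs on the order of 17 gigabytes, which helps explain why Google's laptop, then a desktop, then a server farm were successively outgrown.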

What Google discovered had never been seen before. It had been theorized that quantum computers could exponentially outperform their classical counterparts on certain operations. But in Google's Quantum AI lab, the quantum chip was recording an improvement in speed that was "doubly exponential": where the speed of the quantum computer was expected to grow with each added qubit like the sequence 2^1, 2^2, 2^3, 2^4, it was instead growing like the sequence 2^(2^1), 2^(2^2), 2^(2^3), 2^(2^4). Researchers at Google gave this growth the name "Neven's Law", after Hartmut Neven.
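The two sequences above can be tabulated directly to see how quickly they diverge; a small illustrative sketch:

```python
def single_exp(k: int) -> int:
    """Ordinary exponential growth, 2^k (Moore-style doubling)."""
    return 2 ** k

def double_exp(k: int) -> int:
    """Doubly exponential growth, 2^(2^k) (the Neven's Law pattern)."""
    return 2 ** (2 ** k)

for k in range(1, 6):
    print(f"k={k}: 2^k = {single_exp(k):>2}   2^(2^k) = {double_exp(k):,}")
```

By k = 5 the ordinary exponential term is only 32, while the doubly exponential term has already passed four billion.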

This is a significant breakthrough in computational power, because such speeds have never been realized in earlier systems, and almost nothing else in nature follows a doubly exponential growth rate. One might ask where the second exponential comes from, since quantum computers are theorized to gain only a single exponential in processing speed with each added qubit: the extra exponential lies in the rapid improvement in the quality of the quantum chips Google is using.

Another extraordinary aspect of this growth in pace is that it is not restricted to specific tasks and operations; rather, it applies almost universally. This could translate into what experts tout as "Quantum Supremacy" over classical machines, and Google may well be the closest humanity has ever been to that goal. In fact, experts at Google claim that quantum supremacy might be achieved by the end of this year.

Experts have long doubted whether the pace of technological advancement the world saw after the 1950s could continue much beyond the early part of the 21st century. While classical computers are certainly past the breezy period of growth laid down by Moore's Law, they will keep improving in newer ways, and the improvement brought by Google's quantum chips is, in a sense, a continuation of Moore's Law. Even if humanity reaches the ceiling (rather, the floor!) for minimum silicon device sizes, the death of Moore's Law would be more than compensated for by the immense increase in performance promised by Neven's Law, if quantum computers become a reality outside the laboratory anytime soon. This is proof enough that Silicon Valley is not going to run dry on ideas and innovation anytime soon.