Robots don't have to be very intelligent to be intelligent enough. If a robot can follow simple orders and do the housework, or run simple machines in a cut-and-dried, repetitive way, we would be perfectly satisfied.
Constructing a robot is hard because, if it is to have a vaguely human shape, a very compact computer must fit inside its skull; and making a computer as complex as the human brain, yet as compact, is harder still.
But robots aside, why bother making a computer that compact? The units that make up a computer have been getting smaller and smaller, to be sure: from vacuum tubes to transistors to tiny integrated circuits and silicon chips. Suppose that, in addition to making the units smaller, we also make the whole structure bigger.
A brain that gets too large would eventually begin to lose efficiency because nerve impulses don't travel very quickly. Even the speediest nerve impulses travel at only about 3.75 miles a minute. A nerve impulse can flash from one end of the brain to the other in one four-hundred-fortieth of a second, but a brain 9 miles long, if we could imagine one, would require 2.4 minutes for a nerve impulse to travel its length. The added complexity made possible by the enormous size would be wasted simply because of the long wait for information to be moved and processed within it.
Computers, however, use electric impulses that travel at more than 11 million miles per minute. A computer 400 miles wide would still flash electric impulses from end to end in about one four-hundred-fortieth of a second. In that respect, at least, a computer of that asteroidal size could still process information as quickly as the human brain could.
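The arithmetic behind both figures is just distance divided by speed, and it is easy to verify. Here is a quick Python sketch of that check; the constants restate the speeds quoted above, while the function name and the code itself are mine, added only for illustration.

```python
# A back-of-the-envelope check of the signal-delay arithmetic above.
# The speeds are the essay's own figures; the sizes are its hypotheticals.

NERVE_SPEED = 3.75            # fastest nerve impulse, in miles per minute
ELECTRIC_SPEED = 11_000_000   # electric impulse, in miles per minute (near light speed)

def crossing_time(distance_miles: float, speed_miles_per_minute: float) -> float:
    """Seconds for a signal to cross `distance_miles` at the given speed."""
    return distance_miles / speed_miles_per_minute * 60.0

print(crossing_time(9, NERVE_SPEED) / 60)   # a 9-mile brain: 2.4 minutes
print(crossing_time(400, ELECTRIC_SPEED))   # a 400-mile computer: ~0.0022 seconds,
                                            # roughly one four-hundred-fortieth of a second
```

Run it and the 400-mile computer comes out nearer one four-hundred-sixtieth of a second, which rounds comfortably to the essay's "about one four-hundred-fortieth."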
If, therefore, we imagine computers being manufactured with finer and finer components, more and more intricately interrelated, and also imagine those same computers becoming larger and larger, might it not be that the computers would eventually become capable of doing all the things a human brain can do?
Is there a theoretical limit to how intelligent a computer can become?
I've never heard of any. It seems to me that each time we learn to pack more complexity into a given volume, the computer can do more. Each time we make a computer larger, while keeping each portion as densely complex as before, the computer can do more.
Eventually, if we learn how to make a computer sufficiently complex and sufficiently large, why should it not achieve a human intelligence?