AI researchers in the 80s ran into a problem: the more their systems knew, the slower they ran. That's backwards from people, who tend to get faster (and better in other ways) at whatever it is they're doing the more they learn.
The solution, of course, is: duh, the brain doesn't work like a von Neumann model with an active processor and passive memory. It has, in a simplified sense, a processor per fact, one per memory. If I hold up an object and ask you what it is, you don't calculate some canonicalization of it as a key into an indexed database. You compare it simultaneously to everything you've ever seen (and still remember). Oh, yeah, that's that potted aspidistra that Aunt Suzie keeps in her front hallway, with the burn mark from the time she ...
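Here's a toy sketch of that "processor per memory" picture, just to make the contrast with an indexed lookup concrete. The feature vectors, the similarity measure, and all the names are my own illustrative choices, not a claim about how the brain actually encodes anything; the point is only that recognition is one big simultaneous comparison, not a key lookup.

```python
# Compare a sensed object against every stored memory at once and keep the
# best match -- brute-force parallel matching, no canonicalized key, no index.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each remembered concept is a 1,000-dimensional feature vector.
N_MEMORIES, DIM = 10_000, 1_000
memories = rng.standard_normal((N_MEMORIES, DIM)).astype(np.float32)
labels = [f"concept_{i}" for i in range(N_MEMORIES)]

def recall(sensed: np.ndarray) -> str:
    """Match the sensed vector against all memories simultaneously
    (one matrix-vector product) and return the closest one."""
    sims = memories @ sensed / (
        np.linalg.norm(memories, axis=1) * np.linalg.norm(sensed) + 1e-9
    )
    return labels[int(np.argmax(sims))]

# "What is this?" -- even a noisy view of memory #42 still matches it.
sensed = memories[42] + 0.3 * rng.standard_normal(DIM).astype(np.float32)
print(recall(sensed))   # -> concept_42
```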
The processing power necessary to do that kind of parallel matching is high, but not higher than the kind of processing power that we already know the brain has. It's also not higher than the processing power we expect to be able to throw at the problem by 2020 or so. Suppose it takes a million ops to compare a sensed object to a memory. That's 10 MIPS per concept to do it in a tenth of a second. A modern workstation with 10 gigaops could handle 1000 concepts. A GPGPU with a teraops could handle 100K, which is still probably in the hypohuman range. By 2020, a same-priced GPGPU could do 10M concepts, which is right smack in the human range by my best estimate.
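The arithmetic, spelled out. The only number I've had to read between the lines is the 2020 budget: 10M concepts at 10 MIPS each implies roughly 100 teraops, which is just the factor-of-100 growth over today's teraops GPGPU that the estimate assumes.

```python
# Back-of-envelope concept capacity at different compute budgets.
OPS_PER_COMPARISON = 1e6     # assumed cost of matching one memory
RECOGNITION_TIME = 0.1       # seconds -- a tenth of a second
ops_per_sec_per_concept = OPS_PER_COMPARISON / RECOGNITION_TIME  # 10 MIPS

budgets = [
    ("modern workstation (10 gigaops)", 10e9),
    ("GPGPU (1 teraops)", 1e12),
    ("~2020 GPGPU (assumed ~100 teraops)", 100e12),
]
for name, ops_per_sec in budgets:
    concepts = ops_per_sec / ops_per_sec_per_concept
    print(f"{name}: ~{concepts:,.0f} concepts")
# -> 1,000 / 100,000 / 10,000,000 concepts, matching the estimates above.
```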