I’m not sure that he is predicting the failure of parallel computing. The context of the remarks seems narrow, and limited to quite specific situations (for example, using “locks” at all is a very bad way to try to deal with race conditions, etc.).
I just did a Google search that got back to me in about 1/4 second after considering millions of items of indexed information spread around the world (this seems like parallelism working to me!)
But the simplest way to ponder this, both philosophically and pragmatically, is to note that biological neurons have a cycle time of about 5 milliseconds, yet we can do quite a bit of thinking and decision making in about 1/3 to 1/2 second. So: “a lot” can be computed by our brains in roughly 65 to 100 clicks.
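A quick back-of-the-envelope check of that arithmetic, as a sketch in Python using only the figures above:

```python
# Back-of-the-envelope: how many neural "clicks" fit in a quick thought?
# The 5 ms cycle time and 1/3-1/2 s thinking window are the figures from the text.
neuron_cycle_s = 0.005              # ~5 ms per neural cycle
for window_s in (1 / 3, 1 / 2):     # time for a quick thought or decision
    print(f"{window_s:.2f} s -> {window_s / neuron_cycle_s:.0f} clicks")
# ~67 and 100 clicks: almost no serial depth, so whatever the brain is
# computing in that window has to be done massively in parallel.
```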
Real-time brain scans of metabolism while thinking reveal that hefty percentages of the 86 billion neurons in our brains are doing something related to the thinking task.
Also, if you are familiar with molecular biology (and computer people should take the trouble to learn how all this works), each one of the 10 trillion or so cells in our body carries out billions of parallel pattern matches and actions. Some of these happen as rapidly as 1 microsecond. These converge to produce all the life-cycle functions of each cell, including making more cells. The cells themselves started as a single fertilized ovum, and 45 or so cell divisions later, a baby was produced (it’s worth pondering the difference between this and the log to the base 2 of 10 trillion … what do you think those extra cell divisions were used for?).
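The numbers in that parenthetical are easy to check; a sketch, again using only the figures from the text:

```python
import math

cells = 10e12  # ~10 trillion cells in a human body
print(f"log2(10 trillion) = {math.log2(cells):.1f} doublings")  # ~43.2
print(f"2**45             = {2**45:.2e} cells")                 # ~3.5e13
# Pure doubling would need only ~43 divisions, and 45 rounds would yield
# roughly 3.5x more cells than the body holds; that surplus is the gap
# the question above invites you to ponder.
```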
This sounds to me like “parallel computing” does work, and on scales that most computer people don’t think much about!
Perhaps the poor job done in traditional HW and SW on architecture in general, and parallelism in particular, is more reflective of the abilities and predilections of most of the computerists working in those areas.
One way to learn some really interesting things is to get an FPGA plug-in box for your computer and start making highly parallel architectures with it. It’s a lot easier all around when you (a) think parallel from the get-go, and (b) realize that many difficulties with parallel computing are actually due to the traditional von Neumann architectures, which separate memories from processing. You can easily commingle these in an FPGA; see the sketch below.
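This is not FPGA code, but as a software sketch of the “memory commingled with processing” idea, here is a toy lockstep array in Python: each cell holds both its own state (memory) and its update rule (processing), and the whole array advances on every “clock,” the way parallel hardware on an FPGA would. The rule and the initial values are invented purely for illustration.

```python
# A software sketch (not FPGA code) of memory comingled with processing:
# a 1-D ring of cells, each with local state, updated synchronously from
# its neighbors on every "clock tick" rather than fetched one word at a
# time from a separate memory by a single processor.

def step(cells):
    """One synchronous tick: every cell computes its next state at once."""
    n = len(cells)
    return [(cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n]) % 2
            for i in range(n)]

state = [0, 0, 0, 1, 0, 0, 0, 0]   # illustrative initial contents
for tick in range(4):
    print(tick, state)
    state = step(state)
```

On an FPGA all of the cells really would update in the same clock cycle; the Python list comprehension only simulates that lockstep behavior serially.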