A few notes on the evolving progress toward a singularity of some kind. I see automated driving as a next step in that direction, one that will play out over the next decade or two. A few items this morning tracking the trend.
First up, Volkswagen has a "production-like" research prototype for a "Temporary Auto Pilot" that can take over driving at the driver's convenience (e.g., in stop-and-go traffic, or when the driver is distracted). This was done as part of a major EU research initiative spending 28 million euros on developing automated driving.
Second, the NYT Wheels blog reports that the Nevada state assembly has passed a bill authorizing the transportation department to draft regulations governing automated driving on the road.
So automated driving is coming along about as one might expect, and the research-industrial complex is driving hard toward this goal, so to speak.
On another front, Megan McArdle posted this fascinating graph, which shows the number of new pharmaceutical molecules successfully brought to market per billion of R&D spend:
As you can see, productivity in this area has been dropping very fast - it's getting harder and harder to come up with worthwhile new drugs. It looks to me like the spend per molecule increases by a factor of ten roughly every thirty years - about 8% per year. That's much faster than salary growth alone, so most of the increase reflects declining productivity.
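As a sanity check on that rate (my own back-of-the-envelope arithmetic, not a figure from the chart itself): a tenfold cost increase over thirty years implies a compound annual growth rate r satisfying (1 + r)^30 = 10, which does indeed come out to about 8%:

```python
# Back-of-the-envelope check: if cost per molecule grows tenfold in 30 years,
# the implied compound annual growth rate r satisfies (1 + r)**30 == 10,
# so r = 10**(1/30) - 1.
annual_growth = 10 ** (1 / 30) - 1
print(f"implied annual growth: {annual_growth:.1%}")  # about 8.0% per year
```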
This makes a similar point to the one I made in Moore's Law vs the Flynn effect. Apologists for proceeding as rapidly as possible to a singularity like to claim that there's nothing to worry about, because we'll use all this fantastic AI to integrate with and augment human intelligence, making being human ever more fun and rewarding. But whenever you look at actual trends in making humans better/healthier/smarter, you see very modest progress and/or diminishing returns, while the progress of the machines is much faster. To me, that suggests the main symptom of the approach to the singularity will be rendering a larger and larger fraction of the human population unemployable. And that's been going on for a few decades now:
Finally, yesterday, Jamais Cascio had a very weak argument for why there's nothing to worry about:
Our technologies are not going to rob us (or relieve us) of our humanity. Our technologies are part of what makes us human, and are the clear expression of our uniquely human minds. They both manifest and enable human culture; we co-evolve with them, and have done so for hundreds of thousands of years. The technologies of the future will make us neither inhuman nor posthuman, no matter how much they change our sense of place and identity.

and
Technology is part of who we are. What both critics and cheerleaders of technological evolution miss is something both subtle and important: our technologies will, as they always have, make us who we are—make us human. The definition of Human is no more fixed by our ancestors’ first use of tools, than it is by using a mouse to control a computer. What it means to be Human is flexible, and we change it every day by changing our technology. And it is this, more than the demands for abandonment or the invocations of a secular nirvana, that will give us enormous challenges in the years to come.

Essentially, his argument comes down to saying that since nothing really terrible has happened due to our use of technology yet, nothing really terrible can happen in the future either. The Sumerians might beg to disagree, if only their civilization hadn't collapsed after salting their fields with new-fangled irrigation technology. More importantly, creating intelligence that can duplicate more and more of humans' mental capabilities is fundamentally different from all prior technological progress. Why? Because once that is accomplished, we have nothing left to offer the economy in the way of productivity. It won't need us. I have no idea what will happen as a result, but I wonder if we'll just go crazy before we get there.