Back from lunch with 70% battery, so I'm going to return to live-blogging.
First up is Marvin Minsky, who is of course a god in computer science. I'm very intrigued to know what he has to say on these subjects.
He thinks we're going to run out of workers due to increasing lifespan and low fertility! Don't think he's looked at the data on the employment/population ratio, which shows the reverse. This is his case for the need for smart robots, and it's based on a total lack of knowledge of the relevant economic statistics!
Smart robots have not made a whole lot of progress since the 1970s, in his view. Humans are resourceful and complex (extended example on object recognition in vision) and it's been hard to duplicate that.
AI has split into different approaches that work on particular subfields. In the sixties/seventies, quite a lot of progress was made on algorithms that could solve basic math problems (high school/college algebra/calculus). So it seemed easy to solve highly technical problems. But it's proven extremely difficult to get computers to solve "common sense" problems that even young children can handle.
Minsky believes that understanding of the brain is very limited at the large scale. Know a great deal about individual neurons, a little about connections between neurons, and almost nothing about how large assemblies of neurons collaborate to solve problems. Know that different brain regions do different things, but very little about how they are done.
Shouts out to Freud for an early theory of learning as the strengthening of neural connections, fifty years ahead of its time. Also draws on Freudian ideas of inner critics. Wants to see more money diverted to top-down analysis of thinking/the brain, versus bottom-up neuroscience, in order to make better progress on AI.
"My prediction for the 2040s is that this will happen more slowly than most of us think, but that eventually it will happen".
Next up are Roger Penrose and Stuart Hameroff. They are famously AI sceptics and have a completely different theory from most people's of how the brain and consciousness work.
Hameroff is talking in person, Penrose by video. Hameroff's life is devoted to the question "What is consciousness?" Defines consciousness as subjective awareness - its objective correlates are not scientifically known. Eastern view: consciousness pervades the universe. Western view: the brain produces consciousness as some kind of excitation of neurons in a network of connections.
Outlining his microtubule theory of consciousness - too complex to liveblog but pretty interesting stuff - more plausible on first exposure than I would have guessed.
Means the brain does 10^27 operations per second instead of 10^16. Puts the singularity back a good ways (if you think the latter is tied to mere computation speed, anyway).
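Quick back-of-envelope on that claim (my own sketch, not from the talk): if you take the singularity to be purely a matter of raw computation and assume hardware keeps doubling roughly every two years, the jump from 10^16 to 10^27 ops/s implies a delay of something like seventy years. A minimal sketch under those assumptions:

```python
# Back-of-envelope sketch (my own, not from the talk): how far does the jump
# from 1e16 to 1e27 ops/s push back a purely computation-based singularity
# estimate, assuming hardware performance doubles every ~2 years?
import math

neuron_level_estimate = 1e16   # ops/s, synapse-counting estimate
microtubule_estimate  = 1e27   # ops/s, Hameroff's figure
doubling_period_years = 2.0    # assumed Moore's Law doubling time

extra_doublings = math.log2(microtubule_estimate / neuron_level_estimate)
delay_years = extra_doublings * doubling_period_years
print(f"Extra doublings needed: {extra_doublings:.1f}")    # ~36.5
print(f"Implied delay: roughly {delay_years:.0f} years")   # ~73
```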
Now Penrose up - big gap in quantum mechanics where the Schrödinger equation and making a measurement are both foundational principles, but inconsistent with each other (apologies to the non-physicists here). Obvious place to look for consciousness. Schrödinger's cat - why don't we see superpositions? Articulating a new theory of quantum mechanics/consciousness here - I'm not going to follow this without a lot more study. The rest of the audience must be completely lost (assuming there aren't too many physics PhDs here). Whoa - their theory of consciousness can somehow accommodate it occurring without being tied to the brain, and so can accommodate out-of-body experiences etc.
These guys are either crazy or geniuses (or both).
Now Alexander Panov, a Russian physicist. Outlining the idea of the technological singularity. Kurzweil's prediction of the date of the singularity is based on comparing the computational capacity of brains and computers, with the brain approximated as the number of synapses times a switching frequency of about 100 Hz, and computer power extrapolated using Moore's Law.
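For reference, here's the arithmetic behind that kind of estimate as I understand it. The synapse count, switching rate, current hardware figure, and doubling time below are commonly quoted ballpark numbers plus my own assumptions, not figures Panov gave:

```python
# Rough sketch of the Kurzweil-style estimate Panov describes.
# All numbers are assumed ballpark figures, not from the talk.
import math

synapses       = 1e14   # assumed ~100 trillion synapses in a human brain
switching_rate = 100    # Hz, assumed switching frequency per synapse

brain_ops_per_sec = synapses * switching_rate
print(f"Estimated brain capacity: {brain_ops_per_sec:.0e} ops/s")   # ~1e16

# When would hardware get there, if Moore's Law holds? (Starting point and
# doubling time are also assumptions, purely for illustration.)
current_ops_per_sec = 1e13   # assumed fast machine today
doubling_period     = 2.0    # years per doubling

doublings_needed = math.log2(brain_ops_per_sec / current_ops_per_sec)
print(f"Hardware parity in roughly {doublings_needed * doubling_period:.0f} years")
```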
Points out that software progress is not tied all that closely to hardware progress. Computer translation sucked in the eighties and sucks now, despite a million-fold increase in computer power. No indications that we understand how to program a strong AI. Claims the problem of simulating the nervous system of the simple worm C. elegans (with only a few hundred neurons) is unsolved.
Also, individual neurons have been shown to be stateful and to learn - so the brain is more than just the neuronal interactions. More discussion of the possibility of quantum effects in the brain.
Martine Rothblatt
"The Purpose of Biotechnology Is the End of Death". Talking about mindclones - talking about duplication of consciousness, thinks "Humans will have no trouble getting used to being in two places at the same time". Speculating about "mindware" - software that can interact with a "mindfile" - an uploaded consciousness. Yawn - not happening soon. Very glib shallow defense of the idea that it's inevitable that we will be able to upload minds (given we still have no real idea how the brain/mind works, I don't see how this can possibly be certain).
Problem of mindclone civil rights. Does a mindclone have rights while the biological original still exists? Afterwards? "The cause célèbre of the 21st century".
Are you legally responsible for the actions of your mindclones?
What about mindclone procreation? Do combinations of mindclones that have never been biologically alive have rights?
Anders Sandberg
Ethics of mind uploading. Analogy with animals - what moral consideration do they deserve? Long philosophical discussion of rights of software.
Having said this, it's clear there would be a legal/practical minefield if we could ever upload minds. Who owns the upload? What happens if multiple uploads are running - can they all vote? What happens if somebody runs bootleg copies of you?