Wednesday, May 29, 2013

A Parallel

A stray thought I had last night.  I was thinking about the compulsion we in computer science have to develop artificial intelligence.  My experience is that most of my colleagues in the field simply don't really question whether or not it's a good thing that they are working on trying to build algorithms that are smarter than we are.  Whatever our reasons are, it's by and large not because we've really thought it through with an open mind and have decided it's a great idea.  Instead, we are compelled by powerful unconscious motivations, and then try to justify it after the fact.

The analogy that occurred to me is the physicists in the first half of the twentieth century figuring out nuclear physics ("splitting the atom") and eventually developing nuclear weapons.  They remain humankind's most destructive weapons.  And yet, in a strange way, they have led to marked moral/spiritual progress on the part of the species.  They were used twice, and we've refrained from using them in anger since.  And as a result, there's been no open war between major powers since 1945.  To see how remarkable this is, here's a list of major wars in Europe - there have been wars between major powers every few decades since time immemorial.  But the prospect of nuclear war was so awful that we finally learned to stop.  At least, I hope it stays that way.

So perhaps that's the hope here.  Maybe starting to build something with the potential to tear our society apart altogether will force us to finally confront the unconscious forces that drive us to blindly innovate and grow our economy, whatever the cost.  Being a bit more conscious about where we want to go would be a good thing.

13 comments:

  1. I'm a fan of Dave Pollard's 'Sweet Spot' taxonomy, which posits there are three facets to rewarding work/purpose: passion, skill, and need.

    I have the common problem of passion and skill being far easier to perceive and nurture than market need, and so find myself trading away my excess passion to apply skill where it is more needed.

    At those times when I'm most in despair over whether I have any skill at all to apply towards our big, unsolved problems; when a hail mary seems like the only available play--the longer the throw the better--I gravitate toward working on AI/AGI.

    I have an invitation, based on previous work I've done, to build out a portion of an AI research project. The work certainly doesn't pay, and the effort is one small, probably useless piece that nevertheless is the best thing I could personally do to [try to] move AI forward.

    And moving AI forward is the closest object to the center of my passion/skill/need complicity if I choose my own weights for each component.

    But in truth I'm working in healthcare with a biotech trajectory, because there is a clearly expressed market need that goes a long way toward mitigating that part of passion that is essentially dreaming.

    It's still easy for me to imagine a future of voluntary poverty where I get significantly more evangelical about AI. Doing precisely what you say: taking unconscious motivations (i.e., assuming a huge [that is, fun] risk in a domain where I have the most skill and passion) and finding a post-hoc justification (i.e., we need a hail mary) to rationalize my decision.

    I'm reminded of your singularity > climate change > [remaining macro factors] post, which I very much personally use to support the above scenario. Climate change is more directly actionable for me; I'm a landholder in the southwestern US and there is on-ground mitigation work I'm doing. Climate change by itself will likely undo everything I'm working on here (or even could work on), which puts me in a frame of mind to be unconcerned with unintended consequences of AI. And take an offer to work on it regardless of justification.

  2. Alan - what are your reasons for believing that building AI is a good thing?

  3. Stuart - If you ask me what the nearest market pain ~AI could solve is, I would say that building a chatterbot that was good at listening to old people, such that they would feel less lonely, is plausibly a good thing. An aging population could/will reduce our collective ability to provide care, and that care might be measurably better delivered with that kind of tool-assisted support. If I believed that enough to act on it, however, I'd likely start by building sex robots, which is obviously, distressingly, and regrettably the same niche. >_> Regardless, that's the fundamental use case: we have asymmetric needs around roles we'd like to play, and AI-like things can make up for demand not met by human actors.

    My thinking otherwise is *significantly* more specious: if we can't, say, survive climate change, I might as well work on something (anything) that has any chance of helping; or, if it can't help, might itself survive without us. Any chance is better than no chance.

    I'm not in fact working on either of these things, and I wouldn't consider either of the above strong or even coherent beliefs. And I have better ways to use my time: I presume we have options for, again say, adapting to climate change that involve analysis of the actual problem and not hand-waving around magical invocations of 'AI'.

  4. I fail to understand this bullishness on artificial intelligence. Kevin Drum is even worse.

    Artificial intelligence, as a science, has made no material progress in the past 30 years.

    Artificial intelligence, as an engineering discipline, has over that time seen successes in new applications, but only on the back of extraordinary gains in processing speed, memory size and bandwidth, and disk storage.

    When you look at the prospects for further hardware advances in the pipeline, however, the picture looks increasingly grim.

    Retrospectively, successive process shrinks have already shown markedly diminishing performance returns. Even if engineers do manage to figure out how to reliably mass-produce chips on processes smaller than 14nm (which itself is very much in doubt), there is no evidence that the resulting chips will be more powerful by any useful measure. Nor is it at all clear that the price/transistor will be lower.

    That means that, unless currently unknown technologies emerge from some laboratory somewhere, we're quite likely looking at the end of processor performance increases within 18 months.

    Prospective DRAM and flash density increases are similarly constrained.

    The areal density of magnetic recording has also shown diminishing rates of growth. There appears to be more engineering headroom here than with silicon, but beyond a projected further doubling of density over the next five years, the physical, technical and engineering obstacles become daunting.

    So, if artificial intelligence really is going to take over the world, either it will have to accomplish it using the next few years of hardware performance improvements, or AI researchers are going to have to start discovering dramatically better algorithms.
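
    As a rough, hypothetical back-of-envelope sketch of the compound-growth arithmetic behind these projections (assuming a ~2-year historical doubling cadence purely for comparison; the 5-year doubling is the figure cited above):

    ```python
    # Hypothetical sketch: compound annual growth implied by a given doubling time.
    def annual_rate(doublings: float, years: float) -> float:
        """Return the compound annual growth rate implied by `doublings` over `years`."""
        return 2 ** (doublings / years) - 1

    # Assumed historical Moore's-law-style cadence: one doubling every ~2 years.
    moore_like = annual_rate(doublings=1, years=2)   # ~41% per year

    # The projection cited above: one more doubling of areal density over ~5 years.
    projected = annual_rate(doublings=1, years=5)    # ~15% per year

    print(f"~2-year doubling -> {moore_like:.0%} per year")
    print(f"~5-year doubling -> {projected:.0%} per year")
    ```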

  5. It's great that you are willing to think about these issues. I think too many people either have no grasp of the situation or are too afraid of looking like idiots to say anything.

    But it's not at all clear to me that nuclear weapons have actually had the kind of effect you cite.

    http://www.thedailybeast.com/videos/2012/05/17/would-you-get-rid-of-nuclear-weapons.html

    At this point I am also equally unsure of whether AI researchers can reasonably be compared with people developing nuclear weapons. I think if it was clear that, say, you could develop human-level AI in the next ten years, you could make that comparison. I think human-level AI would be a very dangerous development. But I'm not convinced it's that close. And we could definitely use less-than-conscious AI for benign purposes...like facilitating the colonization of space, which could contribute greatly to long-term human survival and prosperity.

    I think a better candidate for a technology that is truly nefarious would be biotechnology. I don't see it as helping us address any other extinction risks and it seems to pose the same kind of fundamental questions that AI does, except maybe sooner.

  6. A couple of things on AI.

    I don't think we need full-blown sentient AI to really make the (economic) world over. As it is now, automation in all its job-remaking, job-destroying forms has really changed things, as we've already seen in the ever-lowering percentage of people working (men anyhow, and female employment seems to have plateaued as well).

    The constant march of technology, never mind full-blown AI, has been at work in everything from building sidings that are more durable, to cars that are virtually maintenance-free, to automatic toll collection on roads such as the Massachusetts Turnpike (announced earlier this year). Even if computing power does plateau, that holds up only one part of the technological revolution that has been eliminating the less-than-totally-creative jobs for some time now.

    Automobiles have remade this country and its landscape. Even though cars pretty much had gone as far as they could in terms of speed and comfort by the late 1960s, they continued to remake the geographical landscape for perhaps three decades afterwards. Only now are we seeing the effects of cars "peaking" as young people start moving away from car ownership. That is, there was quite a momentum to what cars were doing to our economy and society that took some time to play out. Likewise, even if AI basically hits some kind of speed limit in the near future, it will be years before even the present level of AI capability is fully absorbed and assimilated into the economy and society.

    On the comparison of AI to nuclear weapons, I agree that, looking at the question from the point of view of the developers of each technology, there are similarities. But the use of nuclear weapons produced tremendous awe: tens of thousands of burned, charred, and mangled bodies were immediately thrust upon the world. With advancing automation, technology, and AI, however, things are advancing in a more stealthy, quiet manner that I am not sure will provide the "Ah, Ha! OMG" moment to wake everybody up. It's that old, overused image of the frog being put in a cool pot of water and then set over the heat, as opposed to the use of nuclear weapons, which had no slow-cooking development time to enable complacency. Although I certainly grant you all that, regarding AI, things could change.

  7. It's a pretty common sentiment, but I don't believe that you have to have actual Artificial Intelligence to change everything.

    I kind of think it is possible, and is coming, just not on Kurzweil's timetable.

    But leaving aside an electronic system that can state "I think, therefore I am," and mean it, we can replace numerous skilled and unskilled workers with computer apps.

    From picking fruit to reading scans of the human body for all sorts of maladies. Truck drivers, pilots - the sky is the limit.

    Plus we can set up a surveillance state that would have boggled George Orwell's mind.

    And we can do it dirt cheap.

  8. What limit would the availability of AI lift? I don't think we'd be able to do anything fundamentally more, at least not with simple AI at first.

    What will inevitably happen, though, is a sharp increase in unemployment. A lot of jobs that couldn't be automated before will be. As a result you would get massive protests, because there really aren't many remaining purposes for which people can sell their labor. Consequently, the idea that wealth is divided by work (and its corollaries: that the poor are lazy, the rich are productive, and working hard is rewarded) will become so thoroughly discredited that we're in for some major social upheaval.

  9. I submit that there is no real thinking-out to be done beforehand.

    When one is at the birth of a technology or science, one doesn't really understand it yet, so every thought one has about whether or not it's a good thing is just speculation.

    There's no way to evaluate speculations, so no way to arrive at any conclusion.

    Plus, just about everything that has ever been done has a negative as well as a positive potential. How could you possibly know how history is going to play out on such a broad scale?

    Even in retrospect, it is almost impossible to say whether having made a discovery or created a new technology was good or not. Was the electrification of the world a good thing or bad? Was the advent of the railroads? From one angle, they certainly seem like good things. But if we include the knowledge of the amount of fossil fuel we had to burn... Railroads led to massive industry, which required massive power. Electricity has to be generated on a massive scale to power the world.

    If humankind warms the global climate and in the process makes human life miserable or impossible on a large scale... were those things good after all?

    But no one can know the outcome ahead of time.

    To believe we must, or even can, think out these things in advance is just folly. Rather, we do new things. Pay attention. Alter our behavior before things get too much out of whack.

  10. To clarify the reasoning in my previous post, I think that space exploration is especially important not because of the usual threats that are cited alongside it, like asteroids, but rather because of bioengineered plague or nanotechnology. The threat from asteroids is real but could take millions of years to play out, so it is not particularly important when ranked with these other dangers. The sooner we develop autarkic habitations off planet, the greater, I think, the likelihood of long-term survival. I do think "weak" AI and robotics could help with that.

    Even weak AI could indeed make the world over. But weak AI could not end our world, which strong AI could easily do. Weak AI might lead to different social structures. But sentient AI is a different ball game.

    Regarding what Stephen B. said about an "OMG" moment, I think this is a great point. And this is why I actually would not support banning even research like Europe's Human Brain Project that is explicitly devoted to creating sentient AI. I think sentient AI would be a disaster, but such a ban would be ineffective. I think the only real way to avoid AI, assuming AI is possible, is shutting down Moore's law entirely. That's not going to happen without an "OMG" moment. So I would rather projects like HBP succeed and hopefully provide that moment early enough, rather than ban them and witness a slow, stupefied march toward oblivion.

  11. When we create an AI that can (at consumer pricing) sort, fold, iron and put away random baskets of clothes, then AI will have progressed to the point where I'm impressed. Until then, I wouldn't worry too much.

  12. I think rather than restate my views in detail in response to some of these comments I will just draw attention (for anyone interested) to this post where I gave my initial take on the potential issues with AI: http://earlywarn.blogspot.com/2010/05/singularity-climate-change-peak-oil.html

    My views have not changed (to first order anyway) since.

  13. The first thing needed in computer science and "artificial intelligence" would probably be basic scientific honesty.

    But that would require renaming "research in AI" to "research of AI".

    Indeed, the overall "vulgarity" of the field.
