Wednesday, June 16, 2010

Computers Now Competitive at Jeopardy?

Interesting piece in the NYT:
“Toured the Burj in this U.A.E. city. They say it’s the tallest tower in the world; looked over the ledge and lost my lunch.”

This is the quintessential sort of clue you hear on the TV game show “Jeopardy!” It’s witty (the clue’s category is “Postcards From the Edge”), demands a large store of trivia and requires contestants to make confident, split-second decisions. This particular clue appeared in a mock version of the game in December, held in Hawthorne, N.Y., at one of I.B.M.’s research labs. Two contestants — Dorothy Gilmartin, a health teacher with her hair tied back in a ponytail, and Alison Kolani, a copy editor — furrowed their brows in concentration. Who would be the first to answer?

Neither, as it turned out. Both were beaten to the buzzer by the third combatant: Watson, a supercomputer.

For the last three years, I.B.M. scientists have been developing what they expect will be the world’s most advanced “question answering” machine, able to understand a question posed in everyday human elocution — “natural language,” as computer scientists call it — and respond with a precise, factual answer. In other words, it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.

With Watson, I.B.M. claims it has cracked the problem — and aims to prove as much on national TV. The producers of “Jeopardy!” have agreed to pit Watson against some of the game’s best former players as early as this fall. To test Watson’s capabilities against actual humans, I.B.M.’s scientists began holding live matches last winter.
Any of you AI sceptics feeling at least a little chill of obsolescence here?
Another key excerpt, one that ties this into Moore's Law:
The great shift in artificial intelligence began in the last 10 years, when computer scientists began using statistics to analyze huge piles of documents, like books and news stories. They wrote algorithms that could take any subject and automatically learn what types of words are, statistically speaking, most (and least) associated with it. Using this method, you could put hundreds of articles and books and movie reviews discussing Sherlock Holmes into the computer, and it would calculate that the words “deerstalker hat” and “Professor Moriarty” and “opium” are frequently correlated with one another, but not with, say, the Super Bowl. So at that point you could present the computer with a question that didn’t mention Sherlock Holmes by name, but if the machine detected certain associated words, it could conclude that Holmes was the probable subject — and it could also identify hundreds of other concepts and words that weren’t present but that were likely to be related to Holmes, like “Baker Street” and “chemistry.”

In theory, this sort of statistical computation has been possible for decades, but it was impractical. Computers weren’t fast enough, memory wasn’t expansive enough and in any case there was no easy way to put millions of documents into a computer. All that changed in the early ’00s. Computer power became drastically cheaper, and the amount of online text exploded as millions of people wrote blogs and wikis about anything and everything; news organizations and academic journals also began putting all their works in digital format.
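
To make that concrete, here is a toy sketch (mine, nothing to do with IBM's actual system) of the kind of co-occurrence counting the article describes. The little "corpus", the terms, and the PMI scoring are all invented purely for illustration:

    # Count how often terms appear together across a handful of documents,
    # then score their association with pointwise mutual information (PMI).
    from collections import Counter
    from itertools import combinations
    import math

    docs = [
        "sherlock holmes wore a deerstalker hat on baker street",
        "professor moriarty confronted holmes over the opium case",
        "holmes used chemistry on baker street to crack the case",
        "the super bowl halftime show drew a record television audience",
    ]

    term_df = Counter()   # number of documents each term appears in
    pair_df = Counter()   # number of documents each unordered term pair shares
    for doc in docs:
        terms = set(doc.split())
        term_df.update(terms)
        pair_df.update(combinations(sorted(terms), 2))

    def pmi(a, b, n=len(docs)):
        """Positive when two terms share documents more often than chance."""
        joint = pair_df[tuple(sorted((a, b)))] / n
        if joint == 0:
            return float("-inf")
        return math.log2(joint / ((term_df[a] / n) * (term_df[b] / n)))

    print(pmi("holmes", "street"))   # positive: strongly associated
    print(pmi("holmes", "bowl"))     # -inf: never co-occur in this toy corpus

Watson's real pipeline is obviously far more elaborate than this, but it gives the flavor of the statistical association the article is describing.
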
(Of course, in a great personal moment of irony, I'm sitting here writing this during lunch while keeping an eye on a test of my own latest-greatest statistical computer algorithm, which is busy searching for a handful of malicious events in bazillions of network packets on a high-speed network.)

Oh, and:
“I want to create a medical version of this,” he adds. “A Watson M.D., if you will.” He imagines a hospital feeding Watson every new medical paper in existence, then having it answer questions during split-second emergency-room crises. “The problem right now is the procedures, the new procedures, the new medicines, the new capability is being generated faster than physicians can absorb on the front lines and it can be deployed.” He also envisions using Watson to produce virtual call centers, where the computer would talk directly to the customer and generally be the first line of defense, because, “as you’ve seen, this thing can answer a question faster and more accurately than most human beings.”

“I want to create something that I can take into every other retail industry, in the transportation industry, you name it, the banking industry,” Kelly goes on to say. “Any place where time is critical and you need to get advanced state-of-the-art information to the front of decision-makers. Computers need to go from just being back-office calculating machines to improving the intelligence of people making decisions.” At first, a Watson system could cost several million dollars, because it needs to run on at least one $1 million I.B.M. server. But Kelly predicts that within 10 years an artificial brain like Watson could run on a much cheaper server, affordable by any small firm, and a few years after that, on a laptop.
All your jobs are belong to IBM...

21 comments:

  1. An expensive parlour trick, designed to game a game. And brittle as heck.

    Can I design the questions please please please?! (And the categories!) :-)

    A step up from Google, though.

    Ferrucci says his team will continue to fine-tune Watson, but improving its performance is getting harder. “When we first started, we’d add a new algorithm and it would improve the performance by 10 percent, 15 percent,” he says. “Now it’ll be like half a percent is a good improvement.”

    That's because it doesn't actually know anything. At some point the illusion becomes very hard to maintain.

  2. DM:

    I think you're profoundly wrong about the significance of this.

    It's 13 years since computers became better at chess than even the best human players. Now here we have them close to being the best at a task that is much less computer-like and much more human-like - this requires natural language parsing and interpretation and associative lookup across large domains of knowledge.

    I agree that it's not general human reasoning yet, but it's a hell of a big step closer. And in the meantime, it will be plenty to drive us another notch down in the E/P ratio.

    Wonder what the discussion of this issue inside Google looks like?

  3. In a way, I think the real point is not really when we reach general computer intelligence. The point is that computers will become better at one specialized task after another, and with each one, that particular niche in the economy is gone for people, and the folks who might have benefited from it will have to turn to a less good option for them.

  4. We may not disagree as much on the economic significance. This kind of thing will prove very useful & I really really want one!!

    I agree that it's not general human reasoning yet, but it's a hell of a big step closer.

    There's the rub. I don't think Watson is 'reasoning' at all like a mammal does (or getting closer). At its core, the human mind isn't doing statistical analysis of symbols. Instead we seem to have complex models of reality in our minds... some of it essentially a priori (observable in newborn human infants or even in puppies). Symbolic language is definitely not required. When an AI system starts with such, we're being had.

    Reasoning is also very emotional.

    Your experience when reading Kurzweil is a more extreme example of this point, I would venture. Very complex internal representations were perturbed and new models emerged and dominated. I've had similar experiences in different contexts. On a small scale, it happens every day.

    Watson's usefulness comes from its ability to search a virtual space created when humans rendered part of the real world into a vast conceptual apparatus tagged by symbols. It games the symbols with statistical associations & some expert-system rules at high speed.

  5. DM:

    My hypothesis would be that the human mind is doing statistical analysis to recognize patterns, and then labelling the patterns with symbols. It's then reusing older circuitry for manipulating representations of concrete entities to do abstract reasoning (via metaphor, a la George Lakoff).

    There's no reason computers can't eventually be gotten to do something similar. (I mean, at some philosophical level people may always argue it's different, but if it allows the computer to fulfill the same economic functions...)

  6. Duck-typed intelligence for the win? There will always be people who argue for a distinction without a difference. Eventually they'll be ignored.

    But AI is beside the point; it's not needed for the economic singularity.

    "The point is that computers will become better at one specialized task after another, and with each one, that particular niche in the economy is gone for people, and the folks who might have benefited from it will have to turn to a less good option for them."

    This is exactly the point. Expert systems are enough -- a system that can do medicine can also do law, accounting and business admin, and engineering. And for everyone else, it's the perfect PA. Labour productivity is about to accelerate much faster than any economy can grow.

    Combine this statistical pattern analysis with machine vision (here) and with animatronic-style robotics (near), and you have a new general-purpose technology that can move into maybe 80% of all economic niches.

    Once it's cheaper than a human working for two dollars a day, humans can never compete.

  7. So, how soon can Watson plug the leak in the Gulf of Mexico?

  8. I think you may turn out to be underestimating the importance of emotion and feeling in many situations, occupations and skill sets.

    However, even ignoring that, there are legitimate questions as to whether future energy supplies will be available, at non-economy-killing price and volatility, to actually run these wonderful machines. Things have to be made someplace, and shipped someplace, and sold/delivered someplace, and, hopefully, used someplace. All of this requires energy. Currently, it is "economic" to replace human labor with machine labor. In a future world, it may turn out to be more economical to do precisely the opposite.

    All of the "innovation" to date has taken place in a world of increasingly available, inexpensive energy with a relatively stable price structure. However, in a possible future with increasingly scarce energy whose price is increasingly volatile and expensive, innovation of this type may not be practical.

    Future innovation may have less to do with perfecting artificial intelligence of energy-hungry machines, and more to do with effectively utilizing localized, low energy systems in order to maintain or improve the quality of life of food-hungry human beings.

    Or, you might be right and a technological singularity of one kind or another may make all such debates about humans moot.

    Personally, I'd rather see us working toward a better system for living without energy-hungry machines than spending more energy making machines capable of thinking about living without humans. FWIW.

    Brian

  9. I recall years ago reading about Alan Turing and the Turing Test. Is it still relevant?

  10. Well, the Turing test isn't any less relevant than it was a few years ago, but there's not really agreement on whether it ever was.

    Look up Searle's Chinese room argument. It's basically what's being discussed in the comments above.

  11. On the economic argument, one must remember that it is not simply a case of beating out current labor rates in highly trained specialties. One must beat out middle-class survival costs to have a rapid transformation. Otherwise the transformation may still take place, but over a longer time frame.

    For example, say some specialty currently commands $100,000 a year. Someone develops a machine that can be operated at $90,000 a year. That will drive down the salaries of the specialists and cause students to factor reduced future salaries into their decisions. Eventually the cost of training may exceed the reduced payback and no new people will enter that specialty (a toy back-of-the-envelope version of this calculation follows at the end of this comment). However, the installed base of specialists will face several barriers, some economic, to moving into a new field, so at least the older ones will likely continue to accept reduced salaries until it becomes unbearable.

    This is similar to what happened with blue collar jobs and globalization. That transformation took a long time. Will the white collar frog recognize the slowly rising water temperature this time? Will that cause more political and economic resistance?

    One might argue that Moore's law will drive costs down, but that is competing with atomic physical limits as well as global resource physical limits. I think we'll see that drop off with oil production.

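    Here is the toy back-of-the-envelope calculation mentioned above. All the numbers (training cost, fallback salary, career length, discount rate) are made up purely to illustrate the entry-decision argument:

        # Does entering the specialty still pay once a machine caps the salary?
        # Compare the discounted career earnings premium against training cost.
        def worth_training(training_cost, capped_salary, fallback_salary,
                           career_years=30, discount=0.05):
            premium = capped_salary - fallback_salary
            npv = sum(premium / (1 + discount) ** t
                      for t in range(1, career_years + 1))
            return npv - training_cost

        # $100k specialty vs. a $60k fallback career, $200k of training: worth it.
        print(worth_training(200_000, 100_000, 60_000))   # clearly positive
        # Machine competition caps the specialty salary near $70k: entry dries up.
        print(worth_training(200_000, 70_000, 60_000))    # negative
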
  12. There are so many variables at play here. For example, do people really "need" all the stuff we consume? It seems to me that a lot of the manufacturing production that has recently moved to China or other low-wage areas may not really be necessary in the first place. The companies that produce the consumer goods have to do a lot of marketing and advertising (not to mention designing planned obsolescence into their products) to convince us to keep consuming, or else the whole system will grind to a halt when we realize we can live quite nicely without maxing out our credit cards. And we all play along, because if the companies we work for don't keep producing more and more, we may become unemployed ourselves.

    So, let us assume that low-cost robotics, controlled by "intelligent" computers, replace human labor for most production and management functions. Millions of humans will be rendered redundant. Will we need to keep running in this rat race in order to keep buying the stuff that the machines produce for us? And, if so, where do we get the money to pay for it all? Will we have to tax the owners of the machines in order to redistribute the wealth so that the rest of us can afford to keep buying their products? Or, might we save our money and learn to live more simply (especially if the cost of energy keeps going up), and just let the whole system collapse? Unfortunately, we all cannot simply take in each other's laundry...

    Maybe we'll end up living in one of those dystopian futures where the central computer controls everything... No, I don't think so.

    As fossil fuel supplies continue to dwindle, the rules of the game will change in fundamental ways, and all those millions of workers who would have been replaced by Watson's descendants will end up back on the farm anyway, growing their own food the way our ancestors did before the carbon age.

  13. It's properly "Jeopardy!" Some parodies: SCTV, Saturday Night Live. Would have been witty to phrase the title in the form of a question, too. "What is rampant Luddism?"

    Saw a bibliography by subject of science fiction in a reference library once; it was quite staggering in volume. Editors are constantly reminding prospective authors that EVERY subject imaginable has been covered, often a long, long time ago, the ramifications of robotic labor too cheap to meter being one of them.

    Would appreciate a better search engine; beyond that I'm pretty content with computers as is. Doesn't it seem like we'll reach a stage where enough progress has been made? Or, if job security really becomes threatened for millions, political intervention will step in? After all, by all rights we could install RFID in all products and readers at store entryways, completely doing away with the need for checkout employees and allowing us to charge an entire cart full of groceries in an instant. Yet this far simpler application of tech isn't to be seen.

  14. Can a computer experience pleasure? http://www.slate.com/id/2256711/entry/2256710/

  15. Robert:

    Can a computer experience pleasure? I don't know, but I'm pretty sure that a computer with the appropriate peripherals can give pleasure...

  16. Re RFID on groceries: there are good economic reasons why this isn't going to happen in the next few years.

    Checkout operators act as law enforcement, preventing the sale of tobacco and alcohol to minors. They also perform a few other functions--for some customers, talking to checkout operators is most of their daily social interaction. So there aren't big savings to be made by attempting to eliminate checkouts, and there are possible revenue losses.

    Also, RFIDs are an added item that has to be stuck to packaging, while barcodes are part of the printed design on packaging. (Finally! It took about 20 years, from the early 1970s.) Barcodes are good enough, and have an entire system built up around them, so it would be hard to dislodge them now that they're here.

    ---------------------

    The thing about new general-purpose technologies is that they take several decades to roll out. With "expert seeing robots," we're about at the point electricity was at when Michael Faraday was conducting his experiments in electromagnetism. Maybe a little before that point.

    So when I say "near future", I'm thinking "about fifty to eighty years." After 2050 and before 2100, roughly.

    Also, the singularity I'm talking about is an event in economic history: the point when labour is no longer a constraint on production. (Economists seem to believe this is already true of resources. Old versions of the "production function" had a term for resources, as well as labour and capital; newer versions don't. At least the simple ones.) Kurzweil has tried to create some kind of singularity religion, conflating production technology with immortal youth and other nonsense. Even when he's being more moderate, he underestimates the time needed for social adaptation and the investment cycle.

  17. Size? RFID 'Powder' - World's Smallest RFID Tag (Science Fiction in the News). Previously they were huge ungainly things - we're talking thickness of a hair here. New ones are more like dust particles. Yes, perhaps not mass-producible quite yet, but give it time. I do enjoy a gab with certain checkers but this isn't a big part of the bottom line at all. Verifying the age of purchasers is a no-brainer - keep one meatsack on the payroll to shake all buyers down. Track the fact that minors attempted to purchase and tack it onto their criminal record while you're at it.

    The reaction in Europe to whatever strain of RFID was boycotts, as documented in the book Spychips. Ah, my story was published a million years ago, in the dark days of 2007.

  18. Of course the first place we'll see real market penetration of computers owning decisions formerly in the domain of humans is where there is lots of money to be made. That's nowhere in the real economy. Wall Street is where the action is.

    http://www.theatlantic.com/magazine/archive/2010/07/monsters-in-the-market/8122/

  19. KLR - holy shit, I wasn't aware of the 50 micron RFID tag. That's a seriously scary technology.

  20. Apart from whether computers can *really* think like humans, what are the implications for an industrial society if financial markets and huge accounting bureaucracies can be replaced by an automated capital management system? While that sounds politically impossible, I don't think it would need to be imposed from above for something like Eric's 'monsters' to have an increasing role.

    Would that mean that capitalism was no longer dependent on economic growth, or consumerism?

    Google certainly isn't HAL, but it's had a huge impact on a lot of people; would a mecha-Keynes make most people not just redundant, but economically irrelevant?

  21. Great article. The statistical approach is indeed the way to go, although one does wonder about brittleness in the edge conditions. Would this brittleness ever preclude an MD application? (The computer gets some minute part of its diagnosis not just wrong, but spectacularly wrong - enough to feed the lawyers and keep the technology from being adopted.)

    As an aside, at some point I wonder if the concept of a hypervisor might eventually be applied here. That is, a higher-level abstraction which employs lower-level specialist units (such as Watson, or the Deep Blue algorithms) as the need arises (a rough sketch of that dispatch idea follows at the end of this comment).

    Analogous to the human brain, which has specific areas that are good at processing specific tasks, and then a thin cerebral cortex which "thinks" about these tasks at some level of abstraction.

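    For what it's worth, here is the minimal sketch of that coordinator-over-specialists idea. The specialist "units", their names, and the confidence scores are pure stand-ins, not anything IBM has described:

        # A coordinator asks each specialist unit for (answer, confidence)
        # and keeps the most confident response above a threshold.
        from dataclasses import dataclass
        from typing import Callable, Optional, Tuple

        AnswerFn = Callable[[str], Tuple[Optional[str], float]]

        @dataclass
        class Specialist:
            name: str
            answer: AnswerFn

        def trivia_unit(q: str):
            return ("Dubai", 0.9) if "tallest" in q else (None, 0.0)

        def chess_unit(q: str):
            return ("e4", 0.8) if "chess" in q else (None, 0.0)

        def coordinator(question: str, units, threshold=0.5):
            """Route the question to every unit and keep the best-scored answer."""
            best_answer, best_conf, source = None, 0.0, None
            for unit in units:
                answer, conf = unit.answer(question)
                if conf > best_conf:
                    best_answer, best_conf, source = answer, conf, unit.name
            return (best_answer, source) if best_conf >= threshold else (None, None)

        units = [Specialist("trivia", trivia_unit), Specialist("chess", chess_unit)]
        print(coordinator("Toured the tallest tower in this U.A.E. city", units))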