Wednesday, March 07, 2007

The Holy Grail of Search

There's an interesting article in today's New York Times about the future of internet searching, at least as perceived from Microsoft's vantage point. At Techfest this week, a Microsoft researcher demonstrated a new service called "Mix," which (if it works as demonstrated) would allow users to organize their results more efficiently. Mix may be available within six to nine months, according to the article. You can find a brief description of Mix if you scroll about three-quarters down this page.

Another service, Web Assistant (no release date given; this name has been kicking around Microsoft for at least 10 years), would be even more intuitive, steering you toward results that distinguish whether you were looking for a football team, an automobile, or an exotic cat when you search for "jaguars." It manages this legerdemain by learning from your previous searches and from those of other searchers who have looked up the same topic.
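
Just to make the idea concrete, here is a toy sketch (my own guess at the mechanics, not anything Microsoft has described) of how prior searches could tip the scales among the competing senses of "jaguars":

    # Toy illustration only -- not Microsoft's actual Web Assistant logic.
    # Idea: score each candidate sense of an ambiguous query by how often
    # words associated with that sense show up in earlier searches.
    from collections import Counter

    SENSE_CUES = {
        "football team": {"jacksonville", "nfl", "roster", "schedule", "score"},
        "automobile":    {"dealer", "convertible", "lease", "xk", "sedan"},
        "exotic cat":    {"wildlife", "rainforest", "habitat", "panthera", "zoo"},
    }

    def rank_senses(search_history):
        """Rank the senses of 'jaguars' using words from previous searches."""
        seen = Counter(word for q in search_history for word in q.lower().split())
        scores = {sense: sum(seen[w] for w in cues)
                  for sense, cues in SENSE_CUES.items()}
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    history = ["nfl draft 2007", "jacksonville hotels", "nfl schedule week 1"]
    print(rank_senses(history))
    # [('football team', 4), ('automobile', 0), ('exotic cat', 0)]

A real system would draw on far richer signals than a hand-made word list per sense, but the principle is the same: your history (and everyone else's) votes on what you probably meant.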

It seems to me that the better the search function gets, the more we as librarians should be rethinking our mission. We have already lost the ready reference trade to Google and its kin; where will we seek refuge when the search engines can actually deliver substantial, unambiguous results right to the desktop?

3 comments:

Anonymous said...

When I read the line "looking for a football team, an automobile, or an exotic cat..." I immediately thought "COUGAR!" only to read that the search term in question was "Jaguar."

Do I get extra points for making the search more complicated? Would these software tools be able to suggest that I might also want to search for "cougar"?

Kurt

George said...

Sure. Take three bonus points out of petty cash!

When the tools are actually available --- if and when that happens --- I'd hope they would include pointers to similar searches. But in this case, I started with the noun "jaguar" and then went for the modifiers. I don't want to be around when the search functions can read my mind!

Bruce Newell said...

And meanwhile Danny Hillis and a group of other really smart people are starting a company to (I think) build out the semantic web. From their project blog (begin quote):

* Metaweb is an infrastructure that includes a massive data store, an API and a set of tools and services.

* Metaweb Technologies, Incorporated is the company building this infrastructure. We’re a group of very technical people working in San Francisco.

* Freebase is an open-data project. The goal of this project is to gather and organize open license data for the good of everybody. The data is stored in Metaweb, but because it has a very open CC-A license, it can be stored anywhere and is immune to the fortunes or failures of a single company.

More succinctly: Metaweb is an infrastructure and a company. Freebase is an open-data project that uses that infrastructure. (end quote)
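
To make that a little less abstract, here is a rough Python sketch of what a lookup against the Metaweb data store might look like. The endpoint, envelope, and query fields below are my guesses for illustration, not something lifted from their documentation:

    # Hypothetical sketch of a lookup against an MQL-style read service.
    # The endpoint URL, envelope shape, and query fields are assumptions
    # for illustration -- check the Metaweb/Freebase docs for the real API.
    import json
    import urllib.parse
    import urllib.request

    MQLREAD_URL = "http://api.freebase.com/api/service/mqlread"  # assumed endpoint

    def mql_read(query):
        """Send one JSON-encoded query via GET and return the decoded response."""
        envelope = json.dumps({"query": query})
        url = MQLREAD_URL + "?" + urllib.parse.urlencode({"query": envelope})
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    # Ask which topics are named "Jaguar" and what types they carry --
    # roughly the disambiguation question from the post above.
    query = [{"name": "Jaguar", "type": [], "id": None}]
    # print(mql_read(query))   # commented out: the service may not be reachable

The point of the open CC-A license, as I read it, is that queries like this needn't be tied to one company's servers; the data can live anywhere.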

Seems to me that libraries might have a complementary role in this project.

The blog: http://roblog.freebase.com/

A short article in the NYT: http://www.nytimes.com/2007/03/09/technology/09data.html?ex=1331096400&en=a87d4f61e6052888&ei=5090&partner=rssuserland&emc=rss