Curated for content, computing, and digital experience professionals

Author: Lynda Moulton

Ontologies and Semantic Search

Recent studies describe the negative effect of media, including video, television, and online content, on attention spans and even comprehension. One such study suggests that the piling on of content from multiple sources throughout our work and leisure hours has saturated us to the point of making us information filterers more than information “comprehenders.” Hold that thought while I present a second one.

Last week’s blog entry reflected on intellectual property (IP) and knowledge assets and the value of taxonomies as aids to organizing and finding these valued resources. The idea of making search engines better or more precise at finding relevant content is edging into our enterprises through semantic technologies. These are search tools that are better at finding concepts, synonymous terms, and similar or related topics when we execute a search. You’ll find an in-depth discussion of some of these in the forthcoming publication, Beyond Search by Steve Arnold. However, semantic search requires a more sophisticated concept map than a taxonomy. It requires an ontology: a rich representation of a web of concepts, complete with all types of term relationships.

My first comment, about a trend toward just browsing and filtering content for relevance to our work, and the second, about assembling semantically relevant content for better search precision, are two sides of a business problem that hundreds of entrepreneurs are grappling with: semantic technologies.

Two weeks ago, I helped to moderate a meeting on the subject, entitled Semantic Web – Ripe for Commercialization? While the assumed audience was to be a broad business group of VCs, financiers, legal and business management professionals, it turned out to include a lot of technology types. They had some pretty heavy questions and comments about how search engines handle inference and their methods for extracting meaning from content. A semantic search engine needs to understand both the query and the target content to retrieve contextually relevant results.

Keynote speakers and some of the panelists introduced the concept of ontologies as being an essential backbone to semantic search. From that came a lot of discussion about how and where these ontologies originate, how and who vets them for authoritativeness, and how their development in under-funded subject areas will occur. There were no clear answers.

Here I want to give a quick definition of ontology. It is a concept map of terminology which, when richly populated, reflects all the possible semantic relationships that might be inferred from the different ways terms are assembled in human language. A subject-specific ontology is most easily understood in a graphical representation. Ontologies also inform semantic search engines by contributing to the automated deconstruction of a query (making sense of what the searcher wants to know) and the automated deconstruction of the content to be indexed and searched. Good semantic search, therefore, depends on excellent ontologies.
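
To make the idea concrete, here is a minimal sketch of how an ontology’s term relationships might drive that automated query deconstruction and expansion. The terms, relationship types, and function names are all illustrative assumptions, not any real ontology or product.

```python
# A tiny ontology modeled as a term graph. The relationship types
# (synonyms, narrower, related) mirror the kinds of term
# relationships described above; all entries are made-up examples.
ONTOLOGY = {
    "roadway": {
        "synonyms": ["road", "thoroughfare"],
        "narrower": ["highway", "street", "lane"],
        "related": ["pavement", "traffic"],
    },
    "highway": {
        "synonyms": ["motorway", "freeway"],
        "broader": ["roadway"],
    },
}

def expand_query(term):
    """Deconstruct a one-term query into the set of semantically
    related terms a search engine could match against its index."""
    entry = ONTOLOGY.get(term, {})
    expanded = {term}
    for relation in ("synonyms", "narrower", "related"):
        expanded.update(entry.get(relation, []))
    return expanded

print(sorted(expand_query("roadway")))
```

A real ontology would carry many more relationship types and thousands of terms, but even this toy version shows why a richer concept map retrieves content that literal keyword matching misses.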

To see a very simple example of an ontology related to “roadway”, check out this image. Keep in mind that before you aspire to implementing a semantic search engine in your enterprise, you want to be sure that there is a trusted ontology somewhere in the mix of tools to help the search engine retrieve results relevant to your unique audience.

Taxonomy and Enterprise Search

This blog entry on the “Taxonomy Watch” website prompts me to correct the impression that I agree with naysayers who claim taxonomies take too much time and effort to be valuable. Nothing could be further from the truth. I believe in and have always been highly vested in taxonomies because I am convinced that an investment in pre-processing enterprise-generated content into meaningfully organized results brings large returns in time savings for searchers. Otherwise, each searcher must personally invest in the laborious post-processing activity of sifting through and rejecting piles of non-relevant content. Consider that categorizing content well, and only once, brings benefit repeatedly to all who search an enterprise corpus.

The prime assets of enterprises are people and their knowledge; the resulting captured information can be leveraged as knowledge assets (KA). However, there is a serious problem “herding” KA into a form that yields leverageable knowledge. Bringing content into a focus that is meaningful to a diverse but specialized audience of users, even within a limited company domain, is tough because the language of the content is so messy.

So, what does this have to do with taxonomies and enterprise search, and how do they factor into leveraging KA? Taxonomies have a role as a device to promote and secure the meaningful retrievability of content when we need it most or fastest: just-in-time retrieval. If no taxonomies exist to pre-collocate and contextualize content for an audience, we will be perpetually stuck in a mode of individually filtering the excessive search results that come from “keyword” queries. If we don’t begin with taxonomies to help search engines categorize content, we will certainly never reach the holy grail of semantic search. We need every device we can create and sustain to make information more findable and understandable; we just don’t have time to both filter and read, comprehensively, everything a keyword search throws our way to gain the knowledge we need to do our jobs.

Experts recognize that organizing content with pre-defined terminology (aka controlled vocabularies) that can be easily displayed in an expandable taxonomic structure is a useful aid for a certain type of searcher. The audience for navigated search is one that appreciates the clustering of search results into groups that are easily understood. They find value in being able to move easily from broad concepts to narrower ones. They especially like it when the categories and terminology are a close match to the way they view a domain of content in which they are subject experts. It shows respect for their subject area and gives them a level of trust that those maintaining the repository know what they need.

Taxonomies, when properly employed, serve triple duty. Exposing them to search engines that are capable of categorizing content puts them into play as training data. Setting them up within content management systems provides a control mechanism and validation table for human assigned metadata. Finally, when used in a navigated search environment, they provide a visual map of the content landscape.
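
The second duty, serving as a validation table for human-assigned metadata, can be sketched in a few lines. The taxonomy, category names, and functions below are illustrative assumptions, not any particular content management system’s schema.

```python
# A two-level taxonomy as a nested dict; category names are invented.
TAXONOMY = {
    "Engineering": {
        "Software": ["Search", "Databases"],
        "Hardware": ["Storage", "Networking"],
    },
    "Legal": {
        "Contracts": [],
        "Patents": [],
    },
}

def valid_terms(tree):
    """Flatten the taxonomy into a controlled vocabulary."""
    terms = set()
    for top, subtree in tree.items():
        terms.add(top)
        for mid, leaves in subtree.items():
            terms.add(mid)
            terms.update(leaves)
    return terms

def validate_metadata(tags, tree=TAXONOMY):
    """Keep only tags that appear in the controlled vocabulary,
    acting as the validation table for human-assigned metadata."""
    vocabulary = valid_terms(tree)
    return [t for t in tags if t in vocabulary]

print(validate_metadata(["Search", "Misc", "Patents"]))  # "Misc" is rejected
```

The same flattened vocabulary can feed a categorizing search engine as training labels, and the nested structure itself is the broad-to-narrow map a navigated search interface displays.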

U.S. businesses are woefully behind in “getting it”; they need to invest in search and the surrounding infrastructure that supports it. Comments from a recent meeting I attended reflected the belief that the rest of the world is far ahead in this respect. As if to highlight this fact, a colleague just forwarded this news item yesterday: “On February 13, 2008, the XBRL-based financial listed company taxonomy formulated by the Shanghai Stock Exchange (SSE) was ‘Acknowledged’ by the XBRL International. The acknowledgment information has been released on the official website of the XBRL International (http://www.xbrl.org/FRTaxonomies/)….”

So, let’s get on with selling the basic business case for taxonomies in the enterprise to ensure that the best of our knowledge assets will be truly findable when we need them.

Search Engines Under the Hood

This week’s thoughts come from the serendipitous reading that routinely piles up on my desk. In this case a short article in Information Week caught my eye because it featured the husband of a former neighbor, Ken Krugler, co-founder of Krugle. I’d set it aside because a fellow, David Eddy, in my knowledge management forum group keeps telling us that we need tools to facilitate searching for old but still useful source code. To do that, he believes, we need an investment in semantic search tools that normalize the voluminous language variants scattered throughout source code. That would enable programmers to find code that could be repurposed in new applications.
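
One small piece of the normalization problem can be sketched as splitting the naming variants of identifiers into common word tokens, so that differently styled names match the same query. This is an illustrative approach of my own construction, not a description of Krugle’s or anyone else’s actual method.

```python
import re

def normalize_identifier(name):
    """Split an identifier into lowercase word tokens so that
    getUserName, get_user_name and GetUserName all normalize
    to the same tokens."""
    parts = []
    for chunk in name.split("_"):
        # Split on case boundaries: runs of capitals (acronyms),
        # capitalized words, lowercase runs, and digit runs.
        parts.extend(re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", chunk))
    return [p.lower() for p in parts]

for ident in ("getUserName", "get_user_name", "GetUserName"):
    print(normalize_identifier(ident))
```

Indexing these tokens instead of the raw identifiers is one way a code search engine could let a programmer find reusable code regardless of the original author’s naming conventions.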

Now, I have taken the position that source code is just one class of intellectual property (IP) asset that is wasted, abandoned, and warehoused for the technology archaeologists of centuries hence. I just don’t see a solid business case being made to develop search tools that will become a semantic search engine for proprietary treasure troves of code.

Enter old acquaintance Ken Krugler with what seems to be, at first glance, a Web search system that might be helpful for finding useful code out on the Web, including open source. I have finally visited his Web site and I see language and new offerings that intrigue me. “Krugle Enterprise is a valuable tool for anyone involved in software development. Krugle makes software development assets easily accessible and increases the value of a company’s code base. By providing a normalized view into these assets, wherever they may be stored, Krugle delivers value to stakeholders throughout the enterprise.” They could be onto something big. This is a kind of enterprise search I haven’t really had time to think about, but maybe I will now.
One thing leading to another, I checked out Ken Krugler’s blog and saw an earlier posting: Is Writing Your Own Search Engine Hard? This is recommended reading for anyone who even dabbles in enterprise search technology but doesn’t want to get her/his hands dirty with the mechanics. It is short, to the point, and summarizes how and why so many variations of search are battling it out in the marketplace.

I don’t want end-users to struggle too much with the under-the-hood details, but when you are thinking about enterprise search for your organization, it is worth considering how much technology you are getting for the value you want it to deliver, year after year, as your mountains of IP content accrue. Don’t give this idea short shrift, because search is an investment that keeps giving if it is chosen appropriately for the problem you need to solve.

Search Behind the Firewall aka Enterprise Search

Called to account for the nomenclature “enterprise search,” which is my area of practice for The Gilbane Group, I will confess that the term has become as tiresome as any other category to which the marketplace gives full attention. But what is in a name, anyway? It is just a label and should not be expected to fully express every attribute it embodies. A year ago I defined it to mean any search done within the enterprise with a primary focus of internal content. “Enterprise” can be an entire organization, division, or group with a corpus of content it wants to have searched comprehensively with a single search engine.

A search engine does not need to be exclusive of all other search engines, nor must it be deployed to crawl and index every single repository in its path, to be referred to as enterprise search. There are good and justifiable reasons to leave select repositories un-indexed that go beyond even the security concerns implied by the label “search behind the firewall.” I happen to believe that you can deploy enterprise search for enterprises that are quite open with their content and do not keep it behind a firewall (e.g. government agencies, or not-for-profits). You may also have enterprise search deployed with a set of content for the public you serve and another for the internal audience. If the content being searched is substantively authored by the members of the organization, or procured for their internal use, enterprise search engines are the appropriate class of products to consider. As you will learn from my forthcoming study, Enterprise Search Markets and Applications: Capitalizing on Emerging Demand, and from that of Steve Arnold (Beyond Search), there are a lot of flavors out there, so you’ll need to move down the food chain of options to get it right for the application or problem you are trying to solve.

OK! Are you yet convinced that Microsoft is pitting itself squarely against Google? The Yahoo announcement of an offer to purchase for something north of $44 billion makes the previous acquisition of FAST for $1.2 billion pale by comparison. But I want to know how this squares with IBM, which has a partnership with Yahoo in the Yahoo edition of IBM’s OmniFind. This will keep the attorneys busy. Or maybe Microsoft will buy IBM, too.

Finally, this dog fight exposed in the Washington Post caught my eye, or did one of the dogs walk away with his tail between his legs? Google slams Autonomy – now, why would they do that?

I had other plans for this week’s blog but all the Patriots Super Bowl talk puts me in the mode for looking at other competitions. It is kind of fun.

Search Adoption is a Tricky Business: Knowledge Needed

Enterprise search applications abound in the technology marketplace, from embedded search to specialized e-discovery solutions to search engines for crawling and indexing the entire intranet of an organization. So, why is there so much dissatisfaction with results and heaps of stories of buyer’s remorse? Are we on the cusp of a new wave of semantic search options or better ways to federate our universe of content within and outside the enterprise? Who are the experts on enterprise search anyway?

You might read this blog because you know me from the knowledge management (KM) arena, or from my past life as the founder of an integrated enterprise library automation company. In the KM world a recurring theme is the need to leverage expertise, best done in an environment where it is easy to connect with the experts but that seems to be a dim option in many enterprises. In the corporate library world the intent is to aggregate and filter a substantive domain of content, expertise and knowledge assets on behalf of the specialized interests of the enterprise, too often a legacy model of enterprise infrastructure. Librarians have long been innovators at adopting and leveraging advanced technologies but they have also been a concentrating force for facilitating shared expertise. In fact, special librarians excel at providing access to experts.

We are drowning in technological options, not the least of which is enterprise search with its complexity of feature-laden choices. However, it is darned hard to find instances of full search tool adoption, or users who love the search tools delivered on their intranets. So, I am adopting my KM and library science modes to elevate the discussion about search to a decidedly non-technical conversation.

I really want to learn what you know about enterprise search: what you have learned, discovered and experienced over the past two or three years. This blog and the work I do with The Gilbane Group are about getting readers to the best and most appropriate search solutions that can make positive contributions in their enterprises. Knowing who is using what, and where it has succeeded or what problems and issues were encountered, is information I can use to communicate those experiences in aggregate. I am reaching out to you, and those you refer, to complete a five-minute survey to open the door to more discussion. Please use this link to participate right now: Click Here to take the survey. You will then have the option to get the resulting details in my upcoming research study on enterprise search.

Just to prove that I still follow exciting technologies as well, I want to relay a couple of news items. First is a recent category in search, “active intelligence,” adopted as Attivio’s tag line. This is a start-up led by Ali Riaz and officially launched this week from Newton, MA. Then, to get a steady feed of all things enterprise search from guru Steve Arnold, check out his new blog, a lead-up to the forthcoming Beyond Search: What to Do When Your Search Engine Doesn’t Work, to be published by The Gilbane Group. You’ll be transported from the historical, to the here and now, to the newest tools on his radar screen as you page from one blog entry to another.

Nothing Like a Move by Microsoft to Stir up Analysis and Expectations

Since I weighed in last week on the Microsoft acquisition of FAST Search & Transfer, I have probably read 50+ blog entries and articles on the event. I have also talked to other analysts, received emails from numerous search vendors summarizing their thoughts and expectations about the enterprise search market, and had a fair number of phone calls asking what it means. The questions range from “Did Microsoft pay too much?” to “Please define enterprise search,” to “What are the next acquisitions in this market going to be?” My short and flippant answers would be “No,” “Do you have a few hours?” and “Everyone and no one.”

I have seen some excellent analysis contributing relevant commentary to this discussion, some misinterpretation of the distinctions between enterprise search and Web search, and some conclusions that I would seriously debate. You’ll forgive me if I don’t include links to the pieces that influenced the following comments. But one by Curt Monash on January 14 summarized the state of this industry and its already long history. It is noteworthy that while the popular technology press has only recently begun to write about enterprise search, it has been around for decades in different forms, and in a short piece he manages to capture the highlights and current state.

Other commentary seems to imply that Microsoft is not really positioning itself to compete with Google because Google is really about Web (Internet) searching and Microsoft is not. This implies that FAST has no understanding of Web searching. Several points must be made:

  1. FAST Search & Transfer has been involved in many aspects of search technology for a decade. Soon after landing on our shores it was chosen as the U.S. government’s unifying search engine to support Internet-based searching of agency Web sites by the public. Since then it has helped countless enterprises (e.g. governments, manufacturers, e-commerce companies) expose their content, products and services via the Web. FAST knows a lot about how to make Web search better for all kinds of applications, and it will bring that expertise to Microsoft.
  2. Google is exploiting the Web to deliver free business software tools that directly challenge Microsoft’s stronghold (e.g. email, word processing). This will not go unanswered by the largest supplier of office automation software.
  3. Google has several thousand Google Enterprise Search Appliances installed in all types of enterprises around the world, so in terms of numbers it is already as widely deployed in enterprises as FAST, albeit at much lower prices and for simpler applications. That doesn’t mean they are not satisfying a very practical need for a lot of organizations where it is “good enough.”

For more on the competition between the two check this article out.

Enterprise search is often taken to mean only search across all content for an entire enterprise. This raises another fundamental problem of perception. Basically, there are few to no instances of a single enterprise search engine being the only search solution for any major enterprise. Even when an organization “standardizes” on one product for its enterprise search, there will be dozens of other instances of search deployed for groups and divisions, and embedded within applications. Just two examples: Vivisimo is now used for USA.gov to give the public access to government agencies’ public content, even as each agency uses a different search engine for internal use; and IBM, which offers the OmniFind suite of enterprise search products, uses Endeca internally for its Global Services business.

Finally, on the issue of expectations, most of the vendors I have heard from are excited that the Microsoft announcement confirms the existence of an enterprise search market. They know that revenues for enterprise search, compared to Web search, have been minuscule. But now that Microsoft is investing heavily in it, they hope that top management across all industries will see it as a software solution to procure. Many analysts are expecting other major acquisitions, perhaps soon. Frequently mentioned buyers are Oracle and IBM, but both have already made major acquisitions of search and content products, and both already offer enterprise search solutions. It is going to be quite some time before Microsoft sorts out all the pieces of FAST IP and decides how to package them. Other market acquisitions will surely come. The question is whether the next to be acquired will be large search companies with complex and expensive offerings, bought by major software corporations, or search products targeting specific enterprise search markets, a better buy for companies seeking an immediate impact and a broader presence in enterprise search as a complement to other tools. There are a lot of enterprise search problems to be solved and a lot of players to divvy up the evolving business for a while to come.

A Call for Papers and Microsoft creates a FAST Opening in the New Year

I closed 2007 with some final takeaways from the Gilbane Conference and notes about semantic search. Already we are planning for Gilbane San Francisco and you are invited to participate. There is no question that enterprise search, in all its dimensions, will be a central theme of several sessions at the conference, June 17th through 19th. I will lead with a discussion in which a whole range of search topics, technologies and industry themes will be explored in a session featuring guest Steve Arnold, author of Google Version 2.0, The Calculating Predator. To complement the sessions, numerous search technology vendors will be present in the exhibit hall.

A most important conference component, and a highlight for conference goers, will be shared experiences about selecting, implementing and engaging with search tools in the enterprise. Everyone wants to know what everyone else is doing, learning and discovering about enterprise search. You may want to present your experiences or those of your organization. If you are interested in presenting, or know of a good case study, usability finding or “lessons learned” from implementing search technology, please raise your hand. You can do this by reaching out through this link to submit a proposal and make reference to the “enterprise search blog call for papers.” You can be sure I’ll follow up soon to explore the options for you or a colleague to participate. This is a great opportunity to be part of a community of practitioners like you and attend a conference that always has substantive value for participants.

Leave it to Microsoft to end the year with a big announcement and open the next one with an even bigger one. We knew that the world of enterprise search was going to contract in terms of the number of established vendors, even though it is expanding in new and innovative offerings. Microsoft had to make a bold play in an industry where Google has been the biggest player on the WWW stage while reaching deeper into the enterprise, tickling at Microsoft’s decades-old hold on content creation and capture. So, with the acquisition of FAST Search & Transfer, whose technology may not be the best in the enterprise search market but is certainly the most widely deployed at the high-end, Microsoft opens with a direct challenge to its largest competitor.

Boy! Have the emails been flying this morning. At least I know there will be plenty of material to ponder in the next few weeks and months. P.S. Don’t miss the action in San Francisco!

Enterprise Search and Its Semantic Evolution

That the Gilbane Group launched its Enterprise Search Practice this year was timely. In 2007 enterprise search became a distinct market force, capped off with Microsoft announcing in November that it had definitively joined the market.

Since Jan. 1, 2007, I have tried to bring attention to those issues that inform buyers and users about search technology. My intent has been to make it easier for those selecting a search tool while helping them to get a highly satisfactory result with minimal surprises. Playing coach and lead champion while clarifying options within enterprise search is a role I embrace. It is fitting then, that I wrap up this year with more insights gained from Gilbane Boston; these were not previously highlighted and relate to semantic search.

The semantic Web is a concept introduced almost ten years ago, reflecting a vision of how the World Wide Web (WWW) would evolve. In the beginning we needed a specific address (URL) to get to individual Web sites. Some of these had their own search engines, while others were just pages of content we scrolled through or jumped through from link to link. Internet search engines like AltaVista and Northern Light searched limited parts of the WWW. Then Yahoo and Google came along to provide much broader coverage of all “free” content. While popular search engines provided various categorizing, taxonomy navigation, keyword and advanced searching options, you had to know the terminology that content pages contained to find what you meant to retrieve. If your terms were not explicitly in the content, pages with synonymous or related meaning were not found. The semantic Web vision was to “understand” your query intent and return meaningful results through semantic algorithms.
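
The gap between literal keyword matching and that vision can be sketched with a toy synonym table. Everything here, the documents, the synonyms, the functions, is an illustrative assumption standing in for far richer semantic algorithms.

```python
# A toy synonym table and document set; both are invented examples.
SYNONYMS = {"car": {"automobile", "vehicle"}, "film": {"movie", "cinema"}}

DOCUMENTS = {
    "doc1": "automobile safety ratings for 2008 models",
    "doc2": "new movie releases this weekend",
}

def keyword_search(term):
    """Literal matching: only pages containing the exact term."""
    return [d for d, text in DOCUMENTS.items() if term in text.split()]

def semantic_search(term):
    """Synonym-aware matching: the query is expanded before matching."""
    terms = {term} | SYNONYMS.get(term, set())
    return [d for d, text in DOCUMENTS.items() if terms & set(text.split())]

print(keyword_search("car"))   # misses doc1: "car" never appears literally
print(semantic_search("car"))  # finds doc1 via the synonym "automobile"
```

A page about automobiles is invisible to a literal query for “car” but perfectly findable once the query is expanded, which is exactly the failure mode the semantic Web vision set out to fix.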

The most recent Gilbane Boston conference featured presentations of commercial applications of various semantic search technologies that are contributing to enterprise search solutions. A few high level points gleaned from speakers on analytic and semantic technologies follow.

  • Jordan Frank on blogs and wikis in enterprises articulated how they add context by tying content to people and other information like time. Human commentary is a significant content “contextualizer,” my term, not his.
  • Steve Cohen and Matt Kodama co-presented an application using technology (interpretive algorithms integrated with search) to elicit meaning from erratic and linguistically difficult (e.g. Arabic, Chinese) text in the global soup of content.
  • Gary Carlson gave us an understanding of how subject matter expertise contributes substantively to building terminology frameworks (aka “taxonomies”) that are particularly meaningful within a unique knowledge community.
  • Mike Moran helped us see how semantically improved search results can really improve the bottom line in the business sense in both his presentation and later in his blog, a follow-up to a question I posed during the session.
  • Colin Britton described the value of semantic search to harvest and correlate data from highly disparate data sources needed to do criminal background checks.
  • Kate Noerr explained the use of federating technologies to integrate search results in numerous scenarios, all significant and distinct ways to create semantic order (i.e. meaning) out of search results chaos.
  • Bruce Molloy energized the late sessions with his description of how non-techies can create intelligent agents to find and feed colleagues relevant information by searching in the background in ways that go far beyond the typical keyword search.
  • Finally, Sean Martin and John Stone co-presented an approach to computational data gathering, integrating the results in an analyzed and insightful format that reveals knowledge about the data not previously understood.
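
One of the federation techniques in the list above, merging ranked results from several engines into a single deduplicated list, can be sketched in a few lines. The URLs and relevance scores are invented, and real federators must also reconcile incompatible scoring schemes across engines.

```python
# Two ranked result lists from different engines; URLs and
# relevance scores are invented for illustration.
results_engine_a = [("http://example.com/1", 0.9), ("http://example.com/2", 0.7)]
results_engine_b = [("http://example.com/2", 0.8), ("http://example.com/3", 0.6)]

def federate(*result_lists):
    """Merge ranked lists, deduplicating by URL and keeping the
    best score seen for each, then re-sort by score."""
    merged = {}
    for results in result_lists:
        for url, score in results:
            merged[url] = max(score, merged.get(url, 0.0))
    return sorted(merged.items(), key=lambda kv: -kv[1])

print(federate(results_engine_a, results_engine_b))
```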

The point is that each example represents a building block of the semantic retrieval framework we will encounter on the Web and within the enterprise. The semantic Web will not magically appear as a finished interface or product, but it will become richer in how and what it helps us find. Similar evolutions will happen in the enterprise with a different focus, providing smarter paths for operating within business units.

There is much more to pass along in 2008 and I plan to continue with new topics relating to contextual analysis, the value, use and building of taxonomies, and the variety of applications of enterprise search tools. As for 2007, it’s a wrap.


© 2024 The Gilbane Advisor
