
Category: Enterprise search & search technology

Research, analysis, and news about enterprise search and search markets, technologies, practices, and strategies, such as semantic search, intranet, collaboration and workplace, ecommerce, and other applications.

Before we consolidated our blogs, industry veteran Lynda Moulton authored our popular enterprise search blog. This category collects all of her posts, along with other enterprise search news and analysis.

For older, long form reports, papers, and research on these topics see our Resources page.

Prove It! – The POC & Other Types of Evaluation for Enterprise Search

When a product is described as “the only,” “the best,” “the most complete,” “the fastest,” “the leading,” and so on, does anyone actually read and believe these qualifiers in marketing copy, software or otherwise? Do we believe it when an analyst firm writes reviews, or when product hype appears in industry publications?

Most technology buyers have a level of cynicism about such claims and commentary because we know that in each case there is a bias, with good reasons, for the praise. However, also for good reasons, language containing positive sentiment can have an effect – otherwise, it would not be so widespread. At the very least, sentiment analysis tools that are integrated with search engines will pick up on pervasive tones of praise, and from that create new content streams that compound the positive spin.

Being aware of marketing methods and their influence on our psyches should arm us with caution, but not to the point of becoming risk averse or frozen into indecision. Instead, we need to find a way to test the hype and claims through thoughtful, artful, and strategic analytical processes. We need methods for testing claims that are appropriate for the solution sought.

First, we need to establish what is appropriate for our business need. Cost is often the primary qualifying factor when narrowing the products that will be considered, but this can be short-sighted. The business impact and benefits of applying the right solution need to be directly in our line of sight. If the solution you acquire can be shown to deliver a significant business benefit, a higher-priced product may still be high value to your business. Add to business impact the scope of use for an enterprise search engine (how widely it is deployed and leveraged) and whether it can scale to include multiple searchable repositories across the organization; these attributes may enhance business impact.

Judging the business impact, scope, and scalability of enterprise search products is a tricky proposition. You absolutely cannot do it by totaling the number of positive checks a vendor ticks off on a spreadsheet of requirements. While such a device can be useful for narrowing a field of products down to those you might select, it is only a beginning. All too often, this is where the selection process ends.

What needs to be done next? I recommend these evaluation steps that can be done concurrently:

  • Find customers through social tools, reading, and research. With so many web-based social and search tools, it should be easy to identify individuals and enterprises that are actual users of the products you are considering. Reach out, schedule talk time, and have a pointed list of questions ready to investigate their experiences; listen carefully and follow up on any comments that sound a note of caution.
  • Run a proof-of-concept initiative that includes serious testing by key users with content that is relevant to the testers. Develop test cases and define explicitly for the testers what they are searching and what you want to learn (a minimal sketch of such test cases follows this list).
  • Keep careful notes throughout your interactions with vendors, as you seek information, test their products and request answers to technical questions. The same goes for the conversations with their customers, the ones you find on your own, not just the ones vendors steer you to. Your inquiry needs to include information about business relationship issues, responsiveness, ease of use, and how well a vendor can understand and respond to your business needs in these early relationship stages.
  • If things are not going smoothly, observe how a vendor reacts and responds; what is their follow-up and follow-through in the pre-purchase stage? Never accept the excuse that they are “going through growing pains,” have “so much business demand” they are stretched thin, or that something else is more important to them than your product evaluation. If any of these creep in before you purchase, you have a major symptom conveying clearly that your business is not as important or as valuable to the vendor as another company’s.
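
To make the proof-of-concept step concrete, here is a minimal sketch of how test cases might be captured as structured data and scored with a simple recall-at-k measure. The field names, sample queries, and the run_query wrapper are illustrative assumptions, not any vendor's methodology:

    # Minimal sketch of structured POC test cases; field names and sample
    # queries are illustrative assumptions, not any vendor's schema.
    from dataclasses import dataclass

    @dataclass
    class TestCase:
        query: str                # what the tester will type
        repository: str           # which content source is being searched
        expected_doc_ids: set     # documents an expert says should be found
        top_k: int = 10           # how deep in the results we will look

    def recall_at_k(returned_doc_ids, case):
        """Fraction of expected documents appearing in the top_k results."""
        found = set(returned_doc_ids[:case.top_k]) & case.expected_doc_ids
        return len(found) / len(case.expected_doc_ids)

    # Example cases drawn up with the testers who own the content.
    cases = [
        TestCase("vacation carryover policy", "HR portal", {"hr-0117", "hr-0342"}),
        TestCase("model X torque specification", "Engineering wiki", {"eng-9001"}),
    ]

    def evaluate(run_query, cases):
        """run_query wraps whichever candidate product is under evaluation."""
        for case in cases:
            results = run_query(case.query, case.repository)
            print(f"{case.query!r}: recall@{case.top_k} = {recall_at_k(results, case):.2f}")

Even something this small forces the questions raised above: what exactly is being searched, and what would count as success for the testers.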

Longevity of use of an enterprise search application must be foremost in your mind throughout all of these steps. While many enterprises try to plan for upgrading or replacing legacy software applications to remain competitive and current with newer technologies, actual experiences are rarely ideal. You could be “stuck” with your choice for a decade or longer. Being in a dependent relationship with a vendor or product you are not happy with will be a miserable experience and no benefit to your enterprise, no matter how popular the product is in the industry press.

The steps for selection will take a little longer than just sending out RFPs and reading responses, but it is well worth it for the long-haul relationship you are about to enter.

Forecasting Software Product Abandonment

Given the announcement from Microsoft that the 2010 releases of Fast on Linux and UNIX would be the last for these operating systems, a lot of related comments have appeared over the past few weeks. For those of us who listened intently to Microsoft's early commentary, after the Fast acquisition, about its high level of commitment to dual development tracks, this only confirms what many analysts suspected would happen. Buyers rarely embrace their technology acquisitions solely (or even primarily) for the technology.

While these 2010 releases of Fast ESP on UNIX and Linux will continue to be supported for ten years, and repositories are projected to be indexable on these two platforms by future Fast releases, some customers will opt out of continuing with Fast. As newer and more advanced search technologies support preferred operating systems, they will choose to move. Microsoft probably expects to retain most current customers for the time being – inertia and long evaluation and selection processes by enterprises are on their side.

This recent announcement did include a small aside questioning whether Microsoft would continue to offer a standalone search engine outside of its SharePoint environment, where the Fast product has been embedded and leveraged first. It sounds like the short-term plan is to continue with standalone ESP, but certainly no long-term commitment has been made.

So, whatever sense of stability pre-Microsoft Fast customers were feeling sanguine about is surely being shaken. Let’s take a look at some reasons that vendors abandon their acquisitions. First we need to consider why companies add products through acquisition in the first place. A simple list looks like this:

  1. Flat sales
  2. Need to penetrate a growth market or industry
  3. Desire to demonstrate strength to its existing customer base by acquiring a high-end brand name
  4. Need for technology, IP, and expertise
  5. Desire to expand the customer base, quickly

While item 1 probably was not a contributor to the Microsoft Fast acquisition, items 2 and 3 certainly factored into the plan. Fast was “the” brand and had become synonymous in the marketplace with “enterprise search leader.” Surely Microsoft considered the technology, IP, and key employees it would be acquiring, and a ready-made customer base and maintenance revenue stream would have been considerations, too.

Customers do have reasons to be nervous in any of these big acquisitions, however. Here is what often gets exposed once the onion is peeled:

  • Game-changing technology is playing havoc in the marketplace; in search there are numerous smaller players with terrific technologies, more nimble and innovative development teams with rigorous code-control mentalities, and the experience of having studied the gaps in older search technologies.
  • Cost of supporting multiple code bases is enormous, so the effort of developing native support on multiple platforms becomes onerous.
  • For any technology, loss of technical gurus (particularly when there has been a culture of loose IP control, poor capture of know-how, and limited documentation) will quickly drive a serious reality check as the acquirer strives to understand what it has bought.
  • Brand name customers may not stick around to find out what is going to happen, particularly if the product was on the path to being replaced anyway. Legacy software may be in place because it is irreplaceable or simply due to the neglect of enterprises using it. It may be very hard for the acquiring buyer to determine which situation is the case. A change of product ownership may be just the excuse that some customers need to seek something better. Customers understand the small probability of having a quick and smooth integration of a just-acquired product into the product mix of a large, mature organization.
  • A highly diverse customer base, in many vertical markets, with numerous license configuration requirements for hardware and operating system infrastructures will be a nightmare to support for a company that has always standardized on one platform. Providing customer support for a variety of installation, tuning and on-going administration differences is just not sustainable without a lot of advance planning and staffing.

The Microsoft/Fast circumstance is just an illustration. You might take a look at what is also going on with SAP after its acquisition of Business Objects (BO) in this lengthy analysis at Information Week. In this unfortunate upheaval, BO’s prior acquisition of Inxight has been a particular loss to those who had embraced that fine analytics and visualization engine.

The bottom line is that customers who depend on technology of any kind, for keeping their own businesses running effectively and efficiently, must be aware of what is transpiring with their most valued vendor/suppliers. When there is any changing of the guardians of best-of-breed software tools, be prepared by becoming knowledgeable about what is really under the skin of the newly harvested onion. Then, make your own plans accordingly.

What is the Price and What is the Cost?

Enterprise software pricing runs the gamut from nominal to hundreds of thousands of dollars. Unless software for enterprise search reaches commodity status with a defined baseline of functional specifications, the marketplace will continue to be confused and highly segmented.

What buyers need to do first is to stop limiting their procurement selection choices based primarily on license prices. When enterprises begin their selection by considering prices first, many options are eliminated that may be functionally more appropriate and for which the total cost of ownership may be even less.

Product pricing correlates more with the market domain in which a vendor sells or aims to sell than with actual product value per installed user. Therefore, companies in the small to mid-range are particularly vulnerable to unreasonable licensing. I have written about this before, but it bears repeating: the strength of the underlying technology has little to do with the price but can influence the total cost of ownership (TCO) dramatically.

Buyers often believe a high license price equates to top product value; in general, you still need to add another 60-80% for services and support costs to get that value out. But let’s look at the business reality and corporate context for sellers of high-priced enterprise search.
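
To make the arithmetic concrete, a purely illustrative calculation follows, using the 60-80% services-and-support range above; the license figure, maintenance rate, and five-year horizon are hypothetical assumptions:

    # Illustrative TCO arithmetic, not a pricing model. The license price,
    # maintenance rate, and time horizon are hypothetical; the 0.6-0.8
    # services factor reflects the range cited above.
    license_price = 250_000              # hypothetical one-time license fee (USD)
    annual_maintenance_rate = 0.20       # assumed 20% of license per year
    years = 5                            # assumed ownership horizon

    for services_factor in (0.6, 0.8):
        initial_cost = license_price * (1 + services_factor)   # license plus services to get value out
        maintenance = license_price * annual_maintenance_rate * years
        total = initial_cost + maintenance
        print(f"services at {services_factor:.0%}: {years}-year TCO = ${total:,.0f}")

On these assumed numbers, a $250,000 license becomes a $650,000-$700,000 commitment over five years, which is why comparing license prices alone can mislead.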

A large company's net sales are a significant determinant of its reputation and potential staying power in its industry. However, when actual sales for a search product line are a tiny fraction of total company revenue, potential buyers of enterprise search need to know that and factor it into their decision-making, for these reasons:

  • The largest software companies are heavily invested in subscribing to the analyst services that write about the industry. They are diligent in reporting their sales figures to the firms and publications that do annual surveys of various industry segments. The reporting usually notes when revenues for a particular sector (like search) are not broken out, but this often escapes the notice of buyers who only see that company X has enormous revenues compared to others. This leaves the impression that the company is also a standout in the search sector.
  • The fact that a company offers many software products, of which search is only one, has often resulted from acquisition of a lot of products. Search may only be in the mix because it complements other products. The company may or may not have actually retained the technology gurus who originally designed, developed and supported the software. A lot of software quickly becomes stale once acquired by a third-party.
  • When a very large company offers many products, it focuses sales, account management, support and development on those with the largest revenue stream or growth potential. Marketing for marginal products may be sustained for a longer period to bring in “easy” business but unfortunately, for too long, search has been treated as a loss leader to attract revenues for other product lines. Where “search” fits into a mix of products, how well it will be serviced and supported over time may be difficult to discern.
  • The final situation that arises for very large software companies is that competition is an ever-present cause of shifting agendas. The largest software firms will often abandon technologies whose architecture, unique functions, and even customers do not fit their changing market interests. They will abandon products for which they paid huge sums once the initial value of the procurement has been realized, when a product’s technology has been captured for embedding in other product suites, or when the product is no longer viewed as strategic.

In the next blog posting we’ll take a look at some other reasons that vendors make and then abandon their acquisitions. But in the meantime, here is a recommendation to buying decision-makers:

When you see a very long list of customer logos on the web sites of major software vendors, there is important context that is not provided. Large corporations can and do buy competing products all the time. Some products achieve enterprise-wide use and adoption for the long term, while others are used briefly or in smaller applications. You can’t know whether a product is even in use at the company whose logo is displayed.

Because it is almost impossible for an outsider to identify the actual buyer/user of a product in a large enterprise, the posted logos tell you little. Inside an enterprise one may discover endless tales of when, why, and how competing products were acquired, many as part of package deals or through a subsidiary acquisition. What is also true is that stories of successful implementations or brand loyalty do not abound.

If you are new to enterprise search, take control of your own destiny by educating yourself with a lower-priced product that has a good reputation for a niche application. Invest your budget instead in human resources (internal or third-party) to craft the solution you really need.

Start with a vision of appropriate scale, tackling a small domain of high value content that is currently hard to find in your organization.

Use the experience of implementing and leveraging this search product, and of engaging with the vendor, to build a deeper understanding of the technology and applications of search. Working with a vendor dedicated exclusively to search has another cost benefit: the focused attention you are more likely to receive. Delving deeply into planning and implementation for a targeted result has a cost that brings multiple benefits as you move forward to larger and more complex implementations – even if you move on to another product.

Search Industry in 2010

Just in from Information Week is this article (Exclusive: IBM Reorganizes Software Group) that prompted me to launch 2010 with some thoughts on where we are heading with enterprise search this year. When IBM does something dramatic it impacts the industry because it makes others react.

I don’t make forecasts or try to guess whether strategic changes will succeed or fail, but a couple of years ago I blogged on IBM’s introduction of Yahoo OmniFind, a free offering, and then followed up with these comments just a few months ago. IBM makes its competitors change, try to outsmart, outguess, or copy it, just as changes at Microsoft or Google cause ripples in the industry.

Meanwhile, OpenText, another large software company with search offerings, is not going to offer search outside of its other product suites. [More is likely to come out after the scheduled analyst meetings today but I’m not there and can’t brief you on deeper intent.] We have recently seen an announcement about FAST being delivered with new SharePoint offerings, the first major release of FAST announced since Microsoft acquired them almost two years ago. While FAST is still available as a standalone product from MS, it and other search engines may be steadily moving into being embedded in suites by their acquirers.

Certainly IBM has a lot of search components that they have acquired, so continuing to bind with other content offerings is a probable strategy. Oracle and Autonomy may soon come up with similar suite offerings embedding search once again. Oracle SES (Secure Enterprise Search) does not appear to have a lot of traction and it’s possible that supporting pure search offerings may be a burden for Autonomy with its stable of many acquired content products.

All of this leads me to think that, since enterprise search has gotten such a bad reputation as a failed technology, the big software houses are going to bury it in point solutions. Personally, I believe that enterprise search is a failed strategy, but SMBs can still find search engines that will serve the majority of their enterprise needs for several years to come. The same holds true for divisions or groups within large corporations.

Guidance: select and adopt one or more search solutions that fit your budget for small-scale needs, point solutions, and the enterprise content that everyone in the organization needs to access on a regular basis. Learn how these products work and what they can and cannot deliver, making incremental adjustments as needs change and evolve. Do not install and think you are done, because you will never be done. Cultivate a few search experts and give them the means to keep up with changes in the search landscape. It is going to keep morphing for a long time to come.

Contegra and dtSearch Announce Faceted Search for dtSearch

Contegra Systems and dtSearch announced a faceted search add-on for dtSearch developer customers. Faceted search enables dynamic filtering of search results by attributes. Built on the dtSearch Engine APIs, Contegra Systems’ Kaleido Search now makes faceted search available to content-rich applications and e-commerce sites. Kaleido Search offers the ability to group search results by facet, the ability to “expand and collapse” facet selections, on-demand summaries of selected facets, and more. Kaleido Search enables these faceted search features in the context of a comprehensive solution for online data access that is customizable to suit any site. The dtSearch Engine can index over a terabyte of data in a single index, as well as create and instantly search an unlimited number of indexes. The software offers more than 25 search options, including Unicode support covering hundreds of international languages. Proprietary file format support highlights hits in popular file types. A built-in Spider supports searching of local and remote, public and secure, dynamic and static web data, with WYSIWYG hit-highlighted displays. The dtSearch Engine API supports .NET, Java, C++, SQL, etc., including native 64-bit Windows/Linux support. http://www.contegrasystems.com, http://www.dtsearch.com
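
For readers new to the term, faceted search is straightforward to reason about; the toy sketch below shows the core idea of counting and filtering hits by attribute. It is purely conceptual and does not use the dtSearch Engine or Kaleido Search APIs:

    # Conceptual illustration of faceted filtering over a result set; the
    # sample hits and attribute names are made up, and this is not the
    # dtSearch Engine or Kaleido Search API.
    from collections import Counter

    hits = [  # pretend these were returned by a full-text query
        {"id": 1, "format": "pdf",  "department": "legal"},
        {"id": 2, "format": "docx", "department": "legal"},
        {"id": 3, "format": "pdf",  "department": "hr"},
    ]

    def facet_counts(hits, attribute):
        """Count how many hits fall under each value of one facet."""
        return Counter(h[attribute] for h in hits)

    def apply_facet(hits, attribute, value):
        """Dynamically narrow the result set to a selected facet value."""
        return [h for h in hits if h[attribute] == value]

    print(facet_counts(hits, "format"))              # Counter({'pdf': 2, 'docx': 1})
    print(apply_facet(hits, "department", "legal"))  # hits 1 and 2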

Perst Embedded Database Integrated with Jease Content Management Framework

Jease, a content management framework based on open source Java technologies, has added support for the Perst object-oriented, open source embedded database system from McObject. When used with Jease, Perst becomes the persistence engine for highly customized, content- and database-driven Web applications that leverage the productivity and efficiency of working with “plain old Java objects” (POJOs). Jease (the name combines “Java” and “ease”) provides building blocks for developers with even a little Java experience to assemble Web applications tailored to specific needs. The goal of Jease is to offer a flexible content management framework rather than a full-blown content management system. Other open source software components used by Jease include Apache Lucene for full-text indexing and search, and the ZK Ajax + Mobile Java framework. Perst and Perst Lite are part of McObject’s family of small footprint, high performance embedded database software products. The eXtremeDB in-memory embedded database from McObject is used in devices including MP3 players, industrial automation solutions, digital TVs, telecom/network communications equipment and military/aerospace technology. Perst is available for Java and .NET, including Java ME and .NET Compact Framework. http://www.jease.org/, http://www.mcobject.com

In the end, good search may depend on good source.

As the world of search becomes more and more sophisticated (and that process has been underway for decades), we may be approaching the limits of software’s ability to find what a searcher wants. If that is true, and I suspect that it is, we will finally be forced to follow the trail of crumbs up the content life cycle… to its source.

Indeed, most of the challenges inherent in today’s search strategies and products appear to grow from the fact that while we continually increase our demands for intelligence on the back end, we have done little if anything to address the chaos that exists on the front end. You name it: different word processing formats, spreadsheets, HTML-tagged text, database-delimited files, and so on are all dumped into what we think of as a coherent, easily searchable body of intellectual property. It isn’t, and it isn’t likely to become so any time soon unless we address the source.
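
One way to picture “addressing the source” is to normalize every incoming format into a single minimal record before it ever reaches the index. The sketch below is an assumption about what such a record and its extraction step could look like, not a prescription for any particular tool:

    # Sketch of normalizing heterogeneous source content into one indexable
    # record; the field names and the extension-based dispatch are
    # illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class IndexRecord:
        source_path: str
        title: str
        body_text: str                                       # plain text pulled from the native format
        subject_terms: list = field(default_factory=list)    # creator-supplied taxonomy terms

    def extract_plain_text(path):
        """Dispatch on file extension; real extractors for PDF, DOCX, HTML,
        or delimited files would plug in here."""
        if path.endswith(".txt"):
            with open(path, encoding="utf-8") as f:
                return f.read()
        raise NotImplementedError(f"no extractor wired up for {path}")

    def make_record(path, title, subject_terms):
        return IndexRecord(path, title, extract_plain_text(path), list(subject_terms))

The point is not the code but the discipline: if every format is reduced to the same small record, with a few creator-supplied subject terms, the back end finally has something rational to work with.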

Having spent some time in the library automation world, I can remember the sometimes bitter controversies over having just two major foundations for cataloging source material (Dewey and LC; add a third if you include the NICEM A/V scheme). Had we known back then that the process of finding intellectual property would devolve into the chaos we now confront, with every search engine and database product essentially rolling its own approach to rational search, we would have considered ourselves blessed. In the end, it seems, we must begin to see the source material, its physical formats, its logical organization, and its inclusion of rational cataloging and taxonomy elements as the conceptual raw material for its own location.

As long as the word processing world teaches that anyone creating anything can make it look like it should in a dozen different ways, ignoring any semblance of finding-aid inclusion, we probably won’t have a truly workable ability to find what we want without reworking the content or wading through a haystack of misses to find our desired hits.

Unfortunately, the solutions of yesteryear, including after-creation cataloging by a professional cataloger, probably won’t work now either, for cost if no other reason. We will be forced to approach the creators of valuable content, asking them for a minimum of preparation for searching their product, and providing the necessary software tools to make that possible.

We can’t act too soon because, despite the growth of software elegance and raw computer power, this situation will likely get worse as the sheer volume of valuable content grows.

Regards, Barry

W3C Publishes Drafts of XQuery 1.1, XPath 2.1

The World Wide Web Consortium (W3C) has published new Drafts of XQuery 1.1, XPath 2.1 and Supporting Documents. As part of work on XSLT 2.1 and XQuery 1.1, the XQuery and XSL Working Groups have published First Public Working Drafts of “XQuery and XPath Data Model 1.1,” “XPath and XQuery Functions and Operators 1.1,” “XSLT and XQuery Serialization 1.1” and “XPath 2.1.” In addition, the XQuery Working Group has updated drafts for “XQuery 1.1: An XML Query Language,” “XQueryX 1.1” and “XQuery 1.1 Requirements.” http://www.w3.org/News/2009#entry-8682


