Curated for content, computing, and digital experience professionals

Author: Lynda Moulton

What is the Price and What is the Cost?

Enterprise software pricing runs the gamut from nominal to hundreds of thousands of dollars. Unless software for enterprise search reaches commodity status with a defined baseline of functional specifications, the marketplace will continue to be confused and highly segmented.

What buyers need to do first is stop limiting their procurement choices primarily on the basis of license price. When enterprises begin their selection by considering price first, they eliminate many options that may be functionally more appropriate and whose total cost of ownership may be even lower.

Product pricing correlates more to the market domain in which a vendor sells, or aims to sell, than to actual product value per installed user. Companies in the small to mid-range are therefore particularly vulnerable to unreasonable licensing. I have written about this before but it bears repeating: the strength of the underlying technology has little to do with the price, yet it can influence the total cost of ownership (TCO) dramatically.

Buyers often believe a high license price signals top product value; in practice you still need to add another 60-80% for services and support to get that value out. But let’s look at the business reality and corporate context for sellers of high-priced enterprise search.

The net sales of any large company are a significant determinant of its reputation and potential staying power in its industry. However, when actual sales for a search product line are a tiny fraction of total company revenue, potential buyers of enterprise search need to know that and factor it into their decision-making, for these reasons:

  • The largest software companies are heavily vested in subscribing to the analyst services that write about the industry. They are diligent in reporting their sales figures to the firms and publications that run annual surveys of industry segments. The reporting usually notes when revenues for a particular sector (like search) are not broken out, but this often escapes the notice of buyers who see only that company X has enormous revenues compared to others. This leaves the impression that the company is also a standout in the search sector.
  • When a company offers many software products, of which search is only one, that breadth often results from acquiring many products. Search may be in the mix only because it complements other products. The company may or may not have retained the technology gurus who originally designed, developed and supported the software, and a lot of software quickly becomes stale once acquired by a third party.
  • When a very large company offers many products, it focuses sales, account management, support and development on those with the largest revenue stream or growth potential. Marketing for marginal products may be sustained for a while to bring in “easy” business, but for too long search has been treated as a loss leader to attract revenues for other product lines. Where “search” fits into a mix of products, how well it will be serviced and supported over time may be difficult to discern.
  • Finally, for very large software companies, competition is an ever-present cause of shifting agendas. The largest software firms will often abandon technologies whose architecture, unique functions, and even customers no longer fit their changing market interests. They will abandon products for which they paid huge sums once the initial value of the acquisition has been realized, once a product’s technology has been captured for embedding in other product suites, or when the product is no longer viewed as strategic.

In the next blog posting we’ll take a look at some other reasons that vendors make and then abandon their acquisitions. But in the meantime, here is a recommendation to buying decision-makers:

When you see a very long list of customer logos on the web sites of major software vendors there is important context that is not provided. Large corporations can and do buy competing products all the time. Some products get into enterprise-wide use and adoption for the long term while others are used briefly or in smaller applications. You can’t know whether a product is even in use in the company whose logo is displayed.

Because it is almost impossible for an outsider to find the actual buyer or user of a product in a large enterprise, the posted logos tell you little. Inside an enterprise one may discover endless tales of when, why and how competing products were acquired, many as part of package deals or through a subsidiary acquisition. What is also true is that stories of successful implementations or brand loyalty do not abound.

If you are new to enterprise search, take control of your own destiny by educating yourself using a lower-priced product with a good reputation for a niche application. Invest your budget instead in human resources (internal or third-party) to craft the solution you really need.

Start with a vision of appropriate scale, tackling a small domain of high value content that is currently hard to find in your organization.

Use the experience of implementing this search product and engaging with its vendor to build a deeper understanding of the technology and applications of search. Working with a vendor dedicated exclusively to search has another cost benefit: the focused attention you are more likely to receive. Delving deeply into planning and implementation for a targeted result has a cost, but it brings multiple benefits as you move forward to larger and more complex implementations, even if you move on to another product.

Search Industry in 2010

Just in from Information Week is this article (Exclusive: IBM Reorganizes Software Group) that prompted me to launch 2010 with some thoughts on where we are heading with enterprise search this year. When IBM does something dramatic it impacts the industry because it makes others react.

I don’t make forecasts or try to guess whether strategic changes will succeed or fail, but a couple of years ago I blogged on IBM’s introduction of Yahoo OmniFind, a free offering, and then followed up with these comments just a few months ago. IBM forces its competitors to change, to try to outsmart, outguess, or copy it, just as changes at Microsoft or Google cause ripples in the industry.

Meanwhile, OpenText, another large software company with search offerings, is not going to offer search outside of its other product suites. [More is likely to come out after the scheduled analyst meetings today but I’m not there and can’t brief you on deeper intent.] We have recently seen an announcement about FAST being delivered with new SharePoint offerings, the first major release of FAST announced since Microsoft acquired the company almost two years ago. While FAST is still available as a standalone product from Microsoft, it and other search engines appear to be steadily moving toward being embedded in their acquirers’ suites.

Certainly IBM has acquired a lot of search components, so continuing to bind them with other content offerings is a probable strategy. Oracle and Autonomy may soon come up with similar suite offerings embedding search once again. Oracle SES (Secure Enterprise Search) does not appear to have much traction, and it is possible that supporting pure search offerings is becoming a burden for Autonomy with its stable of many acquired content products.

All of this leads me to think that, since enterprise search has gotten such a bad reputation as a failed technology, the big software houses are going to bury it in point solutions. Personally, I do not believe that enterprise search is a failed technology, and SMBs can still find search engines that will serve the majority of their enterprise needs for several years to come. The same holds true for divisions or groups within large corporations.

Guidance: select and adopt one or more search solutions that fit your budget for small-scale needs, point solutions, and the enterprise content that everyone in the organization needs to access regularly. Learn how these products work and what they can and cannot deliver, making incremental adjustments as needs evolve. Do not install and think you are done, because you will never be done. Cultivate a few search experts and give them the means to keep up with changes in the search landscape. It is going to keep morphing for a long time to come.

Layering Technologies to Support the Enterprise with Semantic Search

Semantic search is a composite beast like many enterprise software applications. Most packages are made up of multiple technology components and often from multiple vendors. This raises some interesting thoughts as we prepare for Gilbane Boston 2009 to be held this week.

As part of a panel on semantic search, moderated by Hadley Reynolds of IDC, with Jeff Fried of Microsoft and Chris Lamb of the OpenCalais Initiative at Thomson Reuters, I wanted to give a high level view of semantic technologies currently in the marketplace. I contacted about a dozen vendors and selected six to highlight for the variety of semantic search offerings and business models.

One case study involves three vendors, each with a piece of the ultimate, customer-facing, product. My research took me to one company that I had reviewed a couple of years ago, and they sent me to their “customer” and to the customer’s customer. It took me a couple of conversations and emails to sort out the connections; in the end the relationships made perfect sense.

On one hand we have conglomerate software companies offering “solutions” to every imaginable enterprise business need. On the other, we see unique, specialized point solutions to universal business problems with multiple dimensions and twists. Teaming by vendors, each with a solution to one dimension of a need, creates compound product offerings that add up to a very large semantic search marketplace.

Consider an example of data gathering by a professional services firm. Let’s assume that my company has tens of thousands of documents collected in the course of research for many clients over many years. Researchers may move on to greater responsibility or other firms, leaving content unorganized except around confidential work for individual clients. We now want to exploit this corpus of content to create new products or services for various vertical markets. To understand what we have, we need to mine the content for themes and concepts.

The product of the mining exercise may have multiple uses: helping us create a taxonomy of controlled terms, preparing a navigation scheme for a content portal, or providing a feed to business or text analytics tools that will help us create visual objects reflecting various configurations of content. A text mining vendor may be great at the mining aspect while other firms have better tools for analyzing, organizing and re-shaping the output.

Doing business with two or three vendors, experts in their own niches, may help us reach a conclusion about what to do with our information-rich pile of documents much faster. A multi-faceted approach can be a good way to bring a product or service to market more quickly than if we struggle with generic products from just one company.

When partners each have something of value to contribute, together they offer the benefits of the best of all options. This results in a new problem for businesses looking for the best in each area, namely, vendor relationship management. But it also saves organizations from dealing with huge firms offering many acquired products that have to be managed through a single point of contact, a generalist in everything and a specialist in nothing. Either way, you have to manage the players and how the components are going to work for you.

I really like what I see, semantic technology companies partnering with each other to give good-to-great solutions for all kinds of innovative applications. By the way, at the conference I am doing a quick snapshot on each: Cogito, Connotate (with Cormine and WorldTech), Lexalytics, Linguamatics, Sinequa and TEMIS.

Where and How Can You Look for Good Enterprise Search Interface Design?

Designing an enterprise search interface that employees will use on their intranet is challenging in any circumstance. But starting from nothing more than verbal comments or even a written specification is really hard. However, conversations about what is needed and wanted are informative because they can be aggregated to form the basis for the overarching design.

Frequently, enterprise stakeholders will reference a commercial web site they like or even search tools within social sites. These are great starting points for a designer to explore. It makes a lot of sense to visit scores of sites that are publicly accessible, or sites where you have an account, and navigate around to see how they handle various design elements.

To start, look at:

  • How easy is it to find a search box?
  • Is there an option to do advanced searches (Boolean or parametric searching)?
  • Is there a navigation option to traverse a taxonomy of terms?
  • Is there a “help” option with relevant examples for doing different kinds of searches?
  • What happens when you search for a word that has several spellings or synonyms, a phrase (with or without quotes), a phrase with the word “and” in it, a numeral, or a date?
  • How are results displayed: what information is included, what is the order of the results and can you change them? Can you manipulate results or search within the set?
  • Is the interface uncluttered and easily understood?

The point of this list of questions is that you can use it to build a set of criteria for designing what your enterprise will use and adopt, enthusiastically. But this is only a beginning. By actually visiting many sites outside your enterprise, you will find features that you never thought to include or aggravations that you will surely want to avoid. From these experiences on external sites, you can build up a good list of what is important to include or banish from your design.

When you find sites that you think are exemplary, ask key stakeholders to visit them and give you their feedback, preferences and dislikes. In particular, note what confuses them and what excites them enough to draw enthusiastic comments.

This post originated because several press notices in the past month brought to my attention Web applications that have sophisticated and very specialized search applications. I think they can provide terrific ideas for the enterprise search design team and also be used to demonstrate to your internal users just what is possible.

Check out these applications and articles: KNovel (particularly this KNovel page), ThomasNet, and EBSCOHost, which is mentioned in this article about the “deep Web.” All these applications reveal superior search capabilities, have long track records, and are already used by enterprises every day. Because they are already successful in the enterprise, some by subscription, they are worth a second look as examples of how to approach your enterprise’s search interface design.

Meta Tags and Trusted Resources in the Enterprise

A recent article about how Google Internet search does not use meta tags to find relevant content got me thinking about a couple of things.

First, it explains why none of the articles I write for this blog about enterprise search appear in Google alerts for “enterprise search.” Besides being a personal annoyance, easily resolved if I invested in some Internet search optimization, it may explain why meta tagging is a hard sell behind the firewall.

I do know something about getting relevant content to show up in enterprise search systems, and it depends on a layer of what I call “value-added metadata” supplied by someone who knows the subject matter of the target content and its audience. Working with the language of the enterprise audience that relies on finding critical content to do their jobs, a meta tagger will bring out topical language known to be the lingua franca of the dominant searchers, as well as the language that will be used by novice employee searchers. The key here is to recognize that the “aboutness” of any specific piece of content may never be explicitly spelled out in the author’s own terminology.

In one example, let’s consider some fundamental HR information about “holiday pay” or “compensation for holidays” or “compensation for time-off.” Those quoted strings were used throughout documents on the intranet of one organization where I consulted. When some employees complained about not being able to find this information using the company search system, my review of the search logs showed a very large number of searches for “vacation pay” and almost no searches for “compensation” or “holidays” or “time off.” Thus, there was no way that employees using the search engine would stumble upon the useful information they were seeking, unless meta tags made “vacation pay” a retrievable index pointer to those documents. The tagger would have analyzed the search logs, seen the high number of searches for that phrase, and realized that it was needed as a meta tag.
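The log-analysis step described here can be sketched in a few lines of code. This is only an illustration: the log entries, the set of document terms, and the frequency threshold are all invented for the example, not taken from any real system.

```python
from collections import Counter

# Hypothetical search log: one query string per entry, as in the anecdote.
search_log = [
    "vacation pay", "vacation pay", "vacation pay", "vacation pay",
    "holiday schedule", "vacation pay", "benefits",
]

# Terms that actually appear in the intranet documents.
document_terms = {"holiday pay", "compensation for holidays", "compensation for time-off"}

# Count query frequency, then flag popular queries that match no document
# term: these are candidates for value-added meta tags.
query_counts = Counter(q.lower().strip() for q in search_log)
candidate_tags = [
    (query, count)
    for query, count in query_counts.most_common()
    if query not in document_terms and count >= 3
]

print(candidate_tags)  # "vacation pay" dominates, so it becomes a meta tag
```

The same pattern scales to real logs: any frequent query that retrieves nothing is a signal that a tagger should add it as an index pointer to the relevant documents.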

Now, back to Google’s position on ignoring meta tags because writers and marketing managers were “gaming the system.” They were adding tags they thought would be popular in order to draw people to unrelated content for which they were seeking a huge audience.

I have heard the concern that people within enterprises might likewise hijack the usefulness of content they post in blogs or wikis to get more “eyeballs” in the organization. This is a foolish concern, in my opinion. First, I have never seen evidence that this happens, and I don’t believe that any productive enterprise has people engaging in such obvious foolishness.

More importantly, professional growth and success depend on the perceptions of others, their belief in you and your work, and the value of your ideas. If an employee is so foolish as to misdirect fellow employees to useless or irrelevant content, he is not likely to gain or keep the respect of his peers and superiors. In the long run, persistent, misleading or mischievous meta tagging will have just the opposite effect, creating a pathway to the door.

Conversely, the super meta tagger with astute insights into what people are looking for, and how they are most likely to look for it, will be the valued expert we all need to care for, the one who spoon-feeds us our daily content. Trusted resources rise to the top when they are appropriately tagged, and become bedrock content when revealed through enterprise search on well-managed intranets.

Competition among Search Vendors

Is there any real competition when it comes to enterprise search? Articles like this one in ComputerWorld make good points but also foster the idea that this could be a differentiator for buyers: Yahoo deal puts IBM, Microsoft in enterprise search pickle, by Juan Carlos Perez, August 4, 2009.

I wrote about the IBM launch of the OmniFind suite of search products a couple of years ago with positive comments. The reality ended up being quite different, as I noted later. Among the negatives were three that stand out in my mind. First, free (as in the IBM OmniFind Yahoo no-charge edition) is rarely attractive to serious enterprises looking for a well-supported product. Second, the computing overhead for the free product was significant enough that some SMBs I know of were turned off; the costs of the hardware and support it required offset “free.” Third, my understanding that the search architecture for the free product would provide seamless upgrades to IBM’s other OmniFind products was wrong. Each subsequent product adoption would require the same “rip and replace” that Steve Arnold describes in his report, Beyond Search. It is hard to believe that IBM got much traction out of this offering from the enterprise search market at large. Does anyone know if there was really any head-to-head competition between IBM and other search vendors over this product?

On the other hand, does the Microsoft Express Search offering appeal to enterprises other than the traditional Microsoft shop? If Microsoft Express Search went away, it would probably be replaced by some other Microsoft search variation, with inconvenience to the customer, who would need to rip and replace and would be left on his own to grumble and gripe. What else is new? The same thing would happen with IBM Yahoo OmniFind users, and they would adapt.

I’ve noticed that free and cheap products may become heavily entrenched in the marketplace, but not among organizations likely to upgrade any time soon. Once enterprises get immersed in a complex implementation (and search done well does require that), they won’t budge for a long, long time, even if the solution is less than optimal. By the time they are compelled to upgrade, they are usually so wedded to their vendor that they will accept any reasonable upgrade offer the vendor makes. Seeking competitive options is really difficult for most enterprises to pursue without an overwhelmingly compelling reason.

This additional news item indicates that Microsoft is still trying to get their search strategy straightened out with another new acquisition, Applied Discovery Selects Microsoft FAST for Advanced E-Discovery Document Search. E-discovery is a hot market in legal, life sciences and financial verticals but firms like ISYS, Recommind, Temis, and ZyLab are already doing well in that arena. It will take a lot of effort to displace those leaders, even if Microsoft is the contender. Enterprises are looking for point solutions to business problems, not just large vendors with a boatload of poorly differentiated products. There is plenty of opportunity for specialized vendors without going toe-to-toe with the big folks.

Convergence of Enterprise Search and Text Analytics is Not New

The news item about IBM’s bid for SPSS, together with similar acquisitions by Oracle, SAP and Microsoft, made me think about the predictions of more business intelligence (BI) capabilities being conjoined with enterprise search. But why now, and what is new about pairing search and BI? They have always been complementary, not only for numeric applications but also for text analysis. Another article, by John Harney in KMWorld, referred to the “relatively new technology of text analytics” for analyzing unstructured text. The article is a good summary of some newer tools, but the technology itself has had a long shelf life, too long for reasons I’ll explore later.

Like other topics in this blog, this one requires a readjustment in thinking by technology users. One of the great things about digitizing text was the promise of ways in which it could be parsed, sorted and analyzed. With the heavy adoption of databases specializing in textual as well as numeric and date fields for business applications in the 1960s and 70s, it became much easier for non-technical workers to look at all kinds of data in new ways. Early database applications leveraged their data stores using command languages; the better ones featured statistical analysis and publication-quality report builders. Three that I was familiar with were DRS from ADM, Inc., BASIS from Battelle Columbus Labs and INQUIRE from IBM.

Tools that accompanied database back-ends could extract, slice and dice the database content, including very large text fields, to report word counts, phrase counts (breaking on any delimiter), transaction counts, and relationships among data elements across associated record types; they could also create relationships on the fly, report expert activity and working documents, and describe the distribution of resources. These are just a few examples of how new content assets could be created for export in minutes. In particular, a sort command in DRS had histogram controls that were invaluable to my clients managing corporate document and records collections, news clippings files, photographs, patents, etc. They could evaluate their collections by topic, date range, distribution, source, and so on, at any time.
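The kinds of reports described above, word counts across text fields and distributions over a collection, are easy to sketch with modern tools. The records below are invented for illustration and stand in for the sort of document and records collections my clients managed; this is not the DRS command language itself.

```python
from collections import Counter

# Invented records standing in for a corporate document/records collection.
records = [
    {"year": 1968, "topic": "patents", "text": "filing status of pending patents"},
    {"year": 1969, "topic": "patents", "text": "patents granted and patents pending"},
    {"year": 1969, "topic": "news",    "text": "clippings on corporate mergers"},
]

# Word counts across all text fields: one of the report types listed above.
word_counts = Counter(word for r in records for word in r["text"].split())

# Distribution of the collection by year: a simple histogram, in the spirit
# of the DRS sort command's histogram controls.
by_year = Counter(r["year"] for r in records)

print(word_counts["patents"], dict(by_year))
```

The point is not the three lines of code but the abstraction they require: seeing stored records as raw material for new reports rather than as isolated data-entry transactions.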

So, the ability existed years ago to connect data structures and use a command language to formulate new data models that informed and elucidated how information was being used in the organization, or to illustrate where there were holes in topics related to business initiatives. What were the barriers to widespread adoption? Upon reflection, I came to realize that extracting meaningful content from a database in new and innovative formats requires a level of abstract thinking for which most employees are not well trained. Putting descriptive data into a database via a screen form, then performing a transaction on the object of that data on another form, and then adding more data about another similar but different object are isolated steps in the database user’s experience and memory. The typical user is not trained to think about how the pieces of data might be connected in the database, and therefore is not likely to form new ideas about how it can all be extracted in a report with new information about the content. There is a level of abstraction that eludes most workers whose jobs consist of many compartmentalized tasks.

It was exciting to encounter prospects who really grasped the power of these tools and were eager to push the limits of the command language and reporting applications, but they were scarce. It turned out that our greatest use came in applying text analytics to the extraction of valuable information from our customer support database. A rigorously disciplined staff populated it after every support call with not only demographic information about the nature of the call, linked to a customer record created at first contact during the sales process (with appropriate updates along the way in the procurement process), but also a textual description of the entire transaction. Over time this database was linked to a “wish list” database and another “fixes” database, and the entire networked structure provided extremely valuable reports that guided both development work and documentation production. We also issued weekly summary reports to the entire staff so everyone was kept informed about product conditions and customer relationships. The reporting tools provided transparency to all staff about company activity and enabled an early version of “social search collaboration.”

Current text analytics products have significantly more algorithmic horsepower than the old command languages. But making the most of their potential, and transforming them into utilities that any knowledge worker can leverage, will remain a challenge for vendors in the face of poor abstract reasoning among much of the work force. The tools have improved, but maybe not in all the ways needed for widespread adoption. Workers should not have to depend on IT folks to create the unique analysis report that reveals a pattern or uncovers product flaws described by multiple customers. We expect workers to multitask, have many aptitudes and skills, and be self-servicing in so many aspects of their work, but the tools too often fall short of letting them flourish. I’m putting in a big plug for text analytics for the masses, soon, so that enterprise search begins to deliver more than personalized lists of results for one person at a time. Give more reporting power to the user.

Searching Email in the Enterprise

Last week I wrote about “personalized search,” and then a chance encounter at a meeting triggered a new awareness of business behavior that makes my own personalized search a lot different from what might work for others. A fellow introduced himself as the founder of a start-up with a product for searching email. He explained that countless nuggets of valuable information reside in email and will never be found without a product like the one his company had developed. I asked if it only retrieved emails resident in an email application like Outlook; he looked confused and said “yes.” I commented that I leave very little content in my email application; instead I save anything with information of value in the appropriate file folders, alongside other documents of different formats on the same topic. If an attachment is substantive, I may create a record with more metadata in my content management database so that I can use that application’s search engine to find information germane to the projects I work on. He walked away with no comment, so I have no idea what he was thinking.

It did start me thinking about the realities of how individuals dispose of, store, categorize and manage their work related documents. My own process goes like this. My work content falls into four broad categories: products and vendors, client organizations and business contacts, topics of interest, and local infrastructure related materials. When material is not purposed for a particular project or client but may be useful for a future activity, it gets a metadata record in the database and is hyperlinked to the full-text. The same goes for useful content out on the Web.

When it comes to email, I discipline myself to dispose of every email into its appropriate folder as soon as I can. Sometimes this involves two emails, the original and my response. When the format is important I save it in the *.mht format (it used to be *.htm until I switched to Office 2007 and realized that doing so created a folder for every file saved); otherwise, I save content in *.txt format. I rename every email to include a meaningful description of topic, sender and date so that I can identify the appropriate email when viewing a folder. If there is an attachment, it also gets an appropriate title and date, is stored in its native format, and the associated email gets “cover” in its file name; this associates the email with the attachment. The only email saved in Outlook personal folders is current activity where lots of back-and-forth is likely until a project concludes. Then it is disposed of by deleting, or filed with the project folders as described above. This is personal governance that takes work. Sometimes I hit a wall and fall behind on the filtering and disposing, but I keep at it because it pays off in the long term.
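The naming discipline above can even be sketched as a small helper. The exact fields, separators, and the sample values are illustrative only, not my actual scheme, but they show how topic, sender, date, and the “cover” marker combine into one descriptive file name.

```python
import re
from datetime import date

def email_filename(topic: str, sender: str, sent: date, is_cover: bool = False) -> str:
    """Build a descriptive file name from topic, sender, and date, with an
    optional 'cover' marker for an email that accompanies an attachment."""
    def slug(s: str) -> str:
        # Keep letters, digits and spaces; collapse whitespace runs to hyphens.
        return re.sub(r"\s+", "-", re.sub(r"[^\w\s]", "", s).strip())

    parts = [slug(topic), slug(sender), sent.isoformat()]
    if is_cover:
        parts.append("cover")
    return "_".join(parts) + ".txt"

# A hypothetical cover email for an attachment received in March 2009.
print(email_filename("FAST pricing question", "J. Smith", date(2009, 3, 12), is_cover=True))
```

A consistent, sortable name like this is what makes a folder browsable a year later without any search engine at all.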

So, why not relax, leave it all in Outlook, and let a search engine do the retrieval? Experience has revealed that most emails are labeled so poorly by senders, and their content is so cryptic, that expecting a search engine to retrieve them in a particular context or with the correct relevance would be unrealistic. I know this from having to preview dozens of emails stored in folders for active projects. I have decided to give myself the peace of mind that when the crunch is on, and I really need to go to that vendor file and retrieve what they sent me in March of last year, I can get it quickly in a way that no search engine could ever match. Do you realize how much correspondence you receive from business contacts using a “gmail” account, with no contact information in the body revealing their organization, signed with a nickname like “Bob,” and containing messages like “we’re releasing the new version in four weeks,” or just a link to an article on the web with “thought this would interest you”?

I did not have a chance to learn whether my new business acquaintance had any sense of the amount of competition he faces in email search, or what differentiator makes a compelling case for a search product that only searches email, or what happens to his product when Microsoft finally gets FAST search bundled to work with all Office products. Or perhaps the rest of the world really is storing all its content in Outlook. Is this true? If so, he may have a winner.


© 2024 The Gilbane Advisor
