
Month: May 2007

Will Steve Arnold Scare IT Into Taking Search in the Enterprise Seriously?

Steve Arnold of ArnoldIT struck twice in a big way last week, once as a contributor to the Bear, Stearns & Co. research report on Google and once as a principal speaker at Enterprise Search in New York. I’ve read a copy of the Bear Stearns report, which contains information that should make IT people pay close attention to how they manage searchable enterprise content. And I can verify that Larry Dignan’s blog summary of Steve’s New York speech sounds like vintage Arnold: to the point and right on target. Steve, not for the first time, is making a point that analysts and other search experts routinely make about the lack of serious investment in the infrastructure that makes content valuable by enhancing its searchability.

First is the Bear Stearns report, summarized for the benefit of government IT folks, with admonitions about how to act on its technical guidance, in this article by Joab Jackson in GCN. The report’s appearance in the same week as Microsoft’s acquisition of aQuantive is newsworthy in itself. Google is really upping the ante with its plans to change the rules for posting content results for Internet searches. If Webmasters actually begin to do more sophisticated content preparation to leverage what Google is calling its Programmable Search Engine (PSE), then results using Google search will continue to be several steps ahead of what Microsoft is currently rolling out. In other words, while Microsoft is making its most expensive acquisition to tweak Internet searching in one area, Google is investing its capital in its own IP development to make search richer in another. Experience with large software companies tells me that IP strategically developed to be totally in sync with existing products has a much better chance of quick success in the marketplace than IP acquired to play catch-up. So, even though Microsoft, in an acquiring mode, may find IP to buy in the semantic search space (and there is a lot out there that hasn’t been commercialized), absorbing and integrating it in time to head off this Google initiative is a tough proposition. I’m with Bear Stearns’ guidance on this one.

OK, on to Arnold’s comments at Enterprise Search, in which he continues a theme meant to jolt IT folks. As already noted, I totally agree that IT in most organizations is loath to call on information search professionals who understand the best ways to exploit a search engine for getting good search results. But I am hoping that the economic side of search, Web content management for an organization’s public-facing content, may cause a shift. I am already encountering Web content managers who understand how to make content more findable through good metadata and taxonomy strategies; they have figured out how to make good stuff rise to the top with guidance from outside IT. When salespeople complain that their prospects can’t find the company’s products online, it tends to spur marketing folks to adjust their Web content strategies accordingly.

It may take a while, but my observation is that when employees see search working well on their public sites, they begin to push for equal quality search internally. Now that we have Google paying serious attention to metadata for the purpose of giving search results semantic context, maybe the guys in-house will begin to get it, too.

Thomson Learning: What’s Next?

Earlier this year, I wrote that the announcement that Thomson Learning was for sale was an indictment of the current fundamentals of most learning market segments. From the perspective of Thomson senior management, the decision to divest seems clear-cut. Consider this comparative financial data:

                        Thomson Learning    All Other Thomson Units
  Organic Growth              4.0%                   6.0%
  Adj. EBITDA                24.5%                  29.2%
  Operating Margin           12.9%                  18.9%
  Electronic Revenues        36.0%                  80.0%
  Recurring Revenues         24.0%                  82.0%

(Source: Thomson Q4 investor presentation)

The percentages of electronic and recurring revenues are particularly at odds with CEO Harrington’s goal of integrating Thomson’s content with its customers’ workflows. After examining this data alongside declining unit volumes, growing price resistance, and increased government regulation, one wonders what motivated the private equity firms to pay the lofty multiples described in Thad McIlroy’s excellent post earlier this week.

Perhaps they see the opportunity to create more new products that blend content and technology to add value to the student’s learning experience. Vivid simulations and multimedia can help bring clarity to the explication of complex topics. Linking the appropriate content to problem solving improves student understanding while saving lots of time and frustration. Making texts searchable and providing fresh links to appropriate Internet sites brings life and exploration opportunities to static textbook content.

Transitioning from a reliance on the sale of books and specific ancillary items to an intellectual property licensing model, one based on usage metrics that attributes value to all aspects of the course package (including the many package elements currently provided to faculty at no cost), would enable profound changes to the income statement. Revision cycles could be lengthened, sampling and selling costs reduced, and the percentage of recurring revenue increased substantially.

For several years, the potential of such changes has been obvious to industry executives and observers. Why, then, would the new owners be better able to institute these changes and transitions? The answer is simple: the short-term costs of technology investments, coupled with the transition to a recurring-revenue model, would produce some “difficult quarters” for a publicly traded company. The opportunity to retool and restructure while private could create a company with excellent recurring revenues and better margins when reintroduced to the public markets in a few years.

Should Thomson (and possibly Houghton Mifflin) adopt this strategy, the impact on the rest of the industry could be profound. And if these changes do take place, authors, students, universities, and the publishing companies would eventually all be winners! Here’s hoping that this deal lends impetus to this industry transition.

Mapping Search Requirements

Last week I commented on the richness of the search marketplace. That diversity, however, pressures the enterprise buyer to focus on immediate and critical search needs.

The Enterprise Search Summit is being held in New York this week. Two years ago it was a great place to survey the companies offering search products; I could easily visit every booth and still attend every session in two days. This year, 2007, there are over 40 exhibitors, most offering solutions for highly differentiated enterprise search problems. Few of the offerings will serve the end-to-end needs of a large enterprise, but many would be sufficient for small to medium organizations. The two major search engine categories used to be Web content keyword search and structured search. Now, not only is my attention as an analyst being requested by major vendors offering solutions for different types of search, but new products are being announced weekly. Newcomers describe their products as data mining engines, search and reporting “platforms,” business intelligence (BI) engines, and semantic and ontological search engines. This mix challenges me to determine whether a product really solves a type of enterprise search problem before I pay attention.

You, on the other hand, need to do another type of analysis before considering specific options. Classifying your search needs with a faceted approach will help you narrow the field. Here is a checklist for categorizing what content needs to be found and how:

  • Content types (e.g. HTML pages, PDFs, images)
  • Content repositories (e.g. database applications, content management systems, collaboration applications, file locations)
  • Types of search interfaces and navigation (e.g. simple search box, metadata, taxonomy)
  • Types of search (e.g. keyword, phrase, date, topical navigation)
  • Types of results presentation (e.g. aggregated, federated, normalized, citation)
  • Platforms (e.g. hosted, intranet, desktop)
  • Type of vendor (e.g. search-only, single-purpose application with embedded search, software as a service (SaaS))
  • Amount of content by type
  • Number and type of users by need (personas)

Then use whatever tools and resources are at hand to map the results: who needs what type of content, in what format, and how critical it is to business requirements. Prioritizing the facets produces a multidimensional view of enterprise search requirements. This will go a long way toward narrowing the vendor list and gives you a tool to keep discussions focused.
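To make the mapping concrete, here is a minimal sketch of how such a faceted requirements matrix might be recorded and ranked. The facet names, example requirements, and scoring rule are hypothetical placeholders; substitute your own from the checklist above.

```python
# Minimal sketch of a faceted requirements matrix. The facets, example
# requirements, and scoring rule are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Requirement:
    description: str    # what must be findable
    facets: dict        # facet -> value, drawn from the checklist above
    criticality: int    # 1 (nice to have) .. 5 (business critical)
    users: int          # number of users/personas who need it

def priority(req: Requirement) -> int:
    """Simple priority score: criticality weighted by audience size."""
    return req.criticality * req.users

requirements = [
    Requirement("Keyword search of public HTML pages",
                {"content type": "HTML", "platform": "hosted"}, 5, 2000),
    Requirement("Metadata search of PDFs in the content management system",
                {"content type": "PDF", "repository": "CMS"}, 4, 150),
    Requirement("Desktop search of local files",
                {"platform": "desktop"}, 2, 500),
]

# Rank the requirements so vendor discussions start with what matters most.
for req in sorted(requirements, key=priority, reverse=True):
    print(f"{priority(req):>6}  {req.description}")
```

Even a simple weighting like this forces the conversation about which facets actually drive the purchase, which is the point of the exercise.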

There are terrific options in the marketplace, and they will only become richer in features and complexity. Your job is to find the most appropriate solution for the business search problem you need to solve today, at a cost that matches your budget. You also want a product that can be implemented rapidly, with immediate benefit tied to a real business proposition.

MadCap Software and across Systems Integrate Content Creation and Translation

MadCap Software and across Systems announced a strategic partnership to combine technical content creation with advanced translation and localization. Through integrated software from MadCap and across, technical documentation professionals will be able to publish multilingual user manuals, online Help systems, and other corporate content for the international market from a single source. MadCap provides XML-based software for multi-channel publishing, including its product Flare for delivering context-sensitive online Help and print documentation, and Blaze, MadCap’s answer to Adobe’s FrameMaker for publishing large documents, which will launch later this year. MadCap will also announce MadCap Lingo, an XML-based integrated Help authoring tool and translation environment with complete Unicode support for all left-to-right languages. Through the partnership, the two companies will enable integration between Lingo, Flare, and Blaze and the across Language Server, a comprehensive corporate platform for the entire translation process. Providing a centralized translation memory and terminology system, the Language Server controls the whole translation workflow and networks all of the systems and people involved. From project manager to translator and proofreader, all participants work in a consistent client/server-based environment. http://www.across.net/, http://www.madcapsoftware.com/
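For readers unfamiliar with the centerpiece of such platforms, here is a minimal sketch of what a translation memory does: store previously translated segments and reuse them, with fuzzy matching for near-duplicates. This illustrates the concept only, not across’s actual API, and the example strings are invented.

```python
# Minimal sketch of a translation memory: previously translated segments
# are stored and reused, with fuzzy matching for near-duplicates.
# Concept illustration only -- not across's actual API.

import difflib

class TranslationMemory:
    def __init__(self):
        self.segments = {}  # source segment -> stored translation

    def add(self, source, target):
        self.segments[source] = target

    def lookup(self, source, threshold=0.85):
        """Return (translation, match ratio), or None if no close match."""
        if source in self.segments:                      # exact match
            return self.segments[source], 1.0
        close = difflib.get_close_matches(source, self.segments,
                                          n=1, cutoff=threshold)
        if close:                                        # fuzzy match
            ratio = difflib.SequenceMatcher(None, source, close[0]).ratio()
            return self.segments[close[0]], ratio
        return None                        # new segment: needs a human

tm = TranslationMemory()
tm.add("Click Save to keep your changes.",
       "Klicken Sie auf Speichern, um Ihre Änderungen zu speichern.")
print(tm.lookup("Click Save to keep your change."))  # fuzzy hit
```

The economics follow from the lookup: every exact or fuzzy hit is a segment a translator does not have to redo from scratch.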

A New eCollegey in Higher Ed Publishing?

Pearson made an interesting acquisition yesterday. Its purchase of eCollege continues its corporate foray into Student Information Systems and Course Management. Last year, Pearson acquired PowerSchool and Chancery Software, yielding a very strong position in Student Information Systems for the K-12 market. Clearly, Pearson likes these learning infrastructure markets, for several good reasons:
1. At present, they seem to be solid businesses with only a few competitors, and they are poised to grow at rates exceeding Pearson’s traditional textbook businesses.
2. The acquired customer base brings Pearson many new customers and moves it closer to the students (and parents) who use its instructional products. The information about these students, and the ability to reach them with additional product offerings, is not to be underestimated in this digital world.
3. As the range of course materials (content modules, learning software, simulations, educational websites, and so on) continues to grow, the value of the course infrastructure technology will increase as well, providing a strategic advantage for integration with Pearson’s broad range of course materials.
Last week at the Digital Book conference in New York, several speakers agreed that college textbook publishers will look more and more like software publishers over the next ten years. The reasons for this transition center on using technology to: (1) deliver appropriate content to the student when it is needed to solve homework problems and prepare for tests; (2) integrate traditional material with innovative simulations and learning modules available from communities like MERLOT; and (3) add life to static published content by enabling further exploration via web links and domain-specific search engines and content repositories.
Pearson is wise to acquire successful software and technology companies to gain pockets of technical expertise that would take many years to develop within the company. While there may be some culture clashes, this strategy should serve Pearson well and position it to maintain or expand its leadership in educational publishing.

Thomson Learning Sold for Big Bucks!

Well, Thomson Learning has finally been sold (subject to rote “due diligence”) to private equity firms. Everyone figured it would be private equity firms that made the purchase, partly because these firms are buying just about everything these days except your old underwear, and also because the higher education textbook market is so concentrated that even George Bush’s “I’ve never seen a merger I didn’t like” administration would have had trouble signing off on a sale to another publisher. Too many children would have been left behind.

The big surprise was the price: a whopping $7.75 billion, over three times the division’s annual sales and apparently roughly 15 times cash flow, as one article on the deal reported. The same article points out that “by comparison, the average cash flow multiple paid in leveraged buyouts of $500 million or more last year was around eight times cash flow, with media deals typically in the low-double digits, according to buyout industry statistics.” The price is also some 50% more than company officials originally stated they thought they could fob the division off for.

Would we say there’s a little too much cash out there looking for comfy homes? Or would we wonder why this Thomson division, much maligned by management when the sale was first announced, is suddenly as valuable as DaimlerChrysler? (http://www.nytimes.com/2007/05/15/automobiles/15chrysler-web.html)

I guess we’re stuck with Thomson’s overriding stated view that higher education just wasn’t getting with the program fast enough in an online, electronic sort of way, and so the division had to be jettisoned. (Although Thomson CEO Richard J. Harrington admitted after the sale announcement that the company had no complaints about the educational unit’s financial performance. Textbooks are, by and large, a high-margin product.)

On the other hand, memory reminds us that Thomson was previously just as fiercely determined to get the heck out of the news business, and now it’s about to merge with Reuters.

What I’m most cognizant of is that Thomson shares had been languishing in the mid-$30s for years before the announcement of the bold move to get rid of textbooks. Now those shares are in the $40s. A lot of senior Thomson executives have made a whole lot of cash from these recent maneuvers (not to mention the Thomson family). No senior Thomson executive was left behind (as for the operating staff, it is not polite to ask).

SiberLogic Announces SiberSafe DITA Edition for FrameMaker 7.2 Application Pack for DITA

SiberLogic announced the integration of SiberSafe DITA Edition with Adobe’s FrameMaker 7.2 Application Pack for DITA. With the Application Pack configured, SiberSafe automatically adjusts its integrated menu options to deliver sophisticated DITA content management from within the familiar FrameMaker environment. FrameMaker users can open a document and retrieve topic-based content along with associated dependencies such as xref targets, link targets, conref targets, and referenced images. Content reuse is streamlined and straightforward via SiberSafe’s support for content references (conrefs). And SiberLogic’s functionality is available directly from the FrameMaker menu: authoring and review assignments are automatically distributed via workflow email; each contributor has a list of tasks and knows how and when to execute them; and managers can keep track of progress and resource allocation. With additional features such as collaborative review, task analysis, and translation management, the FrameMaker/SiberSafe DITA integration aims to reduce the complexity of DITA-based technical documentation processes to a single integrated platform. http://www.siberlogic.com/framemaker/
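SiberSafe resolves these references internally, but for readers new to DITA, here is a minimal sketch of what conref resolution involves: an element carrying a conref attribute is replaced by the element it points to in another topic file. The file names and IDs below are hypothetical, and real systems like SiberSafe also handle xrefs, link targets, and referenced images.

```python
# Minimal sketch of DITA conref resolution: an element such as
#   <note conref="shared.dita#shared_topic/safety_note"/>
# is replaced by the element it points to in another topic file.
# File names and IDs are hypothetical.

import copy
import xml.etree.ElementTree as ET

def resolve_conrefs(topic_path):
    tree = ET.parse(topic_path)
    for parent in tree.iter():
        for i, child in enumerate(list(parent)):
            ref = child.get("conref")
            if not ref:
                continue
            # conref syntax: file.dita#topic_id/element_id
            file_part, _, fragment = ref.partition("#")
            topic_id, _, element_id = fragment.partition("/")
            root = ET.parse(file_part).getroot()
            topic = (root if root.get("id") == topic_id
                     else root.find(f".//*[@id='{topic_id}']"))
            if topic is None:
                continue
            target = topic.find(f".//*[@id='{element_id}']")
            if target is not None:
                # Pull the reused content into this topic.
                parent[i] = copy.deepcopy(target)
    return tree

# Usage with hypothetical files:
# resolve_conrefs("install_guide.dita").write("install_guide_resolved.dita")
```

The value of a CMS here is doing this tracking continuously: when the referenced element changes, every topic that conrefs it is known and can be flagged for review.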
