The Gilbane Advisor

Curated for content, computing, and digital experience professionals


Reflecting on BI Shifts and Google's Big Moves in the Search Market

Sometimes it pays to be behind in reading industry news. The big news last week was Google's new patents and plans to enhance search results using metadata and taxonomy embedded in content. This was followed by the news that Business Objects plans to acquire Inxight, a Xerox PARC spin-off whose product line includes terrific data visualization tools that are highly valued in the business intelligence (BI) marketplace.

I had planned to write about the convergence of the enterprise search and BI markets this week, until I caught up with industry news from April and early May. Catching up triggered a couple of insights into these more recent announcements.

In April an InformationWeek article noted that Google has, uncharacteristically, contributed two significant enhancements to MySQL: improved replication procedures across multiple systems and expanded mirroring. The writer, Babcock, also noted that "Google doesn't use MySQL in search," but YouTube does. I believe Google will become more tied to MySQL as it begins to deploy new search algorithms that take advantage of metadata and taxonomies. These need good text database structures to be managed efficiently and leveraged effectively to produce quality results at the scale Google operates. Up to now, Google's results presentation has been shaped more by transaction processing than by semantic and textual context. Look for more Google enhancements to MySQL to help it manage all that meaningful text. The open source question is whether Google will release further enhancements for all to use. A lot of enterprises would benefit from being able to depend on continual enhancements to MySQL so they could (continue to) use it instead of Oracle or Microsoft SQL Server as the database back-end for text searching.
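As an aside on that last point, here is a minimal sketch of what using MySQL as a text-search back-end with taxonomy metadata can look like. It is purely illustrative and not drawn from the article: the docs table, its columns, and the query terms are hypothetical, and it assumes the mysql-connector-python package and MySQL's built-in FULLTEXT indexing (MyISAM-only in that era).

```python
# Minimal sketch: MySQL as a text-search back-end with taxonomy metadata.
# Assumes mysql-connector-python and a hypothetical `docs` table such as:
#
#   CREATE TABLE docs (
#       id        INT PRIMARY KEY AUTO_INCREMENT,
#       title     VARCHAR(255),
#       body      TEXT,
#       category  VARCHAR(100),          -- taxonomy node assigned to the doc
#       FULLTEXT KEY ft_docs (title, body)
#   ) ENGINE=MyISAM;                     -- FULLTEXT required MyISAM at the time

import mysql.connector


def search(term, category=None):
    """Full-text search, optionally narrowed by a taxonomy category."""
    conn = mysql.connector.connect(
        host="localhost", user="search", password="secret", database="content"
    )
    cur = conn.cursor()
    sql = (
        "SELECT id, title, category, "
        "       MATCH(title, body) AGAINST (%s) AS score "
        "FROM docs WHERE MATCH(title, body) AGAINST (%s)"
    )
    params = [term, term]
    if category:                          # narrow results by taxonomy facet
        sql += " AND category = %s"
        params.append(category)
    sql += " ORDER BY score DESC LIMIT 20"
    cur.execute(sql, params)
    rows = cur.fetchall()
    conn.close()
    return rows


if __name__ == "__main__":
    for doc_id, title, category, score in search("enterprise search", "Technology"):
        print(doc_id, title, category, round(score, 3))
```

The metadata column is what lets relevance ranking coexist with the kind of taxonomy-driven narrowing the new Google patents describe, which is why the underlying text database structures matter.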

The other older news (InformationWeek, May 7th) was that Business Objects was touting "business intelligence for 'all individuals'" with some new offerings. BO's announcement just last week that it plans to acquire Inxight only strengthens its position in this market. Inxight has been on the cusp of BI and enterprise search for several years, and this portends more convergence of products in these growing markets. Twenty-five years ago, when I was selling text software applications, a key differentiator was a strong report-building tool set to support "slicing and dicing" database content in any desired format. Robust, intuitive reporting tools for all enterprise users of content applications are still a dream, but one much closer to reality for the high-end market.

With all the offerings and consolidation in BI and search, the next moves will surely begin to push some combined search/BI offerings to a price point that small and medium businesses (SMBs) can afford. We know that Microsoft sees the opening (InformationWeek, May 14th); let's hope that others do as well.

Adobe Unveils ColdFusion 8 Public Beta

Adobe Systems Incorporated (Nasdaq: ADBE) announced the public beta of Adobe ColdFusion 8 software. ColdFusion 8, designed for developers building dynamic Web sites and Internet applications, addresses day-to-day development challenges to increase developer productivity, integrate with complex enterprise environments, and deliver rich, engaging application experiences for users. The ColdFusion 8 public beta is a feature-complete preview. ColdFusion 8 leverages Adobe Flex technology and Ajax-based components. The new ColdFusion 8 development environment also features advanced Eclipse-based wizards and debugging. The ColdFusion 8 Server Monitor lets developers identify bottlenecks and tune the server for better performance. ColdFusion 8 integrates with a broad range of platforms and systems, including .NET assemblies, Microsoft Windows Vista, and new J2EE servers including JBoss. ColdFusion 8 also delivers significant performance gains over ColdFusion MX 7 and earlier versions of the product. Additionally, ColdFusion 8 applications interact with Adobe PDF documents and forms. The ColdFusion 8 public beta is immediately available at Adobe Labs at http://labs.adobe.com or through Adobe's hosting partner, http://www.hostmysite.com/cf8

Curl Rich Internet Application Platform Adds Macintosh Support with Public Beta of Curl Run Time Environment

Curl, Inc. announced the availability of the public beta version of the Curl Run Time Environment (RTE) for Macintosh. The Curl RTE, a key component of the Curl Rich Internet Application (RIA) Platform, is the engine that executes Curl applications and displays their user interfaces. The Mac Beta release of the RTE is intended for customers looking to run their Curl applications, developed on Windows and Linux, on the Macintosh. The Curl RTE is part of the Curl RIA platform that allows developers to implement complex enterprise Web-based applications. In addition to the RTE, the Curl platform consists of two other main components: the Curl Language, an object-oriented programming language that integrates rich text formatting, GUI layout, and presentation scripting; and the Curl Integrated Development Environment, which includes tools for developing and debugging Curl applications, a Visual Layout Editor, and numerous code examples. The Mac Beta RTE obeys standard Macintosh user interface conventions and supports the full range of features supported by the Curl Windows and Linux RTE products. The Curl RTE can run on PowerPC and Intel Macintoshes running Mac OS X 10.3 and later. The Beta version can execute applications developed for the most recent version of the Curl RIA platform, Version 5.0. The Mac Beta RTE can be downloaded free of charge. http://www.curl.com/

Will Steve Arnold Scare IT Into Taking Search in the Enterprise Seriously?

Steve Arnold of ArnoldIT struck twice in a big way last week, once as a contributor to the Bear, Stearns & Co. research report on Google and once as a principal speaker at the Enterprise Search Summit in New York. I've read a copy of the Bear Stearns report, which contains information that should make IT people pay close attention to how they manage searchable enterprise content. I can verify that this blog summary of Steve's New York speech by Larry Dignan sounds like vintage Arnold: to the point and right on. Steve, not for the first time, is making points that analysts and other search experts routinely make about the lack of serious infrastructure invested in making content valuable by enhancing its searchability.

First is the Bear Stearns report, summarized for the benefit of government IT folks, with admonitions about how to act on its technical guidance, in this article by Joab Jackson in GCN. The report's appearance in the same week as Microsoft's acquisition of aQuantive is newsworthy in itself. Google really ups the ante with its plans to change the rules for posting content results for Internet searches. If Webmasters actually begin to do more sophisticated content preparation to leverage what Google is calling its Programmable Search Engine (PSE), then results using Google search will continue to be several steps ahead of what Microsoft is currently rolling out. In other words, while Microsoft is making its most expensive acquisition to tweak Internet searching in one area, Google is investing its capital in its own IP development to make search richer in another. Experience with large software companies tells me that IP strategically developed to be totally in sync with existing products has a much better chance of quick success in the marketplace than IP acquired to play catch-up. So, even though Microsoft, in an acquiring mode, may find IP to acquire in the semantic search space (and there is a lot out there that hasn't been commercialized), its ability to absorb and integrate it in time to head off this Google initiative is a tough proposition. I'm with Bear Stearns' guidance on this one.

OK, on to Arnold's comments at the Enterprise Search Summit, where he continued a theme meant to jolt IT folks. As already noted, I totally agree that IT in most organizations is loath to call on information search professionals to understand the best ways to exploit search engine adoption and get good search results. But I am hoping that the economic side of search, Web content management for an organization's public-facing content, may cause a shift. I am already seeing Web content managers who are enlightened about how to make content more findable through good metadata and taxonomy strategies. They have figured out how to make the good stuff rise to the top with guidance from outside IT. When sales people complain that their prospects can't find the company's products online, it tends to spur marketing folks to adjust their Web content strategies accordingly.

It may take a while, but my observation is that when employees see search working well on their public sites, they begin to push for equal quality search internally. Now that we have Google paying serious attention to metadata for the purpose of giving search results semantic context, maybe the guys in-house will begin to get it, too.

Thomson Learning: What's Next?

Earlier this year, I wrote that the announcement that Thomson Learning was for sale was an indictment of the current fundamentals of most learning market segments. From the perspective of Thomson senior management, the decision to divest seems clear-cut. Consider this comparative financial data:

                         Thomson Learning    All Other Thomson Units
  Organic Growth               4.0%                   6.0%
  Adjusted EBITDA             24.5%                  29.2%
  Operating Margin            12.9%                  18.9%
  Electronic Revenues         36.0%                  80.0%
  Recurring Revenues          24.0%                  82.0%

(Source: Thomson 4th Quarter Investor Presentation)

The percentages of electronic and recurring revenues are particularly at odds with CEO Harrington's goal of integrating Thomson's content with their customers' workflows. After examining this data alongside declining unit volumes, growing price resistance, and increased government regulation, one wonders what motivated the private equity firms to pay the lofty multiples described in Thad McIlroy's excellent post earlier this week.

Perhaps they see the opportunity to create more new products that blend content and technology to add value to the student's learning experience. Vivid simulations and multimedia can help bring clarity to the explication of complex topics. Linking the appropriate content to problem solving improves student understanding while saving students time and frustration. Making texts searchable and providing fresh links to appropriate Internet sites brings life and exploration opportunities to static textbook content.

Transitioning from a reliance on the sale of books and specific ancillary items to an intellectual property licensing model that is based on usage metrics and attributes value to all aspects of the course package (including the many package elements currently provided to faculty at no cost) would enable profound changes to the income statement. Revision cycles could be lengthened, sampling and selling costs reduced, and the percentage of recurring revenue increased substantially.

For several years, the potential of such changes has been obvious to industry executives and observers. Why, then, would the new owners be better able to institute these changes and transitions? The answer is simple: the short-term costs of technology investments, coupled with the transition to a recurring-revenue model, would produce some "difficult quarters" for a publicly traded company. The opportunity to retool and restructure while private could create a company with excellent recurring revenues and better margins when reintroduced to public markets in a few years.

Should Thomson (and possibly Houghton-Mifflin) adopt this strategy, the impact on the rest of the industry could be profound. And if these changes were to take place, authors, students, universities, and the publishing companies would eventually all be winners. Here's hoping that this deal lends impetus to this industry transition.

Mapping Search Requirements

Last week I commented on the richness of the search marketplace. That very diversity, however, pressures the enterprise buyer to focus on immediate and critical search needs.

The Enterprise Search Summit is being held in New York this week. Two years ago I found it a great place to see the companies offering search products; I could easily see them all and still attend every session in two days. This year, 2007, there were over 40 exhibitors, most offering solutions for highly differentiated enterprise search problems. Few of the offerings will serve the end-to-end needs of a large enterprise, but many would be sufficient for small to medium organizations. The two major search engine categories used to be Web content keyword searching and structured searching. Now, not only is my attention as an analyst being sought by major vendors offering solutions for different types of search, but new products are being announced weekly. Newcomers describe their products as data mining engines, search and reporting "platforms," business intelligence engines, and semantic and ontological search engines. This mix challenges me to determine whether a product really solves a type of enterprise search problem before I pay attention to it.

You, on the other hand, need to do another type of analysis before considering specific options. Classifying search categories using a faceted approach will help you narrow the field. Here is a checklist for categorizing what content needs to be found and how:

  • Content types (e.g. HTML pages, PDFs, images)
  • Content repositories (e.g. database applications, content management systems, collaboration applications, file locations)
  • Types of search interfaces and navigation (e.g. simple search box, metadata, taxonomy)
  • Types of search (e.g. keyword, phrase, date, topical navigation)
  • Types of results presentation (e.g. aggregated, federated, normalized, citation)
  • Platforms (e.g. hosted, intranet, desktop)
  • Type of vendor (e.g. search-only, single-purpose application with embedded search, software as a service / SaaS)
  • Amount of content by type
  • Number and type of users by need (personas)

Then use any tools or resources at hand to map the results: learn who needs what type of content, in what format, and how critical it is to business requirements. Prioritizing the facets produces a multidimensional view of enterprise search requirements. This goes a long way toward narrowing the vendor list and gives you a tool to keep discussions focused.
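One lightweight way to work with such a mapping, offered purely as an illustration rather than a prescribed method, is to record each facet with a weight and score candidate vendors against the facets they cover. The facet names, weights, and vendor names in this minimal Python sketch are all hypothetical placeholders:

```python
# Minimal sketch: prioritize search-requirement facets and score vendors.
# Facet names, weights (1 = nice to have, 5 = critical), and vendors are
# hypothetical placeholders for your own requirements mapping.

facet_weights = {
    "content_types":        5,   # HTML, PDF, images
    "repositories":         5,   # CMS, databases, file shares
    "search_interfaces":    3,   # search box, metadata, taxonomy navigation
    "results_presentation": 4,   # aggregated, federated, normalized
    "platform":             2,   # hosted, intranet, desktop
}

# Which facets each candidate vendor claims to cover (illustrative only).
vendor_coverage = {
    "Vendor A": {"content_types", "repositories", "search_interfaces"},
    "Vendor B": {"content_types", "results_presentation", "platform"},
    "Vendor C": {"repositories", "search_interfaces", "results_presentation"},
}


def score(covered, weights):
    """Sum the weights of the facets a vendor covers."""
    return sum(weights[f] for f in covered if f in weights)


ranked = sorted(
    vendor_coverage.items(),
    key=lambda item: score(item[1], facet_weights),
    reverse=True,
)

for vendor, covered in ranked:
    print(f"{vendor}: {score(covered, facet_weights)} of {sum(facet_weights.values())}")
```

The point is not the arithmetic but the discipline: weighting the facets forces the prioritization discussion to happen before the vendor demos begin.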

There are terrific options in the marketplace, and they will only become richer in features and complexity. Your job is to find the most appropriate solution for the business search problem you need to solve today, at a cost that matches your budget. You also want a product that can be implemented rapidly, with immediate benefit linked to a real business proposition.

MadCap Software and across Systems Integrate Content Creation and Translation

MadCap Software and across Systems announced a strategic partnership to combine technical content creation with advanced translation and localization. Through integrated software from MadCap and across, technical documentation professionals will be able to publish multilingual user manuals, online Help systems, and other corporate content for the international market from a single source. MadCap provides XML-based software for multi-channel publishing, including Flare, for delivering context-sensitive online Help and print documentation, and Blaze, MadCap's answer to Adobe FrameMaker for publishing large documents, which will launch later this year. MadCap will also announce MadCap Lingo, an XML-based integrated Help authoring tool and translation environment that offers complete Unicode support for all left-to-right languages. Through their strategic partnership, the two companies will enable integration between Lingo, Flare, and Blaze and the across Language Server, a comprehensive corporate platform for the entire translation process. Providing a centralized translation memory and terminology system, the Language Server controls the whole translation workflow and networks all the corresponding systems and people involved. From the project manager to the translator and proofreader, all participants work in a consistent client/server-based environment. http://www.across.net/, http://www.madcapsoftware.com/

