Curated for content, computing, and digital experience professionals

Month: October 2008 (Page 1 of 3)

CM Pros: Forum, Summit, and Board Nominations

CM Pros (the Content Management Professionals Association) has replaced its listserv with a new forum. Even non-members will be able to view much of the content; members can contribute.

They also announced that the early registration discount for the Summit ends November 4th, as it does for our conference.

In addition, they have opened nominations for the CM Pros Board of Directors:

CM Pros seeks enthusiastic candidates to run for three open seats on the Board of Directors. To qualify as a candidate, you must be a CM Pros member in good standing. Nominations open soon and voting is scheduled for December. Consider nominating yourself or someone else you believe would make a great candidate. If you are passionate about content management, this is your opportunity to contribute to and gain from the continued growth of the profession and the organization.

http://www.cmprofessionals.org/

MadCap Unveils DITA Product Roadmap

MadCap Software announced its roadmap for supporting the Darwin Information Typing Architecture (DITA) standard. With MadCap, authors will have a complete suite of tools for creating, managing, translating, and publishing DITA content. The products will use MadCap's XML editor, which provides a graphical user interface for creating full-featured documentation while hiding the XML being generated underneath.

In the first phase of its DITA initiative, MadCap Software will add DITA support to four products: MadCap Flare, MadCap Blaze, MadCap Analyzer for reporting, and MadCap Lingo. With MadCap Flare and Blaze, authors will be able to import DITA projects and topics as raw XML content and, using the XML editor, change the style sheets to get the desired look and structure. Authors will then have the option to publish the output as DITA content; as print formats, such as Microsoft Word, DOCX and XPS, or Adobe FrameMaker, PDF and AIR; or as a range of HTML and XHTML online formats. MadCap Analyzer will work directly with DITA topics and projects to allow authors to analyze and report on the content. Similarly, MadCap Lingo will import data directly from DITA topics and projects so that it can be translated. The translated material can be published as DITA content or exported to a Flare or Blaze project.

In the second phase, MadCap will enable authors to natively create and edit DITA topics in Flare and Blaze, as well as in MadCap X-Edit, MadCap's software family for creating short documents, contributing content to other documents, and reviewing content. Like Flare and Blaze, X-Edit will also support importing and publishing DITA information.

In the third phase, MadCap will add DITA support to its forthcoming MadCap Team Server, making it possible to manage and share DITA content across teams and projects, as well as to schedule DITA publishing. http://www.madcapsoftware.com

Multi-channel Publishing: Can anyone do it?

By David Lipsey, Managing Director, Entertainment & Media, FTI

Can any organization deliver customized content to its customers – in print, on the Web, in rich applications, in social networks, or to wireless media? To make matters more challenging, what if your customers are two- to five-year-olds? Sesame Workshop recently had to address this test to keep its brand relevant to precocious preschoolers. In fact, this non-profit organization behind Sesame Street took the bold view that multi-channel publishing is the future of the Workshop, and recognized that online will eventually become its primary channel of distribution. At the upcoming Gilbane Boston Conference, I will moderate a panel of multi-channel publishing experts, including the VP in charge of Sesame Workshop's internet initiative. We will cover the latest in content delivery, opportunities to serve more users and more applications, and insights to show that yes, almost anyone can do it. Please join me, Joe Bachana of DPCI (an industry leader in his own right), and the ever-innovative O'Reilly Press for an instructive and enlightening discussion that will get you mulling over ideas for enhancing your customers' brand experience.

IBM Announces New Enterprise Content Management (ECM) Solutions

IBM announced new Enterprise Content Management (ECM) solutions designed to help organizations achieve greater business agility and workplace effectiveness. Using a services-oriented environment, clients can now deploy solution applications within days instead of months. Key enhancements to the IBM Agile ECM software portfolio include:

IBM FileNet P8 4.5 is a unified ECM platform that combines content with Business Process Management (BPM) and compliance capabilities.

IBM FileNet Business Process Manager 4.5 is an offering for managing content-centric business processes. It supports business process modeling and simulation, promotes collaboration between business users and IT, and provides tools for agile ECM application development using Smart SOA and Web 2.0 technologies such as mashups.

IBM FileNet Content Manager 4.5 is designed to provide clients with a scalable, single content catalog that embeds IBM's content-centric BPM and compliance capabilities into an ECM platform operating on content in multiple repositories. FileNet Content Manager integrates with Lotus Quickr, Microsoft Office 2007, and Microsoft SharePoint, and manages all types of digitized content across multiple platforms, databases, and applications. The new offering provides active content capabilities that allow content in CM8 to participate in IBM FileNet BPM 4.5 processes.

IBM Content Manager OnDemand (CMOD) 8.4.1 provides enterprise report management. The latest CMOD offering includes integration with IBM FileNet P8 for federated records management and BPM applications.

In addition, the new IBM ECM products use the same technology as IBM's recently released compliance and discovery offerings. IBM's records management offerings provide a prescriptive set of deployment and management practices and tools to reduce deployment complexity and mitigate the risks associated with shortages of industry skills and knowledge.
IBM’s eDiscovery products help clients take the cost out of electronic discovery management. IBM FileNet P8 integration with IBM Content Analyzer enables organizations to increase the return on their enterprise content investment by analyzing unstructured content together with structured data to gain valuable information. http://www.ibm.com

MultiCorpora Unveils MultiTrans 4.4

MultiCorpora announced the newest version of MultiTrans. Version 4.4 of MultiTrans delivers WordAlign technology, which allows users to instantly retrieve translated terminology from previously translated documents. This advancement in language technology was made possible through collaborative development with the National Research Council of Canada. The new version also enables components of machine translation to be integrated into the software suite, offering additional translation options for organizations that consider machine translation part of their business model. MultiCorpora has also leveraged Oracle technology to recycle translations from over 250 file formats and to shorten file conversion times. These new MultiTrans features dovetail with the turn-key, fully integrated workflow processes released in version 4.3 of MultiTrans (2007). http://www.multicorpora.com

When We Are Missing Good Metadata in Enterprise Search

This blog has not focused on non-profit institutions (e.g., museums and historical societies) as enterprises, but they are repositories of an extraordinary wealth of information. Over the past few weeks I've been trying, with mixed results, to get a feel for the accessibility of this content through the public Web sites of these organizations. My queries leave me with a keen sense of why search on company intranets also fails.

Most sizable non-profits want their collections of content and other information assets exposed to the public. But each department manages its own content collections with software that is unique to its specific professional methods and practices. In the corporate world the mix will include human resources (HR), enterprise resource planning (ERP) systems, customer relationship management (CRM), R&D document management systems, and collaboration tools. Many corporations have, or "had," library systems that reflected a mix of internally published reports and scholarly collections supporting R&D and special areas such as competitive intelligence. Corporations struggle constantly with federating all this content in a single search system.

Non-profit organizations have similarly disparate systems constructed for their own domains, whether museums or research institutions. One area the corporate and non-profit sectors share is libraries, which operate with software whose interfaces hearken back to designs of the late 1980s and '90s. Another by-product of that era is the catalog record in MARC, a format devised by the Library of Congress for the electronic exchange of records between library systems; it was never intended to be the format for retrieval. It is similar to the metadata in content management systems but is an order of magnitude more complex and arcane to the typical searcher. Only librarians and scholars really understand the most effective ways to search most library systems; therein lies the "public access" problem. In a corporation, a librarian often does the searching.
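To make the arcaneness concrete, here is a small illustrative sketch (the record and its field values are invented) contrasting a MARC-style catalog record, keyed by numeric tags and single-letter subfield codes, with the plain-label view a public searcher would expect:

```python
# A MARC-style record addresses fields by numeric tag and subfield code:
# 245 = title statement (subfields a/b), 650 = topical subject heading
# (subfield a = topic, z = geographic subdivision). Contents invented.
marc_record = {
    "245": {"a": "Steam :", "b": "the untold story of power generation."},
    "650": [{"a": "Electric power production", "z": "United States"}],
}

# The same data flattened into the plain labels a site visitor would expect.
friendly = {
    "title": marc_record["245"]["a"] + " " + marc_record["245"]["b"],
    "subjects": [f["a"] + " -- " + f["z"] for f in marc_record["650"]],
}

print(friendly["title"])     # Steam : the untold story of power generation.
print(friendly["subjects"])  # ['Electric power production -- United States']
```

The numeric-tag view is what library systems store and exchange; the flattened view is what a Web site search engine would need to index.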

However, a visitor to a museum Web site expects to quickly find a topic for which the museum has exhibit materials, printed literature, and other media, all together. This calls for nomenclature that is "public friendly" and reflects the basic "aboutness" of all the materials across museum departments and collections. It is a problem when each library and curatorial department uses a different method of categorizing. Libraries typically use Library of Congress Subject Headings, which are problematic because the topics are so numerous: the vocabulary is designed for the entire population of Library of Congress holdings, not for a special collection of a few tens of thousands of items. Almost no library system searches for words "contained in" the subject headings when you browse the Subject index. If I search Subjects for all power generation materials and a heading such as "electric power generation" is used, it will not be found, because the look-up mechanism only matches headings that "begin with" power generation.
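The "begins with" limitation is easy to demonstrate. Below is a minimal sketch (with an invented list of headings) of how a prefix-only subject-index lookup misses "electric power generation" while a substring match would find it:

```python
# Hypothetical subject headings as a library subject index might list them.
headings = [
    "Electric power generation",
    "Power generation",
    "Wind power generation",
    "Power resources",
]

query = "power generation"

# Typical library "browse" lookup: only headings that START with the query.
begins_with = [h for h in headings if h.lower().startswith(query)]

# A "contains" lookup, which most library systems do not offer for subjects.
contains = [h for h in headings if query in h.lower()]

print(begins_with)  # ['Power generation']
print(contains)     # also finds 'Electric power generation' and 'Wind power generation'
```

The prefix lookup returns one heading; the substring lookup returns three, which is what the visitor searching for "all power generation materials" actually wanted.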

Let's cut to the chase: mountains of metadata in the form of library cataloging are locked inside library systems within non-profit institutions. They are not reached by the search box on a museum's Web site because they are not accessible to most "enterprise" or "web site" search engines. To be truly thorough, a separate, more complex search must be run in the library system itself.

We have a big problem if we are to somehow elevate library collections to the same level of importance as the rest of a museum’s collections and integrate the two. Bigger still is the challenge of getting everything indexed with a normalized vocabulary for the comfort of all audiences. This is something that takes thought and coordination among professionals of diverse competencies. It will not be solved easily but it must be done for institutions to thrive and satisfy all their constituents. Here we have yet another example of where enterprise search will fail to satisfy, not because the search engine is broken but because the underlying data is inappropriately packaged for indexes to work as expected. Yet again, we come to the realization that we need people to recognize and fix the problem.

Gilbane Speaks on Multilingualism

Readers of this content globalization blog will be interested in hearing about Frank’s adventures in Finland this week at the Kites Symposium. Check out the entry on our main blog. About Kites:

The Kites Association develops and promotes multilingual communication, multicultural interaction, and their technical content management to improve the competitiveness of Finnish business and public administration.

Multilingualism and Information Technology

This is the title of the presentation I was asked to give at the Kites Symposium on Multilingual Communication and Content Management in Finland this week. The main point I will make is that multilingual content will only become easily and widely available when multilingual technology is deeply integrated into information technologies. I doubt anyone will consider this controversial, but both market demand and the technology have reached a point where companies are looking for the slow, steady growth to accelerate. Although this demand is naturally higher in Europe, the potential for reaching new markets, or deeper into existing ones, ensures that even small to mid-size U.S. companies will look to incorporate multilingual technologies as soon as the cost and ease of doing so allow (abstract appended below). For more on how companies are thinking about this, see the recent report by our Content Globalization practice, Multilingual Communications as a Business Imperative: Why Organizations Need to Optimize the Global Content Value Chain.

As Leonor says, machine translation, which has been around for years, is going to play a large role in multilingual applications in spite of its limited capabilities. For example, you may have noticed the Google Translate feature at the top of this page and a couple of our other blogs. It was free, took no more than five minutes to install, and is very useful – try it out.

Here is the abstract for my presentation:

Language technologies are becoming integral to content and information technologies. This is a slow process, but inevitable. There is no question about the requirement for multilingual functionality. Those who might have thought or hoped that we would become a monolingual world in the foreseeable future must re-adjust their view when looking at behavior (good and bad) across the globe today. Even in the U.S., where most of the population has always had a narrow view of language, organizations are awakening to the need for multilingual capability. This awakening is sure to continue because of global commercial opportunities. And because of the inexpensive global access provided by the Web, multilingual requirements are increasingly important for even very small businesses. Meeting the full market demand for multilingual requirements at the scale necessary won't be possible without multilingual technologies becoming an integral component of mainstream information technologies.

While language technologies are not new and processes for managing translation and localization are well established, there is still much to learn about how to integrate language and other information technologies. First, the number of organizations, and people within organizations, with deep experience in translation processes and technologies is still relatively small. Second, there is fragmentation in the supplier market, within customer organizations, and along the "Global Content Value Chain," all of which contributes to slower growth. Third, development of all information technologies continues to accelerate, challenging even forward-thinking organizations with large IT budgets.

Because of the central importance of multilingualism, all organizations need to understand, as much as possible, which language technologies are being used today and how, whether and how they are integrated with other technologies and applications, and how and when emerging language and information technologies will affect commercial and information dissemination strategies.

The information technologies most immediately relevant to multilingual applications are content technologies, including authoring, editing, publishing, search, and content management. Recent research on the use of language and content technologies by organizations with deep experience using both kinds of technologies reveals that there is insufficient integration and interoperability across authoring, content management, localization/translation, and publishing. Much can be learned from analyzing how some organizations have successfully dealt with this constraint.

Language and semantic technologies continue to improve, both organically because of a renewed interest in their possibilities, and because of increases in readily available computing power. In addition to small expert niche companies, very large developer organizations such as Google and Microsoft are investing heavily in language technologies. Machine translation is one example, and one that is increasingly seen as having a serious role to play in many, if not all, translation applications. However, to fully achieve pervasive multilingual capability, technology integration needs to progress from the integration of individual software applications to incorporation into large mainstream enterprise applications, widely deployed client tools, and software infrastructures.

Technology integration is not the only barrier to market growth. Yet, as more of these technologies are integrated, it will become easier to implement multilingual solutions; they will be less costly and easier to use, and procurement will be simplified.
