Curated for content, computing, and digital experience professionals

Month: September 2008

Socialtext Delivers Socialtext 3.0

Socialtext released Socialtext 3.0, a trio of applications including Socialtext People and Socialtext Dashboard, as well as a major upgrade to its Socialtext Workspace enterprise wiki offering. These products are built on a modular, integrated platform that delivers connected collaboration with context to individuals, workgroups, organizations, and extranet communities. People can discover, create, and use social networks, collaborate in shared workspaces, and work productively with personalized widget-based dashboards. The company also announced Socialtext Signals, a Twitter-style microblogging interface that goes beyond simple “tweets” by integrating both automated and manual updates with social networking context, expanding the company’s business communications offerings for the enterprise.

As with its proven Workspace wiki and weblog product, Socialtext will make all of its offerings available on a hosted ASP as well as an on-premise appliance basis. The entire Socialtext 3.0 trio of products is available immediately on the hosted service, and will be made available to appliance customers starting in October 2008. Socialtext 3.0 profile integration with LDAP or Microsoft Active Directory systems enables rapid population. REST APIs for workspace and profile content are now complemented by a widget architecture and user interface for the creation of enterprise mashups. Productized connectors are available for Microsoft SharePoint and IBM Lotus Connections. You can experience the new release immediately in a free trial.

Machine Translation (Finally) Comes of Age

In our Multilingual Communications as a Business Imperative report, we noted that machine translation (MT) has long been the target of “don’t let this happen to you” jokes throughout the globalization industry. Unpredictable results and poor quality made humor the focus of MT discussions and widespread adoption risky at best.

On the other hand, we also noted that scientists, researchers, and technologists have been determined to unlock MT potential since the 1950s to solve the same core challenges the industry struggles with today: cost savings, speed, and linguist augmentation. Although the infamous report on Languages and Machines from the Automatic Language Processing Advisory Committee (ALPAC) published in 1966 discussed these challenges in some depth (albeit from a U.S. perspective), it sent a resounding message that “there is no emergency in the field of translation.” Research funding suffered; researcher Margaret King described the impact as effectively “killing machine translation research in the States.”

Borrowing from S.E. Hinton, that was then, this is now. Technology advancements and pure computing power have made machine translation not only viable, but also potentially game-changing. A global economy, the volume and velocity of content required to run a global business, and customer expectations are steadily shifting enterprise postures from “not an option” to “help me understand where MT fits.” Case in point: participants in our study identified MT as one of the top three valuable technologies for the future.

There’s lots of game-changing news for our readers to digest.

  • An excellent place to start is with our colleagues at Multilingual Magazine, who dedicated the April-May issue to this very subject. Don Osborn over at the Multidisciplinary Perspectives blog provides an excellent summary, posing the question: “Is there a paradigm shift on machine translation?”
  • Language Weaver predicts a potential $67.5 billion market for digital translation, fueled by MT. CEO Mark Tapling explains why.
  • SYSTRAN, one of the earliest MT software developers, provides research and education here.
  • And finally (for today), there’s no way to deny the Google impact — here’s their FAQ about the beta version of Google Translate. TAUS weighs in on the subject here.

Mary and I will be at Localization World Madison to provide practical advice and best practices for making the enterprise business case for multilingual communications investments as part of a Global Content Value Chain. But we’re also looking forward to the session focused on MT potential, issues, and vendor approaches. The full grid is here. Join us!

CM Pros Summit in Boston

The Content Management Professionals Association (CM Pros) will once again hold its annual Fall Summit in conjunction with Gilbane Boston in December. There are details over on our Events blog which I won’t duplicate here; even better, go right to the source. If you are a member we hope to see you, and if you are not, you can find out about joining on the CM Pros site.

Webinar: New Generation Knowledge Management

Tuesday, October 7th, 2008
11:00am PT / 2:00pm ET

Organizations are faced with critical knowledge management issues including knowledge capture, IP retention, search and discovery, and fostering innovation. The failure to properly address these issues results in companies wasting millions of dollars through inefficient information discovery and poor collaboration techniques. Today’s knowledge management systems must blend social media technologies with enterprise search, access, and discovery tools to give users a 360-degree view of their information assets. This blend is the foundation for new generation knowledge management.
Join moderator Andy Moore, Publisher of KMWorld Magazine, Senior Analyst Leonor Ciarlone, and Phil Green, CTO at Inmagic, for a discussion of perspectives from Gilbane’s report on Collaboration and Social Media 2008, the power of Social Knowledge Networks, and an introduction to Inmagic® Presto.
Space is limited, register here!

MuleSource Integrates Intel XML Software Suite

MuleSource announced a collaboration with Intel Corporation to deliver a new offering that provides off-the-shelf integration between Mule and the Intel XML Software Suite. The new offering, called the Mule Xpack for Intel XML Software Suite, is a set of instructions and Mule extensions that help improve XML processing performance for SOA deployments. By bringing the Intel XML Software Suite to the Mule ESB, the collaboration enhances and offloads XML processing. The Mule Xpack provides Mule integration support for the Intel XML Software Suite, which can be used to support three categories of XML operations:

  • XML parsing: reads XML documents and makes the data available to applications and programming languages for manipulation and processing.
  • XSLT transformation: facilitates efficient XML transformations in a variety of formats and can be applied to a full range of XML documents.
  • XPath evaluation: evaluates an XML Path (XPath) expression over an XML document DOM tree or a derived source instance and returns a node, node set, string, number, or Boolean value.

The Intel XML Software Suite is a software library providing APIs for C++ and Java on Linux and Windows operating systems, delivering high-performance XML processing on industry-standard servers and application environments. Designed to take advantage of the Intel Core microarchitecture, it provides thread-safe and efficient memory utilization, scalable stream-to-stream processing, and large XML file processing capabilities.
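The three operation categories above are standard XML tasks. The Intel suite itself exposes them through C++ and Java APIs, but the parse-then-evaluate workflow can be sketched with Python's standard library; the document and values here are invented purely for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical order document standing in for SOA message traffic.
doc = """<orders>
  <order id="1"><total>19.99</total></order>
  <order id="2"><total>5.00</total></order>
</orders>"""

# XML parsing: turn the document into an in-memory tree.
root = ET.fromstring(doc)

# XPath evaluation (ElementTree supports a limited XPath subset):
totals = [float(t.text) for t in root.findall("./order/total")]
grand_total = round(sum(totals), 2)
print(grand_total)  # 24.99
```

An accelerated library such as the Intel suite performs the same kinds of operations, but offloads and optimizes them for high-volume message traffic.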

Taxonomy, Yes, but for What?

The term taxonomy crept into the search lexicon by stealth and is now firmly entrenched. The very early search engines, circa 1972-73, presented searchers with the retrieval option of selecting content using controlled vocabularies from a standardized thesaurus of terminology in a particular discipline. With no neat graphical navigation tools, searches were crafted on a typewriter-like device, painfully typed in an arcane syntax. A stray hyphen, period, or space would render the query uncomputable, so after deciphering the error message, the searcher would try again. Each minute and each result cost money, so errors were a real expense.

We entered the Web search era bundling content into a directory structure, like the “Yellow Pages,” or organizing query results into “folders” labeled with broad topics. The controlled vocabulary that represented directory topics or folder labels became known as a taxonomic structure, with the early ones at Northern Light and Yahoo crafted by experts with knowledge of the rules of controlled vocabulary, thesaurus development, and maintenance. Google derailed that search model with its simple “search box” requiring only a word or phrase to grab heaps of results. Today we are in a new era. Some people like searching by typing keywords in a box, while others prefer the suggestions of a directory or tree structure. Building taxonomic structures is now serious business beyond e-commerce sites, particularly for search within enterprises, where many employees prefer to navigate the terminology to browse and discover the full scope of what is there.

Navigation is only one of the purposes taxonomies serve in search. Depending on the application domain, the richness of the subject matter, and the scope and depth of topics, these lists can become quite large and complex. The more cross-references (e.g., cell phones USE wireless phones) are embedded in the list, the more likely the searcher’s preferred term will be present. There is a diminishing return, however: if the user has to navigate to a system’s preferred term too often, the entire process of searching becomes unwieldy and is abandoned. On the other hand, if the system automates the smooth transition from one term to another, the richness and complexity of a taxonomy can be an asset.
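The automated transition described above can be sketched very simply: a table of USE cross-references maps a searcher's entry term to the vocabulary's preferred term, so the system follows the reference instead of bouncing the user. Only "cell phones USE wireless phones" comes from the text; the other entries are hypothetical.

```python
# USE cross-references: variant term -> preferred term.
USE = {
    "cell phones": "wireless phones",    # example from the text
    "mobile phones": "wireless phones",  # hypothetical entry
    "automobiles": "cars",               # hypothetical entry
}

def preferred(term: str) -> str:
    """Follow a USE reference silently rather than asking the
    searcher to look up the preferred term by hand."""
    term = term.lower()
    return USE.get(term, term)

print(preferred("Cell Phones"))  # wireless phones
```

The richer the cross-reference table, the more searcher vocabulary lands on a preferred term without any extra navigation.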

In more sophisticated applications of taxonomies, the thesaurus model of relationships becomes a necessity. When a search engine has embedded algorithms that can interpret explicit term relationships, it indexes content according to a taxonomy and all its cross-references; here the taxonomy informs the index engine. This requires substantial maintenance and governance of a much more granular nature than taxonomy for navigation does. To work well, a large corpus of terminology needs to be built to ensure that what the content says and means matches what the searcher expects in results. If a search returns unsatisfactory results because of a poor taxonomy, trust in the search system fails rapidly and the benefits of whatever effort was put into building the taxonomy are lost.
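One way an index engine can be informed by a taxonomy is to post each document under its terms and their broader terms, so a search on a broad topic also retrieves narrowly indexed content. This is a minimal sketch of that idea; the broader-term (BT) hierarchy below is entirely hypothetical.

```python
from collections import defaultdict

# Hypothetical broader-term (BT) relationships from a thesaurus.
BT = {"smartphones": "wireless phones",
      "wireless phones": "telecommunications"}

def with_broader(term):
    """Yield a term followed by each of its broader terms."""
    while term:
        yield term
        term = BT.get(term)

index = defaultdict(set)

def index_doc(doc_id, terms):
    # Post the document under each term AND its broader terms.
    for t in terms:
        for expanded in with_broader(t):
            index[expanded].add(doc_id)

def search(term):
    return index.get(term, set())

index_doc("d1", ["smartphones"])
print(sorted(search("telecommunications")))  # ['d1']
```

The payoff is exactly the one the paragraph describes: the engine, not the searcher, carries the burden of knowing the term relationships, which is why the underlying vocabulary demands so much ongoing governance.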

I bring this up because establishing the intent of any taxonomy is the first step in deciding whether to start building one. Either model is an ongoing commitment, but the latter is a much larger investment in sophisticated human resources. The conditions that must be met for any taxonomy to succeed must be articulated when selling the project and its value proposition.

Webinar: Structured Content for Leadership: Differentiate with Advanced Practices

Thursday, October 2, 2:00 pm ET
Second in a series of webinars on developing a strategic roadmap for structured content
This online panel discussion with industry experts focuses on emerging applications that can truly differentiate an organization. Topics are based on the “Leadership” view of the ROI Blueprint developed by JustSystems with support from Gilbane. You might be surprised to hear how structured content is delivering value in unexpected ways in unexpected places within the enterprise.
Participants are:

  • Yas Etessam, VMware
  • Bill Trippe, Gilbane
  • Dale Waldt, aXtive Minds

This webinar is a companion to the first session on September 11, in which we examined applications in wide practice, and the third covering innovation on October 23. The series is sponsored by JustSystems.
Register for one or both of the October webinars. A recording of the first event is available if you want to get up to speed on the larger discussion of enterprise value of structured content.

Multilingual Communications Report Resonates

We’ve had an overwhelmingly positive response to our Multilingual Communications as a Business Imperative report, for which we’re grateful – and thrilled! I can summarize the response as “peer sharing works!” And not only works, but spurs conversation, new ideas, and without a doubt, more sharing. For the Globalization Practice team, it’s true validation of the people perspective of Web 2.0.

It would be a long list to point out all the countries represented through report downloads and additional conversations we’ve had since July, but here’s just a sample. We’ve heard from content and translation management professionals from all across the USA in addition to:

  • Austria
  • Belgium
  • Canada
  • Chile
  • China
  • Finland
  • France
  • Germany
  • India
  • Indonesia
  • Ireland
  • Israel
  • Japan
  • Korea
  • Netherlands
  • New Zealand
  • Russia
  • Singapore
  • Slovenia
  • South Africa
  • South Korea
  • Spain
  • Sweden
  • Switzerland
  • United Kingdom

What resonates most? Unwaveringly first is the need to look at multilingual communications creation, management, and delivery in a new way: as less a cost center and more an integral part of business value. Next is the inherent connection readers have with our definition of operational champions and the stories told by those who shared challenges and strategies in the report’s Best Practices Profiles section. Of course those links have pros and cons, the former cementing the growing need for community sharing and the latter validating the struggles of educating senior management and making the business case for focused investment.

Those “on the ground floor” clearly want more – and we aim to provide it. As Frank documented in our Events blog on Fall Speaking Gigs, we’re focused on sharing our experiences and more importantly, learning from yours. Particularly exciting for our team is the Content Globalization track we’ve put together for Gilbane Boston, December 2-4. The full conference schedule is here. Join us!


© 2024 The Gilbane Advisor
