The Gilbane Advisor

Curated for content, computing, data, information, and digital experience professionals


2007 Gilbane Survey on the WCM User Experience

This is just a quick reminder to our Analyst on Demand subscribers that the results of our survey on the usability of commercially available content management solutions will be available in early September. The data will come directly from the feedback of the solution providers’ customers. Vendors covered will include Interwoven, Tridion, Vignette, FatWire, Percussion, RedDot, EMC/Documentum, CrownPeak, Mediasurface, PaperThin, Oracle, Day, Hot Banana, Clickability, Acumium, and others.

Poll of the Week: Got Process Bottlenecks?

We have not heard of an organization that doesn’t.

Content management and translation management each have their own set of process bottlenecks. Put them together and what do you get? An endless migraine, a major headache, a dull pain, and for the very few, a nuisance. Here are some of the phrases we hear when we talk to our clients about the content and translation lifecycle:

  • “Undesired repetition and unpredictable outcomes.”
  • “A cost we don’t really have a handle on.”
  • “We’d have to survey each workgroup to figure it out.”
  • “Redundant, cumbersome, and expensive.”

Hence, the poll of the week. We’re gearing up for the Global Content Management track at Gilbane Boston, November 27-29. Our goal is to spend more time discussing the elimination of process bottlenecks rather than bemoaning their existence.

Help us shape the list for our sessions and discussions in Boston by taking our poll of the week. Got process bottlenecks? We want to know about them.

I’ve Got Infrastructure on My Mind

It’s been a rough few weeks for infrastructure.
Of course, the collapse of the I-35 bridge over the Mississippi in Minneapolis last Wednesday is on all of our minds: how could the inspections fail and the road fall down? Significantly, there are some videos that capture the moment, which will hopefully provide clues for determining the cause.
Closer to home, we had an eight-hour traffic jam on the I-93 loop in Braintree (a major highway south of Boston) a week ago Monday. A storm grate was thrown loose by a passing truck at the start of the morning rush and landed on a nearby car. (Fortunately the driver survived.) Reportedly, the Massachusetts State Highway Department spent the rest of the day checking and welding shut all the grates on that highway. The next day, the same loose storm drain problem cropped up on a major road in Newton, near where I live. This time motorists were asked to dial a special code from their cell phones to report problems.
And then this morning New Yorkers awoke to a monsoon and a flooded mass transit system. The official M.T.A. web site could not keep up with the requests for information, and crashed when it was needed most.
You’ve gotta hand it to those hardy folks (and the New York Times) for that snarky, Big Apple attitude. Here are a few priceless ones that I gleaned from NYTimes.com during the day.

“Our transit system is not only frail but if it’s this vulnerable to rain attacks, then how vulnerable is it to terrorist attacks?”

“I walked from the Upper West Side all the way to work in Midtown, but thanks to Starbucks was able to stop in and cool myself every four blocks or so.”

(Now there’s somebody with brand loyalty!)

“And only the rats had no transportation problems.”

All this content about our physical infrastructure (user generated and otherwise) has the potential to bring social computing to a whole new level.
This got me thinking about the role of collaboration technologies for supporting our physical infrastructure. It’s great to be able to talk back — and let off a little steam. It’s even better to be able to call in and tell the authorities about the problem before there’s another horrible accident. But what else is possible? Could the bridge inspectors in Minneapolis have shared their observation reports, measurements, and perhaps photographs of the bridge’s structure over the past few inspection cycles, and had some semi-automated ways to detect the problems before the disaster? Unfortunately we’ll never know.
While I certainly don’t have it all figured out, I can begin to see some breadcrumbs toward the workable solution we all want and need. We can no longer rely on human intelligence alone. Our worlds are much too complex and interdependent. We need to augment our understanding, and our ability to take action, with a variety of automated, content-centric tools, such as semantic technologies. (My colleagues Lynda Moulton and Frank Gilbane and I are picking up coverage of this area: the ability to inject “meaning” and “context” into an enterprise environment. Be sure to check out the semantic technologies track at our upcoming conference in November.)
I’ve seen a couple of promising developments this month. SchemaLogic is finally reporting some progress in the publishing space, enabling publishers such as Associated Press to automatically repurpose content by synchronizing tags and managing metadata schemas. While pretty geeky, this is very neat! Now we need to see how this approach to managing semantics within the enterprise will impact collaboration and social computing.
Then there are project and portfolio management (PPM) systems — heretofore heavyweight (often mainframe) applications used to track resources for complex, engineering-driven projects — which are being redeployed as Web 2.0 environments. In particular, eProject is now transforming its Web-based PPM environment into a broader collaborative tools suite. Seeking to capitalize on its expanded mission of bringing a PPM model to the Web, eProject is also renaming itself in the process.
Where do these breadcrumbs lead? As a first step, we need to focus on how our collaboration infrastructure (fueled by our information architecture) can augment the work of the people responsible for our physical infrastructure (ourselves included). At the end of the day, we need to be able to rely on this collaboration infrastructure to help us sense and respond to the challenges of simply getting from one place to another.

Recommendation to IT Directors: Constantly Track WCM Applications and Their Feature Sets

In recent conversations with several of Gilbane’s Analyst On Demand and Technology Acquisition Advisory clients, I have observed two careless practices that have prevented enterprises from being able to assess both the feature-functionality of their existing WCM applications and their requirements for selecting solutions to replace those applications. Both relate to a lack of documentation.
In the first case, it’s the absence of a master list of the WCM-related applications that have been developed in-house over the years. One company has “about 50” such applications, and geographically dispersed individuals throughout the enterprise can tell me what some of them are, but no one can refer me to anyone or any system that has the complete listing. Discrete ongoing development projects exist for many of these applications, a few of which live buried deep in departmental silos. Needless to say, the functionality of applications within these silos is known only to a few people, is never reused in other initiatives, and in fact often gets duplicated by newer siloed projects.
The second shortcoming is the non-documentation of feature-functions within the applications themselves. Even when applications are well known throughout the organization, their complete functionality sets are known to no one. This results in duplicate development, redundant purchases, and negative ROI — although no one knows just how negative.
At a minimum, enterprises should maintain master lists of both their WCM-related applications and the functionality within each one. To make effective use of such documentation, companies should establish effective dissemination processes. Examples range from the inclusion of key individuals in change control board meetings (for companies with predictive-style development methods) to informal cross-functional communication, especially between disparate technology groups, but also between IT and the business units whose requirements drive application development.

Gilbane Boston Conference Program Available – Registration Open

Activity for our 4th Gilbane Boston conference at the Westin Copley, November 27-29, is ramping up quickly. The conference schedule and session descriptions have been posted. The early list of exhibitors and sponsors is also available. And, online registration is open. We’ll be updating the site on a regular basis from now on, usually daily, so bookmark the pages that interest you to keep up-to-date.

Massachusetts adopts Open XML

The state of Massachusetts has approved Microsoft’s Open XML format for state documents. Some of you may remember there was quite a fight over the state’s decision to adopt the OASIS ODF (Open Document Format), backed by Sun and IBM, a couple of years ago. That decision excluded Microsoft’s XML output because Microsoft controlled the format.

We covered much of the controversy here, and in our conferences where we hosted a few debates. Our opinion hasn’t changed. Here is a statement from the State’s IT Division website on their official position:

The Commonwealth continues on its path toward open, XML-based document formats without reflecting a vendor or commercial bias in ETRM v4.0. Many of the comments we received identify concerns regarding the Open XML specification. We believe that these concerns, as with those regarding ODF, are appropriately handled through the standards setting process, and we expect both standards to evolve and improve. Moreover, we believe that the impact of any legitimate concerns raised about either standard is outweighed substantially by the benefits of moving toward open, XML-based document format standards. Therefore, we will be moving forward to include both ODF and Open XML as acceptable document formats.

Multilingual Communication: The Spoken Word

In a global economy, corporate employees increasingly need to communicate in foreign languages, whether in sales, internal meetings, or customer support. I spoke with Janne Nevasuo, CEO of AAC Global, one of the relatively few localization and translation companies that also offer language, culture, and communications skills training. A year ago it was acquired by Sanoma-WSOY, a major stock-listed European media corporation with operations in over 20 countries.

KP: How long have you been in the language training business?
JN: We started with language training 38 years ago, so we have very long experience. We offer language training services only to corporate customers, and currently train about 20,000 people every year. For the past 20 years, our language training business has been growing about 15% annually.

KP: So you started with training, and moved to translation later?
JN: Yes, we added translation and localization services, as our corporate training customers started to ask for help in translations. As we have always focused only on corporate customers, it was a very natural growth path for us, helping our customers to handle all their multilingual needs.

KP: What are the main languages you give training for?
JN: English is by far the biggest language, and has been for practically as long as we have been in business. About 70% of our training is in corporate English, as English is the “universal second language” of business. Demand for Russian is growing continuously.

KP: That is interesting, as so many people now speak English and learn it at school!
JN: That is just the point: school English is not enough for corporate use. Companies need to get their message through to their customers, employees, and partners in several different situations: presentations, meetings, negotiations etc. One can only imagine both the direct and indirect losses accruing from miscommunications and misunderstandings, when people cannot communicate efficiently in English.

KP: So which do you see as the biggest trends in language training?
JN: First of all, corporate language training is actually “substance training”, i.e. training employees about the company’s product or service in a foreign language, and about handling different situations, such as negotiations or presentations, in a foreign language. So corporate language training is rather far removed from language learning at schools; we focus on the substance, key terminology and message.

Another important trend is that language training needs to become part of everyday work and daily processes. The learning should happen without the student actually realizing that he or she is learning, and it should happen during the actual work, using actual materials and doing actual tasks. Nobody has time to attend even a separate one-day course.

New technologies are bringing us more efficient solutions for this, such as the extensive terminology tools AAC Global offers. I would like to point out, though, that this does not mean only teach-yourself language learning, as it does not work for everybody. Innovative solutions combining self-paced and tutored learning are needed.

KP: Is language training bought only by big companies?
JN: Certainly not. Companies of all sizes need to communicate in foreign languages, so we serve everyone from small firms to huge global companies. A very important thing to understand is this: nowadays more and more employees in a company need to communicate in a foreign language, regardless of their task. Ten years ago there were a few designated people in the company, typically in the export department, who needed to speak another language. Now practically everyone needs a foreign language, whether in sales, support, business intelligence, marketing… and also when communicating with the company’s own people and partners in other countries.

According to research we have done, people spend up to 1 hour per day looking for the right term or doing a translation. There is thus a lot of room for efficiencies in daily work processes to help people become more multilingual. Actually in large corporations, language training is also part of their HR process, so that the HR department participates in getting just the right kind of language training to each employee.

KP: In previous blog entries, Leonor and Mary talked about the emerging markets. How do you see them?
JN: We have worked especially with Russia and the former Eastern bloc countries. The need for training corporate English is enormous there; typically the companies there have a few people who are fluent in corporate English, but then there is a large gap. Many young people have studied English at school, but still need training in corporate practices and terminologies. Still, these are the same needs as in all other countries.

DITA and Dynamic Content Delivery

Have you ever waded through a massive technical manual, desperately searching for the section that actually applied to you? Or have you found yourself performing one search after another, collecting one-by-one the pieces of the answer you need from a mass of documents and web pages? These are all examples of the limitations of static publishing; that is, the limitations of publishing to a wide audience when people’s needs and wants are not all the same. Unfortunately, this classic “one size fits all” approach can end up fitting no one at all.

In the days when print publishing was our only option, and we thought only in terms of producing books, we really had no choice but to mass-distribute information and hope it met most people’s needs. But today, with Web-based technology and new XML standards like DITA, we have other choices.

DITA (Darwin Information Typing Architecture) is the hottest thing to have hit the technical publishing world in a long time. With its topic-based approach to authoring, DITA frees us from the need to think in terms of “books”, and lets us focus on the underlying information. With DITA’s modular, reusable information elements, we can not only publish across different formats and media – but also flexibly recombine information in almost any way we like.

Initial DITA implementations have focused primarily on publishing to pre-defined PDF, HTML and Help formats – that is, on static publishing. But the real promise of DITA lies in supporting dynamic, personalized content delivery. This alternative publishing model – which I’ll call dynamic content delivery – involves “pulling” rather than “pushing” content, based on the needs of each individual user.
In this self-service approach to publishing, end users can assemble their own “books” using two kinds of interfaces (or a hybrid of the two):

  • Information Shopping Cart – in which the user browses or searches to choose the content (DITA Topics) that she considers relevant, and then places this information in a shopping cart. When done “shopping”, she can organize her document’s table of contents, select a stylesheet, and automatically publish the result to HTML or PDF.
    This approach is appropriate when users are relatively knowledgeable about the content, and where the structure of their output documents can be safely left up to them. Examples include engineering research, e-learning systems, and customer self-service applications.
  • Personalization Wizard – in which the user answers a number of pre-set questions in a wizard-like interface, and the appropriate content is automatically extracted to produce a final document in HTML or PDF. This approach is appropriate for applications that need to produce a personalized but highly standard manual, such as a product installation guide or regulated policy manual. In this scenario, the document structure and stylesheet are typically preset.

In a hybrid interface, we could use a personalization wizard to dynamically assemble required material in a fixed table of contents – but then use the information shopping cart approach to allow the user to add supplementary material. Or, depending on the application, we might do the same thing but assemble the initial table of contents as a suggestion or starting point only. The first method might be appropriate for a user manual; the second might be better for custom textbooks.

Dynamic content delivery is made possible by the kind of topic-based authoring embraced by DITA. A topic is a piece of content that covers a specific subject, has an identifiable purpose, and can stand on its own (i.e., does not require a specific context in order to make sense). Topics don’t start with “as stated above” or end with “as further described below,” and they don’t implicitly refer to other information that isn’t contained within them. In a word, topics are fully reusable, in the sense that they can be used in any context where the information provided by the topic is needed.
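To make this concrete, here is what a self-contained topic looks like in DITA markup. This is a minimal, invented example: the element names (`concept`, `title`, `conbody`) come from the DITA specification, while the id and content are hypothetical.

```xml
<!-- A minimal DITA concept topic. It stands on its own: no "as stated
     above," no implicit references to surrounding material. -->
<concept id="pump_overview">
  <title>Pump Overview</title>
  <conbody>
    <p>The pump circulates coolant through the primary loop and must be
       inspected after every 500 hours of operation.</p>
  </conbody>
</concept>
```

Because nothing in the topic depends on where it appears, it can be pulled into an installation guide, a maintenance manual, or a self-assembled “shopping cart” document without modification.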

The extraction and assembly of relevant topics is made possible by another relatively new standard called XQuery, which can find the right information based on user profiles, filter the results accordingly, and automatically transform them into output formats like HTML or PDF. Of course, this approach is only feasible if the XQuery engine is extremely fast — which led us to build our own dynamic content delivery solution offering around Mark Logic, an XQuery-based content delivery platform optimized for real-time search and transformation.

The dynamic content delivery approach is an answer to the hunger for relevant, personalized information that pervades today’s organizations. Avoiding the pitfalls of the classic “one size fits all” publishing of the past, it instead allows a highly personalized and relevant interaction with “an audience of one.” I invite you to read more about this in a whitepaper I wrote that is available on our website (www.FlatironsSolutions.com).


© 2026 The Gilbane Advisor
