
Recommendation to IT Directors: Constantly Track WCM Applications and Their Feature Sets

In recent conversations with several of Gilbane’s Analyst On Demand and Technology Acquisition Advisory clients, I have observed two careless practices that prevent enterprises from assessing both the feature-functionality of their existing WCM applications and their requirements for selecting replacement solutions. Both stem from a lack of documentation.
In the first case, it’s the absence of a master list of the WCM-related applications that have been developed in-house over the years. One company has “about 50” such applications, and geographically dispersed individuals throughout the enterprise can tell me what some of them are, but no one can refer me to any person or system that has the complete listing. Discrete ongoing development projects exist for many of these applications, a few of which live buried deep in departmental silos. Needless to say, the functionality of applications within these silos is known only to a few people, is never reused in other initiatives, and in fact often gets duplicated by newer siloed projects.
The second shortcoming is the non-documentation of feature-functions within the applications themselves. Even when applications are well known throughout the organization, their complete functionality sets are known to no one. This results in duplicate development, redundant purchases, and negative ROI — although no one knows just how negative.
At a minimum, enterprises should maintain master lists of both their WCM-related applications and the functionality within each one. To make effective use of such documentation, companies should establish effective dissemination processes. Examples range from the inclusion of key individuals in change control board meetings (for companies with predictive-style development methods) to informal cross-functional communication, especially between disparate technology groups, but also between IT and the business units whose requirements drive application development.
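As a purely illustrative sketch, here is one form such a master-list entry might take; the element names, application, and features below are hypothetical, not a prescribed schema:

    <!-- Hypothetical master-list record for one in-house WCM application -->
    <application id="press-room-publisher">
      <name>Press Room Publisher</name>
      <owner unit="Corporate Communications">J. Smith</owner>
      <status>in production; active development project underway</status>
      <features>
        <feature>Template-driven page assembly</feature>
        <feature>Scheduled publishing and content expiration</feature>
        <feature>Two-step editorial approval workflow</feature>
      </features>
    </application>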

Gilbane Boston Conference Program Available – Registration Open

Activity for our 4th Gilbane Boston conference at the Westin Copley, November 27–29, is ramping up quickly. The conference schedule and session descriptions have been posted, and the early list of exhibitors and sponsors is also available. Online registration is open. We’ll be updating the site regularly from now on, usually daily, so bookmark the pages that interest you to keep up to date.

Massachusetts adopts Open XML

The state of Massachusetts has approved Microsoft’s Open XML format for state documents. Some of you may remember there was quite a fight over the state’s decision a couple of years ago to adopt the OASIS ODF (Open Document Format), backed by Sun and IBM. That decision excluded Microsoft’s XML format because Microsoft controlled it.

We covered much of the controversy here, and in our conferences where we hosted a few debates. Our opinion hasn’t changed. Here is a statement from the State’s IT Division website on their official position:

The Commonwealth continues on its path toward open, XML-based document formats without reflecting a vendor or commercial bias in ETRM v4.0. Many of the comments we received identify concerns regarding the Open XML specification. We believe that these concerns, as with those regarding ODF, are appropriately handled through the standards setting process, and we expect both standards to evolve and improve. Moreover, we believe that the impact of any legitimate concerns raised about either standard is outweighed substantially by the benefits of moving toward open, XML-based document format standards. Therefore, we will be moving forward to include both ODF and Open XML as acceptable document formats.

Multilingual Communication: The Spoken Word

In a global economy, corporate employees increasingly need to communicate in foreign languages, whether in sales, internal meetings, or customer support. I spoke with Janne Nevasuo, CEO of AAC Global, one of the relatively few localization and translation companies that also offers training in language, culture, and communications skills. A year ago it was acquired by Sanoma-WSOY, a major stock-listed European media corporation with operations in over 20 countries.

KP: How long have you been in the language training business?
JN: We started with language training 38 years ago, so we have very long experience. We offer language training services only to corporate customers, and currently train about 20,000 people every year. For the past 20 years, our language training business has been growing about 15% annually.

KP: So you started with training, and moved to translation later?
JN: Yes, we added translation and localization services as our corporate training customers started to ask for help with translations. As we have always focused only on corporate customers, it was a very natural growth path for us, helping our customers handle all their multilingual needs.

KP: What are the main languages you give training for?
JN: English is by far the biggest language, and has been for practically as long as we have been in business. About 70% of our training is in corporate English, as English is the “universal second language” of business. Demand for Russian is growing continuously.

KP: That is interesting, as so many people now speak English and learn it at school!
JN: That is just the point: school English is not enough for corporate use. Companies need to get their message through to their customers, employees, and partners in several different situations: presentations, meetings, negotiations etc. One can only imagine both the direct and indirect losses accruing from miscommunications and misunderstandings, when people cannot communicate efficiently in English.

KP: So what do you see as the biggest trends in language training?
JN: First of all, corporate language training is actually “substance training”, i.e. training employees about the company’s product or service in a foreign language, and about handling different situations, such as negotiations or presentations, in a foreign language. So corporate language training is rather far removed from language learning at schools; we focus on the substance, key terminology and message.

Another important trend is that language training needs to become part of everyday work and daily processes. The learning should happen without the student actually realizing that he or she is learning, and it should happen during actual work, using actual materials and doing actual tasks. Nobody has time for even a separate one-day course.

New technologies are bringing us more efficient solutions for this, such as the extensive terminology tools AAC Global offers. I would like to point out, though, that this does not mean only teach-yourself language learning, which does not work for everybody. Innovative solutions combining self-paced and tutored learning are needed.

KP: Is language training bought only by big companies?
JN: Certainly not. Companies of all sizes need to communicate in foreign languages, so we serve everyone from small firms to huge global companies. A very important thing to understand is this: nowadays more and more employees in a company need to communicate in a foreign language, regardless of their role. Ten years ago there were a few designated people in the company, typically in the export department, who needed to speak another language. Now practically everyone needs a foreign language, whether in sales, support, business intelligence, marketing… and also when communicating with the company’s own people and partners in other countries.

According to research we have done, people spend up to 1 hour per day looking for the right term or doing a translation. There is thus a lot of room for efficiencies in daily work processes to help people become more multilingual. Actually in large corporations, language training is also part of their HR process, so that the HR department participates in getting just the right kind of language training to each employee.

KP: In previous blog entries, Leonor and Mary talked about the emerging markets. How do you see them?
JN: We have worked especially with Russia and the former Eastern bloc countries. The need for corporate English training is enormous there; typically the companies have a few people who are fluent in corporate English, but then there is a large gap. Many young people have studied English at school, but still need training in corporate practices and terminology. Ultimately, though, these are the same needs as in all other countries.

DITA and Dynamic Content Delivery

Have you ever waded through a massive technical manual, desperately searching for the section that actually applied to you? Or have you found yourself performing one search after another, collecting one-by-one the pieces of the answer you need from a mass of documents and web pages? These are all examples of the limitations of static publishing; that is, the limitations of publishing to a wide audience when people’s needs and wants are not all the same. Unfortunately, this classic “one size fits all” approach can end up fitting no one at all.

In the days when print publishing was our only option, and we thought only in terms of producing books, we really had no choice but to mass-distribute information and hope it met most people’s needs. But today, with Web-based technology and new XML standards like DITA, we have other choices.

DITA (Darwin Information Typing Architecture) is the hottest thing to have hit the technical publishing world in a long time. With its topic-based approach to authoring, DITA frees us from the need to think in terms of “books”, and lets us focus on the underlying information. With DITA’s modular, reusable information elements, we can not only publish across different formats and media – but also flexibly recombine information in almost any way we like.

Initial DITA implementations have focused primarily on publishing to pre-defined PDF, HTML and Help formats – that is, on static publishing. But the real promise of DITA lies in supporting dynamic, personalized content delivery. This alternative publishing model – which I’ll call dynamic content delivery – involves “pulling” rather than “pushing” content, based on the needs of each individual user.
In this self-service approach to publishing, end users can assemble their own “books” using two kinds of interfaces (or a hybrid of the two):

  • Information Shopping Cart – in which the user browses or searches to choose the content (DITA Topics) that she considers relevant, and then places this information in a shopping cart. When done “shopping”, she can organize her document’s table of contents, select a stylesheet, and automatically publish the result to HTML or PDF.
    This approach is appropriate when users are relatively knowledgeable about the content, and where the structure of their output documents can be safely left up to them. Examples include engineering research, e-learning systems, and customer self-service applications.
  • Personalization Wizard – in which the user answers a number of pre-set questions in a wizard-like interface, and the appropriate content is automatically extracted to produce a final document in HTML or PDF. This approach is appropriate for applications that need to produce a personalized but highly standard manual, such as a product installation guide or regulated policy manual. In this scenario, the document structure and stylesheet are typically preset.
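Under either interface, the assembled “book” ultimately takes the form of a DITA map that sequences the selected topics. As a minimal sketch (the title and topic file names here are invented for illustration), the output of a shopping-cart session might look like this:

    <!-- A DITA map assembled from the user's selected topics -->
    <map>
      <title>My Custom Installation Guide</title>
      <topicref href="install_overview.dita"/>
      <topicref href="network_setup.dita"/>
      <topicref href="troubleshooting.dita"/>
    </map>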

In a hybrid interface, we could use a personalization wizard to dynamically assemble required material in a fixed table of contents – but then use the information shopping cart approach to allow the user to add supplementary material. Or, depending on the application, we might do the same thing but assemble the initial table of contents as a suggestion or starting point only. The first method might be appropriate for a user manual; the second might be better for custom textbooks.

Dynamic content delivery is made possible by the kind of topic-based authoring embraced by DITA. A topic is a piece of content that covers a specific subject, has an identifiable purpose, and can stand on its own (i.e., does not require a specific context in order to make sense). Topics don’t start with “as stated above” or end with “as further described below,” and they don’t implicitly refer to other information that isn’t contained within them. In short, topics are fully reusable: they can be used in any context where the information they provide is needed.
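To make this concrete, here is a minimal DITA concept topic (DOCTYPE declaration omitted; the id and wording are invented for illustration). It makes sense on its own, wherever it is reused:

    <!-- A self-contained DITA concept topic -->
    <concept id="install_overview">
      <title>Installation Overview</title>
      <conbody>
        <p>The product installs in three stages: preparing the host,
           running the installer, and verifying the services.</p>
      </conbody>
    </concept>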

The extraction and assembly of relevant topics is made possible by another relatively new standard called XQuery, which can find the right information based on user profiles, filter the results accordingly, and automatically transform them into output formats like HTML or PDF. Of course, this approach is only feasible if the XQuery engine is extremely fast – which led us to build our own dynamic content delivery solution around Mark Logic, an XQuery-based content delivery platform optimized for real-time search and transformation.
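As a rough sketch of the idea (not specific to Mark Logic; the collection name and product value are assumptions, and the element paths follow DITA’s standard prolog metadata), an XQuery that pulls only the topics matching a user’s product might look like:

    (: Select concept topics tagged for the user's product and
       render each as an HTML fragment. :)
    for $topic in collection("topics")/concept
    where $topic/prolog/metadata/prodinfo/prodname = "WidgetPro"
    return
      <div class="topic">
        <h2>{ string($topic/title) }</h2>
        { $topic/conbody/p }
      </div>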

The dynamic content delivery approach is an answer to the hunger for relevant, personalized information that pervades today’s organizations. Avoiding the pitfalls of the classic “one size fits all” publishing of the past, it instead allows a highly personalized and relevant interaction with “an audience of one.” I invite you to read more about this in a whitepaper I wrote that is available on our website (www.FlatironsSolutions.com).

The Marginal Influence of E-commerce Search and Taxonomies on Enterprise Search Technologies

As we gear up for Gilbane Boston 2007, the number of possible topics to include in the search-related tracks seems boundless. The search business is in a transitional state, but despite the disarray it remains pivotal in its impact on business and culture. The sessions will reflect the diversity in the market.

One trend is quite clear: the amount of money and effort being expended on Web search and site search for commercial Web sites makes it the winner in the “search technology” revenue war, with annual revenues well into the billions of dollars. By contrast, a recent Gartner study put 2006 revenues for enterprise search below $400 million. I found this figure in an excellent article, Enterprise Search: Seek and Maybe You’ll Find, by Ben DuPont in Intelligent Enterprise. Check it out.

The distinctions between search on the Web and search within the enterprise are numerous, but here are two. First, Internet Web search revenue is all about marketing. Yes, we use it to discover, learn, find facts, and become more informed. But when companies supply search technology to expose you to their content on the Internet, they do so to facilitate commerce. If it also ends up in the hands of organizations with other intent, such as libraries or government agencies, so be it.

As we all know, when we are at work, seeking to discover, learn or find facts to do our jobs better, we need a different kind of search. Thus, we seek a clear search winner built just for our enterprise with all of its idiosyncrasies. The problem is that what is inside does not look like the rest of the world’s content as it is aggregated for commercial views. Enterprises are unique and operate sometimes chaotically, or, at best, with nuanced views of what information is most important.

The second distinction relates to taxonomies, and the increase in their development and use. I’ve seen a dramatic increase in job postings for “taxonomists” and have managed several projects over the years to build these controlled lists of terms for categorizing enterprise content. What is noteworthy about recent job openings is that most seem to be for customer-facing Web sites. Historically, organizations with substantial internal content (e.g., research reports, patents, laboratory findings, business documents) hired professionals to categorize materials for a narrowly defined audience of specialists. The terminology was often highly specialized and could number in the hundreds or thousands of terms, even for a relatively small enterprise. This is no longer a common practice.
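For readers who have not worked with one, such a controlled list is essentially a hierarchy of preferred terms used to tag content consistently; a tiny, invented fragment might look like this:

    <!-- An invented fragment of an internal enterprise taxonomy -->
    <taxonomy subject="Pharmaceutical Research">
      <term label="Laboratory Findings">
        <term label="Assay Results"/>
        <term label="Stability Studies"/>
      </term>
      <term label="Regulatory Filings">
        <term label="Patent Applications"/>
      </term>
    </taxonomy>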

Slow financial growth in enterprise search markets is no surprise. Like many tools designed and marketed for departments not directly tied to revenue generation, search goes begging for solid vertical markets. Search’s companion technologies are also struggling to find a lucrative toehold for use within the organization. Content management systems integrated with rich and efficient taxonomy building and maintenance functions are hard to find.

I am confident that tools in CMS products for building and maintaining complex taxonomies will not improve until enterprises find a solid business reason to put professional human resources into doing content management, taxonomy development, search, and text analytics on their most important knowledge assets. This is a tough business proposition compared to the revenues being driven on the Internet. What businesses need to keep in mind is that without the ability to leverage their internal knowledge content assets better, smarter and faster, there won’t be innovative products in the pipeline to generate commerce. Losing track of your valuable intellectual resources is not a good long term strategy. Once you begin committing to solid content resource management strategies, enterprise technology products will improve to meet your needs.

Links and Connections: Finding the Context

A provocative conversation broke out on one of the discussion groups I monitor last week. “I’m curious how you and others you know are using LinkedIn.com,” the person asked. “For me, I like who’s in my network, [and] keep asking others to join; but overall I find it to be very static.”
A static network — now there’s a new concept! But there’s a good deal of truth to wondering how these links work within the business environment. Sure, I too have a modest network; I check my LinkedIn account once or twice a month to see who’s doing what. For me, this substitutes (poorly) for the water-cooler conversations earlier in my career, when I was surrounded by lots of co-workers. There was always the lunch-time gossip and the hallway exchanges . . . did I know that so-and-so was working on this new skunk-works project? Had I heard that another sales team just surpassed its revenue goals, or that a particular key customer now had a new set of requirements?
While linking in through LinkedIn is a poor substitute for the chatter of the co-located workplace, it’s at least the beginning of a business conversation. It maintains its professional aura, boundaries, and rules, in part by continuing to stove-pipe its connections and not (yet) mashing up its links and membership.
Not so with Facebook, now trying to take the “digital natives” (those who grew up with the Internet and the Web) into the workplace. This move — blending the power of networks with mashups — is raising a number of eyebrows. “Friend? Not? It’s One or the Other,” personal technology columnist Rob Pegoraro wrote provocatively in the Washington Post last week:

You could stay in touch with your drinking buddies at MySpace, then schmooze with your business partners at LinkedIn.
But life isn’t always that neat. And when the private and professional overlap at these sites, you can spend more time worrying about your image than building your network.

To be sure, Facebook has a slew of privacy settings — at least 135, according to Pegoraro — but having to define how I want to expose some of my activities to one group of friends and other actions to business colleagues adds complexity to what should be a rather fluid interchange.
What’s missing to my way of thinking is not simply privacy but context. We all have our business personas and our personal personas. We have certain expectations when in a business context, others when in a social context, and still others when “being personal.” Many of our social networks are, in fact, rather complex.
To make these networks useful within a collaborative (and online) business environment, we need to be able to add (and manage) our business contexts. We need to be able to describe (and map) the business purposes for our social networks.

Adobe Ships ColdFusion 8

Adobe announced the availability of the shipping version of ColdFusion 8. Designed for developers building dynamic Web sites and Internet applications, ColdFusion 8 targets day-to-day development challenges to increase developer productivity and integrate with complex enterprise environments. ColdFusion 8 uses Adobe Flex and Ajax-based components, and includes advanced Eclipse-based wizards and debugging to help developers build applications and identify and fix problems, while a new Server Monitor quickly identifies bottlenecks and tunes the server for better performance. It integrates with .NET assemblies, adds support for Windows Vista and new J2EE servers for enhanced flexibility, interoperability, and scalability, and interacts with Adobe PDF documents and forms for a printable, portable way to capture information. ColdFusion 8 is available in two versions: ColdFusion 8 Standard and ColdFusion 8 Enterprise Edition. Each license is valid for two CPUs. Adobe is also offering discounted upgrade pricing for customers who own ColdFusion MX 6.x or ColdFusion MX 7. www.adobe.com/products/coldfusion
