The Gilbane Advisor

Curated for content, computing, and digital experience professionals


I’ve Got Infrastructure on My Mind

It’s been a rough few weeks for infrastructure.
Of course the collapse of the I-35 bridge over the Mississippi in Minneapolis last Wednesday is on all of our minds – how could the inspections fail and the road fall down? Significantly, there are some videos that capture the moment, which hopefully will provide clues for determining the cause.
Closer to home, we had an eight-hour traffic jam on the I-93 loop in Braintree (a major highway south of Boston) a week ago Monday. A storm grate was thrown loose by a passing truck at the start of the morning rush and landed on a nearby car. (Fortunately the driver survived.) Reportedly, the Massachusetts State Highway Department spent the rest of the day checking and welding shut all the grates on that highway. The next day, the same loose storm drain problem cropped up on a major road in Newton, near where I live. This time motorists were asked to dial a special code from their cell phones to report problems.
And then this morning New Yorkers awoke to a monsoon and a flooded mass transit system. The official M.T.A. web site could not keep up with the requests for information, and crashed when it was needed most.
You’ve gotta hand it to those hardy folks (and the New York Times) for that snarky, Big Apple attitude. Here are a few priceless ones that I gleaned from NYTimes.com during the day.

“Our transit system is not only frail but if it’s this vulnerable to rain attacks, then how vulnerable is it to terrorist attacks?”

“I walked from the Upper West Side all the way to work in Midtown, but thanks to Starbucks was able to stop in and cool myself every four blocks or so.”

(Now there’s somebody with brand loyalty!)

“And only the rats had no transportation problems.”

All this content about our physical infrastructure (user generated and otherwise) has the potential to bring social computing to a whole new level.
This got me thinking about the role of collaboration technologies in supporting our physical infrastructure. It’s great to be able to talk back — and let off a little steam. It’s even better to be able to call in and tell the authorities about the problem before there’s another horrible accident. But what else is possible? Could the bridge inspectors in Minneapolis have shared their observation reports, measurements, and perhaps photographs of the bridge’s structure over the past few inspection cycles, and had some semi-automated ways to detect the problems before the disaster? Unfortunately we’ll never know.
While I certainly don’t have it all figured out, I can begin to see some bread crumbs leading toward the workable solution we all want and need. We can no longer rely on human intelligence alone. Our worlds are much too complex and interdependent. We need to augment our understanding, and our ability to take action, with a variety of automated, content-centric tools, such as semantic technologies. (My colleagues Lynda Moulton and Frank Gilbane and I are picking up coverage of this area: the ability to inject “meaning” and “context” into an enterprise environment. Be sure to check out the semantic technologies track at our upcoming conference in November.)
I’ve seen a couple of promising developments this month. SchemaLogic is finally reporting some progress in the publishing space, enabling publishers such as Associated Press to automatically repurpose content by synchronizing tags and managing metadata schemas. While pretty geeky, this is very neat! Now we need to see how this approach to managing semantics within the enterprise will impact collaboration and social computing.
Then project and portfolio management (PPM) systems — heretofore heavyweight (often mainframe) applications used to track resources for complex, engineering-driven projects — are being redeployed as Web 2.0 environments. In particular, eProject is now transforming its Web-based PPM environment into a broader collaborative tools suite. Seeking to capitalize on its expanded mission of bringing a PPM model to the Web, eProject is also renaming itself in the process.
Where do these bread crumbs lead? As a first step, we need to focus on how our collaboration infrastructure (fueled by our information architecture) can augment the work of people responsible for our physical infrastructure (ourselves included). At the end of the day, we need to be able to rely on this collaboration infrastructure to help us sense and respond to the challenges of simply getting from one place to another.

Recommendation to IT Directors: Constantly Track WCM Applications and Their Feature Sets

In recent conversations with several of Gilbane’s Analyst On Demand and Technology Acquisition Advisory clients, I have observed two careless practices that have prevented enterprises from being able to assess both the feature-functionality of their existing WCM applications and their requirements for selecting solutions to replace those applications. Both relate to a lack of documentation.
In the first case, it’s the absence of a master list of the WCM-related applications that have been developed in-house over the years. One company has “about 50” such applications, and geographically dispersed individuals throughout the enterprise can tell me what some of them are, but no one can refer me to anyone or any system that has the complete listing. Discrete ongoing development projects exist for many of these applications, a few of which live buried deep in departmental silos. Needless to say, the functionality of applications within these silos is known only to a few people, is never re-used in other initiatives, and in fact often gets duplicated by newer siloed projects.
The second shortcoming is the non-documentation of feature-functions within the applications themselves. Even when applications are well known throughout the organization, their complete functionality sets are known to no one. This results in duplicate development, redundant purchases, and negative ROI — although no one knows just how negative.
At a minimum, enterprises should maintain master lists of both their WCM-related applications and the functionality within each one. To make effective use of such documentation, companies should establish effective dissemination processes. Examples range from the inclusion of key individuals in change control board meetings (for companies with predictive-style development methods) to informal cross-functional communication, especially between disparate technology groups, but also between IT and the business units whose requirements drive application development.
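To make the idea concrete, here is a minimal sketch of what such a master list might look like as a structured registry; the application names, owners, and feature labels are hypothetical, and the point is simply that once the inventory exists in one place, overlapping functionality becomes visible.

```python
# Hypothetical sketch of a WCM application registry; names and fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class WcmApplication:
    name: str
    owner: str                          # team responsible for the application
    features: set[str] = field(default_factory=set)

registry = [
    WcmApplication("press-release-publisher", "Corporate Comms",
                   {"workflow approval", "scheduled publishing"}),
    WcmApplication("partner-portal-cms", "Channel Marketing",
                   {"workflow approval", "multilingual content"}),
]

def duplicated_features(apps):
    """Report features implemented in more than one application."""
    seen = {}
    for app in apps:
        for feature in app.features:
            seen.setdefault(feature, []).append(app.name)
    return {f: names for f, names in seen.items() if len(names) > 1}

print(duplicated_features(registry))
# {'workflow approval': ['press-release-publisher', 'partner-portal-cms']}
```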

Gilbane Boston Conference Program Available – Registration Open

Activity for our 4th Gilbane Boston conference at the Westin Copley November 27-29 is ramping up quickly. The conference schedule and session descriptions have been posted. The early list of exhibitors and sponsors is also available. And online registration is open. We’ll be updating the site on a regular basis from now on, usually daily, so bookmark the pages that interest you to keep up to date.

Massachusetts adopts Open XML

The state of Massachusetts has approved Microsoft’s Open XML format for state documents. Some of you may remember there was quite a fight over the state’s decision a couple of years ago to adopt the OASIS ODF (Open Document Format), backed by Sun and IBM. That decision excluded Microsoft’s XML output because Microsoft controlled it.

We covered much of the controversy here, and in our conferences where we hosted a few debates. Our opinion hasn’t changed. Here is a statement from the State’s IT Division website on their official position:

The Commonwealth continues on its path toward open, XML-based document formats without reflecting a vendor or commercial bias in ETRM v4.0. Many of the comments we received identify concerns regarding the Open XML specification. We believe that these concerns, as with those regarding ODF, are appropriately handled through the standards setting process, and we expect both standards to evolve and improve. Moreover, we believe that the impact of any legitimate concerns raised about either standard is outweighed substantially by the benefits of moving toward open, XML-based document format standards. Therefore, we will be moving forward to include both ODF and Open XML as acceptable document formats.

Multilingual Communication: The Spoken Word

In a global economy, corporate employees increasingly need to communicate in foreign languages, whether in sales, internal meetings, or customer support. I spoke with Janne Nevasuo, CEO of AAC Global, one of the relatively few localization and translation companies that also offers language, culture, and communication skills training. A year ago it was acquired by Sanoma-WSOY, a major publicly listed European media corporation with operations in over 20 countries.

KP: How long have you been in the language training business?
JN: We started with language training 38 years ago, so we have very long experience. We offer language training services only to corporate customers, and currently train about 20,000 people every year. For the past 20 years, our language training business has been growing about 15% annually.

KP: So you started with training, and moved to translation later?
JN: Yes, we added translation and localization services, as our corporate training customers started to ask for help in translations. As we have always focused only on corporate customers, it was a very natural growth path for us, helping our customers to handle all their multilingual needs.

KP: What are the main languages you give training for?
JN: English is by far the biggest language, and has been for practically all the time we have been in business. About 70% of our training is in corporate English, as English is the “universal second language” of business. Demand for Russian is growing continuously.

KP: That is interesting, as so many people now speak English and learn it at school!
JN: That is just the point: school English is not enough for corporate use. Companies need to get their message through to their customers, employees, and partners in several different situations: presentations, meetings, negotiations etc. One can only imagine both the direct and indirect losses accruing from miscommunications and misunderstandings, when people cannot communicate efficiently in English.

KP: So which do you see as the biggest trends in language training?
JN: First of all, corporate language training is actually “substance training”, i.e. training employees about the company’s product or service in a foreign language, and about handling different situations, such as negotiations or presentations, in a foreign language. So corporate language training is rather far removed from language learning at schools; we focus on the substance, key terminology and message.

Another important trend is that language training needs to become part of everyday work and daily processes. The learning should happen without the student actually realizing that he or she is learning, and it should happen during the actual work, using actual materials and doing actual tasks. Nobody has time to go to even a one-day separate course.

New technologies are bringing us more efficient solutions for this, such as the extensive terminology tools AAC Global offers. I would like to point out, though, that this does not mean only teach-yourself language learning, as that does not work for everybody. Innovative solutions combining self-paced and tutored learning are needed.

KP: Is language training bought only by big companies?
JN: Certainly not. Companies of all sizes need to communicate in foreign languages, so we serve everything from small firms to huge global companies. A very important thing to understand is this: nowadays more and more employees in a company need to communicate in a foreign language, regardless of their task. Ten years ago there were a few designated people in the company, typically in the export department, who needed to speak another language. Now practically everyone needs a foreign language, whether in sales, support, business intelligence, marketing… and also when communicating with the company’s own people and partners in other countries.

According to research we have done, people spend up to 1 hour per day looking for the right term or doing a translation. There is thus a lot of room for efficiencies in daily work processes to help people become more multilingual. Actually in large corporations, language training is also part of their HR process, so that the HR department participates in getting just the right kind of language training to each employee.

KP: In previous blog entries, Leonor and Mary talked about the emerging markets. How do you see them?
JN: We have worked especially with Russia and the former Eastern bloc countries. The need for training corporate English is enormous there; typically the companies there have a few people who are fluent in corporate English, but then there is a large gap. Many young people have studied English at school, but still need training in corporate practices and terminologies. Still, these are the same needs as in all other countries.

DITA and Dynamic Content Delivery

Have you ever waded through a massive technical manual, desperately searching for the section that actually applied to you? Or have you found yourself performing one search after another, collecting one-by-one the pieces of the answer you need from a mass of documents and web pages? These are all examples of the limitations of static publishing; that is, the limitations of publishing to a wide audience when people’s needs and wants are not all the same. Unfortunately, this classic “one size fits all” approach can end up fitting no one at all.

In the days when print publishing was our only option, and we thought only in terms of producing books, we really had no choice but to mass-distribute information and hope it met most people’s needs. But today, with Web-based technology and new XML standards like DITA, we have other choices.

DITA (Darwin Information Typing Architecture) is the hottest thing to have hit the technical publishing world in a long time. With its topic-based approach to authoring, DITA frees us from the need to think in terms of “books”, and lets us focus on the underlying information. With DITA’s modular, reusable information elements, we can not only publish across different formats and media, but also flexibly recombine information in almost any way we like.

Initial DITA implementations have focused primarily on publishing to pre-defined PDF, HTML and Help formats – that is, on static publishing. But the real promise of DITA lies in supporting dynamic, personalized content delivery. This alternative publishing model – which I’ll call dynamic content delivery – involves “pulling” rather than “pushing” content, based on the needs of each individual user.
In this self-service approach to publishing, end users can assemble their own “books” using two kinds of interfaces (or a hybrid of the two):

  • Information Shopping Cart – in which the user browses or searches to choose the content (DITA Topics) that she considers relevant, and then places this information in a shopping cart. When done “shopping”, she can organize her document’s table of contents, select a stylesheet, and automatically publish the result to HTML or PDF.
    This approach is appropriate when users are relatively knowledgeable about the content and the structure of their output documents can safely be left up to them. Examples include engineering research, e-learning systems, and customer self-service applications.
  • Personalization Wizard – in which the user answers a number of pre-set questions in a wizard-like interface, and the appropriate content is automatically extracted to produce a final document in HTML or PDF. This approach is appropriate for applications that need to produce a personalized but highly standard manual, such as a product installation guide or regulated policy manual. In this scenario, the document structure and stylesheet are typically preset.

In a hybrid interface, we could use a personalization wizard to dynamically assemble required material in a fixed table of contents – but then use the information shopping cart approach to allow the user to add supplementary material. Or, depending on the application, we might do the same thing but assemble the initial table of contents as a suggestion or starting point only. The first method might be appropriate for a user manual; the second might be better for custom textbooks.
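As a rough illustration of the shopping-cart model (not any particular product's implementation), assembly amounts to pulling the user's chosen standalone topics and stitching them into a single output document. The topic markup below is deliberately simplified DITA-like XML, and the helper names are hypothetical; a real pipeline would typically run the DITA Open Toolkit or a similar processor.

```python
# Hypothetical sketch: assemble user-selected topics into one HTML document.
# The markup is simplified; real DITA topics are richer and validated against a DTD/schema.
import xml.etree.ElementTree as ET

TOPICS = {
    "installing": """<topic id="installing"><title>Installing the product</title>
                     <body><p>Run the installer and follow the prompts.</p></body></topic>""",
    "upgrading": """<topic id="upgrading"><title>Upgrading from 1.x</title>
                    <body><p>Back up your data before upgrading.</p></body></topic>""",
}

def assemble(topic_ids, doc_title):
    """Build a minimal HTML document from the topics the user placed in the cart."""
    parts = [f"<html><body><h1>{doc_title}</h1>"]
    for tid in topic_ids:                      # cart order becomes the table of contents
        topic = ET.fromstring(TOPICS[tid])
        parts.append(f"<h2>{topic.findtext('title')}</h2>")
        parts.extend(f"<p>{p.text}</p>" for p in topic.iter("p"))
    parts.append("</body></html>")
    return "\n".join(parts)

print(assemble(["installing", "upgrading"], "Installation guide"))
```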

Dynamic content delivery is made possible by the kind of topic-based authoring embraced by DITA. A topic is a piece of content that covers a specific subject, has an identifiable purpose, and can stand on its own (i.e., does not require a specific context in order to make sense). Topics don’t start with “as stated above” or end with “as further described below,” and they don’t implicitly refer to other information that isn’t contained within them. In a word, topics are fully reusable, in the sense that they can be used in any context where the information provided by the topic is needed.

The extraction and assembly of relevant topics is made possible by another relatively new standard called XQuery, which can find the right information based on user profiles, filter the results accordingly, and automatically transform them into output formats like HTML or PDF. Of course, this approach is only feasible if the XQuery engine is extremely fast – which led us to build our own dynamic content delivery solution around Mark Logic, an XQuery-based content delivery platform optimized for real-time search and transformation.
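The solution described above does this with XQuery on Mark Logic; as a language-neutral illustration only, the sketch below shows the underlying idea of matching topic metadata against the answers a personalization wizard collects. The attribute names loosely mimic DITA conditional-processing attributes such as audience and platform, and all of the data here is hypothetical.

```python
# Illustrative stand-in (plain Python, not XQuery) for profile-based topic selection.
topics = [
    {"id": "install-win",   "audience": "administrator", "platform": "windows"},
    {"id": "install-linux", "audience": "administrator", "platform": "linux"},
    {"id": "quick-start",   "audience": "end-user",      "platform": "any"},
]

def select_topics(topics, profile):
    """Keep topics whose attributes match the wizard answers ("any" always matches)."""
    def matches(topic):
        return all(topic.get(key, "any") in ("any", value)
                   for key, value in profile.items())
    return [t["id"] for t in topics if matches(t)]

# Answers collected by the personalization wizard.
profile = {"audience": "administrator", "platform": "linux"}
print(select_topics(topics, profile))   # ['install-linux']
```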

The dynamic content delivery approach is an answer to the hunger for relevant, personalized information that pervades today’s organizations. Avoiding the pitfalls of the classic “one size fits all” publishing of the past, it instead allows a highly personalized and relevant interaction with “an audience of one.” I invite you to read more about this in a whitepaper I wrote that is available on our website (www.FlatironsSolutions.com).

The Marginal Influence of E-commerce Search and Taxonomies on Enterprise Search Technologies

As we gear up for Gilbane Boston 2007, the number of possible topics to include in the tracks related to search seems boundless. The search business is in a transitional state, but despite the disarray it remains pivotal in its impact on business and current culture. The sessions will reflect the diversity in the market.

One trend is quite clear: the amount of money and effort being expended on Web search or site search for commercial Web sites makes it the winner in the “search technology” revenue war, with annual revenues well into the billions of dollars. On the other hand, a recent Gartner study put 2006 revenues for enterprise search below $400M. This figure comes from an excellent article, Enterprise Search: Seek and Maybe You’ll Find, by Ben DuPont in Intelligent Enterprise. Check it out.

The distinctions between search on the Web and search within the enterprise are numerous, but here are two. First, Internet Web search revenue is all about marketing. Yes, we use it to discover, learn, find facts, and become more informed. But when companies supply search technology to expose you to their content on the Internet, they do so to facilitate commerce. If it falls into the hands of organizations with other intents, such as libraries or government agencies, so be it.

As we all know, when we are at work, seeking to discover, learn, or find facts to do our jobs better, we need a different kind of search. Thus, we seek a clear search winner built just for our enterprise, with all of its idiosyncrasies. The problem is that what is inside does not look like the rest of the world’s content as it is aggregated for commercial views. Enterprises are unique and sometimes operate chaotically, or, at best, with nuanced views of what information is most important.

The second distinction relates to taxonomies, and the increase in their development and use. I’ve seen a dramatic increase in job postings for “taxonomists” and have managed several projects for enterprises over the years to build these controlled lists of terms for categorizing content. What is noteworthy about recent job opportunities is that most seem to be for customer-facing Web sites. Historically, organizations with substantial internal content (e.g. research reports, patents, laboratory findings, business documents) hired professionals to categorize materials for a narrowly defined audience of specialists. The terminology was often highly specialized and could number in the hundreds or thousands of terms, even for a relatively small enterprise. This is no longer a common practice.
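For readers who have not worked with one, a taxonomy in this sense is simply a controlled hierarchy of approved terms that content is tagged against; the tiny, invented example below shows the shape of the thing, not any particular enterprise's vocabulary.

```python
# Hypothetical mini-taxonomy: a controlled hierarchy of approved terms for tagging content.
TAXONOMY = {
    "Research": {
        "Laboratory findings": {},
        "Patents": {},
    },
    "Business documents": {
        "Contracts": {},
        "Financial reports": {},
    },
}

def approved_terms(tree, path=()):
    """Flatten the hierarchy into the full term paths a tagger is allowed to use."""
    for term, children in tree.items():
        full = path + (term,)
        yield " > ".join(full)
        yield from approved_terms(children, full)

for term in approved_terms(TAXONOMY):
    print(term)
# Research
# Research > Laboratory findings
# ... and so on; real enterprise taxonomies run to hundreds or thousands of terms.
```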

Slow financial growth in enterprise search markets is no surprise. Like many tools designed and marketed for departments not directly tied to revenue generation, search goes begging for solid vertical markets. Search’s companion technologies are also struggling to find a lucrative toehold for use within the organization. Content management systems integrated with rich and efficient taxonomy building and maintenance functions are hard to find.

I am confident that tools in CMS products for building and maintaining complex taxonomies will not improve until enterprises find a solid business reason to put professional human resources into doing content management, taxonomy development, search, and text analytics on their most important knowledge assets. This is a tough business proposition compared to the revenues being driven on the Internet. What businesses need to keep in mind is that without the ability to leverage their internal knowledge content assets better, smarter and faster, there won’t be innovative products in the pipeline to generate commerce. Losing track of your valuable intellectual resources is not a good long term strategy. Once you begin committing to solid content resource management strategies, enterprise technology products will improve to meet your needs.

Links and Connections: Finding the Context

A provocative conversation broke out on one of the discussion groups I monitor last week. “I’m curious how you and others you know are using ‘LinkedIn.com,’” the person asked. “For me, I like who’s in my network, [and] keep asking others to join; but overall I find it to be very static.”
A static network — now there’s a new concept! But there’s a good deal of truth to wondering how these links work within the business environment. Sure, I too have a modest network; I check out my LinkedIn account once or twice a month to see who’s doing what. For me, this substitutes (poorly) for the water-cooler conversations earlier in my career, when I was surrounded by lots of co-workers. There was always the lunch-time gossip and the hallway exchanges . . . did I know that so and so was working on this new skunk-works project? Had I heard that another sales team just surpassed its revenue goals or that a particular key customer now had a new set of requirements?
While linking in through LinkedIn is a poor substitute for the chatter of the co-located workplace, it’s at least the beginning of a business conversation. It maintains its professional aura, boundaries, and rules, in part by continuing to stove-pipe its connections, and not (yet) mashing up its links and membership.
Not so with Facebook, which is now trying to take the “digital natives” (those who grew up with the Internet and the Web) into the workplace. This move — blending the power of networks with mashups — is raising a number of eyebrows. “Friend? Not? It’s One or the Other,” Rob Pegoraro, the personal technology columnist, wrote provocatively in the Washington Post last week.

You could stay in touch with your drinking buddies at MySpace, then schmooze with your business partners at LinkedIn.
But life isn’t always that neat. And when the private and professional overlap at these sites, you can spend more time worrying about your image than building your network.

To be sure, Facebook has a slew of privacy settings — at least 135, according to Pegoraro — but having to define how I want to expose some of my activities to one group of friends and other actions to business colleagues adds complexity to what should be a rather fluid interchange.
What’s missing to my way of thinking is not simply privacy but context. We all have our business personas and our personal personas. We have certain expectations when in a business context, others when in a social context, and still others when “being personal.” Many of our social networks are, in fact, rather complex.
To make these networks useful within a collaborative (and online) business environment, we need to be able to add (and manage) our business contexts. We need to be able to describe (and map) the business purposes for our social networks.
