Smart Content and the Pull of Search Engine Optimization

One of the conclusions of our report Smart Content in the Enterprise (forthcoming next week) is that a little enrichment goes a long way. It’s important to build on your XML infrastructure, enrich your content incrementally (to the extent your business environment can support it), and expect to iterate over time.

Consider what happened at Citrix, reported in our case study Optimizing the Customer Experience at Citrix: Restructuring Documentation and Training for Web Delivery. The company had adopted DITA for structured publishing several years earlier. Yet simply repurposing the content in product manuals for print and electronic distribution, and publishing the same information as HTML and PDF documents, did not change the customer experience.

A few years ago, Citrix information specialists had a key insight: customers expected to find support information by googling the web. To be sure, there was a lot of content about various Citrix products out in cyberspace, but very little of it came directly from Citrix. Consequently the most popular solutions available via web-wide searching were not always reliable, and the detailed information from Citrix (buried in their own manuals) was rarely found.

What did Citrix do? Despite limited resources, the documentation group began to add search metadata to the product manuals. With DITA, there was already a predefined structure for topics, used to define sections, chapters, and manuals. Authors and editors could simply include additional tagged metadata that identified and classified the contents – and thus expose the information to Google and other web-wide search engines.
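In DITA, this kind of search metadata lives in a topic’s prolog. Here is a minimal sketch of what such tagging can look like; the element names are standard DITA, but the keyword and metadata values are invented for illustration, not Citrix’s actual tagging:

```xml
<topic id="load-balancing-config">
  <title>Configuring Load Balancing</title>
  <prolog>
    <metadata>
      <!-- keywords capture the terms customers actually search for -->
      <keywords>
        <keyword>load balancing</keyword>
        <keyword>virtual server</keyword>
      </keywords>
      <!-- othermeta classifies the topic, e.g. by product and version -->
      <othermeta name="product" content="ExampleProduct"/>
      <othermeta name="version" content="9.2"/>
    </metadata>
  </prolog>
  <body>
    <p>To configure load balancing, ...</p>
  </body>
</topic>
```

When topics like this are published to HTML, the prolog values can be emitted as meta tags in the page head, which is what exposes them to Google and other web-wide search engines.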

Nor was there a lot of time or many resources for up-front design and detailed analysis. To paraphrase a perceptive information architect we interviewed, “Getting started was a lot like throwing the stuff against a wall to see what sticks.” At first, tags simply summarized existing chapter and section headings. Significantly, this was a good-enough place to start.

Specifically, once Citrix was able to join the online conversation with its customers, it was also able to begin tracking popular search terms. Then over time and with successive product releases, the documentation group was able to add additional tagged metadata and provide ever more focused (and granular) content components.

What does this mean for developing smart content and leveraging the benefits of XML tagging? Certainly the more precise your content enrichment, the more findable your information is going to be. When considering the business benefits of search engine optimization, the quality of your tagging can always improve over time. But as a simple value proposition, getting started is the critical first step.

Early Access to Gilbane’s XML Report

If you’ve been reading our recent posts on Gilbane’s new research on XML adoption, you might be wondering how to get the report in advance of its availability from Gilbane later this month.

Smart Content in the Enterprise: How Next Generation XML Applications Deliver New Value to Multiple Stakeholders is currently offered by several of the study sponsors: IBM, JustSystems, MarkLogic, MindTouch, Ovitas, Quark, and SDL.

We’ll also be discussing our research in real time during a webinar hosted by SDL on November 4. Look for details within the next few weeks.

New Paper – Looking at Website Governance

I am delighted that I’ve just completed my first solo paper here as an analyst: Looking Outside the CMS Box for Enterprise Website Governance. I say solo, but I should start by acknowledging a great deal of support from Mary Laplante as my transition from vendor to analyst continues.

This paper has allowed me to pick at a subject that has long been in the back of my mind, both in terms of CMS product strategy and in terms of what we, as content management professionals, need to be cognizant of as we get swept up in engaging web experiences: corporate content governance.

When I write and talk about web engagement or the web experience, I often refer to the first impression – your website is where all of your audience, prospects, customers or citizens first meet you. They don’t all see your shiny headquarters building, meet the friendly receptionist or see that you have today’s copy of The Times on the coffee table – but they do see your website.

Mistakes such as a misspelling, an outdated page or a brand inconsistency all reflect badly on your attention to detail. They tarnish the perceived professionalism of your services, the reliability of your products, and the attention you will pay to meeting consumer needs.

Of course, when those lapses are related to compliance issues (such as regulatory requirements and accessibility standards), they can be even more damaging, often resulting in financial penalties and a serious impact on your reputation.

I see this governance as the foundation for any content-driven business application, but in this paper we focus on website governance and aim to answer the following questions:

  • What are the critical content governance risks and issues facing the organization? 
  • Is your CMS implementation meeting these challenges? 
  • What solutions are available to address governance needs that are not addressed by CMS? 

The paper is now available for download from our Beacon library page and from Magus, who sponsored it.

Magus are also presenting business seminars on website governance and compliance on October 12 in Washington, DC, and October 14 in New York. My colleague Scott Liewehr will be presenting at those events, drawing on the analysis in the Beacon as part of that seminar program. You can learn more about those events and register on the Magus website.


Focusing on Smart Content — in the Main Blog

If you’re only reading this XML blog, be sure to check out my recent blog post Focusing on Smart Content, which I published in the main Gilbane blog.

Focusing on Smart Content

This summer, Dale Waldt, Mary Laplante, and I have been busy wrapping up our multi-vendor report “Smart Content in the Enterprise: How Next Generation XML Applications Deliver New Value to Multiple Stakeholders.” We’ll be publishing the report in its entirety in a few weeks. We are grateful to our sponsors – IBM, JustSystems, MarkLogic, MindTouch, Ovitas, Quark, and SDL – for supporting our research and enabling us to make headway on this important trend for the future of content technologies on the web. Here’s the link to access some of the case studies that are part of this report.

XML as a tagging standard for content is almost as old as the web itself. XML applications have long proven their significant value—reducing costs, growing revenue, expediting business processes, mitigating risk, improving customer service, and increasing customer satisfaction. But for all the benefits, managers of successful XML implementations have struggled with attempts to bring XML content and applications out of their documentation departments and into their larger enterprises.

So much XML content value remains untapped. What does it take to break out of the XML application silo? What is the magic formula for an enterprise business case that captures and keeps the attention of senior management? These are the issues we set out to address.

We believe that the solution needs to be based on “smart content.” When we tag content with extensive semantic and/or formatting information, we make it “smart” enough for applications and systems to use the content in interesting, innovative, and often unexpected ways. Organizing, searching, processing, discovery, and presentation are greatly improved, which in turn increases the underlying value of the information that customers access and use.
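To make the “smart” distinction concrete, here is a hedged sketch contrasting presentational tagging with semantic tagging of the same fact; the element names are invented for illustration and are not drawn from any particular standard:

```xml
<!-- Presentation-only tagging: an application sees nothing
     but a paragraph of styled text -->
<p><b>WidgetServer 4.1</b> requires 2 GB of RAM.</p>

<!-- Semantic tagging: the same content, now "smart" enough
     to be queried, filtered, and reused by product and version -->
<requirement>
  <product name="WidgetServer" version="4.1"/>
  <resource type="memory" minimum="2" unit="GB"/>
</requirement>
```

With the semantic form, a support portal can, for example, answer “show the memory requirements for version 4.1” without parsing prose – which is the kind of unexpected reuse that makes the content more valuable.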

We started this discussion late last year. We now have the solution-oriented case studies and the additional analysis to reinforce our perspective about the drivers for the digital revolution at hand. We look forward to continuing the conversation with all of you who are seeking to transform the content-related capabilities of your business operations by championing XML applications.

Summer Webinar Recap: In Case You Missed Them

Here’s a quick rundown of summer educational events in which we participated with our partners. View the archived webcasts you might have missed, or refresh your subject matter expertise as your organization heads into fall business activities.

Publishing Production Outsourcing: Wolters Kluwer’s Formula for Success 

Integration Calculus: CMS + TMS = Turbo-Accelerated Creation of Multilingual Product Documentation

Making Quality Part of Your Content DNA

Content, Context, and Conversation: The Three Kings of Consumer Engagement

Fall webcasts include insights from Gilbane’s 2010 research on XML and multilingual marketing content, customer success stories, eBook challenges, web engagement, and web content governance. Look for announcements on our home page and blogs, in our weekly NewsShark, and through our social media channels.

Adobe to Acquire Day Software

Yesterday another CMS poster child of the late ’90s agreed to be acquired: Adobe Systems Incorporated and Day Software Holding AG announced that the two companies have entered into a definitive agreement for Adobe to acquire all of the publicly held registered shares of Day Software, in a transaction worth approximately US$240 million.

This follows Adobe’s acquisition of Omniture late last year and clearly demonstrates their intent to enter the web experience management (WEM) marketplace that we cover with interest here at Gilbane – we anticipate Adobe bringing together the audience insight gained through Omniture’s web analytics and Day’s CRX content platform.

This will presumably add momentum to Day’s own move into the WEM space with their recent product marketing strategy: they have reinvented themselves to be closer to the marketer, with recent attention paid to functionality such as personalization, analytics, variant testing, and messaging around using their repository for marketing campaigns and asset management. We await firm integration plans with interest.

In addition, Day are longtime advocates of CMS repository standards (JCR and CMIS), something that is also close to our heart at Gilbane. This announcement has also sent tremors through the open source community, as they wonder about Adobe’s commitment to the Apache projects, like Sling and Jackrabbit, that Day have been so supportive of.

Whilst Adobe and Day have been very quick to state that they will maintain Day’s commitment to these community projects, it’s hard not to think that this commitment inside Day is cultural, and we wonder whether it can realistically be maintained as the acquisition matures and Day is brought into the fold.

The acquisition also raises questions about what this means for Alfresco’s two-year relationship with Adobe, which runs pretty deep with OEM integration to Adobe LiveCycle – and Erik Larson (Senior Director of Product Management at Adobe) has publicly stated the intention to integrate Day and LiveCycle to create a ‘full suite of enterprise technologies’. It will be important for the Adobe customers that have adopted the Alfresco-based integration to understand how this will affect them going forward.

One other area that I am sure my colleagues here at Gilbane in the Publishing Technologies practice will be watching with interest is the impact this will have on Adobe’s digital publishing offering.  

As we’ve seen with previous acquisitions, it’s best to be cautious about what the future might hold. From a WEM product strategy perspective, bringing Omniture and Day together makes a great deal of sense to us. The commitment to standards and open source projects is probably safe for now – it has been part of the Day identity and value proposition for as long as I can remember – and one of the most exciting things could be what this acquisition means for digital publishing.

Let’s wait and see… 


Into the Engagement Tier…

Recently I wrote an article for my blog – Taking the W out of CMS – exploring content management and content delivery as separate disciplines; this is a follow-up to that article.

To summarize that article: firstly, to know me professionally is to know that when it comes to the tribes of CMS folks, I am firmly in the WCM tepee.

Secondly, I disagreed the first time this discussion rolled around, as the millennium clicked over – we were all going to use portal platforms, and content management functionality would live in our application server infrastructure (we didn’t, and it doesn’t).

Thirdly, there is a real difference between the systems we are building for tomorrow and those we built back then – our digital engagement activities were single-threaded, following a website groove, and the end was very much the driver for the means.

For the mainstream CMS industry it was a website-centric world, and in most projects and applications the term ‘CMS’ was interchangeable with ‘WCM’. Today we have a fragmented communication channel; it’s the age of the ‘splinternet’ (in this context, a term coined by Josh Bernoff), and the challenge is delivering relevant content consistently to multiple places.

This is not just about devices – our websites are no longer the single and only web destination; folks consume information about our products and services from other web destinations like Facebook and Twitter (to name two). Plus, of course, the needs of customer, consumer and citizen engagement mean that we can chuck in multiple touch points: e-mail, call centres and real life.

We used to get ourselves worked up about ‘baking’ or ‘frying’ content management/delivery applications – about decoupled systems that produce pages versus dynamic content – but (as I said in response to a comment on my original blog post) today’s consumer wants super-dynamic content: fresh caught that day, prepared their way, hot off the griddle – Teppanyaki served to share, family style.

So we have a new level of complexity in the requirements our systems must support for our digital marketers and communicators – a level of complexity that sits between our content repository and our consumer, in what used to be the section of the RFP that simply said “must produce compliant HTML”.

When talking about delivery of content, this is typically where our requirements start to gain some uniqueness between projects.

The question is: you have your well-ordered, neatly filed, approved content – but what are you going to use it for?

A requirement for an approval process supported by workflow is fairly ubiquitous – but if you are a membership organisation that engages its audience over email or a consumer packaged goods company with fifty products and a YouTube channel – your Engagement Tier requirements are going to be quite diverse.

This diversity in requirements means two things to me.

1. As an industry we are very good at understanding, defining and capturing CMS requirements – but how good are we at identifying, understanding and communicating an organisation’s engagement needs?

2. If there are diverse requirements, then there are different solutions – and right now it’s a blend of dynamic web content delivery, marketing automation, campaign management, email, web analytics (etc., etc.). There is no silver vendor bullet – no leader, no wave, no magic quadrant – it’s different strokes for different folks.

It’s this that I want to explore, how do we define those needs and how do we compare tools?

So, into the Engagement Tier – my colleagues here at Gilbane challenged me to draw it. Hmm… right now it’s a box of content, a big arrow and then the consumer.

I am going to need to work on that…