
Month: April 2007

Siderean and Inxight Federal Systems Announce Partnership to Deliver Relational Navigation to Federal Government

Siderean Software announced that it has entered into a reseller agreement with Inxight Federal Systems. Effective immediately, Siderean will be added to Inxight’s GSA-approved price list. Inxight’s software structures unstructured data by “reading” text and extracting important entities, such as people, places and organizations. It also extracts facts and events involving these entities, such as travel events, purchase events, and organizational relationships. Siderean’s Seamark Navigator then builds on this newly structured data, providing a relational navigation interface that allows users to put multi-source content in context to help improve discovery, access and participation across the information flow. Seamark Navigator uses the Resource Description Framework (RDF) and Web Ontology Language (OWL) standards developed by the World Wide Web Consortium (W3C). Siderean’s Seamark Navigator will provide an important add-on to Inxight’s metadata harvesting and extraction solutions. Inxight’s government customers will now be able to leverage Siderean’s relational navigation solutions to access more relevant and timely results derived from the full context and scope of information. As users refine their searches, Siderean dynamically displays additional navigation options and gives users summaries of those items that best match search criteria. Siderean also enables users to illuminate unseen relationships between sets of information and leverage human knowledge to explore information interactively. http://www.siderean.com, http://www.inxightfedsys.com
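
Neither vendor’s internals are documented here, but the general pattern the announcement describes (extract entities from text, express them and their relationships as RDF, then navigate that graph) can be sketched with the open source rdflib library. The namespace, entities, and worksFor relationship below are invented for illustration and are not taken from Siderean’s or Inxight’s products.

```python
# Minimal, hypothetical sketch: entities extracted from text represented as
# RDF triples, the kind of graph a relational/faceted navigator can consume.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/entities/")  # invented namespace

g = Graph()
g.bind("ex", EX)

person = URIRef(EX["jane-doe"])
org = URIRef(EX["acme-corp"])

g.add((person, RDF.type, EX.Person))
g.add((person, EX.name, Literal("Jane Doe")))
g.add((org, RDF.type, EX.Organization))
g.add((org, EX.name, Literal("Acme Corp")))
# An extracted "fact" relating the two entities, e.g. an employment relationship.
g.add((person, EX.worksFor, org))

print(g.serialize(format="turtle"))
```

A navigation layer can then offer the entity types and relationships in such a graph as facets, which is the idea behind the relational navigation described above.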

Gilbane Group update

Of course I had every intention of blogging about the highlights of Gilbane San Francisco, but our attention has already moved to our upcoming Washington conference, and even to our Fall Boston conference. Here are some quick notes on what’s new:

  • There was a lot of activity at Gilbane San Francisco, but what got the most press was the second part of our opening keynote where we had Google and Microsoft facing off over enterprise search. A little web and blog searching will turn up some of the reaction.
  • Join us and CMS Watch in Washington, DC, June 5-6 for Gilbane Washington. That’s right, a little over 2 weeks away. The instructions for submitting proposals can be found at: https://gilbane.com/speaker_guidelines.html. We will be covering our usual topic areas with a focus on content and enterprise web technologies (including versions 1.0, 2.0, 3.0, etc.). If you’ve never been to our Boston conference you can view all the info from the 2006 event at: http://gilbaneboston.com/06/. For this year’s conference see: http://gilbaneboston.com
  • We have 2 webinars coming up in the next 2 weeks:
    • Webinar: Bill Trippe talks with Minette Norman of Autodesk. Wednesday, April 25, 2:00 pm EST. Registration is open. Sponsored by Idiom.
    • Webinar: Bill Trippe and Michelle Huff from Oracle discuss multi-website content management. Wednesday, May 2, 12:00 pm EST. Registration is open. Sponsored by Oracle.
  • See our latest case study: Building an Enterprise-Class System for Globalization: Autodesk’s Worldwide Initiative by Senior Analyst Bill Trippe
  • See our latest white paper: Strategic eMarketing: Converting Leads into Profits, by Lead Analyst Leonor Ciarlone
  • Remember we have 8 blogs in addition to this one (see the links in the left column), all of which have recent content; even the CTO Blog, which was quiet for some time, has a new entry from Eric Severson.
  • Reminder: All our blogs support multiple types of tagging as well as comments and trackbacks. Subscriptions to all of them are available via FeedBurner, which provides additional features. We are adding additional FeedBurner plugins; for example, as of yesterday you can even “Twit” items from our News Blog if you are into Twitter (see http://feeds.feedburner.com/ContentManagementNews). I am not sure how useful this is, but it was easy and free to add, and I know some of you use Twitter.

Managing Content for Compliance — May 4 in Washington, DC

I’ll be giving a talk on “Managing Content for Compliance: A Framework” at the annual IT Compliance Institute Conference — Friday, May 4th, in Crystal City, Virginia.

Sneak peek at my recommended actions:

  • Secure senior leadership
  • Develop policies and procedures
  • Develop information architecture and systems
  • Expect to iterate

No real magic — just a lot of hard work! Fortunately, the smart use of relevant content technologies will help.

Adobe to Open Source Flex

Adobe Systems Incorporated (Nasdaq: ADBE) announced plans to release source code for Adobe Flex as open source. This initiative will let developers worldwide participate in the growth of the framework for building cross-operating system rich Internet applications (RIAs) for the Web and enabling new Apollo applications for the desktop. The open source Flex SDK and documentation will be available under the Mozilla Public License (MPL). Available since June 2006, the free Adobe Flex SDK includes the technologies developers need to build effective Flex applications, including the MXML compiler and the ActionScript 3.0 libraries that make up the Flex framework. This announcement expands on Adobe’s contribution of source code for the ActionScript Virtual Machine to the Mozilla Foundation under the Tamarin project, the use of the open source WebKit engine in the “Apollo” project, and the release of the full PDF 1.7 specification for ISO standardization. Using the MPL for open sourcing Flex will allow full and free access to source code. Developers will be able to freely download, extend, and contribute to the source code for the Flex compiler, components and application framework. Adobe also will continue to make the Flex SDK and other Flex products available under their existing commercial licenses, allowing both new and existing partners and customers to choose the license terms that best suit their requirements. Starting this summer with the pre-release versions of the next release of the Flex product line, code named “Moxie,” Adobe will post daily software builds of the Flex SDK on a public download site with a public bug database. The release of open source Flex under the MPL will occur in conjunction with the final release of Moxie, currently scheduled for the second half of 2007. http://www.adobe.com/go/opensourceflex

Turning Around a Bad Enterprise Search Experience

Many organizations have experimented with a number of search engines for their enterprise content. When the search engine is deployed within the bounds of a specific content domain (e.g., a QuickPlace site), the user can assume that the content being searched is within that site. However, an organization’s intranet portal with a free-standing search box comes with a different expectation. Most people assume that search will find content anywhere in the implied domain, and most of us believe that all content belonging to that domain (e.g., a company) is searchable.

I find it surprising how many public Web sites for media organizations (publishers) don’t appear to have their site search engines pointing to all the sub-sites indicated in their site maps. I know from my experience at client sites that the same is often true for enterprise search. The reasons are numerous and diverse; that is commentary for another entry. However, one simple notation under or beside the search box can clarify expectations. A simple link to a “list of searchable content” will underscore the caveat, or at least tip off the searcher that the content is bounded in some way.
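
One lightweight way to measure the gap described above is to compare the URLs a site map advertises against the URLs the site search engine actually has indexed. The sketch below is hedged: the sitemap URL is hypothetical, and the indexed-URL list would come from whatever export or API your particular search engine provides, so it is left as a stub.

```python
# Rough sketch: find sitemap pages that the site search index does not cover.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # hypothetical
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(url):
    """Return every <loc> URL listed in a standard sitemap.xml."""
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    return {loc.text for loc in tree.findall(".//sm:loc", NS)}

def coverage_gap(advertised, indexed):
    """URLs the site map advertises but the search index does not include."""
    return sorted(advertised - indexed)

if __name__ == "__main__":
    advertised = sitemap_urls(SITEMAP_URL)
    indexed = set()  # placeholder: load from your search engine's index export
    for url in coverage_gap(advertised, indexed):
        print("not searchable:", url)
```

Publishing the resulting list, or simply linking to it from the search box, is one way to set the expectations discussed above.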

When users in an organization come to expect that they will not find, through their intranet, content they are seeking but know to exist somewhere in the enterprise, they become cynical and distrustful. Having a successful intranet portal is all about building trust and confidence that the search tool really works or “does the job.” Once that trust is broken, new attempts to change attitudes by deploying a new search engine, expanding the license to include more content, or tuning the engine to return more reliable results are not going to change minds without a lot of communication work to explain the change. I know that the average employee believes all the content in the organization should be brought together in some form of federated search, but has learned that it isn’t. The result is that they confine themselves to embedded search within specific applications and ignore any option to “search the entire intranet.”

It would be great to see comments from readers who have changed a Web site search experience from a bad scene to one with a positive traffic gain with better search results. Let us know how you did it so we can all learn.

Translation and Web Content Management Under One Roof – SDL Tridion

The integration of content and translation management workflows has a great deal of value for globalization projects. And as we’ve discussed, there are various market approaches to streamlining these increasingly complex processes. With the announcement of SDL International’s intended acquisition of Tridion (set to close by the end of May), buyers officially have an additional approach — translation and Web content management under one roof.

In this case, the opportunity is clearly for marketers who struggle to meet growing corporate and consumer demand for a multi-site, multi-lingual Web presence that drives revenue and protects brand (for the former) and delivers localized customer experiences (for the latter). The time is right for this marriage, as globalization continues to climb toward the top of the CIO’s “must-have” strategy list.

SDL and Tridion are undoubtedly headed toward a cohesive integration of their respective TMS and Web CMS technologies, which makes a great deal of sense for those organizations wishing to standardize on one platform for Web site translation and management. As we would expect, API-level workflow integration is at the top of the priority list, according to executives from both companies. There’s quite a bit of potential for more when one considers the ability of SDL’s Author Assistant to enhance the value of content at its source, i.e., during content creation, as well as the power of Tridion’s Communications Statistics module to drive process improvements based on data culled from user activities. Safe to say it will be interesting to watch the evolution of this combined product line for its impact on the Web content lifecycle.

As we’ve seen in the ECM and BPM suite market, the trend toward vendor consolidation changes the landscape dramatically and spurs the inevitable “suite versus best-of-breed” debate. Within the globalization market, we expect this acquisition to follow suit — after all, the marriage crosses the “dotted line” by solidifying the value of content and translation management integration.

At the end of the day, however, the buyer defines the purchasing decision that makes the most sense, based on the most pressing — or painful — business requirements. As it stands now, Tridion will be a separate division within SDL and operate autonomously. R5 will be sold as a module within the SDL product set and renamed SDL Tridion R5. In parallel, SDL TMS will be sold as a Tridion module.

In effect, this strategy leaves decision-making in the hands of the buyer, as it should be. Hence, the immediate goal for this marriage is to demonstrate just how compelling the promise of a “total solution” will be. The CMPros community is already weighing in on the potential; Gilbane readers: join the conversation! We’d like to continue this discussion with your feedback.

DITA and Dynamic Content Delivery

Have you ever waded through a massive technical manual, desperately searching for the section that actually applied to you? Or have you found yourself performing one search after another, collecting one-by-one the pieces of the answer you need from a mass of documents and web pages? These are all examples of the limitations of static publishing; that is, the limitations of publishing to a wide audience when people’s needs and wants are not all the same. Unfortunately, this classic “one size fits all” approach can end up fitting no one at all.

In the days when print publishing was our only option, and we thought only in terms of producing books, we really had no choice but to mass-distribute information and hope it met most people’s needs. But today, with Web-based technology and new XML standards like DITA, we have other choices.

DITA (Darwin Information Typing Architecture) is the hottest thing to have hit the technical publishing world in a long time. With its topic-based approach to authoring, DITA frees us from the need to think in terms of “books”, and lets us focus on the underlying information. With DITA’s modular, reusable information elements, we can not only publish across different formats and media but also flexibly recombine information in almost any way we like.
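
For readers new to DITA, a minimal concept topic looks something like the fragment below. The id and content are hypothetical, and a real topic would also carry a DOCTYPE and richer markup, but the point is that the topic is self-contained and can be pulled into any assembly that needs it.

```python
# A deliberately simplified, hypothetical DITA concept topic, parsed with the
# Python standard library to show that a topic carries its own id and title.
import xml.etree.ElementTree as ET

TOPIC_XML = """
<concept id="resetting-the-password">
  <title>Resetting the password</title>
  <conbody>
    <p>Use the admin console to reset a user's password.</p>
  </conbody>
</concept>
"""

topic = ET.fromstring(TOPIC_XML)
print(topic.get("id"), "-", topic.findtext("title"))
```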

Initial DITA implementations have focused primarily on publishing to pre-defined PDF, HTML and Help formats – that is, on static publishing. But the real promise of DITA lies in supporting dynamic, personalized content delivery. This alternative publishing model – which I’ll call dynamic content delivery – involves “pulling” rather than “pushing” content, based on the needs of each individual user.
In this self-service approach to publishing, end users can assemble their own “books” using two kinds of interfaces (or a hybrid of the two):

Information Shopping Cart – in which the user browses or searches to choose the content (DITA Topics) that she considers relevant, and then places this information in a shopping cart. When done “shopping”, she can organize her document’s table of contents, select a stylesheet, and automatically publish the result to HTML or PDF.

This approach is appropriate when users are relatively knowledgeable about the content, and where the structure of their output documents can be safely left up to them. Examples include engineering research, e-learning systems, and customer self-service applications.

Personalization Wizard – in which the user answers a number of pre-set questions in a wizard-like interface, and the appropriate content is automatically extracted to produce a final document in HTML or PDF.

This approach is appropriate for applications that need to produce a personalized but highly standardized manual, such as a product installation guide or regulated policy manual. In this scenario, the document structure and stylesheet are typically preset.

In a hybrid interface, we could use a personalization wizard to dynamically assemble required material in a fixed table of contents – but then use the information shopping cart approach to allow the user to add supplementary material. Or, depending on the application, we might do the same thing but assemble the initial table of contents as a suggestion or starting point only. The first method might be appropriate for a user manual; the second might be better for custom textbooks.
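
Whatever the interface, the assembly step behind both models is the same: take the topics the user selected (or that matched their wizard answers) and emit a table of contents that a DITA processor can publish. The sketch below uses invented file names and a simplified map structure rather than full DITA markup.

```python
# Sketch of the assembly step: build a simple DITA-style map from the topics
# a user chose in a shopping cart or that a wizard selected for them.
from xml.etree.ElementTree import Element, SubElement, tostring

def build_map(title, topic_hrefs):
    ditamap = Element("map")
    SubElement(ditamap, "title").text = title
    for href in topic_hrefs:
        SubElement(ditamap, "topicref", {"href": href})
    return tostring(ditamap, encoding="unicode")

# Hypothetical selection, in the order the user arranged it.
selected = ["installing.dita", "resetting-the-password.dita", "faq.dita"]
print(build_map("My Installation Guide", selected))
```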

Dynamic content delivery is made possible by the kind of topic-based authoring embraced by DITA. A topic is a piece of content that covers a specific subject, has an identifiable purpose, and can stand on its own (i.e., does not require a specific context in order to make sense). Topics don’t start with “as stated above” or end with “as further described below,” and they don’t implicitly refer to other information that isn’t contained within them. In a word, topics are fully reusable, in the sense that they can be used in any context where the information provided by the topic is needed.

The extraction and assembly of relevant topics is made possible by another relatively new standard called XQuery, which can find the right information based on user profiles, filter the results accordingly, and automatically transform the results into output formats like HTML or PDF. Of course, this approach is only feasible if the XQuery engine is extremely fast – which led us to build our own dynamic content delivery solution offering around Mark Logic, an XQuery-based content delivery platform optimized for real-time search and transformation.
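
The solution described above runs XQuery against Mark Logic; purely to illustrate the profile-driven filtering idea (and not the actual implementation), here is the same selection logic expressed in Python over invented topic metadata.

```python
# Illustration only: choose topics whose metadata matches a user profile.
topics = [
    {"id": "install-windows", "audience": "administrator", "platform": "windows"},
    {"id": "install-linux",   "audience": "administrator", "platform": "linux"},
    {"id": "using-the-ui",    "audience": "end-user",      "platform": "any"},
]

profile = {"audience": "administrator", "platform": "linux"}

def matches(topic, profile):
    # A topic matches if each profiled attribute is unrestricted ("any")
    # or equal to the user's value.
    return all(topic.get(key) in ("any", value) for key, value in profile.items())

print([t["id"] for t in topics if matches(t, profile)])  # ['install-linux']
```

In a real deployment the selection, filtering, and transformation all happen inside the XQuery engine, which is why its speed matters.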

The dynamic content delivery approach is an answer to the hunger for relevant, personalized information that pervades today’s organizations. Avoiding the pitfalls of the classic “one size fits all” publishing of the past, it instead allows a highly personalized and relevant interaction with “an audience of one.” I invite you to read more about this in a whitepaper I wrote that is available on our website (www.FlatironsSolutions.com).
