DPCI, a provider of integrated technology solutions for organizations that need to publish content to Web, print, and mobile channels, announced that it has partnered with Mark Logic Corporation to deliver XML-based content publishing solutions. The company’s product, MarkLogic Server, allows customers to store, manage, search, and dynamically deliver content. Addressing the growing need for XML-based content management systems, DPCI and Mark Logic have been collaborating on several projects, including one that required integration with Amazon’s Kindle reading device. Built specifically for content, MarkLogic Server provides a single solution for search and content delivery that allows customers to build digital content products: from task-sensitive online content delivery applications that place content in users’ workflows to digital asset distribution systems that automate content delivery; from custom publishing applications that maximize content re-use and repurposing to content assembly solutions that integrate content. http://www.marklogic.com, http://www.databasepublish.com
WoodWing Software has released Enterprise 6, the latest version of the company’s content publishing platform. Equipped with a new editing application called “Content Station”, Enterprise 6 offers article planning tools, direct access to any type of content repository, and integrated Web delivery functionality. Content Station allows users to create articles for delivery to the Web, print, and mobile devices, and offers out-of-the-box integration with the open-source Web content management system Drupal. Content Station works with Enterprise’s new server plug-ins to allow users to search, select, and retrieve content stored in third-party repositories such as digital asset management systems, archives, and wire systems. Video, audio, and text files can then be collected into “dossiers”, edited, and set for delivery to a variety of outputs, all from a single user interface. A built-in XML editor lets authors create documents intended solely for digital output. The content planning application lets managers assign content to users both inside and outside of the office. Enterprise’s Web publishing capabilities feature a direct integration with Drupal. Content authors click a single button to preview or deliver content directly to Drupal and get information such as page views, ratings, and comments back from the Web CMS. And if something needs to be pulled from the site, editors can simply click “Unpublish”; they don’t have to contact a separate Web editor or navigate through another system’s interface. The server plug-in architecture also allows any other Web content management system to be connected. http://www.woodwing.com/
Following on Dale’s inauguration day post, Will XML Help this President?, we have today’s invigorating news that President Obama is committed to more Internet-based openness. The CNET article highlights some of the most compelling items from the two memos, but I am especially heartened by this statement from the memo on the Freedom of Information Act (FOIA):
I also direct the Director of the Office of Management and Budget to update guidance to the agencies to increase and improve information dissemination to the public, including through the use of new technologies, and to publish such guidance in the Federal Register.
The key phrases are "increase and improve information dissemination" and "the use of new technologies." This is in keeping with the spirit of the FOIA: the presumption is that information (and content) created by or on behalf of the government is public property and should be accessible to the public. This means that the average person should be able to easily find government content and readily consume it, two challenges that the content technology industry grapples with every day.
The issue of public access is in fact closely related to the issue of long-term archiving of content and information. One of the reasons I have always been comfortable recommending XML and other standards-based technology for content storage is that the content and data will outlast any particular software system or application. As the administration looks to make government more open, it should, and likely will, look at standards-based approaches to information and content access.
Such efforts will include not only core infrastructure, such as servers and storage, but also a wide array of supporting hardware and software falling into three general categories:
- Hardware and software to support the collection of digital material. This includes hardware and software for digitizing and converting analog materials, software for cataloging digital materials and enriching them with metadata, hardware and software to support data repositories, and software for indexing the digital text and metadata.
- Hardware and software to support access to digital material. This includes access tools such as search engines, portals, catalogs, and finding aids, as well as delivery tools allowing users to download and view textual, image-based, multimedia, and cartographic data.
- Core software for functions such as authentication and authorization, name administration, and name resolution.
Standards such as PDF/A have emerged to give governments a ready format for long-term archiving of routine government documents. But a collection of PDF/A documents does not in and of itself equal a useful government portal. There are many other issues of navigation, search, metadata, and context left unaddressed. This is true even before you consider the wide range of content produced by the government: pictorial, audio, video, and cartographic data are obvious examples, but there is also the wide range of primary source material that comes out of areas such as medical research, energy development, public transportation, and natural resource planning.
President Obama’s directives should lead to interesting and exciting work for content technology professionals in the government. We look forward to hearing more.
Adobe Systems Incorporated (Nasdaq:ADBE) announced Adobe Technical Communication Suite 2 software, an upgrade of its solution for authoring, reviewing, managing, and publishing rich technical information and training content across multiple channels. Using the suite, technical communicators can create documentation, training materials, and Web-enabled user assistance containing both traditional text and 3D designs along with rich media, including Adobe Flash Player compatible video, AVI, MP3, and SWF file support. The enhanced suite includes Adobe FrameMaker 9, the latest version of Adobe’s technical authoring and DITA publishing solution; Adobe RoboHelp 8, a major upgrade to Adobe’s help system and knowledge base authoring tool; Adobe Captivate 4, an upgrade to Adobe’s eLearning authoring tool; and Photoshop CS4, a new addition to the suite. The suite also includes Adobe Acrobat 9 Pro Extended and Adobe Presenter 7. Adobe Technical Communication Suite 2 is a complete solution that offers improved productivity along with support for standards-based authoring, including the Darwin Information Typing Architecture (DITA), an XML-based standard for authoring, producing, and delivering technical information. It enables the creation of rich content and publishing through multiple channels, including XML/HTML, print, PDF, SWF, WebHelp, Adobe FlashHelp, Microsoft HTML Help, OracleHelp, JavaHelp, and Adobe AIR. FrameMaker 9 offers a new user interface. It supports hierarchical books and DITA 1.1, and makes it easier to author topic-based content. In addition, FrameMaker 9 provides the capability to aggregate unstructured, structured, and DITA content in a seamless workflow. Using a PDF-based review workflow, authors can import and incorporate feedback. Adobe RoboHelp 8 allows technical communicators to author XHTML-compliant professional help content. The software also adds enhanced support for lists and tables, a new CSS editor, pages and templates, and new search functionality. Adobe Technical Communication Suite 2 is immediately available in North America. Estimated street price for the suite is US$1899. FrameMaker 9, RoboHelp 8, and Captivate 4 are available as standalone products as well. Estimated street price is US$999 each for FrameMaker 9 and RoboHelp 8, and US$799 for Captivate 4. http://www.adobe.com
Today I will address a question I have grappled with for years: can non-structured authoring tools, e.g., word processors, be used effectively to create structured content? I have been involved for some time in projects for various state legislatures and publishers trying to use familiar word processing tools to create XML content. So far, based on my experiences, I think the answer is a definite “maybe”. Let me explain and offer some rules for your consideration.
First, understand that there is a range of validation and control possible in structured editing, from supporting a very loose data model to a very strict one. A loose data model might enforce a vocabulary of element type names but very little in the way of the sequence and occurrence rules or data typing that would be required in a strict data model; the sketch after Rule 1 makes this concrete. Also remember that the rules expressed in your data model should be based on your business drivers, such as regulatory compliance and internal policy. Therefore:
Rule Number 1: The stricter your data model and business requirements are, the more you need a real structured editor. IMHO, only very loose data models can effectively be supported in unstructured authoring tools.
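To make the distinction concrete, here is a minimal sketch in DTD syntax; the element names are hypothetical and not drawn from any of the projects mentioned above. The loose model accepts nearly anything, while the strict model enforces sequence and occurrence rules:

    <!-- Loose model: a chapter may contain any declared
         element or character data, in any order -->
    <!ELEMENT chapter ANY>

    <!-- Strict model: a chapter is exactly one title, then one
         or more paragraphs, then zero or more sections -->
    <!ELEMENT chapter (title, para+, section*)>
    <!ELEMENT section (title, para+)>
    <!ELEMENT title   (#PCDATA)>
    <!ELEMENT para    (#PCDATA)>

An unstructured tool can usually approximate the loose model with named styles; the strict model, with its enforced order and containment, is where a true structured editor earns its keep.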
Also, unstructured tools use a combination of formatting-oriented elements and styles to emulate a structured editing experience. Styles tend to be very flat, with limited processing controls that can be applied to them. For instance, a heading style in an unstructured environment is usually applied only to the bold headline, which is followed by a different style for the paragraphs that follow. In a structured environment, the heading and paragraphs would share a container element, perhaps chapter, that clearly indicates the boundaries of the chapter (see the sketch after Rule 2). Structured data is therefore less ambiguous than unstructured data, and ambiguity is easier for humans to deal with than for computers, which want everything explicitly marked up. It is important to know who is going to consume, process, manage, or manipulate the data. If these processes are mostly manual ones, then unstructured tools may be suitable. If you hope to automate much of the processing, such as page formatting, transforms to HTML and other formats, or reorganizing the data, then you will quickly find the limitations of unstructured tools. Therefore:
Rule Number 2: Highly automated and streamlined processes usually require content to be created in a true structured editor. Very flexible content that is consumed or processed mostly by humans may support the use of unstructured tools.
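Here is the style-versus-container point as a sketch, comparing the flat, style-based markup a word processor typically stores with its structured equivalent; the element and style names are hypothetical:

    <!-- Flat: the chapter boundary is only implied by style names -->
    <p style="Heading1">Introduction</p>
    <p style="Body">First paragraph of the chapter...</p>
    <p style="Body">Second paragraph...</p>

    <!-- Structured: the container makes the boundary explicit -->
    <chapter>
      <title>Introduction</title>
      <para>First paragraph of the chapter...</para>
      <para>Second paragraph...</para>
    </chapter>

A human reads both versions the same way; a page formatter or an HTML transform has to guess where the chapter ends in the first version, and can simply read it off the markup in the second.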
Finally, the audience for the tools may influence how structured the content creation tools can be. If your user audience includes professional experts, such as legislative attorneys, you may not be able to convince them to use a tool that behaves differently from the word processor they are used to. They need to focus on the intellectual act of writing and on how one law might affect other laws. They don’t want to have to think about the editing tool and the markup it uses the way some production editors might. It is also good to remember that working under tight deadlines affects how much structure can be “managed” by the authors. Therefore:
Rule Number 3: Structured tools may be unsuitable for some users due to the type of writing they perform or the pressures of the environment in which they work.
By the way, a structured editing tool may be an XML structured editor, but it could also be a Web form, application dialog, Wiki, or some other interface that can enforce the rules expressed in the data model. But this is a topic for another day.
Here at Gilbane Boston, we just heard from Michael Edson, Director of Web and New Media Strategy, Office of the CIO, Smithsonian Institution. His talk described the Smithsonian Institution’s current Web and New Media strategy process and the cultural, technical, and organizational implications of the vision of a Smithsonian Commons: a critical mass of content, services, and tools designed to fuel innovation and stimulate engagement with the world’s scientific and cultural knowledge.
Many of the efforts are nascent, but this project on Flickr gives you a nice idea of the potential for this kind of effort.
I always took footnotes for granted. You need them as you’re writing: you insert an indicator at the right place, and it points the reader to an amplification, a citation, or an off-hand comment, while staying out of the way of the point you’re trying to make.
Some documents don’t need them, but some require them (e.g., scholarly documents, legal documents). In those documents, the footnotes contain such important information that, as Barry Bealer suggests in When footnotes are the content, “the meat [is] in the footnotes.”
The web doesn’t make it easy to represent footnotes. Footnotes on the Web argues that HTML is barely up to the task of presenting footnotes in any effective form.
But if you were to recreate the whole thing from scratch, without static paper as a model, how would you model footnotes?
In a document, a footnote is composed of two pieces of related information. One is the point you’re trying to make, typically a new one. The other is some pre-existing reference material that presumably supports your point. If it is always the new material that points at the existing, supporting material, then we’re building an information taxonomy bottom up, with the unfortunate property that entering at the higher levels prevents us from seeing the lower levels through explicitly-stated links.
To be fair, there are good reasons for connections to be bidirectional. Unidirectional links are forgivable for the paper model, with its inherently temporal life. But the WWW is more malleable, and the second end of a bidirectional link doesn’t have to be published at the same time as the first. In this sense, HTML’s linking mechanism, the <a href="over_there"> construct, is fundamentally broken. Google’s founders exploited just this characteristic of the web to build their company on a solution to a problem that needn’t have been.
And people who have lived through the markup revolution from the days of SGML and HyTime know that it shouldn’t have been.
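In practice, Web authors fake bidirectionality with a pair of hand-maintained one-way links. Here is a minimal sketch in HTML; the identifiers are hypothetical:

    <!-- The new point links down to its supporting note... -->
    <p>Standards-based markup outlasts its software.<a id="ref1" href="#fn1">[1]</a></p>

    <!-- ...and a hand-maintained return link points back up -->
    <p id="fn1">1. See the SGML and HyTime literature.
      <a href="#ref1">return</a></p>

Each <a href> still points only one way; the connection reads as bidirectional only because the author maintains both ends by hand.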
But footnotes still only point bottom up. Fifteen to twenty years on, many of the deeper concepts of the markup revolution are still waiting to flower.
Clickability announced the immediate availability of the Clickability Media Solution. The Media Solution provides a centrally managed SaaS content repository that enables large media companies to share digital content across their entire organization and publish it to multiple devices and web channels. Companies of all sizes can leverage the multi-tenant content repository as a hub for innovative content sharing, syndication, and distribution strategies. Additional benefits to companies large and small include reduced operational costs through greater efficiencies and the ability to build active social media communities. The solution is designed to maximize the value of every piece of content in a customer’s repository. Content can be tagged and annotated for search and reuse. Assets can be linked and shared across channels and publications. The repository allows companies to create targeted microsites or regional portals that rely on metadata to automatically populate with appropriate content and contextual links. Clickability also offers interactive features, which include social networking, blogging, video serving, ticketing, personalized calendars, site customization, and an on-demand ad server that ties ads to specific sections and pages of a site. http://www.clickability.com