Curated for content, computing, and digital experience professionals

Category: Publishing & media

New Workshop on Implementing DITA

As part of our Gilbane Onsite Technology Strategy Workshop Series, we are happy to announce a new workshop, Implementing DITA.

Course Description

DITA, the Darwin Information Typing Architecture, is an emerging standard for content creation, management, and distribution. How does DITA differ from other XML applications? Will it work for my vertical industry’s content, from technical documentation to training manuals, from scientific papers to statutory publishing? DITA addresses one of the most challenging aspects of XML implementation: developing a data model that can be used and shared with information partners. Even so, a DITA implementation requires effective process, software, and content management strategies to achieve the benefit promised by the DITA business case: cost-effective, reusable content. This seminar will familiarize you with DITA concepts and terminology and describe business benefits, implementation challenges, and best practices for adopting DITA. We will explore how DITA enables key business processes, including content management, formatting and publishing, multilingual localization, and reusable open content. Attendees will learn to participate in developing an effective DITA content management strategy.

Audience

This is an introductory course suitable for anyone looking to better understand the DITA standard, its terminology, processes, benefits, and best practices. A basic understanding of computer applications and production processes is helpful. Familiarity with XML concepts and publishing is helpful but not required, and no programming experience is necessary.

Topics Covered

  • The Business Drivers for DITA Adoption

  • DITA Concepts and Terminology

  • The DITA Content Model

  • Organizing Content with DITA Maps

  • Processing, Storing & Publishing DITA Content

  • DITA Creation, Management & Processing Tools

  • Multi-lingual Publishing with DITA

  • Extending DITA to work with Other Data Standards

  • Best Practices & Pitfalls for DITA Implementation
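The topics above center on DITA’s topic-and-map model: content is authored as small typed topics, and maps assemble those topics into deliverables. As a rough illustration (the file names and text content here are invented for the sketch; the element names are standard DITA), a minimal concept topic and a map that organizes topics might look like this:

```xml
<!-- intro-dita.dita: a minimal DITA concept topic (content invented for illustration) -->
<concept id="intro-dita">
  <title>What is DITA?</title>
  <conbody>
    <p>DITA structures content as small, typed, reusable topics.</p>
  </conbody>
</concept>
```

```xml
<!-- workshop.ditamap: a DITA map organizing topics into a deliverable -->
<map>
  <title>Implementing DITA Workshop</title>
  <topicref href="intro-dita.dita"/>
  <topicref href="dita-maps.dita"/>
</map>
```

Because the map, not the topic, defines organization and context, the same topic files can be referenced from many maps — the reuse at the heart of the DITA business case.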

For more information and to customize a workshop just for your organization, please contact Ralph Marto by email or at +617.497.9443 x117

eZ Systems Releases Apache Solr-based Open Source Enterprise Search Solution eZ Find 2.0

eZ Systems has released eZ Find 2.0, its Apache Solr-based open source enterprise search solution. eZ Find 2.0, the open source search extension for eZ Publish, adds a number of new features, such as tuning of relevance rankings, facets for drill-down navigation of search results, and spell checking with suggestions on search phrases. eZ Find already included features such as relevance ranking, native support for eZ Publish access rights, keyword highlighting, sophisticated multi-language support, and the ability to search multiple sites containing millions of objects. eZ Find 2.0 is compatible with eZ Publish 4.0 and the upcoming eZ Publish 4.1. eZ Find is free to download and install on eZ Publish sites. It is also a certified extension supported under eZ Publish Premium support and maintenance agreements. http://ez.no/ezfind/download

Webinar Wednesday: 5 Predictions for Publishers in 2009

Please join me on a webinar sponsored by Mark Logic on Wednesday 2/18/09 at 2pm EST. I’ll be covering my five top predictions for 2009 (and beyond). The predictions come largely from a forthcoming research study "Digital Platforms and Technologies for Book Publishers: Implementations Beyond eBook," that Bill Trippe and I are writing. Here are the predictions:

  1. The Domain Strikes Back – Traditional publishers leverage their domain expertise to create premium, authoritative digital products that trump free and informed internet content.
  2. Discoverability Overcomes Paranoia – Publishers realize the value in being discovered online, as research shows that readers do buy whole books and subscriptions based on excerpts and previews.
  3. Custom, Custom, Custom – XML technology enables publishers to cost-effectively create custom products, a trend that has rapidly accelerated in the last six to nine months, especially in the educational textbook segment.
  4. Communities Count – and will exert greater influence on digital publishing strategies, as providers engage readers to help build not only their brands but also their products.
  5. Print on Demand – increases in production quality and cost-effectiveness, leading to larger runs, more short-run custom products and deeper backlists.

I look forward to your questions and comments! Register today at http://bit.ly/WApEW

Winds of Change at Tools of Change

O’Reilly’s Tools of Change conference in New York City this week was highly successful, both inside and outside the walls of the Marriott Marquis. The sessions were energetic, well-attended, and, on the whole, full of excellent insight and ideas about the digital trends taking a firm hold of nearly all sectors of the publishing business. Outside the walls, especially on Twitter, online communities were humming with news and commentary on the conference. (You almost could have followed the entire conference just by following the #toc hashtag on Twitter and accessing the online copies of the presentations.)

But if you had done that, you would have missed the fun of being there. There were some superb keynotes and some excellent general sessions. Notable among the keynote speakers were Tim O’Reilly himself, Neelan Choksi from Lexcycle (Stanza), and Cory Doctorow. The general sessions covered a fairly broad spectrum of topics but were heavy on eBooks and community. Because of my own and my clients’ interests, I spent most of my time in the eBook sessions. The session eBooks I: Business Models and Strategy was content-rich. To begin with, you heard straight from senior people at major publishers with significant eBook efforts (Kenneth Brooks from Cengage Learning, Leslie Hulse from Harper Collins Publishers, and Cynthia Cleto from Springer Science+Business Media). Along with their insight, the speakers and moderator Michael Smith from IDPF assembled an incredibly valuable wiki of eBook business and technical material to back up their talk. I also really enjoyed a talk from Gavin Bell of Nature, The Long Tail Needs Community, in which he made a number of thoughtful points about how publishers need to think longer and harder about how reading engages and changes people, and specifically how a publisher can build community around those changes and activities.

There were a few soft spots in the schedule. Jeff Jarvis’ keynote, What Would Google Do with Publishing?, was more about plugging his new book (What Would Google Do?) than anything else, and was also weirdly out of date, even though the book is hot off the presses, with 20th-century points like “The link changes everything” and “If you’re not searchable, you won’t be found.” (Publishers are often, somewhat unfairly, accused of being Luddite, but they are not that Luddite.) There were also a couple of technical speakers who didn’t make the necessary business connections to the technical points they were making, connections that would have helped the members of the audience who were less technical and more oriented toward publishing products and processes. But these small weaknesses were easily outshone by the many high points, the terrific overall energy, and the clear enthusiasm of the attendees.

One question I have for the O’Reilly folks is how they will keep the energy going. They have a nascent Tools of Change community site. Perhaps they could enlist some paid community managers to seed and moderate conversations, and also tie community activities to other O’Reilly products such as the books and other live and online events.

O’Reilly has very quickly established a very strong conference and an equally strong brand around the conference. With the publishing industry so engulfed in digital change now, I have to think this kind of conference and community can only continue to grow.

On Stimulating Open Data Initiatives

Yesterday the big stimulus bill cleared the conference committee that resolves the Senate and House versions. If you remember your civics, that means it is likely to pass in both chambers and then be signed into law by the president.

Included in the bill are billions of dollars for digitizing important information such as medical records and government information. Wow! That is a lot of investment! The thinking is that inaccessible information locked in paper or proprietary formats costs us billions each year in productivity. Wow! That’s a lot of waste! Also, access to the information could spawn billions of dollars of new products and services, and therefore income and tax revenue. Wow! That’s a lot of growth!

Many agencies and offices have striven to expose useful official information and reports at the federal and state levels. Even so, a lot of data is still locked away, incomplete, or in difficult-to-use forms. A while ago, a Senate official told me that they do not maintain a single, complete, accurate, official copy of the US Statutes internally. Even if this is no longer true, the public often relies on the “trusted” versions that are available only through paid online services. Many other data types, like many medical records, exist only on paper.

There are a lot of challenges, such as security and privacy issues, even intellectual property rights issues. But there are a lot of opportunities too. There are thousands of data sources that could be tapped into that are currently locked in paper or proprietary formats.

I don’t think the benefits will come at the expense of commercial services already selling this publicly owned information as some may fear. These online sites provide a service, often emphasizing timeliness or value adds like integrating useful data from different sources, in exchange for their fees. I think a combination of free government open data resources and delivery tools, plus innovative commercial products will emerge. Maybe some easily obtained data may become commoditized, but new ways of accessing and integrating information will emerge. The big information services probably have more to fear from startups than from free government applications and data.

As it happens, I saw a demo yesterday of a tool that takes all the activity of a state legislature and unifies it under one portal. This allows people to track a bill and all related activity in a single place, for free! The bill working its way through both chambers is connected to related hearing agendas and minutes, which are connected to schedules, with status and other information captured in a concise, dashboard-like screen format (there are other services you can pay for, which fund the site). Each information component came from a different office and was originally in its own specialized format. What we were really looking at was a custom data integration application, built with AJAX technology, that integrates heterogeneous data in a unified view. Very powerful, and yet scalable. The key to its success was strong integration of the data, the connections used to tie the information together. The vendor collected and filtered the data, converted it to a common format, and added the linkage and relationship information to provide an integrated view into the data. All source data is stored separately and maintained by the different offices. Five years ago it would have been a lot more difficult to create this service. Technology has advanced, and the data are increasingly available in manageable forms.
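The integration pattern described above — records from different offices, each in its own format, normalized to a common shape and linked by the bill they relate to — can be sketched roughly as follows. All field names, record shapes, and helper functions here are hypothetical, invented only to illustrate the idea:

```python
# Hypothetical sketch: normalize heterogeneous legislative records
# to a common format, then link them under the bill they relate to.

def normalize_hearing(rec):
    # One office publishes hearing records with its own field names.
    return {"bill": rec["BillNo"], "kind": "hearing", "date": rec["HearingDate"]}

def normalize_status(rec):
    # Another office uses lowercase bill ids and different keys.
    return {"bill": rec["id"].upper(), "kind": "status", "date": rec["as_of"]}

def unify(sources):
    # Group all normalized records under the bill number that ties them together.
    view = {}
    for normalize, records in sources:
        for rec in records:
            item = normalize(rec)
            view.setdefault(item["bill"], []).append(item)
    return view

hearings = [{"BillNo": "HB 101", "HearingDate": "2009-02-10"}]
statuses = [{"id": "hb 101", "as_of": "2009-02-12"}]
view = unify([(normalize_hearing, hearings), (normalize_status, statuses)])
```

The source systems stay untouched; only the normalized, linked view is new — which is why this kind of portal can be layered over many offices without asking any of them to change how they work.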

The government produces a lot of information that affects us daily and that we, as taxpayers and citizens, actually own but have limited or no access to. This includes statutes and regulations, court cases, census data, scientific data and research, agricultural reports, SEC filings, FDA drug information, taxpayer publications, forms, patent information, health guidelines, and much more. The list is really long; I am not even scratching the surface! It also includes more interactive and real-time data, such as geological and water data, weather information, and the status of regulation and legislation changes (like reporting on the progress of the stimulus bill as it worked its way through both chambers). All of these can be made more current, expanded for more coverage, integrated with related materials, and validated for accuracy. There are also new opportunities to open up the process by using forums and social media tools to collect feedback from constituents and experts (like the demo mentioned above). Social media tools may both give people an avenue to express their ideas to their elected officials and serve as a collection tool for gathering raw data that can be analyzed for trends and statistics, which in turn becomes new government data that we can use.

IMHO, this investment in open government data is a powerful catalyst that could actually create or change many jobs and business models. If done well, it could provide significant positive returns, streamline government, open access to more information, and enable new and interesting products and applications.

DPCI Announces Partnership with Mark Logic to Deliver XML-Based Content Publishing Solutions

DPCI, a provider of integrated technology solutions for organizations that need to publish content to Web, print, and mobile channels, announced that it has partnered with Mark Logic Corporation to deliver XML-based content publishing solutions. The company’s product, MarkLogic Server, allows customers to store, manage, search, and dynamically deliver content. Addressing the growing need for XML-based content management systems, DPCI and Mark Logic have been collaborating on several projects, including one that required integration with Amazon’s Kindle reading device. Built specifically for content, MarkLogic Server provides a single solution for search and content delivery that allows customers to build digital content products: from task-sensitive online content delivery applications that place content in users’ workflows to digital asset distribution systems that automate content delivery; from custom publishing applications that maximize content re-use and repurposing to content assembly solutions that integrate content. http://www.marklogic.com, http://www.databasepublish.com

WoodWing Releases Enterprise 6 Content Publishing Platform

WoodWing Software has released Enterprise 6, the latest version of the company’s content publishing platform. Equipped with a new editing application called “Content Station”, Enterprise 6 offers article planning tools, direct access to any type of content repository, and integrated Web delivery functionality. Content Station allows users to create articles for delivery to the Web, print, and mobile devices, and offers out-of-the-box integration with the open-source Web content management system Drupal. Content Station works with Enterprise’s new server plug-ins to allow users to search, select, and retrieve content stored in other third-party repositories such as digital asset management systems, archives, and wire systems. Video, audio, and text files can then be collected into “dossiers”, edited, and set for delivery to a variety of outputs, all from a single user-interface. A built-in XML editor lets authors create documents intended solely for digital output. The content planning application lets managers assign content to users both inside and outside of the office. Enterprise’s Web publishing capabilities feature a direct integration with Drupal. Content authors click on a single button to preview or deliver content directly to Drupal and get information such as page views, ratings, and comments back from the Web CMS. And if something needs to be pulled from the site, editors can simply click “Unpublish”. They don’t have to contact a separate Web editor or navigate through another system’s interface. The server plug-in architecture also allows for any other Web content management system to be connected. http://www.woodwing.com/

Open Government Initiatives will Boost Standards

Following on Dale’s inauguration day post, Will XML Help this President?, we have today’s invigorating news that President Obama is committed to more Internet-based openness. The CNET article highlights some of the most compelling items from the two memos, but I am especially heartened by this statement from the memo on the Freedom of Information Act (FOIA):

I also direct the Director of the Office of Management and Budget to update guidance to the agencies to increase and improve information dissemination to the public, including through the use of new technologies, and to publish such guidance in the Federal Register.

The key phrases are "increase and improve information dissemination" and "the use of new technologies." This is in keeping with the spirit of the FOIA: the presumption is that information (and content) created by or on behalf of the government is public property and should be accessible to the public. This means that the average person should be able to easily find government content and be able to readily consume it, two challenges that the content technology industry grapples with every day.

The issue of public access is in fact closely related to the issue of long-term archiving of content and information. One of the reasons I have always been comfortable recommending XML and other standards-based technology for content storage is that the content and data would outlast any particular software system or application. As the government looks to make government more open, they should and likely will look at standards-based approaches to information and content access.

Such efforts will include core infrastructure, including servers and storage, but also a wide array of supporting hardware and software falling into three general categories:

  • Hardware and software to support the collection of digital material. This includes hardware and software for digitizing and converting analog materials, software for cataloging digital materials with the inclusion of metadata, hardware and software to support data repositories, and software for indexing the digital text and metadata.
  • Hardware and software to support the access to digital material. This includes access tools such as search engines, portals, catalogs, and finding aids, as well as delivery tools allowing users to download and view textual, image-based, multimedia, and cartographic data.
  • Core software for functions such as authentication and authorization, name administration, and name resolution.

Standards such as PDF/A have emerged to give governments a ready format for long-term archiving of routine government documents. But a collection of PDF/A documents does not in and of itself equal a useful government portal. There are many other issues of navigation, search, metadata, and context left unaddressed. This is true even before you consider the wide range of content produced by the government: pictorial, audio, video, and cartographic data are obvious examples, but there is also a wide range of primary source material coming out of areas such as medical research, energy development, public transportation, and natural resource planning.

President Obama’s directives should lead to interesting and exciting work for content technology professionals in the government. We look forward to hearing more.


© 2024 The Gilbane Advisor
