Author: Bill Trippe

New Workshop on Implementing DITA

As part of our Gilbane Onsite Technology Strategy Workshop Series, we are happy to announce a new workshop, Implementing DITA.

Course Description

DITA, the Darwin Information Typing Architecture, is an emerging standard for content creation, management, and distribution. How does DITA differ from other XML applications? Will it work for my vertical industry’s content, from technical documentation to training manuals, from scientific papers to statutory publishing? DITA addresses one of the most challenging aspects of XML implementation: developing a data model that can be used and shared with information partners. Even so, DITA implementation requires effective process, software, and content management strategies to achieve the benefit promised by the DITA business case: cost-effective, reusable content. This seminar will familiarize you with DITA concepts and terminology and describe business benefits, implementation challenges, and best practices for adopting DITA. We will explore how DITA enables key business processes, including content management, formatting and publishing, multi-lingual localization, and reusable open content. Attendees will be able to participate in developing an effective DITA content management strategy.
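To give a flavor of the data model, here is a minimal sketch of a DITA concept topic; the element names follow the OASIS DITA DTDs, while the id, title, and text are invented for illustration:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
    <concept id="product_overview">
      <title>Product Overview</title>
      <shortdesc>A short, self-contained description of the product.</shortdesc>
      <conbody>
        <p>Because the topic is typed and self-contained, the same file can be
           reused in a user guide, a training course, or a support article
           without modification.</p>
      </conbody>
    </concept>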

Audience

This is an introductory course suitable for anyone looking to better understand the DITA standard, its terminology, processes, benefits, and best practices. A basic understanding of computer applications and production processes is helpful. Familiarity with XML concepts and publishing is helpful but not required. No programming experience is required.

Topics Covered

  • The Business Drivers for DITA Adoption

  • DITA Concepts and Terminology

  • The DITA Content Model

  • Organizing Content with DITA Maps (see the sketch after this list)

  • Processing, Storing & Publishing DITA Content

  • DITA Creation, Management & Processing Tools

  • Multi-lingual Publishing with DITA

  • Extending DITA to work with Other Data Standards

  • Best Practices & Pitfalls for DITA Implementation
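
To show how the pieces named above fit together, here is a minimal sketch of a DITA map, the mechanism covered under “Organizing Content with DITA Maps”; the file names are hypothetical:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
    <map title="User Guide">
      <topicref href="product_overview.dita">
        <topicref href="installing.dita"/>
        <topicref href="configuring.dita"/>
      </topicref>
      <topicref href="troubleshooting.dita"/>
    </map>

The same topics can be referenced from a different map to produce, say, a training manual, which is where much of the reuse promised by the DITA business case comes from.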

For more information and to customize a workshop just for your organization, please contact Ralph Marto by email or at +617.497.9443 x117

The Marvels of Project Gutenberg

[Photo from The Pecan and its Culture, 1906 (p146.jpg)]

If you don’t know Project Gutenberg (and you should), it’s well worth spending some time over there to familiarize yourself with its contents and the way it has gone about creating the collection.

I keep track of it through the RSS feed of recently added books, which is updated nightly. That’s where I find out about new books like The Pecan and its Culture, published in 1906, which includes the photo shown at left.

On their own, the one image and the one title are perhaps not so interesting or so significant (though I for one love these little snapshots of Americana, especially such primary material). What is significant, of course, is the mass nature of the digitization and the care with which it is undertaken. I compare this care with the sometimes abysmal scanning work being done by Google (and with much more fanfare). The fruits of Project Gutenberg are much more openly available, much easier to access, and much easier to migrate to reading devices like the Kindle.

So as we look at all the eBook and digitization efforts underway today, let’s not forget Project Gutenberg.

Winds of Change at Tools of Change

O’Reilly’s Tools of Change conference in New York City this week was highly successful, both inside and outside the walls of the Marriott Marquis. The sessions were energetic, well-attended, and–on the whole–full of excellent insight and ideas about the digital trends taking a firm hold of nearly all sectors of the publishing business. Outside the walls, especially on Twitter, online communities were humming with news and commentary on the conference. (You almost could have followed the entire conference just by following the #toc hashtag on Twitter and accessing the online copies of the presentations.)

But if you had done that, you would have missed the fun of being there. There were some superb keynotes and some excellent general sessions. Notable among the keynotes were Tim O’Reilly himself, Neelan Choksi from Lexcycle (Stanza), and Cory Doctorow. The general sessions covered a fairly broad spectrum of topics but were heavy on eBooks and community. Because of my own and my clients’ interests, I spent most of my time in the eBook sessions. The session eBooks I: Business Models and Strategy was content-rich. To begin with, you heard straight from senior people at major publishers with significant eBook efforts (Kenneth Brooks from Cengage Learning, Leslie Hulse from HarperCollins Publishers, and Cynthia Cleto from Springer Science+Business Media). Along with their insight, the speakers–and moderator Michael Smith from IDPF–assembled an incredibly valuable wiki of eBook business and technical material to back up their talk. I also really enjoyed a talk from Gavin Bell of Nature, The Long Tail Needs Community, where he made a number of thoughtful points about how publishers need to think longer and harder about how reading engages and changes people, and specifically how a publisher can build community around those changes and activities.

There were a few soft spots in the schedule. Jeff Jarvis’ keynote, What Would Google do with Publishing?, was more about plugging his new book (What Would Google Do?) than anything else, but was also weirdly out of date, even though the book is hot off the presses, with 20th century points like “The link changes everything” and “If you’re not searchable, you won’t be found.” (Publishers are often, somewhat unfairly, accused of being Luddite, but they are not that Luddite.) There were also a couple of technical speakers who didn’t seem to make the necessary business connections to the technical points they were making, which would have been helpful to those members of the audience who were less technical and more publishing-product and -process oriented. But these small weaknesses were easily outshone by the many high points, the terrific overall energy, and the clear enthusiasm of the attendees.

One question I have for the O’Reilly folks is how they will keep the energy going. They have a nascent Tools of Change community site. Perhaps they could enlist some paid community managers to seed and moderate conversations, and also tie community activities to other O’Reilly products such as the books and other live and online events.

O’Reilly has quickly established a strong conference and an equally strong brand around it. With the publishing industry so engulfed in digital change now, I have to think this kind of conference and community can only continue to grow.

Will Downward eBook Prices Lead to New Sales Models?

UK-based publishing consultant Paul Coyne asked a good question on LinkedIn: Can e-books ever support a secondary (second-hand) market?

I love books. And eBooks. However, many of my books are second hand from booksellers, car-boot sales and friends. How important is this secondary market to books and can ebooks ever really go mainstream without a secondary market? BTW I have no clue how this would work!

I offered the following thoughts…

Great question. The secondary market is incredibly important to the buyer of course, and perhaps a blessing and a curse to the publisher–a blessing because it creates more value in the buyer’s mind and a curse because it slows and eliminates some sales in markets like college and school book publishing.

One of the great ongoing questions about eBooks is price point. There is a growing feeling they should be very inexpensive compared to their print counterparts, both because of the perception they are less costly to produce and the reality that there is no current secondary market. Thus you see Amazon trying to get all Kindle books under $10 (US).

I still like the idea of superdistribution for digital products. By my crude definition (some authoritative links in a moment), a buyer of an eBook would be able to pass along the eBook and gain something from the eventual use of it by another user. Think of it as me getting a small commission when someone I pass it along to ends up buying it. I guess you could also think about it as a kind of viral sales model.
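To make the idea concrete with purely hypothetical numbers: if I pass an eBook along to three friends and two of them buy it at $9.99 under a 5% referral share, I earn roughly $1.00, while the publisher records two full-price sales it might otherwise have lost to second-hand copies.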

See also:

  • A decent Wikipedia entry on superdistribution.

  • An old but well-written Wired magazine article on superdistribution.

We covered this in a DRM book I cowrote with Bill Rosenblatt and Steve Mooney.

Podcast on Structured Content in the Enterprise

Traditionally, the idea of structured content has been associated with product documentation, but this is beginning to change. A brand new podcast on The Business Value of Structured Content, featuring Bill Trippe, Lead Analyst at The Gilbane Group, and Bruce Sharpe, XMetaL Founding Technologist at JustSystems, looks at why many companies are beginning to realize that structured content is more than just a technology for product documentation: it is a means to add business value to information across the whole enterprise.

Across departmental assets such as marketing website content, sales training materials, and technical support documents, structured content can be used to grow revenue, reduce costs, and mitigate risks, ultimately leading to an improved customer experience.

Listen to the podcast and gain important insight on how structured content can

  • break through the boundaries of product documentation
  • help organizations meet high user expectations for when and where they can access content
  • prove to be especially valuable in our rough economic times
  • …and more!

Open Government Initiatives will Boost Standards

Following on Dale’s inauguration day post, Will XML Help this President?, we have today’s invigorating news that President Obama is committed to more Internet-based openness. The CNET article highlights some of the most compelling items from the two memos, but I am especially heartened by this statement from the memo on the Freedom of Information Act (FOIA):

I also direct the Director of the Office of Management and Budget to update guidance to the agencies to increase and improve information dissemination to the public, including through the use of new technologies, and to publish such guidance in the Federal Register.

The key phrases are "increase and improve information dissemination" and "the use of new technologies." This is in keeping with the spirit of the FOIA: the presumption is that information (and content) created by or on behalf of the government is public property and should be accessible to the public. This means that the average person should be able to easily find government content and readily consume it–two challenges that the content technology industry grapples with every day.

The issue of public access is in fact closely related to the issue of long-term archiving of content and information. One of the reasons I have always been comfortable recommending XML and other standards-based technology for content storage is that the content and data would outlast any particular software system or application. As it looks to make government more open, the administration should, and likely will, look at standards-based approaches to information and content access.

Such efforts will include core infrastructure, including servers and storage, but also a wide array of supporting hardware and software falling into three general categories:

  • Hardware and software to support the collection of digital material. This includes hardware and software for digitizing and converting analog materials, software for cataloging digital materials and adding metadata, hardware and software to support data repositories, and software for indexing the digital text and metadata.
  • Hardware and software to support access to digital material. This includes access tools such as search engines, portals, catalogs, and finding aids, as well as delivery tools allowing users to download and view textual, image-based, multimedia, and cartographic data.
  • Core software for functions such as authentication and authorization, name administration, and name resolution.

Standards such as PDF/A have emerged to give governments a ready format for long-term archiving of routine government documents. But a collection of PDF/A documents does not in and of itself equal a useful government portal. There are many other issues of navigation, search, metadata, and context left unaddressed. This is true even before you consider the wide range of content produced by the government (pictorial, audio, video, and cartographic data are obvious examples), as well as the wide range of primary source material that comes out of areas such as medical research, energy development, public transportation, and natural resource planning.
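One small, hypothetical illustration of what a standards-based approach adds beyond the archived file itself: alongside each PDF/A document, an agency could publish a catalog record in a common metadata vocabulary such as Dublin Core so the document can be found, filtered, and cited. The record below is a sketch only; the title, agency, date, and URL are invented, and real repositories typically wrap such elements in a harvesting or cataloging format rather than using them bare:

    <?xml version="1.0" encoding="UTF-8"?>
    <record xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Annual Report on Public Transportation Funding</dc:title>
      <dc:creator>Department of Transportation</dc:creator>
      <dc:date>2009-01-15</dc:date>
      <dc:type>Text</dc:type>
      <dc:format>application/pdf</dc:format>
      <dc:identifier>http://example.gov/docs/transportation-report-2009.pdf</dc:identifier>
      <dc:subject>public transportation; funding</dc:subject>
      <dc:rights>Public domain</dc:rights>
    </record>

A record like this is what makes the difference between a pile of archived PDFs and content the average person can actually find and consume.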

President Obama’s directives should lead to interesting and exciting work for content technology professionals in the government. We look forward to hearing more.

Boston-Area DITA Users Group

Robert D Anderson from IBM, Chief Architect of the DITA Open Toolkit, writes:

Hello,

This note is to announce that after some time off, the Boston area DITA Users Group will be starting up again in 2009. To get things started, we have created a new Yahoo group, so that we will be in sync with and searchable by users of the many other Yahoo DITA lists. If you are interested in joining the DITA Boston Users Group, please visit this page for sign-up info.

We will soon be sending a survey to that list with proposed meeting topics, so please sign up in order to help us decide what to feature. We will also be looking for companies willing to host a meeting; if you already know you are interested in hosting, please join the group and send a note to ditabug-owner (which will go to me as well as to Liz Augustine and Lee Anne Kowalski).


© 2024 The Gilbane Advisor
