
Markzware Releases Publisher-to-InDesign Software for Adobe InDesign CS3 and CS4

Markzware, a developer of data extraction and conversion software and the inventor of preflighting, released an upgrade to its conversion tool PUB2ID for InDesign CS3 and CS4. PUB2ID v2 (Microsoft Publisher to Adobe InDesign) is a plug-in that enables users to convert native Microsoft Publisher files (versions 2002 through 2007) to Adobe InDesign while preserving content, styles, and formatting. http://www.markzware.com

Adobe Licenses SDL AuthorAssistant for FrameMaker and Technical Communication Suite

SDL announced that Adobe Systems is providing all Adobe FrameMaker 9 users with SDL AuthorAssistant, the client component of the SDL Global Authoring Management System. Adobe FrameMaker 9 is an authoring and publishing solution that allows technical communicators to author, structure, review, and publish complex and lengthy content. Starting with FrameMaker 9 and Adobe Technical Communication Suite 2, every FrameMaker user can install SDL AuthorAssistant as part of the FrameMaker 9 environment, helping them create content for global markets and improve its quality. SDL AuthorAssistant ensures adherence to style guide rules and consistent use of terminology. The software can also check against previously translated content, so that companies with global audiences can improve content reuse and reduce the downstream costs of localization. http://www.adobe.com, http://www.sdl.com
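SDL has not published AuthorAssistant’s internals, but the core idea of terminology checking (flagging deprecated terms against an approved termbase) is easy to illustrate. Here is a minimal sketch in Python, with an invented termbase and sample sentence; a real product would use a managed terminology database and linguistic matching rather than plain string search.

```python
import re

# Hypothetical termbase mapping deprecated terms to approved ones.
# Invented for illustration; not AuthorAssistant's actual data model.
TERMBASE = {
    "e-mail": "email",
    "web site": "website",
}

def check_terminology(text: str) -> list[str]:
    """Return one warning per deprecated term found in the text."""
    warnings = []
    for deprecated, approved in TERMBASE.items():
        for match in re.finditer(re.escape(deprecated), text, re.IGNORECASE):
            warnings.append(
                f"'{match.group(0)}' at offset {match.start()}: "
                f"style guide prefers '{approved}'"
            )
    return warnings

draft = "Visit our web site, then send an e-mail to support."
for warning in check_terminology(draft):
    print(warning)
```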

DPCI Joins Acquia Program to Deliver Drupal Publishing Solutions

DPCI announced it has joined the Acquia Partner program at the Platinum level. Through the program, DPCI will expand its open source content management offerings by developing and delivering custom publishing solutions utilizing Acquia’s value-added products and services for the Drupal social publishing system. Additionally, the program allows DPCI to leverage the Acquia Network for support, site management, and remote network services. http://drupal.org, http://acquia.com, http://www.databasepublish.com

New Workshop on Implementing DITA

As part of our Gilbane Onsite Technology Strategy Workshop Series, we are happy to announce a new workshop, Implementing DITA.

Course Description

DITA, the Darwin Information Typing Architecture, is an emerging standard for content creation, management, and distribution. How does DITA differ from other XML applications? Will it work for my vertical industry’s content, from technical documentation to training manuals, from scientific papers to statutory publishing? DITA addresses one of the most challenging aspects of XML implementation: developing a data model that can be used and shared with information partners. Even so, DITA implementation requires effective process, software, and content management strategies to achieve the benefit promised by the DITA business case: cost-effective, reusable content. This seminar will familiarize you with DITA concepts and terminology; describe business benefits, implementation challenges, and best practices for adopting DITA; and explore how DITA enables key business processes, including content management, formatting and publishing, multi-lingual localization, and reusable open content. Attendees will be able to participate in developing an effective DITA content management strategy.
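To make the data model concrete, here is a minimal sketch of the core DITA pattern: standalone, typed topics organized by a map. The file names and content are invented for illustration, and the code uses only Python’s standard library rather than a real DITA toolchain.

```python
import xml.etree.ElementTree as ET

# Invented example of the DITA pattern: a map organizes standalone,
# typed topics (concept, task, reference). The same topic can be
# referenced from many maps, which is the basis of DITA content reuse.
DITA_MAP = """\
<map>
  <title>Widget User Guide</title>
  <topicref href="overview.dita" type="concept"/>
  <topicref href="installing.dita" type="task"/>
  <topicref href="specs.dita" type="reference"/>
</map>"""

CONCEPT_TOPIC = """\
<concept id="overview">
  <title>Widget Overview</title>
  <conbody>
    <p>A widget turns input into output.</p>
  </conbody>
</concept>"""

# Walk the map to see how content is organized by reference.
for ref in ET.fromstring(DITA_MAP).iter("topicref"):
    print(ref.get("type"), "->", ref.get("href"))

# Each topic stands alone, with a body element named for its type.
topic = ET.fromstring(CONCEPT_TOPIC)
print("Topic title:", topic.findtext("title"))
```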

Audience

This is an introductory course suitable for anyone looking to better understand the DITA standard, its terminology, processes, benefits, and best practices. A basic understanding of computer applications and production processes is helpful. Familiarity with XML concepts and publishing is helpful but not required. No programming experience is required.

Topics Covered

  • The Business Drivers for DITA Adoption

  • DITA Concepts and Terminology

  • The DITA Content Model

  • Organizing Content with DITA Maps

  • Processing, Storing & Publishing DITA Content

  • DITA Creation, Management & Processing Tools

  • Multi-lingual Publishing with DITA

  • Extending DITA to work with Other Data Standards

  • Best Practices & Pitfalls for DITA Implementation

For more information and to customize a workshop just for your organization, please contact Ralph Marto by email or at +617.497.9443 x117

eZ Systems Releases Apache Solr-based Open Source Enterprise Search Solution eZ Find 2.0

eZ Systems released an Apache Solr-based open source enterprise search solution. eZ Find 2.0, the open source search extension for eZ Publish, adds a number of new features, such as tuning of relevance rankings, facets for drill-down search result navigation, and spell checking with suggestions on search phrases. eZ Find already included features such as relevance ranking, native support for eZ Publish access rights, keyword highlighting, sophisticated multi-language support, and the ability to search multiple sites containing millions of objects. eZ Find 2.0 is compatible with eZ Publish 4.0 and the upcoming eZ Publish 4.1. eZ Find is free to download and install on eZ Publish sites. It is also a certified extension supported under eZ Publish Premium support and maintenance agreements. http://ez.no/ezfind/download
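eZ Find’s own PHP-level extension API is not shown here, but because it is built on Apache Solr, features like facets and spell checking correspond to standard Solr query parameters. Here is a rough sketch of such a query against Solr’s HTTP API; the host, core name, and facet field are assumptions for illustration, not eZ Find’s actual configuration.

```python
import json
import urllib.parse
import urllib.request

# Standard Solr query parameters for a faceted search with spellcheck.
# The host, core name ("ezfind"), and facet field ("content_type")
# are invented for illustration.
params = {
    "q": "content managment",   # misspelled on purpose, to trigger spellcheck
    "wt": "json",
    "facet": "true",
    "facet.field": "content_type",
    "spellcheck": "true",
    "spellcheck.collate": "true",
}
url = ("http://localhost:8983/solr/ezfind/select?"
       + urllib.parse.urlencode(params))

with urllib.request.urlopen(url) as response:
    data = json.load(response)

print("Hits:", data["response"]["numFound"])
print("Facets:", data["facet_counts"]["facet_fields"])
print("Spellcheck:", data.get("spellcheck", {}))
```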

Webinar Wednesday: 5 Predictions for Publishers in 2009

Please join me on a webinar sponsored by Mark Logic on Wednesday 2/18/09 at 2pm EST. I’ll be covering my five top predictions for 2009 (and beyond). The predictions come largely from a forthcoming research study "Digital Platforms and Technologies for Book Publishers: Implementations Beyond eBook," that Bill Trippe and I are writing. Here are the predictions:

  1. The Domain Strikes Back – Traditional publishers leverage their domain expertise to create premium, authoritative digital products that trump free and informed internet content.
  2. Discoverability Overcomes Paranoia – Publishers realize the value in being discovered online, as research shows that readers do buy whole books and subscriptions based on excerpts and previews.
  3. Custom, Custom, Custom – XML technology enables publishers to cost-effectively create custom products, a trend that has rapidly accelerated in the last six to nine months, especially in the educational textbook segment.
  4. Communities Count – and will exert greater influence on digital publishing strategies, as providers engage readers to help build not only their brands but also their products.
  5. Print on Demand – increases in production quality and cost-effectiveness, leading to larger runs, more short-run custom products and deeper backlists.

I look forward to your questions and comments! Register today at http://bit.ly/WApEW

Winds of Change at Tools of Change

O’Reilly’s Tools of Change conference in New York City this week was highly successful, both inside and outside the walls of the Marriott Marquis. The sessions were energetic, well attended, and, on the whole, full of excellent insight and ideas about the digital trends taking firm hold of nearly all sectors of the publishing business. Outside the walls, especially on Twitter, online communities were humming with news and commentary on the conference. (You almost could have followed the entire conference just by following the #toc hashtag on Twitter and accessing the online copies of the presentations.)

But if you had done that, you would have missed the fun of being there. There were some superb keynotes and some excellent general sessions. Notable among the keynotes were Tim O’Reilly himself, Neelan Choksi from Lexcycle (Stanza), and Cory Doctorow. The general sessions covered a fairly broad spectrum of topics but were heavy on eBooks and community. Because of my own and my clients’ interests, I spent most of my time in the eBook sessions. The session eBooks I: Business Models and Strategy was content-rich. To begin with, you heard straight from senior people at major publishers with significant eBook efforts (Kenneth Brooks from Cengage Learning, Leslie Hulse from HarperCollins Publishers, and Cynthia Cleto from Springer Science+Business Media). Along with their insight, the speakers, and moderator Michael Smith from IDPF, assembled an incredibly valuable wiki of eBook business and technical material to back up their talk. I also really enjoyed a talk from Gavin Bell of Nature, The Long Tail Needs Community, in which he made a number of thoughtful points about how publishers need to think longer and harder about how reading engages and changes people, and specifically how a publisher can build community around those changes and activities.

There were a few soft spots in the schedule. Jeff Jarvis’ keynote, What Would Google Do with Publishing?, was more about plugging his new book (What Would Google Do?) than anything else, and it was also weirdly out of date, even though the book is hot off the presses, with 20th-century points like “The link changes everything” and “If you’re not searchable, you won’t be found.” (Publishers are often, somewhat unfairly, accused of being Luddite, but they are not that Luddite.) There were also a couple of technical speakers who didn’t make the necessary business connections to the technical points they were making, which would have been helpful to the members of the audience who were less technical and more oriented toward publishing products and processes. But these small weaknesses were easily outshone by the many high points, the terrific overall energy, and the clear enthusiasm of the attendees.

One question I have for the O’Reilly folks is how they will keep the energy going. They have a nascent Tools of Change community site. Perhaps they could enlist some paid community managers to seed and moderate conversations, and also tie community activities to other O’Reilly products, such as the books and other live and online events.

O’Reilly has quickly established a strong conference and an equally strong brand around it. With the publishing industry so engulfed in digital change now, I have to think this kind of conference and community can only continue to grow.

On Stimulating Open Data Initiatives

Yesterday the big stimulus bill cleared the conference committee that resolves the Senate and House versions. If you remember your civics, that means it is likely to pass both chambers and then be signed into law by the president.

Included in the bill are billions of dollars for digitizing important information such as medical records and government information. Wow! That is a lot of investment! The thinking is that inaccessible information locked in paper or proprietary formats costs us billions each year in productivity. Wow! That’s a lot of waste! Also, access to the information could spawn billions of dollars of new products and services, and therefore income and tax revenue. Wow! That’s a lot of growth!

Many agencies and offices have striven to expose useful official information and reports at the federal and state levels. Even so, a lot of data is still locked away, incomplete, or in difficult-to-use forms. A while ago, a Senate official told me that they do not maintain a single, complete, accurate, official copy of the US Statutes internally. Even if this is no longer true, the public often relies on the “trusted” versions that are available only through paid online services. Many other data types, like many medical records, exist only on paper.

There are a lot of challenges, such as security and privacy issues, and even intellectual property rights issues. But there are a lot of opportunities too. There are thousands of data sources that could be tapped but are currently locked in paper or proprietary formats.

I don’t think the benefits will come at the expense of commercial services already selling this publicly owned information, as some may fear. These online sites provide a service in exchange for their fees, often emphasizing timeliness or value-adds like integrating useful data from different sources. I think a combination of free government open data resources and delivery tools, plus innovative commercial products, will emerge. Some easily obtained data may become commoditized, but new ways of accessing and integrating information will emerge. The big information services probably have more to fear from startups than from free government applications and data.

As it happens, I saw a demo yesterday of a tool that took all the activity of a state legislature and unified it under one portal. This allows people to track a bill and all related activity in a single place. For free! The bill working its way through both chambers is connected to related hearing agendas and minutes, which are connected to schedules, with status and other information captured in a concise, dashboard-like screen format (there are other services you can pay for, which fund the site). Each information component came from a different office and was originally in its own specialized format. What we were really looking at was a custom data integration application, built with AJAX technology, that integrates heterogeneous data in a unified view. Very powerful, and yet scalable. The key to its success was strong integration of the data: the connections used to tie the information together. The vendor collected and filtered the data, converted it to a common format, and added the linkage and relationship information to provide an integrated view into the data. All source data is stored separately and maintained by the different offices. Five years ago it would have been a lot more difficult to create this service. Technology has advanced, and the data are increasingly available in manageable forms.
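The vendor’s actual stack was not disclosed, but the integration pattern described here (normalize records from different offices into a common format, then link them by bill number into one view) can be sketched in a few lines of Python; all sources, field names, and data below are invented.

```python
# Records from different offices arrive in different shapes, get
# normalized to a common schema, and are linked by bill number into
# a single unified view. Everything here is invented for illustration.
senate_feed = [{"bill_no": "S.101", "status": "In committee"}]
hearings_feed = [{"bill": "S.101", "date": "2009-03-02", "room": "4A"}]

def normalize(record: dict, id_key: str) -> dict:
    """Map a source-specific record onto the common schema."""
    common = dict(record)
    common["bill_id"] = common.pop(id_key)
    return common

# Merge all sources under one key to build the bill-centric view.
view: dict[str, dict] = {}
for source, id_key in ((senate_feed, "bill_no"), (hearings_feed, "bill")):
    for record in source:
        rec = normalize(record, id_key)
        view.setdefault(rec["bill_id"], {}).update(rec)

print(view["S.101"])
```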

The government produces a lot of information that affects us daily and that we, as taxpayers and citizens, actually own, but have limited or no access to. This includes statutes and regulations, court cases, census data, scientific data and research, agricultural reports, SEC filings, FDA drug information, taxpayer publications, forms, patent information, health guidelines, and much more. The list is really long. I am not even scratching the surface! It also includes more interactive and real-time data, such as geological and water data, weather information, and the status of regulation and legislation changes (like reporting on the progress of the stimulus bill as it worked its way through both chambers). All of these can be made more current, expanded for more coverage, integrated with related materials, and validated for accuracy. There are also new opportunities to open up the process by using forums and social media tools to collect feedback from constituents and experts (like the demo mentioned above). Social media tools may both give people an avenue to express their ideas to their elected officials and serve as a collection mechanism for raw data that can be analyzed for trends and statistics, which in turn becomes new government data that we can use.

IMHO, this investment in open government data is a powerful catalyst that could create or change many jobs and business models. If done well, it could provide significant positive returns, streamline government, open access to more information, and enable new and interesting products and applications.
