Rivet Software, the premier provider of standards-based business reporting and analytics, announced the release of Crossfire 3.0, an enhanced software platform that simplifies the process of SEC financial filings by managing the complicated preparation and review processes. Crossfire uses eXtensible Business Reporting Language (XBRL) technology to control document progression and centralize reviewers’ comments.

Crossfire 3.0 is a standards-based reporting platform that specializes in internal and external financial reporting and analytics. Based on an XBRL framework, Crossfire 3.0 simplifies the user experience by eliminating the file management issue. Rivet’s integrated solution allows its customers to control the financial reporting cycle and comply with all SEC filing needs. Crossfire 3.0 includes an integrated Reviewer’s Guide that allows preparers and reviewers to closely collaborate across multiple iterations as the filing progresses from inception to completion. With this guide, users no longer need to interact with standalone documents to review XBRL tag selections and comment information. This “single document” system streamlines the process for reviewing and approving filings in a way not previously available.

Crossfire 3.0 now preserves existing tags and comments when rolling forward from one filing to the next. When new data matches an XBRL tag from the previous quarter, Crossfire recognizes the match and automatically applies the tag throughout the document. The latest release of Crossfire allows users to change XBRL-tagged data in one location and instantly apply that change to exact-matched data throughout the entire document. Crossfire 3.0 includes the ability to split the XBRL templates so a filing can be worked on by different people in parallel. Once the separate pieces are complete, a user can simply merge them back into the master file. Crossfire 3.0 is supported by Rivet’s global professional services team 24 hours a day, seven days a week.
www.rivetsoftware.com
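The roll-forward behavior described above — reusing last quarter's XBRL tag wherever the new filing contains an exact-match fact — can be sketched in a few lines. The function and data below are purely illustrative; they are not Rivet's actual API or data model.

```python
# Hypothetical sketch of XBRL tag roll-forward: facts in the new filing
# that exactly match a fact from the prior filing inherit its tag;
# anything unmatched is flagged for manual tagging.

def roll_forward(previous_tags, new_facts):
    """previous_tags: {fact_label: xbrl_tag} from the prior filing.
    new_facts: fact labels appearing in the new filing.
    Returns {fact_label: xbrl_tag or None} for the new filing."""
    return {fact: previous_tags.get(fact) for fact in new_facts}

prior = {"Revenue": "us-gaap:Revenues", "Net income": "us-gaap:NetIncomeLoss"}
current = ["Revenue", "Net income", "Deferred revenue"]

tags = roll_forward(prior, current)
# "Deferred revenue" did not appear last quarter, so it maps to None
# and would need to be tagged by a preparer.
```

The same exact-match index is what lets a single tag change propagate to every matching occurrence in the document.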
Repeat after us: What happens to specific devices or formats, such as Kindle or the iPad, will not be a significant factor for book publishers.
The title of this blog is taken from a sub-section heading in our Industry Forecast chapter in our just-published 277-page study, A Blueprint for Book Publishing Transformation: Seven Essential Processes to Re-Invent Publishing, as is the quote above.
We’ve been following ebook efforts for well over a decade, and for some of us, thinking back to CD-ROM or the Gutenberg Project, the timeline is deeper yet. I mention this perhaps to excuse one of our assumptions going into the work of the Blueprint study: that many book publishers remained nervous about participating in ebooks because of the uncertainty about ebook formats among their potential customers, themselves, and, indeed, the market at large. We were, largely, wrong.
For one thing, a good portion of book publishers, even trade publishers, are already working with XML. Here’s a quote from the new study:
There will remain plenty of help for book publishers to deal with the format flux, and, as book publishers move more completely into digital workflow—and especially grow in sophistication in regard to XML content format within editorial and production processes—the difficulties to meet specific output format demands will ease.
Overall, we have come to understand that the convergence of functionality supporting enhanced ebooks among general-purpose mobile communications and computing devices, along with emerging standards for display, sale, and distribution of ebook titles, will also make platform issues for digital publishers largely moot. Recent announcements of new tablet devices, such as those by Samsung, which projects 11 million unit sales in 2011, simply expand market numbers rather than confuse markets. That is, if, as a book publisher, you handle your content so that it can be created once and used in many ways.
To be clear (as we hope the following quote from the study is):
…book publishers should involve XML formats as early in the publishing process as possible. We are convinced ebook formats will evolve and change, and new ones will emerge. XML stands today as the one standard format that will enable publishers to best create, manage, and curate content over time. Moreover, the future will expand how XML and metadata can support strong integration among the various publishing processes within the publisher’s own work.
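The “create once, use many ways” principle the study describes can be illustrated with a toy single-source workflow: one XML manuscript rendered into two output formats. The element names (`chapter`, `title`, `para`) and both renderers are invented for illustration; real publishing pipelines would use richer schemas and transformation tools.

```python
# Minimal single-source sketch: one XML source, two outputs.
# Element names are hypothetical, not any publisher's actual schema.
import xml.etree.ElementTree as ET

source = ("<chapter><title>Digital Workflow</title>"
          "<para>XML enables reuse.</para></chapter>")
root = ET.fromstring(source)

def to_html(chapter):
    # Render the chapter as simple HTML for web or ebook output.
    title = chapter.findtext("title")
    paras = "".join(f"<p>{p.text}</p>" for p in chapter.findall("para"))
    return f"<h1>{title}</h1>{paras}"

def to_text(chapter):
    # Render the same source as plain text, e.g. for print galleys.
    lines = [chapter.findtext("title")] + [p.text for p in chapter.findall("para")]
    return "\n".join(lines)

html_out = to_html(root)
text_out = to_text(root)
```

Because both outputs derive from one structured source, a new ebook format means adding one more renderer, not re-keying content.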
As per our agreement with the sponsors of the Blueprint study, the sponsors have a 30-day exclusive distribution for the study, and Blueprint won’t be available through Gilbane.com for a few weeks yet. We’ll be posting announcements from the study sponsors, providing download links as we get them.
LinkedIn today announced Signal, a new feature (currently in beta) that lets members see an activity stream that combines LinkedIn status updates and Twitter posts from other members who have opted-in to the feature. LinkedIn has licensed the Twitter firehose to incorporate all of its members’ tweets into the site, not just tweets with the #in hashtag embedded, as is current practice.
While it is hard to imagine anyone other than corporate and independent talent recruiters will make LinkedIn their primary Twitter client, Signal does have an element that is worthy of emulation by other social networks and enterprise social software providers that incorporate an activity stream (and which of those does not these days?). That feature is role-specific filters.
I wrote previously in this post about the importance of providing filters with which individuals can narrow their activity stream. I also noted that the key is to understand which filters are needed by which roles in an organization. LinkedIn apparently gets this, judging by the screenshot pictured below.
Notice the left-hand column, labeled “Filter by”. LinkedIn has most likely researched a sample of its members to determine which filters would be most useful to them. Given that recruiters are the most frequent users of LinkedIn, the set of filters displayed in the screenshot makes sense. They allow recruiters to see tweets and LinkedIn status updates pertaining to LinkedIn members in specific industries, companies, and geographic regions. Additionally, the Signal stream can be filtered by strength of connection in the LinkedIn network and by post date.
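The filtering described above can be modeled simply: each post carries a few role-relevant fields, and a filter keeps only posts matching every selected criterion. The field names below (industry, company, region, degree of connection) are assumptions modeled on the filters visible in the screenshot, not LinkedIn's actual data model or API.

```python
# Sketch of role-specific activity-stream filtering, in the spirit of
# Signal's "Filter by" column. Fields and posts are invented examples.

posts = [
    {"text": "Hiring a Java developer", "industry": "Software",
     "company": "Acme", "region": "Boston", "degree": 1},
    {"text": "Q3 results are out", "industry": "Finance",
     "company": "BigBank", "region": "New York", "degree": 2},
]

def filter_stream(posts, **criteria):
    """Keep only posts matching every given criterion,
    e.g. filter_stream(posts, industry='Software', degree=1)."""
    return [p for p in posts
            if all(p.get(field) == value for field, value in criteria.items())]

recruiter_view = filter_stream(posts, industry="Software")
```

A role-based filter set is then just a named bundle of such criteria, which a deploying organization could predefine per job function.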
The activity stream of every enterprise social software suite (ESS) should offer such role-based filters, instead of the generic ones they currently employ. Typical ESS filtering parameters include individuals, groups or communities, and workspaces. Some vendors offer the ability to filter by status as a collaborator on an object, such as a specific document or sales opportunity. A few ESS providers allow individuals to create custom filters for their activity stream. While all of these filters are helpful, they do not go far enough in helping individuals narrow the activity stream to view updates needed in a specific work context.
The next logical step will be to create standard sets of role-based filters that can be further customized by the individuals using them. Just as LinkedIn has created a filter set that is useful to recruiters, ESS providers and deploying organizations must work together to create valuable filter sets for employees performing specific jobs and tasks. Doing so will result in increased productivity from, and effectiveness of, any organization’s greatest asset – its people.
eZ has introduced eZ Publish Enterprise, which provides a package of software and services in an integrated, pay-as-you-go product. With eZ Publish Enterprise, customers get all the power of the eZ Publish community project, along with professionally supported software that includes additional Enterprise features and services. This release of eZ Publish Enterprise integrates all the Enterprise services into a Service Portal in the administration interface of eZ Publish, making administrators’ lives simpler. With version 4.4, eZ Publish provides users with a range of features that will help them succeed in their day-to-day use of eZ Publish, whether they are end users, occasional contributors, editors or administrators. The brand new built-in Online Image Editor provides a simple way for editors to perform the most common photo management tasks in eZ Publish. New with this version is native support for HTML5 video without the need for advanced development. Publishing on mobile devices such as the iPhone and iPad has been made easier. User-generated content gets a helping hand with the addition of native support for reCaptcha, the free Google-based captcha service, which helps prevent your website from being overrun by spam. A new user session handler gives more possibilities for the configuration of web servers. File system-based user session management multiplies the performance of eZ Publish servers when serving a large audience of anonymous users. A new archiving toolkit implements large-volume archiving scenarios where old content can be moved to archive repositories, yet still be searched and rendered with the eZ Publish presentation engine. eZ Publish 4.4 improves section management, multi-site setup, and extension loading, but the biggest news is the Developer Preview of the forthcoming eZ Publish API. The eZ Publish API shows the way for developing remote applications for new devices.
Connecting to eZ Publish and using its content and functionality is easier than ever. The lightweight remote API makes eZ Publish the platform of choice for mobile content management, whether you focus on the iPhone and iPad platforms, Android, or Blackberry. The new newsletter system, developed in collaboration with CJW, an eZ partner and active member of the eZ community, is a prime example of community innovation. http://ez.no/
Authoring in a structured text environment has traditionally been done with dedicated structured editors. These tools enable validation and user-assisted markup features that help the user create complete and valid content. But these structured editors are somewhat complicated and unusual, and users require training to become proficient. The learning curve is not very steep, but it does exist.
Many organizations have come to see documentation departments as a process bottleneck and try to engage others throughout the enterprise in the content creation and review processes. Engineers and developers can contribute to documentation and have a unique technical perspective. Installation and support personnel are on the front lines and have unique insight into how the product and related documentation are used. Telephone operators not only need the information at their fingertips, but can also augment it with comments and ideas that occur to them while supporting users. Third-party partners and reviewers may also have a unique perspective and role to play in a distributed, collaborative content creation, management, review, and delivery ecosystem.
Our recently completed research on XML Smart Content in the Enterprise indicates that as we strive to move content creation and management out of the documentation department silo, we will also need to consider how the data is encoded and the usefulness of the data model in meeting our expanded business requirements. Smart content is multipurpose content designed with several uses in mind. Smart content is modular to support being assembled in a variety of forms. And smart content is structured content that has been enriched with semantic information to better identify its topic and role, to aid processing and searching. For these reasons, smart content also improves distributed collaboration. Let me elaborate.
One of the challenges for distributed collaboration is the infrequency of user participation and, therefore, unfamiliarity with structured editing tools. It makes sense to simplify the editing process and tools for infrequent users. They can’t always take a refresher course in the editor and its features. They may be working remotely, even on a customer site installing equipment or software. These infrequent users need structured editing tools that are designed for them. These collaboration tools need to be intuitive and easy to figure out, easily accessible from just about anywhere, and should be affordable, with flexible licensing that allows a larger number of users to participate in the management of the content. This usually means one of two things: either the editor will be a plug-in to a popular word processing system (e.g., MS Word), or it will be accessed through a thin-client browser, like a wiki editor. In some environments, both may be needed in addition to traditional structured editing tools. Smart content’s modularity and enrichment allow flexibility in editing tools and process design, thereby expanding who throughout the enterprise can collaborate.
Also, infrequent contributors may not be able to master navigating and operating within a complex repository and workflow environment, for the same familiarity reasons. Serving up information to a remote collaborator can be enhanced with keywords and other metadata designed to optimize searching and access to the content. Even a little metadata can provide a lot of simplicity to an infrequent user. Product codes, version information, and a couple of dates would allow a user to home in on the likely content topics and select content to edit from a well-targeted list of search results. Relationships between content modules that are indicated in metadata can alert a user that when one object is updated, other related objects may need to be reviewed for potential update as well.
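The point about "a little metadata" can be made concrete: a handful of fields is enough to turn a large repository into a short, targeted pick list for an occasional contributor. The module records and field names below are invented for illustration, not any particular CMS schema.

```python
# Sketch: narrowing a content repository by lightweight metadata
# (product code, version, last-updated date). All records are made up.
from datetime import date

modules = [
    {"id": "m1", "product": "WX-100", "version": "2.1", "updated": date(2010, 9, 1)},
    {"id": "m2", "product": "WX-100", "version": "2.0", "updated": date(2010, 3, 15)},
    {"id": "m3", "product": "WX-200", "version": "1.0", "updated": date(2010, 8, 20)},
]

def find_modules(modules, product, since):
    """Return modules for one product updated on or after a date."""
    return [m for m in modules
            if m["product"] == product and m["updated"] >= since]

# An installer on site only sees current WX-100 topics, not the
# whole repository.
hits = find_modules(modules, "WX-100", date(2010, 6, 1))
```

The relationship metadata mentioned above would work the same way: a `related_to` field per record, queried when a module changes, to surface objects needing review.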
It is becoming increasingly clear that there is no one model for XML or smart content creation and editing. Just as a carpenter may have several saws, each designed for a particular type of cut, a robust structured content environment may have more than one editor in use. It behooves us to design our systems and tools to meet the desired business processes and user functionality, rather than limit our processes to the features of one tool.
Semantic Software Technologies: Landscape of High Value Applications for the Enterprise is now posted for you to download for free; please do so. The topic is one I’ve followed for many years and was convinced that the information about it needed to be captured in a single study as the number of players and technologies had expanded beyond my capacity for mental organization.
As a librarian, I found it useful to employ a genre of publications known as the “bibliography of bibliographies” on any given topic when starting a research project. As an analyst, I find that gathering the baskets of emails, reports, and publications on the industry I follow serves a similar purpose. Without a filtering and sifting of all this content, it had become overwhelming to understand and comment on the individual components in the semantic landscape.
Relating to the process of report development, it is important for readers to understand how analysts do research and review products and companies. Our first goal is to avoid bias toward one vendor or another. Finding users of products and understanding the basis for their use and experiences is paramount in the research and discovery process. With software as complex as semantic applications, we do not have the luxury of routine hands-on experience, testing real applications of dozens of products for comparison.
The most desirable contacts for learning about any product are customers with direct experience using the application. Sometimes we gain access to customers through vendor introductions but we also try very hard to get users to speak to us through surveys and interviews, often anonymously so that they do not jeopardize their relationship with a vendor. We want these discussions to be frank.
To get a complete picture of any product, I go through numerous iterations of looking at a company through its own printed and online information, published independent reviews and analysis, customer comments and direct interviews with employees, users, former users, etc. Finally, I like to share what I have learned with vendors themselves to validate conclusions and give them an opportunity to correct facts or clarify product usage and market positioning.
One of the most rewarding, interesting and productive aspects of research in a relatively young industry like semantic technologies is having direct access to innovators and seminal thinkers. Communicating with pioneers of new software who are seeking the best way to package, deploy and commercialize their offerings is exciting. There are many more potential products than those that actually find commercial success, but the process for getting from idea to buyer adoption is always a story worth hearing and from which to learn.
I receive direct and indirect comments from readers about this blog. What I don’t see enough of is posted commentary about the content. Perhaps you don’t want to share your thoughts publicly but any experiences or ideas that you want to share with me are welcomed. You’ll find my direct email contact information through Gilbane.com and you can reach me on Twitter at lwmtech. My research depends on getting input from all types of users and developers of content software applications, so, please raise your hand and comment or volunteer to talk.
A Blueprint for Book Publishing Transformation: Seven Essential Processes to Re-Invent Publishing, the latest study from The Gilbane Group’s Publishing Practice, is due out any day now. One thing that sets the study apart from other ebook-oriented efforts is that Blueprint describes technologies, processes, markets, and other strategic considerations from the book publisher’s perspective. From the Executive Summary of our upcoming study:
For publishers and their technology and service partners, the challenge of the next few years will be to invest wisely in technology and process improvement while simultaneously being aggressive about pursuing new business models.
The message here is that book publishers really need to “stick to their knitting,” or, as we put it in the study:
The book publisher should be what it has always best been about—discovering, improving, and making public good and even great books. But what has changed for book publishers is the radically different world in which they interact today, and that is the world of bits and bytes: digital content, digital communication, digital commerce.
If done right, today’s efforts toward digital publishing processes will “future proof” the publisher, because today’s efforts done right are aimed at adding value to the content in media neutral, forwardly compatible forms.
A central part of the “If done right” message is that book publishers still should focus on what publishers do with content, but that XML workflow has become essential to both print and digital publishing success. Here’s an interesting finding from Blueprint:
Nearly 48% of respondents say they use either an “XML-First” or “XML-Early” workflow. We define an XML-First workflow as one where XML is used from the start with manuscript through production, and we define an “XML-Early” workflow as one where a word processor is used by authors, and then manuscript is converted to XML.
Tomorrow, Aptara and The Gilbane Group are presenting a webinar, eBooks, Apps and Print? How to Effectively Produce it All Together, with me and Bret Freeman, Digital Publishing Strategist at Aptara. The webinar takes place on Tuesday, September 28, 2010, at 11 a.m. EST, and you can register here.
Sophia, the provider of contextually aware enterprise search solutions, announced Sophia Search, a new search solution that uses a semiotics-based linguistic model to identify intrinsic terms, phrases, and relationships within unstructured content so that it can be recovered, consolidated, and leveraged. Use of Sophia Search is designed to minimize compliance risk and reduce the cost of storing and managing enterprise information. Sophia Search delivers a “three-dimensional” solution to discover, consolidate, and optimize enterprise data, regardless of its data type or domain. Sophia Search helps organizations manage and analyze critical information by discovering the themes and intrinsic relationships behind their information, without taxonomies or ontologies, so that more relevant information may be discovered. By identifying both duplicates and near duplicates, Sophia Search allows organizations to effectively consolidate information and minimize storage and management costs. Sophia Search features a patented Contextual Discovery Engine (CDE), which is based on the linguistic model of semiotics, the science behind how humans understand the meaning of information in context. Sophia Search is available now to both customers and partners. Pricing starts at $30,000. http://www.sophiasearch.com/
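To give a sense of what near-duplicate identification involves, here is a common textbook technique: compare documents by the overlap of their word shingles using Jaccard similarity. This is a generic illustration only; it is not Sophia's patented Contextual Discovery Engine, whose method is not described here.

```python
# Generic near-duplicate sketch via word shingles + Jaccard similarity.
# This is a standard textbook approach, NOT Sophia's actual algorithm.

def shingles(text, k=3):
    """Set of k-word shingles (overlapping word windows) in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two texts' shingle sets, in [0, 1]."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

doc1 = "the quarterly report was filed on time with the commission"
doc2 = "the quarterly report was filed late with the commission"
doc3 = "our cafeteria menu changes every single day"

# A threshold (here 0.3, chosen arbitrarily) separates near duplicates
# from unrelated documents.
near_dup = jaccard(doc1, doc2) > 0.3
unrelated_dup = jaccard(doc1, doc3) > 0.3
```

Flagging such pairs is what lets an organization consolidate redundant copies instead of storing and managing them separately.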