The Gilbane Advisor

Curated for content, computing, and digital experience professionals


IXIASOFT Announces DITA CMS v2.6 Availability

IXIASOFT has announced the availability of version 2.6 of its DITA CMS solution. DITA CMS is a content management solution that enables technical communicators to author, manage, and publish their DITA content efficiently. The solution’s flexible search tool enables users to find and reuse their DITA topics, images, and maps. New features include saving search queries and exporting search results. The DITA relationship table editor now has drag-and-drop capability for creating relations between topics, as well as a relationship overview feature for finding the items a topic is linked to. Other new features include a dependency view (“where-used” feature); the ability to use an external diff tool (in addition to the built-in tool) for XML-aware comparison; a drag-and-drop interface in the map editor for creating maps from search results; and the ability to run certain tasks in the background while the user continues with a different task. http://www.ixiasoft.com/
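For readers less familiar with DITA relationship tables, the markup behind such an editor is standard DITA: a reltable inside a map pairs topics by column so related links can be generated at publish time. The following is a minimal sketch, not IXIASOFT-specific, that assembles such a table with Python’s xml.etree; the topic file names are invented for illustration.

# Build a minimal DITA map containing a relationship table (reltable).
# The element names (map, reltable, relrow, relcell, topicref) are standard DITA;
# the topic file names are placeholders only.
import xml.etree.ElementTree as ET

ditamap = ET.Element("map", title="Example map")
reltable = ET.SubElement(ditamap, "reltable")

# One column per topic type; links are generated between cells in the same row.
header = ET.SubElement(reltable, "relheader")
ET.SubElement(header, "relcolspec", type="concept")
ET.SubElement(header, "relcolspec", type="task")

row = ET.SubElement(reltable, "relrow")
concept_cell = ET.SubElement(row, "relcell")
ET.SubElement(concept_cell, "topicref", href="printer_overview.dita")
task_cell = ET.SubElement(row, "relcell")
ET.SubElement(task_cell, "topicref", href="replacing_the_cartridge.dita")

print(ET.tostring(ditamap, encoding="unicode"))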

Kentico CMS for ASP.NET Gets New Enterprise Search Capabilities

Kentico Software released version 4.1 of Kentico CMS for ASP.NET. The new version comes with an enterprise-class search engine as well as user productivity enhancements. The search engine makes web content searchable to assist visitors in finding information, and it provides search results with ranking, previews, thumbnail images, and customizable filters. Site owners can dictate which parts of the site, which content types, and which content fields are searchable. The search engine uses the Lucene search framework. The new version also enhances productivity by changing the way images are inserted into text: uploaded images can be part of the page life cycle, so when a page is removed from the site, the related images and attachments are also removed, which helps organizations avoid invalid or expired content on their server. Other improvements were made to the management of multilingual web sites. Kentico CMS for ASP.NET now supports workflow configuration based on the content language and allows administrators to grant editors permissions for chosen language versions. Content editors can see which documents are not translated or whose translations are not up to date. http://www.kentico.com/
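As a rough illustration of what field-level searchability means (this is a conceptual sketch, not Kentico’s or Lucene’s actual API), the following Python fragment indexes only the fields a site owner has marked searchable; the field names and documents are invented.

# Field-scoped indexing: only fields marked searchable are added to the index,
# mirroring the idea of letting site owners choose which content fields the
# search engine may see. Illustrative only.
from collections import defaultdict

SEARCHABLE_FIELDS = {"title", "summary"}   # e.g. "internal_notes" stays out

documents = [
    {"id": 1, "title": "Annual report", "summary": "Results for 2009", "internal_notes": "draft"},
    {"id": 2, "title": "Product launch", "summary": "New CMS release", "internal_notes": "embargoed"},
]

index = defaultdict(set)  # term -> set of document ids
for doc in documents:
    for field, text in doc.items():
        if field in SEARCHABLE_FIELDS:
            for term in text.lower().split():
                index[term].add(doc["id"])

print(sorted(index["cms"]))        # [2]
print(sorted(index["embargoed"]))  # prints [] because the excluded field is never indexed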

Digital Publishing Visionary Profile: Cengage’s Ken Brooks


Ken Brooks is senior vice president, global production and manufacturing services at Cengage Learning (formerly Thomson Learning), where his responsibilities include the development, production, and manufacturing of textbooks and reference content in print and digital formats across the Academic and Professional Group, Gale, and International divisions of Cengage Learning. Prior to his position at Cengage Learning, Ken was president and founder of Publishing Dimensions, a digital content services company focused on the eBook and digital strategy space. Over the course of his career, Ken founded a Philippines-based text conversion company; a public domain publishing imprint; and a distribution-center-based print-on-demand operation, and he has worked in the trade, professional, higher education, and K-12 publishing sectors. He has held several senior management positions in publishing, including vice president of digital content at Barnes & Noble, vice president of operations, production, and strategic planning at Bantam Doubleday Dell, and vice president of customer operations at Simon & Schuster. Prior to his entry into publishing, Ken was a senior manager in Andersen Consulting’s logistics strategy practice.


This interview is part of our larger study on digital publishing.


Continue reading

Multilingual Product Content Research: One Analyst’s Perspective

We’ll soon hit the road to talk about the findings revealed in our new research study, Multilingual Product Content: Transforming Traditional Practices Into Global Content Value Chains. While working on presentations and abstracts, I found myself needing to be conscious of the distinction between objective and subjective perspectives on the state of content globalization.

As analysts, we try to be rigorously objective when reporting and analyzing research results, using subjective perspective sparingly, with solid justification and disclaimer. We focus on the data we gather and on what it tells us about the state of practice. When we wrapped up the multilingual product content study earlier this summer, Leonor, Karl, and I gave ourselves the luxury of concluding the report with a few paragraphs expressing our own personal opinions on the state of content globalization practices. Before we put on our analyst game face and speak from that objective perspective, we thought it would be useful to share our personal perspectives as context for readers who might attend a Gilbane presentation or webinar this fall.

Here are my thoughts on market readiness, as published in the conclusion of Multilingual Product Content:

Continue reading

Mind the XBRL GAAP

Recently, XBRL US and the FASB released a new taxonomy reference linkbase to enable referencing of the FASB Codification. The FASB Codification is the electronic database that contains all US GAAP authoritative literature and was designated as official US GAAP as of July 1, 2009. Minding the GAAP between the existing 2009 US GAAP taxonomy reference linkbase, which contains references to the old GAAP hierarchy (such as FAS 142r or FAS 162), and the new Codification system is an interesting trip indeed.

The good news is that the efforts of the XBRL US people, working in cooperation with the FASB and the SEC, have resulted in direct links from the new XBRL reference database to the Codification. There are a couple of problems, however.

The new reference linkbase is unofficial and will not be accepted by the SEC’s EDGAR system. URI links point to the proper places in the COD for FASB publications, but they require a separate login and give you access only to the public (high-level) view.

Firms and organizations with professional access to the Codification will not find this a problem, but individual practitioners will have to subscribe (at $850 per year) to get any views beyond the bare bones.

SEC literature stops at the top of the page for ALL SEC GAAP citations. For example, any XBRL element that has a Regulation S-X reference will point to exactly the same place: the top of the document. Not very useful. The SEC should address this.

So it appears we have three levels of accounting material to deal with: 1) the high-level, public-access literature, which is official US GAAP in the Codification; 2) the additional detail and explanations available in the professional view; and 3) the non-GAAP material the FASB left out of the COD, which remains in its hard-copy literature but didn’t make the COD/US GAAP cut. Ideally, all literature coming from the SEC or the FASB should, in my opinion, be easily accessible via the Internet.

The present plan for fixing the GAAP in the US GAAP XBRL taxonomy is to wait until the 2010 taxonomy is issued (Spring 2010). Although this would give the SEC plenty of time to tweak the EDGAR system into accepting the new linkbase, until then users of XBRL will have to accept workarounds to discover the authoritative literature link from an XBRL element tag to official US GAAP.
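To make that workaround concrete, here is a minimal sketch of reading reference parts out of a taxonomy reference linkbase with Python’s standard library. The fragment it parses is shaped like an XBRL reference linkbase, but the topic number and URI are invented for illustration and do not come from the actual US GAAP taxonomy.

# Extract reference parts (publisher, name, topic, URI) from a reference
# linkbase fragment using xml.etree. The fragment below is a made-up example
# shaped like an XBRL reference linkbase, not copied from the US GAAP taxonomy.
import xml.etree.ElementTree as ET

SAMPLE = """<link:linkbase
    xmlns:link="http://www.xbrl.org/2003/linkbase"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:ref="http://www.xbrl.org/2006/ref">
  <link:referenceLink xlink:type="extended" xlink:role="http://www.xbrl.org/2003/role/link">
    <link:reference xlink:type="resource" xlink:label="ref_1">
      <ref:Publisher>FASB</ref:Publisher>
      <ref:Name>Accounting Standards Codification</ref:Name>
      <ref:Topic>350</ref:Topic>
      <ref:URI>http://asc.fasb.org/extlink&amp;oid=EXAMPLE</ref:URI>
    </link:reference>
  </link:referenceLink>
</link:linkbase>"""

LINK_NS = "http://www.xbrl.org/2003/linkbase"

root = ET.fromstring(SAMPLE)
for reference in root.iter(f"{{{LINK_NS}}}reference"):
    # Collect each reference part's local name and text, e.g. Topic -> 350.
    parts = {child.tag.split("}")[-1]: child.text for child in reference}
    print(parts)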

Convergence of Enterprise Search and Text Analytics is Not New

The news item about IBM’s bid for SPSS, and similar acquisitions by Oracle, SAP, and Microsoft, made me think about the predictions of more business intelligence (BI) capabilities being conjoined with enterprise search. But why now, and what is new about pairing search and BI? They have always been complementary, not only for numeric applications but also for text analysis. Another article, by John Harney in KMWorld, referred to the “relatively new technology of text analytics” for analyzing unstructured text. The article is a good summary of some newer tools, but the technology itself has had a long shelf life, too long for reasons I’ll explore later.

Like other topics in this blog, this one requires a readjustment in thinking by technology users. One of the great things about digitizing text was the promise of ways in which it could be parsed, sorted, and analyzed. With the heavy adoption in the 1960s and 70s of databases that specialized in textual as well as numeric and date fields for business applications, it became much easier for non-technical workers to look at all kinds of data in new ways. Early database applications leveraged their data stores using command languages; the better ones featured statistical analysis and publication-quality report builders. Three that I was familiar with were DRS from ADM, Inc., BASIS from Battelle Columbus Labs, and INQUIRE from IBM.

Tools that accompanied these database back-ends could extract, slice, and dice the database content, including very large text fields, to report word counts, phrase counts (breaking on any delimiter), transaction counts, and relationships among data elements across associated record types; they could also create relationships on the fly, report expert activity and working documents, and describe the distribution of resources. These are just a few examples of how new content assets could be created for export in minutes. In particular, a sort command in DRS had histogram controls that were invaluable to my clients managing corporate document and records collections, news clipping files, photographs, patents, and the like. They could evaluate their collections by topic, date range, distribution, source, and so on, at any time.
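As a rough modern analogue of those command-language reports (and not the DRS, BASIS, or INQUIRE syntax itself), the following Python sketch computes word counts, two-word phrase counts, and a by-year distribution over a tiny, invented collection.

# Rough analogue of old command-language reporting: word counts, phrase counts,
# and a distribution (histogram) by year over a small, invented collection.
from collections import Counter

records = [
    {"year": 1992, "text": "patent filing for document imaging system"},
    {"year": 1992, "text": "news clipping about document imaging"},
    {"year": 1995, "text": "records retention policy for imaging and storage"},
]

word_counts = Counter()
phrase_counts = Counter()   # adjacent two-word phrases
year_histogram = Counter()

for rec in records:
    words = rec["text"].split()
    word_counts.update(words)
    phrase_counts.update(" ".join(pair) for pair in zip(words, words[1:]))
    year_histogram[rec["year"]] += 1

print(word_counts.most_common(3))          # most frequent words
print(phrase_counts["document imaging"])   # how often the phrase occurs: 2
print(sorted(year_histogram.items()))      # [(1992, 2), (1995, 1)]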

So, the ability existed years ago to connect data structures and use a command language to formulate new data models that informed and elucidated how information was being used in the organization, or to illustrate where there were holes in topics related to business initiatives. What were the barriers to widespread adoption? Upon reflection, I came to realize that extracting meaningful content from a database in new and innovative formats requires a level of abstract thinking for which most employees are not well trained. Putting descriptive data into a database via a screen form, then performing a transaction on the object of that data on another form, and then adding more data about another similar but different object are steps that remain isolated in the database user’s experience and memory. The typical user is not trained to think about how the pieces of data might be connected in the database, and therefore is not likely to form new ideas about how it can all be extracted in a report with new information about the content. There is a level of abstraction that eludes most workers whose jobs consist of a lot of compartmentalized tasks.

It was exciting to encounter prospects who really grasped the power of these tools and were eager to push the limits of the command language and reporting applications, but they were scarce. It turned out that our greatest use came in applying text analytics to the extraction of valuable information from our customer support database. A rigorously disciplined staff populated it after every support call, recording not only demographic information about the nature of the call, linked to a customer record that had been created at first contact during the sales process (with appropriate updates along the way through procurement), but also a textual description of the entire transaction. Over time this database was linked to a “wish list” database and another “fixes” database, and the entire networked structure provided extremely valuable reports that guided both development work and documentation production. We also issued weekly summary reports to the entire staff so everyone was kept informed about product conditions and customer relationships. The reporting tools provided transparency to all staff about company activity and enabled an early version of “social search collaboration.”
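To illustrate the kind of linked-database reporting described here, the sketch below joins an invented support-call table to equally invented “wish list” and “fixes” tables to produce a weekly summary. It stands in for the original system, whose schema is not described in detail; table names, columns, and rows are all hypothetical.

# Sketch of linked-database reporting: support calls joined to a wish-list
# table and a fixes table to produce a weekly summary. All data is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE calls  (id INTEGER PRIMARY KEY, customer TEXT, week TEXT, description TEXT);
CREATE TABLE wishes (call_id INTEGER, feature TEXT);
CREATE TABLE fixes  (call_id INTEGER, defect TEXT, status TEXT);

INSERT INTO calls  VALUES (1, 'Acme',   '2009-W32', 'export fails on large maps');
INSERT INTO calls  VALUES (2, 'Globex', '2009-W32', 'wants saved search queries');
INSERT INTO wishes VALUES (2, 'saved search queries');
INSERT INTO fixes  VALUES (1, 'export crash', 'open');
""")

# Weekly summary: every call for the week with any linked wish or fix.
rows = conn.execute("""
SELECT c.week, c.customer, c.description,
       COALESCE(w.feature, '-')                            AS requested_feature,
       COALESCE(f.defect || ' (' || f.status || ')', '-')  AS related_fix
FROM calls c
LEFT JOIN wishes w ON w.call_id = c.id
LEFT JOIN fixes  f ON f.call_id = c.id
WHERE c.week = '2009-W32'
ORDER BY c.customer
""").fetchall()

for row in rows:
    print(row)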

Current text analytics products have significantly more algorithmic horsepower than the old command languages. But making the most of their potential and transforming them into utilities that any knowledge worker can leverage will remain a challenge for vendors in the face of poor abstract reasoning among much of the work force. The tools have improved, but maybe not in all the ways they need to for widespread adoption. Workers should not have to depend on IT folks to create that unique analysis report that reveals a pattern or uncovers product flaws described by multiple customers. We expect workers to multitask, have many aptitudes and skills, and be self-servicing in so many aspects of their work, yet too often the tools fall short of what they need to flourish. I’m putting in a big plug for text analytics for the masses, soon, so that enterprise search begins to deliver more than personalized lists of results for one person at a time. Give more reporting power to the user.

