
Multilingual Product Content Research: One Analyst’s Perspective

We’ll soon hit the road to talk about the findings revealed in our new research study, Multilingual Product Content: Transforming Traditional Practices Into Global Content Value Chains. While working on presentations and abstracts, I found myself needing to be conscious of the distinction between objective and subjective perspectives on the state of content globalization.

As analysts, we try to be rigorously objective when reporting and analyzing research results, using subjective perspective sparingly, with solid justification and disclaimer. We focus on the data we gather and on what it tells us about the state of practice. When we wrapped up the multilingual product content study earlier this summer, Leonor, Karl, and I gave ourselves the luxury of concluding the report with a few paragraphs expressing our own personal opinions on the state of content globalization practices. Before we put on our analyst game face and speak from that objective perspective, we thought it would be useful to share our personal perspectives as context for readers who might attend a Gilbane presentation or webinar this fall.

Here are my thoughts on market readiness, as published in the conclusion of Multilingual Product Content:


Mind the XBRL GAAP

Recently, XBRL US and the FASB released a new taxonomy reference linkbase that enables referencing the FASB Codification. The FASB Codification is the electronic database that contains all US GAAP authoritative literature and was designated as official US GAAP as of July 1, 2009. Minding the GAAP between the existing 2009 US GAAP taxonomy reference linkbase, which contains references to the old GAAP hierarchy (such as FAS 142r or FAS 162), and the new Codification system is an interesting trip indeed.

The good news is that the efforts of the XBRL US people, working in cooperation with the FASB and the SEC, have resulted in direct links from the new XBRL reference database to the Codification. There are a couple of problems, however.

The new reference linkbase is unofficial and will not be accepted by the SEC’s EDGAR system. URI links point to the proper places in the Codification for FASB publications, but they require a separate login and give you access to the public (high-level) view only.

Firms and organizations with professional access to the Codification will not find this a problem, but individual practitioners will have to subscribe (at $850 per year) to get any views beyond the bare bones.

SEC literature stops at the top of the page for ALL SEC GAAP citations. For example, any XBRL element that has a Regulation S-X reference will point to exactly the same place: the top of the document. Not very useful. The SEC should address this.

So it appears we have three levels of accounting material to deal with: 1) the high-level public access literature, which is official US GAAP in the Codification; 2) the additional detail and explanations in the professional view; and 3) the non-GAAP material the FASB left out of the Codification, content that remains in its hard copy literature but didn’t make the COD/US GAAP cut. Ideally, all literature coming from the SEC or the FASB should, in my opinion, be easily accessible via the Internet.

The present plan to fix the GAAP in the US GAAP XBRL taxonomy is to wait until the 2010 taxonomy is issued in spring 2010. That would give the SEC plenty of time to tweak the EDGAR system into accepting the new linkbase, but until then, users of XBRL will have to accept workarounds to get from an XBRL element tag to the authoritative literature in official US GAAP.
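One workaround, for anyone who wants to see where a given element’s citation actually points, is simply to read the URIs out of the unofficial linkbase. Below is a minimal Python sketch of that idea; it is my illustration, not an XBRL US or SEC tool, and the namespaces and ref:URI part name are assumptions based on the standard 2006 reference parts schema, so verify them against the file you actually download.

```python
# A minimal sketch (not an official XBRL US tool) that lists the Codification
# reference URIs in a downloaded reference linkbase, so you can see where each
# element's citation points. The namespaces and the ref:URI part name are
# assumptions based on the standard 2006 reference parts schema; verify them
# against the actual file.
import xml.etree.ElementTree as ET

NS = {
    "link": "http://www.xbrl.org/2003/linkbase",
    "xlink": "http://www.w3.org/1999/xlink",
    "ref": "http://www.xbrl.org/2006/ref",
}

def list_reference_uris(linkbase_path):
    """Yield (reference label, URI) pairs from a reference linkbase file."""
    tree = ET.parse(linkbase_path)
    for reference in tree.getroot().iter(f"{{{NS['link']}}}reference"):
        label = reference.get(f"{{{NS['xlink']}}}label", "")
        uri_part = reference.find("ref:URI", NS)
        if uri_part is not None and uri_part.text:
            yield label, uri_part.text.strip()

if __name__ == "__main__":
    # Hypothetical file name; substitute the linkbase you downloaded.
    for label, uri in list_reference_uris("us-gaap-ref-2009.xml"):
        print(label, uri)
```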

Convergence of Enterprise Search and Text Analytics is Not New

The news item about IBM’s bid for SPSS, along with similar acquisitions by Oracle, SAP, and Microsoft, made me think about the predictions of more business intelligence (BI) capabilities being conjoined with enterprise search. But why now, and what is new about pairing search and BI? They have always been complementary, not only for numeric applications but also for text analysis. Another article, by John Harney in KMWorld, referred to the “relatively new technology of text analytics” for analyzing unstructured text. The article is a good summary of some newer tools, but the technology itself has had a long shelf life, too long for reasons I’ll explore later.

Like other topics in this blog, this one requires a readjustment in thinking by technology users. One of the great things about digitizing text was the promise of ways in which it could be parsed, sorted, and analyzed. With heavy adoption of databases that specialized in textual, as well as numeric and date, data fields for business applications in the 1960s and 70s, it became much easier for non-technical workers to look at all kinds of data in new ways. Early database applications leveraged their data stores using command languages; the better ones featured statistical analysis and publication-quality report builders. Three that I was familiar with were DRS from ADM, Inc., BASIS from Battelle Columbus Labs, and INQUIRE from IBM.

Tools that accompanied database back-ends could extract, slice, and dice the database content, including very large text fields, to report word counts, phrase counts (breaking on any delimiter), transaction counts, and relationships among data elements across associated record types; to create relationships on the fly; to report expert activity and working documents; and to describe the distribution of resources. These are just a few examples of how new content assets could be created for export in minutes. In particular, a sort command in DRS had histogram controls that were invaluable to my clients managing corporate document and records collections, news clippings files, photographs, patents, etc. They could evaluate their collections by topic, date range, distribution, source, and so on, at any time.
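For readers who never used those command languages, a rough modern analogue may help. The sketch below, written in Python against a few invented records, produces the same kind of word-count and date-histogram report a DRS or BASIS user could request from a text field; it illustrates the shape of the analysis, not any of those products.

```python
# A rough modern analogue, in Python with invented records, of the reports an
# old database command language could produce: word counts across a text field
# and a simple histogram by year. It illustrates the shape of the analysis,
# not any particular product.
from collections import Counter
import re

records = [
    {"date": "1998-03-02", "topic": "patents", "text": "laser alignment patent filed"},
    {"date": "1999-07-15", "topic": "news", "text": "competitor announces laser product"},
    {"date": "1999-11-30", "topic": "patents", "text": "patent granted for alignment method"},
]

def word_counts(recs):
    """Count individual words across all text fields."""
    words = Counter()
    for r in recs:
        words.update(re.findall(r"[a-z]+", r["text"].lower()))
    return words

def histogram_by_year(recs):
    """Count records per year (the sort/histogram style of report described above)."""
    return Counter(r["date"][:4] for r in recs)

print(word_counts(records).most_common(5))
print(sorted(histogram_by_year(records).items()))
```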

So, the ability existed years ago to connect data structures and use a command language to formulate new data models that informed and elucidated how information was being used in the organization, or to illustrate where there were holes in topics related to business initiatives. What were the barriers to widespread adoption? Upon reflection, I came to realize that extracting meaningful content from a database in new and innovative formats requires a level of abstract thinking for which most employees are not well trained. Putting descriptive data into a database via a screen form, performing a transaction on the object of that data on another form, and then adding more data about another, similar but different, object are isolated steps in the database user’s experience and memory. The typical user is not trained to think about how the pieces of data might be connected in the database and therefore is not likely to form new ideas about how it can all be extracted in a report with new information about the content. There is a level of abstraction that eludes most workers whose jobs consist of a lot of compartmentalized tasks.

It was exciting to encounter prospects that really grasped the power of these tools and were eager to push the limits of the command language and reporting applications, but they were scarce. It turned out that our greatest use came in applying text analytics to the extraction of valuable information from our customer support database. A rigorously disciplined staff populated it after every support call with not only demographic information about the nature of the call, linked to a customer record created at the first contact during the sales process (with appropriate updates along the way in the procurement process), but also a textual description of the entire transaction. Over time this database was linked to a “wish list” database and another “fixes” database, and the entire networked structure provided extremely valuable reports that guided both development work and documentation production. We also issued weekly summary reports to the entire staff so everyone was kept informed about product conditions and customer relationships. The reporting tools provided transparency to all staff about company activity and enabled an early version of “social search collaboration.”
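A toy version of that linked-database reporting, again with invented records, looks something like the sketch below: support calls that reference customers and fixes, rolled up into the sort of weekly summary we circulated. The real system was a commercial text database driven by a command language, so treat this only as an illustration of the join-and-summarize idea.

```python
# A toy sketch of the linked-database reporting described above: support-call
# records that reference customers and fixes, rolled up into a weekly summary.
# The record layouts are invented for illustration; the real system was a
# commercial text database with a command language, not Python dictionaries.
from collections import defaultdict

support_calls = [
    {"week": "1995-W32", "customer": "ACME", "fix_id": "F-101",
     "notes": "export fails on large histogram reports"},
    {"week": "1995-W32", "customer": "Globex", "fix_id": None,
     "notes": "asked for phrase-count breakdown by source"},
    {"week": "1995-W33", "customer": "ACME", "fix_id": "F-101",
     "notes": "same export failure with a larger data set"},
]
fixes = {"F-101": "patch export buffer size"}

def weekly_summary(calls):
    """Group call notes by week, attaching the linked fix description if any."""
    summary = defaultdict(list)
    for call in calls:
        fix = fixes.get(call["fix_id"], "no fix linked yet")
        summary[call["week"]].append(f"{call['customer']}: {call['notes']} ({fix})")
    return dict(summary)

for week, lines in sorted(weekly_summary(support_calls).items()):
    print(week)
    for line in lines:
        print("  -", line)
```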

Current text analytics products have significantly more algorithmic horsepower than the old command languages. But making the most of their potential and transforming them into utilities that any knowledge worker can leverage will remain a challenge for vendors in the face of poor abstract reasoning among much of the work force. The tools have improved, but maybe not in all the ways they need to for widespread adoption. Workers should not have to depend on IT folks to create that unique analysis report that reveals a pattern or uncovers product flaws described by multiple customers. We expect workers to multitask, have many aptitudes and skills, and be self-servicing in so many aspects of their work, but the tools too often fall short of what they need to flourish. I’m putting in a big plug for text analytics for the masses, soon, so that enterprise search begins to deliver more than personalized lists of results for one person at a time. Give more reporting power to the user.

The Nexus of Defined Business Process and Ad Hoc Collaboration

My friend Sameer Patel wrote and published a very good blog post last week that examined the relationship of Enterprise Content Management (ECM) and enterprise social software. His analysis was astute (as usual) and noted that there was a role for both types of software, because they offer different value propositions. ECM enables controlled, repeatable content publication processes, whereas social software empowers rapid, collaborative creation and sharing of content. There is a place for both in large enterprises. Sameer’s suggestion was that social software be used for authoring, sharing, and collecting feedback on draft documents or content chunks before they are formally published and widely distributed. ECM systems may then be used to publish the final, vetted content and manage it throughout the content lifecycle.

The relationship between ECM and enterprise social software is just one example of an important, higher level interconnection — the nexus of defined business processes and ad hoc collaboration. This is the sweet spot at which organizations will balance employees’ requirements for speed and flexibility with the corporation’s need for control. The following (hypothetical, but typical) scenario in a large company demonstrates this intersection.

A customer account manager receives a phone call from a client asking why an issue with their service has not been resolved and when it will be. The account manager can query a workflow-supported issue management system and learn that the issue has been assigned to a specific employee and given an “in-progress” status. However, that system does not tell the account manager what she really needs to know! She must turn to a communication system to ask the other employee what the holdup is and the current estimate of time to resolution. She emails, IMs, phones, or maybe even tweets the employee to whom the issue has been assigned to get an answer she can give the customer.

The employee to whom the issue was assigned most likely cannot use the issue management system to actually resolve the problem either. He uses a collaboration system to find documented information and individuals possessing knowledge that can help him deal with the issue. Once the problem is solved, the employee submits the solution to the issue management system, which feeds it to someone who can make the necessary changes for the customer and inform the customer account manager that the issue is resolved. Case closed.
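To make the gap concrete, here is a minimal sketch of what the structured system can answer versus what the account manager still has to ask a person. The Issue record and its fields are hypothetical, not a reference to any particular issue management product.

```python
# A minimal sketch of the gap in the scenario: the structured issue record can
# answer "who has it?" and "what state is it in?", but the answer the account
# manager actually needs comes from a person. The Issue class and its fields
# are hypothetical, not any particular product's schema.
from dataclasses import dataclass

@dataclass
class Issue:
    issue_id: str
    assignee: str
    status: str  # e.g. "in-progress"; all the workflow system can report

def structured_answer(issue: Issue) -> str:
    """What the issue management system can tell the account manager."""
    return f"Issue {issue.issue_id} is assigned to {issue.assignee} ({issue.status})."

def question_for_a_person(issue: Issue) -> str:
    """The question she still has to ask over email, IM, or the phone."""
    return f"Ask {issue.assignee}: what is the holdup, and when will it be resolved?"

ticket = Issue("CASE-4182", "jdoe", "in-progress")
print(structured_answer(ticket))
print(question_for_a_person(ticket))
```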

The above scenario illustrates the need for both process-centric and people-centric systems. Without the kludgy, structured issue management system, the customer account manager would not have known to whom the issue had been assigned and thus would have been unable to contact a specific individual to get better information about its status. Furthermore, middle managers would not have been able to assign the case in a systematic way or see the big picture of all cases being worked on for customers without the workflow and reporting capabilities of the issue management system. On the other hand, ad hoc communication and collaboration systems were the tools that drove actual results. The account manager and the employee to whom the issue was assigned would not have been able to do their work if the issue management system had been their only support tool. They needed less structured tools that allowed them to communicate and collaborate quickly to actually resolve the issue.

We should not expect that organizations striving to become more people-centric will abandon their ECM, ERP, or other systems that guide or enforce key business processes. There is a need for both legacy management and Enterprise 2.0 philosophies and systems in large enterprises operating in matrixed organizational structures. Each approach can provide value: one quantifiable in hard currency, the other in terms of softer, but important, business metrics (more on this in a future post). The enterprises that identify, and operate at, the intersection of structured process and ad hoc communication/collaboration will gain short-term competitive advantage.

Searching Email in the Enterprise

Last week I wrote about “personalized search,” and then a chance encounter at a meeting triggered a new awareness of business behavior that makes my own personalized search a lot different from what might work for others. A fellow introduced himself to me as the founder of a start-up with a product for searching email. He explained that countless nuggets of valuable information reside in email and will never be found without a product like the one his company had developed. I asked if it only retrieved emails that were resident in an email application like Outlook; he looked confused and said “yes.” I commented that I leave very little content in my email application but instead save anything with information of value in the appropriate file folders, alongside documents in other formats on the same topic. If an attachment is substantive, I may create a record with more metadata in my content management database so that I can use the application’s search engine to find information germane to projects I work on. He walked away with no comment, so I have no idea what he was thinking.

It did start me thinking about the realities of how individuals dispose of, store, categorize, and manage their work-related documents. My own process goes like this. My work content falls into four broad categories: products and vendors, client organizations and business contacts, topics of interest, and local infrastructure-related materials. When material is not purposed for a particular project or client but may be useful for a future activity, it gets a metadata record in the database and is hyperlinked to the full text. The same goes for useful content out on the Web.

When it comes to email, I discipline myself to dispose of all email into its appropriate folder as soon as I can. Sometimes this involves two emails, the original and my response. When the format is important, I save it in the *.mht format (it used to be *.htm until I switched to Office 2007 and realized that doing so created a folder for every file saved); otherwise, I save the content in *.txt format. I rename every email to include a meaningful description with topic, sender, and date so that I can identify the appropriate email when viewing a folder. If there is an attachment, it also gets an appropriate title and date, is stored in its native format, and the associated email has “cover” in the file name; this helps associate the email and attachment. The only email saved in Outlook, in personal folders, is current activity where lots of back and forth is likely to occur until a project is concluded. Then it gets disposed of, either by deleting it or by filing it with the project folders as described above. This is personal governance that takes work. Sometimes I hit a wall and fall behind on the filtering and disposing, but I keep at it because it pays off in the long term.
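The renaming discipline is mechanical enough to script. The sketch below shows one way to generate such names; the separator and slug format are illustrative rather than my exact convention, but topic, sender, and date are the pieces that matter.

```python
# A small sketch of the renaming discipline described above. The separator and
# slug format here are illustrative; the point is that topic, sender, and date
# make an email identifiable at a glance, and "cover" marks the email that
# accompanies a separately saved attachment.
from datetime import date

def email_filename(topic, sender, sent, is_cover=False, ext="txt"):
    """Return e.g. 'vendor-briefing_jsmith_2009-08-14_cover.txt'."""
    slug = topic.lower().replace(" ", "-")
    parts = [slug, sender, sent.isoformat()]
    if is_cover:
        parts.append("cover")  # flags the email that goes with an attachment
    return "_".join(parts) + "." + ext

print(email_filename("Vendor briefing", "jsmith", date(2009, 8, 14)))
print(email_filename("Vendor briefing", "jsmith", date(2009, 8, 14),
                     is_cover=True, ext="mht"))
```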

So, why not relax, leave it all in Outlook, and let a search engine do the retrieval? Experience has revealed that most emails are labeled so poorly by senders, and their content is so cryptic, that expecting a search engine to retrieve them in a particular context or with the correct relevance would be impossible. I know this from having to preview dozens of emails stored in folders for projects that are active. I have decided to give myself the peace of mind that when the crunch is on, and I really need to go to that vendor file and retrieve what they sent me in March of last year, I can get it quickly in a way that no search engine could ever match. Do you realize how much correspondence you receive from business contacts using their “gmail” accounts, with no contact information in the body revealing their organization, signed with a nickname like “Bob,” and containing messages like “we’re releasing the new version in four weeks” or just a link to an important article on the web with “thought this would interest you”?

I did not have a chance to learn whether my new business acquaintance had any sense of the amount of competition he has out there for email search, what his differentiator is that makes a compelling case for a product that only searches email, or what happens to his product when Microsoft finally gets FAST search bundled to work with all Office products. Or perhaps the rest of the world is storing all content in Outlook. Is this true? If so, he may have a winner.

Gilbane Boston Speaking Proposal Update

We are still working on the program for this year’s Boston conference, December 1-3, and Sarah has left us for graduate school. Fortunately, we have a great new Marketing Coordinator, Scott Templeman, who will be communicating with all of you who have submitted proposals. You can reach Scott at 617-497-9443 ext 156 or at scott@gilbane.com with any questions about the status of your proposals, but official confirmations are still a week or two away.

Gilbane Group Releases New Study on Multilingual Product Content

For Immediate Release

Pioneering Research Describes Transformation of Technical Communications Practices to Align More Closely With Global Business Objectives

Cambridge, MA, July 28 — Gilbane Group, Inc., the analyst and consulting firm focused on content technologies and their application to high-value business solutions, today announced the publication of its latest research, Multilingual Product Content: Transforming Traditional Practices Into Global Content Value Chains.

The report is backed by in-depth qualitative research on how global businesses are creating, managing, and publishing multilingual product content. The study extends Gilbane’s 2008 research on multilingual business communications with a close look at the strategies, practices, and infrastructures specific to product content.

The research clearly shows a pervasive enterprise requirement for product content initiatives to tangibly improve global customer experience. Respondents from a mix of technical documentation, customer support, localization/translation, and training departments indicate that “global-ready technology architectures” are the second most often cited ROI factor to meet the directive. All respondents view single-sourcing strategies and self-help customer support applications as the two most important initiatives to align product content with global business objectives.

“Successful business cases for product content globalization address top-line issues relevant to corporate business goals while tackling bottom-line process improvements that will deliver cost savings,” commented Leonor Ciarlone, Senior Analyst, Gilbane Group, and program lead for Multilingual Product Content. “Our research shows that while multilingual content technologies are clearly ROI enablers, other factors influence sustainable results. Cross-departmental collaboration and overarching business processes, cited as essential improvements by 70% and 82% of respondents respectively, are critical to transforming traditional practices.”

 Multilingual Product Content is the first substantive report on the state of end-to-end product content globalization practices from multiple perspectives. “Gilbane’s latest research continues to show both language and content professionals how the well-managed intersection of their domains is becoming best practice,” said Donna Parrish, Editor, MultiLingual magazine. “With practical insights and real experiences in the profiles, this study will serve as a valuable guide for organizations delivering technical documentation, training, and customer support in international markets.”

The report covers business and operational issues, including the evolving role of service providers as strategic partners; trends in authoring for quality at the source, content management and translation management integration, machine translation, and terminology management; and progress towards developing metrics for measuring the business impact of multilingual content. Profiles of leading practitioners in high tech, manufacturing, automotive, and public sector/education are featured in the study.

Multilingual Product Content: Transforming Traditional Practices Into Global Content Value Chains is available as a free download from the Gilbane Group website at https://gilbane.com. The report is also available from study sponsors Acrolinx, Jonckers, Lasselle-Ramsay, LinguaLinx, STAR Group, Systran, and Vasont Systems.

About Gilbane Group

Gilbane Group, Inc., is an analyst and consulting firm that has been writing and consulting about the strategic use of information technologies since 1987. We have helped organizations of all sizes from a wide variety of industries and governments. We work with the entire community of stakeholders including investors, enterprise buyers of IT, technology suppliers, and other analyst firms. We have organized over 70 educational conferences in North America and Europe. Our next event is Gilbane Boston, 1-3 December 2009, http://gilbaneboston.com. Information about our newsletter, reports, white papers, case studies, and blogs is available at https://gilbane.com. Follow Gilbane Group on Twitter at http://twitter.com/gilbane.

Contact:
Gilbane Group, Inc.
Ralph Marto, +1.617.497.0443 ext 117
ralph@gilbane.com

 
