The Gilbane Advisor

Curated for content, computing, data, information, and digital experience professionals


Mark Logic Releases MarkLogic Toolkit for Excel

Mark Logic Corporation released the MarkLogic Toolkit for Excel, a free way to integrate Microsoft Office Excel 2007 with MarkLogic Server. Earlier this year, Mark Logic delivered a Toolkit for Word and a Connector for SharePoint. Together, these offerings allow users to extend the functionality of Microsoft Office products and build applications that leverage the native document format, Office Open XML (OOXML).

Distributed under an open source model, the MarkLogic Toolkit for Excel includes an Excel add-in that allows users to deploy information applications into Excel, comprehensive libraries for managing and manipulating Excel data, and a sample application that demonstrates best practices. The Toolkit adds search functionality, allowing organizations to search across their Excel files for worksheets, cells, and formulas. Search results can be imported directly into the workbooks users are actively authoring, and workbooks, worksheets, formulas, and cells can be exported directly from active Excel documents to MarkLogic Server for immediate use by queries and applications.

The Toolkit also allows customers to easily create new Excel workbooks from existing XML documents, and a built-in XQuery library lets users manipulate and re-use workbooks stored in the repository. For instance, a financial services firm can replace the manual process of cutting and pasting information from XBRL documents into Excel reports with an automated system that extracts the relevant sections of XBRL reports, combines them, and saves them as an Excel file. Finally, users can add and edit multiple custom metadata documents across workbooks, improving their ability to discover and reuse information contained in Excel spreadsheets.
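This kind of server-side manipulation is possible because an Office Open XML workbook is simply a ZIP archive of XML parts. The following sketch in plain Python (no MarkLogic code involved, and the worksheet content is invented for illustration) builds a minimal SpreadsheetML worksheet part, packages it the way Excel does, and then queries cells and formulas as ordinary XML:

```python
# A .xlsx workbook is a ZIP archive of Office Open XML (OOXML) parts.
# This sketch builds a minimal worksheet part and reads cell values and
# formulas back out, illustrating the XML that XQuery libraries operate on.
import io
import zipfile
import xml.etree.ElementTree as ET

NS = "http://schemas.openxmlformats.org/spreadsheetml/2006/main"

SHEET_XML = f"""<?xml version="1.0" encoding="UTF-8"?>
<worksheet xmlns="{NS}">
  <sheetData>
    <row r="1">
      <c r="A1"><v>42</v></c>
      <c r="B1"><f>A1*2</f><v>84</v></c>
    </row>
  </sheetData>
</worksheet>"""

# Package the part the way Excel does: as an entry in a zip archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("xl/worksheets/sheet1.xml", SHEET_XML)

# Reading it back: unzip, parse, and query cells/formulas as plain XML.
with zipfile.ZipFile(buf) as zf:
    tree = ET.fromstring(zf.read("xl/worksheets/sheet1.xml"))

cells = {c.get("r"): c.findtext(f"{{{NS}}}v") for c in tree.iter(f"{{{NS}}}c")}
formulas = {c.get("r"): c.findtext(f"{{{NS}}}f")
            for c in tree.iter(f"{{{NS}}}c")
            if c.find(f"{{{NS}}}f") is not None}

print(cells)     # {'A1': '42', 'B1': '84'}
print(formulas)  # {'B1': 'A1*2'}
```

Because worksheets, cells, and formulas are all addressable XML elements like these, a server that natively stores and queries XML can search and recombine them without Excel in the loop.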
To download the MarkLogic Toolkit for Excel, visit the Mark Logic Developer Workshop at http://developer.marklogic.com/code/. More information: http://www.marklogic.com

Paying Attention to Enterprise Search Results

Presentation of search results is rarely high on the list of design considerations when planning and implementing enterprise search use cases. Learning about a new software layer called Documill from CEO and founder Mika Könnölä caused me to reflect on the applications in which his software would be a benefit.

There is one aspect of search output (results) that always makes an impression when I search. Sometimes the display is clear and obvious; other times the first thing that pops into my mind is “what the heck am I looking at?” or “why did this stuff appear?” In most cases, no matter how relevant the content may end up being to my query, I usually have to plow through many content pieces (sometimes dozens) to confirm the validity or usefulness of what is retrieved.

Admittedly, much of my searching is research or helping with a client’s intranet implementation, not just looking for a quick answer, a fact, or a specific document. When I am in the mode for what I call “quick and dirty” search, I can almost always frame the search statement to get the exact result I want very quickly. But when I am trying to learn about a topic new to me, broaden my understanding, or collect an exhaustive corpus of material for research, sifting and validating dozens of documents by opening each one and then searching within the text for the piece of content that satisfied the query is both tedious and annoyingly slow.

That is where Documill could enrich my experience considerably, for it can be layered on any number of enterprise search engines to present results as precise thumbnails that show where in a document the query criteria are located. In their own words, “it enhances traditional search engine result list with graphically accurate presentation of the content.”

Here are some ideas for its application:

  • In an application developed to find specific documents from among thousands that are very similar (e.g. invoices, engineering specifications), wouldn’t it be great to see only a dozen results, each already opened to the location where the data matches the query?
  • In an application with tens of thousands of legacy documents, OCRed for metadata extraction and displayable as PDFs, wouldn’t it be great to have the exact pages that match the search displayed as visual images, opened and ready to read, in the results page? This is especially important in technical documents of 60-100 pages where the target content might be on page 30 or 50.
  • In federated search output, when results may contain many similar documents, the immediate display of just the right pages as images ready for review will be a time-saving blessing.
  • In a situation where a large corpus of content contains photographs or graphics, such as newspaper archives, scientific and engineering drawings, an instantaneous visual of the content will sharpen access to just the right documents.

I highly recommend that you ask your search engine solution provider about incorporating Documill into your enterprise search architecture. And, if you have, please share your experiences with me through comments to this post or by reaching out for a conversation.

VFP XBRL Errors Will Not Carry Over to Mandatory Program

The article “Accuracy Essential to Success of XBRL Financial Filing Program,” by Eileen Z. Taylor and Matt Shipman (NC State News, June 8, 2009), has been widely discussed recently in XBRL circles.

The key sentence in the news story about the academic paper states:

“The researchers are concerned that, if the upcoming XBRL filings do not represent a significant improvement from the voluntary reports, stakeholders in the financial community will not have any faith in the XBRL program – and it will be rendered relatively ineffective.”

Wrong on at least two counts. First, to assume that XBRL submissions to the formal, rule-laden, error-checking mandatory program will be as error-ridden as the free-for-all, no-rules sandbox of the VFP is flat-out wrong. I suggest the authors of the paper read chapter 6 of the EDGAR filing manual, which details hundreds of rules that must be followed before an XBRL exhibit will be accepted by the system. In other words, almost every error the researchers found in the VFP will be rejected by the SEC and require correction.

Second, while validation programs can catch some of the accounting errors introduced into XBRL filings, responsible and knowledgeable humans at filing corporations must review submissions prior to filing. The management team is responsible for the data contained in the XBRL exhibits. The SEC has specifically stated that it expects corporations to have in place an XBRL preparation process that is documented and tested in a fashion similar to other required internal controls. An accounting error on any future XBRL exhibit is an indication that the company does not have sufficient internal controls in place.
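The distinction between the two filing regimes comes down to pre-acceptance rule checking: a mandatory filing that violates a rule is rejected outright rather than accepted with errors. The toy validator below illustrates the idea; the three rules are invented for this sketch (the real EDGAR filing manual, chapter 6, defines hundreds):

```python
# Illustrative only: a toy pre-acceptance validator in the spirit of
# rule-based filing checks. The rules here are simplified inventions.
from dataclasses import dataclass, field

@dataclass
class Filing:
    period_end: str                            # e.g. "2009-06-30"
    facts: dict = field(default_factory=dict)  # XBRL element name -> value

def validate(filing: Filing) -> list:
    """Return a list of violations; an empty list means the filing passes."""
    errors = []
    # Rule 1: the required reporting period must be present.
    if not filing.period_end:
        errors.append("missing reporting period end date")
    # Rule 2: monetary facts must be numeric.
    nums = {}
    for name, value in filing.facts.items():
        try:
            nums[name] = float(value)
        except (TypeError, ValueError):
            errors.append(f"non-numeric value for {name}: {value!r}")
    # Rule 3: a calculation consistency check (Assets = Liabilities + Equity),
    # run only when all three facts parsed as numbers.
    a, l, e = (nums.get(k) for k in ("Assets", "Liabilities", "StockholdersEquity"))
    if None not in (a, l, e) and abs(a - (l + e)) > 0.01:
        errors.append("Assets does not equal Liabilities plus StockholdersEquity")
    return errors

# A clean filing passes; a sloppy one is rejected with explicit violations.
ok = Filing("2009-06-30", {"Assets": "100", "Liabilities": "80",
                           "StockholdersEquity": "20"})
bad = Filing("", {"Assets": "100", "Liabilities": "eighty",
                  "StockholdersEquity": "20"})
print(validate(ok))        # []
print(len(validate(bad)))  # 2
```

The point of the rejection model is exactly the one argued above: errors of this kind never reach the published record, because the filer must correct and resubmit.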

No, I’m not expecting the program’s start to be perfect. However, I do expect XBRL filings to be as accurate as, or more accurate than, existing HTML EDGAR filings.

Webinar: Multilingual Product Content at FICO

Wednesday, June 17th, 2009 — 11:00 to 12:00 (GMT -5:00) Eastern Time

* To check the webinar time in your local area, go to: www.timezoneconverter.com.

The challenges facing FICO, a leading supplier of decision management analytics, applications and tools, will sound familiar to global organizations: the need to streamline product and content development lifecycles, support global expansion with accurate and timely localization and translation processes, and satisfy customers worldwide with consistent, quality experience. What makes FICO’s story unique is its strategic and proactive approach to addressing them.

With a successful business case based on reuse as a “first principle,” FICO is building an enterprise content infrastructure that includes XML and DITA, component content management, translation memory and terminology management, and automated publishing. Learn how FICO is aligning global content practices with the company’s business goals and objectives. If you need to spark that “aha!” moment within your organization, you won’t want to miss this webinar event. Topics:

  • Reuse as the tipping point: the synergies of component approaches to product and content development
  • Implementing an end-to-end global information strategy
  • The value of content agility in FICO’s global business strategy
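For readers new to component content approaches, DITA’s content-reference (conref) mechanism is a concrete example of reuse as a “first principle”: a paragraph is maintained once and pulled in by reference wherever it is needed. A minimal sketch (topic names and text invented for illustration):

```xml
<!-- warnings.dita: a shared component topic, maintained in one place -->
<topic id="warnings">
  <title>Shared warnings</title>
  <body>
    <p id="data-retention">Scores are recalculated nightly; cached
       results older than 24 hours should not be used.</p>
  </body>
</topic>

<!-- Any other topic reuses the paragraph by reference, not copy-paste -->
<topic id="scoring-guide">
  <title>Scoring guide</title>
  <body>
    <p conref="warnings.dita#warnings/data-retention"/>
  </body>
</topic>
```

Because the warning lives in a single source component, a correction or retranslation is made once and flows into every deliverable that references it, which is where the translation-memory and publishing savings come from.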

Speakers:

  • Leonor Ciarlone, Senior Analyst, Gilbane Group
  • Carroll Rotkel, Director, Product Documentation, FICO
  • Howard Schwartz, Ph.D., VP Content Management, SDL Trisoft

Registration is open. Sponsored by SDL.

Emerging Enterprise Content Management Trends

I was at the Gilbane Conference in San Francisco last week, where I answered questions as a panelist, moderated another panel, heard many excellent presentations, and joined in many engaging discussions. On the plane ride home, I took some time to piece together the individual bits of information and opinion that I had absorbed during the two-day event. This reflection led to the following observations regarding the state of enterprise content management practices and technologies.

Up With People

Many content software vendors are now focusing on people first, content second. This is a huge shift in perspective, especially when voiced at a content management conference! Kumar Vora, Vice President & General Manager, Enterprise at Adobe, was the first person to proclaim this philosophical change during his opening keynote presentation at Gilbane San Francisco. He reported that Adobe has shifted its business philosophy to focus on serving people and their needs, as opposed to thinking about content first. Many other vendor representatives and attendees from end user organizations echoed Kumar’s emphasis on people during the event. It is too early to say definitively what this radical change in perspective means, but we should see more user-friendly enterprise content management tools as a result.

Keyword Fail

Keyword search has largely failed end users, and incremental improvements haven’t kept up with the explosion in newly created content. Jeff Fried, VP of Product Management for Microsoft’s FAST search engine, went so far as to proclaim that “keyword search is dead!” The business world is at a point where alternatives, including machine-generated and social search techniques, must be explored. The latter method was on many attendees’ minds and lips, which should not be surprising given the shift to people-centric thinking identified above. Social search will be an increasingly hot topic in 2009 and 2010.

SharePoint Upheaval

Microsoft SharePoint 2010 has the potential to completely shake up the information management market. The next version of SharePoint will likely include a raft of (as yet unconfirmed) Web Content Management features that have been missing or rudimentary. In her keynote address, Tricia Bush, Group Product Manager for SharePoint, said that the promise of content management has not yet been realized and that her team is focusing diligently on the opportunity. This increased emphasis on content management is contrary to the first trend that I described above, and the negative perceptions many hold of SharePoint may increase unless Microsoft also better enables people in SharePoint 2010 (it is rumored that the product will also see substantial additions to its currently limited social collaboration functionality). Those placing bets should do so knowing that Microsoft intends to, and probably will, be a major force in enterprise information management.

Simplicity Trumps Complexity

Enterprise applications and systems managed by IT departments continue to grow in complexity. As this happens, end users turn to simpler alternatives, including consumer-oriented Web 2.0 applications, in order to get work done. The “problem” is that these consumer applications aren’t approved or controlled by the IT function. The opportunity is a potentially large market for software vendors that can create enterprise-ready versions of Web 2.0 applications by adding security, reliability, and other attributes demanded by CIOs. For those vendors to succeed, however, they must retain the simplicity (intuitiveness and ease of use) that is the hallmark of consumer Web 2.0 applications.

Communication Beats Publishing

Communication applications are increasingly being used by end users to collaborate, because enterprise content management applications have become too complex (see the trend immediately above). Additionally, communication tools are favored by end users because they can use them to simultaneously create and distribute content. This increased speed of content publication also accelerates general business process execution, allowing users of communication tools to be more productive than users of formal enterprise content systems. Communication tools will continue to become an important and growing back channel that employees use to share content when overly complex publishing tools impede or fail them.

Having one’s ideas validated by a reputable peer is always rewarding. John Mancini, President of AIIM, published a blog post in the time between when I first formulated these thoughts on the flight home from San Francisco last week and when I published this post today. Reading John’s post should encourage you to believe that the trends I (and he) have described are for real. The question for all of us now is how will we respond to these emerging realities.


Buzz from Gilbane San Francisco Conference

The Gilbane Conference held in San Francisco last week was a great success! There were informative presentations, lively discussions, and abundant tweets on both days of the event. If you are skeptical of this admittedly biased assessment, check out the following tweets that were broadcast by attendees during and after the conference.

Thanks to everyone who attended the conference and especially those who live tweeted during the event. We look forward to seeing you again at Gilbane Boston in December!

Yooba Releases CMS for Flash

Yooba Ltd announced the full commercial availability of its online Flash creation and management system, Yooba. Yooba is a content management system (CMS) designed specifically for Flash website content creation. As with CMSs for static content, Yooba puts full creative power over Flash, right down to the object level, into the hands of editors and others responsible for site content origination and maintenance, without the need for programming skills. As a Software-as-a-Service (SaaS) application, there are no licensing issues with Yooba, and users are always working with the latest version. The SaaS structure also gives full scalability on pricing, suiting anyone from individual professionals to enterprises. Once content is created, Yooba simplifies the scheduling and publication of created and edited material through its graphical admin dashboard, which gives users at-a-glance control of Flash objects within a website, making it easy to update and change them as frequently as information and sales campaigns require. http://www.yooba.com/

This just in…. 2009 US GAAP XBRL taxonomies but not until July 22nd

The SEC has finally announced what will be happening regarding the use of US GAAP taxonomies for its mandatory XBRL program.  The following was posted on the SEC website today:

** The US GAAP 2009 Taxonomies are in the process of being loaded into the EDGAR system and will be available for use on July 22, 2009. We strongly encourage companies to begin working with this new taxonomy now — it is publicly available at .

Companies should use the latest available taxonomy for their entire fiscal year. However, due to the timing of the US GAAP 2009 taxonomy being made available, companies will be permitted to use the U.S. GAAP 1.0 taxonomy in their first required submission before switching to the US GAAP 2009 taxonomy.

This creates a very interesting dilemma for companies under the gun for the first round of filings. Do they create returns with an obsolete taxonomy (the 2008 US GAAP taxonomy) or wait until July 22 to file with the SEC? I wonder how this will affect companies who want to submit their second quarter returns to the SEC prior to July 22nd. If they do file with the 2008 taxonomy as permitted (see above), will their filings be comparable with those of companies who wait and use the 2009 taxonomy? Will they file amendments to update to the 2009 taxonomy? Oh so many questions. I can’t wait for the SEC’s webcast on Wednesday.

