Mark Logic Corporation released the MarkLogic Toolkit for Excel. This new offering provides users a free way to integrate Microsoft Office Excel 2007 with MarkLogic Server. Earlier this year, Mark Logic delivered a Toolkit for Word and a Connector for SharePoint. Together, these offerings allow users to extend the functionality of Microsoft Office products and build applications leveraging the native document format, Office Open XML (OOXML).

Distributed under an open source model, the MarkLogic Toolkit for Excel comes with an Excel add-in that allows users to deploy information applications into Excel, comprehensive libraries for managing and manipulating Excel data, and a sample application that leverages best practices. The Toolkit offers greater search functionality, allowing organizations to search across their Excel files for worksheets, cells, and formulas. Search results can be imported directly into the workbooks that users are actively authoring, and workbooks, worksheets, formulas, and cells can be exported directly from active Excel documents to MarkLogic Server for immediate use by queries and applications.

The Toolkit for Excel allows customers to easily create new Excel workbooks from existing XML documents, and users can now manipulate and re-use workbooks stored in the repository with a built-in XQuery library. For instance, a financial services firm can replace the manual process of cutting and pasting information from XBRL documents into Excel reports with an automated system: using the Toolkit, this streamlined process extracts relevant sections of XBRL reports, combines them, and saves them as an Excel file. The Toolkit also allows users to add and edit multiple custom metadata documents across workbooks, improving users' ability to discover and reuse information contained in Excel spreadsheets.

To download the MarkLogic Toolkit for Excel, visit the Mark Logic Developer Workshop located at http://developer.marklogic.com/code/, http://www.marklogic.com
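As a rough illustration of the kind of XBRL-to-Excel automation described above, here is a minimal Python sketch. It is not the Toolkit's actual API (the Toolkit itself is an Excel add-in plus a server-side XQuery library); it simply assumes the lxml and openpyxl libraries and placeholder file names to show how simple facts can be pulled out of an XBRL instance and written to a worksheet.

# Illustrative only: extract simple XBRL facts and write them to Excel.
# "report.xml" and "facts.xlsx" are placeholder file names.
from lxml import etree          # XML parsing
from openpyxl import Workbook   # .xlsx output

XBRLI = "http://www.xbrl.org/2003/instance"   # XBRL instance namespace

def xbrl_facts_to_excel(instance_path, xlsx_path):
    tree = etree.parse(instance_path)
    wb = Workbook()
    ws = wb.active
    ws.title = "XBRL facts"
    ws.append(["concept", "contextRef", "unitRef", "value"])
    for elem in tree.getroot().iter():
        # Simple, non-tuple facts carry a contextRef attribute and live
        # outside the xbrli namespace (which holds contexts and units).
        if elem.get("contextRef") is None:
            continue
        qname = etree.QName(elem)
        if qname.namespace == XBRLI:
            continue
        ws.append([qname.localname, elem.get("contextRef"),
                   elem.get("unitRef", ""), (elem.text or "").strip()])
    wb.save(xlsx_path)

if __name__ == "__main__":
    xbrl_facts_to_excel("report.xml", "facts.xlsx")

In the Toolkit itself, the equivalent selection and assembly would be handled on the MarkLogic side with the bundled XQuery library rather than in client-side Python.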
Category: Web technologies & information standards
Here we include topics related to information exchange standards, markup languages, supporting technologies, and industry applications.
The article “Accuracy Essential to Success of XBRL Financial Filing Program” by Eileen Z. Taylor and Matt Shipman (NC State News, June 8, 2009) has been widely discussed recently in XBRL circles.
The key sentence in the news story about the academic paper states:
“The researchers are concerned that, if the upcoming XBRL filings do not represent a significant improvement from the voluntary reports, stakeholders in the financial community will not have any faith in the XBRL program – and it will be rendered relatively ineffective.”
Wrong on at least two counts. First, to assume that the quality of XBRL submissions in the formal, rule-laden, error-checked mandatory XBRL program is going to be as error-ridden as the sandbox, free-for-all, no-rules voluntary filing program (VFP) is flat out wrong. I suggest the authors of the paper read the EDGAR filing manual, Chapter 6, which details hundreds of rules that must be followed before an XBRL exhibit will be accepted by the system. In other words, almost every error found in the VFP by the researchers would be rejected by the SEC and require correction.
Second, while validation programs can catch some of the accounting errors introduced into XBRL filings, responsible and knowledgeable humans at filing corporations must still review submissions prior to filing. The management team is responsible for the data contained in the XBRL exhibits. The SEC has specifically stated that it expects corporations to have in place an XBRL preparation process that is documented and tested in a similar fashion to other required internal controls. An accounting error on any future XBRL exhibit is an indication that the company does not have sufficient internal controls in place.
No, I’m not expecting the start-up period to be perfect. However, I do expect XBRL filings to be as accurate as, or more accurate than, existing HTML EDGAR filings.
The SEC has finally announced what will be happening regarding the use of US GAAP taxonomies for its mandatory XBRL program. The following was posted on the SEC website today:
** The US GAAP 2009 Taxonomies are in the process of being loaded into the EDGAR system and will be available for use on July 22, 2009. We strongly encourage companies to begin working with this new taxonomy now — it is publicly available at .
Companies should use the latest available taxonomy for their entire fiscal year. However, due to the timing of the US GAAP 2009 taxonomy being made available, companies will be permitted to use the U.S. GAAP 1.0 taxonomy in their first required submission before switching to the US GAAP 2009 taxonomy.
This creates a very interesting dilemma for companies under the gun for the first round of filings. Do they create filings with an obsolete taxonomy (the 2008 US GAAP taxonomy), or wait until July 22 to file with the SEC? I wonder how this will affect companies that want to send their second-quarter filings to the SEC prior to July 22. If they do file with the 2008 taxonomy as permitted (see above), will their filings be comparable with those of companies that wait and use the 2009 taxonomy? Will they file amendments to update to the 2009 taxonomy? Oh, so many questions. I can't wait for the SEC's webcast on Wednesday.
Technology is literally exploding: that's a good thing, isn't it? PDAs, Twitter, iPods that do everything but cook, social networking and constant connectedness: all of it making our lives more in-touch, immediate, visual and interactive. There is, however, another side to this amazing progress. I like to call it the "technology imperative," and it grows from the fact that as technology and its use grows, it usually follows paths driven by consumers' desires and willingness to spend money (whims, if you will). Once unleashed, these technology-triggered, consumer-driven appetites tend to return the favor, pointing the way to where and how their technology providers will go next. Sometimes the process literally becomes circular, taking the technology and its uses into a spiral no one would ever have predicted and for which no one is fully prepared. If you're designing chips, selling gadgets or trolling Best Buy for the next version of the iPhone, this looks like the best of all possible worlds. The problem comes when non-consumer sectors of the culture begin to feel the impact of this race to connect.

Technology is Neutral but its Uses are Often a Poor Guide: In effect, consumer technology becomes the de facto guide for areas of our culture far from the environments for which it was designed and the modes in which consumers use it. For example, as we saw the rise of the Blackberry, instant email and messaging, we eventually saw workers, even in meetings, with their eyes and attention spans glued to their devices, scarcely aware that they were supposed to be a contributing part of the meeting and its decision making. The situation became so widespread and vexing that many firms have literally banned PDAs from company meetings, and in 2006 a new condition known as Continuous Partial Attention Syndrome was identified, in which the individual becomes so distracted by the overload of available information that any attempt to focus on a thought or subject is seriously degraded if not lost. In its extreme form, this syndrome sees the individual succumbing to a virtual addiction to instant information gratification, leading to a mind wandering in a sea of tidbits with no logical relationship to the subject at hand, even if that subject involves controlling a 4,000-pound automobile.

Should Government Use Technology or Technology Drive Government? Today, technology has progressed far beyond those days, rudimentary by comparison, into a world of constant connectedness that can deliver not only the linkage but an intense, and seductive, visual, auditory and activity experience. With it, we are seeing an entirely new impact, especially pronounced in government sectors. Should government agencies, for example, put their important decisions out on Twitter and other social media to inform and elicit feedback from citizens? It sounds like a good way to improve the governing process, but in practice it has all manner of problems, not the least of which are mass responses that can overwhelm the agency's ability to make sense of them, egalitarian leveling that makes everyone's opinion on every subject of equal weight if not value, group-influenced or group-generated responses that masquerade as individual opinions, and so on. In the intersection of government and technology, the technology is likely to come out on top, driving the governing process in directions it should not take but becomes powerless to avoid. So what are we to do?
Like Ulysses stuffing his crew's ears with wax to avoid the clarion call of the Sirens, we must ignore how technology is taken up by the consumer world, no matter how enticing the outcome, concentrating instead on how the governing process may be improved by increased transparency and responsiveness. This concentration should be based on a healthy respect for the unintended consequences of any fundamental changes in the governing process, coupled with an even healthier skepticism for the brave-new-world claims of the technological community. As we better understand what is broken in our governing process and what can be accomplished more effectively, we will have a foundation to consider, evaluate and adopt technology in a way that improves government as it was envisioned by our founders, always remaining mindful that government as we conceive it is not supposed to be slick or interactive but solid, fair and resistant to both individual whim and mob rule.
EMC Corporation (NYSE:EMC) announced free, full-function developer editions of its enterprise content management (ECM) products and launched two new online communities dedicated to EMC Documentum and XML developers. Through the Documentum and XML communities, developers will have open access to resources that include code samples, tutorials, full product documentation and “getting started” guides.

EMC Documentum Content Server Developer Edition provides developers free access and offers a “one-click” deployment that can be run on a laptop so that developers can quickly start creating their Documentum-based solutions. EMC Documentum xDB Developer Edition provides developers a scalable, high-performance, native XML database at no cost for development and testing. The .NET Productivity Suite makes it easier for developers to build integrations with Microsoft applications such as SharePoint by allowing them to work exclusively in a Microsoft environment. EMC Documentum Content Services for Salesforce CRM allows developers to embed Documentum content services within Salesforce CRM.

All products are available today. The free developer editions of Content Server and xDB are for development, testing and trial only; standard licensing fees apply for production and run-time deployments. http://www.emc.com/
The Securities and Exchange Commission (SEC) is getting a bit behind on XBRL. Since publishing the final rule in the Federal Register (http://www.sec.gov/rules/final/2009/33-9002.pdf) on February 10, 2009, the SEC has been preparing for the first official filings in XBRL. According to the guidelines, large accelerated filers with a world-wide capital float greater than $5 billion USD as of June 30, 2008 are required to begin filing XBRL with the first quarterly filing for periods ending after June 15, 2009.
In preparation for the new filings, XBRL US released the 2009 version of the US GAAP XBRL taxonomy. The new taxonomy contains guidance on the latest FASB pronouncements (FAS 160, 161, 163, 141 for example) that are required for most filers after December 15, 2008. FASB’s timetable requires companies to adopt the new Financial Accounting Standards and have their SEC filing for 2009 reflect the changes.
The new standards were published long after the current official 2008 US GAAP XBRL taxonomy. In other words, the new 2009 US GAAP taxonomy, which does incorporate all filing requirements for US GAAP as of December 31, 2009, is required for the filing of correct XBRL. XBRL US has released the 2009 taxonomies, and they are now available on its web site, but they are not available on the SEC's website.
As of May 24, 2009, the taxonomies listed on the SEC website as official (see http://www.sec.gov/info/edgar/edgartaxonomies.shtml) are dated March 31, 2008. This means that any filing entity subject to the new FASB regulations will not have an official taxonomy with which to file. In fact, a few companies have already submitted XBRL to the SEC using the new 2009 taxonomy, only to receive a swift rejection. The EDGAR system is not yet ready to accept the 2009 taxonomy and will not be until it is announced and listed on the SEC's website. The XBRL viewer will also not accept the 2009 US GAAP taxonomy, leaving filers with no means to validate their XBRL.
Will the SEC correct this problem? Of course they will. In the meantime, individual filers and filing agents are without any official means of determining the correctness of filings. Companies that are subject to the new FAS pronouncements are encouraged to prepare their filings with the new 2009 taxonomy and wait for SEC notification. Let’s hope that day arrives very soon.
The Government Information Transparency Act (H.R. 2392), introduced May 14 by Rep. Darrell Issa (R-Calif.), would standardize the collection of business information throughout agencies. It would require agencies to use a single data standard known as eXtensible Business Reporting Language (XBRL) and require that collected information be made readily available for public access.
Of course, this raises the question of what data will be collected. Once that is decided, a taxonomy can be assembled fairly easily to collect the data in a uniform way. Software companies in the XBRL space have developed web-based, HTML-driven, fill-in-the-blank data collection tools that convert cell entries into XBRL, unseen by the end user. This requires a mind shift from a report-based approach to a data-based approach to receiving feedback on TARP projects. It also requires government agencies to centralize their data requests rather than have each agency develop its own paper-based reports.
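To make the "cell entries into XBRL, unseen by the end user" idea concrete, here is a bare-bones Python sketch that wraps form-style entries in a minimal XBRL instance. The agency namespace, concept names, entity identifier, and dates are hypothetical, and a real filing would also need schema references, units, and validation against the governing taxonomy.

# Illustrative only: wrap form entries in a minimal XBRL instance.
# The "demo" namespace and concept names are hypothetical.
import xml.etree.ElementTree as ET

XBRLI = "http://www.xbrl.org/2003/instance"
DEMO = "http://example.gov/reporting/demo"   # placeholder agency taxonomy

def form_to_xbrl(entries, context_id="C1"):
    ET.register_namespace("xbrli", XBRLI)
    ET.register_namespace("demo", DEMO)
    root = ET.Element(f"{{{XBRLI}}}xbrl")

    # One reporting context (entity and period) shared by every fact.
    ctx = ET.SubElement(root, f"{{{XBRLI}}}context", id=context_id)
    entity = ET.SubElement(ctx, f"{{{XBRLI}}}entity")
    ET.SubElement(entity, f"{{{XBRLI}}}identifier",
                  scheme="http://example.gov/id").text = "0000000000"
    period = ET.SubElement(ctx, f"{{{XBRLI}}}period")
    ET.SubElement(period, f"{{{XBRLI}}}instant").text = "2009-06-30"

    # Each form field becomes one tagged fact.
    for concept, value in entries.items():
        fact = ET.SubElement(root, f"{{{DEMO}}}{concept}",
                             contextRef=context_id)
        fact.text = str(value)
    return ET.tostring(root, encoding="unicode")

print(form_to_xbrl({"FundsObligated": "1500000", "JobsRetained": "42"}))

The point is simply that the form, not the filer, carries the mapping from fields to taxonomy concepts.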
The key, again, is to formulate the taxonomy, collect the data, and then develop reports from a common data repository. To see how this approach can work, take a look at the Dutch Taxonomy Project or the Bank of Japan project detailed here: http://www.xbrl.org/CaseStudies/BoJ_XBRL_06.pdf
JustSystems and Mark Logic Corporation announced a technology partnership that will enable enterprise organizations to streamline data aggregation, content creation and publishing of financial reports using XBRL. Driven by the XBRL compliance requirements set forth by the SEC and by the need to eliminate accounting errors and fraud, JustSystems and Mark Logic will offer customers a technology integration that combines JustSystems xfy XBRL Report and MarkLogic Server.

This integration allows enterprise organizations to push complex financial XBRL content from MarkLogic Server to content applications built on the JustSystems xfy platform at any stage of a business workflow. Using this functionality, organizations can also analyze and create detailed financial reports from XBRL content and add XBRL-aware document management functionality to their existing MarkLogic Server deployments. JustSystems xfy XBRL Report lets users review and analyze XBRL data and share their results with others. To that end, xfy consumes XBRL, creates content that uses the XBRL data, and publishes it as web-ready HTML or PDF. Other users can view the published reports without any special software, just a standard web browser. http://www.justsystems.com/, http://www.marklogic.com
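None of the following is xfy or MarkLogic code; it is only a minimal Python sketch of the "consume XBRL, publish web-ready output" step described above, reading simple facts from an XBRL instance and emitting a plain HTML table that any browser can display. The file names are placeholders, and the fact-selection rule covers only simple, non-tuple facts.

# Illustrative only: render simple XBRL facts as a plain HTML table.
# "filing.xml" and "report.html" are placeholder file names.
import html
from lxml import etree

XBRLI = "http://www.xbrl.org/2003/instance"

def xbrl_to_html(instance_path, html_path):
    rows = []
    for elem in etree.parse(instance_path).getroot().iter():
        if elem.get("contextRef") is None:
            continue
        q = etree.QName(elem)
        if q.namespace == XBRLI:
            continue
        rows.append((q.localname, elem.get("contextRef"),
                     (elem.text or "").strip()))

    body = "\n".join(
        "<tr><td>%s</td><td>%s</td><td>%s</td></tr>"
        % tuple(html.escape(v) for v in row)
        for row in rows)
    with open(html_path, "w", encoding="utf-8") as out:
        out.write("<html><body><table border='1'>\n"
                  "<tr><th>Concept</th><th>Context</th><th>Value</th></tr>\n"
                  + body + "\n</table></body></html>")

xbrl_to_html("filing.xml", "report.html")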