Curated for content, computing, and digital experience professionals

Category: Web technologies & information standards

Here we include topics related to information exchange standards, markup languages, supporting technologies, and industry applications.

Reactivity Announces Enterprise RSS Offering; Partners with SimpleFeed

Reactivity, Inc. announced that it has entered the enterprise RSS market: its Reactivity Gateways make XML-based syndication practical for sensitive, secure, and private content. Reactivity Gateways provide the security and other features enterprises need to take RSS beyond public news, marketing, and product announcements, while ensuring that RSS use complies with the privacy and data protection requirements of government and corporate regulations. Reactivity Gateways can be used to authenticate access, secure transport, encrypt single and aggregated RSS feeds, and transform RSS data to and from other XML formats. Advanced tracking, alerting, reporting, and XML message manipulation capabilities, made possible by Reactivity’s Advanced Messaging Architecture, simplify the compliant use of RSS while enabling enterprises to respond proactively to non-compliant RSS situations and improper identity usage. And Reactivity’s any-to-any interoperability and mediation with other XML and Web services formats, standards, transports, and data allow integration of RSS with existing back-end systems. Reactivity also supports Atom 1.0. In addition, Reactivity announced that SimpleFeed has partnered with Reactivity to deliver a Secure RSS service. http://www.reactivity.com

XBRL and The Truth

Can tags lie? Of course they can. But this is usually not a problem because incorrect or misleading tagging typically causes trouble for the very same people who are doing the tagging. This gives them an incentive to get the tags right. And, if the tags aren’t right, there is an incentive to fix them.

Consider an XML-based publishing application. I want to get the tags right so that the presentation comes out right.  Or, in a syndication application, I want my tags to be semantically, not just syntactically correct, because I want someone else to use and link back to my information. Even in an XML-based commercial transaction, where there might in fact be more incentive for me to have the tags tell lies — increasing the quantity of goods shipped, for example — the external controls already built into the transaction (counting the quantity of goods received, for example) create an incentive to ensure that the tags tell the truth, reducing overall processing costs and ensuring repeat business.
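The point that syntax and semantics come apart can be made concrete with a tiny Python sketch. The purchase-order markup, element names, and quantities below are invented for illustration: the document is perfectly well-formed whether or not the number inside the tag is true.

```python
import xml.etree.ElementTree as ET

# Hypothetical purchase-order fragment. A parser cannot tell that perhaps
# only 80 units actually shipped; the tag is syntactically valid either way.
order = """
<order id="PO-1001">
  <item sku="WIDGET-9">
    <quantityShipped>100</quantityShipped>
    <unitPrice currency="USD">4.50</unitPrice>
  </item>
</order>
"""

root = ET.fromstring(order)
qty = int(root.find("./item/quantityShipped").text)
print(qty)  # 100, regardless of what was actually shipped
```

This is why the external control (counting the goods received) carries the semantic burden that parsing alone never can.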

All of this changes when we use XBRL to communicate financial information to analysts and investors. The incentives to misrepresent information or, in some cases, to hide it altogether, are substantial. This makes XBRL different from many other XML applications and requires a different approach to validation. This is not just a detail. The shift from intrinsic incentives that help get the tagging right to a need for external controls changes the way XBRL is used. It also adds to the list of capabilities that must be in place to build an XBRL market.

I want to dwell on this last point for a minute, since I am about to launch into a few paragraphs about accounting, attestation standards, and so on. The accountants who are reading this piece already know this stuff. The real value here is for the non-accountants who are trying to make decisions about the XBRL market.  If you are an XBRL vendor, you need to know what has to be in place for the market to grow. If you are thinking of using XBRL, you, too, need to understand how the pieces and requirements fit together.

The accounting profession’s interest in XBRL arises from a desire to focus more on high-value work. As Charles Hoffman, CPA and “the father of XBRL,” explains it, the interest in XBRL has to do with reducing “friction” in the financial reporting and auditing process. In an interview in this month’s Journal of Accountancy, Hoffman speaks of the days before spreadsheets, when the high point in an audit was when the rows and columns of the lead schedules added up and cross-checked. Electronic spreadsheets and auditing tools changed the focus from addition to analysis, freeing the auditor to focus on making judgments and rendering an opinion. But Hoffman notes that there is still a great deal of friction left in the process, primarily related to the way that financial data is still communicated as unstructured text in tables and footnotes. Hoffman built the first XBRL prototypes in the late 1990s with the goal of transforming these “unstructured clusters of text into structured data that computers can process to facilitate their re-use.”

Along with this desire to move from text to structured data, the accounting profession understands that taking the human readers and translators out of the data conversion process and moving to direct publication of machine-readable data opens new opportunities to misrepresent the data. Detecting and preventing financial misrepresentation is at the heart of auditing, so it should come as no surprise that accountants are at the leading edge of thinking about how we can know whether or not we can trust what is in an XBRL document. Here are references to a few recent articles, papers, and presentations on the topic that you might find useful:

  • “XBRL and Financial Information Assurance Services” by Stephanie Farewell and Robert Pinsker — The CPA Journal Online, May 2005.

This brief article is written for CPAs to help them understand the market opportunity for XBRL assurance services. The authors argue that assurance service opportunities exist not only among companies that are sending information out, but also within companies and institutions that are taking information in. They note, for example, that

“Institutional investors typically can correctly analyze company financial information. Providing institutional investors with XBRL-tagged financial information allows them to spend more time on data analysis instead of data reentry. Companies currently providing XBRL-tagged instance documents on their websites are doing so without assurance that the information had been attested to by a trusted, independent party for compliance with appropriate technical specifications.”

This line of reasoning is of interest to accounting professionals for obvious reasons. But it should also be of interest to others in the XBRL marketplace–vendors and users of XBRL–because it suggests how assurance services might emerge in a number of forms, for a number of purposes.

  • PCAOB Staff Questions and Answers: “Attest Engagements Regarding XBRL Financial Information Furnished Under the XBRL Voluntary Financial Reporting Program on the Edgar System” — May 25, 2005

The title of this document shows that the Public Company Accounting Oversight Board is not putting marketing ahead of substance. However, despite the very narrow focus claimed in the title, the document is much more generally useful than you might expect. It tells us something about what the PCAOB thinks is important in making judgments about XBRL financial reports.

You do not have to be an accountant to make good use of this Staff Q&A document. However, it will be helpful to know a couple of terms and definitions. It helps to know, for example, that an “Attest Engagement” is similar to an “audit” in that it is a professional engagement intended to provide assurance about a document or about some set of assertions. Audits provide assurance about historical financial statements. Attestations are a kind of superset of audits, and cover a much broader range of assurance services, including assurance that a particular set of XBRL tags accurately reflects the information in an EDGAR filing.

Attestation engagements, like audits, are governed by standards of practice.  For attest engagements, the governing standard is “AT Section 101.”  The requirements specified in the standard are very broad.  For example, “The engagement shall be performed by a practitioner having adequate knowledge of the subject matter” and “The practitioner shall perform the engagement only if he or she has reason to believe that the subject matter is capable of evaluation against criteria that are suitable and available to users.”  What the PCAOB Staff Q&A does is take these very general statements and apply them to the particulars of an engagement dealing with XBRL submitted as part of the SEC’s voluntary XBRL filing program.

So … what do they say about providing assurance about XBRL financial reports?  Apart from the particulars of the SEC’s voluntary program (for example, the inclusion of required disclaimers), there are four things about this Staff Q&A that stand out as being particularly important:

  1. The XBRL data is tested to see that it agrees with an official, reference version of the EDGAR filing.
  2. The “XBRL-Related Documents” (instance documents, taxonomies, extensions, and other XBRL documents included in the filing) are tested to see that they are syntactically valid and have all the required elements.
  3. Semantics matter — tags must be used in appropriate ways, relative to content.
  4. Extensions are a potential headache.
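The second of these tests, checking that the XBRL-Related Documents are well-formed and contain required elements, can be sketched in Python. This is a minimal illustration only: the namespace is the real XBRL 2.1 instance namespace, but the required-element list is a toy stand-in, and a real engagement would validate against the full XBRL schema and the applicable taxonomies.

```python
import xml.etree.ElementTree as ET

XBRLI = "http://www.xbrl.org/2003/instance"

# A skeletal (and hypothetical) instance document for illustration.
instance = """
<xbrl xmlns="http://www.xbrl.org/2003/instance">
  <context id="FY2004">
    <entity><identifier scheme="http://www.sec.gov/CIK">0000012345</identifier></entity>
    <period><instant>2004-12-31</instant></period>
  </context>
  <unit id="USD"><measure>iso4217:USD</measure></unit>
</xbrl>
"""

def check_instance(doc):
    """Return a list of problems: parse failures or missing required parts."""
    problems = []
    try:
        root = ET.fromstring(doc)
    except ET.ParseError as e:
        return [f"not well-formed: {e}"]
    # Toy requirement list; a real check follows the XBRL 2.1 schema.
    for required in ("context", "unit"):
        if root.find(f"{{{XBRLI}}}{required}") is None:
            problems.append(f"missing required element: {required}")
    return problems

print(check_instance(instance))  # prints [] when the instance passes
```

Note that a check like this addresses only the second point; the third and fourth points, semantics and extensions, are exactly the parts that cannot be reduced to mechanical tests.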

The first point reflects the fact that XBRL is young and we are in a transition phase. Something else, the official EDGAR filing, is the audited document, presumed to be correct. The goal, at the moment, is to see that the XBRL says the same things as the reference document. At some point, if and when we are ready to actually audit the XBRL, rather than audit something else and then attest that the XBRL is consistent with it, we can collapse these two operations into a single engagement. (See an earlier posting for some thoughts about this audit process.)

The third point is especially interesting and important. How you tag something really matters. By being a little loose with the use of tags, a company might, for example, move assets from long-term to current, and improve its apparent current ratio and level of working capital. The accountants therefore need to do much more than make sure that the document parses; they also need to assure users that it expresses the truth.
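A small worked example, with invented figures, shows why that reclassification matters. The current ratio is simply current assets divided by current liabilities, so moving assets between categories changes the ratio even though nothing real has changed.

```python
# Invented balance-sheet figures for illustration only.
current_assets = 500.0
long_term_assets = 1500.0
current_liabilities = 400.0

# Honest classification.
honest_ratio = current_assets / current_liabilities  # 500 / 400 = 1.25

# Tag $200 of long-term assets as current, and the ratio improves
# with no change in the underlying economics.
shifted_ratio = (current_assets + 200.0) / current_liabilities  # 700 / 400 = 1.75

print(honest_ratio, shifted_ratio)
```

Both documents would parse identically; only a semantic review of how the tags were applied catches the difference.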

Which brings us to the fourth point. In financial matters, truth is partly a matter of convention. When a company extends a taxonomy, making up new tags, who is to say what is true?  This relates to the third attestation standard: “The practitioner shall perform the engagement only if he or she has reason to believe that the subject matter is capable of evaluation against criteria that are suitable and available to users.”  Boiled down, the third standard means that we need to agree on just what “truth” is.  Q&A number 5 in the staff document leaves the question of whether use of an extension taxonomy will be permitted open to judgment by the auditor.

  • “Assurance & XBRL: Status Update” — Presentation made by Dan Roberts of Grant Thornton at the XBRL in Government and Industry Conference, Washington, D.C., 19 July 2005

This presentation provides an update on activities undertaken by the AICPA’s Special Working Group on Assurance and XBRL. This group did much of the original work that was picked up by the PCAOB Staff Q&A on XBRL Assurance. The group is working through a “mock audit” and intends to use this experience to provide accountants with sample audit plans, workpapers, and examples of documentation that might be provided by the client. One of Roberts’ central arguments is that XBRL distribution of financial reports on the Internet will displace paper reports over time, and that there will be a presumption that this information is accurate, whether it is or not. His view is that the accounting profession needs to get ahead of this wave, creating a way for people to readily determine whether information is coming from a trusted source.

So … summing all of this up:

  • The accounting profession has made a major commitment to the success of XBRL. The reason for this is that XBRL can remove much of the low-value, error prone effort from audits and financial reporting work while opening up new opportunities for higher value services.
  • Providing assurance services directly related to XBRL is a major part of this commitment.
  • These assurance services will include tests of both form and content–accountants will make judgments about whether the use of particular tags is appropriate in a given context.
  • Taxonomy extensions will present special problems.
  • Detailed guidance for practitioners is on the way.

All of this work is important because, without it, XBRL will not succeed as a way to distribute financial reports to third parties. Financial information is simply not useful unless there is reason to believe that it is correct. Unlike so many other XML languages, where the motivation to “get it right” is intrinsic to the use of the language, there are strong incentives to misuse XBRL to hide bad news, overemphasize good news, or distort analysis. Those who want to see XBRL succeed will need to counteract these incentives by providing external controls and trust mechanisms. Fortunately, the accounting profession is signing up for the job.

Note that this need for external controls, in lieu of intrinsic incentives to get the tags right, is unique to the financial reporting side of the XBRL market. When XBRL is used internally, for management accounting, decision support, and other internal reporting purposes, there will be intrinsic rewards to encourage good use. This is not a black and white difference–there is, for example, a need for internal controls in such applications–but the difference is still substantive and important. For internal applications, XBRL can be broadly useful without having to build an external trust infrastructure.

One last thought: There is likely to be a chicken and egg problem here.  Many accountants will not make the investment in the training required to provide XBRL assurance services until there is some demand for those services. On the other hand, more use of XBRL for external reporting depends on having the assurances services in place. This is not an unusual problem for early markets — twenty years ago, for example, I was trying to figure out how to get reference publishers to invest in CD-ROM products when few libraries had CD-ROM players and how to get libraries to invest in CD-ROM players when there were no publications to use on them. But it is, nonetheless, a problem that needs to be solved.  The generally anticipated solution is a government requirement to use XBRL, but, as I have argued in other writings, that is not a sure bet.  Another possibility is that XBRL will find early adoption for internal use, where companies do not need this external infrastructure, and the external use will follow over time.

We’ll see …

Aligning Expectations With XBRL’s Maturity

A couple of days ago I wrote about an instance of XBRL’s leaping over the market chasm to see use in a no-nonsense, pragmatic, “early majority” application. This isn’t just idle marketing chatter. The question of where XBRL stands along the technology adoption curve is one that any organization or company thinking about using XBRL needs to be asking. Just how mature is this technology? How big a bet can you put on it? And if you do make a bet, what steps do you need to take to hedge it?

Just in case some readers are not familiar with Geoff Moore’s work on how technologies get adopted, here is a sketch of what we are talking about.

The gist of Moore’s argument is that the movement from Early Adopters to the mainstream market is discontinuous. The things that attract Early Adopters to a technology are not the same things that matter to the primary market. This isn’t just a small problem. Many promising technologies never make it across the chasm. They get hyped in the technology press and by the people who are excited because the technologies are elegant or innovative–but many of these technologies fall into the chasm and never get to the point where they make a difference in the daily operations of most companies.

Working on both sides of the chasm to effect a crossing is a tricky problem. You need Early Adopters to get a new technology started. Early Adopters are the organizations, and the people within them, who see a chance to use a new technology to build an entirely new market or to gain sudden, overwhelming competitive advantage. They are visionaries. They invest in the new technology when no one else will because they hope to change the rules of the game. Early Adopters are willing to take big risks in order to get a shot at big returns. Without Early Adopters, new technologies would never get out of the lab.

Early Majority buyers have a very different view of risk. They are managers, not visionaries, and are interested in new technologies when the technologies are the only way to reach key objectives. Early Majority buyers are not wholly risk averse, but they do need the reassurance that comes from seeing a variety of vendors offering a technology. They have no interest in being the first and only company out on a technology frontier. They also need to see a clear, near term business case before they invest in a new technology. As I noted in my article a couple of days ago, the adoption of XBRL for banking call reports is a good example of a pragmatic, early majority application. Risks can be controlled, there will be a substantial, certain payoff from Call Report Modernization, and XBRL is clearly the best way to do the job.

Not surprisingly, movement across the chasm does not happen all at once. Even as the first, highly focused applications move across the chasm, there continues to be a lot of visionary Early Adopter activity over on the left side. That is certainly true of XBRL today.

This business of being on both sides of the chasm can work out fine so long as everyone knows what side they are operating on. But you run into trouble when a buyer on the right side of the chasm, from the Early Majority, ends up with a solution that belongs somewhere over on the left side, in the world of the Early Adopters.

Last week’s XBRL conference in Washington presented an example of just such a straddling of the chasm. Barry Ward, Vice President and Head of Financial Reporting at ING Insurance Americas, U.S. Financial Services, described an XBRL initiative–a “proof of concept” effort–that his company undertook over the past year. ING sought to streamline internal reporting, eliminate rekeying of data, improve audit trails, enhance understanding of financial results within the company, and automate the creation of the many state reports that the company must produce. (Each state regulates the insurance business in its own way, with its own reporting requirements.) Ward noted that mere external reporting was NOT of interest as a stand-alone effort at ING since this would not improve efficiency, but would instead actually add another step to the reporting process. Internal use of XBRL, on the other hand, looked as if it could save money.

The U.S. unit of ING was upgrading its accounting software, and the vendor (left unnamed by Mr. Ward–later identified as a major ERP vendor) claimed to offer XBRL support as part of the general ledger program. This appeared to present an opportunity for ING to see if it could use XBRL to reduce financial reporting costs while improving the quality and utility of the reports.

Unfortunately, the project did not go well. One problem was that the general ledger vendor had not fully implemented XBRL, but had, instead, just hard-coded a set of XBRL tags into the system. So, there was no easy way for ING to update the system to make use of more recent work on the XBRL-GL taxonomy or to extend it to address the unique problems faced by the insurance industry.

On top of this problem, there was no XBRL presentation component built into the system, undercutting the whole purpose of the effort, which was to enable more flexible reporting. And then there were problems with things such as getting the system to deal with data from more than a single year.

Ward also noted that ING discovered that its financial reporting structure was more complicated than what the vendors’ XBRL system could deal with. This was partly, once again, a limitation of what Ward characterized as “first generation” software tools, but was also an indication that ING needed to spend more time on up front analysis and design than had been expected.

In Mr. Ward’s words, the end result of all this was that the XBRL proof of concept project was “put on hold for further evaluation of the approach.”

This was a painful story, though an interesting and important one. Just as interesting was the response from the XBRL experts in the audience. They were particularly critical of ING’s reliance on its general ledger software vendor for the solution. In different ways, the XBRL experts said that there ARE firms that could help ING address the problems it wants to solve. What ING should have done, in their view, was work with the much smaller companies specializing in XBRL, rather than with its mainstream accounting system vendor.

This story and reaction took me back 20 years, to when some early SGML implementations were hard coded to a particular DTD and when the only really capable SGML vendors were the little, specialist companies. Then, as now, there were stories of failed projects, first generation software, and too much complexity.

Looking back on those SGML stories, it is clear to me now that the problem was NOT one of immature software or poor planning, but of a mismatch between buyer expectations and the state of that early market. On the vendor side (I was among them), we were releasing new products every six months–sometimes more often than that–and moving ahead quickly. Most of our resources were going into development, so there was no 24/7, worldwide customer support. Customers with problems got on the phone with a developer and we released new code that fixed the problem, and probably added a few new features while we were at it. We were the “go to” companies if you wanted the most flexible, cutting edge SGML solutions–much like the companies that the XBRL experts were recommending to ING.  We owned the market on the left side of the chasm.

But some of the companies and government agencies wanting to use SGML came to the market with very different expectations. They were accustomed to software products with worldwide 24/7 support, that were thoroughly tested and stable before commercial release. They were accustomed to systems supported by service organizations, where training was available, where new releases came regularly, with predictable results.  They were trying to buy the kinds of systems that you find only on the right side of the chasm.

ING’s hopes and plans, matched against the response from the XBRL audience last week, show this same kind of straddle over the chasm. ING is a worldwide company, running 24/7, wanting a system to handle its most critical information. Why should it be surprising that ING sought to work with its principal software vendor, who offers worldwide support and professional services? How could ING reconcile the size and critical importance of its accounting problems with the capabilities of a 25 person XBRL firm?

ING’s problem — and XBRL’s problem — is that ING was wanting to build an Early Majority application with what is still primarily an Early Adopter’s technology.

Which is why the FDIC’s Call Report Modernization project is such an important example. It is an early bridge across the chasm that should be able to work because the problem can be constrained and because the need to change processes and software among users can be minimized. It is the kind of XBRL project that can be put to real use right now, generating real returns.

When you are looking at your own XBRL project, ask yourself whether it is more like the Call Report Modernization project, nicely constrained, or more like the broader, more ambitious kind of project that ING hoped to undertake. If it is the latter, be careful. You should probably think in terms of a pilot project, something where the focus is more on learning than on operational outcomes.

One last thought — technology matures rapidly, particularly when it has as much energy and investment behind it as XBRL does. The constraints on this market that apply today could be very different in six months.  This is precisely why it is important to pay such careful attention to what is happening on either side of the chasm and to what is moving across it.

XBRL and the Chasm

On Tuesday of last week XBRL-US sponsored a set of presentations in Washington, D.C. focused on “XBRL in Government and Industry.” The conference was hosted by the Federal Deposit Insurance Corporation (FDIC), which was appropriate since it was the FDIC that was the source of some of the most significant XBRL activity announced at the conference.

Here is the news: By October 1 of this year, the more than 8300 banks submitting Call Report data to the FDIC, the Federal Reserve System, and the Office of the Comptroller of the Currency will be required to do so using XBRL. Because most banks submit these reports through use of software and services supplied by a handful of vendors, this requirement will not bring about changes in the internal operations of most banks. The initiative does, however, represent a significant application of XBRL, and opens the door to greater reuse of data and simplification of workflows for other regulatory reporting requirements. It is also a good example of the kinds of broad improvement in financial information communication and processing that XBRL enables.

Under the current system, Call Reports are submitted in a number of formats, including physical transfer of magnetic media. The agencies currently spend a substantial amount of time converting these data. Data checking and validation is done only at the tail end of the process, within the government agencies, which means that when errors or omissions turn up, they must be communicated back to the banks so that corrections can be made. In the current operating model, the major component of the regulator’s investment in collecting and analyzing these data is spent in the conversion and validation process, leaving much less time for actual analysis.

The new system, implemented as part of an interagency consortium called the Federal Financial Institutions Examination Council (FFIEC), will allow the three agencies to share a single secure information repository. Each agency will use XBRL’s ability to provide a unique semantic identity for each incoming data element to enable the agency to extract the information it needs from the repository, in the form that it needs it.
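What that element-level extraction looks like can be sketched in a few lines of Python. The namespace and element names below are invented stand-ins for the real Call Report taxonomy; the point is that each fact carries its own semantic identity (element name plus context), so an agency can pull exactly the facts it needs from the shared repository.

```python
import xml.etree.ElementTree as ET

NS = "http://example.gov/call-report"  # assumed, illustrative namespace

# Hypothetical Call Report fragment with two tagged facts.
instance = """
<report xmlns="http://example.gov/call-report">
  <TotalDeposits contextRef="Q2-2005" unitRef="USD">900000</TotalDeposits>
  <TotalAssets contextRef="Q2-2005" unitRef="USD">1200000</TotalAssets>
</report>
"""

root = ET.fromstring(instance)

# An agency interested only in deposits extracts just that element, by name,
# keyed by reporting context.
deposits = {e.get("contextRef"): int(e.text)
            for e in root.iter(f"{{{NS}}}TotalDeposits")}
print(deposits)
```

No per-agency file format, and no human in the loop deciding what each number means.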

Just as important, the use of XBRL coupled with Internet-based report submission allows the agencies to specify validation suites that can be run by the banks producing the information. The validation semantics are all expressed in XBRL, and so can be communicated in a reliable, standard way to the vendors, ensuring uniform performance across vendors and allowing the agencies to extend or improve validation without having to rely on the vendors. This is the kind of thing that XBRL excels at, and it plays an important role here. Moving validation closer to the source of the information will allow banks to catch their own errors, decreasing the time investment made by all parties in the submission process. The regulatory agencies will be able to spend less time on conversion, validation, and clean up and more time on analysis, all while reducing the cost of government operations.
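The shape of filer-side validation can be sketched as follows. The fact values and edit-check rules here are invented stand-ins: in the actual program, the agencies publish the validation semantics in XBRL itself, so that every vendor runs the same checks. The pattern, though, is the same: run the rules before submission and fix failures at the source.

```python
# Hypothetical reported facts (invented values, invented element names).
facts = {
    "TotalDeposits": 900.0,
    "DomesticDeposits": 700.0,
    "ForeignDeposits": 200.0,
    "TotalAssets": 1200.0,
}

# Each rule pairs a description with a predicate over the fact dictionary.
# These are hand-written stand-ins for agency-published edit checks.
rules = [
    ("deposits sum to total",
     lambda f: abs(f["TotalDeposits"]
                   - (f["DomesticDeposits"] + f["ForeignDeposits"])) < 0.005),
    ("assets cover deposits",
     lambda f: f["TotalAssets"] >= f["TotalDeposits"]),
]

failures = [desc for desc, check in rules if not check(facts)]
print(failures)  # an empty list means the filing passes before submission
```

Because the checks run at the bank, errors never reach the agencies in the first place, which is where the conversion-and-cleanup savings come from.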

This matter of the cost of data collection was a consistent theme across the presentations by government agencies at the conference. In general, financial and statistical information is currently heterogeneous in format, often with the added problem of having low information value. The numbers are all present in a report, but before the advent of XBRL there has been no easy, consistent, reliable way to enable a machine to know what the numbers represent. As one example of the cost of the current way of doing business, Mark Carney of the Commodity Futures Trading Commission spoke in terms of spending over 80 cents of every dollar invested in data collection on conversion. This is precisely the kind of problem that XBRL is designed to solve.

For more information on the FFIEC’s Call Report Modernization program and about the creation and operation of this Central Data Repository, see www.ffiec.gov/find/.

In my view, there are a number of interesting facts and associated implications about this Call Report initiative. The first is that this is a mandatory initiative. XBRL is not an option, it is a requirement. XBRL has reached a level of maturity and universality sufficient to allow a set of government agencies to make it the required standard for an important application.

A related observation is that this maturity and universality has developed outside the U.S., and is only now coming back into this country of XBRL’s origin. Mike Bartell, CIO for the FDIC, made the point this way: “If we are truly serious about disclosure and transparency, we need to move aggressively toward the adoption of XBRL. The U.S. has not been a leader in this area. It is time to step up the pace. We are buried in data, and we are not making better decisions because of that.”

It is also important to note that this early, significant development in the marketplace for XBRL tools and services is the kind of “early majority,” “pragmatist” application that Geoffrey Moore described so well in his classic marketing text, Crossing the Chasm. The pragmatists are the first technology adopters on the right side of the chasm, after crossing it. They seek narrowly focused solutions that solve specific, pressing problems, adopting a new technology when it is the only thing that will do the job. The payoff must be clear and substantial–no pie in the sky or grand visions here–and the costs must be easily contained. Pragmatists adopt new technology because it will be a sure win for a particular problem, not because it will change the world.

In this case, where the actual XBRL will be produced by a handful of vendors, it is easy to see how to contain the costs associated with XBRL production. Most banks won’t know and won’t care that the call reports are being submitted in XBRL. Further, the payoff, in terms of reduced conversion and validation costs, is clear and compelling. The fact that XBRL is an open, international standard adds to its pragmatist appeal. It is exactly the right tool for this niche application.

Getting an application across the chasm–away from the early adopters and technology aficionados and over on the other side, where the mainstream applications live–is a big step. This is real evidence of the utility and potential impact of XBRL. It is also important to remember that such early leaps across the chasm are always, of necessity, narrow, tailored, niche applications. It is the narrow applications that get across first. This is an important one. What comes next?

XML and the Rich Client

Writing for Power Builder Developer’s Journal, Coach Wei has an excellent article on how XML can play a key role in beefing up client applications in J2EE environments.

This question of client functionality continues to be key. Content applications, especially, often demand rich feature sets for client interfaces. The question is how to bring enough functionality out to the client without significant investment in cost and resources. As organizations bring more business process out to the browser–for larger and larger audiences–this question continues to pose practical challenges.

Astoria Software Introduces Astoria Version 4.4

Astoria Software announced Astoria XML Content Management Platform Version 4.4, adding new features for the publishing of large, complex documents that are changed often, such as technical service documents for medical equipment and flight operations manuals for airplanes. Among the new capabilities of Astoria Version 4.4 are expanded wide area network (WAN) capabilities for more robust support of networked and remote users, greater scalability to accommodate the largest XML documents, and expanded support for flight operations and airline maintenance applications in the commercial aerospace market. A new Table of Contents feature has been added that allows for large XML documents and content to be initially displayed as a table of contents, with links to full content. Large documents with numerous graphics or image files can now be managed more easily with a new feature that tracks changes to graphic files regardless of changes made to written content. New reporting capabilities validate cross-references across documents, and XML import features for WAN users can automatically include importing of reference graphics. There is a new SOAP-based dialog for reviewing and navigating Astoria Annotations within the Astoria Web Client and the WAN Bridge for Epic Editor, and new support for Blast Radius XMetaL 4.5 ActiveX for XML editing complementing support for XMetaL 4.5 Author, Epic Editor 5.1 and Adobe FrameMaker 7.1. Book Level Administrator, a key component of Astoria for Aerospace, has been enhanced to simplify book updates by using Astoria Workbench, a new user interface based on Eclipse, an open source integrated development environment (IDE). The Astoria Content Management Platform 4.4 now supports the Apache Web Server and Citrix MetaFrame. Astoria Version 4.4 is available immediately from Astoria Software and its Services Partners. http://www.astoriasoftware.com


© 2024 The Gilbane Advisor
