Curated for content, computing, and digital experience professionals

Author: Bill Zoellick

XBRL and The Truth

Can tags lie? Of course they can. But this is usually not a problem because incorrect or misleading tagging typically causes trouble for the very same people who are doing the tagging. This gives them an incentive to get the tags right. And, if the tags aren’t right, there is an incentive to fix them.

Consider an XML-based publishing application. I want to get the tags right so that the presentation comes out right.  Or, in a syndication application, I want my tags to be semantically, not just syntactically correct, because I want someone else to use and link back to my information. Even in an XML-based commercial transaction, where there might in fact be more incentive for me to have the tags tell lies — increasing the quantity of goods shipped, for example — the external controls already built into the transaction (counting the quantity of goods received, for example) create an incentive to ensure that the tags tell the truth, reducing overall processing costs and ensuring repeat business.

All of this changes when we use XBRL to communicate financial information to analysts and investors. The incentives to misrepresent information or, in some cases, to hide it altogether, are substantial. This makes XBRL different from many other XML applications and requires a different approach to validation. This is not just a detail. The shift from intrinsic incentives that help get the tagging right to a need for external controls changes the way XBRL is used. It also adds to the list of capabilities that must be in place to build an XBRL market.

I want to dwell on this last point for a minute, since I am about to launch into a few paragraphs about accounting, attestation standards, and so on. The accountants who are reading this piece already know this stuff. The real value here is for the non-accountants who are trying to make decisions about the XBRL market.  If you are an XBRL vendor, you need to know what has to be in place for the market to grow. If you are thinking of using XBRL, you, too, need to understand how the pieces and requirements fit together.

The accounting profession’s interest in XBRL arises from an interest in focusing more on high-value work. As Charles Hoffman, CPA and “the father of XBRL,” explains it, the interest in XBRL has to do with reducing “friction” in the financial reporting and auditing process. In an interview in this month’s Journal of Accountancy, Hoffman speaks of the days before spreadsheets, when the high point in an audit was when the rows and columns of the lead schedules added up and cross-checked. Electronic spreadsheets and auditing tools changed the focus from addition to analysis, freeing the auditor to focus on making judgments and rendering an opinion. But Hoffman notes that there is still a great deal of friction left in the process, primarily related to the way that financial data is still communicated as unstructured text in tables and footnotes. Hoffman built the first XBRL prototypes in the late 1990s with the goal of transforming these “unstructured clusters of text into structured data that computers can process to facilitate their re-use.”

Along with this desire to move from text to structured data, the accounting profession understands that taking the human readers and translators out of the data conversion process and moving to direct publication of machine-readable data opens new opportunities to misrepresent the data. Detecting and preventing financial misrepresentation is at the heart of auditing, so it should come as no surprise that accountants are at the leading edge of thinking about how we can know whether or not we can trust what is in an XBRL document. Here are references to a few recent articles, papers, and presentations on the topic that you might find useful:

  • “XBRL and Financial Information Assurance Services” by Stephanie Farewell and Robert Pinsker — The CPA Journal Online, May 2005.

This brief article is written for CPAs to help them understand the market opportunity for XBRL assurance services. The authors argue that assurance service opportunities exist not only among companies that are sending information out, but also within companies and institutions that are taking information in. They note, for example, that

“Institutional investors typically can correctly analyze company financial information. Providing institutional investors with XBRL-tagged financial information allows them to spend more time on data analysis instead of data reentry. Companies currently providing XBRL-tagged instance documents on their websites are doing so without assurance that the information had been attested to by a trusted, independent party for compliance with appropriate technical specifications.”

This line of reasoning is of interest to accounting professionals for obvious reasons. But it should also be of interest to others in the XBRL marketplace–vendors and users of XBRL–because it suggests how assurance services might emerge in a number of forms, for a number of purposes.

  • PCAOB Staff Questions and Answers: “Attest Engagements Regarding XBRL Financial Information Furnished Under the XBRL Voluntary Financial Reporting Program on the Edgar System” — May 25, 2005

The title of this document shows that the Public Company Accounting Oversight Board is not attending to marketing in place of substance. However, despite the very narrow focus claimed in the title, the document is much more generally useful than you might expect. It tells us something about what the PCAOB thinks is important in making judgments about XBRL financial reports.

You do not have to be an accountant to make good use of this Staff Q&A document. However, it will be helpful to know a couple of terms and definitions. It helps to know, for example, that an “Attest Engagement” is similar to an “audit” in that it is a professional engagement intended to provide assurance about a document or about some set of assertions. Audits provide assurance about historical financial statements. Attestations are a kind of superset of audits, and cover a much broader range of assurance services, including assurance that a particular set of XBRL tags accurately reflects the information in an EDGAR filing.

Attestation engagements, like audits, are governed by standards of practice.  For attest engagements, the governing standard is “AT Section 101.”  The requirements specified in the standard are very broad.  For example, “The engagement shall be performed by a practitioner having adequate knowledge of the subject matter” and “The practitioner shall perform the engagement only if he or she has reason to believe that the subject matter is capable of evaluation against criteria that are suitable and available to users.”  What the PCAOB Staff Q&A does is take these very general statements and apply them to the particulars of an engagement dealing with XBRL submitted as part of the SEC’s voluntary XBRL filing program.

So … what do they say about providing assurance about XBRL financial reports?  Apart from the particulars of the SEC’s voluntary program (for example, the inclusion of required disclaimers), there are four things about this Staff Q&A that stand out as being particularly important:

  1. The XBRL data is tested to see that it agrees with an official, reference version of the EDGAR filing.
  2. The “XBRL-Related Documents” (instance documents, taxonomies, extensions, and other XBRL documents included in the filing) are tested to see that they are syntactically valid and have all the required elements.
  3. Semantics matter — tags must be used in appropriate ways, relative to content.
  4. Extensions are a potential headache.

The first point reflects the fact that XBRL is young and we are in a transition phase. Something else, the official EDGAR filing, is the audited document, presumed to be correct. The goal, at the moment, is to see that the XBRL says the same things as the reference document. At some point, if and when we are ready to actually audit the XBRL–rather than audit something else and then attest that the XBRL is consistent with it–we can collapse these two operations into a single engagement. (See an earlier posting for some thoughts about this audit process.)

The third point is especially interesting and important. How you tag something really matters. By being a little loose with the use of tags, a company might, for example, move assets from long-term to current, and improve its apparent current ratio and level of working capital. The accountants therefore need to do much more than make sure that the document parses; they also need to assure users that it expresses the truth.
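To make that concrete, here is a small sketch with invented figures showing how reclassifying a single item from long-term to current assets changes the apparent current ratio; the numbers are hypothetical and chosen only for illustration:

```python
# Illustrative only: hypothetical figures showing how a loose
# reclassification of one asset changes the apparent current ratio.

def current_ratio(current_assets, current_liabilities):
    """Current ratio = current assets / current liabilities."""
    return current_assets / current_liabilities

current_liabilities = 500_000

# Honest tagging: a 200,000 receivable due in three years stays long-term.
honest = current_ratio(400_000, current_liabilities)

# Loose tagging: the same 200,000 is tagged as a current asset.
loose = current_ratio(400_000 + 200_000, current_liabilities)

print(f"honest: {honest:.2f}")  # 0.80
print(f"loose:  {loose:.2f}")   # 1.20
```

Nothing about the second document fails to parse; only a semantic check, against the underlying facts, can catch the misstatement.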

Which brings us to the fourth point. In financial matters, truth is partly a matter of convention. When a company extends a taxonomy, making up new tags, who is to say what is true?  This relates to the third attestation standard: “The practitioner shall perform the engagement only if he or she has reason to believe that the subject matter is capable of evaluation against criteria that are suitable and available to users.”  Boiled down, the third standard means that we need to agree on just what “truth” is.  Q&A number 5 in the staff document leaves the question of whether use of an extension taxonomy will be permitted open to judgment by the auditor.
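To see why extensions are hard to evaluate, consider this minimal sketch. The namespaces, and the “SynergyAdjustedRevenue” concept, are invented for illustration and are not part of any real taxonomy; the point is that a machine can tell which concepts come from the shared base taxonomy and which a company made up, but it cannot say what the made-up ones mean:

```python
import xml.etree.ElementTree as ET

# Hypothetical namespaces: "us-gaap" stands in for a base taxonomy,
# "acme" for a company-specific extension taxonomy.
BASE_NS = "http://example.com/us-gaap"
EXT_NS = "http://example.com/acme-extension"

instance = f"""
<xbrl xmlns:us-gaap="{BASE_NS}" xmlns:acme="{EXT_NS}">
  <us-gaap:Assets>1000000</us-gaap:Assets>
  <acme:SynergyAdjustedRevenue>750000</acme:SynergyAdjustedRevenue>
</xbrl>
"""

root = ET.fromstring(instance)
sources = {}
for elem in root:
    # ElementTree tags look like "{namespace}name"; split them apart.
    ns, _, name = elem.tag[1:].partition("}")
    sources[name] = "base taxonomy" if ns == BASE_NS else "company extension"

print(sources)
```

A concept from the base taxonomy has an agreed definition to test against; for the extension concept, the auditor has no shared criteria, which is exactly the problem the third attestation standard raises.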

  • “Assurance & XBRL: Status Update” — Presentation made by Dan Roberts of Grant Thornton at the XBRL in Government and Industry Conference, Washington, D.C., 19 July 2005

This presentation provides an update on activities undertaken by the AICPA’s Special Working Group on Assurance and XBRL. This group did much of the original work that was picked up by the PCAOB Staff Q&A on XBRL Assurance. The group is working through a “mock audit” and intends to use this experience to provide accountants with sample audit plans, workpapers, and examples of documentation that might be provided by the client. One of Roberts’ central arguments is that XBRL distribution of financial reports on the Internet will displace paper reports over time, and that there will be a presumption that this information is accurate, whether it is or not. His view is that the accounting profession needs to get ahead of this wave, creating a way for people to readily determine whether information is coming from a trusted source.

So … summing all of this up:

  • The accounting profession has made a major commitment to the success of XBRL. The reason for this is that XBRL can remove much of the low-value, error prone effort from audits and financial reporting work while opening up new opportunities for higher value services.
  • Providing assurance services directly related to XBRL is a major part of this commitment.
  • These assurance services will include tests of both form and content–accountants will make judgments about whether the use of particular tags is appropriate in a given context.
  • Taxonomy extensions will present special problems.
  • Detailed guidance for practitioners is on the way.

All of this work is important because, without it, XBRL will not succeed as a way to distribute financial reports to third parties. Financial information is simply not useful unless there is reason to believe that it is correct. Unlike so many other XML languages, where the motivation to “get it right” is intrinsic to the use of the language, there are strong incentives to misuse XBRL to hide bad news, overemphasize good news, or distort analysis. Those who want to see XBRL succeed will need to counteract these incentives by providing external controls and trust mechanisms. Fortunately, the accounting profession is signing up for the job.

Note that this need for external controls, in lieu of intrinsic incentives to get the tags right, is unique to the financial reporting side of the XBRL market. When XBRL is used internally, for management accounting, decision support, and other internal reporting purposes, there will be intrinsic rewards to encourage good use. This is not a black and white difference–there is, for example, a need for internal controls in such applications–but the difference is still substantive and important. For internal applications, XBRL can be broadly useful without having to build an external trust infrastructure.

One last thought: There is likely to be a chicken and egg problem here. Many accountants will not make the investment in the training required to provide XBRL assurance services until there is some demand for those services. On the other hand, more use of XBRL for external reporting depends on having the assurance services in place. This is not an unusual problem for early markets — twenty years ago, for example, I was trying to figure out how to get reference publishers to invest in CD-ROM products when few libraries had CD-ROM players and how to get libraries to invest in CD-ROM players when there were no publications to use on them. But it is, nonetheless, a problem that needs to be solved. The generally anticipated solution is a government requirement to use XBRL, but, as I have argued in other writings, that is not a sure bet. Another possibility is that XBRL will find early adoption for internal use, where companies do not need this external infrastructure, and the external use will follow over time.

We’ll see …

Aligning Expectations With XBRL’s Maturity

A couple of days ago I wrote about an instance of XBRL’s leaping over the market chasm to see use in a no-nonsense, pragmatic, “early majority” application. This isn’t just idle marketing chatter. The question of where XBRL stands along the technology adoption curve is one that any organization or company thinking about using XBRL needs to be asking. Just how mature is this technology? How big a bet can you put on it? And if you do make a bet, what steps do you need to take to hedge it?

Just in case some readers are not familiar with Geoff Moore’s work in the area of how technologies get adopted, here is a picture of what we are talking about.

The gist of Moore’s argument is that the movement from Early Adopters to the mainstream market is discontinuous. The things that attract Early Adopters to a technology are not the same things that matter to the primary market. This isn’t just a small problem. Many promising technologies never make it across the chasm. They get hyped in the technology press and by the people who are excited because the technologies are elegant or innovative–but many of these technologies fall into the chasm and never get to the point where they make a difference in the daily operations of most companies.

Working on both sides of the chasm to effect a crossing is a tricky problem. You need Early Adopters to get a new technology started. Early Adopters are the organizations, and the people within them, who see a chance to use a new technology to build an entirely new market or to gain sudden, overwhelming competitive advantage. They are visionaries. They invest in the new technology when no one else will because they hope to change the rules of the game. Early Adopters are willing to take big risks in order to get a shot at big returns. Without Early Adopters, new technologies would never get out of the lab.

Early Majority buyers have a very different view of risk. They are managers, not visionaries, and are interested in new technologies when the technologies are the only way to reach key objectives. Early Majority buyers are not wholly risk averse, but they do need the reassurance that comes from seeing a variety of vendors offering a technology. They have no interest in being the first and only company out on a technology frontier. They also need to see a clear, near term business case before they invest in a new technology. As I noted in my article a couple of days ago, the adoption of XBRL for banking call reports is a good example of a pragmatic, early majority application. Risks can be controlled, there will be a substantial, certain payoff from Call Report Modernization, and XBRL is clearly the best way to do the job.

Not surprisingly, movement across the chasm does not happen all at once. Even as the first, highly focused applications move across the chasm, there continues to be a lot of visionary Early Adopter activity over on the left side. That is certainly true of XBRL today.

This business of being on both sides of the chasm can work out fine so long as everyone knows what side they are operating on. But you run into trouble when a buyer on the right side of the chasm, from the Early Majority, ends up with a solution that belongs somewhere over on the left side, in the world of the Early Adopters.

Last week’s XBRL conference in Washington presented an example of just such a straddling of the chasm. Barry Ward, Vice President and Head of Financial Reporting at ING Insurance Americas, U.S. Financial Services, described an XBRL initiative–a “proof of concept” effort–that his company undertook over the past year. ING sought to streamline internal reporting, eliminate rekeying of data, improve audit trails, enhance understanding of financial results within the company, and automate the creation of the many state reports that the company must produce. (Each state regulates the insurance business in its own way, with its own reporting requirements.) Ward noted that mere external reporting was NOT of interest as a stand-alone effort at ING since this would not improve efficiency, but would instead actually add another step to the reporting process. Internal use of XBRL, on the other hand, looked as if it could save money.

The U.S. unit of ING was upgrading its accounting software, and the vendor (left unnamed by Mr. Ward–later identified as a major ERP vendor) claimed to offer XBRL support as part of the general ledger program. This appeared to present an opportunity for ING to see if it could use XBRL to reduce financial reporting costs while improving the quality and utility of the reports.

Unfortunately, the project did not go well. One problem was that the general ledger vendor had not fully implemented XBRL, but had, instead, just hard-coded a set of XBRL tags into the system. So, there was no easy way for ING to update the system to make use of more recent work on the XBRL-GL taxonomy or to extend it to address the unique problems faced by the insurance industry.

On top of this problem, there was no XBRL presentation component built into the system, undercutting the whole purpose of the effort, which was to enable more flexible reporting. And then there were problems with things such as getting the system to deal with data from more than a single year.

Ward also noted that ING discovered that its financial reporting structure was more complicated than what the vendor’s XBRL system could deal with. This was partly, once again, a limitation of what Ward characterized as “first generation” software tools, but was also an indication that ING needed to spend more time on up-front analysis and design than had been expected.

In Mr. Ward’s words, the end result of all this was that the XBRL proof of concept project was “put on hold for further evaluation of the approach.”

This was a painful story, though an interesting and important one. Just as interesting was the response from the XBRL experts in the audience. They were particularly critical of ING’s reliance on its general ledger software vendor for the solution. In different ways, the XBRL experts said that there ARE firms that could help ING address the problems it wants to solve. What ING should have done, in their view, was work with the much smaller companies specializing in XBRL, rather than with its mainstream accounting system vendor.

This story and reaction took me back 20 years, to when some early SGML implementations were hard-coded to a particular DTD and when the only really capable SGML vendors were the little, specialist companies. Then, as now, there were stories of failed projects, first generation software, and too much complexity.

Looking back on those SGML stories, it is clear to me now that the problem was NOT one of immature software or poor planning, but of a mismatch between buyer expectations and the state of that early market. On the vendor side (I was among them), we were releasing new products every six months–sometimes more often than that–and moving ahead quickly. Most of our resources were going into development, so there was no 24/7, worldwide customer support. Customers with problems got on the phone with a developer and we released new code that fixed the problem, and probably added a few new features while we were at it. We were the “go to” companies if you wanted the most flexible, cutting edge SGML solutions–much like the companies that the XBRL experts were recommending to ING.  We owned the market on the left side of the chasm.

But some of the companies and government agencies wanting to use SGML came to the market with very different expectations. They were accustomed to software products with worldwide 24/7 support, that were thoroughly tested and stable before commercial release. They were accustomed to systems supported by service organizations, where training was available, where new releases came regularly, with predictable results.  They were trying to buy the kinds of systems that you find only on the right side of the chasm.

ING’s hopes and plans, matched against the response from the XBRL audience last week, show this same kind of straddle over the chasm. ING is a worldwide company, running 24/7, wanting a system to handle its most critical information. Why should it be surprising that ING sought to work with its principal software vendor, which offers worldwide support and professional services? How could ING reconcile the size and critical importance of its accounting problems with the capabilities of a 25-person XBRL firm?

ING’s problem — and XBRL’s problem — is that ING was wanting to build an Early Majority application with what is still primarily an Early Adopter’s technology.

Which is why the FDIC’s Call Report Modernization project is such an important example. It is an early bridge across the chasm that should be able to work because the problem can be constrained and because the need to change processes and software among users can be minimized. It is the kind of XBRL project that can be put to real use right now, generating real returns.

When you are looking at your own XBRL project, ask yourself whether it is more like the Call Report Modernization project, nicely constrained, or more like the broader, more ambitious kind of project that ING hoped to undertake. If it is the latter, be careful. You should probably think in terms of a pilot project, something where the focus is on learning rather than on operational outcomes.

One last thought — technology matures rapidly, particularly when it has as much energy and investment behind it as XBRL does. The constraints on this market that apply today could be very different in six months.  This is precisely why it is important to pay such careful attention to what is happening on either side of the chasm and to what is moving across it.

XBRL and the Chasm

On Tuesday of last week XBRL-US sponsored a set of presentations in Washington, D.C. focused on “XBRL in Government and Industry.” The conference was hosted by the Federal Deposit Insurance Corporation (FDIC), which was appropriate since it was the FDIC that was the source of some of the most significant XBRL activity announced at the conference.

Here is the news: By October 1 of this year, the more than 8300 banks submitting Call Report data to the FDIC, the Federal Reserve System, and the Office of the Comptroller of the Currency will be required to do so using XBRL. Because most banks submit these reports through use of software and services supplied by a handful of vendors, this requirement will not bring about changes in the internal operations of most banks. The initiative does, however, represent a significant application of XBRL, and opens the door to greater reuse of data and simplification of workflows for other regulatory reporting requirements. It is also a good example of the kinds of broad improvement in financial information communication and processing that XBRL enables.

Under the current system, Call Reports are submitted in a number of formats, including physical transfer of magnetic media. The agencies currently spend a substantial amount of time converting these data. Data checking and validation is done only at the tail end of the process, within the government agencies, which means that when errors or omissions turn up, they must be communicated back to the banks so that corrections can be made. In the current operating model, the major component of the regulator’s investment in collecting and analyzing these data is spent in the conversion and validation process, leaving much less time for actual analysis.

The new system, implemented as part of an interagency consortium called the Federal Financial Institutions Examination Council (FFIEC), will allow the three agencies to share a single secure information repository. Each agency will use XBRL’s ability to provide a unique semantic identity for each incoming data element to enable the agency to extract the information it needs from the repository, in the form that it needs it.
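As a sketch of what that selective extraction might look like, the fragment below parses a toy filing and pulls a single element by its namespace-qualified name. The namespace and element names here are invented for illustration, not taken from the real FFIEC taxonomy:

```python
import xml.etree.ElementTree as ET

# Hypothetical call-report fragment; names are invented for illustration.
NS = "http://example.com/call-report"
report = f"""
<report xmlns:cc="{NS}">
  <cc:TotalDeposits>250000000</cc:TotalDeposits>
  <cc:TotalLoans>180000000</cc:TotalLoans>
  <cc:TierOneCapital>30000000</cc:TierOneCapital>
</report>
"""

root = ET.fromstring(report)

# An agency interested only in deposits can pull exactly that element by
# its namespace-qualified name, ignoring the rest of the filing.
deposits = int(root.find(f"{{{NS}}}TotalDeposits").text)
print(deposits)  # 250000000
```

Because each data element carries its own semantic identity, no agency needs to know, or care, how the others use the shared repository.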

Just as important, the use of XBRL coupled with Internet-based report submission allows the agencies to specify validation suites that can be run by the banks producing the information. The validation semantics are all expressed in XBRL, and so can be communicated in a reliable, standard way to the vendors, ensuring uniform performance across vendors and allowing the agencies to extend or improve validation without having to rely on the vendors. This is the kind of thing that XBRL excels at, and it plays an important role here. Moving validation closer to the source of the information will allow banks to catch their own errors, decreasing the time investment made by all parties in the submission process. The regulatory agencies will be able to spend less time on conversion, validation, and clean up and more time on analysis, all while reducing the cost of government operations.
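A minimal sketch of such a source-side check, written in the spirit of an XBRL calculation rule (the element names and namespace are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical rule, in the spirit of an XBRL calculation relationship:
# Assets must equal Liabilities plus Equity.
NS = "http://example.com/call-report"
filing = f"""
<report xmlns:cc="{NS}">
  <cc:Assets>500</cc:Assets>
  <cc:Liabilities>350</cc:Liabilities>
  <cc:Equity>150</cc:Equity>
</report>
"""

def value(root, name):
    """Read one fact from the filing by its qualified name."""
    return int(root.find(f"{{{NS}}}{name}").text)

root = ET.fromstring(filing)
errors = []
if value(root, "Assets") != value(root, "Liabilities") + value(root, "Equity"):
    errors.append("Assets != Liabilities + Equity")

# The bank runs this check before submitting, instead of waiting for the
# agency to find the error at the tail end of the process.
print("clean" if not errors else errors)  # clean
```

Because the rule itself is expressed in a standard, machine-readable form, the agencies can tighten or extend it without waiting on each vendor's release cycle.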

This matter of the cost of data collection was a consistent theme across the presentations by government agencies at the conference. In general, financial and statistical information is currently heterogeneous in format, often with the added problem of having low information value. The numbers are all present in a report, but before the advent of XBRL there has been no easy, consistent, reliable way to enable a machine to know what the numbers represent. As one example of the cost of the current way of doing business, Mark Carney of the Commodity Futures Trading Commission spoke in terms of spending over 80 cents of every dollar invested in data collection on conversion. This is precisely the kind of problem that XBRL is designed to solve.

For more information on the FFIEC’s Call Report Modernization program and about the creation and operation of this Central Data Repository, see

In my view, there are a number of interesting facts and associated implications about this Call Report initiative. The first is that this is a mandatory initiative. XBRL is not an option, it is a requirement. XBRL has reached a level of maturity and universality sufficient to allow a set of government agencies to make it the required standard for an important application.

A related observation is that this maturity and universality has developed outside the U.S., and is only now coming back into this country of XBRL’s origin. Mike Bartell, CIO for the FDIC, made the point this way: “If we are truly serious about disclosure and transparency, we need to move aggressively toward the adoption of XBRL. The U.S. has not been a leader in this area. It is time to step up the pace. We are buried in data, and we are not making better decisions because of that.”

It is also important to note that this early, significant development in the marketplace for XBRL tools and services is the kind of “early majority,” “pragmatist” application that Geoffrey Moore described so well in his classic marketing text, Crossing the Chasm. The pragmatists are the first technology adopters on the right side of the chasm, after crossing it. They seek narrowly focused solutions that solve specific, pressing problems, adopting a new technology when it is the only thing that will do the job. The payoff must be clear and substantial–no pie in the sky or grand visions here–and the costs must be easily contained. Pragmatists adopt new technology because it will be a sure win for a particular problem, not because it will change the world.

In this case, where the actual XBRL will be produced by a handful of vendors, it is easy to see how to contain the costs associated with XBRL production. Most banks won’t know and won’t care that the call reports are being submitted in XBRL. Further, the payoff, in terms of reduced conversion and validation costs, is clear and compelling. The fact that XBRL is an open, international standard adds to its pragmatist appeal. It is exactly the right tool for this niche application.

Getting an application across the chasm–away from the early adopters and technology aficionados and over on the other side, where the mainstream applications live–is a big step. This is real evidence of the utility and potential impact of XBRL. It is also important to remember that such early leaps across the chasm are always, of necessity, narrow, tailored, niche applications. It is the narrow applications that get across first. This is an important one. What comes next?

XBRL and The Big Stick

On Wednesday of last week PR Newswire sponsored a set of webcast presentations on XBRL. This was part of PR Newswire’s increasing engagement with XBRL. The company is in the business of publishing earnings releases and would like to see more of them arriving tagged in XBRL. To that end, PR Newswire has entered into a number of agreements with technology firms and others engaged in XBRL. Last week’s panel discussion showcased an agreement with Rivet Software, in which PR Newswire offers Rivet’s Dragon Tag tool, which can be used to set up XBRL tagging of documents from Microsoft Excel and Word. The panel included Campbell Pryde, Executive Director at Morgan Stanley; Wayne Harding, VP Business Development at Rivet Software; Daniel Roberts, National Director of Assurance Innovation at Grant Thornton LLP and Vice-Chair US Adoption for XBRL-US; and Liv Watson, Vice President of XBRL at EDGAR Online, Inc.

The presentations would be useful for anyone wanting an update on XBRL issues. They are available in an online archive.

As anyone following my contributions on the Gilbane blogs knows, I think that XBRL is an important early-stage standards initiative. I also find myself wondering about the eventual pace and scope of XBRL adoption. In particular, I have been wondering what will drive adoption. Much of the early XBRL activity has been focused around external financial reporting–rather than internal use of XBRL–and I have been wondering where the payoff would be for a company. If the benefits of these early XBRL initiatives go primarily to external users, what is the motivation for the investment?

One common answer is the “Big Stick” theory of adoption–the SEC is going to MAKE companies use XBRL. Well … maybe. I heard Peter Derby, Managing Executive for Operations and Management at the SEC, talk about the SEC’s XBRL initiative at the 11th International XBRL conference, and he sure didn’t sound like he was ready to make XBRL submissions a requirement. (see my blog entry on Derby’s presentation). But, there certainly could be another story behind the official, public assertion that the market, not agencies, should set standards. Does the SEC have strong motivations of its own to push for faster, more pervasive use of XBRL–or something like it?

Daniel Roberts of Grant Thornton used part of his time during last week’s panel presentation to argue that there is indeed such a motivation. Roberts acknowledged all of the official reasons for the SEC voluntary program–testing technology features, uncovering software available for data tagging, finding out how mature the taxonomies are, discovering how deeply data will be tagged, assessing the amount of effort required to tag the data, and–of course–assessing the utility of the tagging for the SEC. However, in Roberts’ view these official reasons leave out the BIG reason: the SEC needs XBRL so that the agency does not end up buried in a mountain of paper (or PDF or HTML, which are largely the same thing when it comes to analyzing financial reports) and so that it stays out of trouble with Congress.

According to Roberts, the SEC now receives a million pages of newly filed information every day. Given this enormous stream of data, the SEC is currently able to review the financial statements from only about 18% of the companies submitting filings each year. Roberts directed attention to the language of Section 408 of the Sarbanes Oxley Act, which reads, “In no event shall an issuer required to file reports under section 13(a) or 15(d) of the Securities Exchange Act of 1934 be reviewed under this section less frequently than once every 3 years.” If the SEC is currently doing only 18% of the companies a year, and they are required to review every company no less frequently than every three years, well … do the math.
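The math is worth spelling out. Here is a minimal sketch using the figures Roberts cited (the 18% review rate and the three-year cycle are his numbers, not independently verified here):

```python
# Roberts' figures: the SEC reviews roughly 18% of filers per year,
# while Sarbanes-Oxley Section 408 requires that every filer be
# reviewed at least once every 3 years.
annual_review_rate = 0.18
review_cycle_years = 3

# Even if the SEC never reviewed the same company twice, three years
# of reviews at this rate cover at most 54% of filers -- well short
# of the 100% the statute requires.
max_coverage = annual_review_rate * review_cycle_years
print(f"Maximum 3-year coverage: {max_coverage:.0%}")

# To meet the statutory requirement, the SEC would need to review at
# least one third of filers every year.
required_rate = 1 / review_cycle_years
print(f"Required annual rate: {required_rate:.0%}")
```

At 18% per year the gap is not close: the review rate would need to nearly double just to satisfy Section 408.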

Further, given the outcry from companies having difficulty meeting SEC deadlines under the Sarbanes Oxley Act, Roberts said that it was highly unlikely that the SEC would want to report back to Congress with the news that the SEC, itself, was not able to meet the requirements of the Act.

In short, Roberts argued that the SEC needs XBRL more than it is letting on and that companies should expect to find that XBRL submissions are required within a matter of years. Given that outlook, companies would be well-advised to get started on the XBRL learning curve now, while submissions are voluntary–while a company can use XBRL for a quarter, skip it for a quarter, upgrade its tools and techniques, and try again.

There is no doubt about it: A Big Stick makes an impressive argument. And there can be no doubt that the SEC does have a Big Stick. The question that remains, given the U.S. discomfort with setting technology standards by regulatory decree and the change to a new SEC chairman who is generally expected to be less aggressive in introducing regulations, is whether the SEC will want to use its Big Stick to answer the question of why companies should adopt XBRL. What do you think?

Document Retention in Light of Today’s Supreme Court Reversal of Andersen Verdict

Today’s Supreme Court ruling reversing the decision against Arthur Andersen is big news in the compliance world. My bet is that it will have two important effects–both good.

The first is that, once again, it will be OK to destroy documents in accordance with a company’s retention policy. The second is that it is going to become even more obvious to companies that they really do need to have a carefully designed document retention policy, along with a way to ensure that it is implemented and monitored.


PCAOB Clarifies SOX Compliance Rules

Yesterday the Public Company Accounting Oversight Board (PCAOB) issued its response to concerns that Sarbanes-Oxley Section 404 requirements were onerous, unwieldy, and just too expensive. The PCAOB published a policy statement that affirmed the goals and requirements in the regulations implementing Section 404, which requires that public companies have effective internal controls over financial reporting and that an independent auditor provide an opinion regarding the effectiveness of those controls. No surprise there.

What was more interesting and important was that the PCAOB did acknowledge that many first-year audit efforts were inefficient and too expensive. The important parts of the statement called for a top-down, rather than bottom-up, approach to internal control assessment. The PCAOB also made important clarifications about the kinds of interactions between auditors and the companies they audit that are permissible and useful.

Understanding this business about “top-down” and “bottom-up” is easier if you put it in the context of how auditing practice has developed over time. Without that big-picture perspective, Section 404 and the PCAOB statements sound like a lot of accounting jargon. But, given the perspective, it is easier to see that we are talking about some fundamental changes–and about expense and confusion emerging from not getting the changes right during this past year.


The Operational Approach to Governance, Risk Management, and Compliance

Today marks the official release of the public draft of the governance, risk management, and compliance (GRC) paper that I have worked on over the past couple months with Ted Frank, of The Compliance Consortium, and others. The writing of the paper was driven by three convictions:

  • GRC stands apart: Governance, risk management, and compliance are all of a piece–and they are related to a coherent set of objectives and practices that are fundamentally different from the other things going on in an organization.
  • GRC needs high level attention: Governance, risk management, and compliance comprise a set of concerns and objectives that must be dealt with at the board of directors and senior management level.
  • GRC is manageable: Even though governance, risk management, and compliance touch thousands of processes and objectives throughout an organization, there really is a small, manageable set of concerns that should inform board and management decision-making.

This last point relates to the “both forest and trees” view that I wrote about in my recent post on XBRL and Compliance. To make GRC manageable we need ways to zoom into the details and zoom back out to the big picture. Said more formally, we need ways to deal with the concept at different levels of abstraction, from fine-grained to chunky. XBRL looks promising in this regard.

One of the key ideas expressed in the paper is that the United States Sentencing Commission guidelines regarding compliance and ethics can serve as a good starting point for identifying the important, board and senior management level GRC objectives. This idea is practically appealing, since following the guidelines can result in a 95% reduction in penalties in the event that, despite a company’s best efforts to prevent it, fraudulent activity takes place. The intent of the paper is to also make this idea appealing at an operational and functional level — we believe that we make the case that concentrating on just seven objectives can get management and board members focused on the right concerns and questions.

If this interests you, take a look at the paper. If you have comments, you can of course add them here–but if you want your comments to get more in the way of official consideration, you should also express your views on the Compliance Consortium website.

XBRL and Compliance

I have just finished working on a paper with an industry group that is concerned with compliance issues. The paper takes a broad look at enterprise-wide compliance issues, as distinguished from the trap (an easy one to fall into) of dealing with compliance in a fragmented way, driven by the demands of different (and changing) regulations.

What are the requirements for an enterprise-wide, operational approach to compliance? Well, to get the full answer you will need to read the paper when it comes out in the next few weeks. But there was one requirement–a requirement that I want to talk about here–that ties into the threads and postings about XBRL here on the Gilbane website.

One of the first, big steps toward getting a broader, more useful view of “compliance” consists of applying it to internal control procedures, rather than just in reference to external requirements. “Compliance,” in this view, means doing what is right for the organization.

Take relations with donors within a non-profit organization as an example. Compliance, in this instance, means that the staff follows the organization’s procedures for contacting donors, working with donors to structure gifts for maximum tax advantage, and staying in touch with and supporting donors after the gift has been given. Compliance, in this sense, means making use of what the organization has learned over time. Compliance is the means by which the organization ensures that learning is retained and put into practice.

Stepping back from the particulars and looking at the general case, compliance is one part of the mechanism by which an organization responds to its environment–to the sources of support, to threats, and, of course, to rules put in place by governments. Compliance–the exercise of internal control systems–is how the organization regulates itself so that it survives and thrives in its environment. To use a human analogy, your body’s response to infection is a kind of compliance response. At a higher level, your learned responses in a business meeting–measuring your reactions, thinking before you speak–are also forms of control and compliance.

The point of taking this broader view of compliance is, of course, to help organizations deal more deliberately and productively with the process of making decisions and taking risks.

But … when you put this good thinking and theory into practice, you run into a problem. The problem is that, for each component in this overall compliance system, the key to making the system work is always in the details–BUT–at the same time, you want to somehow get these systems to connect with each other.

And, they DO connect with each other. When you connect the details of responding to infections with the details of responding to a business meeting, for example, you find that it is very difficult to put all the tact and learning about social interactions into play when you are running a raging fever.

This isn’t a far-fetched analogy. When you take a close look at the day-to-day operations at Enron, courtesy of a book such as Kurt Eichenwald’s Conspiracy of Fools, it is hard to escape the sense that the Enron tragedy grew from a combination of thousands of small infections coupled with a couple of big instances of shortsightedness and fraud. The interesting question raised by a book like Eichenwald’s is how the entire system managed to get out of control–and, if we can understand that, how we can prevent such failures in the future.

So, the problem is one of finding a way to operate effectively both at the level of forest and at the level of trees. You’ve got to sweat the little things to make compliance work, but you also have to see how the little things work together in big ways.

One reason that this is so difficult is that many of the different, “tree-level” compliance efforts use different terms, because they reflect different concerns.  Calibration of lab instruments is an important aspect of compliance. Protecting privacy of patient records is another aspect of compliance. Tracking costs for clinical trials is yet another. Each uses a different language, reflecting different concerns. Yet all of these activities, taken together, contribute to assessing the health of a pharmaceutical research effort.

Successful governance–overseeing these compliance efforts and understanding what they are telling us–depends on finding a way to abstract the common elements and concerns. Communication of the common concerns depends on defining a “forest level” view that imposes uniform, organization-level language and perspective on all the tree-level activities.

My sense–and I am putting this out here for discussion and argument–is that XBRL is a good candidate for doing this. Taxonomies are a large part of what XBRL is all about, and XBRL has the flexibility, viewed as a formal language, to describe taxonomies at the level of “trees” and to link those “tree-level” concepts back up to a set of concepts that are appropriate to the needs of someone who wants to see and manage the “forest.”

Taking my pharmaceutical research example, XBRL taxonomies could describe the disparate concerns of instrument calibration, patient records, and financial costs, recording and tagging the facts associated with each of these areas of activity. The recording and identification of these facts would be an integral part of each detailed control process. At the same time, XBRL could be used to capture exception conditions and other aggregations, supporting high level, management control systems.
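To make the tree-to-forest idea concrete, here is a minimal sketch of the roll-up logic. The concept names (`lab:InstrumentCalibrationCheck`, `grc:ResearchQualityControl`, and so on) are hypothetical, and real XBRL taxonomies are expressed in XML schemas and linkbases rather than Python; this only illustrates how tagged tree-level facts might map upward to a small set of forest-level concepts:

```python
# Hypothetical tree-level facts, each tagged with a concept drawn from
# a detailed domain taxonomy: (concept, item, in_compliance).
facts = [
    ("lab:InstrumentCalibrationCheck", "spectrometer-07", True),
    ("lab:InstrumentCalibrationCheck", "centrifuge-02", False),
    ("privacy:PatientRecordAccessAudit", "site-A", True),
    ("finance:TrialCostVariance", "trial-113", False),
]

# The taxonomy's upward links: each detailed concept rolls up to a
# forest-level concern that board-level reporting cares about.
rollup = {
    "lab:InstrumentCalibrationCheck": "grc:ResearchQualityControl",
    "privacy:PatientRecordAccessAudit": "grc:RegulatoryCompliance",
    "finance:TrialCostVariance": "grc:FinancialControl",
}

# Aggregate exception conditions by forest-level concept, giving
# management the "forest" view while preserving the "tree" detail.
exceptions = {}
for concept, item, ok in facts:
    if not ok:
        exceptions.setdefault(rollup[concept], []).append(item)

print(exceptions)
```

The useful property is that the same tagged facts serve both audiences: the detailed control process works with the tree-level concepts, while the roll-up mapping produces the management view without re-entering any data.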

I would be interested in reader feedback on this idea. I am pretty sure that we do need a way to move from trees to forest and back again, and it seems to me that XBRL is set up to do that job. What do others think?


© 2024 The Gilbane Advisor
