Curated for content, computing, and digital experience professionals

Category: Web technologies & information standards

Here we include topics related to information exchange standards, markup languages, supporting technologies, and industry applications.

The Matter of Assurance

One of the interesting things about the XBRL application space is the way that it takes some of the problems that have been around since the birth of SGML and XML and brings them into sharp focus. One of the “foundation” ideas behind XML — and, before that, SGML — is that the tagged file is the reference object–it’s the part you keep–and the printed output is, well … just output. You can always make another one of those. The idea, as everyone knows, is that you save the document in one form, and then publish and deliver in many forms.
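The single-source idea can be sketched with a toy example. The markup below is invented for illustration (not any real schema): one tagged source document is the reference object, and each rendition is generated, disposable output.

```python
# Toy illustration of "save once, publish many": one tagged source,
# multiple generated renditions. (Invented markup, for illustration only.)
import xml.etree.ElementTree as ET

SOURCE = """<statement>
  <title>Annual Report</title>
  <item name="Revenue">1200</item>
  <item name="Expenses">900</item>
</statement>"""

def to_html(xml_text: str) -> str:
    """One rendition: an HTML table."""
    root = ET.fromstring(xml_text)
    rows = "".join(
        f"<tr><td>{i.get('name')}</td><td>{i.text}</td></tr>"
        for i in root.findall("item")
    )
    return f"<h1>{root.findtext('title')}</h1><table>{rows}</table>"

def to_text(xml_text: str) -> str:
    """Another rendition: plain text."""
    root = ET.fromstring(xml_text)
    lines = [root.findtext("title")]
    lines += [f"{i.get('name')}: {i.text}" for i in root.findall("item")]
    return "\n".join(lines)

# The XML is what you keep; both outputs can always be regenerated.
print(to_html(SOURCE))
print(to_text(SOURCE))
```

The question the post raises is exactly about this split: which of these artifacts, source or rendition, is the one that carries legal weight?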

Now, suppose that what you are saving–in XBRL–is your firm’s financial statements.  Here is the question: When the auditors show up to offer their opinion as to whether your financials fairly represent the state of your company, which form of financials do they offer their opinion on?  The XBRL?  Or some kind of printed version?

It’s an important question. It is also something of a trick question. If you don’t answer “the XBRL version,” then it is pretty clear that XBRL is not really the version that matters. Oops. So much for plans to really submit financial statements in XBRL to the SEC on anything but a test basis … Right? But, if you DO answer “XBRL,” you quickly come face-to-face with the fact that nobody has any clear idea of just how you would do that.

What would auditing an XBRL financial statement mean?  Having accountants read through the actual XBRL?  Probably not, huh? So, they are dealing with something derived from the XBRL? How do they know that this output fairly represents the actual XBRL? How do they know that there isn’t something in the XBRL that isn’t showing up in the rendition that they are reviewing?

And … just what is the “XBRL document”? The XBRL output that you can see does not come from just one thing. The representation that you see on the page draws from many linked files and uses processing capabilities scattered across other linked files. So, an audit couldn’t really look just at a single file — the opinion also depends on all the other linked files that make sense of the particular XBRL instance document.
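To make the dependency point concrete, here is a minimal sketch. The “instance document” below is invented and far simpler than real XBRL, but it shows the shape of the problem: the audit scope is never one file, it is the instance plus everything the instance points at.

```python
# Sketch: a toy "instance document" only makes sense together with the
# external files it references. (Invented markup, much simpler than real
# XBRL; used only to illustrate the dependency problem.)
import xml.etree.ElementTree as ET

INSTANCE = """<instance>
  <schemaRef href="us-gaap.xsd"/>
  <linkbaseRef href="labels.xml"/>
  <linkbaseRef href="calculations.xml"/>
  <fact concept="Revenue" contextRef="FY2004">1200</fact>
</instance>"""

def external_dependencies(xml_text: str) -> list[str]:
    """Collect every external file the instance depends on."""
    root = ET.fromstring(xml_text)
    return [e.get("href") for e in root.iter() if e.get("href")]

deps = external_dependencies(INSTANCE)
print(deps)  # anything offering an opinion must cover all of these too
```

The single `fact` element is meaningless on its own: its concept, its labels, and the calculations it participates in all live in the referenced files.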

I’m not arguing that this is not a solvable problem.  But I can say that it is not a SOLVED problem at the moment, and can also say that the solving of it is going to take some time and some serious effort.

In the meantime, what will happen in the real world is that auditors will express an opinion on financials that will be represented in PDF or HTML, and then someone will take those PDF or HTML pages and convert them to XBRL.

That’s fine — the XBRL is WAY more useful than the PDF or HTML — but is also an obviously flawed approach:

  • The XBRL is just a translation of the “real” financials.  We are left with the question of whether it is really right and whether there were errors in translation.
  • More fundamentally, the XBRL is not the official, normative document. It is, instead, some kind of shadow form of the real thing. That seems like a pretty poor basis for legally binding data interchange.  Someone, someday, needs to be able to provide a professional opinion about the XBRL, itself, as a fair representation that does not contain material misstatement.

Don’t get me wrong … this is not some kind of “fatal flaw” in the XBRL story. But it is a real problem, and a really good illustration of the kind of difficult problem that emerges, 20 years down the road, as we continue to pursue the SGML and XML ideal of separating content from presentation. As we have found again and again, the semantics depend on both content and presentation. Audited financials are just an extremely demanding application that underscores that point.

XBRL Conference – Tuesday Morning – Market Maturity

XBRL is a business standard. The immediate users include financial executives, accountants, analysts, and financial regulators, as well as investors of all sizes. All the suits and ties in the audience fit the picture of this user community–this sure does not look like the same crowd that I see at web publishing conferences.

But, scratch the surface, and there is a lot that’s the same. The audience at this conference is mostly people who are building things. They are early adopters and vendors and integrators serving early adopters.

One of the most interesting talks in the first set of the Tuesday morning sessions came from Peter Derby, whose job is to make the SEC more effective and efficient. His title is Managing Executive for Operations and Management, Office of the Chairman, U.S. Securities and Exchange Commission.

The SEC would like to be able to review the substance of a much greater number of financial reports with greater accuracy and greater reliability. Receiving the filings in the form of tagged data has obvious appeal.  So, last year the SEC put out a request for comment on a proposal to invite companies to make voluntary submissions of data in XBRL format. The voluntary program went live in March of this year.

And so far the SEC has received (drum roll …) THREE voluntary filings.

Gosh. That many!

To be fair, companies have had their hands full meeting Sarbanes-Oxley Section 404 requirements, which are certainly not voluntary. That could be one reason for the slow response rate to date. But Derby thinks there could be other reasons–and other problems for the XBRL community to solve …

  • Not enough off-the-shelf tools: Derby’s view is that, at the moment, XBRL is just too hard. There are not enough tools for preparers to use, and there are not enough analytical and presentation tools for information users. There are too many people still looking at tags.
  • Not enough internal use: One artifact of the way that XBRL has been driven by regulators is that much of the early activity has been focused at the end of the process: after a company produces its financial statements the old way, THEN they are broken into pieces and marked up in XBRL. Derby notes that this leaves out most of the potential financial benefit of the process. He suggests that the XBRL community needs to start making the case for use earlier in the process, when the XBRL might serve internal processes.
  • Too much focus on boiling the ocean: Derby said that he recognizes, of course, that XBRL is an international standard, and so needs to address a host of difficult problems as you move across accounting standards and practices. But, in his view, some of this effort would be better spent on pragmatic issues such as making XBRL easier for humans to read, and on change management.

In my view, Derby’s first two points are on the money. I am less in agreement with the last one. Particularly with a lot of the XBRL activity happening within the European Union, I think that getting the internationalization right is critical.  And … human readability? I thought we were going to focus on tools.

In speaking privately with Derby after his presentation, I asked him about the purpose of the voluntary program. His answer was that the SEC simply needs to find out what it could do with XBRL submissions. Further, he feels that this initiative must be largely market-driven, not regulator-driven. His hope is that, perhaps over a period of three years, the SEC will begin to see enough volume in submissions to permit some real economies and new approaches to using and analyzing the financial filings.

Derby’s presentation was followed by Otmar Winzig, Vice President of Investor Relations for Software AG and a member of the board of DIRK (the German Investor Relations Association). After hearing about Derby’s three voluntary submissions, Winzig was suddenly feeling much better about his pilot program of 8 companies, scheduled to expand this year to 25.

Winzig made an interesting argument for small and mid-cap companies to get behind XBRL–disintermediation.  As the investor relations head at a mid-cap company, he recognizes that one of his big problems is getting analyst coverage.  He argues that 90% of the 10,000 companies traded on European stock exchanges are virtually unknown to investors. As a result, these companies are almost completely dependent on sell-side analysts to get the word out about the company’s performance–even when results are outstanding.

Winzig sees a possibility that broad adoption of XBRL, coupled with tools that allow investors to make direct use of it, would let small and mid-cap companies take their good stories directly to investors, and, in the process, become more independent of analysts who are themselves interest-driven market participants.

All of this should be pretty familiar to readers who have watched SGML or XML market development — or, for that matter, almost any new market. The market needs more applications to grow, and the market is not big enough to attract substantial investment and application development. Put another way, it is precisely the kind of market where entering early with a relatively modest investment can produce a nice return.


XBRL – An Exciting Early Market

I am writing at the end of day 1 of the 11th International XBRL conference in Boston. Over the course of the day I have seen a lot and learned a lot–which I will share with you in a moment.

But I wanted to start with this end of the day perspective: this is a really exciting area of activity. If I were starting a small XML company today, this market would be at the top of my list. It is an EARLY market–no question about it. It is the kind of conference where the vendors still feel obliged to show you the actual markup — early, early. But there is energy and opportunity here that is missing in many of the more mature areas that we cover for the Gilbane Report. This is an exciting place with a lot of problems yet to solve.

At the moment, the activity is being driven primarily by regulatory requirements–most importantly, European regulatory requirements. (Think in terms of all the members of the EU now wanting to find ways to transparently share information across what once were many different accounting standards and sets of national regulations.) The good thing about regulatory requirements is that they can open opportunities for small, innovative firms. I am seeing that happening here.

Apart from the regulatory requirements, consider that, as of today, financial analysts begin the job of understanding a company’s financial statements by cutting and pasting data either from an HTML version of the financial statements–or a PDF one–into spreadsheets.  That’s nuts. It can’t last.  There has to be a better way. XBRL is that better way. At the end of the day, there are many users other than regulators who want this stuff.
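A short sketch shows why tagged filings beat cut-and-paste: a named fact can be pulled straight into analysis, with no scraping of an HTML or PDF rendition. (The markup and concept names below are invented and simplified, not real XBRL.)

```python
# Sketch: with tagged financial data, analysis starts from named facts
# instead of copy-and-paste. (Invented, simplified markup.)
import xml.etree.ElementTree as ET

FILING = """<filing>
  <fact concept="Revenue" unit="USD">1200</fact>
  <fact concept="NetIncome" unit="USD">300</fact>
</filing>"""

def get_fact(xml_text: str, concept: str) -> float:
    """Look up a fact by concept name; no scraping required."""
    root = ET.fromstring(xml_text)
    for fact in root.findall("fact"):
        if fact.get("concept") == concept:
            return float(fact.text)
    raise KeyError(concept)

margin = get_fact(FILING, "NetIncome") / get_fact(FILING, "Revenue")
print(f"net margin: {margin:.0%}")  # → net margin: 25%
```

The same lookup works unchanged across every filing that uses the same concepts, which is exactly the economy the analysts doing cut-and-paste today are missing.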

There are plenty of problems here too. As I dig into my notes from today’s sessions in more detail–in subsequent Blog entries–I will share some of what I see. But I didn’t want to dive into the critical viewpoint stuff without first saying that this is one hot area.  I am looking forward to day 2.

XQuery

We had an interesting briefing with Jerry King, Vice President & General Manager, XML Products, for DataDirect Technologies. Jerry champions DataDirect’s XQuery initiatives and products, including Stylus Studio, their XML IDE.

Jerry makes a great case for XQuery being a game-changing technology. That’s his job, of course, but I tend to agree. I am involved in a project now where XQuery is the central technology, and I am convinced of its core benefits for this client and for others. There is also this roundup about XQuery on Internetnews.com that makes some of the points Jerry did, and includes some interesting quotes from Sandeepan Banerjee of Oracle, who leads their XML initiatives.

Whither SVG

Writing for Publish.com, Matt Hicks provides a very good update on SVG support in the browser, putting in perspective recent announcements about SVG support in upcoming versions of Opera and Mozilla Firefox.

Keynote Debate: Microsoft & Sun: What is the Right XML Strategy for Information Interchange?

I am liveblogging the Keynote Debate between Microsoft and Sun on what is the right strategy for information interchange. The panelists are Tim Bray, Director, Web Technologies, Sun Microsystems, and Jean Paoli, Senior Director, XML Architecture, Microsoft. Jon Udell is moderating.

  • Actually Frank Gilbane is moderating, and not Jon, so we will hear some of Jon’s thoughts as well
  • Frank: the session is really about strategies for sharing, preserving, and integrating document content, especially document content with XML.
  • Frank gave some background about the European Union’s attempts to standardize on Microsoft Office or OpenOffice.
  • Tim elucidated some requirements of your data format: (1) technically and legally unencumbered; (2) high quality (and a notable aspect of quality is allowing a low barrier to entry). Tim: “As Larry Wall (the inventor of Perl) noted, easy things should be easy, and hard things should be possible.”
  • Jean predicted that by 2010, 75% of new documents will be XML.
  • Tim agreed with Jean that 75% of new documents will be XML by 2010, but asked how many of them will be XHTML (as opposed to a more specialized schema, I assume).
  • Some agreement by all that electronic forms are an important aspect of XML authoring, but Tim thinks the area is “a mess.” I’m paraphrasing, but Tim commented on the official XForms release, “Well, it’s official.”
  • Jean commented that XML-based electronic forms are made more difficult because forms themselves require consideration of graphical user interface, interactivity, and even personalization to a degree. This suggests forms are more complex than documents. (And this reminds me of a comment Mark Birbeck made about there being a fine line between an electronic form and an application.)
  • Good question from the audience: so much time has elapsed since SGML got started, and we still only have XSL-FO (which this person was not happy with). What does this suggest about how long it will take to get better, high-quality, typographically sophisticated output?
  • Tim would suggest we are seeing some improvement, beginning with better resolution on the screen.
  • Another commenter weighed in, suggesting that format is important and format does convey meaning. Would like to hear that the tools are going to get better.
  • Frank: when do you need a customized schema?
  • Jean: best way to safeguard your data and systems is to have an XML strategy. You can gain efficiencies you never had before. Also suggested that the Microsoft schemas will not somehow trap your content into Microsoft’s intellectual property.
  • Jon’s takeaways: (1) software as service (2) XML-aware repositories and (3) pervasive intermediation (the content flows in such a way that you can intermediate it)

XBRL: You Gotta Love It

One of the cover stories in the new April Journal of Accountancy carries the blurb: “Six Reasons to Love XBRL.” The article is actually part of continuing coverage by the American Institute of Certified Public Accountants (AICPA). The AICPA’s commitment to making CPAs increasingly aware of XBRL is a good thing if you are interested in seeing more and more ability to do intelligent processing of financial documents.

One unfortunate thing about the article, though, is that the six reasons to love XBRL are all focused primarily on its use outside the company–after you have produced financial statements in XBRL. As I have noted before, some of the really interesting applications–applications that could be used as part of an internal control framework–happen only if you begin using it inside the company and earlier in the process.

I think we’ll get there. 

By the way, there is a conference on XBRL coming up in Boston later this month that you might be interested in if you want to learn more about XBRL and its applications.

Using XML in Enterprise Content Management: Technologies and Case Studies

As part of the conference next week, I will be doing a tutorial on XML and how it is currently used in content management applications. There is plenty to talk about. While there are few “pure” applications of XML content management, XML is used, in varying degrees, to manage and represent the content, the metadata, the supporting data, and the configuration data in many content management applications.

We will spend some time talking generally about how XML is used in content management applications. Much of the focus will be on a series of brief case studies–example applications, really–discussing how successful projects use XML today.


© 2024 The Gilbane Advisor
