Curated for content, computing, and digital experience professionals

Category: Web technologies & information standards

Here we include topics related to information exchange standards, markup languages, supporting technologies, and industry applications.

W3C Announces Update to CSS 2.1 Candidate Recommendation

The World Wide Web Consortium (W3C) Cascading Style Sheets (CSS) Working Group updated the Candidate Recommendation of “Cascading Style Sheets Level 2 Revision 1 (CSS 2.1) Specification.” CSS 2.1 is a style sheet language that allows authors and users to attach style (e.g., fonts and spacing) to structured documents (e.g., HTML documents and XML applications). CSS 2.1 corrects a few errors in CSS2 (the most important being a new definition of the height/width of absolutely positioned elements, more influence for HTML’s “style” attribute and a new calculation of the ‘clip’ property), and adds a few highly requested features which have already been widely implemented. But most of all CSS 2.1 represents a “snapshot” of CSS usage: it consists of all CSS features that are implemented interoperably. This draft incorporates errata resulting from implementation experience since the previous publication. http://www.w3.org/Style/

W3C HTML Working Group Publishes Working Draft of HTML 5

The World Wide Web Consortium (W3C) HTML Working Group has published a Working Draft of “HTML 5.” HTML 5 adds features to the language of the Web to help Web application authors, introduces new elements based on research into prevailing authoring practices, and defines clear conformance criteria for user agents in an effort to improve interoperability. This particular draft specifies how authors can embed SVG in non-XML text/html content, and how browsers and other user agents should handle such embedded SVG content. See also the news about moving some parts of HTML 5 into individual drafts. The full list of changes since the previous draft is given in the updated companion document “HTML 5 differences from HTML 4.” http://www.w3.org/html/wg/
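As a rough illustration of the kind of embedding the draft describes, here is a minimal sketch (not taken from the W3C draft) of an SVG fragment placed directly in a text/html page. Python’s standard-library HTMLParser is used only to show that, when the page is treated as HTML rather than as XML, the svg and circle elements are reported like any other start tags; the markup itself is invented for illustration.

    # Minimal sketch: SVG embedded directly in text/html, inspected with the
    # standard-library HTML parser. Markup and attribute values are invented.
    from html.parser import HTMLParser

    doc = """<!DOCTYPE html>
    <html>
      <body>
        <p>A circle rendered inline, with no XML serialization required:</p>
        <svg width="100" height="100">
          <circle cx="50" cy="50" r="40" fill="teal"/>
        </svg>
      </body>
    </html>"""

    class TagLister(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print(tag, dict(attrs))

    TagLister().feed(doc)
    # Output includes lines for html, body, p, svg, and circle.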

W3C Publishes New Working Drafts for OWL 2 – Last Call

The W3C OWL Working Group has published new Working Drafts for OWL 2, a language for building Semantic Web ontologies. An ontology is a set of terms that a particular community finds useful for organizing data (e.g., for data about a book, useful terms include “title” and “author”). OWL 2 (a compatible extension of “OWL 1”) consists of 13 documents (7 technical, 4 instructional, and 2 group Notes). For descriptions and links to all the documents, see the “OWL 2 Documentation Roadmap.” This is a “Last Call” for the technical materials and is an opportunity for the community to confirm that these documents satisfy requirements for an ontology language. This is a second Last Call for six of the documents, but because the changes since the first Last Call are limited in scope, the review period lasts only 21 days. For an introduction to OWL 2, see the four instructional documents: an “overview,” “primer,” “list of new features,” and “quick reference.” http://www.w3.org/2007/OWL, http://www.w3.org/TR/2009/WD-owl2-new-features-20090421/
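To make the book example above concrete, here is a minimal sketch, not drawn from any of the OWL 2 documents, of a tiny ontology written in Turtle and loaded with the rdflib Python library (an assumed toolkit; any RDF/OWL library would serve). The class and property names are invented for illustration.

    # Minimal sketch of an "ontology" as a shared vocabulary (Book, title,
    # author) applied to one piece of data. Requires rdflib (pip install rdflib).
    from rdflib import Graph

    ttl = """
    @prefix :    <http://example.org/books#> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    :Book   a owl:Class .
    :title  a owl:DatatypeProperty .
    :author a owl:ObjectProperty .

    :mobyDick a :Book ;
        :title  "Moby-Dick" ;
        :author :melville .
    """

    g = Graph()
    g.parse(data=ttl, format="turtle")
    for subject, predicate, obj in g:  # iterate the parsed triples
        print(subject, predicate, obj)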

SEC Issues Summarized XBRL Guidance

The SEC has posted newly summarized XBRL compliance information on its website (http://www.sec.gov/info/smallbus/secg/interactivedata-secg.htm). The guidance is directed toward small businesses but contains a concise description of the program for all companies. The information covers: the three-year phase-in period, certification requirements, third-party involvement, modified liability, consequences of non-compliance, web posting, grace periods, due dates, applicable financial statements, required formats, optional early compliance, and other helpful resources.

The information is not meant to replace the rules as published in the EDGAR Filing Manual (Chapter 6, Interactive Data), located here: http://www.sec.gov/rules/final/2009/33-9002.pdf.

The bottom line is that each company will be responsible for the content of its SEC XBRL filings and should become very familiar with all reporting requirements. http://www.sec.gov/info/smallbus/secg/interactivedata-secg.htm is an excellent place for companies large and small to begin exploring the SEC’s XBRL mandate.

Quark Teams with IBM Enterprise Content Management to Bring XML and DITA to the Masses

Quark announced that it has teamed with IBM Enterprise Content Management (ECM) to enable the broad adoption of XML across the enterprise by integrating Quark XML Author with IBM FileNet Content Manager. The integration makes it possible for any IBM FileNet Content Manager user working in Microsoft Word to author intelligent content that can be reused and delivered to multiple channels or formats. The ability to author, manage, and reuse structured content supports critical business needs such as managing intellectual property, complying with regulatory mandates, and automating business processes. A simple, streamlined XML authoring process also helps organizations drive enterprise-wide adoption of XML and DITA. Quark XML Author for Microsoft Word is an XML authoring tool that allows users to create XML content in a familiar word processing environment. Quark XML Author enhances Microsoft Word’s native XML support by allowing users to create narrative XML documents directly, without seeing tags, being constrained to boxes, or being aware of the technical complexities associated with XML. http://www.quark.com/
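For readers unfamiliar with what such a tool maintains behind the word-processor view, here is a minimal sketch (not Quark’s actual output) of a DITA-style concept topic built with Python’s standard library; the element names follow the DITA concept topic type, and the content is invented.

    # Minimal sketch of a DITA-style concept topic, the kind of structured XML
    # an authoring tool can keep behind a familiar word-processing interface.
    import xml.etree.ElementTree as ET

    concept = ET.Element("concept", id="content_reuse_overview")
    ET.SubElement(concept, "title").text = "Content reuse overview"
    conbody = ET.SubElement(concept, "conbody")
    ET.SubElement(conbody, "p").text = (
        "Content authored once as structured XML can be reused and "
        "delivered to multiple channels or formats."
    )

    ET.indent(concept)  # pretty-print; available in Python 3.9+
    print(ET.tostring(concept, encoding="unicode"))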

An Information Parable

With apologies to S. I. Hayakawa, whose classic "A Semantic Parable" has been a staple of virtually everyone’s education for more than a half-century.

Not so long ago nor perhaps all that far away, there existed a need for several departments of a huge organization to share information in a rapid and transparent way so that the business of the organization could be improved and its future made more secure.

Now each of these departments understood and agreed with the basic need for sharing, so no one expected there to be problems achieving the desired results. Each department had its own IT group, working diligently using the best practices of information technology as it understood them. When the need for information sharing among the departments became evident, the executive managers called a meeting of the IT managers, operating managers, and lead technologists from each department. At this meeting, the executives explained the need for a more transparent and flexible information environment among the departments and with the world outside. Everyone nodded their agreement.

The IT manager of a major department exclaimed, "What we need is an enterprise information architecture, an EIA." Most of the other IT representatives agreed, and an effort to develop such an architecture was begun right there. The initiating department stated that because it had the largest and most mature IT infrastructure, the EIA should be modeled on the technology approaches it was already using. Several other departments agreed; they had already adopted similar IT approaches and could easily participate in such an EIA. Other departments, however, having gone down different paths in their IT planning, took issue with this suggestion. They feared that changing course to come in line with the suggested architecture could seriously disrupt their existing IT plans, funding, and staffing. Although willing to be good citizens, they were mindful that their first responsibility was to their own departments.

More discussion ensued, suggesting and examining different IT concepts like J2EE, SOA, SQL, Web-centricity, BPR, and so on. Several departments that had software capable of supporting it even mentioned XML. Like a Chinese puzzle, the group always found itself just short of consensus, agreeing on the basic concepts but each bringing its own variations in implementation level, manufacturer, and so forth to the discussion. In the end, tempers frayed by the seemingly endless circular discussions, the group decided to table further action until more detail about the need could be developed. Actually, nearly everyone in the room knew that they probably were, at that moment, as close to consensus as they were likely to get unless the top managers chose and mandated a solution.

Anticipating just such a mandate, nearly every department descended on top management to make the case for its particular IT and EIA approach, or, sensing defeat, for an exemption from whatever the decision turned out to be. The top managers, of course, who knew little about the details of IT, were affected most by the size and clout of the departments beseeching them and by the visibility of the IT vendors those departments touted. Battle lines were drawn between groups of departments, some of which even went so far as to turn their vendors loose on top management to help make the case for their approach. Like molasses in winter, the entire situation began to congeal, making any movement, or for that matter any communication among the departments, unlikely. In the midst of this growing chaos, the original need to share information, and the information itself, was almost completely forgotten.

Then, when things looked terminal, someone from one department's operating staff suggested that maybe things would work better if the organization just developed and adopted standards for the information to be exchanged and didn't try to develop anything so far-reaching as an entire Enterprise Information Architecture. At first, no one listened to this obviously "un-IT" suggestion, but as things got worse and progress seemed out of reach, top management asked why the suggestion shouldn't be considered. After much grumbling, a meeting was called in which the staff making the suggestion laid out their ideas:

  • First, they said, we should decide what information must be exchanged among departments. We can do this based on our knowledge of the information content itself so we won’t need a great deal of technical skill beyond an understanding of the information standard selected.
  • Next, we might decide what interchange format will be used to exchange the information. It will be important that this format be capable of easy creation and ingestion by the IT tools in each participating department. XML seems to be a growing interchange format, so maybe we should consider it (a rough sketch of such an exchange follows this list).
  • Then we can document what we want to exchange and how, and publish that documentation to every department so that their staffs can attend to the task of exporting and importing the desired information elements. We should take care not to ask the departments to use any particular technology to accomplish the exchange; with XML tools so readily available, that shouldn't be difficult.
  • Then we may want to set some deadlines by which the various departments must be able to exchange information in the format we choose. That will ensure that the entire effort keeps moving and will help flush out problems that need more resources. Maybe if we just tell them the results we need, they won’t be so likely to resist.
  • Finally, we ask the various IT staffs to come up with their own technological approaches to the act of sharing: intranet, Internet, VPN, etc. They’re really good at this and they should have the say as to how it is done for their department.
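Below is a minimal sketch of the kind of exchange the operating staff had in mind: an agreed, deliberately small XML record format, produced and consumed here with Python's standard library standing in for whatever tools each department already owns. The element names are invented for illustration.

    # Minimal sketch of the proposal above: agree on a small XML interchange
    # format and let each department produce or consume it however it likes.
    import xml.etree.ElementTree as ET

    # Department A exports a record in the agreed format.
    record = ET.Element("customer-record", id="C-1042")
    ET.SubElement(record, "name").text = "Acme Distributing"
    ET.SubElement(record, "status").text = "active"
    exported = ET.tostring(record, encoding="unicode")

    # Department B, running entirely different systems, reads back only the
    # elements the agreement names.
    parsed = ET.fromstring(exported)
    print(parsed.get("id"), parsed.findtext("name"), parsed.findtext("status"))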

After the presentation, there was silence in the room followed by some mildly contemptuous grumbling from some of the IT staff members in the back.

How, they whispered, could a complex challenge like integrating the organization’s IT systems into an EIA be dealt with by a few simplistic rules about data formats?   Finally, one of these malcontents gave voice to this objection, to which the presenter replied that the entire idea was to avoid impact on the complex ongoing IT activities of the various departments. The goal, he said, was to articulate what the organization needed in terms of information, leaving the approaches for its provision to each department’s IT staff. This, he said, would hopefully provide a level at which consensus could be reached, technologically based consensus having proven elusive for many reasons, some quite serious.

Sometimes, he said, it isn't as important to create the impetus to force consensus as it is to develop a rationale on which that consensus can be achieved and accepted voluntarily by the players. In the case of our hypothetical organization, there were reasons why the technological lives of the departments would never fully coincide and why each would resist even the weight of management dictates to do so. There were not, however, the same reasons why these departments could not agree on what the organization needed in shared information, if each department would be allowed to support the sharing in its own way.

The group thought about this radical departure from good systems engineering disciplines and began to realize that perhaps some integration challenges cannot be met by traditional (hard) systems and technology approaches; in fact, it may have taken quite some time and more conversations to reach this point. When this had finally penetrated, the departments agreed to base their collaboration on the information itself and began the joint process of building the needed interchange foundations, actually working with the operating staffs who created, used, and understood the information. They chose XML, and each department found that it had significant XML resources in the software it already used. Then they went back to work, confident that they would be asked to give up neither their hard-won IT environments nor their autonomy as professionals.

As for the organization as a whole, over the next year or so it saw its information sharing begin to improve, spent relatively little money doing it… and it was able to continue the practice of having all-hands holiday parties at which the department IT staffers and operating folks spoke to one another.


© 2024 The Gilbane Advisor
