Curated for content, computing, and digital experience professionals

Category: Web technologies & information standards

Here we include topics related to information exchange standards, markup languages, supporting technologies, and industry applications.

Gilbane Group Announces New Practice to Help Enterprises Leverage XML Technologies and Business Solutions

Gilbane Group Inc. announced the launch of a new practice area dedicated to helping organizations of all types utilize XML technologies and best practices. Well-known industry expert and long-time Gilbane associate Bill Trippe will be the practice’s Lead Analyst. Trippe is joined by industry veterans and Gilbane senior analysts Leonor Ciarlone and Mary Laplante. Gilbane’s XML Technologies and Content Strategies Practice is designed for IT and business managers who need to gain control of critical content, increase collaboration across enterprise applications, improve efficiencies through faster and more flexible information distribution between business partners and customers, and implement new business models that can keep pace with today’s internet-speed competitive requirements. The amount of XML content being generated today is staggering, as large infrastructure providers like Microsoft, IBM, Google, Oracle, and others offer tools and technologies that generate and manage XML information. While many organizations are taking advantage of XML within departmental applications, most companies are not even close to taking advantage of the XML information being created and utilized by popular applications, including office software and database repositories. Significantly, many executives are unaware of the XML content and data that are untapped assets within their organizations. To learn more about Gilbane Group’s XML Consulting and Advisory Practice, visit the group’s new blog at https://gilbane.com/xml


Our “New/Old” XML Practice

Today we announced our new “XML Technologies & Content Strategies” consulting service. The service will be led by Lead Analyst Bill Trippe, who is joined by Mary Laplante and Leonor Ciarlone. See the press release and Bill’s introductory post on the practice’s new blog at https://gilbane.com/xml. Bill, Mary, and Leonor all have long and deep experience in this area and make an exceptionally strong team. You can reach them at: xml@gilbane.com.

You’ll note the “New/Old” in this post’s title. Many readers will know that this is because we have always been involved in XML consulting, and before XML existed we were involved in SGML consulting, which of course is where XML came from. In fact, though we have changed the name of the company a couple of times, our original company was formed in 1986 to advise organizations like the DoD, the Department of Commerce, Lockheed, Fidelity, American Airlines, and many more on the use of descriptive markup languages and meta-languages like SGML. Indeed, I first met Bill in 1987 when he was at Mitre investigating SGML. You can still read many of our monthly reports from the ’90s that cover markup technologies, although Tim Bray, who edited the Gilbane Report in the late ’90s and is one of the authors of the XML standard, didn’t write much about XML then, since it was still in “stealth” mode. It was also important then to stay neutral about standards, which obviously would have been tough for Tim at the time.

So if we’ve been doing this all along, what’s new? In short, critical mass, information infrastructure, and demand. The sheer volume of XML being created is reaching a level that demands enterprise strategic attention. XML is already part of many organizations’ information infrastructure, whether they know it or not. And while many of our consulting clients are focused on specific applications, there are also many who are looking at the big picture and really want to understand what information encoded in XML can do strategically for their business. More from today’s press release:

Gilbane’s XML Technologies and Content Strategies Practice is designed for IT and business managers who need to gain control of critical content, increase collaboration across enterprise applications, improve efficiencies through faster and more flexible information distribution between business partners and customers, and implement new business models that can keep pace with today’s internet-speed competitive requirements. The amount of XML content being generated today is staggering, as large infrastructure providers like Microsoft, IBM, Google, Oracle, and others offer tools and technologies that generate and manage XML information. While many organizations are taking advantage of XML within departmental applications, most companies are not even close to taking advantage of the XML information being created and utilized by popular applications, including office software and database repositories. Significantly, many executives are unaware of the XML content and data that are untapped assets within their organizations.

Welcome to XML Technologies and Content Strategies

As Frank noted in our main blog and in the related press release, this blog is part of our launch this week of a new practice focused on the technologies, strategies, and best practices associated with using XML in content management. With this focus on XML, the new practice is broad: XML is fundamental to so many aspects of content management. Yet the focus on XML also compels us to look at content management through a certain lens. This begins with the vendor offerings, where nearly every platform, product, and tool has to meet anywhere from a few to myriad XML-related requirements. As XML and its related standards have evolved and matured, evaluating this support has become a more complex and considered task, and the more complex and feature-rich the offering, the more difficult that evaluation becomes.

And indeed, the offerings are becoming more complex, especially among platform vendors like Microsoft, IBM, and Oracle. Looking at SharePoint means evaluating it as a content management platform, but also looking specifically at how it supports technologies like XML forms interfaces, XML data and content feeds, and integration with the XML schemas underlying Microsoft Word and Excel. It also means looking at SOA interfaces and XML integration of Web Parts, and considering how developers and data analysts might want to utilize XML schema and XSLT in SharePoint application development. Depending on your requirements and applications, there could be a great deal more functionality for you to evaluate and explore. And that is just one platform.
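To make the Word and Excel point concrete, a .docx file is itself a ZIP package of XML parts, so even a few lines of general-purpose code can expose the markup an everyday office document carries. The sketch below uses only the Python standard library; the file name is a placeholder, and it illustrates the WordprocessingML format in general rather than any SharePoint-specific API.

```python
# Minimal sketch: looking at the XML inside a Word (.docx) document.
# A .docx package is a ZIP of XML parts; the main body is word/document.xml.
# "report.docx" is an illustrative placeholder file name.
import zipfile
import xml.etree.ElementTree as ET

WORDML = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

with zipfile.ZipFile("report.docx") as package:
    body_xml = package.read("word/document.xml")

root = ET.fromstring(body_xml)

# Recover the plain text of each paragraph (w:p) by joining its text runs (w:t).
for paragraph in root.iter(WORDML + "p"):
    text = "".join(run.text or "" for run in paragraph.iter(WORDML + "t"))
    if text:
        print(text)
```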

But understanding the vendor (and open source) offerings is only one piece of the XML content management puzzle. Just as important as choosing the right tools are the strategic issues in planning for and later deploying these offerings. Organizations often don’t spend enough time asking and answering the biggest and most important questions. What goals do they have for the technology? Cost savings? Revenue growth? Accelerated time to market? The ability to work globally? These general business requirements then need to be translated into more specific requirements, and only then do those requirements begin to point to specific technologies. If XML is part of the potential solution, organizations need to look at which standards might be a fit. If you produce product support content, perhaps DITA is a fit for you. If you are a publisher, you might look at XML-based metadata standards like XMP or PRISM.
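As a small illustration of how lightweight such a standard can be at the level of an individual file, the sketch below assembles a bare-bones DITA topic with the Python standard library. The id and text are placeholders, and a real DITA deployment would also add the DOCTYPE declaration and validate against the DITA DTDs or schemas.

```python
# Minimal sketch: a bare-bones DITA topic assembled programmatically.
# Illustrative only; ids and text are placeholders, and no validation is done.
import xml.etree.ElementTree as ET

topic = ET.Element("topic", id="widget_overview")           # root DITA topic
ET.SubElement(topic, "title").text = "Widget overview"      # required title
body = ET.SubElement(topic, "body")
ET.SubElement(body, "p").text = "The widget provides ..."   # topic content

# Emits a single-line <topic> element, ready to be wrapped with a DOCTYPE
# declaration and handed to a DITA toolchain.
print(ET.tostring(topic, encoding="unicode"))
```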

Finally, XML doesn’t exist in a content management vacuum, removed from the larger technology infrastructure that organizations have put in place. The platforms and tools must integrate well with technologies inside and outside the firewall; this is especially true as more software development happens in the cloud and organizations more readily embrace Software as a Service. One thing we have learned over the years is that XML is fundamental to two critical aspects of content management: the encoding and management of the content itself (including the related metadata), and the integration of the many component and related technologies that make up a content management environment. Lauren Wood wrote about this in 2002, David Guenette and I revisited it a year later, and the theme recurs in numerous Gilbane writings. The ubiquitous nature of XML makes the need for strategies and best practices more acute, and also points to the need to bring together the various stakeholders, notably the business people who have the content management requirements and the technologists who can help make the technology adoption successful. Projects have the best chance of succeeding when these stakeholders are brought together to reach consensus first on business and technical requirements, and, later, on technology and approach.

As Frank noted, this is “New/Old” news for all of us involved with the new practice. I first discussed SGML with Frank in 1987, when I was at Mitre and responsible for a project to bring new technology to bear on creating specifications for government projects. Frank had recently launched his technology practice, Publishing Technology Management. Leonor was a client at Factory Mutual when I worked for Xyvision (now XyEnterprise) in the early 1990s. And I probably first met Mary at a GCA (now IDEAlliance) event during my Xyvision days, when she worked for a competitor, Datalogics. We are, in the polite vernacular of the day, seasoned professionals.

So welcome to the new blog. Watch this space for more details as we announce some of the offerings and initiatives. I plan to blog actively here, so please add the RSS feed if you prefer to digest your material that way. If you have ideas or suggestions, don’t hesitate to post here or contact me or any of the other analysts directly. We look forward to the interaction!

W3C Opens Data on the Web with SPARQL

W3C (The World Wide Web Consortium) announced the publication of SPARQL, the key standard for opening up data on the Semantic Web. With SPARQL query technology, pronounced “sparkle,” people can focus on what they want to know rather than on the database technology or data format used behind the scenes to store the data. Because SPARQL queries express high-level goals, it is easier to extend them to unanticipated data sources, or even to port them to new applications. Many successful query languages exist, including standards such as SQL and XQuery. These were primarily designed for queries limited to a single product, format, type of information, or local data store. Traditionally, it has been necessary to formulate the same high-level query differently depending on application or the specific arrangement chosen for the relational database. And when querying multiple data sources it has been necessary to write logic to merge the results. These limitations have imposed higher developer costs and created barriers to incorporating new data sources. The goal of the Semantic Web is to enable people to share, merge, and reuse data globally. SPARQL is designed for use at the scale of the Web, and thus enables queries over distributed data sources, independent of format. Because SPARQL has no tie to a specific database format, it can be used to take advantage of “Web 2.0” data and mash it up with other Semantic Web resources. Furthermore, because disparate data sources may not have the same ‘shape’ or share the same properties, SPARQL is designed to query non-uniform data. The SPARQL specification defines a query language and a protocol and works with the other core Semantic Web technologies from W3C: Resource Description Framework (RDF) for representing data; RDF Schema; Web Ontology Language (OWL) for building vocabularies; and Gleaning Resource Descriptions from Dialects of Languages (GRDDL), for automatically extracting Semantic Web data from documents. SPARQL also makes use of other W3C standards found in Web services implementations, such as Web Services Description Language (WSDL). http://www.w3.org/
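To give a feel for what such a query looks like in practice, here is a minimal sketch that runs a SPARQL SELECT over an RDF source from Python using the open-source rdflib library. rdflib and the FOAF vocabulary are our choices for illustration and are not part of the W3C announcement, and the data URL is a placeholder.

```python
# Minimal sketch: querying RDF data with SPARQL via the rdflib library.
# The source URL is a placeholder; any RDF data reachable by rdflib would do.
from rdflib import Graph

g = Graph()
g.parse("http://example.org/people.rdf")  # placeholder RDF source

query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?mbox
WHERE {
    ?person foaf:name ?name .
    OPTIONAL { ?person foaf:mbox ?mbox }
}
"""

# The same high-level query works regardless of how the data is stored,
# which is the point the announcement makes about SPARQL.
for row in g.query(query):
    print(row.name, row.mbox)
```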

MadCap Software Debuts MadCap Lingo & MadCap Analyzer

MadCap Software announced MadCap Lingo, an XML-based, integrated translation memory system and authoring tool, aimed at eliminating the need for file transfers in order to complete translation. Document components, such as tables of contents, topics, index keywords, concepts, glossaries, and variables, all remain intact throughout the translation and localization process, so there is never a need to recreate them. MadCap Lingo also is integrated with MadCap Flare and MadCap Blaze, and it is Unicode enabled to help documentation professionals deliver a consistent user experience in print, online, and in any language. MadCap Lingo is being announced in conjunction with the new MadCap Analyzer, software that proactively recommends documentation content and design improvements. MadCap Lingo works with MadCap Flare, the company’s native-XML authoring product, and MadCap Blaze, the native-XML tool for publishing long print documents, which will be generally available in early 2008. A user creates a MadCap Lingo project to access the source content in a Flare or Blaze project via a shared file structure. Working through Lingo’s interface, the user accesses and translates the content. Because the content never actually leaves the structure of the original Flare or Blaze project, all the content and formatting is preserved in the translated version. Once a project is translated, it is opened in either Flare or Blaze, which generates the output and facilitates publishing. At the front end of the process, Flare and Blaze can import a range of document types to create the source content. Following translation, the products provide single-source delivery to multiple formats online and off, including the Internet, intranets, CDs, and print. MadCap Lingo is priced at $2,199 per license, but is available at an introductory price of $899 for a limited time. MadCap Lingo also is available on a subscription basis for $649 per year. Fees for support start at $449 per year. http://www.madcapsoftware.com/
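For readers new to the concept, a translation memory is at heart a store of previously translated source/target segment pairs that is checked for exact or fuzzy matches before anything is retranslated. The sketch below is a deliberately simplified, generic Python illustration of that lookup idea; it says nothing about how MadCap Lingo itself is implemented.

```python
# Minimal sketch of the idea behind a translation memory (TM): store
# source -> target segment pairs and reuse the closest match. Generic
# illustration only, not a description of MadCap Lingo's internals.
from difflib import SequenceMatcher

tm = {
    "Click Save to keep your changes.": "Cliquez sur Enregistrer pour conserver vos modifications.",
    "The file could not be found.": "Le fichier est introuvable.",
}

def lookup(segment, threshold=0.8):
    """Return (translation, score) for the best TM match, or None below threshold."""
    best_source, best_score = None, 0.0
    for source in tm:
        score = SequenceMatcher(None, segment, source).ratio()
        if score > best_score:
            best_source, best_score = source, score
    return (tm[best_source], best_score) if best_score >= threshold else None

print(lookup("Click Save to keep your changes."))   # exact match, score 1.0
print(lookup("Click Save to store your changes."))  # fuzzy match above threshold
```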

SiberSafe Hosted XML CMS Service Now Available On-Demand

SiberLogic announced SiberSafe On-Demand, a monthly subscription approach to XML content management for technical documentation teams who are looking for significant efficiency gains in producing long-lived, complex, evolving content. SiberSafe On-Demand delivers full SiberSafe functionality as an ASP service in a secure data center. Each team has full access/administrative rights to their server for system administration and configuration. SiberSafe On-Demand also includes daily content backups and SiberLogic’s technical support service. SiberSafe On-Demand’s “out of the box” configuration offers your choice of DTD (DITA, DocBook, or MIL-STD-2361) with sample templates and stylesheets. Also included are SiberSafe Communicator (the company’s XML authoring tool) and its integrated publishing tool. Alternatively, you can continue to use your own editor, such as XMetaL, Epic, or FrameMaker, or your own publishing tools. SiberSafe On-Demand costs $799 per month for the first pair of users (one author and one reviewer) and as little as $275 per user monthly for 10+ users. There are no additional upfront costs. Anyone who signs up for SiberSafe On-Demand before the end of January 2008 will receive access for one additional author free of charge for the first year. http://www.siberlogic.com/

In.vision and Trisoft Announce Partnership to Boost Technical Publishing

In.vision Research and Trisoft have announced a partnership to integrate their products to meet the needs of content management and publishing for a variety of industries including telecom, high-tech, software, mechanical equipment and automotive. The In.vision and Trisoft partnership will use In.vision’s Xpress Author for Microsoft Word add-in to allow non-technical writers to create XML content in Word. When the non-technical writer saves the document into the Trisoft InfoShare Content Management System, technical writers will be able to incorporate the content into their documents with ease, and continue with the various review, translation, and publishing processes that InfoShare provides. http://www.tri-soft.com, http://www.invisionresearch.com

Understanding Globalization Standards: Gilbane Boston Session Summary

The Globalization Track’s “Understanding the Globalization Standards Landscape” session provided a trio of experts to content management professionals wading through the industry’s “alphabet soup” of authoring, translation, and integration standards. Moderator Kaija Poysti deftly led the audience on a road trip through a multi-dimensional standards landscape with more than a few controversial roadblocks.
The mission was to understand how a standards-driven strategy affects customer experience, to provide expert guidance on which standards really matter, and to offer take-away advice on what to ask when evaluating solutions. Panelists Don DePalma from Common Sense Advisory (CSA), Andrew Draheim from Dig-IT!, and Serge Gladkoff from GALA delivered on the mission and then some, with commentary on which standards are practical, which are simply theoretical, and, most importantly, which have a positive impact when adopted. Highlights:

    • On a “standards reality check”: “You have no choice on some; some are about good hygiene, but are little used; and others are not ready for prime time in their current form. However, the code and content ecosystems definitely need an injection of globalization DNA.” Don DePalma, CSA.
    • On standards benefits: “Adoption can decrease the internal cost of doing business, decrease typical business risks, facilitate business interactions, increase the value of services to clients, save on R&D and business development, and save on internal personnel training. However, there are too many private standards and too few generally adopted public standards. Standards are notoriously difficult to develop, and upon completion they compete; be warned, though, that the “winning” standards are not always the best ones.” Serge Gladkoff, GALA Standards Committee Chair.
    • On synergies between content and translation management: “When these technologies work together, it streamlines processes, reduces duplication and errors, and makes publishing easier. Which standards will be around tomorrow? Take a look at Translation Memory eXchange, Segmentation Rules eXchange, XML Localisation Interchange File Format (XLIFF), and TermBase eXchange.” Andrew Draheim, Dig-IT!.

Many thanks to our panel for guiding our audience through the globalization standards landscape with candor and real-world advice.
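As a brief postscript on the standards the panel mentioned, XLIFF is simply an XML vocabulary for exchanging translatable segments, so reading it takes only a few lines of general-purpose code. The sketch below assumes an XLIFF 1.2 file and uses only the Python standard library; the file name and content are illustrative placeholders.

```python
# Minimal sketch: reading source/target pairs from an XLIFF 1.2 file.
# "manual.xlf" is an illustrative placeholder for an exported translation file.
import xml.etree.ElementTree as ET

XLIFF = "{urn:oasis:names:tc:xliff:document:1.2}"

tree = ET.parse("manual.xlf")
for unit in tree.getroot().iter(XLIFF + "trans-unit"):
    source = unit.findtext(XLIFF + "source")
    target = unit.findtext(XLIFF + "target")
    print(unit.get("id"), source, "->", target)
```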
