The Gilbane Advisor

Curated for content, computing, data, information, and digital experience professionals


Semantic Technology: Sharing a Large Market Space

It is always interesting to talk shop with the experts in a new technology arena. My interview with Luca Scagliarini, VP of Strategy and Business Development for Expert System, and Brooke Aker, CEO of Expert System USA was no exception. They had been digesting my research on Semantic Software Technologies and last week we had a discussion about what is in the Gilbane report.

When asked if they were surprised by anything in my coverage of the market, the simple answer was “not really, nothing we did not already know.” The longer answer related to the presentation of our research illustrating the scope and depth of the marketplace. These two veterans of the semantic industry admitted that the number of players and applications, and the breadth of semantic software categories, is impressive when viewed in one report. Mr. Scagliarini commented on the huge amount of potential still to be explored by vendors and users.

Our conversation then focused on where we think the industry is headed, and they emphasized that this is still an early-stage, evolving area. Both acknowledged the need for simplification of products to ease their adoption. It must be straightforward for buyers to understand what they are licensing and the value they can expect for the price they pay; implementation, packaging and complementary services also need to be easily understood.

Along the lines of simplicity, they emphasized the specialized nature of most of the successful semantic software applications, noting that these are not coming from the largest software companies. State-of-the-art tools are being commercialized and deployed for highly refined applications out of companies with a small footprint of experienced experts.

Expert System knows about the need for expertise in such areas as ontologies, search, and computational linguistic applications. For years they have been cultivating a team of people for their development and support operations. It has not always been easy to find these competencies, especially right out of academia. Aker and Scagliarini pointed out the need for a lot of pragmatism, coupled with subject expertise, to apply semantic tools for optimal business outcomes. It was hard in the early years for them to find people who could leverage their academic research experiences for a corporate mission.

Human resource barriers have eased in recent years as younger people who have grown up with a variety of computing technologies seem to grasp and understand the potential for semantic software tools more quickly.

Expert System itself is gaining traction in large enterprises that have segmented groups within IT that are dedicated to “learning” applications, and formalized ways of experimenting with, testing and evaluating new technologies. When they become experts in tool use, they are much better at proving value and making the right decisions about how and when to apply the software.

Having made good strides in the energy, life sciences, manufacturing and homeland security vertical markets, Expert System is expanding its presence with the Cogito product line in other government agencies and in publishing. The executives reminded me that they have semantic nets built out in Italian, Arabic and German, as well as English. This is unique among the community of semantic search companies and will position them for some interesting opportunities where other companies cannot compete.

I enjoyed listening and exchanging commentary about the semantic software technology field. However, Expert System and Gilbane both know that the semantic space is complex and they are sharing a varied landscape with a lot of companies competing for a strong position in a young industry. They have a significant share already.

For more about Expert System and the release of this sponsored research you can view their recent Press Release.

Open Text Expands Solutions for the Global Legal Market

Open Text Corporation, the provider of enterprise content management (ECM) software, today announced that it has expanded its solutions for the global legal market, including key integrations between Open Text Document Management, eDOCS Edition (eDOCS DM) and Open Text Social Workplace, available this fall. First released in the summer of 2009, Open Text Social Workplace supports a team’s ability to form, organize and collaborate on projects. Built to be flexibly deployed either standalone or as part of another solution, Open Text Social Workplace will integrate with Open Text eDOCS, allowing users within law firms to collaborate on documents and matters stored and governed within eDOCS. The integration includes new microblogging and instant messaging features and respects current permissions and governance rules.

Open Text Document Management, eDOCS Edition helps eliminate inefficiencies caused by an inability to manage documents, as well as the “islands of information” prevalent in many organizations. It helps control document-based knowledge assets by enabling users to capture, organize, locate and share business content in an integrated environment. With the release of eDOCS DM 5.3, full Windows 7 and Microsoft® Office® 2010 support and updated integrations are available. This release also adds platform support for 32- and 64-bit versions of Windows Server 2008, SQL Server 2008, and Windows Communication Foundation (WCF), while deployment costs are lowered through native Microsoft Windows Installer (MSI) support.

eDOCS DM customers can now use the Open Text flagship records management offering with native integration, search and access from within the native eDOCS DM user interface. Records management of physical items, electronic records and email, as well as structured data from systems such as Microsoft SharePoint® 2010 and SAP, is available in this release. Support for Apple iPad is now available for WirelessDMS for eDOCS, allowing users to access content within eDOCS DM from the iPad. http://www.opentext.com/

SDL and Leximation Team Up for DITA Solution with Adobe FrameMaker

SDL, the provider of Global Information Management solutions, announced the availability of an integration between SDL Trisoft and Leximation DITA-FMx, the DITA plugin for Adobe FrameMaker. By providing a bridge between SDL Trisoft and DITA-FMx, SDL enables customers to use its advanced Component Content Management capabilities with Adobe FrameMaker. The new integration combines technologies for authoring and managing structured content. SDL and Leximation share several major clients that want a joint solution. www.sdl.com www.leximation.com

DotNetNuke Introduces Enterprise Edition

DotNetNuke Corp., the Web Content Management Platform company, announced the release of the DotNetNuke Enterprise Edition. The Enterprise Edition includes the new DotNetNuke Content Staging feature, which allows users to edit and approve content on a staging server prior to pushing the site to production. In addition, the DotNetNuke Web CMS includes a Content Localization feature that enables management of multi-language web sites. The Content Localization feature is included in all Editions of the platform, including the Enterprise, Professional and Community Editions.

Exclusive to the Enterprise Edition, Content Staging allows users to create a separate staging server where all intended production web site changes can be implemented and tested before being published publicly. It is designed for organizations with many content contributors and tight restrictions on web site content publishing and review. The system compares the staging server configuration to the production server, identifies missing components, and provides a detailed view of all planned changes. Other features include an audit tool that creates a record of all publishing events; a “white list” in which users can define which modules should push both their settings and content during publishing and which should push only the module settings; and a secure publishing function that allows users to easily push their site from staging to production.

All Editions of DotNetNuke, including the Enterprise, Professional and Community Editions, now feature a new Content Localization capability that helps users manage translated versions of their web pages. This feature includes management and configuration-mapping tools to keep translated pages synchronized across a web site. With the introduction of the Enterprise Edition, DotNetNuke will no longer offer the Elite Edition, which will now comprise the Professional Edition plus Elite Support. The new Elite Support option, available for the Professional and Enterprise Editions, features extended support hours, faster guaranteed support, priority management of support tickets, installation and upgrade assistance, and source code access to the proprietary Professional or Enterprise Edition modules. http://www.dotnetnuke.com

Federated Media Acquires Technology Suite from TextDigger

Federated Media Publishing, a “next-generation” media company, announced the acquisition of a platform for semantic and linguistic profiling of web-based content from TextDigger, a San Jose-based semantic search startup. FM provides a full suite of media and marketing services for brand advertisers that depends heavily on a proprietary media and marketing technology platform. TextDigger’s technology complements FM’s platform with a set of semantic solutions for content tagging, filtering and clustering, as well as related tools that enhance the user experience, ad targeting, and semantic search engine optimization for a site. TextDigger will continue its search business, and all of TextDigger’s customers will continue to be supported by either FM or TextDigger, depending on the type of project or service. www.federatedmedia.net www.textdigger.com

Data Mining for Energy Independence

Mining content for facts and information relationships is a focal point of many semantic technologies. Among the text analytics tools are those for mining content in order to process it for further analysis and understanding, and indexing for semantic search. This will move enterprise search to a new level of research possibilities.

Research for a forthcoming Gilbane report on semantic software technologies turned up numerous applications in the life sciences and publishing. Neither semantic technologies nor text mining is mentioned in the recent New York Times article “Rare Sharing of Data Leads to Progress on Alzheimer’s,” but I am pretty certain that these technologies played some role in enabling scientists to discover new data relationships and synthesize new ideas about Alzheimer’s biomarkers. The sheer volume of data from all the referenced sources demands computational methods to distill and analyze it.

One vertical industry poised for growth of semantic technologies is the energy field. It is a special interest of mine because it is an area in which I worked as a subject indexer and searcher early in my career. Beginning with the first energy crisis, the oil embargo of the mid-1970s, I worked in research organizations involved in both fossil fuel exploration and production and alternative energy development.

A hallmark of technical exploratory and discovery work is the time gap between breakthroughs; there are often significant plateaus between major developments. This happens when research reaches a point where an enabling technology is not available or commercially viable to move to the next milestone of development. I observed that the starting point in the quest for innovative energy technologies often began with decades-old research that stopped before commercialization.

Building on what we have already discovered, invented or learned is one key to success for many “new” breakthroughs. Looking at old research from a new perspective to lower costs or improve efficiency for such things as photovoltaic materials or electrochemical cells (batteries) is what excellent companies do.

How does this relate to semantic software technologies and data mining? We need to begin with content generated by research in the last century; much of it is only now being made electronic. Even so, most of the conversion from paper, or from micro formats such as microfiche, is to image formats. To make the full transition and enable data mining, content must be further processed through optical character recognition (OCR). This puts it into a form that can be semantically parsed, analyzed and explored for facts and new relationships among data elements.

Processing old materials is neither easy nor inexpensive. Government agencies, consortia, associations, and partnerships of various types of institutions often serve as a springboard for making legacy knowledge assets electronically available. A great first step would be for the DOE and some energy industry leaders to collaborate on this activity.

A future of potential man-made disasters, even when knowledge exists to prevent them, is not a foregone conclusion. Intellectually, we know that energy independence is prudent, and economically and socially mandatory for all types of stability. We have decades of information and knowledge assets in energy-related fields (e.g. chemistry, materials science, geology, and engineering) that semantic technologies can leverage to move us toward a future of energy independence. Finding nuggets of old information in unexpected relationships to content from previously disconnected sources is a role for semantic search that can stimulate new ideas and technical research.

A beginning is a serious program of content conversion capped off with use of semantic search tools to aid the process of discovery and development. It is high time to put our knowledge to work with state-of-the-art semantic software tools and by committing human and collaborative resources to the effort. Coupling our knowledge assets of the past with the ingenuity of the present we can achieve energy advances using semantic technologies already embraced by the life sciences.

IBM Acquires Datacap

IBM announced the acquisition of Datacap Inc., a privately held company based in Tarrytown, NY. Datacap is a provider of software that enables organizations to transform the way they capture, manage and automate the flow of business information to improve business processes, reduce paper costs and manual errors, and meet compliance mandates. Financial terms were not disclosed. The acquisition strengthens IBM’s ability to help organizations digitize, manage and automate their information assets, particularly in paper-intensive industries such as healthcare, insurance, government and finance. Additionally, regulations such as HIPAA and Sarbanes-Oxley have demanded new standards, and legislation is now encouraging the adoption of new records management solutions, including scanning and capture, to increase accuracy, lower costs and speed business processes. http://ibm.com http://www.datacap.com/

Adobe to Acquire Day Software

Yesterday it was announced that another CMS poster child of the late ’90s is to be acquired: Adobe Systems Incorporated and Day Software Holding AG have entered into a definitive agreement for Adobe to acquire all of the publicly held registered shares of Day Software, in a transaction worth approximately US$240 million.

This follows Adobe’s acquisition of Omniture late last year and clearly demonstrates their intent to enter the web experience management (WEM) marketplace that we cover with interest here at Gilbane, as we anticipate they will bring together the audience insight gained through Omniture’s web analytics and Day’s CRX content platform.

This will presumably add momentum to Day’s own move into the WEM space with their recent product marketing strategy, as they have reinvented themselves to be closer to the marketer, with recent attention paid to functionality such as personalization, analytics and variant testing, and to messaging around using their repository for marketing campaigns and asset management. We await firm integration plans with interest.

In addition, Day are longtime advocates of CMS repository standards (JCR and CMIS), something that is also close to our heart at Gilbane. This announcement has also sent tremors through the open source community, as they wonder about Adobe’s commitment to the Apache projects, like Sling and Jackrabbit, that Day have been so supportive of.

Whilst Adobe and Day have been very quick to state that they will maintain Day’s commitment to these community projects, it’s hard not to think that this commitment inside Day is cultural, and we wonder whether it can realistically be maintained as the acquisition matures and Day is brought into the fold.

The acquisition also raises questions about what this means for Alfresco’s two-year relationship with Adobe, which runs pretty deep with OEM integration into Adobe LiveCycle; Erik Larson (Senior Director of Product Management at Adobe) has publicly stated the intention to integrate Day and LiveCycle to create a “full suite of enterprise technologies”. It will be important for the Adobe customers that have adopted the Alfresco-based integration to understand how this will affect them going forward.

One other area that I am sure my colleagues here at Gilbane in the Publishing Technologies practice will be watching with interest is the impact this will have on Adobe’s digital publishing offering.  

As we’ve seen with previous acquisitions, it’s best to be cautious about what the future might hold. From a WEM product strategy perspective, bringing Omniture and Day together makes a great deal of sense to us. The commitment to standards and open source projects is probably safe for now; it has been part of the Day identity and value proposition for as long as I can remember. One of the most exciting things could be what this acquisition means for digital publishing.

Let’s wait and see… 



© 2025 The Gilbane Advisor
