
Category: Enterprise search & search technology

Research, analysis, and news about enterprise search and search markets, technologies, practices, and strategies, such as semantic search, intranet and workplace collaboration, ecommerce, and other applications.

Before we consolidated our blogs, industry veteran Lynda Moulton authored our popular enterprise search blog. This category collects all of her posts along with other enterprise search news and analysis, so Lynda’s loyal readers can find everything in one place.

For older, long-form reports, papers, and research on these topics, see our Resources page.

Focused on Unifying Content to Reduce Information Overload

A theme running through the sessions I attended at Enterprise Search Summit and KMWorld 2010 in Washington, DC last month was the diversity of ways in which organizations are focused on getting answers to stakeholders more quickly. Enterprises deploying content technologies, all with enterprise search as the end game, seek to narrow search results accurately to retrieve and display the best and most relevant content.

Whether the process is referred to as unified indexing, federating content, or information integration, each term reflects a similar focus among the vendors I took time to engage with at the conference. Each vendor is positioned to solve a different information retrieval problem, and each was selected to underscore what I have tried to express in my recent Gilbane Beacon, Establishing a Successful Enterprise Search Program: Five Best Practices, namely the need to first establish a strategic business need. The best practices include understanding how existing technologies and content structures function in the enterprise before settling on any one product or strategy. The essential activity of conducting a proof of concept (POC) or pilot project to confirm a product’s suitability for the targeted business challenge is clearly mandated.

These products, in alphabetical order, are all notable for unique solutions tailored to different audiences and business requirements. Each embodies an approach to unifying enterprise content for a particular business function:

Access Innovations (AI) was at KMWorld to demonstrate its aptly named product suite, Data Harmony. AI’s products cover a continuum of tools for building and maintaining controlled vocabularies (also known as taxonomies and thesauri), adding content metadata through processes tightly integrated with the corresponding vocabularies, and supporting search and navigation. Its vocabulary and content management tools can be layered to integrate with existing CMS and enterprise search systems.
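For readers who have not worked with controlled vocabularies, the sketch below illustrates the basic structure such tools manage: each preferred term carries broader (BT), narrower (NT), and “used for” (UF, i.e., synonym) relations that a tagging process can consult. The terms and relations here are invented for illustration, not Data Harmony’s actual model.

```python
# A hypothetical thesaurus fragment: each preferred term carries
# broader (BT), narrower (NT), and synonym (UF, "used for") relations.
thesaurus = {
    "enterprise search": {
        "BT": ["information retrieval"],
        "NT": ["federated search", "intranet search"],
        "UF": ["corporate search"],
    },
    "federated search": {"BT": ["enterprise search"], "NT": [], "UF": []},
}

def tag_document(text: str) -> set:
    """Tag a document with every preferred term whose label or synonym appears."""
    lowered = text.lower()
    tags = set()
    for term, rels in thesaurus.items():
        if term in lowered or any(syn in lowered for syn in rels["UF"]):
            tags.add(term)
    return tags

print(tag_document("Our corporate search project covers federated search."))
# e.g. {'enterprise search', 'federated search'} (set order may vary)
```

Layering works because the same vocabulary that drives tagging can later drive search navigation and query expansion.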

Attivio, provider of a platform solution known as the Active Intelligence Engine (AIE), has developers who specialize in open source tools for content retrieval. AIE is a platform for enterprises seeking to unify structured and unstructured content from across the enterprise and from the web. By leveraging open source components, Attivio gives its customers a platform that can be extended to enhance search for a particular solution, including bringing Web 2.0 social content together with enterprise content for further business intelligence analysis.

Since I was first introduced to Coveo in 2007, the company has steadily marched into a dominant position across vertical industries with its efficiently packaged and reasonably priced enterprise search solutions. Their customers are always enthusiastic presenters at KMWorld, representing implementers who seek to make enterprise search available to users quickly and with a minimum of fuss. This year, Shelley Norton from Children’s Hospital Boston did not disappoint, ticking off the steps in an efficient selection, implementation, and deployment process for getting enterprise search up and running smoothly to deliver trustworthy and accurate results to the hospital’s constituents. I always value and respect customer storytelling.

Darwin Awareness Engine was named the KMWorld Promise Award winner for 2010. Since their founder is local to our home base and a frequent participant in Boston KM Forum (KMF) meetings, we are pretty happy about their official arrival on the scene and the recognition. It was just a year ago that they presented the prototype at the KMF, where our members were excited to see the tool exposing layers of news feeds to home in on topics of interest and see what was aggregated and connected in real time. Darwin’s content presentation is unique in that the display reveals relationships and patterns among topics in the Web 2.0 sphere that become suddenly apparent through their visual connections in the display architecture. The public views are only an example of what a very large enterprise might reveal about its own internal communications through social tools within the organization.

The newest arrival, RAMP, was introduced to me by Nate Treloar in the closing hours of KMWorld. Nate came to this start-up from Microsoft and the FAST group and is excited about the new venture. Neither exhibiting nor presenting, he was eager to reach out to analysts and potential partners to share the RAMP vision for converting speech from audio and video feeds into reliable, searchable text. This would enable audio, video, and other content to be unified and finally searched by its “full text” on the Web in a single pass. Today we depend on contributors of non-text content to supply explicit metadata. Having long awaited excellence in speech-to-text indexing for search, I was “all ears” during our conversation and look forward to seeing more of RAMP at future meetings.

Whatever the strategic business need, the ability to deliver a view of information that is unified, cohesive and contextually understandable will be a winning outcome. With the Beacon as a checklist for your decision process, information integration is attainable by making the right software selection for your enterprise application.

Coherence and Augmentation: KM-Search Connection

This space is not normally used to comment on knowledge management (KM), one of my areas of consulting, but a recent conference gives me an opening to connect the dots between KM and search. Dave Snowden and Tom Stewart always have worthy commentary on KM and as keynote speakers they did not disappoint at KMWorld. It may seem a stretch but by taking a few of their thoughts out of context, I can synthesize a relationship between KM and search.

KMWorld, Enterprise Search Summit, SharePoint Symposium, and Taxonomy Boot Camp moved to Washington, D.C. for the 2010 fall conference earlier this month. I attended to teach a workshop on building a semantic platform and to participate in a panel discussion wrapping up the conference with two other analysts, Leslie Owen and Tony Byrne, with Jane Dysart moderating.

Comments from the first and last keynote speakers of the conference inspired my final panel comments, counseling attendees to lead by thoughtfully leveraging technology only to enhance knowledge. But there were other snippets that prompt me to link search and KM.

Tom Stewart’s talk, entitled Knowledge Driven Enterprises: Strategies & Future Focus, was couched in the context of achieving a “coherent” winning organization. He explained that reaching the coherence destination requires understanding different types of knowledge and how we need to behave to attain each type (e.g., “knowable complicated” knowledge calls for experts and research; “emergent complex” knowledge calls for leadership and “sense-making”).

Stewart describes successful organizations as those in which “the opportunities outside line up with the capabilities inside.” He explains that “companies who do manage to reestablish focus around an aligned set of key capabilities” use their “intellectual capital” to identify their intangible assets: human capability, structural capital, and customer capital. They build relationship capital from among these capabilities to create a coherent company. Although Stewart did not mention “search,” it is important to note that one means of identifying intangible assets is well-executed enterprise search with associated analytical tools.

Dave Snowden also referenced “coherence” (messy coherence), even as he spoke about how failures tend to be more teachable (memorable) than successes. If you follow Snowden, you know that he founded Cognitive Edge and has developed a model for applying cognitive learning to help build resilient organizations. He has taught complexity analysis and sense-making for many years, and his interest in human learning behaviors runs deep.

To follow the entire thread of Snowden’s presentation on “The Resilient Organization,” follow this link. I was particularly struck by his statement about the talk, “one of the most heart-felt I have given in recent years.” It was one of his best, but two particular comments bring me to the connection between KM and search.

Dave talked about technology as “cognitive augmentation,” its only truly useful function. He also puts forth what he calls the “three Golden rules: Use of distributed cognition, wisdom but not foolishness of crowds; finely grained objects, information and organizational; and disintermediation, putting decision makers in direct contact with raw data.”

Taking these fragments of Snowden’s talk, a technique he seems to encourage, I put forth a synthesized view of how knowledge and search technologies need to be married for consequential gain.

We live and work in a highly chaotic information soup, one in which we are fed a steady diet of fragments (links, tweets, analyzed content) from which we are challenged as thinkers to derive coherence. The best knowledge practitioners will leverage this messiness by detecting weak signals, seeking out more fragments, and coupling them thoughtfully with “raw data” to synthesize new innovations, whether practices, inventions, or policies. Managing shifting technologies, changing information inputs, and learning from failures (our own, our institution’s, and others’) contributes to building a resilient organization.

So where does “search” come in? Search is a human operation and begins with the workforce. Going back to Stewart, who commented on the need to recognize different kinds of knowledge, I posit that different kinds of knowledge demand different kinds of search. This is precisely what so many “enterprise search” initiatives fail to deliver. Implementers fail to account for all the different kinds of search: search for facts, search for expertise, search for specific artifacts, search for trends, search for missing data, and so on.

When Dave Snowden states that “all of your workforce is a human scanner,” this could also imply the need for multiple, co-occurring search initiatives. Just as each workforce member brings a different perspective and capability to sensory information gathering, so too must enterprise search be set up to accommodate all the different kinds of knowledge gathering. And when Snowden notes that “There are limits to semantic technologies: Language is constantly changing so there is a requirement for constant tuning to sustain the same level of good results,” he is reminding us that technology is only good for cognitive augmentation. Technology is not “plug ‘n play”; you cannot simply install it and reap magical cognitive insights. It requires constant tuning to adapt to new kinds of knowledge.

The point is one I have made before: it is the human connection, the human scanner, and human understanding of all the kinds of knowledge we need that bring coherence to an organization. The better we balance these human capabilities, the more resilient we will be, and the more skilled at figuring out which kinds of search technologies really make sense for today. Tomorrow, we had better be ready for another tool for new fragments and new knowledge synthesis.

Lucene Open Source Community Commits to a Future in Search

It has been nearly two years since I commented on an article in InformationWeek, Open Source, Its Time Has Come (Nov. 2008). My main point was the need for deep expertise to execute enterprise search really well, and I predicted the growth of service companies with that expertise, particularly for open source search. Not long after, Lucid Imagination was launched, focused on building and supporting solutions based on Lucene and its more turnkey version, Solr.

It has not taken long for Lucid Imagination (LI) to take charge of the Lucene/Solr community of practice (CoP), and to launch its own platform built on Solr, Lucidworks Enterprise. Open source depends on deep and sustained collaboration; LI stepped into the breach to ensure that the hundreds of contributors, users and committers have a forum. I am pretty committed to CoPs myself and know that nurturing a community for the long haul takes dedicated leadership. In this case it is undoubtedly enlightened self-interest that is driving LI. They are poised to become the strongest presence for driving continuous improvements to open source search, with Apache Lucene as the foundation.
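For readers who have never touched Solr, the sketch below shows the shape of the developer experience that platforms like Lucidworks Enterprise build upon: index documents over HTTP, then query a relevance-ranked result set. It assumes a recent, locally running Solr instance with a core named docs and a schema containing title and body text fields; the core name and fields are hypothetical.

```python
import requests

SOLR = "http://localhost:8983/solr/docs"  # hypothetical local core

# Add documents through Solr's JSON update handler and commit immediately.
docs = [
    {"id": "1", "title": "Search strategy",
     "body": "Establish a strategic business need first."},
    {"id": "2", "title": "Open source search",
     "body": "Lucene and Solr power many enterprise deployments."},
]
requests.post(f"{SOLR}/update?commit=true", json=docs).raise_for_status()

# Query the select handler; Solr returns relevance-ranked results as JSON.
resp = requests.get(f"{SOLR}/select", params={"q": "body:enterprise"})
for hit in resp.json()["response"]["docs"]:
    print(hit["id"], hit["title"])
```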

Two weeks ago LI hosted Lucene Revolution, the first such conference in the US. It was attended by over 300 people in Boston, October 7-8, and I can report that this CoP is vibrant and enthusiastic. Moderated by Steve Arnold, the program ran smoothly, with excellent sessions. Those I attended reflected a respectful exchange of opinions and ideas about tools, methods, practices, and priorities. While there were allusions to vigorous debate among committers about priorities for code changes and upgrades, the mood was collaborative in spirit and tinged with humor, always a good way to operate when emotions and convictions are on stage.

From my 12 pages of notes come observations about the three principal categories of sessions:

  1. Discussions, debates and show-cases for significant changes or calls for changes to the code
  2. Case studies based on enterprise search applications and experiences
  3. Case studies based on the use of Lucene and Solr embedded in commercial applications

Since the first category was more technical in nature, I leave the reader with my simplistic conclusions: core Apache Lucene and Solr will continue to evolve in a robust and aggressive progression. There are sufficient committers to sustain serious contributions, and many of those driving the charge have decades of search experience, having cut their teeth on the more difficult problems of implementing enterprise solutions. In announcing Lucidworks Enterprise, LI is clearly bidding to become a new force in the enterprise search market.

New and sustained build-outs of Lucene/Solr will be challenged by developers with ideas for diverging architectures, or “forking” the code, on which Eric Gries, LI CEO, commented in the final panel. He predicted that forking will probably be driven by the need to solve specific search problems that current code does not accommodate. This will probably be more of a challenge for the spinoffs than for the core Lucene developers, and most forks will ultimately fail under the difficulty of sustaining separate versions.

Enterprise search cases reflected those for whom commercial turnkey applications will not or cannot easily be selected; for them, open source makes sense. From Red Hat, LI’s counterpart in the Linux world, come these earlier observations about why enterprises should embrace open source solutions: in short, the sorry state of quality assurance and code control in commercial products. Add to that the cost of services to install, implement, and customize commercial search products. For many institutions, the argument is to go with open source when there is an imperative for major customization.

This appears to be the case for two types of enterprises that were featured on the program: educational institutions and government agencies. Both have procurement issues when it comes to making large capital expenditures. For them it is easier to begin with something free, like open source software, and then make incremental improvements and customizations over time. Labor and services are cost variables that can be distributed more creatively using multiple funding options. Featured on the program were the Smithsonian, Adhere Solutions (doing systems integration work for a number of government agencies), MITRE (a federally funded research laboratory), the University of Michigan, and Yale. Cisco, a noteworthy commercial enterprise putting Lucene/Solr to work, also presented.

The third category of presenters was, by far, the largest contingent of open source search adopters, producers of applications that leverage Lucene and Solr (and other open source software) into their offerings. They are solidly entrenched because they are diligent committers, and share in this community of like-minded practitioners who serve as an extended enterprise of technical resources that keeps their overhead low. I can imagine the attractiveness of a lean business that can run with an open source foundation, and operates in a highly agile mode. This must be enticing and exciting for developers who wilt at the idea of working in a constrained environment with layers of management and political maneuvering.

Among the companies building applications on Lucene that presented were: Access Innovations, Twitter, LinkedIn, Acquia, RivetLogic and Salesforce.com. These stand out as relatively mature adopters with traction in the marketplace. There were also companies present that contribute their value through Lucene/Solr partnerships in which their products or tools are complementary including: Basis Technology, Documill, and Loggly.

Links to presentations by the organizations mentioned above will take you to conference highlights. Some will appeal to the technical reader, for there was a lot of code sharing and technical tips in the slides. The diversity and scale of applications being supported by Lucene and Solr were impressive. Lucid Imagination and the speakers did a great job of illustrating why and how open source has a serious future in enterprise search. This was a confidence-building exercise for the community.

Two sentiments at the end summed it up for me. On the technical front, Eric Gries observed that it is usually clear what needs to be core (to the code) and what does not belong; in between lies a lot of gray area that will fuel constant debate in the community. For the user community, Charlie Hull of Flax opined that customers don’t care whether (the code) is in the open source core or in the special “secret sauce” application, as long as the product does what they want.

What an Analyst Needs to Do What We Do

Semantic Software Technologies: Landscape of High Value Applications for the Enterprise is now posted for you to download for free; please do so. The topic is one I’ve followed for many years and was convinced that the information about it needed to be captured in a single study as the number of players and technologies had expanded beyond my capacity for mental organization.

As a librarian, I found it useful to employ a genre of publications known as the “bibliography of bibliographies” when starting a research project on any given topic. As an analyst, gathering baskets of emails, reports, and publications on the industry I follow serves a similar purpose. Without filtering and sifting all this content, it had become overwhelming to understand and comment on the individual components of the semantic landscape.

Relating to the process of report development, it is important for readers to understand how analysts do research and review products and companies. Our first goal is to avoid bias toward one vendor or another. Finding users of products and understanding the basis for their use and experiences is paramount in the research and discovery process. With software as complex as semantic applications, we do not have the luxury of routine hands-on experience, testing real applications of dozens of products for comparison.

The most desirable contacts for learning about any product are customers with direct experience using the application. Sometimes we gain access to customers through vendor introductions but we also try very hard to get users to speak to us through surveys and interviews, often anonymously so that they do not jeopardize their relationship with a vendor. We want these discussions to be frank.

To get a complete picture of any product, I go through numerous iterations of looking at a company through its own printed and online information, published independent reviews and analysis, customer comments and direct interviews with employees, users, former users, etc. Finally, I like to share what I have learned with vendors themselves to validate conclusions and give them an opportunity to correct facts or clarify product usage and market positioning.

One of the most rewarding, interesting and productive aspects of research in a relatively young industry like semantic technologies is having direct access to innovators and seminal thinkers. Communicating with pioneers of new software who are seeking the best way to package, deploy and commercialize their offerings is exciting. There are many more potential products than those that actually find commercial success, but the process for getting from idea to buyer adoption is always a story worth hearing and from which to learn.

I receive direct and indirect comments from readers about this blog. What I don’t see enough of is posted commentary about the content. Perhaps you don’t want to share your thoughts publicly but any experiences or ideas that you want to share with me are welcomed. You’ll find my direct email contact information through Gilbane.com and you can reach me on Twitter at lwmtech. My research depends on getting input from all types of users and developers of content software applications, so, please raise your hand and comment or volunteer to talk.

Sophia Launches Sophia Search for Intelligent Enterprise Search and Contextual Discovery

Sophia, the provider of contextually aware enterprise search solutions, announced Sophia Search, a new search solution that uses a semiotics-based linguistic model to identify intrinsic terms, phrases, and relationships within unstructured content so that it can be recovered, consolidated, and leveraged. Sophia Search is designed to minimize compliance risk and reduce the cost of storing and managing enterprise information. It delivers a “three-dimensional” solution to discover, consolidate, and optimize enterprise data, regardless of data type or domain. It helps organizations manage and analyze critical information by discovering the themes and intrinsic relationships behind their information, without taxonomies or ontologies, so that more relevant information may be discovered. By identifying both duplicates and near duplicates, Sophia Search allows organizations to consolidate information effectively and minimize storage and management costs. The product features a patented Contextual Discovery Engine (CDE) based on the linguistic model of semiotics, the science of how humans understand the meaning of information in context. Sophia Search is available now to both customers and partners. Pricing starts at $30,000. http://www.sophiasearch.com/
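Sophia does not disclose the mechanics behind its duplicate detection, but as a generic illustration of how near-duplicate identification commonly works, here is a minimal sketch using word shingling and Jaccard similarity; the documents and the 0.5 threshold are invented for the example.

```python
def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles (overlapping word windows) in text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: intersection size over union size."""
    return len(a & b) / len(a | b) if a | b else 0.0

doc1 = "The quarterly report was filed on Monday by the finance team"
doc2 = "The quarterly report was filed Monday by the finance team"

# Near-duplicates share most of their shingles; a threshold flags them.
sim = jaccard(shingles(doc1), shingles(doc2))
print(f"similarity = {sim:.2f}",
      "near-duplicate" if sim > 0.5 else "distinct")
```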

Leveraging Two Decades of Computational Linguistics for Semantic Search

Over the past three months I have had the pleasure of speaking with Kathleen Dahlgren, founder of Cognition, several times. I first learned about Cognition at the Boston Infonortics Search Engines meeting in 2009. That introduction led me to a closer look several months later when researching auto-categorization software. I was impressed with the comprehensive English language semantic net they had doggedly built over a 20+ year period.

A semantic net is a map of language that explicitly defines the many relationships among words and phrases. It might be as simple as a map of a small geographical locale and all the named entities within it, or as complex as the entire base language of English, with every concept mapped to show all the ways any one term relates to other terms, as illustrated in this tiny subset. Dr. Dahlgren and her team are among the few who have built a comprehensive semantic net for English.
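To make the structure concrete, here is a deliberately tiny, hypothetical sketch of a semantic net as a labeled graph, along with the kind of query expansion it enables; Cognition’s actual net encodes far richer relation types and word-sense distinctions.

```python
# A tiny, hypothetical semantic net: nodes are terms, edges are labeled
# relations. Real nets cover a whole language with many relation types.
semantic_net = {
    ("hospital", "is-a"): ["institution", "building"],
    ("ward", "part-of"): ["hospital"],
    ("physician", "synonym"): ["doctor"],
    ("physician", "is-a"): ["professional"],
}

def related(term: str, relation: str) -> list:
    """Look up the terms linked to `term` by the given relation."""
    return semantic_net.get((term, relation), [])

# A semantic search engine can expand a query with synonyms and broader
# concepts instead of matching the literal string alone.
query = "physician"
expansion = [query] + related(query, "synonym") + related(query, "is-a")
print(expansion)  # ['physician', 'doctor', 'professional']
```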

In 2003, Dr. Dahlgren established Cognition as a software company to commercialize the semantic net, designing software to apply it to semantic search applications. As the Gilbane Group launched its new research on Semantic Software Technologies, Cognition signed on as a study co-sponsor, and we engaged in several discussions with them that rounded out their history in this new marketplace. It was illustrative of pioneering in any new software domain.

Early adopters are key contributors to any software development effort. It is notable that Cognition has attracted experts in fields as diverse as medical research, legal e-discovery, and Web semantic search, giving the company valuable feedback for its commercial development. In any highly technical discipline, it is challenging and exciting to find subject experts knowledgeable enough to contribute to product evolution, and Cognition is learning from client experts where the best opportunities for growth lie.

Recent interviews with Cognition executives, and those of other sponsors, gave me the opportunity to get their reactions to my conclusions about this industry. These were the more interesting thoughts that came from Cognition after they had reviewed the Gilbane report:

  • Feedback from current clients and attendees at 2010 conferences, where Dr. Dahlgren was a featured speaker, confirms escalating awareness of the field; she feels that “This is the year of Semantics.” It is catching the imagination of IT folks who understand the diverse and important business problems to which semantic technology can be applied.
  • In addition to a significant upswing in semantics applied in life sciences, publishing, law, and energy, Cognition sees specific opportunities for growth in risk assessment and risk management. Using semantics to detect signals, content salience, and measures of relevance is critical where the quantity of data and textual content is too voluminous for human filtering. There is not much evidence that financial services, banking, and insurance are embracing semantic technologies yet, but semantics could dramatically improve their business intelligence, and Cognition is well positioned to support them with its already tested tools.
  • Enterprise semantic search will begin to overcome the poor reputation that traditional “string search” has suffered. There is growing recognition among IT professionals that in the enterprise 80% of the queries are unique; these cannot be interpreted based on popularity or social commentary. Determining relevance or accuracy of retrieved results depends on the types of software algorithms that apply computational linguistics, not pattern matching or statistical models.

In Dr. Dahlgren’s view, there is no question that a team approach to deploying semantic enterprise search is required. This means that IT professionals will work side-by-side with subject matter experts, search experts and vocabulary specialists to gain the best advantage from semantic search engines.

The unique language of an enterprise content domain is as important as the software a company employs. The Cognition baseline semantic net, out of the box, will give reliable results, better than traditional string search engines. It delivers top performance, however, when enhanced with enterprise language: all the ways that subject experts talk about their topical domain, including jargon, acronyms, code phrases, and the like.

With elements of its software already embedded in some notable commercial applications like Bing, Cognition is positioned to deliver excellent semantic search for an enterprise. They are taking on opportunities in areas like risk management that have been slow to adopt semantic tools, delivering software together with the services and expertise to coach clients through the implementation, deployment, and maintenance essential to successful use. The enthusiasm Kathleen Dahlgren expressed to me about semantics confirms what I also heard from Cognition clients: they are confident that the technology, coupled with thoughtful guidance from Cognition’s support services, will be the true value-add for any enterprise semantic search application.

The free download of the Gilbane study and deep-dive on Cognition was announced on their Web site at this page.

Semantically Focused and Building on a Successful Customer Base

Dr. Phil Hastings and Dr. David Milward spoke with me in June, 2010, as I was completing the Gilbane report, Semantic Software Technologies: A Landscape of High Value Applications for the Enterprise. My interest in a conversation was stimulated by several months of discussions with customers of numerous semantic software companies. Having heard perspectives from early adopters of Linguamatics’ I2E and other semantic software applications, I wanted to get some comments from two key officers of Linguamatics about what I heard from the field. Dr. Milward is a founder and CTO, and Dr. Hastings is the Director of Business Development.

A company with sustained profitability for nearly ten years in the enterprise semantic market space has credibility. Reactions from a maturing company to what users have to say are interesting and carry weight in any industry. My lines of inquiry and the commentary from the Linguamatics officers centered on their own view of the market and their adoption experiences.

When asked about growth potential for the company outside of pharmaceuticals where Linguamatics already has high adoption and very enthusiastic users, Drs. Milward and Hastings asserted their ongoing principal focus in life sciences. They see a lot more potential in this market space, largely because of the vast amounts of unstructured content being generated, coupled with the very high-value problems that can be solved by text mining and semantically analyzing the data from those documents. Expanding their business further in the life sciences means that they will continue engaging in research projects with the academic community. It also means that Linguamatics semantic technology will be helping organizations solve problems related to healthcare and homeland security.

The wisdom of a measured and consistent approach comes through strongly when speaking with Linguamatics executives. They are highly focused and cite the pitfalls of trying to “do everything at once,” which would be the case if they were to pursue all markets overburdened with tons of unstructured content. While pharmaceutical terminology, a critical component of I2E, is complex and extensive, there are many aids to support it. The language of life sciences is in a constant state of being enriched through refinements to published thesauri and ontologies. However, in other industries with less technical language, Linguamatics can still provide important support to analyze content in the detection of signals and patterns of importance to intelligence and planning.

Much of the remainder of the interview centered on what I refer to as the “team competencies” of individuals who identify the need for any semantic software application; those are the people who select, implement and maintain it. When asked if this presents a challenge for Linguamatics or the market in general, Milward and Hastings acknowledged a learning curve and the need for a larger pool of experts for adoption. This is a professional growth opportunity for informatics and library science people. These professionals are often the first group to identify Linguamatics as a potential solutions provider for semantically challenging problems, leading business stakeholders to the company. They are also good advocates for selling the concept to management and explaining the strong benefits of semantic technology when it is applied to elicit value from otherwise under-leveraged content.

One Linguamatics core operating principle came through clearly when talking about the personnel issues of using I2E: the necessity of working closely with their customers. This means making sure that expectations about system requirements are correct, that examples of deployments and “what the footprint might look like” are given, and that best practices for implementations are shared. They want their customers to feel part of a community of adopters, not alone in the use of this pioneering technology. Building and sustaining close customer relationships is very important to Linguamatics, and that means an emphasis on services co-equal with selling licenses.

Linguamatics has come a long way since 2001. Besides a steady effort to improve and enhance their technology through regular product releases of I2E, there have been a lot of “show me” and “prove it” moments to which they have responded. Now, as confidence in and understanding of the technology ramp up, they are getting more complex and sophisticated questions from their customers and prospects. This is the exciting part, as they are able to sell I2E’s ability to “synthesize new information from millions of sources in ways that humans cannot.” This is done by using the technology to keep track of and process the voluminous connections among information resources, connections that exceed human mental limits.

At this stage of growth, with early successes and excellent customer adoption, it was encouraging to hear the enthusiasm of two executives for the evolution of the industry and their opportunities in it.

The Gilbane report and a deep dive on Linguamatics are available through this Press Release on their Web site.

Semantic Technology: Sharing a Large Market Space

It is always interesting to talk shop with the experts in a new technology arena. My interview with Luca Scagliarini, VP of Strategy and Business Development for Expert System, and Brooke Aker, CEO of Expert System USA was no exception. They had been digesting my research on Semantic Software Technologies and last week we had a discussion about what is in the Gilbane report.

When asked if they were surprised by anything in my coverage of the market, the simple answer was “not really, nothing we did not already know.” The longer answer related to the presentation of our research illustrating the scope and depth of the marketplace. These two veterans of the semantic industry admitted that the number of players, applications and breadth of semantic software categories is impressive when viewed in one report. Mr. Scagliarini commented on the huge amount of potential still to be explored by vendors and users.

Our conversation then focused on where we think the industry is headed, and they emphasized that this is still an early-stage, evolving area. Both acknowledged the need for simplification of products to ease adoption: it must be straightforward for buyers to understand what they are licensing and the value they can expect for the price they pay, and implementation, packaging, and complementary services need to be equally easy to understand.

Along the lines of simplicity, they emphasized the specialized nature of most of the successful semantic software applications, noting that these are not coming from the largest software companies. State-of-the-art tools are being commercialized and deployed for highly refined applications out of companies with a small footprint of experienced experts.

Expert System knows about the need for expertise in such areas as ontologies, search, and computational linguistic applications. For years they have been cultivating a team of people for their development and support operations. It has not always been easy to find these competencies, especially right out of academia. Aker and Scagliarini pointed out the need for a lot of pragmatism, coupled with subject expertise, to apply semantic tools for optimal business outcomes. It was hard in the early years for them to find people who could leverage their academic research experiences for a corporate mission.

Human resource barriers have eased in recent years as younger people who have grown up with a variety of computing technologies seem to grasp and understand the potential for semantic software tools more quickly.

Expert System itself is gaining traction in large enterprises that have dedicated groups within IT for “learning” applications, with formalized ways of experimenting with, testing, and evaluating new technologies. When these groups become expert in tool use, they are much better at proving value and making the right decisions about how and when to apply the software.

Having made good strides in the energy, life sciences, manufacturing, and homeland security vertical markets, Expert System is expanding its presence with the Cogito product line into other government agencies and publishing. The executives reminded me that they have semantic nets built out in Italian, Arabic, and German, as well as English. This is unique among semantic search companies and positions them for interesting opportunities where other companies cannot perform.

I enjoyed listening and exchanging commentary about the semantic software technology field. However, Expert System and Gilbane both know that the semantic space is complex and they are sharing a varied landscape with a lot of companies competing for a strong position in a young industry. They have a significant share already.

For more about Expert System and the release of this sponsored research you can view their recent Press Release.


