Curated for content, computing, and digital experience professionals

Topic: Technology

The word technology refers to the making, modification, usage, and knowledge of tools, machines, techniques, crafts, systems, and methods of organization, in order to solve a problem, improve a preexisting solution to a problem, achieve a goal, handle an applied input/output relation or perform a specific function. It can also refer to the collection of such tools, including machinery, modifications, arrangements and procedures.

What an Analyst Needs to Do: What We Do

Semantic Software Technologies: Landscape of High Value Applications for the Enterprise is now posted for you to download for free; please do so. The topic is one I have followed for many years, and I was convinced that the information about it needed to be captured in a single study, as the number of players and technologies had expanded beyond my capacity for mental organization.

As a librarian, I found it useful to employ a genre of publications known as the “bibliography of bibliographies” when starting a research project on any given topic. As an analyst, gathering the baskets of emails, reports, and publications on the industry I follow serves a similar purpose. Without filtering and sifting all this content, it had become overwhelming to understand and comment on the individual components of the semantic landscape.

Regarding the process of report development, it is important for readers to understand how analysts research and review products and companies. Our first goal is to avoid bias toward one vendor or another. Finding users of products and understanding the basis for their use and their experiences is paramount in the research and discovery process. With software as complex as semantic applications, we do not have the luxury of routine hands-on experience, testing real applications of dozens of products for comparison.

The most desirable contacts for learning about any product are customers with direct experience using the application. Sometimes we gain access to customers through vendor introductions, but we also try very hard to get users to speak to us through surveys and interviews, often anonymously, so that they do not jeopardize their relationship with a vendor. We want these discussions to be frank.

To get a complete picture of any product, I go through numerous iterations of looking at a company through its own printed and online information, published independent reviews and analyses, customer comments, and direct interviews with employees, users, former users, and others. Finally, I like to share what I have learned with the vendors themselves to validate conclusions and give them an opportunity to correct facts or clarify product usage and market positioning.

One of the most rewarding, interesting, and productive aspects of research in a relatively young industry like semantic technologies is having direct access to innovators and seminal thinkers. Communicating with pioneers of new software who are seeking the best way to package, deploy, and commercialize their offerings is exciting. There are many more potential products than those that actually find commercial success, but the process of getting from idea to buyer adoption is always a story worth hearing and learning from.

I receive direct and indirect comments from readers about this blog. What I don’t see enough of is posted commentary about the content. Perhaps you don’t want to share your thoughts publicly, but any experiences or ideas that you want to share with me are welcome. You’ll find my direct email contact information through Gilbane.com, and you can reach me on Twitter at lwmtech. My research depends on getting input from all types of users and developers of content software applications, so please raise your hand and comment or volunteer to talk.

Data Mining for Energy Independence

Mining content for facts and information relationships is a focal point of many semantic technologies. Among the text analytics tools are those for mining content so that it can be processed for further analysis and understanding, and indexed for semantic search. This will move enterprise search to a new level of research possibilities.

Research for a forthcoming Gilbane report on semantic software technologies turned up numerous applications used in the life sciences and publishing. Neither semantic technologies nor text mining is mentioned in a recent New York Times article, Rare Sharing of Data Leads to Progress on Alzheimer’s, but I am pretty certain that these technologies had some role in enabling scientists to discover new data relationships and synthesize new ideas about Alzheimer’s biomarkers. The sheer volume of data from all the referenced data sources demands computational methods to distill and analyze it.

One vertical industry poised for potential growth of semantic technologies is the energy field. It is a special interest of mine because it is a topical area in which I worked as a subject indexer and searcher early in my career. Beginning with the first energy crisis, the oil embargo of the mid-1970s, I worked in research organizations involved in both fossil fuel exploration and production and alternative energy development.

A hallmark of technical exploratory and discovery work is the time gap between breakthroughs; there are often significant plateaus between major developments. This happens when research reaches a point at which an enabling technology is not available or not commercially viable enough to move to the next milestone of development. I observed that the quest for innovative energy technologies often began with decades-old research that had stopped before commercialization.

Building on what we have already discovered, invented or learned is one key to success for many “new” breakthroughs. Looking at old research from a new perspective to lower costs or improve efficiency for such things as photovoltaic materials or electrochemical cells (batteries) is what excellent companies do.

How does this relate to semantic software technologies and data mining? We need to begin with content that was generated by research in the last century; much of it is only now being made electronic. Even so, most of the conversion from paper, or from micro-formats like fiche, is to image formats. To make the full transition and enable data mining, content must be further enhanced through optical character recognition (OCR). That puts it into a form that can be semantically parsed, analyzed, and explored for facts and new relationships among data elements.
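
To make that pipeline concrete, here is a minimal sketch in Python of the conversion-and-mining path described above, assuming the open-source pytesseract (backed by a Tesseract install), Pillow, and spaCy libraries; the file name and entity types are illustrative only, not drawn from any project mentioned here.

```python
# Sketch: OCR a scanned page, then run a linguistic pipeline over the text
# to surface candidate entities for downstream semantic analysis.
from PIL import Image
import pytesseract
import spacy

def ocr_page(image_path: str) -> str:
    """Convert a scanned page image into plain text via OCR."""
    return pytesseract.image_to_string(Image.open(image_path))

def extract_entities(text: str):
    """Parse OCR output and return (entity text, entity type) pairs."""
    nlp = spacy.load("en_core_web_sm")
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]

if __name__ == "__main__":
    # Hypothetical scanned page from a legacy research report.
    text = ocr_page("legacy_energy_report_p17.png")
    for entity, label in extract_entities(text):
        print(f"{label:10} {entity}")
```

OCR output from older documents is noisy, so in practice a cleanup pass and a domain vocabulary sit between these two steps; the point is simply that image-only scans cannot feed this kind of pipeline at all.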

Processing old materials is neither easy nor inexpensive. There are government agencies, consortia, associations, and partnerships among various types of institutions that often serve as a springboard for making legacy knowledge assets electronically available. A great first step would be to have the DOE and some energy industry leaders collaborate on this activity.

A future of potential man-made disasters, even when the knowledge exists to prevent them, is not a foregone conclusion. Intellectually, we know that energy independence is prudent, and economically and socially mandatory for all types of stability. We have decades of information and knowledge assets in energy-related fields (e.g., chemistry, materials science, geology, and engineering) that semantic technologies can leverage to move us toward a future of energy independence. Finding nuggets of old information in unexpected relationships to content from previously disconnected sources is a role for semantic search that can stimulate new ideas and technical research.

A beginning would be a serious program of content conversion, capped off with semantic search tools to aid the process of discovery and development. It is high time to put our knowledge to work with state-of-the-art semantic software tools and to commit human and collaborative resources to the effort. By coupling the knowledge assets of the past with the ingenuity of the present, we can achieve energy advances using semantic technologies already embraced by the life sciences.

Leveraging Language in Enterprise Search Deployments

It is not news that enterprise search has been relegated by some to the long list of failed technologies. We are at the point where many analysts and business writers have called for a moratorium on use of the term. Having worked in a number of markets and functional areas (knowledge management/KM, special libraries, and integrated library software systems) that have had the death knell sounded for them, even while continuing to exist, I take these pronouncements as a game of sorts.

Yes, we have seen the demise of vinyl phonograph records and cassette tapes, and probably soon music CDs, but those are explicit devices and formats. When you can no longer buy or play them, except in a museum or a collector’s garage, they are pretty much dead in the marketplace. This is not true of search in the enterprise, behind the firewall, or wherever it needs to function for business purposes. People have always needed to find “stuff” to do their work. KM methods and processes, special libraries, and integrated library systems still exist, even as they have been re-labeled for PR and marketing purposes.

What is happening to search in the enterprise is that it is finding its purpose, or more precisely its hundreds of purposes. It is not a monolithic, one-size-fits-all software product. It comes in dozens of packages, models, and price ranges. It may be embedded in other software or standalone. It may be procured as a point solution to support retrieval of content for one business unit operating in a very narrow topical range, or it may be selected to give access to a broad range of documents that exist in numerous enterprise domains on many subjects.

Large enterprises typically have numerous search solutions in operation, implementation, and testing, all at the same time. They are discovering how to deploy and leverage search systems and they are refining their use cases based on what they learn incrementally through their many implementations. Teams of search experts are typically involved in selecting, deploying and maintaining these applications based on their subject expertise and growing understanding of what various search engines can do and how they operate.

After years of hearing about “the semantic Web,” the long-sought “holy grail” of Web search, there is a serious ramping up of technology solutions. Most of these applications can also make search more semantically relevant behind the firewall. These technologies have been evolving for decades, beginning with so-called artificial intelligence and now supported by categories of computational linguistics such as specific algorithms for parsing content and disambiguating terms. A soon-to-be-released study featuring some noteworthy applications reveals just how much is being done in enterprises for specific business purposes.

With this “teaser” on what is about to be published, I leave you with one important thought: meaningful search technologies depend on rich, linguistically based technologies. Without a cornucopia of software tools to build terminology maps and dictionaries, analyze content linguistically in context to elicit meaning, parse and evaluate unstructured text data sources, and manage vocabularies of ever more complex topical domains, semantic search could not exist.
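
As one small illustration of what a terminology map can mean in practice, here is a hedged sketch using spaCy’s PhraseMatcher; the controlled vocabulary and sample sentence are invented for illustration and are not drawn from any product discussed in the study.

```python
# Sketch: map variant phrases in running text onto preferred vocabulary terms,
# one of the building blocks behind semantically aware enterprise search.
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")

# Tiny controlled vocabulary: preferred term -> known variants.
terminology = {
    "photovoltaic cell": ["photovoltaic cell", "solar cell", "PV cell"],
    "electrochemical cell": ["electrochemical cell", "battery cell"],
}

matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
for preferred, variants in terminology.items():
    matcher.add(preferred, [nlp.make_doc(v) for v in variants])

doc = nlp("Early PV cell research stalled before battery cell storage matured.")
for match_id, start, end in matcher(doc):
    preferred = nlp.vocab.strings[match_id]
    print(f"'{doc[start:end].text}' -> {preferred}")
```

Real deployments layer disambiguation, synonym rings, and domain taxonomies on top of this kind of lookup, but even a tiny map like this shows why vocabulary work is human work before it is software work.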

Language complexities are challenging and even vexing. Enterprises will find solutions that leverage what they know only when they put human resources into play to work with the lingo of their most valuable domains.

Weighing In On The Search Industry With The Enterprise In Mind

Two excellent postings by executives in the search industry give depth to the importance of Dassault Systèmes’ acquisition of Exalead. If this were simply a ho-hum failure in a very crowded marketplace, Dave Kellogg of Mark Logic Corporation and Jean Ferré of Sinequa would not care.

Instead, they are picking up important signals. Industry segments as important as search evolve, and the appropriate applications of search in enterprises are still being discovered and proven. Search may change, as may the label, but whatever it is called, it is still something that will be done in enterprises.

This analyst has praise for the industry players who continue to persevere, working to get the packaging, usability, usefulness and business purposes positioned effectively. Jean Ferré is absolutely correct; the nature of the deal underscores the importance of the industry and the vision of the acquirers.

As we segue from a number of conferences featuring search (Search Engines, Enterprise Search Summit, Gilbane) to broader enterprise technologies (Enterprise 2.0) and semantic technologies (SemTech), it is important for enterprises to examine the interplay among product offerings. Getting the mix of software tools just right is probably more important than any one industry-labeled class of software, or any one product. Everybody’s software has to play nice in the sandbox to get us to the next level of adoption and productivity.

Here is one analyst cheering the champions of search and looking for continued growth in the industry…but not so big it fails.

Omniture Announces Integration with CrownPeak

Omniture, an Adobe company (NASDAQ:ADBE), announced an integration with CrownPeak that combines Omniture Test&Target with CrownPeak’s content management system (CMS) through Omniture Genesis. Designed to allow marketers to manage content for tests and targeted campaigns from an integrated interface, the combination allows for the creation and deployment of content to drive A/B tests, multivariate tests, and content targeting. As a result, marketers could benefit from the speed and control of Test&Target as well as from the content creation and management workflow of CrownPeak. Through the integration, content is built within CrownPeak’s CMS, then deployed and managed by Omniture Test&Target from within the CMS. The integration should provide the following: continuous testing and targeting that can automatically promote top-performing content; rapid implementation of the integration and ongoing deployment of tests without requiring IT involvement, putting control in the hands of marketers; API integration that allows one-step live deployment of offers; and easy management of any testing scenario via an integrated interface. www.omniture.com www.crownpeak.com/

Serendipity 1.5.2 released

Serendipity 1.5.2 has been released to address an outstanding issue with SQLite installations of Serendipity. Upgrading to this version from a version of Serendipity earlier than 1.5.1 should work without any problems, and the release fixes the database upgrades that were faulty in Serendipity 1.5.1. blog.s9y.org/

SDL Consolidates Brand

SDL plc has consolidated its branding in order to be recognized globally under a single brand. Through a series of acquisitions and research and development investment, SDL has expanded its technology footprint from Language Technology into Structured Content Technologies, Web Content Management, and most recently eCommerce Technologies. To better convey SDL’s end-to-end content platform, the company is relaunching as a new consolidated brand. As of now, all business units in SDL, including SDL Tridion, SDL XySoft, SDL TRADOS Technologies, SDL Enterprise Technologies, and SDL Language Services, will be referred to as divisions of SDL. The different business units of SDL will become one of five specialized divisions: Structured Content Technologies; Web Content Management Solutions; eCommerce Technologies; Language Technologies; and Language Services. http://www.sdl.com/

EntropySoft Adds Open Text Vignette Connector

EntropySoft announced the commercial release of an Open Text Vignette Content Management connector. Features of the read/write connector include preparing Vignette content for search, e-discovery, records management, or daily document transfers. The connector aims to facilitate web publishing of content coming from different sources and to speed up site updates. It should also be able to integrate vertical applications that work with Vignette content. The EntropySoft Vignette Content Management connector works with the Vignette Management Console API and is delivered as a single Java library (a .jar file). It allows the creation and modification of Vignette objects such as sites, channels, projects, and documents, and will be CMIS-compliant when the specification is available. The connector can also be integrated into third-party applications or used in conjunction with EntropySoft’s content hub or Content ETL. http://www.entropysoft.net


© 2025 The Gilbane Advisor
