The Gilbane Advisor

Curated for content, computing, and digital experience professionals


The Future of Enterprise Search

We’ve been especially focused on enterprise search this year. In addition to Lynda’s blog and our regular conference coverage, we have released two extensive reports, one authored by Lynda and one by Stephen Arnold, and Udi Manber, VP of Engineering for Search at Google, keynoted our San Francisco conference. We are continuing this focus at our upcoming Boston conference, where Prabhakar Raghavan, Head of Yahoo! Research, will provide the opening keynote.

Prabhakar’s talk is titled “The Future of Search”. I added “enterprise” to the title of this post because Prabhakar’s talk will be of special interest to enterprises, given its emphasis on complex data in databases and marked-up content repositories. Prabhakar’s background includes stints at Verity (as CTO) and IBM, so enterprise (or, if you prefer, “behind-the-firewall” or “intranet”) search requirements are not new to him.

Here is the description from the conference site:

Web content continues to grow, change, diversify, and fragment. Meanwhile, users are performing increasingly sophisticated and open-ended tasks online, connecting broadly to content and services across the Web. The simple search result page of blue text links needs to evolve to address these complex tasks, and this evolution includes a more formal understanding of users’ intent, and a deeper model of how particular pieces of Web content can help. Structured databases power a significant fraction of Web pages, and microformats and other forms of markup have been proposed as mechanisms to expose this structure. But uptake of these mechanisms remains limited, as content owners await the killer application for this technology. That application is search. If search engines can make deep use of structured information about content, provided through open standards, then search engines and site owners can together bring consumers a far richer experience. We are entering a period of massive change to enable search engines to handle more complex content. Prabhakar Raghavan, head of Yahoo! Research, will address the future of search: how search engines are becoming more sophisticated, what the breakthrough point will be for semantics on the Web, and what this means for developers and publishers.

Join us on December 3rd at 8:30am at the Boston Westin Copley. Register.

Vignette Launches QuickSite to Speed Web Site Development

Vignette announced the worldwide availability of QuickSite, a new service offering that simplifies the Vignette Content Management implementation process and enables organizations to launch new Web sites faster. QuickSite delivers a consistent infrastructure, helping marketing departments launch multiple microsites and branded sites without having to recreate Web pages from scratch. The service deployment includes content management processes, templates, and business adoption workshops before the customer is asked to determine additional site requirements. QuickSite also includes support for multilingual Web sites, display of content through tag libraries, and CSS templates for managing the look and feel of a site with limited help from IT. Site Cloning allows organizations to replicate a site within minutes rather than days by reusing the templates. http://www.vignette.com

EPiServer Releases CMS 5 R2

EPiServer announced multiple new features for its content management system, EPiServer CMS 5 R2, including solutions for mobility and the iPhone. EPiServer has worked with two partners, Mobiletech A/S and Mobizoft AB, to provide a mobile experience to site visitors, including mobile rendering, video conversion, and payments. iPhone support is available as open source templates that enable the system to be viewed from an iPhone. Images can now be prepared directly in EPiServer CMS, so web editors no longer need to work on them in another application before moving them onto the web page. New dynamic content features enable external data that appears in many places on a website, such as financial or legal text, to be updated throughout the site. Page Type Converter makes it easier to merge pages of different types and to convert pages from one type to another. Five standard reports are now available: non-published pages, published pages, modified pages, expiring/expired pages, and an overview of simple addresses. External data, such as an archive of articles at a media company, can be integrated and displayed in a website using EPiServer CMS; the data will appear as a native EPiServer CMS page. This enables structured data stored in another document management system to be converted to a web page in EPiServer and viewed. EPiServer CMS now supports Oracle; Windows Server 2003 and 2008, as well as XP and Vista; Visual Studio 2008 and 2008 Express; and ASP.NET 3.5 SP1 or later. http://www.EPiServer.com/

Dewey Decimal Classification, Categorization, and NLP

I am surprised by how often various content organizing mechanisms on the Web are compared to the Dewey Decimal System. As a former librarian, I am disheartened to be reminded how often students were lectured on the Dewey Decimal system, apparently to the exclusion of learning about subject categorization schemes. The two complemented each other, but that seems to be a secret kept among librarians.

I’ll try to share a clearer view of the model and explain why new systems for organizing content in enterprise search are quite different from the decimal model.

Classification is a good generic term for defining physical organizing systems. Unique animals and plants are distinguished by a single classification in the biological naming system. So too are books in a library. There are two principal classification systems for arranging books on the shelf in Western libraries: Dewey Decimal and Library of Congress (LC). Each uses coding (numeric for Dewey Decimal and alpha-numeric for Library of Congress) to establish where a book belongs logically on a shelf, relative to other books in the collection, according to the book’s most prominent content topic. A book on nutrition for better health might be given a classification number for some aspect of nutrition or one for a health topic, but a human being has to make a judgment about which topic the book is most “about,” because the book can live in only one section of the collection. It is worth mentioning that the Dewey and LC systems are both hierarchical, but with different priorities: Dewey puts broad topics like Religion, Philosophy, and Psychology at the top levels, while LC groups those topics together and includes more scientific and technical topics, like Agriculture and Military Science, at the top of its list.
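To make that one-spot constraint concrete, here is a minimal sketch in Python; the titles and call numbers are invented for illustration rather than drawn from any real catalog:

```python
# A book gets exactly one classification code, and that code alone
# determines its single position on the shelf relative to other books.
# Titles and Dewey numbers below are invented for illustration.

books = [
    ("Nutrition for Better Health", "613.2"),  # classed under a health topic
    ("Introduction to Library Science", "020"),
    ("World Religions", "200"),
    ("Practical Psychology", "150"),
]

# Shelf order is simply the sort order of the codes; a book about both
# nutrition and health still occupies only one spot.
for title, dewey in sorted(books, key=lambda b: b[1]):
    print(f"{dewey:>6}  {title}")
```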

So why classify books to reside in topic order, when it takes so much labor to shift collections around to make space for new books? It is for the benefit of the users, to enable “browsing” through the collection; it may be hard to accept, but the term browsing was a staple of library science decades before the internet. Library leaders established eons ago the need for a system of physical organization to help readers peruse the book collection by topic, leading from the general to the specific.

You might ask what kind of help that was for finding the book on nutrition that was classified under “health science.” This is where another system, largely hidden from the public or often made annoyingly inaccessible, comes in. It is a system of categorization in which any content, book or otherwise, can be assigned an unlimited number of categories. Wandering through the stacks, one would never suspect this secret way of finding a nugget in a book about your favorite hobby if that book was classified to live elsewhere. The standard lists of terms for further describing books by multiple headings are called “subject headings,” and you had to use a library catalog to find them. Unfortunately, they contain mysterious conventions called “sub-divisions,” designed to pre-coordinate any topic with other generic topics (e.g., “Handbooks, manuals, etc.” and “United States”). Today we would call these generic subdivision terms facets: one reflects a kind of book and the other reveals the geographical scope covered by the book.

With the marvel of the Web page, hyperlinking, and “clicking through” hierarchical lists of topics, we can narrow a search for handbooks on nutrition in the United States for better health beginning at any facet or topic and still come up with the book that meets all four criteria. We no longer have to be constrained by the Dewey model of browsing the physical location of our favorite topics, probably missing a lot of good stuff. But then we never really had to: the subject card catalog gave us a tool for finding more than we would by classification code alone. Even so, that was a lot more tedious than navigating easily through a hierarchy of subject headings, narrowing the results by facets on a browser tab, and narrowing them further by yet another topical term until we find just the right piece of content.
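To see the contrast in code, here is a minimal sketch of faceted narrowing over a toy in-memory catalog; the records, subject headings, and facet names are invented for illustration and are not any particular library system’s data:

```python
# Each record lives in exactly one classification spot, but carries any
# number of subject headings and facet values. All entries are invented.

catalog = [
    {
        "title": "Nutrition for Better Health",
        "classification": "613.2",  # its one place on the shelf
        "subjects": {"nutrition", "health"},
        "facets": {"form": "handbooks", "place": "United States"},
    },
    {
        "title": "European Food Policy",
        "classification": "363.8",
        "subjects": {"nutrition", "public policy"},
        "facets": {"form": "monographs", "place": "Europe"},
    },
]

def narrow(records, subjects=(), **facets):
    """Keep records matching every requested subject and facet value."""
    return [
        r for r in records
        if set(subjects) <= r["subjects"]
        and all(r["facets"].get(k) == v for k, v in facets.items())
    ]

# Handbooks on nutrition in the United States for better health:
for r in narrow(catalog, subjects=("nutrition", "health"),
                form="handbooks", place="United States"):
    print(r["title"])
```

The classification code never moves, but any combination of subjects and facets can reach the same record, which is exactly what the card catalog made possible and the Web made easy.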

Taking the next leap, we have natural language processing (NLP) that will answer the question, “Where do I find handbooks on nutrition in the United States for better health?” And that is the Holy Grail for search technology – and a long way from Mr. Dewey’s idea for browsing the collection.

Socialtext Delivers Socialtext 3.0

Socialtext released Socialtext 3.0, a trio of applications including Socialtext People and Socialtext Dashboard, as well as a major upgrade to its Socialtext Workspace enterprise wiki offering. These products are built on a modular, integrated platform that delivers connected collaboration with context to individuals, workgroups, organizations, and extranet communities. People are able to discover, create, and utilize social networks, collaborate in shared workspaces, and work productively with personalized widget-based dashboards. The company also announced Socialtext Signals, a Twitter-style microblogging interface that goes beyond simple “tweets” by integrating both automated and manual updates with social networking context, expanding the company’s business communications offerings for the enterprise. As with its proven Workspace wiki and weblog product, Socialtext will make all of its offerings available on a hosted ASP basis as well as an on-premise appliance basis. The entire Socialtext 3.0 trio of products is available immediately on the hosted service, and will be made available to appliance customers starting in October 2008. Socialtext 3.0 profile integration with LDAP or Microsoft Active Directory systems enables rapid population. REST APIs for workspace and profile content are now complemented by a widget architecture and user interface for the creation of enterprise mashups. Productized connectors are available for Microsoft SharePoint and IBM Lotus Connections. You can experience this new release immediately in a free trial at http://socialtext.com/

Machine Translation (Finally) Comes of Age

In our Multilingual Communications as a Business Imperative report, we noted that machine translation (MT) has long been the target of “don’t let this happen to you” jokes throughout the globalization industry. Unpredictable results and poor quality let humor become the focus of MT discussions, making widespread adoption risky at best.

On the other hand, we also noted that scientists, researchers, and technologists have been determined to unlock MT’s potential since the 1950s to solve the same core challenges the industry struggles with today: cost savings, speed, and linguist augmentation. Although the infamous report on Languages and Machines from the Automatic Language Processing Advisory Committee (ALPAC), published in 1966, discussed these challenges in some depth (albeit from a U.S. perspective), it sent a resounding message that “there is no emergency in the field of translation.” Research funding suffered; researcher Margaret King described the impact as effectively “killing machine translation research in the States.”

Borrowing from S.E. Hinton: that was then, this is now. Technology advancements and sheer computing power have made machine translation not only viable, but also potentially game-changing. A global economy, the volume and velocity of content required to run a global business, and customer expectations are steadily shifting enterprise postures from “not an option” to “help me understand where MT fits.” Case in point: participants in our study identified MT as one of the top three most valuable technologies for the future.

There’s lots of game-changing news for our readers to digest.

  • An excellent place to start is with our colleagues at Multilingual Magazine, who dedicated the April-May issue to this very subject. Don Osborn over at the Multidisciplinary Perspectives blog provides an excellent summary, posing the question: “Is there a paradigm shift on machine translation?”
  • Language Weaver predicts a potential $67.5 billion market for digital translation, fueled by MT. CEO Mark Tapling explains why.
  • SYSTRAN, one of the earliest MT software developers, provides research and education here.
  • And finally (for today), there’s no way to deny the Google impact — here’s their FAQ about the beta version of Google Translate. TAUS weighs in on the subject here.

Mary and I will be at Localization World Madison to provide practical advice and best practices for making the enterprise business case for multilingual communications investments as part of a Global Content Value Chain. But we’re also looking forward to the session focused on MT potential, issues, and vendor approaches. The full grid is here. Join us!

CM Pros Summit in Boston

The Content Management Professionals Association (CM Pros) will once again be holding their annual Fall Summit in conjunction with Gilbane Boston in December. There are details on our Events blog which I won’t duplicate here; better yet, go right to the source at http://summit.cmprofessionals.org/. If you are a member, we hope to see you; if you are not, you can find out about joining on the CM Pros site at http://cmprofessionals.org/

Webinar: New Generation Knowledge Management

Tuesday, October 7th, 2008
11:00am PT / 2:00pm ET


Organizations are faced with critical knowledge management issues, including knowledge capture, IP retention, search and discovery, and fostering innovation. The failure to properly address these issues results in companies wasting millions of dollars through inefficient information discovery and poor collaboration techniques. Today’s knowledge management systems must blend social media technologies with enterprise search, access, and discovery tools to give users a 360-degree view of their information assets. This blend is the foundation for new generation knowledge management.
Join Senior Analyst Leonor Ciarlone and Phil Green, CTO at Inmagic, in a discussion moderated by Andy Moore, Publisher of KMWorld Magazine, covering perspectives from Gilbane’s report on Collaboration and Social Media 2008, the power of Social Knowledge Networks, and an introduction to Inmagic® Presto.
Space is limited; register here!


