Curated content for content, computing, and digital experience professionals

Month: February 2011 (Page 1 of 2)

Google Debuts iOS Translation App

The official Google Translate for iPhone app is now available for download from the App Store. The new app has all of the features of the web app, as well as some new additions designed to improve the translation experience. The new app accepts voice input for 15 languages, and—just like the web app—you can translate a word or phrase into one of more than 50 languages. For voice input, just press the microphone icon next to the text box and say what you want to translate. You can also listen to your translations spoken out loud in one of 23 different languages. This feature uses the same new speech synthesizer voices as the desktop version of Google Translate introduced last month. Another feature is the ability to easily enlarge the translated text to full-screen size. This way, it’s easier to read the text on the screen, or to show the translation to the person you are communicating with. Just tap on the zoom icon to quickly zoom in. The app also includes all of the major features of the web app, including the ability to view dictionary results for single words, access your starred translations and translation history even when offline, and support for romanized text like Pinyin and Romaji. You can download Google Translate now from the App Store globally. The app is available in all iOS supported languages, but you’ll need an iPhone or iPod touch running iOS version 3 or later. http://itunes.apple.com/us/app/google-translate/

iCore CMS Released

2011 marks the launch of iCore CMS, a new web content management system designed for managing the entire workings of an online business. The iCore CMS is the brainchild of Instani, a Microsoft Certified Partner delivering web design, SEO, and mobile application development services to a global client base. iCore CMS allows businesses to manage all aspects of product management and customer relations through one user interface. Users can choose from a variety of free customer-facing template designs, with the option of custom design and development by the Instani team. All system updates are instantaneous and free for users, and affordable monthly payment plans allow businesses to choose a package best tailored to their requirements. iCore is a fully hosted CMS, ideal for web designers who require something customisable and plug-and-play for their clients. iCore Content Management System is fully rebrandable and compatible with Dreamweaver software. iCore challenges the current capabilities of open source CMS by providing an unrestricted, highly secure, and fully supported platform. www.icorecms.com

How Far Does Semantic Software Really Go?

A discussion about semantic software technologies that began in November 2010 with a graduate scholar at George Washington University prompted him to follow up with some clarifying questions for me. With his permission, I am sharing three questions from Evan Faber and the gist of my comments to him. At the heart of the conversation we all need to keep having are two questions: how far does this technology go, and does it really bring us any gains in retrieving information?

1. Have AI or semantic software demonstrated any capability to ask new and interesting questions about the relationships among information that they process?

In several recent presentations and the Gilbane Group study on Semantic Software Technologies, I share a simple diagram of the nominal setup for the relationship of content to search and the semantic core: a set of terminology rules, or terminology with relationships. Semantic search operates best when it focuses on a topical domain of knowledge. The language that defines that domain may range from simple to complex, broad to narrow, deep to shallow. The language may be applied to the task of semantic search from a taxonomy (usually shallow and simple), a set of language rules (numbering thousands to millions), or from an ontology of concepts to a semantic net with millions of terms and relationships among concepts.

The question Evan asks is a good one with a simple answer, “Not without configuration.” The configuration needs human work in two areas:

  • Management of the linguistic rules or ontology
  • Design of search engine indexing and retrieval mechanisms

When a semantic search engine indexes content for natural language retrieval, it looks to the rules or semantic nets to find concepts that match those in the content. When it finds concepts in the content with no equivalent language in the semantic net, it must find a way to understand where the concepts belong in the ontological framework. This discovery process for clarification, disambiguation, contextual relevance, perspective, meaning, or tone is best accompanied by an interface making it easy for a human curator or editor to update or expand the ontology. A subject matter expert is required for specialized topics. Through a process of automated indexing that both categorizes and exposes problem areas, the semantic engine becomes a search engine and a questioning engine.
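The indexing loop described above can be sketched in a few lines. This is a hypothetical illustration only: the tiny semantic net, the function names, and the “broader term” structure are all invented for the example, not drawn from any specific product.

```python
# A minimal sketch of the indexing pass: concepts extracted from content are
# matched against a semantic net; anything the net cannot place is queued
# for a human curator to "teach" the system. All names are illustrative.

SEMANTIC_NET = {
    "mortgage": {"broader": "loan"},
    "loan": {"broader": "financial product"},
    "APR": {"broader": "interest rate"},
}

def index_document(concepts, net=SEMANTIC_NET):
    """Split extracted concepts into categorized matches and a curation queue."""
    categorized, needs_review = {}, []
    for concept in concepts:
        entry = net.get(concept)
        if entry:
            categorized[concept] = entry["broader"]   # placed in the framework
        else:
            needs_review.append(concept)              # the engine asks "What is this?"
    return categorized, needs_review

categorized, queue = index_document(["mortgage", "APR", "escrow"])
print(categorized)  # {'mortgage': 'loan', 'APR': 'interest rate'}
print(queue)        # ['escrow'] -> presented to a subject matter expert
```

The point of the sketch is the split itself: the same pass that categorizes known concepts also surfaces the unknowns, which is what turns the indexer into a questioning engine.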

The entire process is highly iterative. In a sense, the software is asking the questions: “What is this?”, “How does it relate to the things we already know about?”, “How is the language being used in this context?” and so on.

2. In other words, once they [the software] have established relationships among data, can they use that finding to proceed, without human intervention, to seek new relationships?

Yes, in the manner described for the previous question. It is important to recognize that the original set of rules, ontologies, or semantic nets that are being applied were crafted by human beings with subject matter expertise. It is unrealistic to think that any team of experts would be able to know or anticipate every use of the human language to codify it in advance for total accuracy. The term AI is, for this reason, a misnomer because the algorithms are not thinking; they are only looking up “known-knowns” and applying them. The art of the software is in recognizing when something cannot be discerned or clearly understood; then the concept (in context) is presented for the expert to “teach” the software what to do with the information.

State-of-the-art software will have a back-end process for enabling implementer/administrators to use the results of search (direct commentary from users or indirectly by analyzing search logs) to discover where language has been misunderstood as evidenced by invalid results. Over time, more passes to update linguistic definitions, grammar rules, and concept relationships will continue to refine and improve the accuracy and comprehensiveness of search results.
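One concrete form of that back-end process is mining search logs for queries that repeatedly return no valid results. The log format and threshold below are invented for illustration; a real system would also fold in direct user commentary.

```python
# Hedged sketch of the refinement loop: find query terms that repeatedly
# produced zero results, so an administrator can see where the vocabulary
# was misunderstood and needs new definitions. Log structure is assumed.

from collections import Counter

search_log = [
    {"query": "haematology panels", "results": 0},
    {"query": "blood test menu", "results": 14},
    {"query": "haematology panels", "results": 0},
    {"query": "cbc pricing", "results": 0},
]

def misunderstood_terms(log, min_failures=2):
    """Return queries that failed at least min_failures times."""
    failures = Counter(entry["query"] for entry in log if entry["results"] == 0)
    return [query for query, count in failures.items() if count >= min_failures]

print(misunderstood_terms(search_log))  # ['haematology panels']
```

Each flagged query becomes a candidate for a new linguistic definition or concept relationship, which is exactly the iterative tuning the paragraph describes.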

3. It occurs to me that the key value added of semantic technologies to decision-making is their capacity to link sources by context and meaning, which increases situational awareness and decision space. But can they probe further on their own?

Good point on the value and in a sense, yes, they can. Through extensive algorithmic operations, instructions can be embedded (and probably are for high-value situations like intelligence work), instructing the software what to do with newly discovered concepts. Instructions might then place these new discoveries into categories of relevance, importance, or associations. It would not be unreasonable to then pass documents with confounding information off to other semantic tools for further examination. Again, without human analysis along the continuum and at the end point, no certainty about the validity of the software’s decision-making can be asserted.

I can hypothesize a case in which a corpus of content contains random documents in foreign languages. From my research, I know that some of the semantic packages have semantic nets in multiple languages. If the corpus contains material in English, French, German and Arabic, these materials might be sorted and routed off to four different software applications. Each batch would be subject to further linguistic analysis, followed by indexing with some middleware applied to the returned results for normalization, and final consolidation into a unified index. Does this exist in the real world now? Probably there are variants but it would take more research to find the cases, and they may be subject to restrictions that would require the correct clearances.
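The hypothesized pipeline, detect each document's language, route it to a language-specific analyzer, and consolidate the results into a unified index, can be sketched as follows. Everything here is a toy stand-in: the detection heuristic, the analyzers, and the index structure are assumptions for illustration, not a real product's design.

```python
# Toy version of the multilingual routing hypothesized above: detect each
# document's language, hand it to a per-language analyzer, and merge the
# normalized terms into one unified index. Detection is a naive placeholder;
# a real system would use a proper language identifier.

ANALYZERS = {
    "en": lambda text: text.lower().split(),
    "fr": lambda text: text.lower().split(),
}

def detect_language(text):
    # Purely illustrative heuristic, not a real detector.
    return "fr" if " le " in f" {text.lower()} " else "en"

def build_unified_index(documents):
    """Map each normalized term to the set of document ids containing it."""
    index = {}
    for doc_id, text in documents.items():
        lang = detect_language(text)
        for term in ANALYZERS[lang](text):             # language-specific pass
            index.setdefault(term, set()).add(doc_id)  # consolidation step
    return index

docs = {1: "Semantic search basics", 2: "Le moteur de recherche"}
idx = build_unified_index(docs)
print(sorted(idx["recherche"]))  # [2]
```

In the scenario from the paragraph, the per-language analyzers would be separate semantic applications, with middleware normalizing their returned results before the final consolidation.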

Discussions with experts who have actually deployed enterprise-specific semantic software underscore the need for subject expertise and some computational linguistics training, coupled with an aptitude for creative inquiry. These scientists informed me that individuals who are highly multidisciplinary and facile with electronic games and tools did the best job of interacting with the software and getting excellent results. Tuning and configuration over time by the right human players is still a fundamental requirement.

Really Strategies Announces RSuite Cloud

Really Strategies announced the availability of RSuite Cloud, a web-based editorial and production system for automated multilingual publishing to print, web, and eBook formats. RSuite Cloud is a hosted end-to-end content management and publishing system for book publishers to create, manage, and distribute single-source content to multiple channels. The system also provides language translation tools to publish in 70 languages, including all major European, Asian, and bidirectional languages.

RSuite Cloud is available on a per-user license or Pay-Per-Page model. Pay-Per-Page is a payment model where the software is free of charge and the publisher only pays for final pages published from the system.

RSuite Cloud accepts Microsoft Word manuscripts into the system and automatically converts the Word files to XML for web-based copyediting and automated page composition. Production workflows can be set up to generate page proofs and eBook drafts for content review and approval. The system is configured to automatically publish print-ready PDF files, HTML output, and eBook formats. http://www.reallysi.com/

New Paper: Taking Online Engagement to the Cloud

I am pleased to say that my third paper for Outsell’s Gilbane Group was published yesterday, in which I return to thinking about cloud computing and the benefits it offers for deploying web experience and engagement technologies.

Titled Taking Online Engagement to the Cloud, this short Beacon paper provides a guide for digital marketers, senior IT folks, and business analysts faced with the decision to deploy these technologies outside the server room. In it we set out to answer the following questions:

  • What do we mean by the cloud? There is a great deal of hype, sales, and marketing messaging around “the cloud.” We explore what it really is and the opportunities it represents for digital marketers.
  • What are the deployment options when working with a cloud platform partner? The decision around deploying to the cloud is not always a binary choice to host in the server room or not. We look at possible solution architecture options and the benefits of each.
  • What do organizations need to look for in a WEM solution in the cloud? If deploying into the cloud is an attractive option for an organization, we consider the key attributes that organizations should build into their selection criteria when choosing a solution.

As with all of our papers, once you register, you can download it for free from the Beacon area of our website. While you are there, I suggest taking a look at our Whitepapers section, scrolling down a little to the Engage Me! paper by Mary Laplante. I think it’s a great introduction to our research on the business practice of web engagement and web experience.

I hope you enjoy the paper, and I’d very much like to hear your feedback – either here or on Twitter (@iantruscott).

The paper was sponsored by FatWire and we are looking forward to joining them on a webinar to explore this subject further – follow us on Twitter for an announcement on that. 
