Curated for content, computing, and digital experience professionals

Category: Semantic technologies

Our coverage of semantic technologies goes back to the early 90s, when search engines focused on searching structured data in databases were looking to provide support for searching unstructured or semi-structured data. This early Gilbane Report, Document Query Languages – Why is it so Hard to Ask a Simple Question?, analyzed the challenge at the time.

Semantic technology is a broad topic that includes all natural language processing, as well as the semantic web, linked data processing, and knowledge graphs.

Blue Prism intelligent automation now in AWS Marketplace

Blue Prism announced the availability of Blue Prism intelligent automation software in AWS Marketplace, giving Amazon Web Services (AWS) and Blue Prism customers another avenue for automation in the cloud. The listing includes Blue Prism on an Amazon Machine Image (AMI) instance with a set number of digital workers, plus connectors for Amazon Textract, Amazon Rekognition, and Amazon Comprehend machine learning capabilities. The Blue Prism offering in AWS Marketplace gives customers an easy way to purchase digital worker licenses and start automating faster via AWS. It includes:

  • Blue Prism Enterprise license for either 1, 3, or 5 digital workers for one year, plus the ability to add more as needed. Digital workers come equipped with embedded AWS machine learning capabilities, including:
    • Amazon Comprehend: A natural language processing (NLP) service that uses machine learning to find insights and relationships in text.
    • Amazon Rekognition: A service that makes it easy to add image and video analysis to users’ applications using proven, scalable, deep learning technology that requires no machine learning expertise to use.
    • Amazon Textract: A managed machine learning service that automatically extracts printed text, handwriting, and other data from scanned documents that goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables.
  • Access to resources, tutorials, and training materials that demonstrate work queues and possible automations. Users just need an AWS account to get started.
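To make the Textract bullet above more concrete, here is a minimal sketch of pulling form key-value pairs out of an AnalyzeDocument response. The response below is a hand-made sample in the documented Blocks shape, not output from a real API call, and is much simpler than real responses.

```python
# Sketch: extracting form key-value pairs from an Amazon Textract
# AnalyzeDocument-style response. "sample" is a hand-made, simplified
# stand-in for the real Blocks list a live API call would return.

def form_fields(blocks):
    """Map each KEY block's text to its VALUE block's text via Relationships."""
    by_id = {b["Id"]: b for b in blocks}

    def text_of(block):
        words = []
        for rel in block.get("Relationships", []):
            if rel["Type"] == "CHILD":
                words += [by_id[i]["Text"] for i in rel["Ids"]]
        return " ".join(words)

    fields = {}
    for b in blocks:
        if b["BlockType"] == "KEY_VALUE_SET" and "KEY" in b.get("EntityTypes", []):
            value_ids = [i for rel in b.get("Relationships", [])
                         if rel["Type"] == "VALUE" for i in rel["Ids"]]
            fields[text_of(b)] = " ".join(text_of(by_id[i]) for i in value_ids)
    return fields

sample = [
    {"Id": "k1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["KEY"],
     "Relationships": [{"Type": "VALUE", "Ids": ["v1"]},
                       {"Type": "CHILD", "Ids": ["w1"]}]},
    {"Id": "v1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["VALUE"],
     "Relationships": [{"Type": "CHILD", "Ids": ["w2"]}]},
    {"Id": "w1", "BlockType": "WORD", "Text": "Name:"},
    {"Id": "w2", "BlockType": "WORD", "Text": "Alice"},
]

print(form_fields(sample))  # {'Name:': 'Alice'}
```

In a Blue Prism automation, a digital worker would call Textract through the bundled connector and then post-process the response along these lines.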

AWS announces QuickSight Q

Amazon Web Services, Inc. (AWS) announced QuickSight Q to make it simpler for end users to get more value from their business data using machine learning, among other new analytics capabilities.

Amazon QuickSight Q is a machine learning-powered capability for Amazon QuickSight that lets users type natural language questions about their business data and receive accurate answers in seconds. As users begin typing their questions, Amazon QuickSight Q provides auto-complete suggestions with key phrases and business terms, and automatically performs spell-check and acronym and synonym matching, so users do not have to worry about typos or remembering the exact business terms for the data. Amazon QuickSight Q uses deep learning and machine learning (natural language processing, schema understanding, and semantic parsing for SQL code generation) to generate a data model that automatically understands the meaning of and relationships between business data, so users receive accurate answers to their business questions and do not have to wait for a data model to be built. Amazon QuickSight Q comes pre-trained on data from various domains and industries like sales, marketing, operations, retail, human resources, pharmaceuticals, insurance, and energy.
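The semantic-parsing idea behind natural-language Q&A over data can be sketched in a few lines: map question terms onto a known schema and emit SQL. QuickSight Q's deep learning models are of course far more sophisticated; the hand-written rules and the `sales` table below are invented purely for illustration.

```python
# Toy illustration of natural-language-to-SQL ("semantic parsing"):
# match question terms against a known schema and emit a query.
# The schema and rules are hypothetical, not QuickSight Q's internals.

SCHEMA = {"sales": ["region", "revenue", "year"]}  # invented table

def question_to_sql(question):
    q = question.lower()
    table = "sales"
    # Look for a "by <column>" grouping phrase in the question.
    group = next((c for c in SCHEMA[table] if f"by {c}" in q), None)
    if group:
        return f"SELECT {group}, SUM(revenue) FROM {table} GROUP BY {group}"
    return f"SELECT SUM(revenue) FROM {table}"

print(question_to_sql("What was revenue by region?"))
# SELECT region, SUM(revenue) FROM sales GROUP BY region
```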

Ontotext Platform 3.3 streamlines knowledge graph lenses & GraphQL interfaces

The new version of the Platform includes a major new component, the Ontotext Platform Workbench, a web-based administration interface that enables engineering teams to generate, enrich, validate, and manage knowledge graph schemas. It simplifies the work of subject matter experts by removing the need to know all platform configuration endpoints and commands, and streamlines adoption with a graphical interface. The Workbench provides a wizard that guides the user step by step through generating, validating, and managing schemas.

Schemas, composed of declarative definitions of semantic objects, are at the heart of the zero-code approach to accessing and managing knowledge graphs in Ontotext Platform 3. These schemas act like a lens that focuses on specific parts of a large-scale knowledge graph, enabling querying and updates via GraphQL interfaces. This makes it easier for application developers to access knowledge graphs without tedious development of back-end APIs or complex SPARQL. The underlying Semantic Object service implements an efficient GraphQL-to-SPARQL translation as well as a generic, configurable security model.
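The flavor of GraphQL-to-SPARQL translation described above can be sketched as follows: a flat GraphQL-style field selection becomes SPARQL triple patterns. This is a generic, assumption-laden sketch, not Ontotext's actual Semantic Object service; the `Person` type and its properties are hypothetical.

```python
# Sketch of translating a GraphQL-style selection such as
# { Person { name age } } into a SPARQL query. Type and property
# names are invented; Ontotext's real translation is far more general.

def graphql_to_sparql(object_type, fields):
    """Build a SPARQL query for one object type and its selected fields."""
    triples = "\n  ".join(f"?s :{f} ?{f} ." for f in fields)
    vars_ = " ".join(f"?{f}" for f in fields)
    return (f"SELECT ?s {vars_} WHERE {{\n"
            f"  ?s a :{object_type} .\n"
            f"  {triples}\n}}")

print(graphql_to_sparql("Person", ["name", "age"]))
```

Each selected field contributes one triple pattern, which is why a schema "lens" can expose only a chosen slice of the full graph.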

In earlier versions, setting up the Platform license required some technical skills, and users without a strong IT operations background often had difficulty configuring the license in the Docker Compose file. With the new version, all users can set up the license through the Workbench more easily and avoid issues related to the license path, operating system specifics, and more.

Stardog announces cloud-native Enterprise Knowledge Graph Platform

Stardog announced Stardog Cloud, a cloud-native Enterprise Knowledge Graph Platform. Stardog Cloud connects data in every cloud as well as in on-premise environments. Deployed as a managed service, Stardog Cloud transforms existing enterprise data infrastructure into a comprehensive data fabric: it answers complex queries across data silos and unifies data across the enterprise ecosystem based on its meaning and context to create a connected network of knowledge. Highlights of Stardog Cloud include:

  • Data Virtualization: Allows organizations to leave data within existing data sources and silos and query it where it lives – whether on-premise or in the cloud – and perform complex queries across silos.
  • Semantic Models: Rationalizes meaning between legacy on-premise applications and new remote, cloud, or on-premise applications in a flexible, scalable way. Seamlessly supports multiple apps and data models in order to bring context to data and support better decision-making.
  • Inference Engine: Connects data without having to rely only on explicit key matching. Leverages machine learning and inferencing regardless of the data domain or subject area and then uses this rich web of information to discover new relationships.
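The inference idea in the last bullet — connecting data without relying only on explicit key matching — can be illustrated with a toy rule that computes the transitive closure of one relationship over a set of triples. This is a generic sketch with invented company names, not Stardog's reasoner.

```python
# Toy graph inference: derive implicit edges from explicit triples
# with a transitivity rule. Data and predicate names are invented;
# this is not Stardog's inference engine.

def infer_transitive(triples, predicate):
    """Return the triples plus the transitive closure of one predicate."""
    facts = set(triples)
    changed = True
    while changed:  # repeat until no new fact is derived
        changed = False
        for (a, p1, b) in list(facts):
            for (c, p2, d) in list(facts):
                if p1 == p2 == predicate and b == c and (a, predicate, d) not in facts:
                    facts.add((a, predicate, d))
                    changed = True
    return facts

data = [("ACME", "subsidiaryOf", "Globex"),
        ("Globex", "subsidiaryOf", "Initech")]
closure = infer_transitive(data, "subsidiaryOf")
print(("ACME", "subsidiaryOf", "Initech") in closure)  # True
```

No row in the source data states that ACME belongs to Initech; the relationship is discovered by reasoning over the graph rather than by key matching.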

SYSTRAN announces Translation Widget to help SMBs globalize websites

SYSTRAN announced its Translation Widget to allow SMBs to easily translate their websites to reach global audiences. The SYSTRAN Translation Widget is inserted directly into the website to translate text for all visitors and activates based on the visitor's settings, cookies, and preferred browser language. The widget can be deployed across most Internet browsers and internal company intranets, and visitors can access it on PCs, Mac laptops, tablets, and smartphones. Users can also create customized user dictionaries that help better translate special terminology, acronyms, and industry-specific language. The new JavaScript Translation Widget uses SYSTRAN's Marketplace Catalog, which offers hundreds of language combinations in different domains, so translations are adapted to each business's industry and professional jargon to provide a better and more meaningful experience for website visitors.

expert.ai launches new tools for NLP apps and edge AI

expert.ai is launching two products at this week's KMWorld & Text Analytics Forum Connect 2020: Studio, an IntelliJ plug-in for simplifying natural language processing (NLP) application development, and Edge NL API for enabling seamless artificial intelligence (AI) deployment on premise or on a private cloud. Studio leverages natural language understanding (NLU) abilities to streamline the development of NLP applications. Users can take advantage of core AI features for categorization, extraction, and sentiment analysis to build custom language-based applications. With the out-of-the-box knowledge graph, developers and data scientists can reduce development time and training costs while gaining a precise comprehension of their content so that it can be used more efficiently and at scale to support business operations. Studio provides a friendly dashboard to support advanced testing through a rich set of metrics that provides input to reach a high level of accuracy.

Edge NL API enables developers and data scientists to run NLP applications built with Studio locally or on their private cloud, as well as apply capabilities to other information-intensive applications or integrate them into any pre-existing workflow, database, or legacy product. Studio and Edge NL API are free, and come with sample projects and software development kits (SDKs) to help developers get started.
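The three core capabilities mentioned above — categorization, extraction, and sentiment analysis — can be illustrated with a deliberately naive keyword-rule sketch. Real NLU pipelines use trained models; the categories, word lists, and regex below are made up for illustration only.

```python
# Naive keyword-based sketch of categorization, extraction, and
# sentiment analysis. All rules and category names are invented;
# production NLP tools use trained NLU models instead.
import re

CATEGORIES = {"finance": {"invoice", "payment"}, "travel": {"flight", "hotel"}}
POSITIVE, NEGATIVE = {"great", "fast"}, {"late", "poor"}

def analyze(text):
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    category = next((c for c, kw in CATEGORIES.items() if tokens & kw), "other")
    score = len(tokens & POSITIVE) - len(tokens & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    amounts = re.findall(r"\$\d+(?:\.\d{2})?", text)  # toy "extraction"
    return {"category": category, "sentiment": sentiment, "amounts": amounts}

print(analyze("Great service: the $20 invoice payment was fast."))
# {'category': 'finance', 'sentiment': 'positive', 'amounts': ['$20']}
```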

Inrupt releases enterprise version of Solid Server

Tim Berners-Lee‘s (unedited) announcement…

Today marks a huge milestone in Inrupt‘s journey to deliver on my vision for a vibrant web of shared benefit and opportunity. I’m thrilled that the first enterprise-ready version of a Solid Server, Inrupt’s ESS, is now available for businesses and organizations. It’s the fruit of two years of work by our outstanding team. These technologies will fundamentally change how organizations connect people with their data and create value together. It’s going to drive groundbreaking new opportunities that not only restore trust in data but also enhance our lives.

We’ve reached this milestone alongside a trusted cohort of early adopters including the BBC, NatWest Bank, the UK’s National Health Service, and the Flanders Government. They’re each proving what’s possible for their users by changing the way they think about, share, and use data. You can read more about these important pilots in this blog post from Inrupt CEO, John Bruce.

The web was always meant to be a platform for creativity, collaboration, and free invention – but that’s not what we are seeing today. Today, business transformation is hampered by different parts of one’s life being managed by different silos, each of which looks after one vertical slice of life, but where the users and teams can’t get the insight from connecting that data. Meanwhile, that data is exploited by the silo in question, leading to increasing, very reasonable, public skepticism about how personal data is being misused. That in turn has led to increasingly complex data regulations.

There had to be a better way. The Solid architecture provides that better way.

I founded Inrupt to trigger an inevitable shift in how the web operates, to mobilize resources and set a long-term direction in motion. Today that shift takes a significant step.

The technologies we’re releasing today are a component of a much-needed course correction for the web. It’s exciting to see organizations using Solid to improve the lives of everyday people – through better healthcare, more efficient government services and much more.

These first major deployments of the technology will kick off the network effect necessary to ensure the benefits of Solid will be appreciated on a massive scale. Once users have a Solid Pod, the data there can be extended, linked, and repurposed in valuable new ways. And Solid’s growing community of developers can rest assured that their apps will benefit from the widespread adoption of reliable Solid Pods, already populated with valuable data that users are empowered to share.

Ultimately, this new foundation of trust and cooperation will lead to entirely new business models that actually benefit users as well.

Starting today, more organizations worldwide can take the first step towards building a trusted web where innovation flourishes, and everyone – businesses, developers, and web users – share the benefits. We hope you’ll join us on this exciting journey.

Microsoft adds Hindi to Text Analytics service to strengthen Sentiment Analysis

Microsoft announced the addition of Hindi as the latest language under its Text Analytics service to support businesses and organizations with customer sentiment analysis. Text Analytics is part of Microsoft Azure Cognitive Services. Using this service, organizations can find out what people think of their brand or topic, as it enables analyzing Hindi text for clues about positive, neutral, or negative sentiment. The Text Analytics service can be used for any textual or audio input or feedback in combination with the Azure Speech-to-Text service. Microsoft’s Text Analytics service uses the latest AI models to analyze content in Hindi, using natural language processing (NLP) for text mining and text analysis. The functionality provided by Text Analytics includes sentiment analysis, opinion mining, key phrase extraction, language detection, named entity recognition, and PII detection. Sentiment analysis currently supports more than 20 languages, including Hindi.

Microsoft Text Analytics service’s Sentiment Analysis feature evaluates text and returns confidence scores between 0 and 1 for positive, neutral, and negative sentiment for each document and for the sentences within it. The service also provides sentiment labels (such as “negative”, “neutral”, and “positive”) based on the highest confidence score at the sentence and document level. It can be accessed from the Azure cloud and on-premises using containers. This helps brands detect positive and negative tonality in customer reviews, social media and call center conversations, and forum discussions, among other channels, no matter where their data resides.
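The labeling scheme described above — three confidence scores in [0, 1], with the label taken from the highest — is simple enough to sketch directly. The scores below are invented sample values; a real application would obtain them from the Azure Text Analytics API rather than hard-coding them.

```python
# Sketch of sentiment labeling from confidence scores: the label is
# whichever of positive/neutral/negative has the highest score.
# The sample scores are invented, not real API output.

def sentiment_label(scores):
    """Pick the sentiment label with the highest confidence score."""
    return max(scores, key=scores.get)

sentence_scores = {"positive": 0.04, "neutral": 0.12, "negative": 0.84}
print(sentiment_label(sentence_scores))  # negative
```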


© 2020 The Gilbane Advisor
