Obie announced the launch of its new browser extension to democratize access to Obie’s core search, access, and knowledge sharing functionality. It also launched Personal Pro and Personal Free plans that decouple Obie’s search functionality from Slack for the first time, expanding availability to individuals and remote teams. Obie uses natural language processing (NLP) to understand complex queries, as well as machine learning (ML) to improve results with every document search. Users can also manually add “FAQs” to store templates, text snippets, and frequently accessed information by simply highlighting the information in their browser and adding it to Obie.
Category: Semantic technologies
Our coverage of semantic technologies goes back to the early 90s, when search engines focused on searching structured data in databases were looking to provide support for searching unstructured or semi-structured data. This early Gilbane Report, “Document Query Languages – Why is it so Hard to Ask a Simple Question?”, analyzed the challenge at the time.
Semantic technology is a broad topic that includes all natural language processing, as well as the semantic web, linked data processing, and knowledge graphs.
Serviceaide, Inc., a provider of intelligent enterprise service management solutions, announced the launch of Luma Knowledge, a self-learning, knowledge-centered product that optimizes the access, creation, and reuse of enterprise knowledge to meet the service and support needs of users and customers. The maker of the AI-powered Luma Virtual Agent, Serviceaide is leveraging AI technologies such as natural language processing and machine learning in digital interactions, knowledge, and automation to bring advanced capabilities and business value to service and support functions across the enterprise. Features and capabilities:
- The Luma Knowledge hub provides a common tool to actively correlate and access information federated across the enterprise.
- Luma Knowledge offers a common semantic pathway to all enterprise knowledge.
- Natural language processing auto-extracts topics and pulls text from complex documents to auto-create FAQs.
- A dynamic guided search capability, based on available knowledge, helps users access the right information even when they don’t know exactly what to ask for, and don’t know what is in the knowledge base.
- Automated learning leverages machine learning to auto-tune retrievals and identify missing content or other related issues.
- Knowledge Sharing – Federation across multiple knowledge bases, semantic search, and guided requests deliver accurate knowledge
- Knowledge Discovery – Proactively discovering knowledge both inside an organization and from external sources
- Knowledge Improvement – Continuous monitoring of knowledge and feedback to provide recommendations for needed knowledge, correcting knowledge and searches, and retiring unused knowledge
A graph database uses graph structures with nodes, edges, and properties to represent and store data. By definition, a graph database is any storage system that provides index-free adjacency. This means that every element contains a direct pointer to its adjacent element and no index lookups are necessary. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.
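The index-free adjacency idea above can be illustrated with a minimal sketch: each node holds direct references to its neighbors, so a traversal follows pointers rather than consulting an index. The class and relation names here are purely illustrative, not any particular graph database’s API.

```python
# Minimal sketch of index-free adjacency: each node stores direct
# pointers to its adjacent nodes, so hops need no index lookup.

class Node:
    def __init__(self, label):
        self.label = label
        self.edges = []  # direct pointers: (relation, neighbor) pairs

    def connect(self, other, relation):
        self.edges.append((relation, other))

    def neighbors(self, relation=None):
        return [n for r, n in self.edges if relation is None or r == relation]

# Tiny graph: alice -KNOWS-> bob -KNOWS-> carol
alice, bob, carol = Node("alice"), Node("bob"), Node("carol")
alice.connect(bob, "KNOWS")
bob.connect(carol, "KNOWS")

# A two-hop traversal follows pointers directly; the cost depends on
# node degree, not on the total size of the graph.
friends_of_friends = [n.label
                      for friend in alice.neighbors("KNOWS")
                      for n in friend.neighbors("KNOWS")]
```

The cost per hop is proportional to the node’s degree, which is exactly the property that distinguishes graph databases from systems that resolve adjacency through index lookups.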
Ontotext (OT) and Semantic Web Company (SWC) announced a strategic partnership to meet enterprise architects’ requirements for deployment, monitoring, resilience, security, and interoperability with other enterprise IT systems. Users will be able to work with a feature-rich toolset to manage a graph composed of billions of edges that is hosted in data centers around the world. The companies have implemented an integration of the PoolParty Semantic Suite™ v.8 with GraphDB and the Ontotext Platform, which offers benefits for numerous use cases:
- GraphDB powering PoolParty: Most of the knowledge graph management tools out there bundle open-source solutions that are good at managing thousands of concepts, whereas PoolParty bundled with GraphDB manages millions of concepts and entities—without extra deployment overheads.
- PoolParty linked to high-availability GraphDB cluster: GraphDB can now be used as an external store for PoolParty, which offers a combination of performance, scalability and resilience. This is particularly relevant for organizations intent on developing tailor-made knowledge graph platforms integrated into their existing data and content management infrastructure.
- Dynamic text analysis using big knowledge graphs: PoolParty can be used to edit big knowledge graphs in order to tune the behavior of Ontotext’s text analysis pipelines, which employ vast amounts of domain knowledge to boost precision. This way the power and comprehensiveness of generic off-the-shelf natural language processing (NLP) pipelines can be custom-tailored to an enterprise.
- GraphQL benefits for PoolParty: Application developers can now access the knowledge graph via GraphQL to build end-user applications or integrate knowledge graph services with the functionality of existing systems. Ontotext Platform uses semantic business objects, defined by subject matter experts and business analysts, to generate GraphQL interfaces and transform them into SPARQL.
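To give a feel for the GraphQL-to-SPARQL transformation mentioned above, the sketch below compiles a flat, GraphQL-style field selection into a SPARQL SELECT query. This is a deliberately simplified illustration; the Ontotext Platform’s actual transformation is driven by semantic object definitions and handles far more than flat selections, and the function and IRIs here are assumptions for exposition only.

```python
# Illustrative sketch: compile a flat GraphQL-style field selection
# into a SPARQL SELECT query. Not Ontotext's actual implementation.

def graphql_to_sparql(obj_type, fields, type_iri, field_iris):
    var = "?" + obj_type.lower()
    triples = [f"{var} a <{type_iri}> ."]
    for f in fields:
        triples.append(f"{var} <{field_iris[f]}> ?{f} .")
    select_vars = " ".join("?" + f for f in fields)
    body = "\n  ".join(triples)
    return f"SELECT {select_vars} WHERE {{\n  {body}\n}}"

# Example: fetch prefLabel for SKOS concepts.
query = graphql_to_sparql(
    "Concept",
    ["prefLabel"],
    "http://www.w3.org/2004/02/skos/core#Concept",
    {"prefLabel": "http://www.w3.org/2004/02/skos/core#prefLabel"},
)
```

The appeal of this pattern is that application developers write against a familiar, typed GraphQL schema while the platform takes care of generating the corresponding graph queries.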
ProQuest is improving the accessibility of subscription and open access content on its platform with a series of enhancements designed to boost research, teaching and learning outcomes. These enhancements include:
- A new starting point for research: Users can now begin their search from the open web by visiting search.proquest.com. From their search results, they’ll be taken straight to the resources their library subscribes to.
- New preview feature: Users can search, find and preview the content of nearly a billion ProQuest documents directly from the open web for better discoverability.
- Broader discovery of open access content: Researchers can access an ever-expanding universe of scholarly full-text open access sources directly – all indexed and delivered with the same level of quality and precision as ProQuest’s subscription content.
These enhancements are now live, with no action required by libraries or their users to activate. They’re part of ProQuest’s larger, ongoing initiative to add value to its solutions, expand pathways to access and help libraries increase usage of their resources.
Doctor Evidence (DRE) has updated its newly launched DOC Analytics (“Digital Outcome Conversion”) platform with network meta-analysis (NMA) capabilities. DOC Analytics provides immediate quantitative insights into the universe of medical information using artificial intelligence/machine learning (AI/ML) and natural language processing (NLP). With the addition of indirect treatment comparison and landscape analysis using NMA, DOC Analytics is a critical, daily-use tool for strategic functions in life sciences companies. DOC Analytics allows users to conduct analyses composed of real-time results from clinical trials, real-world evidence (RWE), published literature, and any custom imported data to yield insightful direct meta-analysis, network meta-analysis, cohort analysis, or bespoke statistical outputs. Analyses are informed by AI/ML and can be made fit-to-purpose with filters for demographics, comorbidities, sub-populations, inclusion/exclusion selections, and other relevant parameters.
OpenAI announced that it is releasing an API for accessing new AI models developed by OpenAI. Unlike most AI systems, which are designed for one use case, the API today provides a general-purpose “text in, text out” interface, allowing users to try it on virtually any English language task. You can now request access in order to integrate the API into your product, develop an entirely new application, or help us explore the strengths and limits of this technology. Given any text prompt, the API will return a text completion, attempting to match the pattern you gave it. You can “program” it by showing it just a few examples of what you’d like it to do; its success generally varies with the complexity of the task. The API also allows you to hone performance on specific tasks by training on a dataset (small or large) of examples you provide, or by learning from human feedback provided by users or labelers. The API is designed to be simple enough for anyone to use, yet flexible enough to make machine learning teams more productive. In fact, many OpenAI teams are now using the API so that they can focus on machine learning research rather than distributed systems problems. Today the API runs models with weights from the GPT-3 family, with many speed and throughput improvements.
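The “program it by showing it a few examples” idea can be sketched concretely: the task demonstrations are packed into the prompt itself, and the model is asked to continue the pattern. The sketch below shows only the prompt construction; the actual API call is omitted, since model names and request parameters vary, and the `Input:`/`Output:` framing is one illustrative convention, not a required format.

```python
# Sketch of few-shot prompting against a "text in, text out" API:
# the prompt carries the instruction and a handful of demonstrations,
# and the model is expected to complete the final "Output:" line.

def build_few_shot_prompt(instruction, examples, query):
    lines = [instruction, ""]
    for source, target in examples:
        lines.append(f"Input: {source}")
        lines.append(f"Output: {target}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # left open for the model to complete
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("house", "maison")],
    "bread",
)
```

Sending `prompt` to a completion endpoint would return text continuing the pattern, which is what the article means by the API “attempting to match the pattern you gave it.”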
The field’s pace of progress means that there are frequently surprising new applications of AI, both positive and negative. We will terminate API access for obviously harmful use-cases, such as harassment, spam, radicalization, or astroturfing. But we also know we can’t anticipate all of the possible consequences of this technology, so we are launching today in a private beta rather than general availability, building tools to help users better control the content our API returns, and researching safety-relevant aspects of language technology (such as analyzing, mitigating, and intervening on harmful bias). We’ll share what we learn so that our users and the broader community can build more human-positive AI systems.
Newgen Software, a global provider of a low-code digital automation platform for managing content, processes, and communication, announced the launch of an enhanced version of its document classification service for high-volume document-handling environments. Intelligent Document Classifier 1.0 allows users to gain hidden insights by classifying documents based on structural and/or textual features. It uses machine learning (ML) and artificial intelligence (AI) to enable layout- and content-based document classification. Organizations can leverage the solution to automatically classify documents such as sales/purchase orders, enrollment and claim forms, legal documents, mailroom documents, contracts, and correspondence. This helps ensure important information is available, thereby reducing the risks and costs associated with manual document management.
Key features include:
- Image Classification – Allows users to automatically classify images using neural networks and deep learning algorithms based on structural features
- Content Classification – Enables document classification based on content, in the absence of structural features
- Trainable Machine Learning – Auto-learns definitions and features of a document class and creates a trained model
- Admin Dashboard – Generates analytics reports for a 360-degree view of the process
- Integration Capabilities – Facilitates easy integration with core business applications, content management platforms, and document capture applications
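The content-based classification described above can be conveyed with a toy sketch: build a term profile per document class from labeled examples, then assign a new document to the class whose profile it overlaps most. Real products such as the one described train ML models on far richer features; everything below is a simplified, hypothetical illustration.

```python
# Toy content-based document classifier: per-class bag-of-words
# profiles, with classification by term overlap. Illustrative only.

from collections import Counter

def train(labeled_docs):
    """Build a term-frequency profile for each class label."""
    profiles = {}
    for label, text in labeled_docs:
        profiles.setdefault(label, Counter()).update(text.lower().split())
    return profiles

def classify(profiles, text):
    """Return the label whose term profile best overlaps the document."""
    tokens = text.lower().split()
    return max(profiles, key=lambda label: sum(profiles[label][t] for t in tokens))

profiles = train([
    ("invoice", "invoice payment amount due total remittance"),
    ("contract", "agreement party clause term liability signature"),
])

label = classify(profiles, "Payment due on invoice total")
```

A trainable system along these lines “auto-learns” a class definition from examples, which is the essence of the trainable machine learning feature listed above, minus the neural networks and layout analysis.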