Curated for content, computing, and digital experience professionals

Category: Semantic technologies (Page 2 of 72)

Our coverage of semantic technologies goes back to the early 90s when search engines focused on searching structured data in databases were looking to provide support for searching unstructured or semi-structured data. This early Gilbane Report, Document Query Languages – Why is it so Hard to Ask a Simple Question?, analyses the challenge back then.

Semantic technology is a broad topic that includes all natural language processing, as well as the semantic web, linked data processing, and knowledge graphs.


Ontotext releases Metadata Studio 3.2

Ontotext, a provider of enterprise knowledge graph (EKG) technology and semantic database engines, released Ontotext Metadata Studio version 3.2. The metadata management and tagging control solution helps organizations transform content into knowledge. Users can draw on the taxonomy instance data in their knowledge graph for explainable, customizable, out-of-the-box taxonomy-driven tagging.

Ontotext Metadata Studio 3.2 makes it easy for users to determine whether a use case can be automated with any third-party text mining service, simplifies orchestrating complex text analysis across those services, and lets users evaluate their quality against internal benchmarks or against one another.

With version 3.2, Ontotext Metadata Studio enables non-technical end users to create, evaluate, and improve the quality of their text analytics service by tagging and linking against their own business domain model. With extensive explainability and control features, users who are not proficient in text analytics techniques can understand the causal relationships between the underlying dataset, the specific text analytics service configuration, and the final output.
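Taxonomy-driven tagging of this kind can be approximated in miniature with a gazetteer: match taxonomy labels in text and emit the matching concept IDs, so every tag is explainable by the label that triggered it. A hypothetical sketch (the taxonomy, concept IDs, and matching logic are invented for illustration; Metadata Studio's actual tagging is far more sophisticated):

```python
import re

# Hypothetical taxonomy: concept ID -> preferred label and synonyms.
TAXONOMY = {
    "ex:GraphDatabase": ["graph database", "graph db"],
    "ex:Ontology": ["ontology", "ontologies"],
    "ex:Taxonomy": ["taxonomy", "taxonomies"],
}

def tag(text):
    """Return (concept_id, matched_label) pairs found in the text."""
    found = []
    lowered = text.lower()
    for concept, labels in TAXONOMY.items():
        for label in labels:
            # Whole-word match so short labels don't fire inside other words.
            if re.search(r"\b" + re.escape(label) + r"\b", lowered):
                found.append((concept, label))
                break  # one hit per concept is enough
    return found

tags = tag("We store our taxonomies in a graph database.")
```

Because each tag carries the label that produced it, a non-technical user can see exactly why a concept was assigned — the kind of causal link between input, configuration, and output the release emphasizes.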

This enhancement enables efficient user intervention, making the human truly in the loop and completely in control of the whole extraction process. Ontotext Metadata Studio is domain neutral and applicable for various domains and use cases.

https://www.ontotext.com

Ontotext releases GraphDB 10.2

Ontotext, a provider of enterprise knowledge graph (EKG) technology and semantic database engines, launched GraphDB 10.2, an RDF database for knowledge graphs. GraphDB enables organizations to link diverse data, index it for semantic search, and enrich it via text analysis to build large-scale knowledge graphs. With improved cluster backup and cloud support, GraphDB lowers traditional memory requirements and provides a more transparent memory model.

Users can oversee system health and diagnose problems more easily using Prometheus, an industry-standard monitoring toolkit, or by monitoring performance directly within the GraphDB Workbench itself. The release also adds support for X.509 client certificate authentication, for greater flexibility when accessing a secured GraphDB instance.

Backups can also be stored directly in Amazon S3, ensuring the most up-to-date data is protected against inadvertent changes or hardware failures in local on-premises infrastructure.

Internal structures were also redesigned, and memory usage was moved from off-heap to the Java heap, yielding a more straightforward memory configuration in which a single number (the Java maximum heap size) controls the maximum memory available to GraphDB. Memory used during RDF Rank computation was also optimized, making it possible to compute the rank of larger repositories with less memory.
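Under that model, bounding GraphDB's memory comes down to one standard JVM flag. A hedged sketch — the environment variable name and startup path below are assumptions and should be checked against the GraphDB documentation for your version:

```shell
# Cap GraphDB's total memory via the standard JVM max-heap flag (-Xmx).
# GDB_JAVA_OPTS is assumed here to be the variable GraphDB's startup
# scripts read for extra JVM options; verify against your install.
export GDB_JAVA_OPTS="-Xmx8g"
./bin/graphdb
```

With off-heap usage folded into the heap, there is no longer a second, hidden memory budget to tune alongside `-Xmx`.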

https://www.ontotext.com

TerminusDB launches TerminusCMS

TerminusDB announced the launch of a product called TerminusCMS that connects content, documentation, data, and processes to turn content management from a resource drain into a cross-functional semantic knowledge centre.

TerminusCMS is an open-source, headless, and developer-focused content and knowledge management system. Under the hood is an RDF graph database that connects JSON documents into a graph. It is schema-based, and the schema prompts developers to model their knowledge management requirements. By modeling requirements and incorporating operational/transactional data, content, documentation, and media, businesses create an organization-wide knowledge graph. This knowledge graph not only bridges content and data silos but also includes business logic in the form of graph edges: the relationships between data and content.
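The document-to-graph idea can be illustrated in a few lines: JSON documents whose fields reference other documents' IDs form the nodes and edges of a graph. A minimal, hypothetical sketch (the document shapes are invented; TerminusDB's actual schema language and storage are far richer):

```python
# Hypothetical JSON documents; fields starting with '@' are metadata,
# and any value that matches another document's @id becomes an edge.
docs = [
    {"@id": "Doc/guide", "@type": "Document", "author": "Person/ana"},
    {"@id": "Person/ana", "@type": "Person", "team": "Team/content"},
    {"@id": "Team/content", "@type": "Team"},
]

ids = {d["@id"] for d in docs}

def edges(documents):
    """Yield (subject, predicate, object) for every cross-document link."""
    for d in documents:
        for field, value in d.items():
            if not field.startswith("@") and value in ids:
                yield (d["@id"], field, value)

graph = list(edges(docs))
```

The edges — `author`, `team` — are exactly the "business logic as relationships" the announcement describes: they live in the data itself rather than in application code.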

Global organizations are complicated environments with huge supply chains, multi-regional teams, and local regulatory compliance needs. Semantic relationships between people, content, and data make the job of obtaining knowledge from day-to-day operations and transactions possible. TerminusCMS has an analytics engine that enables developers to use GraphQL as a proper graph query language. Often hidden transactional and operational data, and once siloed content, is discoverable and useable with TerminusCMS.

https://terminusdb.com/blog/category/content-knowledge/

OAGi releases IOF Ontology Version 202301

OAGi (Open Applications Group, Inc.) has released the 202301 suite of the IOF (Industrial Ontologies Foundry) Ontology, which includes IOF Core in Released status and the Supply Chain and Maintenance Reference Ontologies in Provisional status. Consult the README file for details of the release. It is available for immediate download at IOF Release 202301.

IOF Core is a foundation for domain ontologies such as maintenance and supply chain. IOF Core represents thousands of person-hours of development, review, refinement, and quality-checking. IOF has established processes modeled after the proven approach used by the EDM Council for the collaborative development, testing, and publication of a number of industry ontologies, including the Financial Industry Business Ontology (FIBO) and the Identification of Medicinal Products (IDMP). The 202301 release also contains the maintenance and the supply chain reference ontologies in the provisional state. IOF will constantly improve IOF Core while working on domain ontologies based on it. IOF invites organizations to contribute to industrial ontology work.

https://oagi.org
https://industrialontologies.org

Expert.ai announces new features to hybrid natural language platform

Expert.ai, experts in artificial intelligence (AI) for language understanding and language operations, released new features for its Natural Language (NL) platform enhancing natural language processing (NLP) workflow support. Employing a hybrid approach that combines NL techniques – including machine learning and knowledge-based, symbolic AI – the platform leverages unstructured data, like text in documents, applications and tools, to enable organizations across vertical domains to create new business models and optimize processes.

  • The new release enables the use of Kubernetes (K8s) to store core data on-premise, implement specific security measures or comply with specific regulatory requirements.
  • The release allows integration of third-party external knowledge sources, including Unified Medical Language System (UMLS) vocabularies such as MeSH, ICD-9, and ICD-10, and specific resources like those provided by WAND Inc., a source of domain-specific taxonomies.
  • Developers can now interact with expert.ai APIs using visual documentation, making it easy for back-end implementation and client-side consumption. Development teams can now visualize and interact with the API resources using a familiar Swagger interface.
  • Navigation of knowledge graphs (KGs): customized navigation of knowledge models to identify the strength of related concepts and connections.
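The hybrid approach the platform is built on — high-precision symbolic rules combined with a statistical fallback — can be sketched in miniature. Everything below is invented for illustration (categories, rules, vocabularies); it shows the pattern, not expert.ai's implementation:

```python
# Symbolic layer: hand-written, knowledge-based rules with high precision.
RULES = {
    "invoice": "finance",
    "diagnosis": "medical",
}

# Stand-in for the statistical layer: category vocabularies a trained
# model would normally provide; here they are simply hard-coded.
VOCAB = {
    "finance": {"payment", "amount", "account"},
    "medical": {"patient", "symptom", "treatment"},
}

def classify(text):
    words = set(text.lower().split())
    # 1. Symbolic pass: a rule match wins outright and is explainable.
    for trigger, category in RULES.items():
        if trigger in words:
            return category, "symbolic"
    # 2. Statistical fallback: pick the category with the most overlap.
    best = max(VOCAB, key=lambda c: len(words & VOCAB[c]))
    return best, "statistical"

result = classify("the patient reported a new symptom")
```

The appeal of the hybrid pattern is that the symbolic layer stays auditable while the statistical layer covers inputs no rule anticipated.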

https://www.expert.ai

Zeta Alpha integrates GPT with its semantic neural engine

Zeta Alpha, a neural search and discovery platform, announced it has integrated OpenAI’s GPT with its semantic neural search engine to provide more reliable and explainable AI-generated answers to enterprise search queries. This capability gives workers the ability to leverage GPT to access knowledge hidden in troves of internal company data.

Generative AI models like GPT tend to ‘hallucinate,’ or give answers that seem plausible but are not factually correct. This prevents organizations from adopting AI tools for enterprise search and knowledge management. The combination of Zeta Alpha’s intelligent neural search engine and advances in GPT-3 reduces this problem by applying natural language understanding. Other enhancements include:

  • InPars v2, a GPT-powered neural search model that enables fast tuning on synthetic in-domain data without the cost of creating terminology lists and taxonomies.
  • Zeta Alpha enables users to ask a question and get contextually relevant results, automatically saving text to a spreadsheet or note for further analysis, and mapping back to the location where the document is saved for future access.
  • Visualizing the information landscape in a semantic map and interpreting it with summaries by GPT can guide knowledge workers in the right direction to answer important strategic questions.
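Grounding generated answers in retrieved documents — the pattern Zeta Alpha is describing — can be sketched without any model at all: retrieve the best-matching passage, then have the "generator" answer only from that passage and cite it. A toy illustration (corpus, scoring, and answer format are all invented; real systems use neural retrievers and an LLM):

```python
# A tiny internal "corpus" standing in for enterprise documents.
CORPUS = {
    "doc-1": "Our Q3 revenue grew 12 percent year over year.",
    "doc-2": "The onboarding process takes five business days.",
}

def retrieve(query):
    """Return the doc id whose text shares the most words with the query."""
    q = set(query.lower().split())
    return max(CORPUS, key=lambda d: len(q & set(CORPUS[d].lower().split())))

def answer(query):
    """Answer strictly from the retrieved source, and cite it."""
    doc_id = retrieve(query)
    return f"{CORPUS[doc_id]} [source: {doc_id}]"

reply = answer("how long does onboarding take")
```

Because the answer is constrained to retrieved text and carries a citation, a wrong answer is at least traceable — which is the explainability claim behind combining search with generation.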

https://www.zeta-alpha.com/

Digital Science acquires metaphacts

Digital Science has completed the acquisition of metaphacts, which has become the newest member of the Digital Science family. Based in Germany, metaphacts is a knowledge graph and decision intelligence software company. Its main product metaphactory is a platform that supports customers in accelerating their adoption of knowledge graphs and driving knowledge democratization. metaphacts operates in the pharmaceutical, engineering, manufacturing, finance, insurance, retail and energy markets, and will be working most closely with Digital Science portfolio product Dimensions.

This acquisition will see metaphacts and Digital Science build new, joint knowledge democratization solutions, facilitating the interface between humans and machines, and helping transform raw data into human and machine-interpretable, actionable insights to power business decisions. metaphactory’s semantic knowledge modelling approach will be applied to the Dimensions linked information dataset to expose new, meaningful knowledge through metaphactory’s semantic search and graph exploration capabilities.

Customers can leverage this curated, packaged data solution and enrich and gain additional context for their proprietary knowledge. Additional integrations with complementary products from the Digital Science portfolio, such as OntoChem’s text analysis and data mining products, are also available.

https://metaphacts.com
https://www.digital-science.com

European Broadcasting Union announces EBUCorePlus

The EBU (European Broadcasting Union) announced EBUCorePlus, a new media metadata standard ontology for media enterprises, defined by EBU Members for the media community. It follows up on two long-standing EBU ontologies, EBUCore and CCDM (Class Conceptual Data Model), which were merged and revised. The result is EBUCorePlus, a new standard that can fully replace its predecessors, inheriting both the reliability of EBUCore and CCDM’s end-to-end coverage of the media value chain. EBUCorePlus is specified in the Web Ontology Language (OWL) and is therefore strictly semantic.

EBUCorePlus serves as a plug and play framework. It can be used out of the box, either in its entirety or just a subset of its elements. But it may also be adapted and extended to enterprise-specific needs. Especially for system integration tasks and defining requirements, projects benefit from EBUCorePlus as a business – not technology – oriented language.

The EBU’s free CorePlus Demonstrator Kit (CDK) can help with extending development skills from entity-relationship models to ontologies, from tables to triples, and from SQL to SPARQL, and is available in cloud, hybrid and on-prem versions. It contains a graph database, populated with the EBUCorePlus ontology and sample data.
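The "from tables to triples" shift the CDK teaches can be shown in plain Python: a triple store is just a collection of (subject, predicate, object) statements, and a query is a pattern match over it — which is the essence of what SPARQL does. A toy sketch (the terms below are invented, not actual EBUCorePlus classes):

```python
# A tiny triple store: each fact is one (subject, predicate, object) row.
triples = [
    ("ex:ep1", "rdf:type", "ex:Episode"),
    ("ex:ep1", "ex:partOf", "ex:series9"),
    ("ex:ep2", "rdf:type", "ex:Episode"),
    ("ex:series9", "rdf:type", "ex:Series"),
]

def match(s=None, p=None, o=None):
    """SPARQL-style pattern match: None behaves like a query variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# 'SELECT ?s WHERE { ?s rdf:type ex:Episode }' becomes:
episodes = [s for s, _, _ in match(p="rdf:type", o="ex:Episode")]
```

Unlike a SQL table, no schema fixes the columns in advance: any new relationship is just another triple, which is what makes ontologies like EBUCorePlus extensible.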

https://tech.ebu.ch/news/2022/12/from-data-to-knowledge-with-ebucoreplus


© 2024 The Gilbane Advisor
