Curated for content, computing, and digital experience professionals

Category: Semantic technologies

Our coverage of semantic technologies goes back to the early 90s, when search engines focused on searching structured data in databases were looking to provide support for searching unstructured or semi-structured data. This early Gilbane Report, Document Query Languages – Why is it so Hard to Ask a Simple Question?, analyses the challenge as it stood at the time.

Semantic technology is a broad topic that includes all natural language processing, as well as the semantic web, linked data processing, and knowledge graphs.

Expert.ai launches AI platform for Life Sciences

Expert.ai announced availability of the Platform for Life Sciences. With the Platform for Life Sciences, teams can access advanced natural language understanding capabilities, learning methodologies, and third-party large language models such as BioBERT and BioGPT, as well as customizable pre-built knowledge models to build custom solutions.

Through a hybrid AI approach combining natural language tools, enterprise language models, and machine learning, the Platform for Life Sciences shifts the way unstructured medical and scientific data is monitored, understood, analyzed, and collated. Teams can access knowledge and insights trapped in medical articles, reports, press releases, clinical research, customer/patient interactions, consent forms, etc., as well as up-to-date knowledge based on standards like MeSH, UMLS Conditions & Interventions, and IUPAR. Pharmaceutical and Life Sciences teams can:

  • Confirm scientific claims against trusted public and private knowledge sources;
  • Extract connections between biomedical entities in literature for in-depth causality analysis to support researchers; 
  • Monitor clinical trials and social media sources filtered by any combination of indication, drug, mechanism of action, sponsor, or geography to gain insight for clinical trials; 
  • Accelerate the quality control process of clinical and preclinical reports analysis using sensitive and proprietary data sources prior to their submission to regulatory bodies.
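The entity extraction and relation discovery described above can be sketched with a toy dictionary tagger; the drug and condition lists below are illustrative stand-ins for curated vocabularies like MeSH or UMLS, not the platform's actual models:

```python
import re
from itertools import combinations

# Illustrative vocabularies standing in for curated sources like MeSH/UMLS.
DRUGS = {"metformin", "aspirin"}
CONDITIONS = {"diabetes", "inflammation"}

def extract_entities(sentence: str) -> list[tuple[str, str]]:
    """Return (entity, type) pairs found in one sentence."""
    found = []
    for tok in re.findall(r"[a-z]+", sentence.lower()):
        if tok in DRUGS:
            found.append((tok, "drug"))
        elif tok in CONDITIONS:
            found.append((tok, "condition"))
    return found

def cooccurring_pairs(text: str) -> set[tuple[str, str]]:
    """Pair up entities appearing in the same sentence, as candidate
    relations for downstream causality analysis."""
    pairs = set()
    for sentence in re.split(r"[.!?]", text):
        ents = {e for e, _ in extract_entities(sentence)}
        for a, b in combinations(sorted(ents), 2):
            pairs.add((a, b))
    return pairs

text = "Metformin is widely used in diabetes. Aspirin reduces inflammation."
print(sorted(cooccurring_pairs(text)))
# [('aspirin', 'inflammation'), ('diabetes', 'metformin')]
```

Production systems replace the dictionaries with trained biomedical language models, but the pipeline shape — tag entities, then propose relations between co-occurring mentions — is the same.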

Stardog introduces Stardog 9

Stardog, a provider of an enterprise knowledge graph platform, announced Stardog 9, with a range of new features and enhancements that enable organizations to easily connect data, people, and processes, and improve performance, scalability, and security. With this release, Stardog’s knowledge graph powered semantic layer has new integrations for Azure Synapse, Collibra Data Governance and Databricks. Benefits include:

  • Expanded Data Access: Stardog 9 supports federated access to Azure Synapse which enhances connectivity to data in Azure Data Lake Storage Gen-2 (ADLS2), reducing the friction in accessing and connecting data through meaning for self-serve analytics.
  • Activated Metadata: Stardog 9 extends Stardog’s Knowledge Catalog to harvest enterprise metadata with integrations for Collibra and Microsoft Purview Data catalogs (in-preview mode only), and any JDBC-accessible data source. These integrations make it easy to semantically-enrich technical metadata with business concepts and enable Data Governance teams and end users to search, query, and explore data assets with an Enterprise Metadata Knowledge Graph.
  • Smart, Automated Entity Linking Across Data Silos: Stardog 9 can identify and link data associated with business objects across data landscapes for better decisions in support of use cases from Customer 360 to Digital Twin to Fraud Detection, leveraging Databricks Spark to process data.

Expert.ai and Reveal Group partner to combine NLP and RPA

Expert.ai and Reveal Group announced a partnership to help organizations extend the value of intelligent automation programs with natural language processing and understanding (NLP/NLU). Robotic process automation (RPA) makes organizations more profitable and responsive, streamlining enterprise workflows and enhancing employee engagement and productivity by removing mundane tasks from their workdays. By adding NLP/NLU to RPA, enterprises can increase the flexibility and scalability of automation, expanding deployment to more complex use cases and business processes by making sense of unstructured language data. The ability to understand, analyze, and use unstructured data is critical for enabling truly intelligent automation across the entirety of an enterprise's data assets.
The hybrid AI platform complements Reveal Group's expertise in intelligent automation services. With the platform, NLP outputs, including intent detection, automatic categorization, identification of emotional and behavioral traits, entity extraction, and sentiment analysis, can be deployed and delivered by Reveal Group to automate multiple use cases, from common cross-industry use cases (email triage in customer service, data analysis, comparison, and extraction in legal departments) to more industry-oriented processes (claims management in insurance companies, loan origination and customer onboarding in banking and financial services).
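The pattern of NLP outputs feeding an RPA router can be illustrated with a toy rule-based sketch; the keyword lists, categories, and queue names are invented for the example and are not expert.ai's or Reveal Group's actual models:

```python
import re

# A toy stand-in for NLP outputs (category, sentiment) feeding an RPA
# router. Keyword lists and queue names are invented for illustration.
NEGATIVE = {"broken", "refund", "angry", "complaint"}
CATEGORIES = {
    "billing": {"invoice", "refund", "charge"},
    "claims": {"claim", "accident", "damage"},
}

def analyze(email: str) -> dict:
    """Produce structured output from free text (the NLP/NLU step)."""
    words = set(re.findall(r"[a-z]+", email.lower()))
    category = next(
        (name for name, kw in CATEGORIES.items() if words & kw), "general"
    )
    sentiment = "negative" if words & NEGATIVE else "neutral"
    return {"category": category, "sentiment": sentiment}

def route(email: str) -> str:
    """Pick a work queue from the structured output (the RPA step)."""
    result = analyze(email)
    if result["sentiment"] == "negative":
        return "priority-" + result["category"]
    return result["category"]

print(route("Please refund this invoice, the product arrived broken"))
# priority-billing
```

The point of the combination is visible even at this scale: the bot never reads prose, it only branches on the structured fields the language layer emits.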

Kobai launches Saturn Knowledge Graph

Kobai, provider of a codeless knowledge graph platform, announced the availability of Kobai Saturn, a knowledge graph built to harness the scale, performance, and cost efficiency of the lakehouse architecture. Kobai Saturn extends the capabilities of the Kobai Platform, integrating every use case and function into a single semantic layer.

Business users need quick insights to make day-to-day decisions, which require connected data from across the enterprise. With Kobai Saturn, organizations can combine the ease of knowledge graphs with the scalability of a data warehouse. New capabilities include:

  • Direct integration: embedded in the data layer, Saturn lets organizations query data without moving it from the lake or warehouse, following W3C and Lakehouse open standards for complete interoperability
  • Improved performance: on-demand and burstable compute leveraging the underlying data layer for faster graph queries and ML training without virtualization
  • Seamless collaboration: publish business questions as SQL views to integrate with existing data science and business intelligence tools
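The "publish a business question as a SQL view" idea can be sketched with a tiny in-memory example; the edge-table schema and the maintenance question below are illustrative, not Kobai's actual data model:

```python
import sqlite3

# Graph edges live in one table; a saved business question becomes a
# view that BI tools can query like any other table. Schema and data
# are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE edges (subject TEXT, predicate TEXT, object TEXT);
    INSERT INTO edges VALUES
        ('pump-7',  'installed_in', 'plant-a'),
        ('pump-9',  'installed_in', 'plant-b'),
        ('pump-7',  'status',       'faulty'),
        ('pump-9',  'status',       'ok');
    -- The "business question": which assets in which plant are faulty?
    CREATE VIEW faulty_assets AS
        SELECT loc.subject AS asset, loc.object AS plant
        FROM edges loc
        JOIN edges st ON st.subject = loc.subject
        WHERE loc.predicate = 'installed_in'
          AND st.predicate = 'status' AND st.object = 'faulty';
""")
print(con.execute("SELECT * FROM faulty_assets").fetchall())
# [('pump-7', 'plant-a')]
```

A dashboard or notebook only ever sees `faulty_assets` as a plain table, while the graph semantics stay encoded in the view definition.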

Kobai’s codeless platform provides a business-first approach and a collaborative environment to rapidly share insights across the entire organization. The new Kobai Saturn knowledge graph works directly with Kobai’s Studio framework and Tower visualization products.

Ontotext releases Metadata Studio 3.2

Ontotext, a provider of enterprise knowledge graph (EKG) technology and semantic database engines, released Ontotext Metadata Studio version 3.2. The metadata management and tagging control solution helps organizations to transform content into knowledge. Users can utilize the taxonomical instance data in their knowledge graph to achieve explainable and customizable out-of-the-box taxonomy-driven tagging.

Ontotext Metadata Studio 3.2 makes it easy for users to determine whether a use case can be automated with any third-party text mining service, simplifies the orchestration of complex text analysis across third-party services, and evaluates their quality against internal benchmarks or against one another.

With version 3.2, Ontotext Metadata Studio enables non-technical end users to create, evaluate, and improve the quality of their text analytics service by tagging and linking against their own business domain model. With extensive explainability and control features, users who are not proficient in text analytics techniques can understand the causal relationships between the underlying dataset, the specific text analytics service configuration, and the final output.

This enhancement enables efficient user intervention, putting the human truly in the loop and completely in control of the whole extraction process. Ontotext Metadata Studio is domain neutral and applicable to a variety of domains and use cases.
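Benchmark-style evaluation of a tagging service ultimately reduces to comparing predicted tags against gold-standard annotations; a minimal sketch, with made-up documents and tags rather than Ontotext's evaluation machinery:

```python
# Compare a tagging service's output against a gold-standard set of
# (document, tag) pairs. Documents and tags are invented for illustration.
def precision_recall_f1(gold: set, predicted: set) -> tuple[float, float, float]:
    tp = len(gold & predicted)              # tags the service got right
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("doc1", "Aspirin"), ("doc1", "Headache"), ("doc2", "Insulin")}
predicted = {("doc1", "Aspirin"), ("doc2", "Insulin"), ("doc2", "Glucose")}

p, r, f1 = precision_recall_f1(gold, predicted)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
# precision=0.67 recall=0.67 f1=0.67
```

Running the same gold set against two services gives the head-to-head comparison the release describes, with no text analytics expertise needed to read the scores.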

Ontotext releases GraphDB 10.2

Ontotext, a provider of enterprise knowledge graph (EKG) technology and semantic database engines, launched GraphDB 10.2, an RDF database for knowledge graphs. GraphDB enables organizations to link diverse data, index it for semantic search, and enrich it via text analysis to build large-scale knowledge graphs. With improved cluster backup and cloud support, GraphDB lowers traditional memory requirements and provides a more transparent memory model.

Users can oversee system health and diagnose problems more easily using the industry-standard toolkit Prometheus, or by monitoring performance directly within the GraphDB Workbench itself. The solution also includes support for X.509 client certificate authentication for greater flexibility when accessing a secured GraphDB instance.

Backups can also be stored directly in Amazon S3 storage to ensure the most up-to-date data is securely protected against inadvertent changes or hardware failures in local on-premises infrastructure.

Internal structures were also redesigned and memory usage was moved from off-heap to the Java heap, yielding a more straightforward memory configuration in which a single number (the Java maximum heap size) controls the maximum memory available to GraphDB. Memory used during RDF Rank computation was also optimized, making it possible to compute the rank of larger repositories with less memory.
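RDF Rank is, in spirit, a PageRank-style importance score over the nodes of the graph; a toy stdlib sketch over an invented triple set (GraphDB's actual algorithm and parameters may differ):

```python
# A PageRank-style iteration over a toy RDF-like edge list, in the
# spirit of RDF Rank. Triples and parameters are invented for illustration.
triples = [
    ("alice", "knows", "bob"),
    ("bob",   "knows", "carol"),
    ("carol", "knows", "bob"),
]

nodes = {n for s, _, o in triples for n in (s, o)}
out_edges = {n: [o for s, _, o in triples if s == n] for n in nodes}

rank = {n: 1.0 / len(nodes) for n in nodes}  # uniform starting scores
damping = 0.85
for _ in range(50):                          # iterate to convergence
    new = {}
    for n in nodes:
        incoming = sum(
            rank[s] / len(out_edges[s])
            for s in nodes if n in out_edges[s]
        )
        new[n] = (1 - damping) / len(nodes) + damping * incoming
    rank = new

print(max(rank, key=rank.get))  # bob receives the most rank
```

The memory cost of the real computation scales with the number of nodes and their scores, which is exactly what the 10.2 optimization targets.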

TerminusDB launches TerminusCMS

TerminusDB announced the launch of a product called TerminusCMS that connects content, documentation, data, and processes to turn content management from a resource drain into a cross-functional semantic knowledge centre.

TerminusCMS is an open-source, headless, and developer-focused content and knowledge management system. Under the hood is an RDF graph database that connects JSON documents into a graph. It is schema-based, and the schema prompts developers to model their knowledge management requirements. By modeling requirements and incorporating operational/transactional data, content, documentation, and media, businesses create an organization-wide knowledge graph. This knowledge graph not only bridges content and data silos but also includes business logic in the form of graph edges: the relationships between data and content.
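The idea of JSON documents whose reference fields become graph edges can be sketched in a few lines; the documents, ids, and field names below are invented for illustration and are not the TerminusCMS schema or API:

```python
# JSON-style documents whose reference fields become graph edges,
# sketching the storage idea behind a document-graph CMS. Ids and
# field names are invented for illustration.
docs = {
    "product/1": {"name": "Widget", "manual": "doc/manual-1"},
    "doc/manual-1": {"title": "Widget Manual", "author": "person/ada"},
    "person/ada": {"name": "Ada"},
}

def edges(docs: dict) -> list[tuple[str, str, str]]:
    """Treat any string value that is another document's id as an edge."""
    out = []
    for doc_id, body in docs.items():
        for field, value in body.items():
            if isinstance(value, str) and value in docs:
                out.append((doc_id, field, value))
    return out

def reachable(docs: dict, start: str) -> set[str]:
    """Follow edges from one document; a tiny stand-in for a graph query."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(o for s, _, o in edges(docs) if s == node)
    return seen

print(sorted(reachable(docs, "product/1")))
# ['doc/manual-1', 'person/ada', 'product/1']
```

Everything connected to a product — its manual, the manual's author — falls out of a traversal, which is the cross-silo linking the product promises.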

Global organizations are complicated environments with huge supply chains, multi-regional teams, and local regulatory compliance needs. Semantic relationships between people, content, and data make it possible to obtain knowledge from day-to-day operations and transactions. TerminusCMS has an analytics engine that enables developers to use GraphQL as a proper graph query language. Transactional and operational data that was often hidden, and content that was once siloed, become discoverable and usable with TerminusCMS.

OAGi releases IOF Ontology Version 202301

OAGi (Open Applications Group, Inc.) has released the 202301 suite of the IOF (Industrial Ontologies Foundry) Ontology, which includes IOF Core in Released status and the Supply Chain and Maintenance Reference Ontologies in Provisional status. Please consult the README file for the details of the release. It is available for immediate download at IOF Release 202301.

IOF Core is a foundation for domain ontologies such as maintenance and supply chain. IOF Core represents thousands of person-hours of development, review, refinement, and quality-checking. IOF has established processes modeled after the proven approach used by the EDM Council for the collaborative development, testing, and publication of a number of industry ontologies, including the Financial Industry Business Ontology (FIBO) and the Identification of Medicinal Products (IDMP). The 202301 release also contains the maintenance and the supply chain reference ontologies in the provisional state. IOF will constantly improve IOF Core while working on domain ontologies based on it. IOF invites organizations to contribute to industrial ontology work.



© 2023 The Gilbane Advisor
