Curated for content, computing, and digital experience professionals

Day: October 24, 2023

Altova announces Version 2024 with AI Assistants and PDF Data Mapping

Altova announced the release of Version 2024 of its desktop developer tools, server software, and regulatory solutions. New features across the product line include:

  • AI Assistant in XMLSpy boosts productivity for XML and JSON development tasks by generating schemas, instance documents, and sample data based on natural language prompts. The AI Assistant can also generate XSL, XPath, and XQuery code. Generated code can be copied, opened in a new document, or sent to the XPath/XQuery window for further review.
  • MapForce PDF Extractor is a visual utility for defining the structure of a PDF document and extracting data from it. That data is then available for mapping to other formats in MapForce, including Excel, JSON, databases, XML, etc., for conversion, data integration, and ETL processes.
  • AI integration in DatabaseSpy includes an AI Assistant for generating SQL statements, sample data, and table relations, as well as AI extensions to explain, pretty print, and complete SQL statements.
  • Split output preview for XML and database report design in StyleVision lets designers see the changes they make in a design reflected in the output in real time. The side-by-side panes show the design and output in HTML, PDF, Word, or text at the same time.
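As a concrete illustration of the kind of XPath code such an assistant might generate, here is a hypothetical example (not actual XMLSpy output), evaluated with Python's standard library. Note that ElementTree supports only a limited XPath subset; the full XPath 2.0/3.1 generated by tools like XMLSpy needs a dedicated processor.

```python
import xml.etree.ElementTree as ET

# Hypothetical sample document, of the sort an AI assistant might generate
# from a prompt like "a catalog of books with ids, titles, and prices".
xml_doc = """
<catalog>
  <book id="b1"><title>XML in Practice</title><price>29.99</price></book>
  <book id="b2"><title>JSON Basics</title><price>19.99</price></book>
</catalog>
"""

root = ET.fromstring(xml_doc)

# XPath-style queries: all titles, then ids of books under 25.
titles = [t.text for t in root.findall("./book/title")]
cheap = [b.get("id") for b in root.findall("./book")
         if float(b.find("price").text) < 25]
```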

DataStax launches new integration with LangChain

DataStax announced a new integration with LangChain, the popular orchestration framework for developing applications with large language models (LLMs). The integration makes it easy to add Astra DB – the real-time database for developers building production Gen AI applications – or Apache Cassandra, as a new vector source in the LangChain framework. 

As companies implement retrieval augmented generation (RAG) – the process of providing context from outside data sources to deliver more accurate LLM query responses – into their generative AI applications, they require a vector store that provides real-time updates with zero latency on critical production workloads.

Generative AI applications built with RAG stacks require a vector-enabled database and an orchestration framework like LangChain to provide memory or context to LLMs for accurate and relevant answers. Developers use LangChain as an AI-first toolkit to connect their application to different data sources.

The integration lets developers leverage the Astra DB vector database for their LLM, AI assistant, and real-time generative AI projects through the LangChain plugin architecture for vector stores. Together, Astra DB and LangChain help developers to take advantage of framework features like vector similarity search, semantic caching, term-based search, LLM-response caching, and data injection from Astra DB (or Cassandra) into prompt templates.
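The retrieval step of the RAG stack described above can be sketched in plain Python. This is a toy illustration only: a bag-of-words function stands in for a real embedding model, and an in-memory list stands in for Astra DB or Cassandra; none of this is the DataStax or LangChain API.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real stack would call an embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Vector similarity, the core operation a vector store provides.
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Vector store": documents indexed by their embeddings
# (stands in for Astra DB or Cassandra in the real stack).
docs = [
    "Astra DB is a real-time vector database built on Apache Cassandra.",
    "LangChain is an orchestration framework for LLM applications.",
    "RAG injects retrieved context into the prompt sent to the LLM.",
]
store = [(d, embed(d)) for d in docs]

def retrieve(query, k=1):
    qv = embed(query)
    ranked = sorted(store, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

# Inject the retrieved context into a prompt template, as an
# orchestration framework would before calling the LLM.
question = "What is a vector database?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In a production stack, `embed` would be an embedding model, `store` and `retrieve` would be the vector database, and the framework would send `prompt` to the LLM.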

Sinequa integrates enterprise search with Google’s Vertex AI

Enterprise search provider Sinequa announced it has expanded its partnership with Google Cloud by adding Google Cloud's generative AI capabilities to Sinequa's supported integrations. By combining the conversational abilities of Google Cloud's Vertex AI platform with the factual knowledge provided by Sinequa's intelligent search platform, businesses can use generative AI to gain insights from their enterprise content.

Sinequa’s approach to generative AI is agnostic, ensuring compatibility with all major generative AI APIs. Sinequa’s support for Google Cloud’s Vertex AI platform and its expanding library of large language models (LLMs), such as PaLM-2, enables Sinequa users to leverage Google Cloud’s generative AI technologies for Retrieval-Augmented Generation (RAG) within their existing Sinequa ecosystem.

In combination with generative AI, Sinequa’s Neural Search uses the most relevant information across all your content to ground generative AI in the truth of your enterprise’s knowledge. With search and generative AI together, you can engage in dialogue with your information just as you would talk with a knowledgeable colleague, without the concerns that come with generative AI alone, such as hallucinations or security risks. This means you can converse with your content: conduct research, ask questions, and explore nuances, all with more accurate, relevant results.

Ontotext GraphDB 10.4 enables users to chat with their knowledge graphs

Ontotext released version 10.4 of GraphDB, their knowledge graph database engine. GraphDB 10.4 is now available on AWS Marketplace, adding to the flexibility of how enterprises can scale and maintain knowledge graph applications. The new AWS operational guide and improvements to backup support on AWS S3 storage increase the efficiency of deploying GraphDB.

Other new 10.4 features include user-defined Access Control Lists (ACLs) for more granular control over the security of your data. Connectors to external services now include one for ChatGPT that lets you customize the answers returned by the OpenAI API with data from your own knowledge graphs. Building on this, the Talk to Your Graph LLM-backed chatbot lets you ask natural language questions about your own data.
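The "talk to your graph" pattern can be illustrated with a minimal sketch in plain Python (this is not the GraphDB or OpenAI API): an LLM translates the natural-language question into a graph pattern, the pattern is matched against the stored triples, and the matched facts are handed back to the LLM as grounding context for its answer.

```python
# Tiny in-memory triple store; GraphDB itself would be queried over SPARQL.
triples = {
    ("GraphDB", "versionIs", "10.4"),
    ("GraphDB", "availableOn", "AWS Marketplace"),
    ("GraphDB 10.4", "supports", "Access Control Lists"),
}

def match(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# In a "Talk to Your Graph"-style flow, the LLM (not shown) would turn
# "Where is GraphDB available?" into the pattern below; the matched facts
# are then passed back to the LLM as context, keeping the answer grounded
# in the knowledge graph rather than the model's training data.
facts = match(s="GraphDB", p="availableOn")
context = "; ".join(f"{s} {p} {o}" for s, p, o in facts)
```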

Several new features make maintenance of running servers easier and more efficient. The improved Cluster Management View shows a wider range of information about the status of each running cluster, and upgrades to the Backup and Snapshot Compression tools reduce backup time and necessary disk space. GraphDB 10.4’s ability to control the transaction log size minimizes the chance of running out of disk space, and greater control over transaction IDs makes it easier to analyze transaction behavior and identify potential issues.

© 2024 The Gilbane Advisor
