Elastic, the company behind Elasticsearch, today announced Elasticsearch Query Language (ES|QL), its new piped query language designed to transform, enrich, and simplify data investigation with concurrent processing. ES|QL enables site reliability engineers (SREs), developers, and security professionals to perform data aggregation and analysis across a variety of data sources from a single query.
Over the last two decades, the data landscape has become more fragmented, opaque, and complex, driving the need for greater productivity and efficiency among developers, security professionals, and observability practitioners. Organizations need tools and services that offer iterative workflows, a broad range of operations, and central management to make security and observability professionals more productive. Key benefits of Elasticsearch Query Language include:
- Delivers a comprehensive and iterative approach to data investigation with ES|QL piped query syntax.
- Improves speed and efficiency regardless of data’s source or structure with a new ES|QL query engine that leverages concurrent processing.
- Streamlines observability and security workflows with a single user interface, which allows users to search, aggregate and visualize data from a single screen.
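To illustrate the piped syntax described above, the sketch below builds a hypothetical ES|QL query, where each stage's output feeds the next, and serializes it for Elasticsearch's ES|QL query endpoint. The index name (`web_logs`) and field names are assumptions for illustration, not part of the announcement.

```python
import json

# Hypothetical ES|QL query: filter web logs for server errors, aggregate
# error counts per client, then sort and cap the result set. Each `|`
# stage consumes the output of the stage before it.
ESQL_QUERY = """
FROM web_logs
| WHERE status_code >= 500
| STATS error_count = COUNT(*) BY client_ip
| SORT error_count DESC
| LIMIT 10
"""

def build_esql_request(query: str) -> str:
    """Serialize the JSON body expected by the ES|QL query endpoint."""
    return json.dumps({"query": query.strip()})

# Sending the request requires a live cluster, so it is shown but not run:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:9200/_query",
#     data=build_esql_request(ESQL_QUERY).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read())
```

The point of the pipe syntax is that analysts can refine an investigation step by step, appending stages instead of restructuring a nested query.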
ES|QL is currently available as a technical preview. The general availability version, scheduled for release in 2024, will include additional features to further streamline data analysis and decision-making.
Enterprise Search provider Sinequa announced it has expanded its partnership with Google Cloud by adding its generative AI capabilities to Sinequa’s supported integrations. By combining the conversational abilities of Google Cloud’s Vertex AI platform with the factual knowledge provided by Sinequa’s intelligent search platform, businesses can use generative AI and gain insights from their enterprise content.
Sinequa’s approach to generative AI is agnostic, ensuring compatibility with all major generative AI APIs. Sinequa’s support for Google Cloud’s Vertex AI platform and its expanding library of large language models (LLMs), such as PaLM 2, enables Sinequa users to leverage Google Cloud’s generative AI technologies for Retrieval-Augmented Generation (RAG) within their existing Sinequa ecosystem.
In combination with generative AI, Sinequa’s Neural Search surfaces the most relevant information across all your content to ground the model in the truth of your enterprise’s knowledge. With search and generative AI together, you can engage in dialogue with your information just as you would talk with a knowledgeable colleague, without the concerns that accompany generative AI alone, such as hallucinations or security risks. This means you can converse with your content: conduct research, ask questions, and explore nuances, all with more accurate, relevant results.
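The retrieval-augmented generation pattern described above can be sketched generically: retrieve the most relevant passages, then constrain the model to answer from them. The toy retriever and prompt template below are illustrative stand-ins, not Sinequa or Vertex AI APIs.

```python
# Minimal RAG sketch: a toy keyword retriever plus a grounded prompt.
# In a real deployment, retrieval would be Sinequa's Neural Search and
# the prompt would go to an LLM such as PaLM 2 on Vertex AI.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(terms & set(p.lower().split())))
    return ranked[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Ground the model in retrieved content to curb hallucination."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "The VPN policy requires multi-factor authentication for remote access.",
    "Quarterly sales figures are published on the finance portal.",
    "Office hours are 9am to 5pm on weekdays.",
]
question = "What does the VPN policy require?"
prompt = build_grounded_prompt(question, retrieve(question, corpus))
```

Because the prompt is assembled from retrieved enterprise content, the model's answer is anchored to documents a user could verify, which is the grounding behavior the announcement describes.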
From the OpenLink blog…
We are pleased to announce the immediate availability of the OpenLink Personal Assistant, a practical application of Knowledge Graph-driven Retrieval Augmented Generation (RAG) showcasing the power of knowledge discovery and exploration enabled by a modern conversational user interface. This modern approach revitalizes the enduring pursuit of high-performance, secure data access, integration, and management by harnessing the combined capabilities of Large Language Models (LLMs), Knowledge Graphs, and RAG, all propelled by declarative query languages such as SPARQL, SQL, and SPASQL (SPARQL inside SQL).
GPT-4 and GPT-3.5-turbo foundation models form the backbone of the OpenLink Assistant, offering a sophisticated level of conversational interaction. These models can interpret context, thereby providing a user experience that intuitively emulates aspects of human intelligence.
What truly sets OpenLink Assistant apart is state-of-the-art RAG technology, integrated seamlessly with SPARQL, SQL, and SPASQL (SPARQL inside SQL). This fusion, coupled with our existing text indexing and search functionality, allows for real-time, contextually relevant data retrieval from domain-specific knowledge bases deployed as knowledge graphs.
- Self-Describing, Self-Supporting Products: Simply installing the Assistant’s VAD (Virtuoso Application Distro) package adds a self-describing element to our Virtuoso and ODBC & JDBC Driver products.
- OpenAPI Compliance: With YAML and JSON description documents, OpenLink Assistant offers hassle-free integration into existing systems. Any OpenAPI-compliant service can be integrated into its conversation processing pipeline, while core functionality is also exposed to other service consumer apps.
- Device Compatibility: Whether you’re on a desktop or a mobile device, OpenLink Assistant delivers a seamless interaction experience.
- UI Customization: The Assistant can be skinned to align with your application’s UI, ensuring a cohesive user experience.
- Versatile Query Support: With support for SQL, SPARQL, and SPASQL, OpenLink Assistant can interact with a multitude of data, information, and knowledge sources.
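The SPASQL support listed above means a SPARQL query can be embedded inside an ordinary SQL session, so one connection reaches both relational tables and RDF knowledge graphs. The sketch below shows the shape of such a statement; the graph IRI and property names are hypothetical, and execution against Virtuoso (shown commented out) would use a standard ODBC connection.

```python
# SPASQL embeds SPARQL inside SQL: Virtuoso recognizes a statement that
# begins with the SPARQL keyword and routes it to the RDF engine.
SPASQL_QUERY = """
SPARQL
SELECT ?product ?label
FROM <urn:example:catalog>
WHERE { ?product rdfs:label ?label }
LIMIT 5
"""

def is_spasql(statement: str) -> bool:
    """SPASQL statements are flagged by a leading SPARQL keyword."""
    return statement.lstrip().upper().startswith("SPARQL")

# Execution would go through a standard ODBC connection (not run here):
# import pyodbc
# cur = pyodbc.connect("DSN=VirtuosoLocal;UID=dba;PWD=dba").cursor()
# for row in cur.execute(SPASQL_QUERY):
#     print(row)
```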
MongoDB announced new capabilities, performance improvements, and a data-streaming integration for MongoDB Atlas Vector Search.
Developers can more easily aggregate and filter data, improving semantic information retrieval and reducing hallucinations in AI-powered applications. With new performance improvements for MongoDB Atlas Vector Search, the time it takes to build indexes is reduced to help accelerate application development. Additionally, MongoDB Atlas Vector Search is now integrated with fully managed data streams from Confluent Cloud to make it easier to use real-time data from a variety of sources to power AI applications.
MongoDB Atlas Vector Search provides the functionality of a vector database integrated as part of a unified developer data platform, allowing teams to store and process vector embeddings alongside virtually any type of data to more quickly and easily build generative AI applications.
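The aggregate-and-filter capability described above can be sketched as an aggregation pipeline whose first stage is Atlas Vector Search's `$vectorSearch` operator, combining semantic similarity with a metadata pre-filter. The index name, field names, and query vector below are assumptions for illustration; running it requires an Atlas cluster with a vector search index defined.

```python
# Sketch of an Atlas Vector Search pipeline: semantic retrieval via
# $vectorSearch, narrowed by a metadata filter, then projected with the
# similarity score.

def vector_search_pipeline(query_vector, category, limit=5):
    """Build an aggregation pipeline combining vector search and filtering."""
    return [
        {
            "$vectorSearch": {
                "index": "embeddings_index",   # assumed vector index name
                "path": "embedding",           # field holding the vectors
                "queryVector": query_vector,
                "numCandidates": 100,          # candidates scanned before limit
                "limit": limit,
                "filter": {"category": {"$eq": category}},
            }
        },
        {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = vector_search_pipeline([0.12, -0.07, 0.33], "support_articles")
# With pymongo against an Atlas cluster: db.docs.aggregate(pipeline)
```

Filtering inside the `$vectorSearch` stage, rather than after it, is what lets developers constrain semantic retrieval to relevant documents and reduce off-topic context handed to an LLM.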
StreamText, an enterprise captioning platform, announced the latest release of its Automatic Speech Recognition (ASR) technology powered by artificial intelligence (AI). With the ability to create captions directly from an audio source, StreamText ASR features term glossaries to help fine-tune the captioning AI for specific events and increase overall accuracy. The platform offers direct integrations with meeting software such as Zoom and Adobe Connect. It also supports over 50 source languages, including variants of English, French, and Spanish. While human captioning is often more accurate than its AI counterparts, it is not always practical for every captioning need. In those cases, StreamText ASR is a solution. ASR is useful in university settings, classrooms, government administration, and broadcast media.
Neo4j, a graph database and analytics company, announced that it has integrated native vector search as part of its core database capabilities. The result enables customers to achieve richer insights from semantic search and generative AI applications, and lets the database serve as long-term memory for LLMs, while reducing hallucinations.
Neo4j’s graph database can be used to create knowledge graphs, which capture and connect explicit relationships between entities, enabling AI systems to reason, infer, and retrieve relevant information effectively. The result ensures more accurate, explainable, and transparent outcomes for LLMs and other generative AI applications. By contrast, vector searches capture implicit patterns and relationships based on items with similar data characteristics, rather than exact matches, which are useful when searching for similar text or documents, making recommendations, and identifying other patterns.
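The complementary pair described above, implicit similarity from vector search plus explicit relationships from the graph, can be expressed in a single Cypher query: a vector index lookup finds semantically similar documents, and a `MATCH` then follows their explicit relationships. The index name, labels, and relationship type below are hypothetical, and the driver call is shown but not executed.

```python
# Sketch of combining Neo4j's native vector index with graph traversal.
# db.index.vector.queryNodes is the built-in vector index procedure;
# 'document_embeddings', :Entity, and :MENTIONS are assumed schema names.
CYPHER = """
CALL db.index.vector.queryNodes('document_embeddings', $k, $embedding)
YIELD node AS doc, score
MATCH (doc)-[:MENTIONS]->(entity:Entity)
RETURN doc.title AS title, entity.name AS entity, score
ORDER BY score DESC
"""

def run_similarity_query(driver, embedding, k=5):
    """Execute the query with the official neo4j Python driver."""
    with driver.session() as session:
        return session.run(CYPHER, k=k, embedding=embedding).data()

# Against a live instance (not run here):
# from neo4j import GraphDatabase
# driver = GraphDatabase.driver("bolt://localhost:7687",
#                               auth=("neo4j", "password"))
# rows = run_similarity_query(driver, embedding=[0.1, 0.2, 0.3])
```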
This latest advancement follows Neo4j’s recent product integration with Google Cloud’s generative AI features in Vertex AI in June, enabling users to transform unstructured data into knowledge graphs, which users can then query using natural language, grounding their LLMs against a factual set of patterns and criteria to prevent hallucinations.
dtSearch announced the release of version 2023.01 and beta release of version 2023.02 of its enterprise and developer product line for instantly searching terabytes of online and offline data. The product line’s proprietary document filters cover popular “Office” formats, website data, databases, compression formats, and emails with attachments. dtSearch products can run either “on premises” at organizations or in a cloud environment such as on Azure or AWS.
- The release adds a new search results display for dtSearch’s enterprise products.
- The beta adds sample code demonstrating use of the dtSearch Engine in an ASP.NET Core application running in a Windows (NanoServer) or Linux Docker container.
- The beta also adds sample code demonstrating how to build NuGet packages to deploy the dtSearch Engine with associated dependencies including ICU, CMAP files, stemming rules, and the external file parsers.
SearchStax, a cloud search platform enabling web teams to deliver search in an easy and cost-effective way, announced the launch of a new program, SearchStax for Good, that provides web and mobile development teams a frictionless way to simplify the management of Apache Solr workloads in the cloud.
SearchStax for Good is designed specifically for non-profits, eliminating both the infrastructure management burden and the high budgetary barrier to entry. By offering an extended no-cost period of full-featured service, SearchStax for Good gives qualifying non-profits a way to get search infrastructure up and running immediately, without having to re-allocate budget or first secure budget approval.
At its initial launch at DrupalCon Pittsburgh 2023, SearchStax for Good will offer non-profit organizations six months of SearchStax Cloud Serverless at no cost, a solution that delivers fast, scalable, and cost-effective Solr, giving web and product teams the ability to build quickly and scale automatically while optimizing resource utilization. After the initial six-month period ends, participating organizations can continue using the service at a 40% discounted rate.