Neo4j announced a multi-year Strategic Collaboration Agreement (SCA) with Amazon Web Services (AWS) to enable enterprises to achieve better generative artificial intelligence (AI) outcomes through a combination of knowledge graphs and native vector search that reduces generative AI hallucinations while making results more accurate, transparent, and explainable.
Neo4j also announced a new integration with Amazon Bedrock, a managed service that makes foundation models from AI companies accessible via an API to build and scale generative AI applications. Neo4j’s native integration with Amazon Bedrock enables:
- Reduced Hallucinations: Neo4j, LangChain, and Amazon Bedrock can now work together using Retrieval Augmented Generation (RAG) to create virtual assistants grounded in enterprise knowledge.
- Personalized experiences: Neo4j’s context-rich knowledge graph integration with Amazon Bedrock can invoke an ecosystem of foundation models to generate personalized text and summaries for end users.
- Complete answers during real-time search: Developers can leverage Amazon Bedrock to generate vector embeddings from unstructured data (text, images, and video) and enrich knowledge graphs using Neo4j’s new vector search and storage capability.
- Kickstarted knowledge graph creation: Developers can leverage new generative AI capabilities in Amazon Bedrock to structure unstructured data and load it into a knowledge graph.
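To make the RAG pattern above concrete, here is a minimal, self-contained sketch of the retrieval-and-grounding step. The documents, embeddings, and function names are illustrative inventions; in a real deployment the embeddings would come from an Amazon Bedrock embedding model and live in a Neo4j vector index, with LangChain orchestrating the calls.

```python
import math

# Toy in-memory "vector store". In production these embeddings would be
# produced by a Bedrock model and stored in a Neo4j vector index.
DOCS = {
    "return-policy": ([0.9, 0.1, 0.0], "Items may be returned within 30 days."),
    "shipping":      ([0.1, 0.8, 0.1], "Standard shipping takes 3-5 business days."),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(DOCS.items(), key=lambda kv: cosine(query_vec, kv[1][0]), reverse=True)
    return [text for _, (_, text) in ranked[:k]]

def build_prompt(question, query_vec):
    """Ground the LLM prompt in retrieved enterprise knowledge (the RAG step)."""
    context = "\n".join(retrieve(query_vec))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# A question about returns embeds near the return-policy document, so the
# generated prompt carries the relevant policy text rather than letting the
# model guess (which is how RAG reduces hallucinations).
prompt = build_prompt("Can I return my order?", [0.85, 0.15, 0.0])
print(prompt)
```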
Ontotext, a semantic data and knowledge graph technology provider, and TopQuadrant, a provider of software tools for data governance and semantics, announced a partnership to bring advantages to their shared customer base. With TopQuadrant, Ontotext clients gain a knowledge graph creation and curation tool that enables new data governance use cases, while TopQuadrant clients benefit from improved scalability, usability, and performance. The combination of front and back-end systems enables:
- Scalability and Performance for Large Data Sets: With Ontotext’s RDF database GraphDB, semantic data products such as taxonomies, tag lists, metadata stores, and code lists can now scale to handle master data management and enterprise data quality and validation efforts.
- Policy Enforcement and Automation: TopQuadrant’s expertise in data governance and metadata management will help clients enforce policies across organizations’ full data landscape, mitigating the risk of regulatory fines, such as those imposed under GDPR.
- Pharma R&D Semantic Solution: This solution will enable data models that capture data and improve data quality, enhancing collaboration and efficiency, automating regulatory reporting, and ultimately enabling new insights into drug discovery.
- Semantic Data Catalog: The semantic approach for active metadata management harmonizes data and metadata across an entire organization. The semantic data catalog actively populates metadata and makes data more interoperable and reusable.
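The data model underlying products like GraphDB is RDF, where every fact is a (subject, predicate, object) triple. The toy triple store below is an illustration of that model only; the predicates and dataset names are hypothetical, and real systems query with SPARQL and validate with SHACL rather than Python pattern matching.

```python
# Facts as (subject, predicate, object) triples, the core of the RDF model.
TRIPLES = [
    ("dataset:sales", "dcterms:title", "Quarterly Sales"),
    ("dataset:sales", "dcat:theme", "taxonomy:finance"),
    ("dataset:hr", "dcat:theme", "taxonomy:people"),
]

def match(pattern, triples=TRIPLES):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "Which datasets are tagged with the finance taxonomy term?" — the kind of
# metadata query a semantic data catalog answers across an organization.
finance = match((None, "dcat:theme", "taxonomy:finance"))
print(finance)  # [('dataset:sales', 'dcat:theme', 'taxonomy:finance')]
```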
Fivetran announced its support for Microsoft OneLake through integration with Microsoft Fabric as a new data lake destination, and that Fivetran has been named a Microsoft Fabric Interoperability Partner. Together with support for Delta Lake on Azure Data Lake Storage (ADLS) Gen2, also announced today, Fivetran customers now have two Microsoft data lake destinations to securely consolidate their data workloads with any of Fivetran’s 400-plus pre-built, fully managed data pipelines.
Because Fivetran automates extracting, cleansing, conforming, and converting data to Delta Lake format, customers are able to move faster in developing AI and generative AI-based projects.
OneLake serves as the unified data foundation for Microsoft Fabric, making it simple for customers to access their data through a file explorer, similar to Microsoft’s OneDrive for files. With OneLake, customers can create multiple workspaces within a single tenant.
Fivetran offers the flexibility and scalability that enterprises need to build a solid data lake foundation across on-premises, cloud-based, and third-party sources. Whether an organization has a hybrid or multi-cloud environment, Fivetran provides high-volume data movement with enterprise-ready reliability and uptime, and industry-standard practices for data encryption, with GDPR, ISO 27001, and SOC 2 Type II compliance.
Elastic, the company behind Elasticsearch, today announced Elasticsearch Query Language (ES|QL), its new piped query language designed to transform, enrich, and simplify data investigation with concurrent processing. ES|QL enables site reliability engineers (SREs), developers, and security professionals to perform data aggregation and analysis across a variety of data sources from a single query.
Over the last two decades, the data landscape has become more fragmented, opaque, and complex, driving the need for greater productivity and efficiency among developers, security professionals, and observability practitioners. Organizations need tools and services that offer iterative workflows, a broad range of operations, and central management to make security and observability professionals more productive. Elasticsearch Query Language key benefits include:
- Delivers a comprehensive and iterative approach to data investigation with ES|QL piped query syntax.
- Improves speed and efficiency regardless of data’s source or structure with a new ES|QL query engine that leverages concurrent processing.
- Streamlines observability and security workflows with a single user interface, which allows users to search, aggregate and visualize data from a single screen.
ES|QL is currently available as a technical preview. The general availability version, scheduled for release in 2024, will include additional features to further streamline data analysis and decision-making.
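To illustrate what a piped query does, the Python sketch below simulates the stages of a query such as `FROM logs | WHERE status >= 500 | STATS errors = COUNT(*) BY host`, where each stage consumes the previous stage's output. The log records are invented, and this is only an analogy for the pipe semantics, not how the ES|QL engine (which executes stages concurrently) actually works.

```python
from collections import Counter

# Toy log records standing in for an Elasticsearch index.
LOGS = [
    {"host": "web-1", "status": 200},
    {"host": "web-1", "status": 503},
    {"host": "web-2", "status": 500},
    {"host": "web-2", "status": 502},
]

# Each step mirrors one piped stage of the query.
rows = LOGS                                        # FROM logs
rows = [r for r in rows if r["status"] >= 500]     # WHERE status >= 500
errors_by_host = Counter(r["host"] for r in rows)  # STATS errors = COUNT(*) BY host
print(dict(errors_by_host))  # {'web-1': 1, 'web-2': 2}
```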
From the OpenAI Blog…
We’re rolling out custom versions of ChatGPT that you can create for a specific purpose—called GPTs. GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home, and then share that creation with others. For example, GPTs can help you learn the rules to any board game, help teach your kids math, or design stickers…
Anyone can easily build their own GPT—no coding is required. You can make them for yourself, just for your company’s internal use, or for everyone. Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images or analyzing data…
Example GPTs are available today for ChatGPT Plus and Enterprise users to try out including Canva and Zapier AI Actions…
You can also define custom actions by making one or more APIs available to the GPT. Like plugins, actions allow GPTs to integrate external data or interact with the real-world. Connect GPTs to databases, plug them into emails, or make them your shopping assistant…
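A rough sketch of the custom-action idea: the model emits a structured call naming an action and its arguments, and your code routes it to a real API. Everything here (the `lookup_order` function, the registry, the JSON shape) is a hypothetical stand-in; real GPT actions are declared with an OpenAPI schema and executed against your own server.

```python
import json

def lookup_order(order_id: str) -> dict:
    """Hypothetical stand-in for a database or e-commerce API call."""
    return {"order_id": order_id, "status": "shipped"}

# Registry mapping action names to the functions that implement them.
ACTIONS = {"lookup_order": lookup_order}

def handle_action_call(raw_call: str) -> str:
    """Execute a model-emitted action call and return the result as JSON."""
    call = json.loads(raw_call)
    result = ACTIONS[call["name"]](**call["arguments"])
    return json.dumps(result)

# Simulate the model asking for an order lookup.
reply = handle_action_call('{"name": "lookup_order", "arguments": {"order_id": "A-42"}}')
print(reply)  # {"order_id": "A-42", "status": "shipped"}
```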
DataStax announced the launch of RAGStack, an out-of-the-box RAG solution designed to simplify implementation of retrieval augmented generation (RAG) applications built with LangChain. RAGStack reduces the complexity and overwhelming choices that developers face when implementing RAG for their generative AI applications with a streamlined, tested, and efficient set of tools and techniques for building with LLMs.
With RAGStack, companies benefit from a preselected set of open-source software for implementing generative AI applications, providing developers with a ready-made solution for RAG that leverages the LangChain ecosystem including LangServe, LangChain Templates and LangSmith, along with Apache Cassandra and the DataStax Astra DB vector database. This removes the hassle of having to assemble a bespoke solution and provides developers with a simplified, comprehensive generative AI stack.
RAG combines the strengths of both retrieval-based and generative AI methods for natural language understanding and generation, enabling real-time, contextually relevant responses that underpin much of the innovation happening with this technology.
With specifically curated software components, abstractions to improve developer productivity and system performance, enhancements that improve existing vector search techniques, and compatibility with most generative AI data components, RAGStack provides overall improvements to the performance, scalability, and cost of implementing RAG in generative AI applications.
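One piece of any RAG stack is the ingestion side: source documents are split into chunks so each can be embedded and stored in a vector database such as Astra DB. The sliding-window chunker below is a generic sketch of that step; the chunk size and overlap are illustrative values, not RAGStack or LangChain defaults.

```python
def chunk_text(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into fixed-size chunks whose ends overlap, so context
    spanning a chunk boundary is preserved in both neighbors."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Retrieval augmented generation grounds model answers in your own data."
chunks = chunk_text(doc)
# Each chunk would then be embedded and upserted into the vector store.
print(len(chunks), repr(chunks[0]))
```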
Fivetran announced the launch of two new software development kits (SDKs) for data source connectors and target destinations. These new SDKs enable third-party vendors to develop new connectors and destinations on Fivetran’s platform – unlocking compatibility with their product and Fivetran’s network of 400+ connectors, 14 destinations, and 45,000+ users.
Vendors of more complex databases and API-enabled software can become Fivetran source partners by writing their own integrations with the Connector SDK. Fivetran connectors add value by providing customers with an easy, automated, and reliable way to move their data to their destination of choice, efficiently and in an analytics-ready format, for analysis and enrichment with other data.
Data warehouse, data lake and storage vendors can leverage the destination SDK for Fivetran to allow joint customers to load their critical business data from any of Fivetran’s 400+ connectors to their destination platform. Centralizing data into a single destination empowers customers to access analytical and transactional data for reporting, efficiencies and predictive analytics. The gRPC-based SDK allows connectors and destinations to be written in any supported programming language.
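As a rough mental model of a source-connector contract, the sketch below defines two operations a connector typically implements: describing its schema and returning new rows since the last sync cursor. This is a hypothetical Python illustration only; the actual Fivetran SDKs define these operations as gRPC services, which is what lets partners implement them in any supported language.

```python
from abc import ABC, abstractmethod

class SourceConnector(ABC):
    """Hypothetical connector contract (not the real Fivetran SDK interface)."""

    @abstractmethod
    def schema(self) -> dict:
        """Describe the tables and columns this source exposes."""

    @abstractmethod
    def update(self, state: dict) -> tuple[list[dict], dict]:
        """Return new rows since the last sync plus an updated cursor state."""

class GreetingsConnector(SourceConnector):
    ROWS = [{"id": 1, "msg": "hello"}, {"id": 2, "msg": "world"}]

    def schema(self):
        return {"greetings": ["id", "msg"]}

    def update(self, state):
        # Incremental sync: only emit rows past the saved cursor.
        cursor = state.get("last_id", 0)
        new_rows = [r for r in self.ROWS if r["id"] > cursor]
        return new_rows, {"last_id": max((r["id"] for r in new_rows), default=cursor)}

conn = GreetingsConnector()
rows, state = conn.update({"last_id": 1})
print(rows, state)  # [{'id': 2, 'msg': 'world'}] {'last_id': 2}
```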
SnapLogic announced it has entered a multifaceted partnership with Acolad, a provider of content and language solutions. Together they will develop and deliver generative AI translation services from Acolad based on the generative integration solutions from SnapLogic.
The collaboration is meant to go beyond a conventional business alliance, delivering new solutions that give both language and integration professionals accelerated productivity, increased revenue streams, and new services for technical and non-technical users. Acolad will create pre-built integration connectors for instant document translation, allowing any SnapLogic user to immediately add Acolad’s multi-level, AI-powered translation service to new and existing integration pipelines without any coding knowledge.
The solution will employ Acolad’s proprietary two-stage AI process to provide accuracy for both language translation and intent in near real time. This allows any enterprise to immediately leverage global language translation services, eliminating language barriers with customers, partners, and employees. Acolad will leverage SnapLogic’s generative integration interface, SnapGPT, to automate integration processes, allowing it to create new translation services more quickly, open new revenue streams and service packages, and increase customer satisfaction across Acolad’s customer base.
https://www.snaplogic.com ■ https://www.acolad.com