Curated for content, computing, and digital experience professionals

Category: Enterprise software & integration

OpenAI introduces custom GPTs

From the OpenAI Blog…

We’re rolling out custom versions of ChatGPT that you can create for a specific purpose—called GPTs. GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home, and then share that creation with others. For example, GPTs can help you learn the rules to any board game, help teach your kids math, or design stickers…

Anyone can easily build their own GPT—no coding is required. You can make them for yourself, just for your company’s internal use, or for everyone. Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images or analyzing data…

Example GPTs are available today for ChatGPT Plus and Enterprise users to try out, including Canva and Zapier AI Actions…

You can also define custom actions by making one or more APIs available to the GPT. Like plugins, actions allow GPTs to integrate external data or interact with the real world. Connect GPTs to databases, plug them into emails, or make them your shopping assistant…
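Custom actions are described to the GPT with an OpenAPI schema listing the endpoints it may call. A minimal sketch of such a schema follows; the order-lookup endpoint, server URL, and fields are illustrative assumptions, not from the announcement:

```python
# Minimal OpenAPI 3.1 schema for a hypothetical GPT action.
# The endpoint, server URL, and operation are illustrative only.
import json

action_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Order Lookup", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/orders/{order_id}": {
            "get": {
                # operationId is the name the model uses to pick the action
                "operationId": "getOrder",
                "summary": "Look up an order by its ID",
                "parameters": [{
                    "name": "order_id",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Order details"}},
            }
        }
    },
}

print(json.dumps(action_schema, indent=2))
```

The model reads the schema's summaries and parameter types to decide when and how to call the endpoint.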

https://openai.com/blog/introducing-gpts

DataStax launches RAGStack

DataStax announced the launch of RAGStack, an out-of-the-box solution designed to simplify the implementation of retrieval augmented generation (RAG) applications built with LangChain. RAGStack reduces the complexity and the overwhelming number of choices developers face when implementing RAG for their generative AI applications, offering a streamlined, tested, and efficient set of tools and techniques for building with LLMs.

With RAGStack, companies benefit from a preselected set of open-source software for implementing generative AI applications, providing developers with a ready-made solution for RAG that leverages the LangChain ecosystem including LangServe, LangChain Templates and LangSmith, along with Apache Cassandra and the DataStax Astra DB vector database. This removes the hassle of having to assemble a bespoke solution and provides developers with a simplified, comprehensive generative AI stack. 

RAG combines the strengths of both retrieval-based and generative AI methods for natural language understanding and generation, enabling real-time, contextually relevant responses that underpin much of the innovation happening with this technology.

With specifically curated software components, abstractions to improve developer productivity and system performance, enhancements that improve existing vector search techniques, and compatibility with most generative AI data components, RAGStack provides overall improvements to the performance, scalability, and cost of implementing RAG in generative AI applications.
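The retrieve-then-generate flow that RAG describes can be sketched in plain Python. In this toy sketch, retrieval is naive keyword overlap and the "generator" is a stub that formats the retrieved context; a production stack such as RAGStack substitutes a vector store and an LLM. The documents and function names are illustrative:

```python
# Toy retrieval-augmented generation: retrieve relevant documents,
# then hand them to a generator as grounding context.

documents = [
    "RAGStack bundles LangChain, LangServe, and Astra DB.",
    "Vector databases store embeddings for similarity search.",
    "Cassandra is a distributed wide-column database.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call: echo the grounded prompt."""
    return f"Answer '{query}' using: {' '.join(context)}"

query = "what do vector databases store?"
answer = generate(query, retrieve(query, documents))
print(answer)
```

The value of RAG comes from the retrieval step: the generator only ever sees context the retriever has selected, which is what keeps responses current and relevant.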

https://www.datastax.com/products/ragstack

Fivetran unveils new SDKs for connectors and destinations

Fivetran announced the launch of two new software development kits (SDKs) for data source connectors and target destinations. These new SDKs enable third-party vendors to develop new connectors and destinations on Fivetran’s platform – unlocking compatibility with their product and Fivetran’s network of 400+ connectors, 14 destinations and 45,000+ users.

Vendors of more complex databases and API-enabled software can become Fivetran source partners by writing their own integrations with the Connector SDK. Fivetran connectors add value by providing customers with an easy, automated and reliable way to move their data to their destination of choice, efficiently and in an analytics-ready format, for analysis and enrichment with other data.

Data warehouse, data lake and storage vendors can leverage the destination SDK for Fivetran to allow joint customers to load their critical business data from any of Fivetran’s 400+ connectors to their destination platform. Centralizing data into a single destination empowers customers to access analytical and transactional data for reporting, efficiencies and predictive analytics. The gRPC-based SDK allows connectors and destinations to be written in any supported programming language.

https://www.fivetran.com

SnapLogic and Acolad partner to provide generative AI translation solutions

SnapLogic announced it has entered a multifaceted partnership with Acolad, a provider of content and language solutions. Together they will develop and deliver generative AI translation services from Acolad based on the generative integration solutions from SnapLogic. 

The collaboration is meant to go beyond a conventional business alliance, delivering new solutions that benefit both language and integration professionals by accelerating productivity, increasing revenue streams, and introducing new services to technical and non-technical users. Acolad will create pre-built integration connectors for instant document translation, allowing any SnapLogic user to immediately add Acolad’s multi-level AI-powered translation service to new and existing integration pipelines without any coding knowledge.

The solution will employ Acolad’s proprietary two-stage AI process to provide accuracy in both language translation and intent, in near real time. This allows any enterprise to immediately leverage global language translation services, eliminating language barriers with customers, partners, and employees. Acolad will leverage SnapLogic’s generative integration interface, SnapGPT, to automate integration processes, helping it create new translation services more quickly, develop new revenue streams and service packages, and increase customer satisfaction among its customer base.

https://www.snaplogic.com
https://www.acolad.com

Cloudera and Pinecone announce strategic partnership

Cloudera, Inc., a data company for enterprise artificial intelligence (AI), and Pinecone, a vector database company providing long-term memory for AI, announced a strategic partnership that integrates Pinecone’s AI vector database expertise into Cloudera’s open data platform, aimed at helping organizations use AI to streamline operations and improve customer experiences.

Pinecone is optimized to store AI representations of data (vector embeddings) and search through them by semantic similarity. This capability is necessary for adding context to queries against applications that use Large Language Models (LLMs) to reduce erroneous outputs and helps search and Generative AI applications deliver more accurate and relevant responses.
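Semantic similarity search over vector embeddings reduces to nearest-neighbor lookup under a distance metric, most commonly cosine similarity. A self-contained sketch with hand-made toy vectors follows; a real system would obtain embeddings from a model and index them in a store like Pinecone, and the three-dimensional vectors here are illustrative only:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product over the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy 3-dimensional "embeddings"; real ones have hundreds of dimensions.
index = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
    "account login":  [0.0, 0.2, 0.9],
}

# Pretend embedding of the query "how do I get my money back?"
query_vec = [0.8, 0.2, 0.1]
best = max(index, key=lambda key: cosine(query_vec, index[key]))
print(best)  # nearest neighbor by cosine similarity
```

Because similarity is computed on meaning-bearing vectors rather than exact keywords, the query matches "refund policy" even though they share no words.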

Pinecone’s vector database will also be integrated into the Cloudera Data Platform (CDP). The partnership also includes the release of a new Applied ML Prototype (AMP) that will allow developers to more quickly create and augment new knowledge bases from data on their own websites, as well as pre-built connectors that will enable customers to quickly set up ingest pipelines in AI applications.

Customers can use this same architecture to set up or improve support chatbots or internal support search systems, reducing operational costs and improving customer experience through less human case-handling effort and faster resolution times.

https://www.cloudera.com
https://www.pinecone.io

Altova announces Version 2024 with AI Assistants and PDF Data Mapping

Altova announced the release of Version 2024 of its desktop developer tools, server software, and regulatory solutions. New features across the product line include:

  • AI Assistant in XMLSpy boosts productivity for XML and JSON development tasks by generating schemas, instance documents, and sample data based on natural language prompts. The AI Assistant can also generate XSL, XPath, and XQuery code. Generated code can be copied, opened in a new document, or sent to the XPath/XQuery window for further review.
  • MapForce PDF Extractor is a visual utility for defining the structure of a PDF document and extracting data from it. That data is then available for mapping to other formats in MapForce, including Excel, JSON, databases, XML, etc., for conversion, data integration, and ETL processes.
  • AI integration in DatabaseSpy includes an AI Assistant for generating SQL statements, sample data, table relations, as well as AI extensions to explain, pretty print, and complete SQL statements.
  • Split output preview for XML and database report design in StyleVision lets designers see the changes they make in a design reflected in the output in real time. The side-by-side panes show the design and output in HTML, PDF, Word, or text at the same time.
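The XPath expressions an assistant like XMLSpy’s can generate are standard queries over an XML tree. A small sketch of the kind of query involved, evaluated here with Python’s stdlib ElementTree (which supports a subset of XPath 1.0); the XML document is illustrative only:

```python
# Evaluate an XPath-style query with Python's stdlib ElementTree.
import xml.etree.ElementTree as ET

xml_doc = """
<catalog>
  <book genre="fiction"><title>Dune</title></book>
  <book genre="reference"><title>XML Spec</title></book>
</catalog>
"""
root = ET.fromstring(xml_doc)

# Select titles of books whose genre attribute is "fiction".
titles = [t.text for t in root.findall(".//book[@genre='fiction']/title")]
print(titles)
```

A full XPath 2.0/3.1 engine, as found in XMLSpy, additionally supports functions, axes, and type-aware expressions beyond this stdlib subset.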

https://www.altova.com/whatsnew

DataStax launches new integration with LangChain

DataStax announced a new integration with LangChain, the popular orchestration framework for developing applications with large language models (LLMs). The integration makes it easy to add Astra DB – the real-time database for developers building production Gen AI applications – or Apache Cassandra, as a new vector source in the LangChain framework. 

As companies implement retrieval augmented generation (RAG) – the process of providing context from outside data sources to deliver more accurate LLM query responses – into their generative AI applications, they require a vector store that provides real-time updates with zero latency on critical production workloads.

Generative AI applications built with RAG stacks require a vector-enabled database and an orchestration framework like LangChain to provide memory or context to LLMs for accurate and relevant answers. Developers use LangChain as an AI-first toolkit to connect their application to different data sources.

The integration lets developers leverage the Astra DB vector database for their LLM, AI assistant, and real-time generative AI projects through the LangChain plugin architecture for vector stores. Together, Astra DB and LangChain help developers to take advantage of framework features like vector similarity search, semantic caching, term-based search, LLM-response caching, and data injection from Astra DB (or Cassandra) into prompt templates. 
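The data-injection step mentioned above is, at its core, string assembly: retrieved rows become context lines inside the prompt sent to the LLM. A minimal sketch of the pattern follows; the template wording and stand-in rows are illustrative assumptions, and LangChain and Astra DB supply richer abstractions for this:

```python
# Inject retrieved context into a prompt template before calling an LLM.
# The rows below stand in for vector-search results from a store like Astra DB.

TEMPLATE = (
    "Answer the question using only the context below.\n"
    "Context:\n{context}\n"
    "Question: {question}\n"
)

retrieved_rows = [
    "Astra DB supports vector similarity search.",
    "LangChain orchestrates calls between LLMs and data sources.",
]

def build_prompt(question: str, rows: list[str]) -> str:
    """Format retrieved rows as bullet points inside the template."""
    context = "\n".join(f"- {row}" for row in rows)
    return TEMPLATE.format(context=context, question=question)

prompt = build_prompt("What does Astra DB support?", retrieved_rows)
print(prompt)
```

Framework features like semantic caching and LLM-response caching sit around this same assembly step, reusing answers when a new query is close enough to one already served.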

https://www.datastax.com/blog/llamaindex-and-astra-db-building-petabyte-scale-genai-apps-just-got-easier

Sinequa integrates enterprise search with Google’s Vertex AI

Enterprise Search provider Sinequa announced it has expanded its partnership with Google Cloud by adding its generative AI capabilities to Sinequa’s supported integrations. By combining the conversational abilities of Google Cloud’s Vertex AI platform with the factual knowledge provided by Sinequa’s intelligent search platform, businesses can use generative AI and gain insights from their enterprise content. 

Sinequa’s approach to generative AI is agnostic, ensuring compatibility with all major generative AI APIs. Sinequa’s support for Google Cloud’s Vertex AI platform and its expanding library of large language models (LLMs), such as PaLM 2, enables Sinequa users to leverage Google Cloud’s generative AI technologies for Retrieval-Augmented Generation (RAG) within their existing Sinequa ecosystem.

In combination with generative AI, Sinequa’s Neural Search uses the most relevant information across all your content to ground generative AI in the truth of your enterprise’s knowledge. With search and generative AI together, you can engage in dialogue with your information just as you would talk with a knowledgeable colleague, without the concerns that accompany generative AI alone, such as hallucinations or security risks. This means you can converse with your content: conduct research, ask questions, explore nuances, all with more accurate, relevant results.

https://www.sinequa.com


© 2024 The Gilbane Advisor
