The Gilbane Advisor

Curated for content, computing, and digital experience professionals


Brightcove and Socialive partner

Brightcove announced today an expanded partnership with Socialive, a video content creation platform designed for enterprises, to provide customers with additional remote video production capabilities. Enhancing Brightcove’s comprehensive and reliable live streaming solution with Socialive’s features gives customers greater control over the creation, management, and distribution of live and on-demand internal video content. Through the integration, customers can:

  • Easily produce high-quality dynamic live video content through Socialive’s Studio, where they can combine multiple presenters in layouts, add graphics, sound effects, and pre-recorded videos.
  • Access Socialive’s Virtual Green Room feature that simplifies guest talent management during live virtual events from any device.
  • Stream and manage live video and on-demand content globally across various formats and devices, and leverage Brightcove’s analytics and insights to measure the impact of video content.
  • Integrate an end-to-end video creation, distribution, and management solution that easily combines Socialive and Brightcove.


TransPerfect announces integrated translation solution for Sitecore XM Cloud

TransPerfect announced the launch of the first translation integration to support Sitecore XM Cloud, giving users the ability to create, manage, and deliver relevant multilingual content with an enterprise-ready CMS.

GlobalLink for Sitecore XM Cloud is TransPerfect’s solution to initiate, automate, control, track, and complete all facets of the translation process within Sitecore’s user interface. By marrying Sitecore’s hybrid CMS, XM Cloud, with GlobalLink’s extended localization workflow capabilities, this integration offers users a holistic solution for crafting and launching multilingual digital experiences. The solution allows users to:

  • Automatically import translated content back to Sitecore XM Cloud target locales
  • View dashboards depicting complete submission statistics, status of submissions at target locales, and translation jobs at each target locale level
  • View a visual tree structure in the GCC UI for easy navigation (includes system folder for translations)
  • View a list of selected pages before submitting for translation
  • Submit a single page for translation
  • Automatically publish translated content
  • Translate image alt text, single-line text, and rich text
  • Configure templates and template field types in the connector
  • Copy the source content that does not need translation into the target language
  • Automatically include all page and page components for translation in recursive page submissions
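The lifecycle the connector manages (initiate a submission, track it per target locale, import the result back) can be sketched in miniature. All class and field names below are hypothetical illustrations, not the GlobalLink or Sitecore API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a translation-job lifecycle like the one the
# connector automates; this is NOT GlobalLink or Sitecore code.
@dataclass
class TranslationJob:
    page_id: str
    source_locale: str
    target_locales: list
    status: str = "draft"
    translations: dict = field(default_factory=dict)

    def submit(self):
        # Initiate: send the source content out for translation.
        self.status = "in_progress"

    def complete(self, locale, translated_text):
        # Import: store translated content for one target locale.
        self.translations[locale] = translated_text
        if set(self.translations) == set(self.target_locales):
            self.status = "complete"  # auto-publish could hook in here

job = TranslationJob("home", "en", ["fr", "de"])
job.submit()
job.complete("fr", "Bienvenue")
job.complete("de", "Willkommen")
print(job.status)  # complete
```

The dashboard views the connector offers would, in effect, report on the per-locale state tracked here.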


OpenAI introduces custom GPTs

From the OpenAI Blog…

We’re rolling out custom versions of ChatGPT that you can create for a specific purpose—called GPTs. GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home, and then share that creation with others. For example, GPTs can help you learn the rules to any board game, help teach your kids math, or design stickers…

Anyone can easily build their own GPT—no coding is required. You can make them for yourself, just for your company’s internal use, or for everyone. Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images or analyzing data…

Example GPTs are available today for ChatGPT Plus and Enterprise users to try out, including Canva and Zapier AI Actions…

You can also define custom actions by making one or more APIs available to the GPT. Like plugins, actions allow GPTs to integrate external data or interact with the real world. Connect GPTs to databases, plug them into emails, or make them your shopping assistant…
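Custom actions are described to the GPT with an OpenAPI-style schema of the API being exposed. The sketch below, expressed as a Python dict, shows what such a description might look like for a single endpoint; the server URL, path, and operation are invented for illustration and are not an OpenAI or partner API:

```python
import json

# Illustrative OpenAPI-style description of one hypothetical action
# endpoint. The URL and operation here are made up for the example.
action_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Order lookup", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/orders/{order_id}": {
            "get": {
                "operationId": "getOrder",
                "parameters": [{
                    "name": "order_id",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
            }
        }
    },
}

# The GPT builder would consume the JSON form of this description.
print(json.dumps(action_schema)[:40])
```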

DataStax launches RAGStack

DataStax announced the launch of RAGStack, an out-of-the-box solution designed to simplify the implementation of retrieval augmented generation (RAG) applications built with LangChain. RAGStack reduces the complexity and the overwhelming number of choices developers face when implementing RAG for their generative AI applications, offering a streamlined, tested, and efficient set of tools and techniques for building with LLMs.

With RAGStack, companies benefit from a preselected set of open-source software for implementing generative AI applications, providing developers with a ready-made solution for RAG that leverages the LangChain ecosystem including LangServe, LangChain Templates and LangSmith, along with Apache Cassandra and the DataStax Astra DB vector database. This removes the hassle of having to assemble a bespoke solution and provides developers with a simplified, comprehensive generative AI stack. 

RAG combines the strengths of both retrieval-based and generative AI methods for natural language understanding and generation, enabling real-time, contextually relevant responses that underpin much of the innovation happening with this technology.
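The retrieve-then-generate pattern can be shown in miniature. In this conceptual sketch (not RAGStack or LangChain code), keyword overlap stands in for vector search, and the final step simply assembles the augmented prompt an LLM would receive:

```python
# Conceptual RAG sketch: retrieve relevant documents, then build an
# augmented prompt for the LLM. Not RAGStack or LangChain code.
documents = [
    "Astra DB is a vector database offered by DataStax.",
    "LangServe deploys LangChain chains as REST APIs.",
    "Cassandra is a distributed wide-column store.",
]

def retrieve(query, docs, k=1):
    # Score by word overlap with the query (stand-in for vector search).
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    # Generation step: prepend the retrieved context to the question.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is Astra DB?", documents)
print(prompt)
```

In a production stack, the retrieval step would query a vector database such as Astra DB, and the assembled prompt would be sent to an LLM.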

With specifically curated software components, abstractions to improve developer productivity and system performance, enhancements that improve existing vector search techniques, and compatibility with most generative AI data components, RAGStack provides overall improvements to the performance, scalability, and cost of implementing RAG in generative AI applications.

Fivetran unveils new SDKs for connectors and destinations

Fivetran announced the launch of two new software development kits (SDKs) for data source connectors and target destinations. These new SDKs enable third-party vendors to develop new connectors and destinations on Fivetran’s platform – unlocking compatibility with their product and Fivetran’s network of 400+ connectors, 14 destinations, and 45,000+ users.

More complex databases and API-enabled software vendors can become Fivetran source partners by writing their own integration on the Connector SDK. Fivetran connectors add value by providing customers with an easy, automated and reliable way to move their data to their destination of choice, efficiently and in an analytic-ready format, for analysis and enrichment with other data. 

Data warehouse, data lake and storage vendors can leverage the destination SDK for Fivetran to allow joint customers to load their critical business data from any of Fivetran’s 400+ connectors to their destination platform. Centralizing data into a single destination empowers customers to access analytical and transactional data for reporting, efficiencies and predictive analytics. The gRPC-based SDK allows connectors and destinations to be written in any supported programming language.
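The general shape of a source connector built on such an SDK (declare a schema, then emit incremental updates from a saved cursor) can be illustrated as follows. This is a hypothetical sketch of the pattern, not the actual Fivetran Connector SDK interface:

```python
# Hypothetical shape of a source connector: declare a schema, then emit
# incremental rows from a saved cursor. Not the Fivetran SDK itself.
SOURCE_ROWS = [{"id": 1, "payload": "a"}, {"id": 2, "payload": "b"}]

class ExampleConnector:
    def schema(self):
        # Describe the table and columns this source produces.
        return {"table": "events", "columns": {"id": "int", "payload": "string"}}

    def update(self, state):
        # Resume from the last synced id stored in `state` (the cursor).
        cursor = state.get("cursor", 0)
        new_rows = [r for r in SOURCE_ROWS if r["id"] > cursor]
        for row in new_rows:
            yield row
        # Persist the new high-water mark once all rows are emitted.
        state["cursor"] = max((r["id"] for r in new_rows), default=cursor)

state = {}
connector = ExampleConnector()
rows = list(connector.update(state))
print(len(rows), state["cursor"])  # 2 2
```

A second sync with the same `state` would emit nothing, since the cursor already points past both rows; the gRPC layer mentioned above is what lets this logic be written in other languages as well.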

SnapLogic and Acolad partner to provide generative AI translation solutions

SnapLogic announced it has entered a multifaceted partnership with Acolad, a provider of content and language solutions. Together they will develop and deliver generative AI translation services from Acolad based on the generative integration solutions from SnapLogic. 

The collaboration is meant to go beyond a conventional business alliance, delivering solutions that benefit both language and integration professionals through accelerated productivity, increased revenue streams, and new services for technical and non-technical users. Acolad will create pre-built integration connectors for instant document translation, allowing any SnapLogic user to immediately add Acolad’s multi-level AI-powered translation service to new and existing integration pipelines without any coding knowledge.

The solution will employ Acolad’s proprietary two-stage AI-process to provide accuracy for both language translation and intent, in near real-time. This allows any enterprise to immediately leverage the global language translation services, eliminating language barriers with customers, partners, and employees. Acolad will leverage SnapLogic’s generative integration interface, SnapGPT, to automate integration processes to create new translation services more quickly, create new revenue streams and service packages, and increase customer satisfaction among Acolad’s customer base.


Cloudera and Pinecone announce strategic partnership

Cloudera, Inc., a data company for enterprise artificial intelligence (AI), and Pinecone, a vector database company providing long-term memory for AI, announced a strategic partnership that integrates Pinecone’s AI vector database expertise into Cloudera’s open data platform, aimed at helping organizations use AI to streamline operations and improve customer experiences.

Pinecone is optimized to store AI representations of data (vector embeddings) and search through them by semantic similarity. This capability is needed to add context to queries in applications that use large language models (LLMs), reducing erroneous outputs and helping search and generative AI applications deliver more accurate and relevant responses.
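Search by semantic similarity can be illustrated with a toy example: rank stored embedding vectors by cosine similarity to a query vector. Real systems use learned embeddings with hundreds or thousands of dimensions; the three-dimensional vectors below are invented for illustration:

```python
import math

# Toy semantic search: rank stored embedding vectors by cosine
# similarity to a query vector. Vectors here are made up; real
# embeddings come from a model and are much higher-dimensional.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

index = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "how do I get my money back"

best = max(index, key=lambda k: cosine(query, index[k]))
print(best)  # refund policy
```

A vector database such as Pinecone performs this ranking at scale with approximate nearest-neighbor indexes instead of a brute-force scan.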

Pinecone’s vector database will also be integrated into Cloudera Data Platform (CDP), and includes the release of a new Applied ML Prototype (AMP) that will allow developers to more quickly create and augment new knowledge bases from data on their own website, as well as pre-built connectors that will enable customers to quickly set up ingest pipelines in AI applications.

Customers can use this same architecture to set up or improve support chatbots or internal support search systems, reducing operational costs and improving the customer experience through less human case handling and faster resolution times.


Gilbane Advisor 11-1-23 — RAG challenges, computation rules

This week we feature articles from Agustinus Nalwan, and Stephen Wolfram.

Additional reading comes from Victoria Song, Nidhi Hebbar & Christopher Savčak, and Shayne Longpre & Sara Hooker.

News comes from Ontotext, Sinequa, DataStax & LangChain, and Altova.

All previous issues are available at

Opinion / Analysis

The untold side of RAG: addressing its challenges in domain-specific searches

This is the best kind of case study. Clearly written by Agustinus Nalwan, the lead executive and project manager, it includes detailed use-case examples, resources used, challenges, learnings, and solutions to date. This will be especially valuable for those planning or building Retrieval Augmented Generation (RAG) applications. (29 min)

How to think computationally about AI, the universe and everything

If you are not familiar with Stephen Wolfram, his TED Talk from a couple of weeks ago is a good place to start. You’ll need to put your abstraction hat on and be prepared to be awed, but you will learn something. For others, the talk or the transcript is a quick way to reacquaint yourself with this remarkable thinker. (transcript 12 min, TED Talk 18 min)

More Reading

All Gilbane Advisor issues

Content technology news

Ontotext GraphDB 10.4 enables users to chat with their knowledge graphs

The new release offers finer grained security, improved flexibility, easier cluster administration and monitoring, and natural language queries.

Sinequa integrates enterprise search with Google’s Vertex AI

The integration brings advanced Retrieval-Augmented Generation (RAG) capabilities to the workplace.

DataStax launches new integration with LangChain

Support for Astra DB Vector Database and Apache Cassandra now available out-of-the-box for LangChain users for retrieval augmented generation.

Altova announces version 2024 with AI assistants and PDF Data Mapping

XMLSpy boosts productivity for XML and JSON development tasks by generating schemas, instance documents, and sample data based on NLP prompts.

All content technology news

The Gilbane Advisor is authored by Frank Gilbane and is ad-free, cost-free, and curated for content, computing, web, data, and digital experience technology and information professionals. We publish recommended articles and content technology news weekly. We do not sell or share personal data.

Subscribe | View online | Editorial policy | Privacy policy | Contact


© 2023 The Gilbane Advisor
