Curated for content, computing, and digital experience professionals

Category: Content technology news

Curated information technology news for content technology, computing, and digital experience professionals. News items are edited to remove hype, unhelpful jargon, iffy statements, and quotes, to create a short summary — mostly limited to 200 words — of the important facts with a link back to a useful source for more information. News items are published using the date of the original source here and in our weekly email newsletter.

We focus on product news, but also include selected company news such as mergers and acquisitions and meaningful partnerships. All news items are edited by one of our analysts under the NewsShark byline. See our Editorial Policy.

Note that we also publish news on X/Twitter. Follow us @gilbane

DataStax and LlamaIndex partner to make building RAG applications easier

DataStax announced that its retrieval augmented generation (RAG) solution, RAGStack, is now generally available with LlamaIndex as a supported open source framework, in addition to LangChain. DataStax RAGStack for LlamaIndex also supports an integration (currently in public preview) with LlamaIndex’s LlamaParse, which gives developers using Astra DB an API to parse and transform complex PDFs into vectors in minutes.

LlamaIndex is a framework for ingesting, indexing, and querying data for building generative AI applications and addresses the ingestion pipelines needed for enterprise-ready RAG. LlamaParse is LlamaIndex’s new offering that targets enterprise developers building RAG over complex PDFs; it enables clean extraction of tables by running recursive retrieval, promising more accurate parsing of the complex documents often found in business.

RAGStack with LlamaIndex offers a solution tailored to address the challenges encountered by enterprise developers in implementing RAG solutions. Benefits include a curated Python distribution available on PyPI for integration with Astra DB, DataStax Enterprise (DSE), and Apache Cassandra, and a live RAGStack test matrix and GenAI app templates.

Users can use LlamaIndex alone, or in combination with LangChain and their ecosystem including LangServe, LangChain Templates, and LangSmith.
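The ingest, index, and query flow that a framework like LlamaIndex automates can be illustrated with a minimal, self-contained sketch. This is a toy illustration only: it uses bag-of-words vectors and an in-memory list where a real RAG stack would use learned embeddings and a vector store such as Astra DB, and the class and document strings here are invented for the example.

```python
# Toy sketch of the ingest -> index -> query flow a RAG framework automates.
# Real frameworks use learned embeddings and a vector store (e.g. Astra DB);
# bag-of-words vectors are used here purely to show the pipeline's shape.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' (a stand-in for a real embedding model)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyIndex:
    """In-memory index holding (chunk_text, chunk_vector) pairs."""

    def __init__(self) -> None:
        self.chunks: list[tuple[str, Counter]] = []

    def ingest(self, docs: list[str]) -> None:
        # Ingestion: split each document into chunks and embed each chunk.
        for doc in docs:
            for chunk in doc.split(". "):
                self.chunks.append((chunk, embed(chunk)))

    def query(self, question: str, k: int = 1) -> list[str]:
        # Retrieval: rank chunks by similarity to the question, return top k.
        qv = embed(question)
        ranked = sorted(self.chunks, key=lambda c: cosine(qv, c[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

index = ToyIndex()
index.ingest(["Astra DB stores vectors. LlamaParse converts PDFs into text"])
print(index.query("Which database stores vectors?"))
# -> ['Astra DB stores vectors']
```

Generation would then pass the retrieved chunks to an LLM as context; the frameworks named above handle that step along with production-grade chunking and embedding.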

Acquia enhances brand management capabilities

Acquia announced new integrations for its digital asset management solution, Acquia DAM, that expand its brand management capabilities. These integrations, with Acquia Campaign Studio, Adobe Stock, and Google Translate, reduce the complexity of maintaining a consistent brand experience across digital channels.

Acquia DAM is now integrated with Acquia Campaign Studio, the company’s marketing automation solution. The integration leverages Acquia’s instant search connector tool, so once a user is authenticated in the DAM connector within Campaign Studio, they can search, view, and select the asset of their choice within Campaign Studio’s email and landing page builders. Pictures in email and landing page builders dynamically change when updated in Acquia DAM.

An Adobe Stock integration automatically syncs a customer’s newly licensed Adobe Stock assets with Acquia DAM, bringing in essential metadata and offering smoother workflows. Creative pros can choose which types of Adobe Stock assets to monitor and sync, and the integration handles file copying and categorization in Acquia DAM. Customers can now use Google Translate to automatically translate text from selected metadata fields within Acquia DAM. The DAM automatically repopulates these fields with translated content in up to 20 languages.

Adobe announces AI Assistant in Reader and Acrobat

Adobe introduced AI Assistant in beta, a new generative AI-powered conversational engine in Reader and Acrobat. Integrated into Reader and Acrobat workflows, AI Assistant instantly generates summaries and insights from long documents, answers questions, and formats information for sharing in emails, reports, and presentations.

AI Assistant leverages the same artificial intelligence and machine learning models behind Acrobat Liquid Mode, technology that supports responsive reading experiences for PDFs on mobile. These proprietary models provide a deep understanding of PDF structure and content, enhancing quality and reliability in AI Assistant outputs.

Acrobat Individual, Pro, and Teams customers and Acrobat Pro trialists can use the AI Assistant beta today; no implementation is required beyond opening Reader or Acrobat and working with the new capabilities.

Reader and Acrobat customers will have access to the full range of AI Assistant capabilities through a new add-on subscription plan when AI Assistant is out of beta. Until then, the new AI Assistant features are available in beta for Acrobat Standard and Pro Individual and Teams subscription plans on desktop and web in English, with features coming to Reader desktop customers in English over the next few weeks at no additional cost.

Ontotext releases Ontotext Metadata Studio 3.7

Ontotext, a provider of enterprise knowledge graph (EKG) technology and semantic database engines, announced the availability of Ontotext Metadata Studio (OMDS) 3.7, an all-in-one environment that facilitates the creation, evaluation, and quality improvement of text analytics services. This latest release provides out-of-the-box, rapid natural language processing (NLP) prototyping and development so organizations can iteratively create a text analytics service that best serves their domain knowledge. 

As part of Ontotext’s AI-in-Action initiative, which helps data scientists and engineers benefit from the AI capabilities of its products, the latest version enables users to tag content with the Common English Entity Linking (CEEL) text analytics service. CEEL is trained to tag mentions of people, organizations, and locations to their representation in Wikidata, the public knowledge graph that includes close to 100 million entity instances. With OMDS, organizations can recognize approximately 40 million Wikidata concepts and streamline information extraction from text and enrichment of databases and knowledge graphs. Organizations can:

  • Automate tagging and categorization of content to facilitate more efficient discovery, reviews, and knowledge synthesis. 
  • Enrich content, achieve precise search, improve SEO, and enhance the performance of LLMs and downstream analytics.
  • Streamline information extraction from large volumes of unstructured content and analyze market trends.
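To make the tagging idea concrete, here is a minimal sketch of dictionary-based entity linking in the spirit of what CEEL does at scale: mapping surface mentions in text to knowledge-graph identifiers. CEEL itself is a trained model covering tens of millions of Wikidata entities and resolves ambiguity from context; the lookup table and matching below are purely illustrative.

```python
# Minimal sketch of dictionary-based entity linking: map surface mentions
# in text to knowledge-graph identifiers. A production linker like CEEL is
# trained over millions of Wikidata entities and disambiguates from
# context; this small lookup table is purely illustrative.

# Illustrative mention -> Wikidata ID table (not CEEL's actual output).
KNOWN_ENTITIES = {
    "Tim Berners-Lee": "Q80",  # person
    "Google": "Q95",           # organization
    "Berlin": "Q64",           # location
}

def link_entities(text: str) -> list[tuple[str, str]]:
    """Return (mention, wikidata_id) pairs for known mentions in the text."""
    return [(m, qid) for m, qid in KNOWN_ENTITIES.items() if m in text]

tags = link_entities("Tim Berners-Lee spoke about Google at an event in Berlin.")
print(tags)
# -> [('Tim Berners-Lee', 'Q80'), ('Google', 'Q95'), ('Berlin', 'Q64')]
```

Once mentions resolve to stable identifiers like these, the tags can drive the discovery, search, and enrichment uses listed above.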

Grammarly announces general availability of App Actions

Grammarly announced the general availability of its app actions feature for all business, individual, and education customers. App actions enable customers to complete actions in popular third-party applications wherever they write with Grammarly, making it easier to get work done without switching between tools, so teams stay focused and efficient.

Grammarly is able to provide a connective layer across apps and workflows because it works where people do, on over 500,000 apps and websites. With app actions, customers can: 

  • Find, link to, or create new tasks to manage work in Asana, Atlassian Jira, Smartsheet, and Wrike
  • Find and link to a file or page in Atlassian Confluence, Google Drive, Microsoft OneDrive, and Microsoft SharePoint
  • Reference, link to, or create a new contact in HubSpot
  • Access, format, and share links to schedule meetings via Calendly
  • Find and insert animations and images from GIPHY and Unsplash

The app actions feature maintains all of Grammarly’s enterprise-grade security and privacy practices and commitment to responsible AI. All app actions are available today for all Grammarly Business, Premium, and Education customers, and Grammarly Free users benefit from connections to GIPHY and Unsplash.

Optimizely integration with Writer now live

Optimizely, a digital experience platform (DXP) provider, announced that a new product integration with Writer, the enterprise-focused generative AI platform, is now live. The integration follows the companies’ official partnership announcement in October and equips the Optimizely Content Marketing Platform (CMP) with AI capabilities. Joint customers can use industry-specific LLMs to develop content that is relevant, compliant, consistent with their existing brand tone and voice, and tailored to industry audiences, simplifying the content marketing lifecycle.

Writer’s integration into Optimizely leverages Palmyra, the Writer-built family of large language models, to enhance AI-powered content generation capabilities and chat features across Optimizely applications. Palmyra LLMs are transparent and auditable, score highly on benchmarks like Stanford HELM, and keep customers’ data private. They are coupled with Writer’s graph-based RAG Knowledge Graph, AI guardrails that enforce brand and compliance rules, and a flexible application layer that serves a wide range of use cases, resulting in an AI platform that meets enterprise needs.

Otter announces Meeting GenAI, an AI-powered meeting assistant

Otter introduced Meeting GenAI, a set of AI tools that unlocks insights from your company’s meeting history. Otter is already integrating advanced GenAI across its platform, elevating the role of meeting minutes from passive records to dynamic repositories of collective knowledge and actionable insights. Highlights of Meeting GenAI:

  • Otter AI Chat across all your meetings: Get answers to questions and generate content like emails and status updates using Otter AI Chat which now can access all of your meetings, not just a single meeting.
  • AI Chat in Channels: Chat with Otter AI Chat and team members using a collaborative AI Chat, making it easier to keep the team aligned and drive work forward with greater transparency and speed.
  • AI Conversation Summary View: Identify action items with assignments in real time and get a live narrative summary to ensure swift execution and accountability.

The heart of Meeting GenAI lies in the new multi-conversation capabilities for Otter AI Chat. This feature goes beyond AI chat focused on individual meetings by allowing users to tap into the collective knowledge gleaned from past meetings, no matter which platform those discussions happened on.

Bard becomes Gemini: new Ultra 1.0 and mobile app    

via the Google Blog…

Gemini represents our most capable family of models. To reflect this, Bard will now simply be known as Gemini.

You can already chat with Gemini with our Pro 1.0 model and now, we’re bringing you two new experiences — Gemini Advanced and a mobile app — to help you easily collaborate with the best of Google AI.

Gemini Advanced gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. With Ultra 1.0, Gemini Advanced is far more capable at complex tasks like coding, logical reasoning, following nuanced instructions, and collaborating on creative projects. Gemini Advanced not only allows you to have longer, more detailed conversations; it also better understands the context from your previous prompts.

Gemini Advanced is available as part of our new Google One AI Premium Plan for $19.99/month, starting with a two-month trial at no cost.

We’ve heard that you want an easier way to access Gemini on your phone. So today we’re starting to roll out a new mobile experience for Gemini and Gemini Advanced with a new app on Android and in the Google app on iOS.


© 2024 The Gilbane Advisor
