Curated for content, computing, and digital experience professionals


Curated information technology news for content technology, computing, and digital experience professionals. News items are edited to remove hype, unhelpful jargon, iffy statements, and quotes, creating a short summary (usually under 200 words) of the important facts with a link back to a useful source for more information. News items are published here and in our weekly email newsletter using the date of the original source.

We focus on product news, but also include selected company news such as mergers and acquisitions and meaningful partnerships. All news items are edited by one of our analysts under the NewsShark byline. See our Editorial Policy.

Note that we also publish news on X/Twitter. Follow us @gilbane.

Brightcove unveils Brightcove AI Suite

Brightcove, a streaming technology company, announced the Brightcove AI Suite. The new AI-powered capabilities address growth and cost-saving needs, including content creation, audience growth and engagement, increased revenue, and improved business efficiency.

Brightcove AI Suite integrates into Brightcove’s video cloud platform and launches with five AI-powered solutions: AI Content Multiplier, AI Universal Translator, AI Metadata Optimizer, AI Engagement Maximizer, and AI Cost-to-Quality Optimizer. Brightcove is using models from Anthropic, AWS, and Google, and integrating AI solutions from CaptionHub and Frammer. Brightcove will initially focus on:

  • Content Creation: The AI Content Multiplier uses Gen AI to automate time-consuming tasks. The AI Universal Translator delivers translations across 130 languages with the ability to fine-tune.
  • Content Management and Optimization: Accelerates workflows and simplifies managing content libraries, turning them into a foundational data layer optimized for large language models (LLMs). The AI Metadata Optimizer generates descriptions and transforms content into searchable, AI-optimizable data sets.
  • Content Engagement and Monetization: Brightcove AI Engagement Maximizer delivers automated video interactivity, personalization, and recommendations. AI Revenue Maximizer optimizes ad placements and durations.
  • Quality and Efficiency: AI Cost-to-Quality Optimizer drives down the cost of encoding, storage, and content delivery without sacrificing the viewer experience.

https://campaigns.brightcove.com/ai-solutions/

Adobe announces Adobe Express updates and special teams offer

Adobe announced new innovations in Adobe Express, which brings Adobe’s creative tools into an app that smaller businesses can easily use across teams to create content.

Adobe AI features in Adobe Express are built into workflows, not bolted on as an upsell. Adobe Firefly generative AI-powered features in Adobe Express are designed to be commercially safe, so businesses can protect their brand and publish business content with confidence.

Adobe Express for teams offers the best of Adobe in an app employees of any skill level can use to create on-brand social posts, flyers, videos, presentations, and more. Adobe Express for teams includes thousands of distinctive templates curated by Adobe professionals and thousands of Adobe assets, including stock photos, videos, audio files, and premium fonts. Businesses can make content even more eye-catching with animations and use AI to generate new images and remove backgrounds instantly.

The new Adobe Express for Teams offer is available immediately for $49.99 per user per year, guaranteed for up to three years, with a two-seat minimum, and includes a 90-day free trial with payment. The offer runs through Sept. 30, 2024. Adobe Express offers qualified 501(c)(3) nonprofits free access to premium features.

https://news.adobe.com/news/news-details/2024/Adobe-Express-Updates-Deliver-More-Value-for-Solopreneurs-and-SMBs-with-Innovation-and-Special-Teams-Offer/default.aspx

Cloudera adds Accelerators for Machine Learning Projects (AMPs)

Cloudera, a hybrid platform for data, analytics, and AI, announced new Accelerators for ML Projects (AMPs), designed to reduce time-to-value for enterprise AI use cases. The new additions provide enterprises with current AI techniques and working examples within Cloudera that assist AI integration and drive more impactful results.

AMPs are end-to-end machine learning (ML) projects that can be deployed with a single click directly from the Cloudera platform. Each AMP encapsulates industry practices for tackling complex ML challenges, with workflows to facilitate seamless transitions. Cloudera AMPs are open source and include deployment instructions for any environment. Updates include:

  • Fine-Tuning Studio – Provides users with an all-encompassing application and “ecosystem” for managing, fine-tuning, and evaluating LLMs.
  • RAG with Knowledge Graph – A demonstration of how to power a RAG (retrieval augmented generation) application with a knowledge graph to capture relationships and context not easily accessible by vector stores alone (a generic sketch of the retrieval pattern follows this list).
  • PromptBrew – Offers AI-powered assistance to create reliable prompts via a simple user interface.
  • Chat with Your Documents – Building upon the previous LLM Chatbot Augmented with Enterprise Data AMP, this accelerator enhances the responses of the LLM using context from an internal knowledge base created from the documents uploaded by the user.
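
The retrieval-augmented generation pattern these accelerators package can be sketched generically. The snippet below is a minimal illustration, not Cloudera’s code: the document list, the word-overlap retriever, and the llm_complete helper are hypothetical stand-ins for a real vector store (or knowledge graph) and a hosted LLM.

```python
# Minimal, generic RAG sketch (not Cloudera's implementation). The retriever
# uses naive word overlap; a real AMP would use a vector store or, as in the
# "RAG with Knowledge Graph" AMP, a graph of entities and relationships.
documents = [
    "AMPs are end-to-end ML projects deployable from the Cloudera platform.",
    "Fine-Tuning Studio manages, fine-tunes, and evaluates LLMs.",
    "PromptBrew assists with creating reliable prompts.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and keep the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to any hosted LLM API."""
    return f"[LLM response to a {len(prompt)}-character prompt]"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm_complete(prompt)

print(answer("Which accelerator helps evaluate LLMs?"))
```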

https://www.cloudera.com/about/news-and-blogs/press-releases/2024-09-12-cloudera-unveils-new-suite-of-accelerators-for-machine-learning-projects-amps.html

SearchStax and Magnolia partner on personalized search solutions

SearchStax, a Search Experience Company, and Magnolia, a composable digital experience platform (DXP), announced a technology partnership to help marketing teams deliver modern, personalized search experiences while driving marketing agility.

This strategic partnership merges the search capabilities of SearchStax Site Search with Magnolia’s flexible, enterprise-grade DXP, offering marketers and developers the tools they need to create next-level digital experiences throughout the customer journey. Combining SearchStax’s advanced search technology with Magnolia enables organizations to enhance website performance and user engagement by providing visitors with fast, accurate, and contextually relevant search results.

The integration module allows Magnolia-managed content to be fed into the SearchStax index, augmenting the search experience for end users. The improved search results surface Magnolia content such as editorial and campaign content, FAQs, and other relevant assets. This enhances the website search experience, reduces “no result” searches, and increases conversions.
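
As a rough illustration of what feeding content into a Solr-based index of the kind SearchStax manages looks like programmatically, the hedged sketch below uses the pysolr library; the endpoint URL, collection name, and field names are hypothetical, and the Magnolia integration module performs this mapping itself rather than requiring custom code.

```python
# Hedged sketch: pushing CMS content into a Solr-style index with the
# pysolr library. The endpoint URL, collection name, and field names are
# hypothetical; the Magnolia-SearchStax module handles this mapping itself.
import pysolr

solr = pysolr.Solr("https://example.searchstax.com/solr/site-search", timeout=10)

# Documents shaped like Magnolia-managed content (editorial pages, FAQs, ...).
solr.add([
    {"id": "page-001", "title": "Summer campaign", "type": "campaign",
     "body": "Landing page copy for the summer promotion."},
    {"id": "faq-042", "title": "How do I reset my password?", "type": "faq",
     "body": "Use the reset link on the sign-in page."},
])
solr.commit()

# Site search can then surface this content alongside other indexed assets.
for hit in solr.search("password reset", rows=5):
    print(hit["title"])
```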

Customers can now adopt SearchStax within their Magnolia DXP, and teams from both companies are ready to assist with implementation and optimization.

https://www.searchstax.com ■ https://www.magnolia-cms.com

Syncro Soft releases Oxygen AI Positron Assistant 3.0

Version 3.0 increases the efficiency of the tool: certain actions now use the Retrieval-Augmented Generation (RAG) process to obtain context from users’ current projects.

The new AI Positron Assistant drop-down widget offers a convenient way of accessing useful AI actions by displaying a floating contextual menu directly within the editing area. Users can customize their own AI actions to display as Quick Assist fixes in the editor. It is also now possible to choose the OpenAI model used in chat sessions and actions right from the AI Positron Assistant view.

A variety of new AI actions that are specific to working with DITA XML documents have been implemented, including a Proofread action that helps users identify potential issues in their content regarding logical consistency, grammar, spelling, readability, and comprehension.

Other newly implemented actions include Improve Structure, which instructs the AI to enhance DITA XML documents by adding structure or inline elements, and Add Structured Content, which continues a document with additional structured content generated from similar content in the current project, giving the AI more context for formulating the new XML structure.

https://www.oxygenxml.com/ai_positron_assistant.html

Anthropic announces Claude for Enterprise

Anthropic announced the Claude Enterprise plan to help organizations securely collaborate with Claude using internal knowledge. The Claude Enterprise plan offers an expanded 500K context window, more usage capacity, and a native GitHub integration so teams can work on entire codebases with Claude. It also includes enterprise-grade security features, such as SSO, role-based permissions, and admin tooling, that help protect customer data and teams.

With Claude, an organization’s knowledge is easier to share and reuse, enabling every individual on the team to quickly and consistently produce their best work. At the same time, the data is protected: Anthropic does not train Claude on customer conversations and content. By integrating Claude with an organization’s knowledge, companies can scale expertise across more projects, decisions, and teams.

Combining the expanded context window with Projects and Artifacts makes Claude an end-to-end solution for taking an initiative from idea to high-quality work output. For example, marketers can turn market trends into a compelling campaign, product managers can upload product specifications for Claude to build an interactive prototype, and engineers can connect codebases for help troubleshooting errors and identifying optimizations.
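
The Enterprise plan itself is used through the Claude app, but the underlying pattern of supplying internal material as context can be illustrated with Anthropic’s Python SDK. This is a hedged sketch, not the Enterprise workflow; the model identifier and the sample knowledge text are placeholders.

```python
# Hedged sketch of the long-context pattern with Anthropic's Python SDK.
# The Enterprise plan is used through the Claude app; this only illustrates
# supplying internal material as context. The model name and the sample
# "knowledge" text are placeholders.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

knowledge = """Product spec (placeholder): Widget 2.0 ships in Q4 with
offline sync and a redesigned admin console. Target: mid-market IT teams."""

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model identifier
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"{knowledge}\n\nUsing the material above, draft a short "
                   "launch announcement for internal review.",
    }],
)
print(message.content[0].text)
```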

https://www.anthropic.com/news/claude-for-enterprise

Couchbase expands cloud database platform with Capella Columnar and vector search

Couchbase, Inc. launched Capella Columnar on AWS to help organizations streamline the development of adaptive applications by enabling real-time data analysis alongside operational workloads within a single database platform. Also generally available today is Couchbase Mobile with vector search, which makes it possible for customers to offer similarity and hybrid search in their applications on mobile and at the edge, and Capella Free Tier, a free developer environment.

Capella Columnar addresses the challenge of parsing, transforming, and persisting JSON data into an analysis-ready columnar format. It supports real-time, multisource ingestion of data from Couchbase and from systems like Confluent Cloud, which can draw data from third-party JSON or SQL systems. Capella iQ, an AI coding assistant, writes SQL++ so developers do not need to wait for a BI team to run analytics for them. Once an important metric is calculated, it can be written back to the operational side of Capella, which can use the metric within the application.

Using vector search on-device with Couchbase Lite, the embedded database for mobile and IoT applications, mobile developers can now leverage vector search at the edge for building semantic search and retrieval-augmented generation (RAG) applications.
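
Conceptually, vector search at the edge means storing an embedding alongside each document and ranking documents by similarity to an embedded query. The sketch below illustrates the idea with plain NumPy; it is not the Couchbase Lite API, and the embed function is a hypothetical stand-in for an on-device embedding model.

```python
# Conceptual sketch of on-device similarity search (not the Couchbase Lite
# API). Each stored document carries an embedding; a query is embedded the
# same way and documents are ranked by cosine similarity.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for an on-device embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.standard_normal(8)

docs = {
    "note-1": "Reset the thermostat from the maintenance panel.",
    "note-2": "Quarterly sales summary for the retail division.",
}
vectors = {doc_id: embed(text) for doc_id, text in docs.items()}

def search(query: str, k: int = 1) -> list[str]:
    """Return the ids of the k documents most similar to the query."""
    q = embed(query)
    def cosine(v: np.ndarray) -> float:
        return float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q)))
    return sorted(vectors, key=lambda d: cosine(vectors[d]), reverse=True)[:k]

print(search("how do I reset the thermostat?"))
```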

https://www.couchbase.com/blog/free-tier-capella-columnar-mobile-vector-search-and-more/

Elastic returns to open source license for Elasticsearch and Kibana

Elastic, a Search AI Company, announced that it is adding the GNU Affero General Public License v3 (AGPL) as an option for users to license the free part of the Elasticsearch and Kibana source code that is available under Server Side Public License 1.0 (SSPL 1.0) and Elastic License 2.0 (ELv2).

With the addition of AGPL, an open source license approved by the Open Source Initiative (OSI), Elasticsearch and Kibana will be officially considered open source and enable Elastic’s customers and community to use, modify, redistribute, and collaborate on Elastic’s source code under a well-known open source license.

Adding AGPL will also enable greater engagement and adoption across Elastic’s users in areas including vector search, further increasing the popularity of Elasticsearch as a runtime platform for RAG and for building GenAI applications.

The addition of AGPL as a license option does not affect existing users working with either SSPL or ELv2, and there will be no change to Elastic’s binary distributions. Similarly, for users building applications or using plugins on Elasticsearch or Kibana, nothing changes — Elastic’s client libraries will continue to be licensed under Apache 2.0.

https://www.elastic.co/blog/elasticsearch-is-open-source-again


© 2024 The Gilbane Advisor
