Curated for content, computing, and digital experience professionals

Year: 2024

Adobe announces Adobe Express updates and special teams offer

Adobe announced new innovations in Adobe Express, which brings Adobe’s creative tools into an app that’s easy for smaller businesses to leverage across teams to create content.

Adobe AI features in Adobe Express are built into workflows, not bolted on as an upsell. Adobe Firefly generative AI-powered features in Adobe Express are designed to be commercially safe, so businesses can protect their brand and publish business content with confidence.

Adobe Express for teams offers the best of Adobe in an app that employees of any skill level can use to create on-brand social posts, flyers, videos, presentations, and more. Adobe Express for teams includes thousands of distinctive templates curated by Adobe professionals and thousands of Adobe assets, including stock photos, videos, audio files, and premium fonts. Businesses can make content even more eye-catching with animations and use AI to generate new images and remove backgrounds instantly.

The new Adobe Express for Teams offer is available immediately for $49.99 per user per year guaranteed for up to three years with a two-seat minimum and includes a 90-day free trial with payment. The offer runs through Sept. 30, 2024. Adobe Express offers qualified 501(c)(3) nonprofits free access to premium features.

https://news.adobe.com/news/news-details/2024/Adobe-Express-Updates-Deliver-More-Value-for-Solopreneurs-and-SMBs-with-Innovation-and-Special-Teams-Offer/default.aspx

Cloudera adds Accelerators for Machine Learning Projects (AMPs)

Cloudera, provider of a hybrid platform for data, analytics, and AI, announced new Accelerators for ML Projects (AMPs), designed to reduce time-to-value for enterprise AI use cases. The new additions focus on providing enterprises with cutting-edge AI techniques and examples within Cloudera that can assist with AI integration and drive more impactful results.

AMPs are end-to-end machine learning (ML) projects that can be deployed with a single click directly from the Cloudera platform. Each AMP encapsulates industry practices for tackling complex ML challenges, with workflows that facilitate seamless transitions. Cloudera AMPs are open source and include deployment instructions for any environment. Updates include:

  • Fine-Tuning Studio – Provides users with an all-encompassing application and “ecosystem” for managing, fine-tuning, and evaluating LLMs.
  • RAG with Knowledge Graph – A demonstration of how to power a RAG (retrieval augmented generation) application with a knowledge graph to capture relationships and context not easily accessible by vector stores alone (see the sketch after this list).
  • PromptBrew – Offers AI-powered assistance to create reliable prompts via a simple user interface.
  • Chat with Your Documents – Building upon the previous LLM Chatbot Augmented with Enterprise Data AMP, this accelerator enhances the responses of the LLM using context from an internal knowledge base created from the documents uploaded by the user.
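
For readers who want a concrete picture of the RAG with Knowledge Graph pattern, here is a minimal, generic Python sketch: facts retrieved from a small knowledge graph supplement vector-style retrieval before the prompt is assembled. This is not Cloudera AMP code; the documents, the triples, and the call_llm() stub are all illustrative assumptions.

# Generic, self-contained sketch of RAG backed by a knowledge graph: facts from
# the graph supplement vector-style retrieval before the prompt is built.
# Not Cloudera AMP code; documents, triples, and call_llm() are placeholders.
import math

DOCS = {
    "d1": "Cloudera AMPs are one-click, open source ML project templates.",
    "d2": "A knowledge graph models entities and the relationships between them.",
    "d3": "Vector stores rank documents by embedding similarity.",
}

# (subject, relation, object) triples standing in for a real knowledge graph.
TRIPLES = [
    ("Cloudera", "publishes", "AMPs"),
    ("AMPs", "include", "RAG with Knowledge Graph"),
    ("RAG with Knowledge Graph", "uses", "a knowledge graph"),
]


def embed(text):
    """Toy bag-of-words 'embedding'; a real pipeline would call an embedding model."""
    vec = {}
    for token in text.lower().split():
        vec[token] = vec.get(token, 0) + 1
    return vec


def cosine(a, b):
    dot = sum(count * b.get(token, 0) for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def vector_retrieve(query, k=2):
    """Rank the toy documents against the query, as a vector store would."""
    query_vec = embed(query)
    ranked = sorted(DOCS.values(), key=lambda d: cosine(query_vec, embed(d)), reverse=True)
    return ranked[:k]


def graph_retrieve(query):
    """Pull graph facts whose subject or object is mentioned in the query."""
    q = query.lower()
    return [f"{s} {r} {o}" for s, r, o in TRIPLES if s.lower() in q or o.lower() in q]


def call_llm(prompt):
    """Placeholder for any LLM client."""
    return "[LLM answer would be generated from this prompt]\n" + prompt


def answer(query):
    context = vector_retrieve(query) + graph_retrieve(query)
    prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: " + query
    return call_llm(prompt)


if __name__ == "__main__":
    print(answer("What do Cloudera AMPs include?"))

In the shipped AMP, the graph and the vector store would be real services and call_llm() a real model endpoint; the sketch only shows how graph facts add relationship context that similarity search alone would miss.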

https://www.cloudera.com/about/news-and-blogs/press-releases/2024-09-12-cloudera-unveils-new-suite-of-accelerators-for-machine-learning-projects-amps.html

Gilbane Advisor 9-11-24 — LLM Agents & Architectures, Reckoning…

This week we feature articles from Aparna Dhinakaran, and Alex Russell.

Additional reading comes from Dries Buytaert, Bob DuCharme, Heather Hedden, and Kenrick Cai, Krystal Hu & Anna Tong.

News comes from Syncro Soft, Anthropic, Elastic, and Couchbase.

Our next issue will arrive September 18th.

All previous issues are available at https://gilbane.com/gilbane-advisor-index


Opinion / Analysis

Navigating the new types of LLM agents and architectures

“How can teams navigate the new frameworks and new agent directions? What tools are available, and which should you use to build your next application? As a leader at a company that recently built our own complex agent to act as a copilot within our product, we have some insights on this topic.”

Aparna Dhinakaran’s piece is a useful update with some good advice for developers, product architects, and business analysts. (10 min)

https://towardsdatascience.com/navigating-the-new-types-of-llm-agents-and-architectures-309382ce9f88

Reckoning: Part 1 — the landscape

Alex Russell’s passionate and detailed “… investigation into JavaScript-first frontend culture and how it broke US public services…” exposes the high cost and poor user experience of widely deployed web services in general.

The focus on public services provides powerful examples, and combined with his web technology experience makes for a compelling case. There are four parts to his investigation – links to the other three are included. (4 min)

https://infrequently.org/2024/08/the-landscape

More Reading

All Gilbane Advisor issues


Content technology news

Anthropic announces Claude for Enterprise

Claude Enterprise includes an expanded 500K context window, more usage capacity, a native GitHub integration, and enterprise-grade security features.
https://www.anthropic.com/news/claude-for-enterprise

Syncro Soft releases Oxygen AI Positron Assistant 3.0

The tool supports AI-generated content within Oxygen XML Editor/Author/Developer, Oxygen XML Web Author, and Oxygen Content Fusion.
https://www.oxygenxml.com/ai_positron_assistant.html

Elastic returns to open source license for Elasticsearch and Kibana

With the addition of AGPL, an open source license approved by the Open Source Initiative (OSI), Elasticsearch & Kibana will be officially considered open source.
https://www.elastic.co/blog/elasticsearch-is-open-source-again

Couchbase expands cloud database platform with Capella Columnar and vector search

Helps streamline development of adaptive applications by enabling real-time data analysis alongside operational workloads in a single database platform.
https://www.couchbase.com/blog/free-tier-capella-columnar-mobile-vector-search-and-more/

All content technology news


The Gilbane Advisor is authored by Frank Gilbane and is ad-free, cost-free, and curated for content, computing, web, data, and digital experience technology and information professionals. We publish recommended articles and content technology news most Wednesdays. We do not sell or share personal data.

Subscribe | View online | Editorial policy | Privacy policy | Contact

SearchStax and Magnolia partner on personalized search solutions

SearchStax, a search experience company, and Magnolia, a composable digital experience platform (DXP), announced a technology partnership to help marketing teams deliver modern, personalized search experiences while driving marketing agility.

This strategic partnership merges the search capabilities of SearchStax Site Search with Magnolia’s flexible, enterprise-grade DXP, offering marketers and developers the tools they need to create next-level digital experiences throughout the customer journey. Combining SearchStax’s advanced search technology with Magnolia enables organizations to enhance website performance and user engagement by providing visitors with fast, accurate, and contextually relevant search results.

The integration module allows Magnolia-managed content to be fed into the SearchStax index, augmenting the search experience for end users. The improved search results surface Magnolia content such as editorial and campaign content, FAQs, and other relevant assets. This enhances the website search experience, reduces “no result” searches, and increases conversions.

Customers can now adopt SearchStax within their Magnolia DXP, and teams from both companies are ready to assist with implementation and optimization.

https://www.searchstax.com ■ https://www.magnolia-cms.com

Syncro Soft releases Oxygen AI Positron Assistant 3.0

Version 3.0 increases the efficiency of using the tool, as certain actions now leverage the Retrieval-Augmented Generation (RAG) process to obtain context from the user’s current projects.

The new AI Positron Assistant drop-down widget offers a convenient way of accessing useful AI actions by displaying a floating contextual menu directly within the editing area. Users can customize their own AI actions to display as Quick Assist fixes in the editor. It is also now possible to choose the OpenAI model used in chat sessions and actions right from the AI Positron Assistant view.

A variety of new AI actions that are specific to working with DITA XML documents have been implemented, including a Proofread action that helps users identify potential issues in their content regarding logical consistency, grammar, spelling, readability, and comprehension.

Other newly implemented actions include Improve Structure, which instructs the AI to enhance DITA XML documents by adding structural or inline elements, and Add Structured Content, which continues the content of a document with additional structured content generated from similar content in the current project, giving the AI more context for formulating the new XML structure.

 https://www.oxygenxml.com/ai_positron_assistant.html

Anthropic announces Claude for Enterprise

Anthropic announced the Claude Enterprise plan to help organizations securely collaborate with Claude using internal knowledge. The Claude Enterprise plan offers an expanded 500K context window, more usage capacity, and a native GitHub integration so you can work on entire codebases with Claude. It also includes enterprise-grade security features—like SSO, role-based permissions, and admin tooling—that help protect your data and team.

With Claude, your organization’s knowledge is easier to share and reuse, enabling every individual on the team to quickly and consistently produce their best work. At the same time, your data is protected: Anthropic does not train Claude on your conversations and content. By integrating Claude with your organization’s knowledge, you can scale expertise across more projects, decisions, and teams.

When you combine expanded context windows with Projects and Artifacts, Claude becomes an end-to-end solution to help your team take any initiative from idea to high-quality work output. For example, marketers can turn market trends into a compelling campaign. Product managers can upload product specifications for Claude to build an interactive prototype. Engineers can connect codebases for help on troubleshooting errors and identifying optimizations.

https://www.anthropic.com/news/claude-for-enterprise

Couchbase expands cloud database platform with Capella Columnar and vector search

Couchbase, Inc. launched Capella Columnar on AWS to help organizations streamline the development of adaptive applications by enabling real-time data analysis alongside operational workloads within a single database platform. Also generally available today are Couchbase Mobile with vector search, which makes it possible for customers to offer similarity and hybrid search in their applications on mobile and at the edge, and Capella Free Tier, a free developer environment.

Capella Columnar addresses the challenge of parsing, transforming, and persisting JSON data into an analysis-ready columnar format. It supports real-time, multisource ingestion of data from Couchbase and from systems like Confluent Cloud, which can draw data from third-party JSON or SQL systems. Capella Columnar makes analysis easy with Capella iQ, an AI coding assistant that writes SQL++ so developers don’t need to wait for the BI team to run analytics for them. Once an important metric is calculated, it can be written back to the operational side of Capella so the application can use it.
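
As a rough illustration of that write-back loop (compute a metric with SQL++, then hand it to the operational side so the application can read it), here is a minimal sketch using the Couchbase Python SDK. The endpoint, credentials, bucket, scope, and collection names are hypothetical, and Capella Columnar itself is reached through its own analytics tooling rather than this operational query service, so treat it strictly as a sketch of the pattern.

# Illustrative only: compute a metric with SQL++ and write it back so the
# operational application can read it. Endpoint, credentials, bucket, scope,
# and collection names are hypothetical.
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster(
    "couchbases://cb.example.cloud.couchbase.com",  # hypothetical Capella endpoint
    ClusterOptions(PasswordAuthenticator("app_user", "app_password")),
)
cluster.wait_until_ready(timedelta(seconds=10))

# SQL++ aggregation over JSON order documents (the kind of query an assistant
# like Capella iQ might generate).
rows = cluster.query(
    "SELECT o.region, SUM(o.total) AS revenue "
    "FROM `sales`.`app`.`orders` AS o "
    "GROUP BY o.region"
)

# Write each computed metric back to an operational collection so the
# application can use it directly.
metrics = cluster.bucket("sales").scope("app").collection("metrics")
for row in rows:
    metrics.upsert(f"revenue::{row['region']}", row)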

Using vector search on-device with Couchbase Lite, the embedded database for mobile and IoT applications, mobile developers can now leverage vector search at the edge for building semantic search and retrieval-augmented generation (RAG) applications.

https://www.couchbase.com/blog/free-tier-capella-columnar-mobile-vector-search-and-more/

Elastic returns to open source license for Elasticsearch and Kibana

Elastic, a Search AI Company, announced that it is adding the GNU Affero General Public License v3 (AGPL) as an option for users to license the free part of the Elasticsearch and Kibana source code that is available under Server Side Public License 1.0 (SSPL 1.0) and Elastic License 2.0 (ELv2).

With the addition of AGPL, an open source license approved by the Open Source Initiative (OSI), Elasticsearch and Kibana will be officially considered open source and enable Elastic’s customers and community to use, modify, redistribute, and collaborate on Elastic’s source code under a well-known open source license.

Elastic says that adding AGPL will also enable greater engagement and adoption across its users in areas including vector search, further increasing the popularity of Elasticsearch as a runtime platform for RAG and for building GenAI applications.
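
As a concrete picture of the vector search use case Elastic highlights, here is a minimal sketch of a kNN query using the official Elasticsearch Python client. The endpoint, API key, index name, dense_vector field, and query embedding are hypothetical placeholders; a real RAG pipeline would generate the embedding with a model and pass the hits to an LLM as context.

# Minimal kNN vector search sketch with the Elasticsearch Python client.
# Endpoint, API key, index name, field name, and the query vector are
# hypothetical placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

query_vector = [0.12, -0.03, 0.88]  # normally produced by an embedding model

response = es.search(
    index="docs",
    knn={
        "field": "embedding",        # a dense_vector field in the index mapping
        "query_vector": query_vector,
        "k": 5,                      # nearest neighbors to return
        "num_candidates": 50,        # candidates considered per shard
    },
    source=["title", "body"],
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])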

The addition of AGPL as a license option does not affect existing users working with either SSPL or ELv2, and there will be no change to Elastic’s binary distributions. Similarly, for users building applications or using plugins on Elasticsearch or Kibana, nothing changes — Elastic’s client libraries will continue to be licensed under Apache 2.0.

https://www.elastic.co/blog/elasticsearch-is-open-source-again


© 2024 The Gilbane Advisor
