Curated for content, computing, and digital experience professionals

Category: Content management & strategy

This category includes editorial and news blog posts related to content management and content strategy. For older, long-form reports, papers, and research on these topics, see our Resources page.

Content management is a broad topic that refers to the management of unstructured or semi-structured content, either as a standalone system or as a component of another system. Varieties of content management systems (CMS) include web content management (WCM), enterprise content management (ECM), component content management (CCM), and digital asset management (DAM) systems. Content management systems are now also widely marketed as digital experience management (DEM, DXM, or DXP) and customer experience management (CEM or CXM) systems or platforms, and may include additional marketing technology functions.

Content strategy topics include information architecture, content and information models, content globalization, and localization.

For some historical perspective see:

https://gilbane.com/gilbane-report-vol-8-num-8-what-is-content-management/

Box integrates with Microsoft Azure OpenAI Service

Box, Inc., the Content Cloud company, announced a new integration with Microsoft Azure OpenAI Service that brings its advanced large language models to Box AI. The integration enables Box customers to benefit from advanced AI models while applying Box and Microsoft’s enterprise standards for security, privacy, and compliance to the technology.

Guided by its AI Principles, Box has built Box AI on the company’s platform-neutral framework, allowing it to connect with today’s large language models. By integrating with Azure OpenAI Service, Box is bringing advanced intelligence models to its Content Cloud to deliver enterprise-grade AI. Microsoft and Box already help customers meet strict compliance requirements like FINRA, GxP, and FedRAMP. With today’s announcement, they will also empower organizations across highly regulated industries to leverage AI for new use cases.

Box AI, including the integration with Azure OpenAI Service, is generally available today and is included in all Enterprise Plus plans, with individual users having access to 20 queries per month and 2,000 additional queries available at the company level.

https://www.boxinvestorrelations.com/news-and-media/news/press-release-details/2024/Box-Expands-its-Collaboration-with-Microsoft-with-New-Azure-OpenAI-Service-Integration/default.aspx

Couchbase adds vector search to database platform

Couchbase, Inc., a cloud database platform company, introduced vector search as a new feature in Couchbase Capella Database-as-a-Service (DBaaS) and Couchbase Server to help businesses bring to market a new class of adaptive applications that engage users in a hyper-personalized and contextualized way. The new feature offers vector search optimized for running on premises, across clouds, and on mobile and IoT devices at the edge, so organizations can run adaptive applications anywhere.

While vector-only databases aim to solve the challenges of processing and storing data for LLMs, having multiple standalone solutions adds complexity to the enterprise IT stack and slows application performance. Couchbase’s multipurpose capabilities deliver a simplified architecture to improve the accuracy of LLM results. Couchbase also makes it easier and faster for developers to build such applications with a single SQL++ query using the vector index, removing the need to use multiple indexes or products. With vector search as a feature across all Couchbase products, customers gain:

  • Similarity and hybrid search, combining text, vector, range and geospatial search capabilities in one.
  • RAG to make AI-powered applications more accurate, safe and timely.
  • Enhanced performance because all search patterns can be supported within a single index to lower response latency.
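
To make the single-query idea concrete, here is a minimal sketch of a hybrid text-plus-vector search using the Couchbase Python SDK. The cluster address, bucket, index, and field names are placeholders, the embedding is truncated, and the exact classes may vary by SDK version; treat it as an illustration rather than Couchbase’s reference example.

```python
# Hypothetical sketch: hybrid text + vector search against a Couchbase Search index.
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions, SearchOptions
from couchbase.search import MatchQuery, SearchRequest
from couchbase.vector_search import VectorQuery, VectorSearch

cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("user", "password")))
scope = cluster.bucket("catalog").scope("products")   # placeholder bucket/scope

# Vector produced by whatever embedding model the application uses (truncated here).
query_embedding = [0.12, -0.03, 0.88]

# One request combines a plain-text match with an approximate nearest-neighbour
# vector query against the "description_embedding" field of the search index.
request = SearchRequest.create(MatchQuery("waterproof hiking boots")).with_vector_search(
    VectorSearch.from_vector_query(
        VectorQuery.create("description_embedding", query_embedding)))

result = scope.search("products-index", request, SearchOptions(limit=5))
for row in result.rows():
    print(row.id, row.score)
```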

https://www.couchbase.com/blog/announcing-vector-search/

IBM announces availability of open-source Mistral AI model on watsonx

IBM announced the availability of the popular open-source Mixtral-8x7B large language model (LLM), developed by Mistral AI, on its watsonx AI and data platform, as it continues to expand capabilities to help clients innovate with IBM’s own foundation models and those from a range of open-source providers.

The addition of Mixtral-8x7B expands IBM’s open, multi-model strategy to meet clients where they are and give them choice and flexibility to scale enterprise AI solutions across their businesses.

Mixtral-8x7B was built using a combination of sparse modeling (a technique that finds and uses only the most essential parts of data to create more efficient models) and the Mixture-of-Experts technique, which combines different models (“experts”) that each specialize in and solve different parts of a problem. The Mixtral-8x7B model is widely known for its ability to rapidly process and analyze vast amounts of data to provide context-relevant insights.
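
For readers unfamiliar with the technique, the toy sketch below shows the core idea of sparse Mixture-of-Experts routing: a router scores all experts, only the top-k actually run, and their outputs are mixed. It is a simplified illustration, not Mixtral’s actual implementation.

```python
# Toy sparse Mixture-of-Experts layer with top-2 gating (illustration only).
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2   # Mixtral-8x7B routes each token to 2 of 8 experts

# Each "expert" here is just a small feed-forward weight matrix.
experts = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
           for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model)

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts and mix their outputs."""
    logits = token @ router                # one routing score per expert
    top = np.argsort(logits)[-top_k:]      # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Only the selected experts run, which is what makes the model "sparse".
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

print(moe_layer(rng.standard_normal(d_model)).shape)   # (16,)
```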

This week, IBM also announced the availability of ELYZA-japanese-Llama-2-7b, a Japanese LLM model open-sourced by ELYZA Corporation, on watsonx. IBM also offers Meta’s open-source models Llama-2-13B-chat and Llama-2-70B-chat and other third-party models on watsonx.

https://newsroom.ibm.com/2024-02-29-IBM-Announces-Availability-of-Open-Source-Mistral-AI-Model-on-watsonx,-Expands-Model-Choice-to-Help-Enterprises-Scale-AI-with-Trust-and-Flexibility

Flux launches full release of WordPress on decentralized platform

Flux, a global decentralized technology company specializing in cloud infrastructure, cloud computing, artificial intelligence, and decentralized storage, has officially launched WordPress on its platform following a beta phase that began in February 2023, making the most popular content management system (CMS) accessible on Flux’s decentralized infrastructure.

The offering is backed by Flux’s extensive network of nodes, ranging from individual users to enterprise-grade data centers, designed to deliver consistent performance for WordPress sites. This infrastructure addresses crucial web metrics such as bounce rate and conversion rate, which are significantly affected by loading speed.

Flux’s decentralized WordPress solution offers an efficient, cost-effective, and scalable hosting option. Deploying a WordPress site on Flux requires minimal technical expertise, and extensive support resources make the technology accessible and user-friendly for a wider audience.

Key features include geolocation capabilities for optimizing site performance based on user concentration, alongside built-in redundancy to guarantee maximum availability. With a four-tiered pricing plan that competes favorably with traditional hosting services, Flux delivers web hosting for all project sizes and budgets with accessibility at its core.

https://wordpress.runonflux.io

DataStax and LlamaIndex partner to make building RAG applications easier

DataStax announced that its retrieval-augmented generation (RAG) solution, RAGStack, is now generally available with LlamaIndex as a supported open-source framework, in addition to LangChain. DataStax RAGStack for LlamaIndex also supports an integration (currently in public preview) with LlamaIndex’s LlamaParse, which gives developers using Astra DB an API to parse and transform complex PDFs into vectors in minutes.

LlamaIndex is a framework for ingesting, indexing, and querying data for building generative AI applications and addresses the ingestion pipelines needed for enterprise-ready RAG. LlamaParse is LlamaIndex’s new offering that targets enterprise developers building RAG over complex PDFs; it enables clean extraction of tables by running recursive retrieval, promising more accurate parsing of the complex documents often found in business.

RAGStack with LlamaIndex offers a solution tailored to address the challenges encountered by enterprise developers in implementing RAG solutions. Benefits include a curated Python distribution available on PyPI for integration with Astra DB, DataStax Enterprise (DSE), and Apache Cassandra, and a live RAGStack test matrix and GenAI app templates.

Users can use LlamaIndex alone or in combination with LangChain and its ecosystem, including LangServe, LangChain Templates, and LangSmith.
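
As a rough illustration of that pipeline, the sketch below parses a PDF with LlamaParse and indexes the result into Astra DB through LlamaIndex. Credentials, file names, the endpoint, and the collection name are placeholders, and package layouts change between LlamaIndex releases, so treat it as a sketch rather than the RAGStack reference implementation.

```python
# Hypothetical sketch: LlamaParse -> LlamaIndex -> Astra DB vector store.
from llama_parse import LlamaParse
from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.vector_stores.astra_db import AstraDBVectorStore

# 1. Parse a complex PDF (tables included) into clean documents.
documents = LlamaParse(api_key="LLAMA_CLOUD_API_KEY",
                       result_type="markdown").load_data("quarterly-report.pdf")

# 2. Point LlamaIndex at an Astra DB collection as the vector store.
vector_store = AstraDBVectorStore(
    token="ASTRA_DB_APPLICATION_TOKEN",
    api_endpoint="https://<db-id>-<region>.apps.astra.datastax.com",
    collection_name="rag_docs",
    embedding_dimension=1536)           # must match the embedding model used

# 3. Embed, index, and query (uses the default embedding model / LLM
#    configured in the environment).
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=StorageContext.from_defaults(vector_store=vector_store))
print(index.as_query_engine().query("Summarize the revenue outlook."))
```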

https://www.datastax.com/press-release/datastax-and-lamaIndex-partner-to-make-building-rag-applicationseasier-than-ever-for-genai-developers

Acquia enhances brand management capabilities

Acquia announced new integrations for its digital asset management solution, Acquia DAM, that expand its brand management capabilities. These integrations, with Acquia Campaign Studio, Adobe Stock, and Google Translate, reduce the complexity of maintaining a consistent brand experience across digital channels.

Acquia DAM is now integrated with Acquia Campaign Studio, the company’s marketing automation solution. The integration leverages Acquia’s instant search connector tool, so once a user is authenticated in the DAM connector within Campaign Studio, they can search, view, and select the asset of their choice within Campaign Studio’s email and landing page builders. Pictures in email and landing page builders dynamically change when updated in Acquia DAM.

An Adobe Stock integration automatically syncs a customer’s newly licensed Adobe Stock assets with Acquia DAM, bringing in essential metadata and offering smoother workflows. Creative pros can choose which types of Adobe Stock assets to monitor and sync, and the integration handles file copying and categorization in Acquia DAM. Customers can now use Google Translate to automatically translate text from selected metadata fields within Acquia DAM. The DAM automatically repopulates these fields with translated content in up to 20 languages.

https://www.acquia.com/newsroom/press-releases/acquia-enhances-brand-management-capabilities

Adobe announces AI Assistant in Reader and Acrobat

Adobe introduced AI Assistant in beta, a new generative AI-powered conversational engine in Reader and Acrobat. Integrated into Reader and Acrobat workflows, AI Assistant instantly generates summaries and insights from long documents, answers questions and formats information for sharing in emails, reports and presentations.

AI Assistant leverages the same artificial intelligence and machine learning models behind Acrobat Liquid Mode, technology that supports responsive reading experiences for PDFs on mobile. These proprietary models provide a deep understanding of PDF structure and content, enhancing quality and reliability in AI Assistant outputs.

Acrobat Individual, Pro and Teams customers and Acrobat Pro trialists can use the AI Assistant beta to work more productively today. No complicated implementations required. Simply open Reader or Acrobat and start working with the new capabilities.

Reader and Acrobat customers will have access to the full range of AI Assistant capabilities through a new add-on subscription plan when AI Assistant is out of beta. Until then, the new AI Assistant features are available in beta for Acrobat Standard and Pro Individual and Teams subscription plans on desktop and web in English, with features coming to Reader desktop customers in English over the next few weeks at no additional cost.

https://news.adobe.com/news/news-details/2024/Adobe-Brings-Conversational-AI-to-Trillions-of-PDFs-with-the-New-AI-Assistant-in-Reader-and-Acrobat/default.aspx

Ontotext releases Ontotext Metadata Studio 3.7

Ontotext, a provider of enterprise knowledge graph (EKG) technology and semantic database engines, announced the availability of Ontotext Metadata Studio (OMDS) 3.7, an all-in-one environment that facilitates the creation, evaluation, and quality improvement of text analytics services. This latest release provides out-of-the-box, rapid natural language processing (NLP) prototyping and development so organizations can iteratively create a text analytics service that best serves their domain knowledge. 

As part of Ontotext’s AI-in-Action initiative, which helps data scientists and engineers benefit from the AI capabilities of its products, the latest version enables users to tag content with the Common English Entity Linking (CEEL) text analytics service. CEEL is trained to link mentions of people, organizations, and locations to their representation in Wikidata, the public knowledge graph that includes close to 100 million entity instances. With OMDS, organizations can recognize approximately 40 million Wikidata concepts and streamline both information extraction from text and the enrichment of databases and knowledge graphs. Organizations can:

  • Automate tagging and categorization of content to facilitate more efficient discovery, reviews, and knowledge synthesis. 
  • Enrich content, achieve precise search, improve SEO, and enhance the performance of LLMs and downstream analytics.
  • Streamline information extraction from large volumes of unstructured content and analyze market trends.
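
Entity linking of this kind resolves a textual mention to its Wikidata identifier (QID). The snippet below is not Ontotext’s CEEL API; it simply calls Wikidata’s public wbsearchentities endpoint to show the kind of mention-to-QID lookup involved, without CEEL’s disambiguation against surrounding context.

```python
# Illustration only: fetch candidate Wikidata entities for a surface mention.
import requests

def wikidata_candidates(mention: str, limit: int = 3):
    """Return (QID, label, description) candidates for a mention."""
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={"action": "wbsearchentities", "search": mention,
                "language": "en", "format": "json", "limit": limit},
        timeout=10)
    return [(e["id"], e.get("label", ""), e.get("description", ""))
            for e in resp.json().get("search", [])]

for qid, label, desc in wikidata_candidates("Ontotext"):
    print(qid, label, "-", desc)
```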

https://www.ontotext.com/products/ontotext-metadata-studio/


© 2024 The Gilbane Advisor
