Curated for content, computing, and digital experience professionals

Category: Content management & strategy

This category includes editorial and news blog posts related to content management and content strategy. For older, long-form reports, papers, and research on these topics, see our Resources page.

Content management is a broad topic that refers to the management of unstructured or semi-structured content as a standalone system or a component of another system. Varieties of content management systems (CMS) include web content management (WCM), enterprise content management (ECM), component content management (CCM), and digital asset management (DAM) systems. Content management systems are also now widely marketed as Digital Experience Management (DEM, DXM, or DXP) and Customer Experience Management (CEM or CXM) systems or platforms, and may include additional marketing technology functions.

Content strategy topics include information architecture, content and information models, content globalization, and localization.

For some historical perspective see:

https://gilbane.com/gilbane-report-vol-8-num-8-what-is-content-management/

DataStax launches new integration with LangChain

DataStax announced a new integration with LangChain, the popular orchestration framework for developing applications with large language models (LLMs). The integration makes it easy to add Astra DB (the real-time database for developers building production Gen AI applications) or Apache Cassandra as a new vector source in the LangChain framework.

As companies add retrieval augmented generation (RAG) to their generative AI applications, supplying context from outside data sources to deliver more accurate LLM query responses, they need a vector store that supports real-time updates with very low latency on critical production workloads.

Generative AI applications built with RAG stacks require a vector-enabled database and an orchestration framework like LangChain to provide memory or context to LLMs for accurate and relevant answers. Developers use LangChain as an AI-first toolkit to connect their applications to different data sources.

The integration lets developers use the Astra DB vector database for their LLM, AI assistant, and real-time generative AI projects through the LangChain plugin architecture for vector stores. Together, Astra DB and LangChain help developers take advantage of framework features like vector similarity search, semantic caching, term-based search, LLM-response caching, and data injection from Astra DB (or Cassandra) into prompt templates.
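
For developers, the integration surfaces as a vector store class in LangChain. Below is a minimal sketch of the pattern; exact package, class, and parameter names vary across LangChain releases, and the collection name, API endpoint, and token shown are placeholders.

```python
# Minimal sketch: Astra DB as a LangChain vector store for RAG.
# Package/class names have shifted across LangChain releases; the endpoint
# and token values below are placeholders, not real credentials.
from langchain_astradb import AstraDBVectorStore  # pip install langchain-astradb
from langchain_openai import OpenAIEmbeddings     # any Embeddings implementation works

# Point the vector store at an Astra DB collection.
vector_store = AstraDBVectorStore(
    embedding=OpenAIEmbeddings(),
    collection_name="product_docs",  # hypothetical collection name
    api_endpoint="https://<db-id>-<region>.apps.astra.datastax.com",
    token="AstraCS:...",             # Astra application token (placeholder)
)

# Index a few documents; embeddings are computed and stored in Astra DB.
vector_store.add_texts([
    "Astra DB is a serverless database built on Apache Cassandra.",
    "LangChain orchestrates calls to LLMs, tools, and data sources.",
])

# Retrieve the most similar passages for a question, e.g. to fill a prompt template.
for doc in vector_store.similarity_search("What is Astra DB built on?", k=2):
    print(doc.page_content)
```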

https://www.datastax.com/blog/llamaindex-and-astra-db-building-petabyte-scale-genai-apps-just-got-easier

Ontotext GraphDB 10.4 enables users to chat with their knowledge graphs

Ontotext released version 10.4 of GraphDB, its knowledge graph database engine. GraphDB 10.4 is now available on AWS Marketplace, adding to the flexibility of how enterprises can scale and maintain knowledge graph applications. A new AWS operational guide and improvements to backup support on AWS S3 storage increase the efficiency of deploying GraphDB.

Other new 10.4 features include user-defined Access Control Lists (ACLs) for more granular control over the security of your data. Connectors to external services now include one for ChatGPT that lets you customize the answers returned by the OpenAI API with data from your own knowledge graphs. Building on this, the Talk to Your Graph LLM-backed chatbot lets you ask natural language questions about your own data.

Several new features make maintenance of running servers easier and more efficient. The improved Cluster Management View shows a wider range of information about the status of each running cluster, and upgrades to the Backup and Snapshot Compression tools reduce backup time and necessary disk space. GraphDB 10.4’s ability to control the transaction log size minimizes the chance of running out of disk space, and greater control over transaction IDs makes it easier to analyze transaction behavior and identify potential issues.
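
For context, these features sit on top of GraphDB's standard SPARQL endpoint, which remains the primary way to query a repository programmatically. Below is a minimal sketch, assuming a local GraphDB instance on its default port (7200) and a hypothetical repository name.

```python
# Minimal sketch: query a GraphDB repository over its SPARQL HTTP endpoint.
# Host, port (7200 is GraphDB's default), and repository name are placeholders.
import requests

GRAPHDB_SPARQL = "http://localhost:7200/repositories/my-knowledge-graph"

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?s ?label WHERE { ?s rdfs:label ?label } LIMIT 10
"""

response = requests.get(
    GRAPHDB_SPARQL,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
response.raise_for_status()

# Print each result row as "subject -> label".
for row in response.json()["results"]["bindings"]:
    print(row["s"]["value"], "->", row["label"]["value"])
```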

https://www.ontotext.com/products/graphdb/

Language Weaver extends neural machine translation capabilities with LLMs

Language Weaver, a business unit within RWS’s Language Services and Technology division, is accelerating the development and deployment of generative AI technologies to enhance the capabilities of its secure, AI-powered machine translation platform. 

Language Weaver is using Amazon Web Services’ (AWS) machine learning and AI services, including Amazon SageMaker, to build new features and scale its Large Language Model (LLM) capabilities in a private, secure and protected environment.

Language Weaver is a neural machine translation platform that combines machine learning, artificial intelligence capabilities, and linguistic expertise. The platform provides highly accurate, real-time translation across almost 3,500 language combinations.

RWS has now also joined the AWS Partner Network (APN). The APN is a global community of partners that leverages programs, expertise and resources to build, market and sell customer offerings. This network features 100,000 partners from more than 150 countries.

https://www.rws.com/language-weaver/

Netlify announces Composable Web Platform

Netlify announced the Netlify Composable Web Platform, a new platform for enterprises to build and implement modern, composable web architecture. It offers enterprises a simplified path toward composable architecture and a common foundation for architects, developers, and marketers.

The Composable Web Platform unifies content, data sources, code, and infrastructure, and allows developers to select components to integrate into a single workflow. The platform offers a single user interface through which customers can access:

  • Netlify Core provides teams with the platform and workflow to focus on building websites and apps without labor-intensive operations. New core primitives ensure that assets are updated or rebuilt only where required, keeping customer applications consistent, up to date, and performant.
  • Netlify Connect brings all content sources and CMS applications together in a single location, giving web teams the power to orchestrate and manage how and where content is served to all frontend digital experiences. A new private SDK allows any company to create a connection between their purchased or custom content source and Netlify Connect.
  • Netlify Create integrates with your chosen content systems, frameworks, and architectures, providing an intuitive visual editing experience.

https://www.netlify.com/platform/

dbt Labs announces the next generation of dbt Semantic Layer

dbt Labs has announced the next generation of the dbt Semantic Layer following its acquisition of Transform in February 2023. The dbt Semantic Layer enables organizations to centrally define business metrics in dbt Cloud and then query them from any integrated analytics tool. This allows organizations to ensure that critical definitions such as “revenue,” “customer count,” or “churn rate” are consistent in every data application.

dbt Labs is also shipping a new integration with Tableau for the dbt Semantic Layer. With this integration, organizations that rely on Tableau's analytics platform get business-critical metrics that are consistent, reliable, and read from a single, verified source of truth. The Semantic Layer also integrates with Google Sheets, Hex, Klipfolio, Lightdash, Mode, and Push.ai. New features include:

  • Dynamic join support: Join any number of tables together to produce metrics on top of an existing database.
  • Optimized query plans and SQL generation: Generates joins, filters and aggregations as an analyst would, with legible and performant SQL.
  • Complex metric types: Enables new aggregations and more flexible metric definitions, empowering users to define more metrics critical to measuring their business.
  • Expanded data platform support: Supports BigQuery, Databricks, Redshift, and Snowflake, including performance optimizations for each.
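
To make the "define once, query anywhere" idea concrete, here is a deliberately simplified, hypothetical sketch of what a semantic layer does under the hood. It is not dbt's actual metric specification or API (dbt metrics are defined in YAML and queried through dbt Cloud); it only illustrates how a single metric definition can be compiled into consistent SQL for every downstream tool.

```python
# Hypothetical illustration of the semantic-layer idea (not dbt's actual API):
# define a metric once and generate consistent SQL for every consumer, so that
# "revenue" means the same thing in Tableau, Hex, or a spreadsheet.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    table: str
    expression: str           # aggregation over a column
    default_filter: str = ""  # e.g. exclude refunds everywhere

    def to_sql(self, group_by: list[str]) -> str:
        """Compile the metric into a GROUP BY query over the given dimensions."""
        dims = ", ".join(group_by)
        where = f"WHERE {self.default_filter} " if self.default_filter else ""
        return (
            f"SELECT {dims}, {self.expression} AS {self.name} "
            f"FROM {self.table} {where}GROUP BY {dims}"
        )

revenue = Metric(
    name="revenue",
    table="analytics.orders",
    expression="SUM(order_total)",
    default_filter="status != 'refunded'",
)

# Every downstream tool asks the layer for the metric instead of rewriting the SQL.
print(revenue.to_sql(group_by=["order_month", "region"]))
```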

https://www.getdbt.com/

OpenLink Software introduces the OpenLink Personal Assistant

From the OpenLink blog…

We are pleased to announce the immediate availability of the OpenLink Personal Assistant, a practical application of Knowledge Graph-driven Retrieval Augmented Generation (RAG) showcasing the power of knowledge discovery and exploration enabled by a modern conversational user interface. This modern approach revitalizes the enduring pursuit of high-performance, secure data access, integration, and management by harnessing the combined capabilities of Large Language Models (LLMs), Knowledge Graphs, and RAG, all propelled by declarative query languages such as SPARQL, SQL, and SPASQL (SPARQL inside SQL).

GPT-4 and GPT-3.5-turbo foundation models form the backbone of the OpenLink Assistant, offering a sophisticated level of conversational interaction. These models can interpret context, thereby providing a user experience that intuitively emulates aspects of human intelligence.

What truly sets OpenLink Assistant apart is state-of-the-art RAG technology, integrated seamlessly with SPARQL, SQL, and SPASQL (SPARQL inside SQL). This fusion, coupled with our existing text indexing and search functionality, allows for real-time, contextually relevant data retrieval from domain-specific knowledge bases deployed as knowledge graphs.

  1. Self-Describing, Self-Supporting Products: OpenLink Assistant adds a self-describing element to our Virtuoso and ODBC & JDBC Drivers products; it is enabled simply by installing the Assistant’s VAD (Virtuoso Application Distro) package.
  2. OpenAPI Compliance: With YAML and JSON description documents, OpenLink Assistant offers hassle-free integration into existing systems. Any OpenAPI-compliant service can be integrated into its conversation processing pipeline while also exposing core functionality to other service consumer apps.
  3. Device Compatibility: Whether you’re on a desktop or a mobile device, OpenLink Assistant delivers a seamless interaction experience.
  4. UI Customization: The Assistant can be skinned to align with your application’s UI, ensuring a cohesive user experience.
  5. Versatile Query Support: With support for SQL, SPARQL, and SPASQL, OpenLink Assistant can interact with a multitude of data, information, and knowledge sources.
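
The general pattern of knowledge-graph-driven RAG described above is straightforward to sketch. The example below is a generic, simplified illustration, not OpenLink's implementation: it retrieves facts from a SPARQL endpoint and passes them to an LLM as grounding context. The endpoint, query, and model name are assumptions chosen for illustration.

```python
# Generic sketch of knowledge-graph-driven RAG (not OpenLink's implementation):
# retrieve facts via SPARQL, then hand them to an LLM as grounding context.
# pip install sparqlwrapper openai
from openai import OpenAI
from SPARQLWrapper import SPARQLWrapper, JSON

def retrieve_facts(endpoint: str, query: str) -> str:
    """Run a SPARQL query and flatten the result bindings into prompt-ready text."""
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return "\n".join(
        "; ".join(f"{var}={cell['value']}" for var, cell in row.items())
        for row in rows
    )

facts = retrieve_facts(
    "https://dbpedia.org/sparql",  # any public or private SPARQL endpoint
    "SELECT ?p ?o WHERE { <http://dbpedia.org/resource/SPARQL> ?p ?o } LIMIT 20",
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer only from the supplied facts."},
        {"role": "user", "content": f"Facts:\n{facts}\n\nQuestion: What is SPARQL?"},
    ],
)
print(answer.choices[0].message.content)
```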

https://medium.com/openlink-software-blog/introducing-the-openlink-personal-assistant-e74a76eb2bed

Optimizely debuts marketing operating system  

Optimizely, a digital experience platform provider, announced a new operating system for marketing practitioners built on its digital experience platform: Optimizely One. Optimizely One is enriched through a single, unified workflow accelerated by AI, making it simpler for marketers to create, manage, deliver, and optimize content all in one place, with:

Composable and decoupled stack:

  • CMS SaaS Core: Design entire web experiences with a low/no-code interface, and benefit from automatic upgrades, SLAs, and managed services
  • Graph: Aggregate content from any source, manage it from a single hub, and deliver it to any channel, app or device
  • Visual Experience Builder: Apply pre-built templates and customer data to create and preview engaging experiences via a new editor.

Unified Workflow:

  • Experiment Collaboration: Manage every aspect of your experimentation program with a purpose-built tool
  • Omnichannel Authoring: Create content once and pull it into multiple content types and formats, to deliver to any channel

Embedded AI capabilities:

  • Opal, a new identity for AI: Engage with Optimizely’s AI to surface insights, review recommendations, and create new content.
  • Partnership with Writer: Utilize business-focused large language models (LLMs) to create enhanced AI-generated content that includes specific industry knowledge.

https://www.optimizely.com

Adobe releases new Experience Manager for Enterprises

Adobe announced the availability of the all-new Adobe Experience Manager Sites, Adobe’s content management system (CMS) for enterprises. The enterprise application includes new capabilities that allow businesses to test and optimize web content to drive conversions and deliver better site speeds with optimized boilerplate code, phased page rendering, persistent caching, and continual real-user monitoring. Familiar word processing tools such as Microsoft Word and Google Docs can now be used by anyone to create and edit web pages.

Adobe Experience Manager Sites delivers:

  • Increased site performance: Experience Manager Sites comes with Adobe-developed performance tools – including optimized boilerplate code that gives developers a starting point, phased page rendering to ensure every page’s most prominent parts load first, persistent caching to avoid content loading delays, and continual real-user monitoring.
  • Web content optimization: New built-in experimentation tools help marketing teams quickly test brand experiences to better understand which content is driving engagements.
  • Simplified authoring: Content management is now faster and more accessible for all marketers through document-based authoring. The Adobe Experience Manager CMS enables any marketer to create and edit webpages with Microsoft Word or Google Docs.
  • Adobe Sensei GenAI: Delivers LLM-agnostic tools for brands to write and modify copy in a brand’s voice within existing workflows.

https://business.adobe.com/products/experience-manager/sites/aem-sites.html


© 2024 The Gilbane Advisor
