Category: Content management & strategy
This category includes editorial and news blog posts related to content management and content strategy. For older long-form reports, papers, and research on these topics, see our Resources page.
Adobe and ServiceNow announced the availability of their partnership integration, connecting data from Adobe Experience Platform and ServiceNow’s Customer Service Management workflow product to enable more seamless, connected customer experiences. Connecting Adobe Experience Platform, Adobe’s Customer Experience Management (CXM) platform, with ServiceNow’s Customer Service Management product gives brands a more complete view of the customer. Through this integration, Adobe and ServiceNow joint customers can:
Establish Context to Drive Brand Loyalty
Enterprises are often challenged by internal silos of data pertaining to interactions with their customers. This integration creates data workflows that remove those barriers and connect marketing and customer service organizations.
Gain Deeper Insights for Personalization
Great experiences are built on an understanding of the customer’s journey. Customers can streamline work between teams by aggregating data from the “evaluate” and “purchase” touchpoints and capturing service interactions to build rich, real-time customer profiles.
Improve Customer Experiences
A seamless customer experience allows organizations to anticipate needs before they arise. With ServiceNow, organizations will understand which products or services a customer owns and uses, allowing them to drive toward greater personalization.
The Metadata Object Description Schema (MODS) is an XML-based bibliographic description schema developed by the United States Library of Congress’ Network Development and Standards Office. MODS was designed as a compromise between the complexity of the MARC format used by libraries and the extreme simplicity of Dublin Core metadata.
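As a rough illustration of what a MODS description looks like in practice, the sketch below builds a minimal record with Python's standard library. The specific element names (`titleInfo`, `title`, `name`, `namePart`, `typeOfResource`) and the `http://www.loc.gov/mods/v3` namespace follow the Library of Congress MODS schema; the record content itself is invented for the example.

```python
import xml.etree.ElementTree as ET

# Namespace defined by the Library of Congress for MODS version 3.x.
MODS_NS = "http://www.loc.gov/mods/v3"
ET.register_namespace("mods", MODS_NS)


def make_mods_record(title: str, author: str) -> ET.Element:
    """Build a minimal MODS record: a title, a personal name, and a resource type."""
    mods = ET.Element(f"{{{MODS_NS}}}mods")
    title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
    ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = title
    name = ET.SubElement(mods, f"{{{MODS_NS}}}name", type="personal")
    ET.SubElement(name, f"{{{MODS_NS}}}namePart").text = author
    ET.SubElement(mods, f"{{{MODS_NS}}}typeOfResource").text = "text"
    return mods


record = make_mods_record("A Sample Report", "Doe, Jane")
print(ET.tostring(record, encoding="unicode"))
```

Even this tiny record shows the design compromise the schema makes: richer structure than a flat Dublin Core element set, but far fewer fields and conventions than a full MARC record.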
Smart content is structured content that also includes the semantic meaning of the information. The semantics can take a variety of forms, such as RDFa attributes applied to structured elements, or even semantically named elements. However it is expressed, the meaning is available for both humans and computers to process.
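To make the RDFa point concrete, here is a small sketch: an HTML fragment whose `vocab`/`typeof`/`property` attributes (W3C RDFa conventions) carry machine-readable meaning alongside the visible text, plus a minimal stdlib extractor that pulls those semantics back out. The schema.org vocabulary and the fragment content are assumptions for the example, not anything prescribed by the post.

```python
from html.parser import HTMLParser

# Hypothetical "smart content": the same text a human reads, annotated
# with RDFa attributes a machine can process.
FRAGMENT = """
<div vocab="https://schema.org/" typeof="Article">
  <h1 property="headline">Why Smart Content Matters</h1>
  <span property="author">A. Writer</span>
</div>
"""


class RDFaPropertyExtractor(HTMLParser):
    """Collect the text content of every element that declares an RDFa `property`."""

    def __init__(self):
        super().__init__()
        self._current = None  # property name of the element just opened, if any
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        self._current = dict(attrs).get("property")

    def handle_data(self, data):
        if self._current and data.strip():
            self.properties[self._current] = data.strip()
            self._current = None


parser = RDFaPropertyExtractor()
parser.feed(FRAGMENT)
print(parser.properties)  # e.g. {'headline': 'Why Smart Content Matters', 'author': 'A. Writer'}
```

The same fragment renders normally in a browser, which is the point: one source serves both the human reader and any downstream system that needs the semantics.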
Global businesses recognize the need to address localization and translation in tandem with content creation and content management, but they are often stymied, even overwhelmed, by how to achieve this. Our research points to the emergence of what we define as the Global Content Value Chain, a strategy for meeting these challenges. Organizations embracing this strategy are leading the development of much-needed best practices.
‘Document computing’ was a term used to cover a collection of technologies that emerged as computer (or electronic) publishing became a growing industry in the late 1980s and early 1990s. The idea was to differentiate the creation, management, and delivery of unstructured data from the traditional, and still prevalent, structured-data orientation of computing applications. It was one of the keynote topics at the first Documation conference in 1994. Also see the more current, largely overlapping term ‘content technology’.
Language afterthought syndrome refers to the pattern of treating language requirements as secondary considerations within content strategies and solutions. Global companies leak money and opportunity by failing to address language issues as integral to end-to-end solutions rather than as ancillary post-processes. Examples abound. Source and translated content that should be reusable, but isn’t. Retrofitting content to meet regulatory requirements in different regions. Lost revenue because product and marketing content isn’t ready at launch time. Desktop publishing costs incurred solely due to reformatting in multiple languages. The list goes on and on.
“Content globalization” is the process of ensuring content is available in multiple languages. It is a subset of “internationalization,” which also includes “localization,” the process of ensuring that a product’s user interface and language use are tailored to a local region.
In the early days of information technology (1950s–1970s), computers were mostly mainframes, and the information was mostly structured data managed by information systems based on hierarchical and then relational databases.
With the emergence of descriptive markup languages such as SGML and XML, and data formats such as JSON, that add structure to other forms of unstructured data or content such as text and streaming data, as well as NoSQL and graph databases, linked data, and knowledge graph technologies, the distinction between structured and unstructured data or content is less relevant. Modern data lakes store structured, semi-structured, and unstructured data.