Curated for content, computing, data, information, and digital experience professionals

Category: Content management & strategy

This category includes editorial and news blog posts related to content management and content strategy. For older, long-form reports, papers, and research on these topics, see our Resources page.

Content management is a broad topic that refers to the management of unstructured or semi-structured content, whether as a standalone system or as a component of another system. Varieties of content management systems (CMS) include web content management (WCM), enterprise content management (ECM), component content management (CCM), and digital asset management (DAM) systems. Content management systems are also now widely marketed as Digital Experience Management (DEM, DXM, or DXP) and Customer Experience Management (CEM or CXM) systems or platforms, and may include additional marketing technology functions.

Content strategy topics include information architecture, content and information models, content globalization, and localization.

For some historical perspective see:

https://gilbane.com/gilbane-report-vol-8-num-8-what-is-content-management/

FatWire Unveils Integration with Google Analytics

FatWire Software announced that its FatWire Content Server fully integrates with Google Analytics to help customers measure and track the success of their FatWire websites. FatWire customers can download the integration module free of charge from FatWire to automatically generate Google tags and feed data directly into Google, for out-of-the-box monitoring and reporting. The integration will enable FatWire customers to use Google’s free analytics package to measure and optimize online content and campaigns, providing a better understanding of website effectiveness, including traffic, usage patterns and visitor behavior. The FatWire Analytics module, which is natively integrated with Content Server, provides granular tracking of content assets for specific customer segments and across dynamic, targeted web pages, enabling optimization of content on a granular level. Google Analytics provides complementary capabilities for tracking and measuring website and user behavior at a site and page level. With this packaged integration, customers can now combine FatWire’s platform with Google Analytics, thus providing a combination of page, behavior and granular content analytics. http://www.fatwire.com
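The announcement does not spell out what the generated tags look like, but the standard Google Analytics page tag of that era was the asynchronous ga.js snippet, which a CMS module of this kind would typically inject into each rendered page. Below is a minimal, hypothetical sketch of that pattern in Python; the account ID, function name, and injection logic are illustrative assumptions, not part of FatWire's actual module.

# Hypothetical sketch: render the (circa-2010) asynchronous Google Analytics
# page tag for injection into a CMS page template. The account ID and helper
# function are illustrative only, not FatWire's implementation.
GA_SNIPPET = """<script type="text/javascript">
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', '{account_id}']);
  _gaq.push(['_trackPageview']);
  (function() {{
    var ga = document.createElement('script');
    ga.type = 'text/javascript'; ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www')
             + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(ga, s);
  }})();
</script>"""

def render_analytics_tag(account_id: str) -> str:
    """Return the tracking snippet for a given Google Analytics web property ID."""
    return GA_SNIPPET.format(account_id=account_id)

# Example: inject the tag into a rendered page just before </head>.
page_html = "<html><head><title>Demo</title></head><body>...</body></html>"
tagged = page_html.replace("</head>", render_analytics_tag("UA-XXXXXX-1") + "</head>")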

eZ Systems Releases Extensions for eZ Publish 4.2

eZ Systems announced the immediate release of extensions for eZ Publish 4.2. eZ Publish Style Editor is a brand-new extension providing a tool to change the overall look and feel of an eZ Publish-based website. Webmasters are able to switch to a ‘visual edit’ mode while managing sites featuring eZ Flow or the eZ Publish Website Interface. The visual edit mode provides a user interface for managing the look and feel of a site’s pages by editing the Cascading Style Sheets (CSS) and images used by the site. This extension is immediately available as certified software, and supported as an add-on to eZ Publish Premium. The eZ XML export extension helps you manage the content you provide to third-party content platforms. It gives you control over which content is exported, the XML export format with support for XML Schema (XSD) and XSLT post-processing, and a configurable set of delivery options to industrialize and automate content delivery. This extension is also immediately available as certified software, and supported as an add-on to eZ Publish Premium. Teamroom is a collaboration solution based on eZ Publish. Make your team members’ lives easier with simple management of teamroom members, information sharing, an open collaboration workflow, event and team document management, and confidentiality levels for your teamrooms. Teamroom is packaged as an extension, and available as a beta in the contribution area for eZ Publish extensions. Teamroom will have an official release in conjunction with the 4.3 release of eZ Publish. http://ez.no/
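To make the XML export description a bit more concrete, here is a minimal sketch of the kind of post-processing pipeline it alludes to: validate an export against an XML Schema, then apply an XSLT stylesheet to produce a partner-specific feed. The example uses Python's lxml library, and the file names, schema, and stylesheet are hypothetical; they are not eZ Systems' actual export artifacts.

# Hypothetical sketch of the kind of XSLT post-processing step the export
# extension describes: validate exported content against an XML Schema (XSD),
# then transform it into a partner-specific format. File names and stylesheet
# are illustrative only. Requires the third-party lxml package.
from lxml import etree

schema = etree.XMLSchema(etree.parse("export-format.xsd"))
doc = etree.parse("exported-articles.xml")
schema.assertValid(doc)                      # fail fast if the export is malformed

transform = etree.XSLT(etree.parse("partner-feed.xsl"))
result = transform(doc)                      # XSLT post-processing for delivery
result.write_output("partner-feed.xml")      # write the transformed feed to disk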

Focusing on the “Content” in Content Management

The growth in web-centric communication has created a major focus on content management, web content management, component content management, and so on. This interest is driven primarily by increasing demand for rich, interactive, accessible information products delivered via the Web. The focus is not misplaced, but it may be missing part of the point. To be specific, in our focus on the “management” part of CM, we may be missing the first word in the phrase… “Content.”

It’s true that applying increasing amounts of computer and brain power to the processes of preparing and delivering the kind of information demanded by today’s users can improve those products. But it does so within limits set by, and at costs generated by, the content “raw material” it gets from content providers. In many cases, the content available to web product development processes is so structurally crude that it requires major clean-up and enhancement before it can adequately participate in the classification and delivery process. As the focus on elegant Web delivery increases, barring real changes in the condition of this raw content, the cost of enhancement is likely to grow proportionally, straining the involved organizations’ ability to support it.

The answer may lie in an increased focus on the processes and tools used to create the original content. We know that the original creator of most content knows the most about how it should be logically structured, and the most about the best way to classify it for search and retrieval. The trouble is that, in most cases, we provide no means of capturing what the creator knows about his or her intellectual product. Moreover, because many creators have never been able to fully populate the metadata needed to classify and deliver their content, in past eras professional catalogers were employed to complete this final step. In today’s world, however, we have virtually eliminated the cataloger, assuming instead that the prodigious computer power available to us could develop the needed classification and structure from the content itself. That approach can and does work, but it will require better raw material if it is to achieve the level of effectiveness needed to keep the Web from becoming a virtual haystack in which finding the needle is more good luck than good measure. Native XML editors in place of today’s visually oriented word processors; spreadsheets, graphics, and other media forms with content-specific XML under them; increased use of native XML databases; and a host of rich, content-centric resources are all part of this content evolution.
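A small, hypothetical example illustrates the point. When the creator captures structure and subject metadata in content-specific XML at authoring time, downstream classification becomes a simple lookup rather than an inference exercise; the element names below are invented for illustration.

# Hypothetical illustration: with creator-supplied structure and subject
# metadata in the source, classification fields can be read directly from
# the content instead of being inferred from visual formatting.
import xml.etree.ElementTree as ET

ARTICLE = """
<article>
  <title>Single-sourcing product documentation</title>
  <metadata>
    <author>J. Author</author>
    <subject scheme="LCSH">Technical writing</subject>
    <keywords>content reuse, DITA, localization</keywords>
  </metadata>
  <body><p>Body text goes here.</p></body>
</article>
"""

doc = ET.fromstring(ARTICLE)
record = {
    "title": doc.findtext("title"),
    "author": doc.findtext("metadata/author"),
    "subject": doc.findtext("metadata/subject"),
    "keywords": [k.strip() for k in doc.findtext("metadata/keywords").split(",")],
}
print(record)   # ready to feed a search index without a clean-up pass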

Most important, however, may be promulgating the realization across society that creating content includes more than just making it look good on the screen, and that the creator shares in that responsibility. This won’t be an easy or quick process, more likely requiring generations than years, but if we don’t begin soon, we may end up with a Web 3.0 or 4.0 or 5.0 trying to deliver content that isn’t even yet 1.0.

China-Based CSOFT Launches TermWiki

CSOFT International Ltd., a provider of multilingual localization, testing, and outsourced software development for the global market, announced the upcoming launch of TermWiki, a multilingual, collaborative and Wiki-based terminology management system for the localization industry. TermWiki comes with enhanced Google-like fuzzy match search capabilities, automated notification features, detailed accessibility and user profile management, a structured dispute resolution infrastructure, image and video support, as well as customizable forms embedded in the system to facilitate compliance with relevant ISO standards for the presentation of terminological data categories. TermWiki is scheduled for public release this spring. http://www.csoftintl.com
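For readers unfamiliar with the term, a fuzzy match returns entries that approximately match a query rather than requiring an exact hit. The sketch below shows the general idea using Python's difflib; the term list and similarity threshold are invented for illustration, and nothing here describes TermWiki's actual matching algorithm.

# Hypothetical sketch of fuzzy term lookup over a small term base, in the
# spirit of the "fuzzy match" search described above. Terms and threshold
# are invented for the example.
import difflib

term_base = ["content management system", "component content management",
             "digital asset management", "terminology management"]

def fuzzy_lookup(query: str, cutoff: float = 0.6):
    """Return term-base entries that approximately match the query."""
    return difflib.get_close_matches(query.lower(), term_base, n=3, cutoff=cutoff)

print(fuzzy_lookup("content managment"))   # a typo still finds the intended term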

In the end, good search may depend on good source.

As the world of search becomes more and more sophisticated (and that process has been underway for decades), we may be approaching the limits of software’s ability to find what a searcher wants. If that is true, and I suspect that it is, we will finally be forced to follow the trail of crumbs up the content life cycle… to its source.

Indeed, most of the challenges inherent in today’s search strategies and products appear to grow from the fact that while we continually increase our demands for intelligence on the back end, we have done little if anything to address the chaos that exists on the front end. You name it: different word processing formats, spreadsheets, HTML-tagged text, delimited database files, and so on are all dumped into what we think of as a coherent, easily searchable body of intellectual property. It isn’t, and it isn’t likely to become so any time soon unless we address the source.

Having spent some time in the library automation world, I can remember the sometimes bitter controversies over having just two major foundations for cataloging source material (Dewey and LC; add a third if you include the NICEM A/V scheme). Had we known back then that the process of finding intellectual property would devolve into the chaos we now confront, with every search engine and database product essentially rolling its own approach to rational search, we would have considered ourselves blessed. In the end, it seems, we must begin to see the source material, its physical formats, its logical organization, and its inclusion of rational cataloging and taxonomy elements as the conceptual raw material for its own location.

As long as the word processing world teaches that anyone creating anything can make it look however they think it should, in a dozen different ways, while ignoring any semblance of finding-aid inclusion, we probably won’t have a truly workable ability to find what we want without reworking the content or wading through a haystack of misses to find our desired hits.

Unfortunately, the solutions of yesteryear, including after-creation cataloging by a professional cataloger, probably won’t work now either, for cost if no other reason. We will be forced to approach the creators of valuable content, asking them for a minimum of preparation for searching their product, and providing the necessary software tools to make that possible.

We can’t act too soon because, despite the growth of software elegance and raw computer power, this situation will likely get worse as the sheer volume of valuable content grows.

Regards, Barry

Read more: Enterprise Search Practice Blog: https://gilbane.com/search_blog/

Alfresco and RightScale Partner

Alfresco Software, Inc. and RightScale, Inc. announced the availability of a joint solution aimed at speeding deployment and automating the scaling of Alfresco software in the cloud. The solution utilizes RightScale’s software-as-a-service (SaaS) cloud management platform, which is aimed at enabling organizations to deploy Alfresco open source ECM quickly and create a fully configured, fault-tolerant, and load-balanced Alfresco cluster using RightScale ServerTemplates. http://www.alfresco.com/, http://www.rightscale.com/

EPiServer Adds E-commerce to Content Management Platform

EPiServer announced the addition of a complete, integrated e-commerce platform to its existing web content management and community platform. Through a strategic partnership with Mediachase, the EPiServer platform will provide commerce, content, and community, and is aimed at enabling companies in the retail and B2B vertical markets to deliver a compelling online experience. The Mediachase .NET e-Commerce Framework (ECF) provides an agile, best-practices architecture with user experience controls; loosely coupled subsystems such as catalog management, order management, customer management, merchandising, and promotions; and a fully exposed .NET developer framework (API). Combined with the extensible EPiServer content management system (CMS) and EPiServer Community platform, .NET web developers can build and deploy online stores with multi-branding, multi-language, and multi-channel capabilities. Marketing tasks are streamlined through the interface and through new capabilities that correlate visitor feedback and experience with store operation and order status at every step of the process. The EPiServer and Mediachase platforms are available now. The EPiServer integrated e-commerce platform is expected in the first half of 2010. http://www.episerver.com

