Category: Content management & strategy
This category includes editorial and news blog posts related to content management and content strategy. For older, long-form reports, papers, and research on these topics, see our Resources page.
‘Document computing’ was a term used to cover the collection of technologies that emerged as computer, or electronic, publishing became a growing industry in the late 1980s and early 1990s. The idea was to differentiate the creation, management, and delivery of unstructured data from the traditional, and still prevalent, structured data orientation of computing applications. It was one of the keynote topics at the first Documation conference in 1994. Also see the more current, largely overlapping term ‘content technology’.
Language afterthought syndrome refers to the pattern of treating language requirements as secondary considerations within content strategies and solutions. Global companies leak money and opportunity by failing to address language issues as integral to end-to-end solutions rather than as ancillary post-processes. Examples abound. Source and translated content that should be reusable, but isn’t. Retrofitting content to meet regulatory requirements in different regions. Lost revenue because product and marketing content isn’t ready at launch time. Desktop publishing costs that are incurred solely due to reformatting in multiple languages. The list goes on and on.
“Content globalization” is the process of ensuring content is available in multiple languages. It is a subset of “internationalization,” which also includes “localization,” the process of ensuring the user interface and the language used in a product are tailored to a local region.
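For the localization piece specifically, a common pattern is to externalize user-facing strings and look them up per locale at runtime. Here is a minimal sketch using Python’s standard gettext module; the catalog name, directory layout, and message are illustrative assumptions, and a compiled German catalog is presumed to exist (the fallback keeps the example runnable without one):

```python
import gettext

# Load a compiled German catalog ("messages.mo") expected under
# ./locale/de/LC_MESSAGES/. With fallback=True the original English
# string is returned when no catalog is found, so the sketch still runs.
de = gettext.translation("messages", localedir="locale",
                         languages=["de"], fallback=True)
_ = de.gettext

# The source string stays in English; what the user sees depends on the locale.
print(_("Add to cart"))
```

The same lookup can be swapped per request or per user session, which is what keeps locale-specific text out of the application logic.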
In the early days of information technology (1950s–1970s), computers were mostly mainframes and the information was mostly structured data managed by information systems based on hierarchical and then relational databases.
With the emergence of descriptive markup languages and formats such as SGML, XML, and JSON that add structure to other forms of unstructured data or content, such as text and streaming data, as well as NoSQL and graph databases, linked data, and knowledge graph technologies, the distinction between structured and unstructured data or content is less relevant. Modern data lakes store structured, semi-structured, and unstructured data.
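To make the distinction concrete, here is a minimal sketch in Python; the element names, field names, and sample sentence are invented for illustration, not drawn from any particular system:

```python
import json
import xml.etree.ElementTree as ET

# The same announcement, first as unstructured narrative text...
raw = "Acme Corp. released WidgetPro 2.0 on 2024-03-15 in Boston."

# ...and then as semi-structured content, where descriptive markup names the parts.
marked_up = """
<release>
  <company>Acme Corp.</company>
  <product version="2.0">WidgetPro</product>
  <date>2024-03-15</date>
  <location>Boston</location>
</release>
"""

root = ET.fromstring(marked_up)

# Once the structure is explicit, a program can extract fields reliably,
# much as it would read columns from a relational table.
record = {
    "company": root.findtext("company"),
    "product": root.findtext("product"),
    "version": root.find("product").get("version"),
    "date": root.findtext("date"),
    "location": root.findtext("location"),
}

# The same record serializes naturally to JSON for a document store or data lake.
print(json.dumps(record, indent=2))
```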
“Content technology” is a form of information technology that uses computing technology to create, retrieve, process, manage, store, share, and distribute unstructured data, such as narrative text and audiovisual media, and typically incorporates or integrates with systems that manage structured data. The term emerged as early web content management systems proliferated, but it includes any technology that processes some form of unstructured data, such as authoring, publishing, natural language processing, and search and retrieval.
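As a rough illustration of the search and retrieval side, the sketch below builds a tiny inverted index over narrative text and combines it with a structured metadata filter; the documents and fields are made up for the example:

```python
from collections import defaultdict

# A handful of "documents": unstructured body text plus structured metadata.
documents = {
    1: {"title": "Launch announcement", "lang": "en",
        "body": "WidgetPro 2.0 ships with a redesigned editor and faster search."},
    2: {"title": "Produktankündigung", "lang": "de",
        "body": "WidgetPro 2.0 erscheint mit einem neuen Editor."},
    3: {"title": "Release notes", "lang": "en",
        "body": "Search and retrieval improvements, plus editor bug fixes."},
}

# Build an inverted index: term -> set of document ids containing that term.
index = defaultdict(set)
for doc_id, doc in documents.items():
    for term in doc["body"].lower().replace(",", " ").replace(".", " ").split():
        index[term].add(doc_id)

def search(term, lang=None):
    """Return (id, title) pairs whose body contains term,
    optionally filtered on the structured lang field."""
    hits = index.get(term.lower(), set())
    return [(d, documents[d]["title"]) for d in sorted(hits)
            if lang is None or documents[d]["lang"] == lang]

print(search("editor"))             # full-text match over unstructured text
print(search("editor", lang="en"))  # the same match plus a structured filter
```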
In September 2004, a group of thirty content management experts from around the world formed CM Professionals (CM Pros), a non-profit international community of content management professionals whose purpose is to further best practices based on the shared experiences of experts and peers. CM Pros offers a members-only mailing list, a collaborative website, discussion forums, issue-oriented group blogs, knowledge wikis, syndicated web services, a job board, a professional directory, and a calendar of face-to-face meeting opportunities. CM Pros also aims to raise awareness of content management as an essential discipline that builds value, both financial and human, for companies and organizations.
The Gilbane Report on Open Information & Document Systems (ISSN 1067-8719) was a periodical launched in March 1993 by Publishing Technology Management Inc., which was founded by Frank Gilbane, its president, in June 1987.
The Gilbane Report was sold in December 1994 to CAP Ventures Inc., which published it until May 1999, when it was bought by Bluebill Advisors, Inc., a consulting and advisory firm founded by Frank Gilbane. Bluebill Advisors continued to publish the Gilbane Report until March 2005. The Gilbane Report issues from 1993 to 2005 remain available in HTML or PDF (or both) on the Gilbane Advisor website, which is owned by Bluebill Advisors Inc.
Below is a link to the first issue of the Gilbane Report. There is also a PDF version.
Machine translation, sometimes referred to by the abbreviation MT (not to be confused with computer-aided translation, machine-aided human translation or interactive translation) is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one natural language to another.
In the 1980s and 1990s MT software was rule-based, but since the 2000s statistical approaches and, more recently, the re-emergence of neural networks and more advanced machine learning techniques have proved far more successful.
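For a sense of what this looks like in practice today, here is a minimal sketch using the open-source Hugging Face transformers library; the specific pretrained model named below is an assumption for illustration, and its weights are downloaded on first use:

```python
from transformers import pipeline

# Load a pretrained neural machine translation model (English to French here).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

result = translator("Machine translation converts text from one natural language to another.")
print(result[0]["translation_text"])
```

Where rule-based systems of the 1980s and 1990s depended on hand-written grammars and bilingual dictionaries, a neural model like the one above is instead trained on large volumes of parallel text.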