Now more commonly known as machine translation (MT), the term refers to the use of software to translate text or speech from one language to another. In the 1980s and 1990s MT software was rule-based, but in the 2000s statistical analysis and the re-emergence of neural networks, together with more advanced machine learning techniques, proved far more successful.
Natural language processing (NLP) is a subfield of linguistics, computer science, information engineering, and machine learning or artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.
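As a minimal, purely illustrative sketch of the kind of processing NLP systems automate, the snippet below tokenizes raw text and counts word frequencies using only the Python standard library; real NLP pipelines add parsing, tagging, and statistical or neural models on top of simple steps like these. The sample text and function names are invented for the example.

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Split raw text into lowercase word tokens (a simplified first step
    in most NLP pipelines)."""
    return re.findall(r"[a-z']+", text.lower())

sample = (
    "Natural language processing helps computers process and "
    "analyze large amounts of natural language data."
)

tokens = tokenize(sample)
frequencies = Counter(tokens)

# The most common tokens hint at what the text is about.
print(frequencies.most_common(3))
# e.g. [('natural', 2), ('language', 2), ('processing', 1)]
```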
Categorization is the process in which ideas and objects are recognized, differentiated, and understood. Categorization implies that objects are grouped into categories, usually for some specific purpose. Ideally, a category illuminates a relationship between the subjects and objects of knowledge. Categorization is fundamental in language, prediction, inference, decision making and in all kinds of environmental interaction.
Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality.
The term “information technology” (IT) likely first appeared in a Harvard Business Review article in November 1958. It refers to the use of computing technology to create, process, manage, store, retrieve, share, and distribute information (data).
Early use of the term did not discriminate between types of information or data, but in practice, until the late 1970s, business applications were limited to structured data that could be managed by information systems based on hierarchical and then relational databases. Also see content technology and unstructured data.
Microsoft’s puzzling announcements
Jean-Louis Gassée has some good questions, including… “Is Microsoft trying to implement a 21st century version of its old Embrace and Extend maneuver — on Google’s devices and collaboration software this time?” Read More

Integrated innovation and the rise of complexity
While Stephen O’Grady’s post isn’t addressing Microsoft’s recent Surface announcements as Gassée was, it is an interesting companion, or standalone, read. Read More
Google and ambient computing
‘Ambient computing’ has mostly been associated with the Internet of Things (IoT). There are many types of computing things. But the most important, from a world domination perspective, are those at the center of (still human) experience and decision-making; that is, mobile (and still desktop) computing devices. The biggest challenge is the interoperability required at scale. This is fundamental to computing platform growth and competitive strategies (see Gassée’s question above). Ben Thompson analyzes Google’s recent announcements in this context. Read More
Attention marketers: in 12 weeks, the CCPA will be the national data privacy standard. Here’s why
Now it’s 10 weeks. Tim Walters makes a good case for his prediction even though other states are working on their own legislation, and Nevada has a policy already in effect. Read More
Also…
The Gilbane Advisor curates content for content, computing, and digital experience professionals. We focus on strategic technologies. We publish more or less twice a month except for August and December.
The Internet of Things refers to uniquely identifiable objects (things) and their virtual representations in an Internet-like structure. The term Internet of Things was first used by Kevin Ashton in 1999. The concept of the Internet of Things first became popular through the Auto-ID Center and related market analyst publications. Radio-frequency identification is often seen as a prerequisite for the Internet of Things.
Artificial Intelligence (AI) is a branch of computer science that studies intelligent systems (i.e. software, computers, robots, etc.). Alternatively, it may be defined as “the study and design of intelligent agents”, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as “the science and engineering of making intelligent machines”.
For practical purposes, it is useful to distinguish between two different interpretations of ‘AI’:
- Artificial General Intelligence (AGI), where McCarthy’s “intelligent machines” have at least human level capabilities. AGI does not currently exist, and when, or if, it will is controversial.
- Machine learning (ML) is a discipline of AI that includes basic pattern recognition, deep learning, and other techniques to train machines to identify and categorize large numbers of entities and data points. Basic machine learning has been used since the 1980s and is responsible for many capabilities such as recommendation engines, spam detection, image recognition, and language translation. Advances in neural networks, computing performance, and storage, combined with vast data sets, created a whole new level of sophisticated machine learning applications in the 2000s. This type of “AI” is ready for prime time. Yet, as powerful as these new techniques are, they are not AGI, i.e., “human level” intelligence.
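As a toy illustration of the basic machine learning described above (not any particular product’s implementation), the sketch below trains a naive Bayes spam detector with scikit-learn; the messages and labels are invented for the example.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training data: messages paired with spam (1) / not spam (0) labels.
messages = [
    "win a free prize now",
    "limited offer, claim your reward",
    "meeting moved to 3pm tomorrow",
    "please review the attached report",
]
labels = [1, 1, 0, 0]

# Turn text into word-count feature vectors, then fit a naive Bayes classifier.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)
classifier = MultinomialNB().fit(features, labels)

# Classify a new, unseen message.
new_message = ["claim your free reward now"]
prediction = classifier.predict(vectorizer.transform(new_message))
print("spam" if prediction[0] == 1 else "not spam")  # expected: spam
```

The same train-then-predict pattern, scaled up to far larger data sets and more powerful models, underlies the recommendation, image recognition, and translation capabilities mentioned above.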