Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality.
Computing and data is a broad category. Our coverage of computing is largely limited to software, and we are mostly focused on unstructured data, semi-structured data, or mixed data that includes structured data.
Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.
Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.
The term “information technology” (IT) likely first appeared in a Harvard Business Review article in November 1958, and refers to the use of computing technology to create, process, manage, store, retrieve, share, and distribute information (data).
Early use of the term did not discriminate between types of information or data, but in practice, until the late 1970s, business applications were limited to structured data that could be managed by information systems based on hierarchical and then relational databases. Also see content technology and unstructured data.
Microsoft puzzling announcements
Jean-Louis Gassée has some good questions, including… “Is Microsoft trying to implement a 21st century version of its old Embrace and Extend maneuver — on Google’s devices and collaboration software this time?” Read More
Integrated innovation and the rise of complexity
While Stephen O’Grady’s post isn’t addressing Microsoft’s recent Surface announcements as Gassée was, it is an interesting companion, or a standalone read. Read More
Google and ambient computing
‘Ambient computing’ has mostly been associated with the Internet of Things (IoT). There are many types of computing things, but the most important, from a world domination perspective, are those at the center of (still human) experience and decision-making; that is, mobile (and still desktop) computing devices. The biggest challenge is the interoperability required at scale. This is fundamental to computing platform growth and competitive strategies (see Gassée’s question above). Ben Thompson analyzes Google’s recent announcements in this context. Read More
Attention marketers: in 12 weeks, the CCPA will be the national data privacy standard. Here’s why
Now it’s 10 weeks. Tim Walters makes a good case for his prediction even though other states are working on their own legislation, and Nevada has a policy already in effect. Read More
Also…
- Worthy thoughts on tech competition policy (PDF)… Mozilla on competition and interoperability via Mozilla blog
- For obsessive browser history buffs… Netscape Navigator via Quartz
- California will have an open Internet and so will lots of other states, despite the F.C.C.’s decision. via New York Times
- From the chief… Martech is now a $121.5 billion market worldwide via chiefmartec.com
The Internet of Things refers to uniquely identifiable objects (things) and their virtual representations in an Internet-like structure. The term Internet of Things was first used by Kevin Ashton in 1999. The concept of the Internet of Things first became popular through the Auto-ID Center and related market analyst publications. Radio-frequency identification is often seen as a prerequisite for the Internet of Things.
Artificial Intelligence (AI) is a branch of computer science that studies intelligent systems (i.e. software, computers, robots, etc.). Alternatively, it may be defined as “the study and design of intelligent agents”, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as “the science and engineering of making intelligent machines”.
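To make the “intelligent agent” framing concrete, here is a minimal, hypothetical Python sketch: an agent perceives a toy environment and picks the action it expects to bring it closest to its goal. The thermostat scenario, class name, and decision rule are invented for illustration only and are not drawn from any particular AI system.

```python
# Hypothetical illustration of the "intelligent agent" definition above:
# the agent perceives its environment and takes the action it expects to
# maximize its chances of success. The scenario is invented for illustration.

class ThermostatAgent:
    """Keeps a room near a target temperature by choosing the best action."""

    def __init__(self, target_temp=21.0):
        self.target_temp = target_temp
        self.actions = {"heat": +1.0, "cool": -1.0, "idle": 0.0}

    def perceive(self, environment):
        # Perception: read the relevant state of the environment.
        return environment["temperature"]

    def act(self, environment):
        temp = self.perceive(environment)
        # Pick the action whose predicted result lands closest to the target.
        best = min(self.actions,
                   key=lambda a: abs(temp + self.actions[a] - self.target_temp))
        environment["temperature"] += self.actions[best]
        return best

env = {"temperature": 18.0}
agent = ThermostatAgent()
for _ in range(4):
    print(agent.act(env), env["temperature"])
```

The point is only the shape of the loop — perceive, evaluate candidate actions, act — not the (deliberately trivial) decision rule.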
For practical purposes, it is useful to distinguish between two different interpretations of ‘AI’:
- Artificial General Intelligence (AGI), where McCarthy’s “intelligent machines” have at least human-level capabilities. AGI does not currently exist, and when, or whether, it ever will is controversial.
- Machine learning (ML) is a discipline of AI that includes basic pattern recognition, deep learning, and other techniques to train machines to identify and categorize large numbers of entities and data points. Basic machine learning has been used since the 1980s and is responsible for many capabilities such as recommendation engines, spam detection, image recognition, and language translation. Advances in neural networks, computing performance, and storage, combined with vast data sets, created a whole new level of sophisticated machine learning applications in the 2000s. This type of “AI” is ready for prime time. Yet, as powerful as these new techniques are, they are not AGI, i.e., not “human level”.
Deep learning is a sub-field of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and non-linear transformations. Deep learning is part of a broader family of machine learning methods based on learning representations of data. An observation (e.g., an image) can be represented in many ways, such as a vector of intensity values per pixel, or in a more abstract way as a set of edges, regions of particular shape, etc.
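As a rough, framework-free sketch of what “multiple processing layers, composed of multiple linear and non-linear transformations” looks like in code, the NumPy snippet below pushes an image, flattened to a vector of pixel intensities, through two layers. All sizes and the random weights are arbitrary placeholders; a real network learns them from data.

```python
import numpy as np

# Minimal sketch of stacked linear + non-linear transformations.
# Layer sizes and random weights are placeholders for illustration.

rng = np.random.default_rng(0)

image = rng.random((8, 8))        # toy 8x8 grayscale image
x = image.reshape(-1)             # representation 1: vector of pixel intensities (64,)

W1, b1 = rng.standard_normal((32, 64)) * 0.1, np.zeros(32)
W2, b2 = rng.standard_normal((10, 32)) * 0.1, np.zeros(10)

def relu(v):
    # Non-linear transformation applied element-wise.
    return np.maximum(v, 0.0)

h = relu(W1 @ x + b1)             # representation 2: 32 lower-level features (e.g. edges)
y = relu(W2 @ h + b2)             # representation 3: 10 higher-level abstractions

print(x.shape, h.shape, y.shape)  # (64,) (32,) (10,)
```

Each layer is a linear transformation (matrix multiply plus bias) followed by a non-linearity, and each successive output is a more abstract representation of the original pixels.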
Machine learning (ML) is a discipline of AI that includes basic pattern recognition, deep learning, and other techniques to train machines to identify and categorize large numbers of entities and data points. Basic machine learning has been used since the 1980s and is responsible for many capabilities such as recommendation engines, spam detection, image recognition, and natural language processing applications such as language translation (machine translation). Advances in neural networks, computing performance, and storage, combined with vast data sets, created a whole new level of sophisticated machine learning applications in the 2000s. This type of “AI” is ready for prime time. Yet, as powerful as these new techniques are, they are not AGI, i.e., not “human level”.
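For a concrete (and deliberately tiny) taste of the pattern recognition described above, here is a hypothetical spam-detection sketch using scikit-learn’s CountVectorizer and MultinomialNB; the training messages and labels are invented, and any real system would be trained on vastly more data.

```python
# Hypothetical, deliberately tiny spam-detection sketch: the model learns
# word patterns from labeled examples, then categorizes new messages.
# Training data is invented; real systems use far larger data sets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",
    "limited offer, claim your free gift",
    "meeting moved to 3pm tomorrow",
    "please review the attached report",
]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)   # bag-of-words counts

model = MultinomialNB()
model.fit(features, labels)

new_messages = ["claim your prize", "report for tomorrow's meeting"]
print(model.predict(vectorizer.transform(new_messages)))
# Likely output with this toy data: ['spam' 'ham']
```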
Less than half of Google searches now result in a click
Some mixed news about Google for publishers and advertisers in the past few weeks. We’ll start with the not-so-good news about clicks, especially, as it turns out, for mobile, detailed by Rand Fishkin…
We’ve passed a milestone in Google’s evolution from search engine to walled-garden. In June of 2019, for the first time, a majority of all browser-based searches on Google resulted in zero-clicks. Read More
Google moves to prioritize original reporting in search
Nieman Lab’s Laura Hazard Owen provides some context on the most welcome change Google’s Richard Gingras announced last week. Of course there are questions around what ‘original reporting’ means, for Google and all of us, and we’ll have to see how well Google navigates this fuzziness. Read More
Designing multi-purpose content
The efficiency and effectiveness of multi-purpose content strategies are well known, as are many techniques for successful implementation. What is not so easy is justifying, assembling, and educating a multi-discipline content team. Content strategist Michael Andrews provides a clear explanation and example, accessible to non-specialists, of the benefits of multi-purpose content designed by a cross-functional team. Read More
Face recognition, bad people and bad data
Benedict Evans…
We worry about face recognition just as we worried about databases – we worry what happens if they contain bad data and we worry what bad people might do with them … we worry what happens if it [facial recognition] doesn’t work and we worry what happens if it does work.
This comparison turns out to be a familiar and fertile foundation for exploring what can go wrong and what we should do about it.
The article also serves as a subtle and still necessary reminder that face recognition and other machine learning applications are vastly more limited than what ‘AI’ conjures up for many. Read More
Also…
A few more links in this issue as we catch up from our August vacation.
- It is limited… Should we still be selling responsive web design? via Browser London
- The need for personal control… Getting past broken cookie notices via Doc Searls Weblog
- For open web scholars… Linked research on the decentralised web via Sarven Capadisli
- For (not only) librarians… Creating library linked data with Wikibase via OCLC
- AMP still controversial… Google is tightening its grip on your website via Owen Williams
- Good & bad: changes to nofollow… Google’s robots changes, the web & the law via Joost de Valk
- Good for your thumbs at least… Bottom navigation pattern on mobile web pages: a better alternative? via Smashing Magazine
- Data portability for all… Apple joins Data Transfer Project. Club also includes Google, Microsoft, Twitter, and Facebook. via 9to5mac
The Gilbane Advisor curates content for content management, computing, and digital experience professionals. We focus on strategic technologies. We publish more or less twice a month except for August and December.