Machine translation, sometimes referred to by the abbreviation MT (not to be confused with computer-aided translation, machine-aided human translation, or interactive translation), is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one natural language to another.
In the 1980s and 1990s MT software was rule-based, but since the 2000s statistical methods and the re-emergence of neural networks, along with more advanced machine learning techniques, have proved far more successful.
Artificial General Intelligence (AGI) refers to a type of artificial intelligence that is at least as capable as human intelligence. As powerful as machine learning has become with neural networks and deep learning techniques, it does not approach AGI, and when, or even whether, it will remains controversial.
Natural language processing (NLP) is a subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.
Artificial Intelligence (AI) is a branch of computer science that studies intelligent systems (i.e. software, computers, robots, etc.). Alternatively, it may be defined as “the study and design of intelligent agents”, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as “the science and engineering of making intelligent machines”.
For practical purposes, it is useful to distinguish between two different interpretations of ‘AI’:
- Artificial General Intelligence (AGI), where McCarthy’s “intelligent machines” have at least human-level capabilities. AGI does not currently exist, and when, or even whether, it will remains controversial.
- Machine learning (ML) is a discipline of AI that includes basic pattern recognition, deep learning, and other techniques to train machines to identify and categorize large numbers of entities and data points. Basic machine learning has been used since the 1980s and is responsible for many capabilities such as recommendation engines, spam detection, image recognition, and language translation. In the 2000s, advances in neural networks, computing performance, and storage, combined with vast data sets, created a whole new level of sophisticated machine learning applications. This type of “AI” is ready for prime time. Yet, as powerful as these new techniques are, they are not AGI, i.e., not “human level”.
Deep learning is a sub-field of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and non-linear transformations. Deep learning is part of a broader family of machine learning methods based on learning representations of data. An observation (e.g., an image) can be represented in many ways, such as a vector of intensity values per pixel, or in a more abstract way as a set of edges, regions of particular shape, etc.
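The “deep graph with multiple processing layers” can be sketched in a few lines. The toy forward pass below (all names, sizes, and weights are our own illustrative choices, not from the text) stacks linear maps with a non-linearity between them, turning a vector of pixel intensities into a smaller, more abstract representation:

```python
import random

random.seed(0)

def relu(x):
    # Non-linear transformation applied after each linear layer
    return [max(0.0, v) for v in x]

def linear(x, weights):
    # weights: one weight vector per output unit, each len(x) long
    return [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in weights]

# A toy "observation": 16 pixel intensity values
observation = [random.random() for _ in range(16)]

# Two processing layers, each a linear map followed by a non-linearity
layer1 = [[random.gauss(0, 0.1) for _ in range(16)] for _ in range(8)]
layer2 = [[random.gauss(0, 0.1) for _ in range(8)] for _ in range(4)]

representation = observation
for layer in (layer1, layer2):
    representation = relu(linear(representation, layer))

print(len(representation))  # 4 values: a compressed, more abstract representation
```

Real deep learning systems also *learn* the layer weights from data; this sketch only shows the layered linear/non-linear structure the definition describes.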
“Machine learning is the science (and art) of programming computers so they can learn from data,” writes Aurélien Géron in Hands-on Machine Learning with Scikit-Learn and TensorFlow.
ML is a subset of the larger field of artificial intelligence (AI) that “focuses on teaching computers how to learn without the need to be programmed for specific tasks,” note Sujit Pal and Antonio Gulli in Deep Learning with Keras. “In fact, the key idea behind ML is that it is possible to create algorithms that learn from and make predictions on data.”
Examples of ML include the spam filter that flags messages in your email, the recommendation engine Netflix uses to suggest content you might like, and the self-driving cars being developed by Google and other companies.
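As a toy illustration of the spam-filter idea, here is a hypothetical bag-of-words scorer (our own sketch, not how any real email provider works; production filters learn their word weights from large labeled data sets rather than using a hand-written list):

```python
# Hand-picked "spammy" words -- in a real ML filter these weights
# would be learned from labeled spam/not-spam examples
SPAM_WORDS = {"winner", "free", "prize", "urgent"}

def spam_score(message: str) -> float:
    # Fraction of words in the message that appear in the spam list
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!") in SPAM_WORDS for w in words) / len(words)

def is_spam(message: str, threshold: float = 0.25) -> bool:
    return spam_score(message) >= threshold

print(is_spam("URGENT! You are a winner, claim your FREE prize"))  # True
print(is_spam("Meeting moved to 3pm tomorrow"))                    # False
```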
Less than half of Google searches now result in a click
Some mixed news about Google for publishers and advertisers in the past few weeks. We’ll start with the not-so-good news about clicks, especially, as it turns out, for mobile, detailed by Rand Fishkin…
We’ve passed a milestone in Google’s evolution from search engine to walled-garden. In June of 2019, for the first time, a majority of all browser-based searches on Google resulted in zero-clicks. Read More
Google moves to prioritize original reporting in search
Nieman Lab’s Laura Hazard Owen provides some context on the most welcome change Google’s Richard Gingras announced last week. Of course there are questions about what ‘original reporting’ means, for Google and for all of us, and we’ll have to see how well Google navigates this fuzziness. Read More
Designing multi-purpose content
The efficiency and effectiveness of multi-purpose content strategies are well known, as are many techniques for successful implementation. What is not so easy is justifying, assembling, and educating a multi-discipline content team. Content strategist Michael Andrews provides a clear explanation and example, accessible to non-specialists, of the benefits of multi-purpose content designed by a cross-functional team. Read More
Face recognition, bad people and bad data
We worry about face recognition just as we worried about databases – we worry what happens if they contain bad data and we worry what bad people might do with them … we worry what happens if it [facial recognition] doesn’t work and we worry what happens if it does work.
This comparison turns out to be a familiar and fertile foundation for exploring what can go wrong and what we should do about it.
The article also serves as a subtle and still necessary reminder that face recognition and other machine learning applications are vastly more limited than what ‘AI’ conjures up for many. Read More
A few more links in this issue as we catch up from our August vacation.
The Gilbane Advisor curates content for content, computing, and digital experience professionals. We focus on strategic technologies. We publish more or less twice a month except for August and December.