An executive guide to artificial intelligence

This article aims to present the main concepts in the field of artificial intelligence, draw a picture of the current landscape, and discuss possible future developments.

What is artificial intelligence, or AI?

The artificial intelligence field isn’t new. The term was coined by the mathematics professor John McCarthy in 1955, and the research field emerged in mid-1956 at the Dartmouth College conference. The “founding fathers” of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a program or a machine that, had it been performed by a human being instead, we would say required intelligence to accomplish. In a nutshell, the core of AI consists of programs or machines that can perform or augment human tasks.

We are still a few decades away from building machines with anything like human-level intelligence. In this sense, the term “singularity” has been used to describe the moment when artificial intelligence (AI) gains self-awareness and surpasses human intelligence. The futurist and founder of Singularity University, Ray Kurzweil, predicts that this point could be reached by 2045.

For now, the field has evolved to create systems that can typically demonstrate at least some of the following behaviors: learning, problem-solving, knowledge representation, perception, motion, and some level of creativity.

What are the applications of AI?

Today, artificial intelligence is all around us, and most of us interact with it in some way on a daily basis. It is used to recommend what you should buy; chatbots participate in conversational commerce; and virtual assistants such as Amazon’s Alexa, Google Assistant, and Apple’s Siri help with everyday activities. The list goes on: facial recognition, credit card fraud detection, spam and fake news detection, and driverless cars.

And while AI is already in use at thousands of companies around the world, many opportunities have yet to appear. Entire sectors will need to incorporate AI into their business models, products, and processes. Google’s DeepMind team, for example, used machine learning systems to improve cooling efficiency in its data centers by more than 15%, even after they had been optimized by human experts. JPMorgan Chase has introduced a system for reviewing commercial loan agreements; work that would take credit officers 360,000 hours to complete can now be done in a few seconds. Moreover, artificial intelligence could have a dramatic impact on healthcare, helping radiologists detect tumors in X-rays, aiding researchers in spotting genetic sequences related to diseases, and identifying molecules that could lead to more effective drugs.

The AI Economy

For more than two centuries, technological innovations have boosted economic growth. The most important are those we can classify as general-purpose technologies, a category that includes the steam engine, electricity, and the internal combustion engine. Each one catalyzed waves of innovation and complementary opportunities. Today, artificial intelligence is probably the most promising general-purpose technology on the horizon. According to IDC, the adoption of cognitive systems and AI will drive worldwide revenues from nearly $8 billion in 2016 to more than $47 billion in 2020.

This race is led by tech giants like Google, Amazon, Apple, Microsoft, Facebook, and IBM. But it would be a mistake to think US companies have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from e-commerce to autonomous driving, and China is pursuing a plan to turn AI into a core industry for the country by 2020.

As for employment, although AI will not replace every job, what seems certain is that it will change the nature of work. For example, Amazon recently launched Amazon Go, a cashier-free supermarket in Seattle where customers simply take items from the shelves and walk out. Amazon has more than 100,000 robots in its fulfillment centers and is investing in new types of bots that can automate the remaining manual jobs. Jobs in administration won’t even require robotics, as software gets better at automatically updating systems and flagging the information that matters. But not everyone is pessimistic: for some, AI is a technology that will augment, rather than replace, workers.

Oxford University’s Future of Humanity Institute asked experts to predict AI capabilities over the coming decades. Notable dates included truck drivers being made redundant by 2027, AI surpassing human capabilities in retail by 2031, and AI doing a surgeon’s work by 2053. The experts estimated there was a relatively high chance that AI will beat humans at all tasks within 45 years and automate all human jobs within 120 years.

What is driving the new AI cycle?

The biggest breakthroughs in AI research in recent years have come from machine learning, in particular deep learning. This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power, during which time the use of GPU clusters to train machine learning systems has become more prevalent. Over time, the major tech firms, like Google, have moved to using specialized chips; an example of one of these custom chips is Google’s Tensor Processing Unit (TPU).

Types of AI

At a very high level, artificial intelligence can be split into two broad types: narrow AI and general AI. What we can do today falls within the concept of “narrow AI”, i.e., techniques that are capable of performing specific tasks as well as, or better than, humans can. Representative techniques include Machine Learning, Cognitive Computing, Machine Vision, Natural Language Processing (NLP), and Deep Learning.

On the other hand, general AI is the type of adaptable intellect found in humans: a form of intelligence capable of learning how to carry out vastly different tasks based on its accumulated experience. This is the sort of AI commonly seen in movies, such as Skynet in The Terminator.

What is machine learning?

Machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech. Machine learning is a subset of AI and is generally split into supervised, unsupervised, and reinforcement learning. Supervised learning trains the system on a very large number of labeled examples, for instance images tagged with the objects they contain. In contrast, unsupervised learning algorithms try to identify patterns in data, looking for similarities that can be used to categorize that data. In reinforcement learning, the system attempts to maximize a reward based on its input data, essentially going through a process of trial and error until it arrives at the best possible outcome.
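To make the distinction concrete, here is a minimal sketch in Python contrasting supervised and unsupervised learning with scikit-learn. The dataset, models, and parameters are illustrative assumptions, not anything prescribed by the article; reinforcement learning is omitted because it also requires an environment and a reward signal.

    # Minimal sketch: supervised vs. unsupervised learning (illustrative choices).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X, y = load_iris(return_X_y=True)

    # Supervised learning: train on labeled examples (X, y), then predict
    # labels for unseen samples and measure accuracy.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("supervised accuracy:", clf.score(X_test, y_test))

    # Unsupervised learning: labels are withheld; the algorithm groups
    # samples purely by similarity in the input data.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("cluster assignments:", clusters[:10])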

What is deep learning?

Deep learning is another subset of machine learning, where neural networks (brain-inspired networks of interconnected layers of algorithms) are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data.
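As a rough illustration, the sketch below defines and trains a small deep neural network with Keras, assuming TensorFlow is installed. The layer sizes, dataset (MNIST handwritten digits), and training settings are assumptions chosen only to keep the example short; the point is simply that several layers are stacked and trained on a large amount of labeled data.

    # Minimal sketch of a deep neural network (illustrative configuration).
    import tensorflow as tf

    # A stack of fully connected layers: each layer transforms the output of
    # the previous one, which is what makes the network "deep".
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28)),                    # 28x28 pixel images
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),   # 10 digit classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Train on the MNIST handwritten-digit dataset (supervised, labeled data).
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    model.fit(x_train / 255.0, y_train, epochs=1, verbose=2)
    print("test accuracy:", model.evaluate(x_test / 255.0, y_test, verbose=0)[1])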

There are various types of neural networks, such as recurrent neural networks, convolutional neural networks, and long short-term memory (LSTM) networks. Another area of AI research is evolutionary computation, which borrows from Darwin’s famous theory of natural selection and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.
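To give a feel for the idea, the toy genetic algorithm below evolves a bit string toward the maximum number of 1s. The fitness function, population size, and mutation rate are arbitrary assumptions made purely for illustration.

    # Toy genetic algorithm: evolve a bit string with as many 1s as possible.
    import random

    POP_SIZE, GENOME_LEN, GENERATIONS, MUTATION_RATE = 30, 20, 50, 0.02

    def fitness(genome):
        # The "optimal solution" here is simply the genome with the most 1s.
        return sum(genome)

    def crossover(a, b):
        # Combination: splice two parent genomes at a random point.
        point = random.randint(1, GENOME_LEN - 1)
        return a[:point] + b[point:]

    def mutate(genome):
        # Random mutation: occasionally flip a bit.
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        # Selection: keep the fitter half of the population as parents.
        parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
        # Next generation: children produced by crossover plus mutation.
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP_SIZE)]

    print("best fitness:", max(fitness(g) for g in population))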

The “hype”, risks and limitations

Although there are risks, limits, and plenty of “hype” around this type of technology, individuals and businesses in every industry can benefit from it. The important thing is not to be dazzled by any particular technique, as if the technology by itself were the answer. It is also important that we, governments, and companies remain engaged in the ongoing dialogue about the effects of these technologies on employment, education, and society.

Thiago Trida, CTO

Originally published on Medium.