Myths & Concerns About AI Developments

There are many concerns and myths around the progress of Artificial Intelligence (AI). Let’s take a look at some of them.

Artificial neural networks mimic the human brain

The basic construct of the neural network was inspired by the neurons in the human brain, and the similarities between neural networks and the brain end there!

A neural network functions by propagating the error between the predicted value and the actual value backwards through the network to update the model parameters at the various layers. Geoff Hinton, one of the fathers of Deep Learning and an author of the backpropagation algorithm, said in June 2017 that the brain has no such backpropagation process. According to him, if we want to mimic the human brain, we may have to look beyond backpropagation and artificial neural networks, and thereby beyond the approaches currently being taken in the Deep Learning space; we may have to come up with a completely new approach.
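For concreteness, here is what that backpropagation process looks like in a minimal NumPy sketch: a toy one-hidden-layer network trained on synthetic data, with the error pushed backwards to update the weights of each layer. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                          # input data
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # actual values (toy target)

W1 = rng.normal(size=(3, 4)) * 0.1                     # hidden-layer weights
W2 = rng.normal(size=(4, 1)) * 0.1                     # output-layer weights
lr = 0.1

for step in range(1000):
    # forward pass: compute the predicted value
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    # error between predicted and actual value
    err = y_hat - y
    # backward pass: propagate the error to each layer's parameters
    grad_W2 = h.T @ err / len(X)
    grad_h = (err @ W2.T) * (1 - h ** 2)               # tanh derivative
    grad_W1 = X.T @ grad_h / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```

Nothing resembling this error-propagation loop has been observed in biological neurons, which is exactly Hinton's point.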

Hence, whatever we have achieved so far is considered weak AI: systems that solve narrowly defined problems in specific areas, whereas human intelligence is generalized and works across many problem areas. These solutions lack the consciousness, empathy and emotional quotient exhibited by human beings.

Machines learn by themselves without being explicitly programmed

There is a paradigm shift between the way traditional IT systems are designed and implemented and the way Machine Learning (ML) systems are built. In traditional IT applications, we provide input data and business logic to get the desired output. In ML, we provide input data and the desired output, and we get back the business rules that map input to output.

Hence, ML algorithms essentially develop logic to form a new representation of the input data so that the input can be mapped to the desired output. These representations take the form of a mathematical equation, distance metrics in geometric space, a probability distribution, or a set of tensors (the weights/parameters in the various layers of a neural network).
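A minimal sketch of this paradigm, assuming scikit-learn is available: we supply inputs and desired outputs, and the learned "rule" comes back as the coefficients of a mathematical equation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Traditional IT: a programmer writes f so that output = f(input).
# ML: we supply inputs and desired outputs; the algorithm derives f.
X = np.array([[1.0], [2.0], [3.0], [4.0]])    # input data
y = np.array([3.0, 5.0, 7.0, 9.0])            # desired output (y = 2x + 1)

model = LinearRegression().fit(X, y)
# The learned representation is an equation: coefficient and intercept.
print(model.coef_, model.intercept_)          # ~[2.0] and ~1.0
```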

If the input data changes drastically due to factors internal or external to the organization, we will have to tweak the model's features, parameters and architecture, and retrain the algorithm. Hence, machines by themselves cannot learn anything, nor can they adapt to changing environments. Humans have to provide the data and the desired output, design the model, and supervise the training. ML models go through administrative processes very similar to those of traditional IT applications.

While it is possible to automate some of these retraining processes, machines cannot decide on their own when to retrain themselves, what changes the model needs, and so on. Online training, typically used in recommendation engines, is a partial exception: learning is continuous as new reviews keep coming in. But most Deep Learning systems are trained in batch mode, not online, as illustrated below.
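As a sketch of that distinction, scikit-learn's SGDRegressor exposes a partial_fit method that updates a model incrementally as new data arrives; the data here is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(random_state=0)
rng = np.random.default_rng(0)

# Online training: update the model incrementally as new data streams in,
# the way fresh reviews keep arriving at a recommendation engine.
for _ in range(100):                          # each loop = a new mini-batch
    X_new = rng.normal(size=(10, 3))
    y_new = X_new @ np.array([1.0, -2.0, 0.5])
    model.partial_fit(X_new, y_new)           # no full retraining needed

# Batch training, by contrast, calls model.fit(X_all, y_all) on the
# complete dataset every time a human decides a retrain is due.
```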

Machines will soon outsmart humans

There is a growing perception that machines will soon get smarter than humans. In a way, machines surpassed human capability in repetitive, labor-intensive and programmable tasks long ago. How long does it take a human to compute the square root of 2, compared to a calculator? How many people are required to move a load of bricks to the 10th floor, compared to a crane? In many such tasks, machines far exceed humans in productivity, efficiency and cost-effectiveness.

But when it comes to perception, logical reasoning and cognitive tasks, such as recognizing objects, abstraction, inference and deduction, awareness of the surrounding environment, and emotional bonding with fellow human beings, machines still fall significantly short. A lot of progress has been made in perceptual tasks such as autonomous driving, and in logical reasoning through memory networks, but there is still a long way to go to match human intelligence in many areas.

To build machines that can match or outsmart human intelligence, we first need an objective understanding of how the human brain works: why humans behave the way they do, why they respond differently to the same stimulus at different points in time, and so on. As of today, we are nowhere near that understanding.

Human intelligence, consisting of perception, reasoning, emotions, consciousness, empathy, learning, problem solving and so on, is derived from its biological substrate. If machines were to acquire the properties of human beings, they would also need to be built from materials akin to ours, which means biological materials. This is not feasible with currently available skills, knowledge and technology. Developments in biotechnology and nanotechnology could lead to such a possibility in the future, making science-fiction cyborgs a reality, but that is far away, at least for now.

Humans perceive the environment with their senses, namely touch (skin), sound (ear), vision (eye), smell (nose) and taste (tongue). They process these signals through the mind (rational) and the heart (emotional), and respond accordingly. Today's technology can capture only vision, sound and touch; smell and taste remain beyond its reach. To that extent, machines are constrained in replicating complete human intelligence.

AI/Automation will take away jobs and create social unrest

Neither automation (machines replacing human labor) nor AI is new to our age. Automation is as old as human civilization, and AI is more than six decades old. Technology-driven automation has been deeply embedded in the evolution of human civilization since the Stone Age; along the way, many labor-intensive tasks have been taken over by various tools, processes, machines and technologies.

More than half a century ago, then US President Lyndon B. Johnson established a national commission to examine the impact of technology on the economy and employment, declaring that automation did not have to destroy jobs but “can be the ally of our prosperity if we will just look ahead.”

Every time automation replaced a set of jobs, it also created many new ones. People moved on to these new jobs, and there was enough time in between for them to adapt. In the current digital age, however, innovation is happening faster than people can reskill and adapt, creating a gap between the demand for and supply of new skills.

The creation of labeled data for training current Deep Learning algorithms can itself create a massive number of jobs, as the work is highly labor-intensive and the quantity of labeled data needed for training is humongous. As more and more repetitive tasks get automated, humans will have to acquire higher intellectual skills to match industry needs, starting with educational qualifications.

The current concept of one spell of education followed by a job until retirement may have to change. People may have to go back to school repeatedly throughout their lives to refresh their skills. Even the nature of employment may shift from permanent, full-time employment to contract work, and the so-called gig economy could become the norm.

Many European countries, along with Japan, Singapore and even China, have a larger aging population relative to their working population. They face a shortage of human labor, and automation is the only way out for them, since the cost of immigrant labor may be higher than the cost of automation.

AI could destroy the world

When fire was harnessed, people worried about getting burnt; then they invented fire extinguishers. Einstein’s theory of relativity (E=mc²) was used to develop the atomic bombs that destroyed Hiroshima and Nagasaki before it was used to build nuclear power plants. When automobiles were invented, people were concerned about road accidents killing people, which was mitigated by seat belts and air bags. When IT systems gained momentum, there were concerns about viruses, cyber-attacks and data security, but solutions for each of these concerns have been put in place, and IT and digital systems have only grown. Similarly, there are concerns that AI will destroy mankind once it surpasses human intelligence and starts behaving as master over its creators.

Elon Musk says AI is probably humanity’s “biggest existential threat” and that future wars could be triggered by AI-powered nations. According to him, a few companies like Google, Facebook and Amazon controlling the whole power of AI could make it unsafe for people, as these companies would know more about us than we know ourselves. They might unintentionally create something that wreaks havoc on humanity. Robots powered by Artificial General Intelligence (strong AI, or human-level intelligence) could go rogue and start killing people on the street.

Stuart Russell, a computer science professor and AI expert, also believes that AI can be destructive. He gives an example: if you order your robot to fetch coffee and you get in its way, it may kill you to meet its goal.

AI critics point out how chatbots created and deployed by Microsoft and Facebook misbehaved by providing answers with racist, misogynistic, and anti-Semitic bias. It should be noted that any learning algorithm will inherit the biases present in its training data, and there are techniques available to mitigate these biases.
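One of the simplest such techniques is reweighting: give under-represented examples proportionally more weight during training so the model does not merely echo the skew in its data. The toy sketch below uses scikit-learn's class_weight option on synthetic, imbalanced labels; fairness-specific methods reweigh across demographic groups in the same spirit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
# Imbalanced synthetic labels: roughly 10% positives.
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 1.3).astype(int)

# "balanced" reweights each class inversely to its frequency, so the
# minority class is not drowned out by the majority during training.
model = LogisticRegression(class_weight="balanced").fit(X, y)
```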

Mark Zuckerberg (Facebook), Larry Page (Google) and Andrew Ng (professor at Stanford) are on the other side of the argument: AI can be made safe for humanity, and it is unwise to peddle doomsday theories.

Finally, all these concerns presuppose that AI actually achieves the goal of surpassing human intelligence. As explained in various sections of this paper, we are nowhere near that state, and by the time we get there, mitigating solutions will also have fallen into place.

Manipulating individual behavior

Social media platforms use AI algorithms to feed users personalized news. These news items can be filtered so as to steer a person's thinking in a particular direction. While ads across all kinds of media influence people to buy specific products, a social media platform can manipulate your view of the world by sending targeted messages until the intended behavior is achieved. Cambridge Analytica has been accused of using Facebook user data to influence voting patterns in favor of Donald Trump in the US elections.

Facebook is already under the scanner of various governments, so it is likely to come up with solutions that prevent such manipulation of user behavior. Regulators, too, learn from such experiences and strengthen the regulatory framework to discourage such practices in the future.

Data security and individual privacy

Ever since the IT revolution started, data security has been top of mind for corporate leaders. Growing data volumes, expanded BPO, distributed systems and larger employee bases kept increasing the risks to data security and privacy. With the public cloud and social media, these concerns have grown exponentially.

Recently, Cambridge Analytica's use of Facebook user data for political campaigns created a huge uproar across the globe. Several questions came up: who owns the data, what organizations can and cannot do with their user data, and what role governments and regulatory bodies should play in ensuring the safety of their citizens' data.

Political parties have always used citizen data for targeted campaigns, and there are businesses that thrive on user data, such as syndicated data providers like AC Nielsen and IRI, credit rating agencies like CRISIL, and market data providers like Bloomberg. In most of these cases, users may not even know that their data is being used for commercial purposes.

In the case of Cambridge Analytica, users had proactively entered their personal data on Facebook and consented to Facebook using it. While there is nothing new in that, the question is whether Facebook was aware that Cambridge Analytica had access to this data and had authorized its use. If not, it is a serious concern, as other entities could also gain access to this data without Facebook's knowledge and use it for anti-social activities.

The European Union (EU) has come up with a regulatory framework to protect its citizens' privacy, called the General Data Protection Regulation (GDPR). Any company doing business in the EU has to comply with it, and other countries could follow with similar laws of their own.

Apple is minimizing the data that flows from consumer devices into its servers, and anonymizing the data that does flow in, so that individual privacy is maintained. Google is working on a federated learning model that does not require individual data to move to centralized servers; the model learns the individual's patterns on the local device. Blockchain technology is distributed by design, so privacy is not compromised. Hence, there is a concerted effort from organizations to address data privacy issues through the technology itself.
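The intuition behind federated learning fits in a few lines of NumPy: each client computes a model update on its own private data, and only the updates, never the raw data, travel to the server to be averaged. This is a toy sketch of federated averaging on a linear model, not a description of Google's actual implementation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=10):
    """Train a linear model on-device; the raw data never leaves the client."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)   # gradient of squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):                          # five devices, each with private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

global_w = np.zeros(2)
for round_ in range(20):                    # federated averaging rounds
    # Only weight vectors are sent to the server, never X or y.
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)

print(global_w)                             # converges towards [2.0, -1.0]
```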

While governments have to be mature enough to enact adequate laws to protect their citizens, organizations will have to take responsibility for ensuring that data security and individual privacy are not compromised, and individuals will need to be careful about putting sensitive personal data in the public domain.

Only PhDs can succeed in AI careers

It is a fact that ML and Deep Learning applications use many complex mathematical and statistical tools: linear algebra, differential calculus, probability theory, Bayesian statistics, optimization theory and so on. Implementing Deep Learning algorithms from the ground up is complex, and as neural network models grow more complex, so does their implementation.

The situation is similar to the early days of computer programming for business applications. Over time, code generators, commercial off-the-shelf tools, function-specific packaged applications like CRM, SCM, HCM and FICO, and later industry-specific applications like IS-Retail and IS-Oil, took away the complexity of programming by providing higher-level abstractions.

Even in statistical analysis, products like SAS, SPSS and Microsoft Excel provide user-friendly interfaces that take away algorithmic and programming complexities so that business users can work with them directly. Similarly, many packages have come up in ML and Deep Learning that take away programming complexity. SciPy, scikit-learn, NLTK, Gensim, Surprise and mlxtend are some of the Python packages that help in ML applications.
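To illustrate how much these packages abstract away, a complete train-and-evaluate workflow in scikit-learn takes only a few lines, using its bundled Iris dataset; the algorithm's internals stay hidden behind a uniform fit/predict interface.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load data, hold out a test set, train, and score: four lines,
# with no knowledge of how a random forest works internally.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))            # accuracy on held-out data
```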

TensorFlow (Google), PyTorch (Facebook), Caffe2 (built on UC Berkeley's Caffe) and the Microsoft Cognitive Toolkit (CNTK) are some of the Deep Learning frameworks that take away algorithmic complexity and enable developers to quickly implement and test models. They are straightforward to deploy in production as well.

TensorFlow, though it started as a Deep Learning library, is now incorporating classical ML algorithms as well, so that it can become a one-stop shop for ML applications. It is gaining the most traction, followed by PyTorch, valued for its simplicity and rich functionality. Keras and Gluon are even higher-level abstractions for Deep Learning: Keras works on top of TensorFlow, while Gluon is a joint initiative from Microsoft and Amazon that currently supports only Apache MXNet but may extend to CNTK in the future.
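As an example of the level of abstraction these libraries offer, a small feed-forward classifier can be defined and compiled in Keras in a handful of lines, with no gradient or backpropagation code at all; the input size and layer widths below are arbitrary.

```python
from tensorflow import keras

# Define a small feed-forward binary classifier; the framework handles
# gradients, backpropagation and the training loop internally.
model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=5)     # training is a single call
```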

With these high-level libraries, most of the mathematical complexity is taken away, so that people with business knowledge and only a little statistics should be able to use them with ease.

Moreover, online courses like Andrew Ng's Machine Learning and Deep Learning series on Coursera teach all the fundamentals needed to develop these algorithms from the ground up. They have made the material simple, self-contained and easy to learn, as long as one has the tenacity. The good news is that all these products are open source, and the online training programs are very affordable.