The emerging pitfalls of Artificial Intelligence

Artificial intelligence is set to rival the human mind, just as the engine did the horse. (José Hernández-Orallo)

In the summer of 1956, a group of mathematicians and computer scientists took over the top floor of the building that housed the mathematics department of Dartmouth College. For about eight weeks, they imagined the possibilities of a new field of research. John McCarthy, then a young professor, had coined the term “artificial intelligence” when he wrote his proposal for the workshop, which he said would explore the hypothesis that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” The researchers at that legendary meeting sketched out, in broad strokes, AI as we know it today.

At its core, AI is about building machines that can think and act intelligently; it includes tools ranging from Google’s search algorithms to the systems that make self-driving cars possible. There are indeed plenty of AI applications that make our everyday lives more convenient and efficient. It is the AI applications that play a critical role in safety, however, that Elon Musk, Stephen Hawking, and others had in mind when they voiced their reservations about the technology. While most current applications impact humankind positively, any powerful tool can be wielded for harmful purposes when it falls into the wrong hands. Musk wrote: “The pace of progress in artificial intelligence is incredibly fast. … It is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most.” Even Microsoft co-founder Bill Gates believes there is reason to be cautious, but that the good can outweigh the bad if managed properly. Since recent developments suggest that super-intelligent machines could arrive much sooner than initially thought, now is the time to determine what dangers artificial intelligence poses.

Managing AI marks the dawn of a new age of information technology management. It involves communicating, leading, coordinating, and controlling an ever-evolving frontier of computational advancements that reference human intelligence in addressing ever more complex decision-making problems. It means making decisions about three related, interdependent facets of AI – autonomy, learning, and inscrutability – in the ongoing quest to push the frontiers of AI’s performance and scope. Scholars have shown how these frontiers have shifted with time, and information systems research is still learning what management at the frontiers of AI demands and what role it can play in exploring and shaping this future.

Job automation is generally viewed as the most immediate concern. It’s no longer a matter of whether AI will replace certain types of jobs, but to what degree. In many industries – particularly but not exclusively those whose workers perform predictable and repetitive tasks – disruption is well underway. While job loss is currently the most pressing issue related to AI disruption, it’s merely one among many potential risks. In a February 2018 paper titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” 26 researchers from 14 institutions (academic, civil, and industry) enumerated a host of other dangers that could cause serious harm – or, at minimum, sow minor chaos – in less than five years. Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern. Along with education, work has long been a driver of social mobility. However, when it’s a certain kind of work – the predictable, repetitive kind that’s prone to AI takeover – research has shown that those who find themselves out in the cold are far less likely to seek or receive retraining than those in higher-level positions who have more money.

AI poses a grave threat by way of autonomous weapons. AI programmed to do something dangerous – as is the case with weapons programmed to kill – is one of the most direct ways the technology can cause harm, and it is plausible that the nuclear arms race will be replaced by a global autonomous-weapons race. Russia’s president Vladimir Putin said: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” A more imminent concern is the danger of autonomous weapons in the hands of an individual or government that does not value human life. Once deployed, such weapons will likely be difficult to dismantle or combat.

Social media, through its AI-powered algorithms, is very effective at targeted marketing. These algorithms know who we are and what we like, and are incredibly good at surmising what we think. Investigations are still underway into Cambridge Analytica and others associated with the firm, which used data from 50 million Facebook users to try to sway the outcome of the 2016 U.S. presidential election and the U.K.’s Brexit referendum; if the accusations are correct, the episode illustrates AI’s power for social manipulation. By identifying individuals through algorithms and personal data, those who control these systems can target people with whatever information they like, in whatever format each person will find most convincing – fact or fiction.

While we have some understanding of the role of AI in shaping work, there is a vast terrain of ignorance regarding its impact on the environment. Given that ordinary citizens encounter AI in the sanitised form of a software tool or as a futuristic idea, very few recognise that AI’s ever-growing presence requires phenomenal volumes of computing hardware, built from a wide range of minerals and materials, all of which are produced by effectively gouging the earth. As Crawford recounts, mines in Nevada in the US, Inner Mongolia and the islands of Indonesia are just a few of the far-flung spots around the globe that provide the large variety of minerals and rare earths crucial to the infrastructure of the data economy. The end result everywhere, though, is the same: “it is a landscape of ruin” (Crawford). Put simply, the convenience of storing our family pictures in the cloud is built on a substrate of material extraction from the earth, with devastating impacts for the producer regions and their peoples. We may also add that the much-talked-about dominance in AI of individual nations is crucially dependent on first colonising the periodic table.

One estimate puts the energy used in training a single language model as equivalent to that consumed by five cars over their lifetimes. While the environmental consequences of mining for the digital economy are seldom recognised, the industry has had even greater success in avoiding scrutiny of its energy consumption. Notwithstanding efforts to make tools and processes energy efficient, “the carbon footprint of the world’s computational infrastructure has matched that of the aviation industry at its height, and it is increasing at a faster rate” (Crawford). At the development end, in an AI arms race of sorts, energy-guzzling deep learning tools are getting bigger by the day. One exercise in training a robot to solve the Rubik’s cube consumed an eye-watering 2.8 gigawatt-hours of energy. Yet corporations persist in using the excuse of trade secrecy to ensure that the true scale of their impact on the environment remains uncalculated.
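
To see what 2.8 gigawatt-hours means in everyday terms, a rough back-of-envelope conversion helps. The sketch below assumes an average household consumption of about 10,700 kWh per year (the approximate US figure); that number is an assumption chosen for illustration, not something drawn from the reports cited above.

```python
# Back-of-envelope: what does 2.8 GWh of training energy amount to?
# Assumption: an average household uses about 10,700 kWh per year
# (a rough US figure, used here purely for illustration).

TRAINING_ENERGY_GWH = 2.8        # reported cost of the Rubik's cube experiment
KWH_PER_GWH = 1_000_000          # 1 GWh = 1,000,000 kWh
HOUSEHOLD_KWH_PER_YEAR = 10_700  # assumed average annual household use

training_kwh = TRAINING_ENERGY_GWH * KWH_PER_GWH
household_years = training_kwh / HOUSEHOLD_KWH_PER_YEAR

print(f"{training_kwh:,.0f} kWh is roughly {household_years:,.0f} household-years of electricity")
# -> 2,800,000 kWh is roughly 262 household-years
```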

Invasion of privacy and social grading are another major pitfall of AI. It is now possible to track and analyse an individual’s every move online, as well as when they are going about their daily business. Cameras are nearly everywhere, and facial recognition algorithms know who you are. This is precisely the type of information that will power China’s social credit system, which is expected to give every one of its 1.4 billion citizens a personal score based on how they behave – whether they jaywalk, whether they smoke in non-smoking areas, and how much time they spend playing video games. When Big Brother is watching you and then making decisions based on that intel, it’s not only an invasion of privacy; it can quickly turn into social oppression.
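
As a purely hypothetical illustration – the actual scoring rules are not public, and every behaviour and weight below is invented – a behaviour-based score might be computed along these lines:

```python
# Hypothetical sketch only: the real system's rules are not public,
# and these behaviours and weights are invented for illustration.

PENALTIES = {
    "jaywalking": -5,                   # per observed incident
    "smoking_in_nonsmoking_area": -10,  # per observed incident
    "excess_gaming_hours": -1,          # per hour over an assumed allowance
}

def score(base: int, behaviours: dict) -> int:
    """Apply the invented penalty for each observed behaviour."""
    total = base
    for behaviour, count in behaviours.items():
        total += PENALTIES.get(behaviour, 0) * count
    return total

print(score(1000, {"jaywalking": 2, "excess_gaming_hours": 3}))  # -> 987
```

The point of the sketch is not the arithmetic but the architecture: once surveillance data feeds a single score, every camera and log file becomes an input to decisions about a person’s life.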

Stock market instability is another undesirable consequence of AI use. On Wall Street, algorithmic trading could be responsible for the next major financial crisis. A sell-off of millions of shares in the airline sector could scare humans into selling their shares in the hotel industry, which in turn could prompt sell-offs in other travel-related companies, then logistics companies, food-supply companies, and so on – a chain reaction of sorts.
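
A toy simulation makes the cascade mechanism concrete. Everything here – the sectors, the linkages, the sell trigger, the 10% shock – is invented for the sketch; it illustrates the feedback loop, and is in no way a model of real markets.

```python
# Toy cascade: how an automated sell rule in one sector can propagate
# through linked sectors. All numbers and linkages are invented.

prices = {"airlines": 100.0, "hotels": 100.0, "logistics": 100.0, "food": 100.0}
links = {"airlines": ["hotels"], "hotels": ["logistics"], "logistics": ["food"]}

SELL_TRIGGER = 95.0  # a trading bot sells once a price falls below this
SHOCK = 0.90         # each triggered sell-off knocks 10% off the price

def cascade(start: str) -> None:
    """Apply an initial shock, then propagate sell-offs along the links."""
    prices[start] *= SHOCK
    queue = [start]
    while queue:
        hit = queue.pop(0)
        for neighbour in links.get(hit, []):
            if prices[hit] < SELL_TRIGGER and prices[neighbour] >= SELL_TRIGGER:
                prices[neighbour] *= SHOCK  # linked sector's bots sell too
                queue.append(neighbour)

cascade("airlines")
print(prices)  # one 10% shock to airlines leaves every linked sector down 10%
```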

National Security Institute Executive Director Jamil Jaffer has said that the AIs people are interacting with right now are ultimately the result of algorithms, no matter how human they feel. “These generative AI capabilities that generate art and writing and the like, that feel very human-like, ultimately, are the result of a series of human created algorithms that interact to create this content,” Jaffer said. There have already been cases where AIs have gotten simple information wrong – for example, placing Egypt in both Asia and Africa – and have been tricked into giving nonsensical advice with carefully worded questions. Beyond that, AI behaviour can become unsettling: Bing users reported Microsoft’s AI becoming hostile and threatening people. Sentient AIs, especially ones that could threaten humans, sound like something out of science fiction, but Jaffer said that while there is a possibility we could see the creation of a general AI, we still have a long way to go. As for the threats, Jaffer said regulation isn’t the answer, but there is a need to carefully consider the risks and use them to inform how AI is developed.

A novel aspect of the volume is its extensive treatment of the role of human attitudes towards automation. With direct implications for the deployment of autonomous driving, which requires a human operator ready to take charge, it marshals the results of studies concluding that humans are unable to maintain vigilance for long periods of time. Equally important is the observation that when humans are accustomed to systems that work reliably most of the time (but not necessarily all of the time), they tend to ‘switch off’, and “diffidence, complacency and overtrust set in” (Zerilli). The dangers of this human tendency apply not only to autonomous driving but to other consequential contexts as well, including sentencing by judges. Here, the authors offer an important take-home message: “What we need to do is develop a set of norms and practices, both in industry and working across multiple nations, to figure out what our values and concepts here are.”

Snapchat has joined the AI trend, announcing the launch of My AI and following other tech companies that have recently debuted artificial intelligence tools. With the proliferation of AI tech, some warn that this innovation also comes with risks. Snapchat’s AI will use OpenAI’s ChatGPT tool, customized for the company. Microsoft is also using OpenAI’s tech to power an AI search tool, while Google has announced its own AI search. Unlike Microsoft and Google, which hope to use AI to provide better search results, Snapchat’s AI is designed to act as an artificial friend or chat buddy. But the company warned on its blog that AIs can be tricked into giving false or misleading information, and cautioned users to be careful.

While AI has not yet penetrated Indian society to the extent it has in the Western world, it would be complacent to ignore the inherent risks. Even in these early days, the trends are very worrying. With the rapid deployment of Aadhaar while ignoring its many intrinsic problems, the Pandora’s box has already been opened. Our laws to protect the rights of citizens are weak, and their implementation is even weaker. In such a context, trends such as the rampant deployment of facial recognition tools are a curtain-raiser for the future that awaits us. The alleged promise of AI is also making inroads into other contexts with serious implications: the pursuit of efficiency has too often led to the sacrifice of fairness, while introducing opacity into decision-making processes. The best way to respond to the concerns created by emerging knowledge or early-stage technologies is for scientists from publicly funded institutions to find common cause with the wider public about the best way to regulate.

Bhushan Lal Razdan, formerly of the Indian Revenue Service, retired as Director General of Income Tax (Investigation), Chandigarh.

Disclaimer: The views and opinions expressed in this article are the personal opinions of the author. The facts, analysis, assumptions and perspective appearing in the article do not reflect the views of GK.
