Loving Life TV

GIVING AI DIRECT CONTROL OVER ANYTHING IS A BAD IDEA – HERE’S HOW IT COULD DO US REAL HARM

    Nat Quinn
    Keymaster

    Tue 22 August 2023:

    The release of the advanced chatbot ChatGPT in 2022 got everyone talking about artificial intelligence (AI). Its sophisticated capabilities amplified concerns about AI becoming so advanced that soon we would not be able to control it. This even led some experts and industry leaders to warn that the technology could lead to human extinction.

    Other commentators, though, were not convinced. Noam Chomsky, a professor of linguistics, dismissed ChatGPT as “hi-tech plagiarism”.

    For years, I was relaxed about the prospect of AI’s impact on human existence and our environment. That’s because I always thought of it as a guide or adviser to humans. But the prospect of AIs taking decisions – exerting executive control – is another matter. And it’s one that is now being seriously entertained.

    One of the key reasons we shouldn’t let AI have executive power is that it entirely lacks emotion, which is crucial for decision-making. Without emotion, empathy and a moral compass, you have created the perfect psychopath. The resulting system may be highly intelligent, but it will lack the human emotional core that enables it to measure the potentially devastating emotional consequences of an otherwise rational decision.

    When AI takes executive control

    Importantly, we shouldn’t only think of AI as an existential threat if we were to put it in charge of nuclear arsenals. There is essentially no limit to the number of positions of control from which it could exert unimaginable damage.

    Consider, for example, how AI can already identify and organise the information required to build your own conservatory. Current iterations of the technology can guide you effectively through each step of the build and prevent many beginner’s mistakes. But in future, an AI might act as project manager and coordinate the build by selecting contractors and paying them directly from your budget.

    AI is already being used in pretty much all domains of information processing and data analysis – from modelling weather patterns to controlling driverless vehicles to helping with medical diagnoses. But this is where problems start – when we let AI systems take the critical step up from the role of adviser to that of executive manager.

    Instead of just suggesting remedies for a company’s accounts, what if an AI were given direct control, with the ability to implement procedures for recovering debts, make bank transfers, and maximise profits – with no limits on how to do this? Or imagine an AI system not only providing a diagnosis based on X-rays, but also being given the power to directly prescribe treatments or medication.

    You might start feeling uneasy about such scenarios – I certainly would. The reason might be your intuition that these machines do not really have “souls”. They are just programs designed to digest huge amounts of information in order to simplify complex data into much simpler patterns, allowing humans to make decisions with more confidence. They do not – and cannot – have emotions, which are intimately linked to biological senses and instincts.

    Emotions and morals

    Emotional intelligence is the ability to manage our emotions to overcome stress, empathise, and communicate effectively. This arguably matters more in the context of decision-making than intelligence alone, because the best decision is not always the most rational one.

    It’s likely that intelligence, the ability to reason and operate logically, can be embedded into AI-powered systems so they can make rational decisions. But imagine asking a powerful AI with executive capabilities to resolve the climate crisis. The first thing it might be inspired to do is drastically reduce the human population.

    This deduction does not need much explaining. We humans are, almost by definition, the source of pollution in every possible form. Axe humanity and climate change would be resolved. It’s not the choice that human decision-makers would come to, one hopes, but an AI would find its own solutions – impenetrable and unencumbered by a human aversion to causing harm. And if it had executive power, there might not be anything to stop it from proceeding.

    [Image] Giving an AI the ability to take executive decisions in air traffic control might be a mistake. Gorodenkoff / Shutterstock

    Sabotage scenarios

    How about sabotaging sensors and monitors controlling food farms? This might happen gradually at first, pushing controls just past a tipping point so that no human notices the crops are condemned. Under certain scenarios, this could quickly lead to famine.

    Alternatively, how about shutting down air traffic control globally, or simply crashing all planes flying at any one time? Some 22,000 planes are normally in the air simultaneously, which adds up to a potential death toll of several million people.

    If you think that we are far from being in that situation, think again. AIs already drive cars and fly military aircraft, autonomously.

    Alternatively, how about shutting down access to bank accounts across vast regions of the world, triggering civil unrest everywhere at once? Or shutting off computer-controlled heating systems in the middle of winter, or air-conditioning systems at the peak of summer heat?

    In short, an AI system does not have to be put in charge of nuclear weapons to represent a serious threat to humanity. But while we’re on this topic, if an AI system was powerful and intelligent enough, it could find a way of faking an attack on a country with nuclear weapons, triggering a human-initiated retaliation.

    Could AI kill large numbers of humans? The answer has to be yes, in theory. But this depends in large part on humans deciding to give it executive control. I can’t really think of anything more terrifying than an AI that can make decisions and has the power to implement them.

    Author:

    Guillaume Thierry

    Professor of Cognitive Neuroscience, Bangor University

    I am passionate about the human mind and how it makes sense of the world around us. My research is devoted to understanding how we form concepts, consciously or unconsciously, how we manipulate them, through language or nonverbally, and how we learn, remember, forget, and imagine. In my applied work, I strive to inspire individuals to attain a higher state of awareness of the world and of themselves. I share real stories and construct fictional ones to entice the imagination of others and invite everyone along on the path to higher levels of insight, understanding, and joy.

    Specifically, I use experimental psychology and electroencephalography to study language comprehension in the auditory and visual modalities, and mainly the processing of meaning by the human brain. I have investigated a range of themes, such as verbal/non-verbal dissociations, visual object recognition, colour perception, functional cerebral asymmetry, language-emotion interactions, language development, developmental dyslexia, and bilingualism. Since 2005, I have received funding from the BBSRC, the ESRC, the AHRC, the European Research Council, and the British Academy to investigate the integration of meaning in infants and adults at lexical, syntactic, and conceptual levels, using behavioural measurements, event-related brain potentials, eye-tracking, and functional neuroimaging, looking at differences between sensory modalities, different languages in bilinguals, and coding systems (verbal/nonverbal).

    Today I focus mainly on linguistic relativity and the philosophical question of mental freedom.

    The Conversation

     

    source:GIVING AI DIRECT CONTROL OVER ANYTHING IS A BAD IDEA – HERE’S HOW IT COULD DO US REAL HARM (independentpress.cc)
