Potential danger of Artificial Intelligence as an intelligent (digital) weapon of the future

Artificial intelligence and the possible consequences of its escaping human control are among the most relevant topics of our time. Discussions on this issue have recently intensified significantly in connection with the statements of the famous British theoretical physicist and cosmologist Stephen Hawking about the danger that creating artificial intelligence poses to humanity. In particular, S. Hawking repeatedly noted that he was afraid of systems whose development is not bounded by rigid constraints. He emphasized that while the biological evolution of the human species is very slow, machines progress quite quickly, and over time they will be able to improve themselves. S. Hawking predicted the victory of artificial intelligence over humanity within the next hundred years. According to the scientist, on the one hand, the potential of artificial intelligence could in theory help to cope with disease, poverty, and military conflicts. On the other hand, artificial intelligence could become the last invention in the history of mankind if all of its potential risks and dangers are not carefully calculated.
In addition, E. Musk, the co-founder of the rocket company SpaceX and the electric-vehicle manufacturer Tesla, along with Stephen Hawking and the actor Morgan Freeman, has said that, in his opinion, uncontrolled artificial intelligence "is potentially more dangerous than nuclear weapons".
Here are some of E. Musk's statements that clearly express his thoughts about the danger of further development of artificial intelligence and indicate that the electronic mind may slip beyond human control: "With artificial intelligence, we are summoning a demon. I think we should be extremely careful with artificial intelligence. Perhaps this is one of the most serious threats to the existence of humanity. It would not be superfluous to introduce certain regulatory norms at the national and international levels that would not allow anyone to do stupid things." Earlier, E. Musk also stated that people simply do not understand how quickly artificial intelligence is progressing and what threat it may hold. As E. Musk rightly believes, all developers should unite and recognize that the safety of artificial intelligence is a priority in its development. Microsoft founder Bill Gates has also noted the threat posed by the development of artificial intelligence. In this matter, he shares the view of E. Musk, who suggests constraining machines rather than giving them freedom of development.
However, a study of interviews and conference speeches by well-known developers of artificial intelligence shows that they have no clear answer to the question of how to avoid potential conflicts between artificial intelligence and people.
Scientists, entrepreneurs, and investors involved in artificial intelligence research have so far only warned of its dangers, arguing that the intellectual abilities of machines may surpass those of the people who created them. These concerns relate to the impact AI can have on employment and even, in the long run, on the survival of humanity.
Back in 2012, philosophy professor Huw Price (together with co-author Jaan Tallinn, PhD) published an article entitled "Artificial Intelligence – can we keep it in a box?", in which he called for serious consideration of the possible threats posed by AI. The scientist believes that the first good step in this direction would be "to stop treating artificial intelligence as the stuff of science fiction". He emphasizes the importance of seeing it as a part of reality that we or our descendants may encounter sooner or later. The philosopher believes that once we frame the problem this way, it will be necessary to initiate serious research on how to make the development of intelligent machines as safe as possible from the point of view of the future of humanity. The main reason for the danger of AI is the fundamental difference between artificial intelligence and human intelligence. All values – such as "love, happiness, and even survival" – are important to us because we have a particular evolutionary history; there is no reason to believe that machines will share these values with us. According to Huw Price, machines may simply be indifferent to humans, which would lead to catastrophic consequences for us.
Along with this, as we move ever deeper into the digital world, the risk of AI-based cyber-attacks is also increasing dramatically. AI and machine learning are used not only by IT security professionals, but also by state-sponsored actors, criminal cyber organizations, and individual attackers. Artificial intelligence in the hands of criminals increases the risk and effectiveness of cyber-attacks. Cybercriminals use AI to carry out more complex and targeted cyber-attacks: for example, they generate phishing emails using language models or even mimic another person's voice and video image. All this means that companies and governments have to constantly improve their practices to keep up with the changing technologies of the digital world and to ensure the cybersecurity of states, society, business, and people.
In particular, cybercriminals actively use artificial intelligence as an intelligent weapon. Attackers use automation tools for cyber-attacks, including artificial intelligence technologies, to improve and transform those attacks and to bypass well-known cyber defense tools. For example, according to experts, the criminal group behind the well-known Emotet trojan, whose main distribution channel is phishing spam, could easily use artificial intelligence to scale up its cyber-attacks. Another possible area of malicious use of artificial intelligence is more efficient password guessing or bypassing two-factor authentication.
In other words, it should be recognized that cybercriminals have significantly transformed their methods and techniques of conducting cyber-attacks on the basis of AI. Most modern DDoS attacks are built on the principle of "smart" botnets which, without centralized management, are able to organize themselves and solve complex computational tasks. Social engineering methods have also been significantly improved: attackers have learned to automate mailings through various channels, where the information looks highly plausible to users. As a result, most companies find that traditional information security technologies are becoming ineffective, or entirely obsolete and cost-inefficient.
Using a large number of different data sources on the Darknet to build up the knowledge base of their artificial intelligence, attackers can mount quite effective cyber-attacks, so manufacturers of cyber defense systems are beginning to actively implement artificial intelligence and machine learning technologies in order to detect and predict cyber threats and respond to them in real time. In general, according to Webroot, about 85% of information security professionals note that attackers use artificial intelligence technologies for criminal purposes.
In particular, the following common ways in which cybercriminals use artificial intelligence to strengthen cyber-attacks against the critical infrastructure of the state, business, society, and citizens can be distinguished:
• Social engineering attacks. Cybercriminals use artificial intelligence in their social engineering attacks because it can detect behavioral patterns. This information can then be used to manipulate behavior, gain access to sensitive data, and compromise networks.
• Malware mutation. Artificial intelligence can be used to develop mutating malware that evades detection by changing its structure.
• Data manipulation. Data manipulation can have a devastating impact on organizations if they cannot detect it. Once data has been manipulated, it is extremely difficult to recover the reliable data that feeds legitimate AI systems (a minimal defensive integrity check is sketched after this list).
• Vulnerability detection. AI can be used to continuously scan networks for new vulnerabilities, which hackers can then exploit.
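Defending against the data-manipulation scenario above usually begins with an integrity baseline. The following is a minimal sketch, assuming the defender records SHA-256 fingerprints of training data files in a manifest; the file paths and manifest format are hypothetical, not any specific product's design.

```python
# Minimal sketch: detect tampering with training data by comparing
# SHA-256 fingerprints against a previously recorded baseline manifest.
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: Path, manifest: Path) -> None:
    """Record a baseline digest for every file in the dataset directory."""
    digests = {p.name: fingerprint(p)
               for p in sorted(data_dir.iterdir()) if p.is_file()}
    manifest.write_text(json.dumps(digests, indent=2))

def verify(data_dir: Path, manifest: Path) -> list[str]:
    """Return the names of files whose digests no longer match the baseline."""
    baseline = json.loads(manifest.read_text())
    return [name for name, digest in baseline.items()
            if fingerprint(data_dir / name) != digest]

# Hypothetical usage: build_manifest(Path("training_data"), Path("manifest.json")),
# then periodically: tampered = verify(Path("training_data"), Path("manifest.json"))
```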
Generative AI is already being used by hackers. For example, an automated tool called WormGPT has recently appeared, which helps scammers generate convincing spam emails that bypass spam filters. To do this, the system uses a dataset of business emails from hacked corporate mailboxes. As a result, the number of cyber incidents involving phishing, ransomware, and the like has increased in recent years. At the same time, the number of cybercriminals has not grown; rather, a single attacker can now expand the reach of their criminal activity thanks to artificial intelligence technologies, sending, for example, not 100 thousand spam emails a month but 3 billion.
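Defenders counter this with the same class of tools. As an illustration only, here is a minimal sketch of a machine-learning spam/phishing classifier, assuming scikit-learn is available; the tiny inline dataset is a stand-in for the large labeled corpora real filters are trained on, not any vendor's actual pipeline.

```python
# Minimal sketch: a text classifier that flags likely phishing emails.
# The inline examples are purely illustrative; a real deployment would
# train on a large labeled corpus and far richer features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password here immediately",
    "Quarterly report attached, see section 3 for the revenue figures",
    "You won a prize, click this link to claim your reward now",
    "Meeting moved to 14:00, same room, agenda unchanged",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Urgent: confirm your password to avoid account suspension"]
print(model.predict_proba(suspect))  # probabilities for [legitimate, phishing]
```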
Generative artificial intelligence also makes it possible to fake another person's voice. There have already been several cases in which a fraudster imitated the voice of one of a company's managers and ordered several million dollars to be transferred to his account by phone. A more common variant of this criminal scheme is when a scammer calls the victim's relatives and tells a story along the lines of "your grandson is in police custody – pay money to have him released". In the United States, annual losses from such attacks reach $20–27 million.
However, at the moment, voices generated with artificial intelligence technologies are not perfect: they sound robotic and do not convey nuance. More than technical flaws, this type of attack exploits human vulnerability and inattention. Hackers have also not yet fully learned how to produce convincing video deepfakes of faces. But progress in the field of AI does not stand still.
The possibility of using AI as a weapon is another danger that cannot be ignored, especially given the likelihood of a future cyber war between states based on the use of AI as an autonomous weapon. The development of AI-based weapons can escalate inter-state conflicts and pose a serious threat to global security. For example, in the summer of 2023, US senators expressed concern about the possible use of artificial intelligence to create biological weapons.
In the summer of 2023, UN Secretary-General Antonio Guterres also said that artificial intelligence could pose a threat to global peace and security, so he called on all Member States to urgently impose restrictions to control this technology. At the same time, Antonio Guterres asked the Member States of the Security Council to conclude a legally binding pact banning lethal autonomous weapons systems by the end of 2026, and warned that the use of artificial intelligence by terrorists or by governments with bad intentions could lead to terrible levels of death and destruction. In addition, according to the UN Secretary-General, improper operation of a neural network can lead to chaos, especially if the technology is used in connection with nuclear weapons systems or biotechnology. British Foreign Secretary James Cleverly, whose country currently chairs the Security Council, agreed with Antonio Guterres, noting that artificial intelligence "can strengthen or disrupt global strategic stability".
As already noted above, in March 2023 world-famous figures signed a letter calling on AI developers, together with lawmakers, to immediately develop systems for monitoring the development of AI technologies. In particular, more than 1,000 technology experts called on "all artificial intelligence laboratories to immediately suspend training of artificial intelligence systems more powerful than GPT-4 for at least 6 months", GPT-4 being the latest version of the model behind the ChatGPT chatbot of the American company OpenAI. This six-month pause in the further development of AI is needed to fully regulate the processes of creating AI and to develop and implement security measures in this area, because AI with human-competitive intelligence can pose serious risks to society and humanity as a whole. In turn, the chief executive of OpenAI, which created ChatGPT, Sam Altman, also supported the idea of state regulation of AI during a hearing in the US Congress.
In this regard, many major players in the artificial intelligence market, including the AI developers OpenAI, Microsoft, Amazon, Alphabet (Google), and Meta, have made commitments to the White House regarding the responsible development of artificial intelligence. In addition to investing in cybersecurity research, one of the commitments of these players is to develop alerts that will inform users when content has been created using AI. These artificial intelligence companies voluntarily pledged to the White House to label their content (text, images, audio, and video created by AI) with watermarks. This is because, although the development of artificial intelligence is still in its infancy, it is already actively used to create disinformation, and there are concerns that the use of these technologies for criminal purposes is spreading. These companies also promised to thoroughly test AI systems before release, invest in cybersecurity, and share information on how to reduce risks in the field of artificial intelligence. This step is designed to make AI technology safer.
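The companies' actual watermarking schemes are not described here, but the general idea of a machine-verifiable "created by AI" label can be sketched with a keyed signature over the generated content. Everything below – the tag format, key handling, and function names – is an illustrative assumption, not any company's real method.

```python
# Minimal sketch of a provenance label for AI-generated text: the generator
# attaches an HMAC-SHA256 tag keyed with a secret, and a verifier holding
# the same key can later check whether the label is authentic.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key-change-me"  # hypothetical shared key

def label_content(text: str, model_name: str) -> dict:
    """Wrap generated text with a signed 'created by AI' provenance tag."""
    payload = {"content": text, "generator": model_name, "ai_generated": True}
    message = json.dumps(payload, sort_keys=True).encode()
    payload["tag"] = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return payload

def verify_label(payload: dict) -> bool:
    """Check that the provenance tag matches the content and metadata."""
    body = {k: v for k, v in payload.items() if k != "tag"}
    message = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(payload.get("tag", ""), expected)

labeled = label_content("Sample generated paragraph.", "hypothetical-model")
print(verify_label(labeled))  # True while content and metadata are untouched
```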
Given these problems, Carlos Ignacio Gutierrez, a public policy researcher at the Future of Life Institute, noted that one of the big problems posed by artificial intelligence is that today "there is no collegial body of experts who would decide how to regulate it, as happens, for example, with the Intergovernmental Panel on Climate Change".
However, there are AI developers who actively promote a fundamentally different approach to creating artificial intelligence. They believe that a machine should first be taught to feel, and only after that to think logically. Only in this way, they argue, can a machine, without being human, acquire some human traits. And this can only be achieved by giving the machine the opportunity to communicate with people so that it can get to know them better.
NATO is also exploring ways to use artificial intelligence ethically, but the Alliance is not confident that an adversary will be an equally ethical user of AI. The Alliance therefore considers it appropriate to counter threats related to artificial intelligence. At the same time, NATO emphasizes that AI attacks can be carried out not only against critical infrastructure, but also to harvest and analyze important state-level data.
The growing complexity of military networks and the digital "breakthroughs" of world powers make artificial intelligence and related computer programs necessary conditions for developing effective measures to protect national security and ensure the defense resilience of states. With the explosive development of AI technologies, and the large amount of data that needs to be processed, additional requirements arise for national security and for the speed of response to possible threats in the field of state defense. Therefore, automation based on AI technologies is a key element of the Pentagon's adoption of a new cybersecurity paradigm: the zero-trust principle. This approach assumes that networks are already at risk, which requires constant verification of users, devices, and access; in effect, it replaces the old proverb "trust, but verify" with "never trust, always verify". Representatives of the defense ministry have set a deadline of 2027 for implementing the zero-trust baseline, which in total includes more than 100 different measures.
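A minimal sketch of this "never trust, always verify" idea follows: every request is re-checked against user identity, device posture, and access policy, instead of being trusted because it originates inside the network. The identity store, device list, and policy below are illustrative assumptions, not the Pentagon's actual implementation.

```python
# Minimal sketch of zero-trust request handling: no request is trusted by
# default; identity, device posture, and authorization are re-verified on
# every call. All policy details here are illustrative assumptions.
from dataclasses import dataclass

VALID_TOKENS = {"token-abc": "analyst_1"}          # hypothetical identity store
COMPLIANT_DEVICES = {"laptop-42"}                  # devices passing posture checks
ACCESS_POLICY = {"analyst_1": {"read:reports"}}    # per-user permissions

@dataclass
class Request:
    token: str
    device_id: str
    action: str

def authorize(req: Request) -> bool:
    """Verify identity, device posture, and permissions for every request."""
    user = VALID_TOKENS.get(req.token)
    if user is None:
        return False                    # unknown or expired credential
    if req.device_id not in COMPLIANT_DEVICES:
        return False                    # device fails posture check
    return req.action in ACCESS_POLICY.get(user, set())

print(authorize(Request("token-abc", "laptop-42", "read:reports")))  # True
print(authorize(Request("token-abc", "laptop-99", "read:reports")))  # False
```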
First of all, this so-called zero-trust baseline involves reviewing existing data, including with the help of artificial intelligence. The goal is to search for anomalous data appearing on the network in order to determine much faster where the weak point is. After all, modern aircraft, weapons systems, and other technical means all generate critical data and information, and this huge array of information must always be reliably protected from outside interference. Artificial intelligence can thus be both an intelligent weapon and a high-tech tool for ensuring the cybersecurity of the state.
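The anomaly search described above can be illustrated with a standard unsupervised technique. The sketch below applies scikit-learn's IsolationForest to fabricated network telemetry; the features and values are assumptions for illustration, not real defense-network data.

```python
# Minimal sketch: unsupervised anomaly detection over network telemetry.
# Each row is a (bytes_sent, connections_per_minute) pair; the values are
# fabricated for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500, 20], scale=[50, 3], size=(200, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

new_events = np.array([
    [510, 21],      # looks like ordinary traffic
    [50000, 400],   # large exfiltration-like spike
])
print(detector.predict(new_events))  # 1 = normal, -1 = anomaly
```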
International cooperation on the development and deployment of AI, as well as the establishment of clear ethical principles in this area, are critical to preventing the use of AI for criminal purposes or its military use by aggressor states. Ensuring the safe and ethical development of AI requires careful discussion and effective protective measures at the international level.
So, the main problem in the field of AI now lies not in creating effective artificial intelligence systems – there are already enough such developments in the world – but in the absence of effective approaches to building a system of control, primarily ethical control, over artificial intelligence. Therefore, the safety of artificial intelligence should be a priority in its development. At the same time, it should be understood that further technological progress is impossible without artificial intelligence systems, so the development of intelligent machines should continue.
Current vectors and problems of the development of artificial intelligence as a security toolkit and an intelligent weapon of the future are covered in more detail in SIDCON's book "Artificial Intelligence and Security". In addition to these problems, the book takes a practical focus, examining the potential, current trends, and prospects of integrating artificial intelligence into different spheres of economic activity and the life of society and people.