Generative AI as a Threat to National Security

AI brain png clipart illustration / CC0 1.0

Introduction

In recent years, Artificial Intelligence and Generative Artificial Intelligence have become hot topics, often for negative reasons. In response, the Centre for Emerging Technology and Security (CETaS) has produced a full report on the issues and potential solutions associated with the use of generative AI 1.

But what exactly is generative AI, and what is its impact on society? Generative AI refers to AI techniques that can be used to create various types of content, including text, images, audio, and video. The most popular such system, and no doubt the one that most people have heard of, is ChatGPT (Chat Generative Pre-trained Transformer). This is a Large Language Model (LLM) with an estimated 100 million monthly active users as of January 2023 2, rising to 180 million users by August 2023 3.

Generative AI offers various advantages, but it also presents several concerns, including bias, hallucinations (in which the response appears convincing and true but is entirely fabricated, potentially leading to misinformation), and inadequate common-sense reasoning 4.

As already highlighted, ChatGPT has millions of users, with 18% of adults in the USA reporting that they have used it. Such rapid growth in the user base means the chances of misuse will also increase, particularly misinformation, highly deceptive disinformation, and the use of deepfakes. This makes tackling online disinformation extremely challenging.

Businesses and organisations also need to be aware of the dangers and risks posed by employees' use of generative AI, particularly the inadvertent sharing of sensitive and confidential information.

Below is an exploration of the threats associated with the use of generative AI, drawn from the Centre for Emerging Technology and Security (CETaS) 5. These threats are serious and need to be explored further, both to raise awareness and to determine what must be done as these technologies advance so rapidly. This blog post highlights some of the threats posed by generative AI, what the future looks like, and the changes that need to be made now to ensure security and safety.

Digital Security

Cybersecurity

AI can assist less technically inclined users in exploring new cyberattack techniques and gradually increasing their sophistication. Generative AI and ChatGPT can also be used by malicious actors to conduct phishing email attacks and social engineering attacks. Generative AI can simplify the process of launching complex phishing campaigns, even for less technically minded individuals.

The effectiveness and stealth of phishing attacks can be significantly heightened by leveraging ChatGPT's capability to learn communication patterns from trusted sources. Perpetrators exploit a sense of urgency and fear to manipulate victims into hasty actions, bypassing proper scrutiny. Attackers can train the AI on historical data to craft convincing, seemingly authentic emails. Although safety measures are in place, there is evidence to suggest that these safeguards can be circumvented by exploiting the AI, essentially "jailbreaking" it. Subsequently, the AI can be instructed to perform otherwise restricted actions via prompts known as "Do Anything Now" (DAN) 5.
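The urgency-and-fear manipulation described above is also a starting point for defence. As a rough sketch (a toy heuristic of this post, not a vetted detector; the cue list and function names are assumptions for illustration), a filter can count how many known pressure-tactic phrases appear in a message:

```python
# Toy heuristic (an illustration, not a production phishing filter):
# phishing relies on urgency and fear cues, so counting such cues
# gives a crude risk signal. The cue list here is an assumption.
URGENCY_CUES = [
    "urgent", "immediately", "account suspended", "verify now",
    "final notice", "act now", "password expired",
]

def urgency_score(email_body: str) -> int:
    """Count how many known urgency cues appear in the message."""
    body = email_body.lower()
    return sum(cue in body for cue in URGENCY_CUES)

msg = "URGENT: your account suspended. Verify now or lose access."
print(urgency_score(msg))  # 3 cues: "urgent", "account suspended", "verify now"
```

Note that AI-written phishing is dangerous precisely because it can avoid such canned phrases, which is why keyword heuristics alone are increasingly insufficient.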

There is also evidence to suggest that ChatGPT can be exploited by threat actors to create malicious code. Again, the threat actor can use the DAN prompt as previously highlighted. Furthermore, the evidence suggests that it is able to replicate code from the NotPetya malware attack 5. This malware attacks critical files to make systems unusable and also encrypts files 6.

Generative AI introduces a worrisome aspect of social engineering, wherein threat actors manipulate victims to obtain sensitive information. Leveraging the AI’s understanding of human context and language fluency, the acquired data can be transformed into persuasive communications like emails or texts, posing a significant risk.7

Physical security

Radicalisation and Terrorism

The ability for individuals to build personalised connections with AI chatbots, facilitated by their constant availability and endless patience, has the potential to reshape the blueprint for radicalisation. Nevertheless, there is a fundamentally human aspect to this phenomenon that current generative AI is unlikely to replicate in the near future.

Political security

Political disinformation and electoral interference

Generative AI has the potential to significantly amplify political disinformation. For instance, Facebook ads have attempted to sway voters by using deepfake videos of Moldova’s pro-Western president. Moreover, Russian disinformation efforts have harnessed generative AI in the conflict with Ukraine 8.

Targeting and fraud

Fraudsters stand to benefit significantly from generative AI, which can help them craft more convincing and better-targeted spear-phishing campaigns.

Weapon instruction

Generative AI can surface publicly accessible yet obscure information, reducing the effort required to find the crucial data needed for devising and carrying out an attack.

Surveillance, monitoring, and geopolitical fragmentation

Generative AI could play an important role in furthering the global proliferation of technology that adheres to authoritarian standards and values, aiding attempts to enforce single versions of historical truth for future generations. Democracies may be more vulnerable to the exploitation of the creative characteristics of generative AI systems than autocracies. This emphasises the need to understand the cultural and behavioural aspects of generative AI use around the world.

Child sexual abuse material

The growing presence of AI-generated CSAM presents a major concern for law enforcement agencies. The challenge of discerning authentic images from fabricated ones is becoming increasingly complex, leading to the risk of undetected false negatives. Additionally, there is a risk of false positives, potentially causing law enforcement to allocate resources to investigating images of children who have not suffered physical abuse, diverting attention from those in need.

Advantages of generative AI

Enhancing individual productivity is the primary function of generative AI. It operates as a ‘cognitive co-pilot’ by assisting in the direction, collection, processing, and dissemination of information. However, it is important to deploy generative AI cautiously and involve frequent human validation, especially in these early stages of its development and integration.

Malicious AI Threats

Malicious uses of AI pose threats to the following domains:

Political security: AI is utilised to automate tasks related to surveillance, persuasion, and deception, as well as to devise new attacks that exploit an enhanced ability to analyse human behaviors, moods, and beliefs using available data.

Digital security: AI is utilised to automate tasks related to cyberattacks, including novel attacks that exploit human vulnerabilities, existing software vulnerabilities, or the vulnerabilities of AI systems.

Physical security: AI is utilised to automate tasks involved in carrying out attacks on physical systems, including novel attacks that compromise cyber-physical systems or that involve physical systems which would be infeasible to direct remotely.

Adversaries

Adversaries now possess the tools needed to disrupt systems, networks, societies, and potentially economies. These adversaries fall into three categories when it comes to malicious AI.

  • State actors
  • Non-state hostile actors (such as organised crime groups)
  • Lone actors

Security issues with Generative AI

Below is a list of potential security issues concerning generative AI: 9

  • Overreliance (too much trust in AI systems can lead to laziness and complacency)
  • Misinformation and manipulation
  • Hallucinations
  • Privacy and data security
  • Phishing and social engineering attacks
  • Bias and discrimination
  • Offensive or inappropriate content
  • Deepfakes and identity theft
  • Adversarial attacks
  • Cybersecurity issues
  • Job displacement
  • Accountability
  • Safety
  • Business impact
  • Sensitive information
  • Data leaks

The Future

One suggested approach to managing the creation of content and information is watermarking. However, it is important to recognise that this method can be circumvented by malicious actors. Other strategies to uphold the authenticity of content include the implementation of strong privacy and data protection measures. Educating and raising awareness among users is also crucial, helping them to recognise and avoid the risks associated with generative AI and chatbots. It is essential to maintain a human-centred approach, ensuring human involvement in all facets of AI use. Transparency is also key, as users need to understand the limitations of AI.

Governance and regulation are another area that requires more attention, as there are no specific international rules on how to deal with the proliferation of AI. However, the European Union has recently created the EU AI Act, which addresses the risks of AI and aims to ensure that Europeans can trust it 10. Furthermore, in October 2023 the US President issued an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence 11. This suggests that leaders and politicians are recognising the need for regulations, frameworks, and laws to ensure that AI can be trusted and is safe to use.
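The fragility of watermarking can be made concrete with a toy sketch (purely illustrative; real LLM watermarks bias token statistics rather than inserting characters, but they face a similar removal problem). Here a "watermark" is hidden as invisible zero-width characters, and an attacker strips it with one line of filtering:

```python
# Toy sketch (not a production scheme): hiding a zero-width-character
# watermark in text, and how trivially a malicious actor strips it.

ZW1 = "\u200b"  # zero-width space, encodes bit 1
ZW0 = "\u200c"  # zero-width non-joiner, encodes bit 0

def embed_watermark(text: str, tag: str = "AI") -> str:
    """Hide `tag` as invisible bits after the first word of `text`."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    hidden = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    head, sep, rest = text.partition(" ")
    return head + hidden + sep + rest

def detect_watermark(text: str) -> bool:
    """A checker only needs to spot the invisible marker characters."""
    return ZW1 in text or ZW0 in text

def strip_watermark(text: str) -> str:
    """An attacker defeats the scheme with one line of filtering."""
    return "".join(ch for ch in text if ch not in (ZW1, ZW0))

marked = embed_watermark("This paragraph was generated by a model.")
print(detect_watermark(marked))                   # True
print(detect_watermark(strip_watermark(marked)))  # False
```

The asymmetry shown here is the core policy problem: embedding and detecting a mark is easy, but so is removing it, which is why watermarking alone cannot guarantee content provenance.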


  1. https://edmo.eu/wp-content/uploads/2023/12/Generative-AI-and-Disinformation_-White-Paper-v8.pdf ↩︎
  2. https://www.nature.com/articles/s41746-023-00965-x.pdf ↩︎
  3. https://www.namepepper.com/chatgpt-users ↩︎
  4. https://edmo.eu/wp-content/uploads/2023/12/Generative-AI-and-Disinformation_-White-Paper-v8.pdf ↩︎
  5. Petya (malware family), Wikipedia ↩︎
  6. IEEE Xplore full-text PDF ↩︎
  7. IEEE Xplore full-text PDF ↩︎
  8. https://edmo.eu/wp-content/uploads/2023/12/Generative-AI-and-Disinformation_-White-Paper-v8.pdf ↩︎
  9. https://idsc.miami.edu/magazine/wp-content/uploads/2023/12/Cyber-Security-Issues-and-Challenges-Related-to-Generative-AI-and-Chat-GPT-by-Rajesh-Pasupuleti.pdf ↩︎
  10. AI Act | Shaping Europe’s digital future (europa.eu) ↩︎
  11. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence | The White House ↩︎
