Lost track of the AI apps inside your company? Uncover them with our free shadow AI scan 🚀
August 26, 2024

6 Questions CISO needs to answer before implementing GenAI

In this blog post you'll find some of the most pressing questions that CISOs have regarding the integration of generative AI inside their companies
Riccardo Morotti
Co-Founder & COO
Header image

Over the past few months, I’ve had the privilege of speaking with dozens of Chief Information Security Officers (CISOs) and industry experts, gathering insights into their most pressing concerns and questions about artificial intelligence. This blog post synthesises that feedback to address the key compliance, privacy, and cybersecurity issues facing businesses today. From understanding regulatory requirements like the EU AI Act to implementing effective safeguards and mitigating cybersecurity threats, this guide provides actionable answers to the most common questions and challenges related to AI technology. 

What compliance and privacy mandates must we consider?

The EU AI Act, which takes effect on August 1 2024, introduces stringent regulations for AI technologies to ensure safety, transparency, and fundamental rights. The Act mandates robust cybersecurity measures and imposes significant penalties for non-compliance (EU AI Act – Article 113).

The AI Act includes specific provisions to ensure the cybersecurity of high-risk AI systems. Article 15 outlines that AI-based products and their implementations must be designed for accuracy, robustness, and cybersecurity throughout their lifecycle.

Key requirements include:

  • Mitigation measures: implement technical solutions to protect AI models and applications from attacks and vulnerabilities.
  • Logging and traceability: automatically log operations and communications to trace system functions and identify breaches.
  • Protection against unauthorized alterations: safeguard against unauthorized third-party changes and address the latest vulnerabilities in AI.
  • Fault and error resilience: ensure the AI model or application remains resilient to faults and errors.
  • Risk management system: maintain a risk management system to address potential cybersecurity threats (e.g., OWASP top 10 for LLMs, MITRE ATLAS for AI).

These measures are not merely best practices but legal requirements under the AI Act. For detailed information, refer to Article 15 of the official legislation.

To assist companies in understanding their potential risks and the financial implications of non-compliance, we offer a specialised calculator. Use our tool to estimate potential fines your company could face under the EU AI Act.

How do I implement technical and policy guardrails?

Securing AI systems requires both technical and policy safeguards. We recommend implementing measures that ensure appropriate human oversight and control over AI systems. Keeping humans involved in AI operations helps mitigate risks and promotes safe, responsible use of AI technology.

Focus on three key areas:

  1. Application inventory: identify and understand the AI applications in use and their purposes. This mapping is crucial for developing effective AI policies and training users.
  2. Acceptable use policy: clearly outline AI usage do’s and don’ts in an Acceptable Use Policy to mitigate shadow AI risks.
  3. Policy enforcement: ensure that AI usage policies are adhered to. This may require tools to monitor and prevent the input of sensitive data into AI systems.

We have prepared a checklist that can help you in improving your security posture management. Find the link here.

What are the most common cybersecurity threats targeting AI?

Cybersecurity threats in generative AI can be categorized into two main areas:

  1. Usage risks: protect your organization from risks associated with employees using generative AI tools like ChatGPT. Common issues include the inadvertent sharing of sensitive business information, PII, financial data, or proprietary code.
  2. Integration risks: safeguard against threats posed by integrating AI into internal applications. Notable threats include prompt injections—malicious instructions that manipulate AI models to produce harmful or misleading responses.

What steps can I take to detect and mitigate cybersecurity threats targeting AI?

To address cybersecurity threats, consider the following general measures:

  • Identify risks: define potential risks such as prompt attacks, data extraction, model backdooring, adversarial examples, data poisoning, and exfiltration.
  • Enhance detection and response: incorporate AI into your organization’s threat detection and response systems to identify and mitigate attacks in real time.
  • Incident response plan: develop a comprehensive incident-response plan to handle AI-specific threats and vulnerabilities, including clear protocols for detection, containment, and eradication.

What are the cybersecurity risks of using generative AI?

According to a recent Gartner security blog, sharing sensitive information with generative AI applications is a significant risk. Gartner notes that there are currently no verifiable data governance or protection assurances for confidential enterprise information. Users should assume that data entered into platforms like ChatGPT could become public.

It is crucial to recognize that generative AI platforms not only store but may also reuse data from previous interactions. Sharing sensitive data with such tools—whether PII or PHI—could lead to severe compliance fines and damage to customer trust.

For more information on cybersecurity risks, check out our dedicated post.

What does the AI attack surface look like?

Generative AI models present new security challenges with unique attack vectors, such as data poisoning, model evasion, and extraction. Unlike traditional attacks requiring programming skills, adversaries can now exploit large language models (LLMs) through skillful prompting.

Recent research by IBM highlighted the vulnerabilities of LLMs, revealing that these models could be manipulated to divulge sensitive information, generate compromised code, create malicious scripts, and provide inadequate security recommendations.

Conclusion

At Glaider, we specialize in helping organizations secure their generative AI activities, ensuring that your AI systems are both compliant and resilient. To support you in this endeavor, we offer two valuable free audits:

  • Shadow AI usage audit: discover if your employees are using AI tools safely and in accordance with your company policies.
  • Prompt injection security audit: for those developing chatbots, our tool tests your system prompts to reveal their vulnerability to prompt injection attacks and assess their strength.

Leverage our expertise to safeguard your AI operations and stay ahead of potential risks. Contact us today to learn more about how Glaider can help enhance your AI security and compliance efforts.