As companies move from AI pilot programs to serious implementation, understanding zero trust is more important than ever.
Many organizations in the Asia Pacific region (Australia and New Zealand) have spent the last 18 months experimenting with AI. Looking ahead, it’s clear that we are now moving from pilot to implementation, as technology leaders begin to integrate AI-driven capabilities to accelerate growth.
According to IDC, global spending on AI-enabled technologies will exceed $749 billion by 2028. Notably, IDC reports that 67% of the $227 billion in projected AI spending in 2025 will come from companies embedding AI capabilities into their core businesses, outpacing investments in cloud and digital services.
As investments in artificial intelligence (AI) solutions increase, particularly in generative AI (GenAI) tools such as Microsoft Copilot, the need for more resilient security frameworks becomes critical. Rather than simply extending current cybersecurity practices to accommodate new AI technologies, security teams must first assess the risks these tools introduce: identifying new network vulnerabilities and the additional controls required, without stifling innovation.
Zero trust is a fundamental approach to cybersecurity that has emerged as a key strategy for protecting sensitive data amid evolving access permissions and growing threats. However, when it comes to the latest wave of GenAI tools and assistants, is a zero trust approach still sufficient in an increasingly AI-driven threat landscape?
Understanding Zero Trust Security
As security leaders know all too well, zero trust prioritizes securing data, systems and assets by granting access only when necessary. This approach is not only theoretical; it is actively implemented across various industry sectors, reflecting a growing recognition of the need for adaptive security measures in an increasingly complex digital environment.
When applied well, the benefits of zero trust can be substantial. According to Forrester, adopting a zero trust approach can build brand trust, accelerate engagement with emerging technologies, and drive growth by improving customer and employee experiences. It can also help bridge the gap between CISOs and those advocating for more investment and innovation in AI. As Forrester puts it: “With zero trust, security becomes a business amplifier and the CISO goes from being the nemesis of your organization to being a sought-after supporter.”
However, zero trust is not without potential pitfalls. Implementing this security framework is no easy feat: organizations must completely reassess access management, user verification and system architecture to establish new policies and protocols. The introduction of GenAI tools further complicates this transition, as they demand an even more nuanced approach to security.
The AI Challenge: The Risks of Generative AI
While the Zero Trust model provides a solid framework, the rise of GenAI tools introduces new security concerns. AI assistants, designed to improve productivity, may inadvertently expose organizations to higher risks, including data leaks, unauthorized access and misuse of sensitive information. Many organizations do not fully realize how overly permissive access to data can significantly increase the likelihood that cybercriminals will infiltrate sensitive systems, leaving CISOs to mitigate the consequences.
As Thomson Reuters underlines: “The proliferation of GenAI and large language models (LLMs) poses another tangle of liabilities – such as jailbreaking and prompt injection attacks – that jeopardize the sanctity of privacy, opening the door for bad actors to wreak havoc, exploit weaknesses and reveal personal data.”
According to Vectra AI’s 2024 State of Threat Detection and Response research report, The Defenders’ Dilemma, about 54 percent of APAC SOC practitioners say security vendors flood them with unnecessary alerts to avoid accountability for breaches, while 45 percent distrust their tools’ ability to perform as needed. This highlights the urgent need for security measures that address both alert overload and the lack of trust in GenAI systems.
Organizations need to consider not only who is accessing data, but also what AI systems like Microsoft Copilot can access and how they manage that flow of information.
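The point above can be sketched in code. The following is a minimal, hypothetical illustration (all names, such as `Principal` and `is_allowed`, are invented for this sketch and not a real API): access is only granted when both the human user and the AI assistant acting on their behalf hold the required scope, so an assistant never inherits broader access than the person driving it.

```python
# Hypothetical sketch: a zero trust check that evaluates BOTH the human
# user and the AI assistant acting on their behalf before releasing data.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Principal:
    """A human user or an AI agent requesting access."""
    name: str
    kind: str                      # "user" or "ai_agent"
    scopes: frozenset = field(default_factory=frozenset)

def is_allowed(user, agent, resource):
    """Grant access only if every principal in the chain holds the scope.

    An AI assistant never gains broader access than the user driving it,
    and the user never gains access just because the agent has it.
    """
    required = f"read:{resource}"
    principals = [user] + ([agent] if agent else [])
    return all(required in p.scopes for p in principals)

user = Principal("alice", "user", frozenset({"read:hr-reports"}))
copilot = Principal("assistant-1", "ai_agent", frozenset({"read:wiki"}))

# The user may read HR reports directly...
print(is_allowed(user, None, "hr-reports"))      # True
# ...but the AI agent acting for her is not scoped for them, so deny.
print(is_allowed(user, copilot, "hr-reports"))   # False
```

The design choice here is that every principal in the request chain must independently satisfy the policy, which is the core of treating an AI assistant as its own untrusted identity rather than a transparent extension of the user.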
Integrating Zero Trust Principles Into AI Frameworks
To effectively navigate the complexities of AI security, organizations can adapt zero trust principles specifically for GenAI. This involves a multi-dimensional approach integrating architectural design, data management and strict access controls. Key considerations for this framework include:
- Authentication and authorization: Implement robust user verification processes and limit access rights to the minimum necessary. This principle applies equally to AI systems, which must undergo rigorous identity verification before accessing sensitive data.
- Data source validation: Validate the sources from which AI systems collect information. This protects data integrity and mitigates the risks of data manipulation and exploitation.
- Process monitoring: Continuously monitor AI processes to identify anomalies and potential security vulnerabilities, so unusual behavior can be detected and addressed quickly.
- Output control: Implement mechanisms to review the results AI systems generate, preventing the dissemination of sensitive information or malicious content.
- Activity audits: Regularly audit AI system activities to maintain accountability and transparency. These audits are essential to understanding how data is accessed and used in GenAI environments.
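The five controls above can be sketched as one request/response cycle. This is an illustrative sketch only; every function, constant, and pattern below is hypothetical, and a real deployment would back each step with dedicated identity, monitoring, and DLP tooling rather than inline checks.

```python
# Illustrative sketch only: one request through the five zero trust controls.
import logging
import re

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai-audit")

TRUSTED_SOURCES = {"internal-wiki", "policy-db"}     # data source validation
SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")     # e.g. SSN-like strings

def handle_request(token, source, prompt, generate):
    # 1. Authentication and authorization: verify the caller first.
    if not token.startswith("valid-"):               # stand-in for real verification
        raise PermissionError("unauthenticated request")

    # 2. Data source validation: only let the model read vetted sources.
    if source not in TRUSTED_SOURCES:
        raise ValueError(f"untrusted data source: {source}")

    # 3. Process monitoring: record the request so anomalies can be spotted.
    audit_log.info("request source=%s prompt_len=%d", source, len(prompt))

    answer = generate(prompt)

    # 4. Output control: redact sensitive patterns before anything leaves.
    answer = SENSITIVE.sub("[REDACTED]", answer)

    # 5. Activity audit: keep an accountable trail of what was returned.
    audit_log.info("response len=%d redactions=%d",
                   len(answer), answer.count("[REDACTED]"))
    return answer

# Toy model that leaks an SSN-like string; the output control strips it.
result = handle_request("valid-abc", "internal-wiki",
                        "summarise the report",
                        lambda p: "Summary done. Ref 123-45-6789.")
print(result)   # Summary done. Ref [REDACTED].
```

The point of the sketch is the ordering: authentication and source validation happen before the model runs, while output filtering and auditing happen after, so no single compromised step can silently bypass the others.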
By focusing on these principles, organizations can cultivate a security posture that addresses the unique challenges posed by GenAI. Content layer security is emerging as a key element, going beyond conventional access controls to assess what data the AI system can access, process or share.
A Way Forward in an AI World
As digital innovation continues to evolve with the integration of AI technologies, the need for robust security frameworks cannot be overstated. Zero Trust security provides a solid foundation, but its principles must be adapted to meet the complexities introduced by GenAI.
By taking a proactive, data-centric approach, organizations can improve their security posture and protect sensitive information against an ever-changing array of threats. In the age of digital transformation, vigilance and innovation in security practices are not only beneficial; they are essential to protecting the integrity and trust of the organization.