Microsoft Copilot recently made headlines after researchers revealed a possible security vulnerability in its retrieval-augmented generation (RAG) systems. Dubbed “ConfusedPilot,” the technique could, the researchers claim, be used to get Copilot to disclose sensitive data from a company’s systems.
Beyond Microsoft, other GenAI-powered copilots are contributing to a sharp rise in security breaches that many organizations seem woefully unprepared to prevent. In fact, Gartner predicts that by 2025, GenAI will drive an increase in the cybersecurity resources needed to secure it, pushing application and data security spending up by more than 15%. This is consistent with what we’re hearing from security executives, many of whom are concerned about the impact of copilot adoption on their security infrastructure.
To adequately protect an organization, it is important to understand why these security issues persist.
Setting Context: The Rise of GenAI and AI Assistants
Generative AI continues to disrupt organizations of all sizes and industries. Standalone and integrated solutions promise to radically improve workflows, strengthen customer support, and save teams hours of manual work.
In the Asia-Pacific region, the GenAI market is expected to grow at an annual rate of 46.46% (2024-2030), reaching a projected volume of $86.77 billion by 2030. This growth is driven by the fact that GenAI lends itself to personalized and effective digital experiences, including virtual assistants and chatbots that understand and respond to individual preferences.
As Gartner points out, GenAI is the most commonly deployed type of AI solution in organizations. Moreover, GenAI embedded in existing applications, such as Microsoft Copilot for Microsoft 365 or Adobe Firefly, is the primary way organizations address GenAI use cases: 34% of respondents said this was their main method of using GenAI, more common than alternatives such as custom GenAI models.
Leinar Ramos, senior director analyst at Gartner, explained how this rise of GenAI solutions is propelling conversations about their appropriate use. He said: “GenAI has increased the degree of AI adoption across the enterprise and made topics such as AI upskilling and AI governance much more important. GenAI requires organizations to evolve their AI capabilities.”
Microsoft Copilot is a clear example of a GenAI solution that is integrated into an existing platform and requires additional attention to mitigate security concerns. As several analysts have attested, Microsoft Copilot is one of the first major players in embedded GenAI. As Microsoft itself puts it, “Microsoft Copilot is an AI-powered digital assistant designed to help people with a range of tasks and activities on their devices.”
Understanding the reality of the Copilot threat
While the promise of GenAI solutions like Copilot is certainly appealing, it’s crucial to understand why they also present significant security issues that leading organizations are already scrambling to address. Essentially, it all comes down to data and who has access to what.
The main concern with Microsoft Copilot and similar tools is that they have access to the same sensitive data as their users. Unfortunately, many organizations deploying Copilot do not fully realize how overly permissive data access significantly increases the chances of cybercriminals reaching sensitive information and systems, leaving CISOs to mitigate the inevitable consequences.
Another fundamental concern is that Copilot can quickly generate large amounts of new sensitive data from queries, and may reference legitimately accessible data when it should not. For example, Copilot could compose an answer containing confidential information the person asking is not authorized to see, such as data about future product launches, corporate restructurings, or high-level operations.
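To make that failure mode concrete, here is a minimal sketch of the per-user permission check a RAG pipeline needs before retrieved content reaches the model. The class, function, and file names are hypothetical illustrations for this example, not Copilot internals.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """A retrieved document fragment carrying the ACL of its source file."""
    text: str
    source: str
    allowed_users: set = field(default_factory=set)  # hypothetical ACL

def trim_by_permission(chunks, user_id):
    """Drop any chunk the requesting user cannot open directly.

    Without this step, a RAG pipeline will happily synthesize answers
    from documents the asker was never authorized to read."""
    return [c for c in chunks if user_id in c.allowed_users]

# Example: an intern asks about an unannounced product launch.
retrieved = [
    Chunk("Q3 launch slips to October.", "launch-plan.docx", {"vp_product"}),
    Chunk("Cafeteria menu for next week.", "menu.docx", {"vp_product", "intern"}),
]
safe = trim_by_permission(retrieved, "intern")
print([c.source for c in safe])  # ['menu.docx'] -- the launch plan is filtered out
```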
Some attempts have been made to address data misuse. For one thing, Microsoft doesn’t use an organization’s data to train Copilot; the data remains within the organization’s own Microsoft 365 tenant. Companies are also paying closer attention to user access. Yet these measures only go so far, because the documents and responses Copilot generates still follow the logic of knowledge sharing, not security policy or principles.
So what can we do?
Trust is paramount, and for GenAI integrations to succeed rather than become a data breach in waiting, systems and processes must be reoriented toward the security of people and operations. Essentially, there are three key things to remember: GenAI is a force multiplier for everyone, these attacks still require action on the objective, and they can be detected and stopped.
Before adopting Copilot, organizations must conduct a rigorous and thorough review of access controls to determine who has access to what data. As zero-trust principles dictate, best practice is to grant users the least access they need.
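One practical starting point is to enumerate sharing permissions programmatically. The sketch below uses the Microsoft Graph API to flag files in a drive that carry organization-wide or anonymous sharing links; it assumes an access token with Files.Read.All, checks only the top level of one drive, and omits the paging and folder recursion a real audit would need.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access token with Files.Read.All>"   # placeholder
DRIVE_ID = "<drive id>"                        # placeholder
headers = {"Authorization": f"Bearer {TOKEN}"}

def broad_permissions(item_id):
    """Return sharing links on an item whose scope is wider than named users."""
    url = f"{GRAPH}/drives/{DRIVE_ID}/items/{item_id}/permissions"
    perms = requests.get(url, headers=headers).json().get("value", [])
    return [p for p in perms
            if p.get("link", {}).get("scope") in ("organization", "anonymous")]

# Walk the top level of the drive and flag over-shared files.
items = requests.get(f"{GRAPH}/drives/{DRIVE_ID}/root/children",
                     headers=headers).json().get("value", [])
for item in items:
    for perm in broad_permissions(item["id"]):
        print(f"{item['name']}: shared with scope "
              f"'{perm['link']['scope']}' -- review before enabling Copilot")
```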
Once Copilot is integrated, organizations can apply sensitivity labels; Microsoft itself recommends doing so through Microsoft Purview. Labels can be configured to encrypt sensitive data and to withhold the Copy and Extract Content (EXTRACT) usage right. Without EXTRACT, users cannot copy the contents of a sensitive document, and Copilot cannot reference the document in its responses.
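Conceptually, the EXTRACT gate works like the illustrative sketch below. The rights names mirror Purview’s usage-rights vocabulary, but the data structures and function are invented for this example; actual enforcement happens inside Microsoft 365, not in customer code.

```python
# Illustrative only: models how withholding the EXTRACT usage right
# stops an assistant from quoting a protected document. The rights
# names mirror Purview's vocabulary; the API here is invented.
PROTECTED_DOCS = {
    "roadmap.docx":  {"VIEW"},             # label withholds EXTRACT
    "handbook.docx": {"VIEW", "EXTRACT"},  # label grants EXTRACT
}

def copilot_may_reference(doc, rights_map=PROTECTED_DOCS):
    """An assistant may only lift content from docs granting EXTRACT."""
    return "EXTRACT" in rights_map.get(doc, set())

for doc in PROTECTED_DOCS:
    verdict = "can" if copilot_may_reference(doc) else "cannot"
    print(f"Copilot {verdict} reference {doc}")
```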
It’s also wise for organizations to consider additional security services and expert solutions that identify suspicious user behavior and flag prioritized alerts, so SOC professionals can detect threats before they go too far. With that added context, a SOC defender can respond decisively, for example by locking a “user” (a hacker bot disguised as an employee) out of the account. Quality tools also provide greater visibility, so SOC teams can see who is using Copilot and what data it is touching, keeping them informed and prepared.
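As a simplified illustration of the behavioral signals such tooling hunts for, the sketch below flags accounts whose daily Copilot query volume jumps far above their own baseline. The log format, query counts, and threshold are all assumptions for this example; production UEBA scoring is considerably more sophisticated.

```python
from statistics import mean, stdev

# Hypothetical per-user daily Copilot query counts pulled from audit logs.
history = {
    "alice": [12, 9, 14, 11, 10, 13, 12],
    "bob":   [8, 7, 9, 8, 210, 8, 9],   # one day of frantic querying
}

def flag_anomalies(history, z_threshold=2.0):
    """Flag any day more than z_threshold standard deviations above the
    user's own mean -- a crude stand-in for real behavioral scoring."""
    alerts = []
    for user, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        for day, n in enumerate(counts):  # day is 0-indexed
            if sigma and (n - mu) / sigma > z_threshold:
                alerts.append((user, day, n))
    return alerts

for user, day, n in flag_anomalies(history):
    print(f"ALERT: {user} made {n} Copilot queries on day {day}")
```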
As Vectra AI’s 2024 State of Threat Detection and Response report shows, AI can also be used to help security teams reduce data vulnerabilities, not only in their GenAI integrations but across all workflows.
The report highlights that almost all SOC practitioners (97%) have adopted AI tools, and 85% said their investment in and use of AI has increased over the past year, with a positive impact on their ability to identify and respond to threats. Additionally, 89% of SOC practitioners plan to use more AI-based tools in the coming year to replace legacy threat detection and response, and 75% said AI has reduced their workload over the past 12 months.
Partnering with a security expert can go a long way toward helping teams understand their organization’s overall security posture and adopt GenAI tools properly, while using other AI-based solutions for better threat detection and response.
When it comes to Copilot, security experts can help configure it to reduce data leaks, monitor prompts and responses to ensure sensitive data is not used inappropriately, and detect anomalous behavior or misuse of the solution.
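For the monitoring piece, a first cut often looks like pattern-based screening of prompts and responses before they are released or logged. The patterns below are deliberately simple placeholders (the project codename is invented for the example); real DLP relies on trained classifiers and validation, not bare regexes.

```python
import re

# Toy detection patterns; real DLP uses trained classifiers and
# checksum validation, not bare regexes.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "project_codename": re.compile(r"\bProject Nightfall\b", re.I),  # assumed label
}

def scan(text):
    """Return the names of sensitive-data patterns found in text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

response = "Sure! Project Nightfall ships in Q3; budget card 4111 1111 1111 1111."
hits = scan(response)
if hits:
    print(f"Blocked Copilot response: matched {', '.join(hits)}")
```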