
Generative AI has redefined what we deemed possible with artificial intelligence. Its mainstream adoption is no doubt startling to those who don't work in the tech sector, and it both impresses us and makes us apprehensive about the future.
In the realm of cybersecurity and enterprise security, it may have even bigger implications.
Much Like the Human Brain, But Faster
Generative AI operates on neural networks trained through deep learning, an approach loosely inspired by the way the human brain works. These systems mimic human learning processes, but the big difference is that they process information light-years faster, thanks to the vast quantities of crowd-sourced data behind them. What a human might learn in 30 years, these systems can absorb in the blink of an eye.
How much of a benefit that is depends on the quality and sheer volume of the data fed into them. The technology can greatly increase an organization's efficiency, boosting productivity significantly with the same number of human resources. Applications such as ChatGPT, Bard, and GitHub Copilot emerged seemingly overnight, taking IT leaders by surprise; in just six months, Generative AI tools have already reached a technology inflexion point.
Cybersecurity Challenges
ChatGPT and other Generative AI tools are primarily delivered via a SaaS model by a third party. One of the biggest challenges lies in interacting with Generative AI and handing data to that third party.
The Large Language Models (LLMs) behind these AI tools retain submitted data in order to generate intelligent responses to prompts. This raises significant concerns around data loss and compliance. Providing sensitive data to Generative AI, such as health information, intellectual property, and personally identifiable information, needs to be treated the same as any other data controller and processor relationship, with proper controls in effect.
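One practical control is to screen prompts for sensitive data before they ever leave the organization. Below is a minimal sketch in Python, using only standard-library regular expressions; the patterns, the `redact_prompt` helper, and the sample identifiers are all hypothetical illustrations, and a production deployment would use a vetted data-loss-prevention tool rather than hand-rolled rules.

```python
import re

# Hypothetical patterns for common sensitive identifiers; a real
# deployment would rely on a proven DLP library, not ad-hoc regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a known sensitive pattern with a
    placeholder before the prompt is sent to a third-party AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Patient John Doe (john.doe@example.com, SSN 123-45-6789) reported..."
    print(redact_prompt(raw))
    # -> Patient John Doe ([REDACTED EMAIL], SSN [REDACTED SSN]) reported...
```

The design choice here is simple: redaction happens on the organization's side of the boundary, so nothing sensitive is entrusted to the provider's retention or training policies in the first place.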
Information fed into tools like ChatGPT can fuel a pool of knowledge to which any subscriber has access. Data that is uploaded or asked about may, within the app's guardrails, be replayed to other parties making similar enquiries, and, if used in training, can shape the responses to future queries, much as with other shared SaaS applications. As it stands, Generative AI providers do not have concrete data policies for user-provided data.
Insider Threats
The insider threat is also significant when it comes to AI. Those with intimate knowledge of an enterprise can use ChatGPT to create an authentic-looking email, duplicating a colleague's style, right down to characteristic typos, in very fine detail. Generative AI also allows attackers to replicate websites far more easily than before.
For more information on enterprise security and all future risk management events, check out the upcoming events from Whitehall Media.