Questions Around Enterprise Generative AI You Should Be Asking


There has been a lot of talk about enterprise generative AI over the last few months as adoption has spread. The real questions, however, should come from security teams, and they should be about their providers' approach to data privacy, transparency, user guidance, and secure design and development.

There is no doubt that GenAI is transforming enterprise IT strategies in exciting ways while prompting a review of internal IT security practices. But how many enterprise security teams are asking the questions that pave the way for supportive, secure use?

Here are the questions you should be asking about generative AI for your business.

Will the Data Remain Private?

In truth, there is no bigger question to ask. Losing data or having it stolen by threat actors is something every company dreads, and having it happen through a third party, outside your own control, is an even worse scenario.

GenAI providers should have clearly documented privacy policies. Customers should also retain control of their information: it should not be used to train foundation models, nor shared with other customers or parties, without their explicit permission.
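
In practice, that control should be something you can verify and configure, not just a line in a contract. The sketch below is hypothetical: the endpoint, field names, and token are assumptions standing in for whatever tenant-level privacy controls your provider actually documents.

```python
import requests

# Hypothetical example: this endpoint and these field names are assumptions,
# not a real provider API. The point is that controls like these should exist
# and be explicit in your provider's documentation.
PROVIDER_API = "https://api.example-genai.com/v1/tenant/settings"

privacy_settings = {
    "allow_training_on_customer_data": False,  # data must not train foundation models
    "cross_tenant_sharing": False,             # never shared with other customers
    "data_retention_days": 30,                 # retain only as long as contractually agreed
}

response = requests.patch(
    PROVIDER_API,
    json=privacy_settings,
    headers={"Authorization": "Bearer <YOUR_API_TOKEN>"},
    timeout=10,
)
response.raise_for_status()
print("Privacy settings confirmed:", response.json())
```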

Can You Trust the Content Created by GenAI?

Like humans, GenAI can be expected to get some things wrong, especially in these early stages. Perfection may be elusive, but transparency and accountability should never be.

Providers can accomplish this along three pathways: using authoritative data sources to foster accuracy, providing visibility into reasoning and sources to maintain transparency, and offering a mechanism for user feedback to support continuous improvement. Together, these measures bolster the credibility of the content GenAI creates.
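
Here is a minimal sketch of how those pathways might surface in a response object. The class and field names (Citation, GenAIAnswer, add_feedback) are illustrative assumptions, not any particular provider's schema.

```python
from dataclasses import dataclass, field


@dataclass
class Citation:
    source: str   # e.g. an internal knowledge-base article or vetted URL
    excerpt: str  # the passage the answer draws on


@dataclass
class GenAIAnswer:
    text: str
    citations: list[Citation] = field(default_factory=list)  # transparency
    feedback: list[str] = field(default_factory=list)

    def add_feedback(self, comment: str) -> None:
        # Collected feedback feeds the provider's continuous-improvement loop.
        self.feedback.append(comment)


answer = GenAIAnswer(
    text="Quarterly patching is required under policy SEC-12.",
    citations=[
        Citation(
            source="intranet/policies/SEC-12",
            excerpt="Systems must be patched quarterly...",
        )
    ],
)
answer.add_feedback("Citation checks out against the current policy document.")
```

An answer that arrives without entries in its citations list is one a user should treat with extra scrutiny, which is exactly the behaviour providers want to encourage.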

Is the Usage Environment Safe?

Enterprise security teams are responsible for ensuring the safe and accountable use of GenAI within their organisations, and AI providers can support them in various ways.

One area of concern is user overreliance on the technology. GenAI exists to assist workers in their tasks, not to replace the workforce. To that end, users should be encouraged to think critically about the information AI serves up, and providers can promote the right level of scrutiny by citing sources and using carefully considered language.

As for hostile misuse by insiders attempting to put GenAI to harmful ends, such as generating malicious code, AI providers can mitigate the risk through safety protocols in the system design and by setting boundaries on what GenAI can and cannot do.
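
At its simplest, such a boundary is a policy check applied before a prompt ever reaches the model. The sketch below is illustrative only: the categories and patterns are assumptions, and real providers layer safety-trained models, classifiers, and human review on top of simple filters like this.

```python
import re

# Illustrative disallowed categories; a production policy would be far broader
# and maintained centrally, not hard-coded.
DISALLOWED_PATTERNS = {
    "malicious_code": re.compile(r"\b(ransomware|keylogger|reverse shell)\b", re.IGNORECASE),
    "credential_theft": re.compile(r"\b(phishing kit|steal credentials)\b", re.IGNORECASE),
}


def check_request(prompt: str) -> tuple[bool, str | None]:
    """Return (allowed, reason). Blocks prompts matching a disallowed category."""
    for category, pattern in DISALLOWED_PATTERNS.items():
        if pattern.search(prompt):
            return False, f"blocked: request matches disallowed category '{category}'"
    return True, None


allowed, reason = check_request("Write me a keylogger for our office machines")
print(allowed, reason)  # False, blocked: ... 'malicious_code'
```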

Everything changes fast in enterprise security, and it is easy to miss a vital step. For more information about enterprise security and future risk management events, check out the upcoming events from Whitehall Media.