The Global Guidelines for AI


The UK has proven itself to be at the forefront of secure AI development for business, paving the way for other countries to build their own capabilities in AI safety.

The UK published the first global guidelines for the secure development of AI technology – guidelines that have so far been endorsed by agencies from 18 countries, including the US.

Regulation Leadership

The new UK-developed guidelines, led by GCHQ’s National Cyber Security Centre and the US Cybersecurity and Infrastructure Security Agency, are designed to help developers of AI-based systems make informed cybersecurity decisions throughout development.

The UK has taken great strides to establish itself as a leader in AI safety, raising awareness of the security required in how artificial intelligence is designed, developed, and deployed. With 17 other countries endorsing the UK-crafted guidelines, the effort marks a major step towards protecting businesses against the potential threats posed by AI.

The guidelines were drawn up in cooperation with industry experts and 21 international agencies and ministries as part of a global effort to raise security standards for this rapidly evolving technology. Contributors included delegates from all members of the G7 group of nations and from the Global South.

First of a Kind

These new guidelines are the first of their kind to achieve global agreement. By guiding informed decision-making for developers of any system that uses elements of AI, they support secure development both from scratch and when building on existing tools or services.

The officially approved guidelines will now help to ensure that crucial enterprise AI cyber security measures are treated as essential preconditions of AI system safety and as integral to the development process from the beginning – effectively a secure-by-design approach. The guidelines officially launched at an NCSC-hosted event attended by over 100 industry, government, and international partners to discuss securing artificial intelligence, including panel discussions featuring the UK, US, Canadian, and German cyber security agencies.

The UK-designed guidelines are available on the NCSC website, together with commentary from the NCSC officials who worked on them. They set out recommended behaviours for improving security throughout the AI development lifecycle.

Immediate Focus

The immediate focus for those involved is the risk of retrofitting security into enterprise AI over the next few years, stressing that security must be addressed at the beginning stages – not as a late-stage addition.

The guidelines represent a worldwide, multi-stakeholder effort to address these issues, strengthening the position established at the UK Government’s AI Safety Summit on sustained international cooperation against AI risk.

Speak With AI Experts At Our Next Event

This is a significant development in enterprise AI, and one sure to be addressed at the UK’s premier Enterprise AI & Big Data event on November 6th, 2024, in London. Presented by Whitehall Media, the event will feature key guest speakers on the technologies and practices behind the safe implementation of AI in business.

For more details on our featured speakers and event lineup, check out our Whitehall Media events page for updated information.