By J Connolly
Though machine learning (or AI) is the technology sector's latest craze, many of its real-world applications remain far off. In IAM, however, there already exists a clear and achievable way for machine learning to be implemented: analysing user behaviour to protect organisations from insider threats.
Previous solutions in the IAM sphere have generally focused on controlling access to an enterprise’s systems. They have ensured that employees have strong passwords, can pass multiple forms of authentication and only need one identity for federated systems. By doing this, businesses have been able to strengthen the gateways into their systems – making sure malicious attackers are kept out.
There has been a recent realisation, however, that the greatest threat to an organisation's security has often already been allowed inside. Defined as 'insider threat', and brought to light by high-profile breaches by employees who have chosen to leak data, such as Edward Snowden, malicious actions by current employees have been shown to have devastating impacts. In a recent survey by Kaspersky Lab, 29 percent of all businesses interviewed reported disclosures by insiders as their largest source of lost data.
Therefore IAM analytics, which examines existing data and logs to identify suspicious behaviour and flag it to system administrators, has been growing in importance and utility. The ability to monitor user behaviour such as geo-location, applications used and data consumed allows identity managers to keep an eye on insider threats and on situations where users have more access to information than they need.
The task of identifying these behaviours risks becoming gargantuan, though, as enterprises expand, and the sheer number of devices connected to a network, and the contexts in which they are used, have multiplied. Existing policies already monitor user behaviour, but human behaviour is notoriously tricky to capture in static rules: they must account for some employees who travel the world frequently and access sensitive files, and others who rarely leave the office and access few files on the system. Machine learning can arguably solve this by analysing data in real time to build predictive models that identify risky behaviour far more effectively than static rules or human judgement could.
Application of machine learning in analytics
Machine learning, at its most basic, works by analysing large amounts of existing data to identify a pattern, from which it builds a model to interpret future inputs. In the classic example, a machine learning algorithm is fed multiple images of a car and is then able to identify a car in new pictures provided.
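The "learn a pattern, then classify new inputs" idea can be sketched with a toy nearest-centroid classifier. This is a minimal illustration, not any vendor's implementation; the two features and their values are hypothetical.

```python
# Toy nearest-centroid classifier: "learn" one mean feature vector per
# label from example data, then label new inputs by closest centroid.
# Features and values below are hypothetical, for illustration only.
from statistics import mean

def fit(examples):
    """Learn one centroid (mean feature vector) per label."""
    return {
        label: [mean(col) for col in zip(*vectors)]
        for label, vectors in examples.items()
    }

def predict(centroids, vector):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, vector))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy training data: [files accessed per day, logins outside office hours]
training = {
    "normal":     [[10, 0], [12, 1], [8, 0]],
    "suspicious": [[300, 9], [250, 12], [400, 7]],
}
model = fit(training)
print(predict(model, [11, 1]))    # → normal
print(predict(model, [280, 10]))  # → suspicious
```

A real system would use far richer features and a proper statistical model, but the shape is the same: fit on historical examples, then score fresh events.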
In IAM, machine learning could process the large amount of data already sitting in user logs to build a model of what different users in different roles should do, and where they should be doing it from. It would then be able to flag unusual behaviours in future and act accordingly. While it is hard for administrators to create and adjust rules for every employee in a large enterprise, machine learning could create 'living' security policies that draw on HR data to constantly reflect best security practice.
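A simplified sketch of the log-mining step might look like the following. The log schema, the per-user baseline of countries and login hours, and the deviation checks are all assumptions for illustration; a production system would fit statistical models rather than simple set membership.

```python
# Hedged sketch: build a per-user baseline from historical login logs,
# then flag logins that deviate from it. The log format and the checks
# are illustrative assumptions, not a real IAM product's schema.
from collections import defaultdict

def build_baseline(logs):
    """Baseline = countries and login hours seen for each user."""
    baseline = defaultdict(lambda: {"countries": set(), "hours": set()})
    for entry in logs:
        profile = baseline[entry["user"]]
        profile["countries"].add(entry["country"])
        profile["hours"].add(entry["hour"])
    return baseline

def flag(baseline, entry):
    """Return the ways this login deviates from the user's history."""
    profile = baseline.get(entry["user"])
    if profile is None:
        return ["unknown user"]
    reasons = []
    if entry["country"] not in profile["countries"]:
        reasons.append("new country: " + entry["country"])
    if entry["hour"] not in profile["hours"]:
        reasons.append("unusual hour: %d" % entry["hour"])
    return reasons

history = [
    {"user": "alice", "country": "UK", "hour": 9},
    {"user": "alice", "country": "UK", "hour": 10},
    {"user": "alice", "country": "FR", "hour": 9},
]
profiles = build_baseline(history)
print(flag(profiles, {"user": "alice", "country": "UK", "hour": 9}))  # → []
print(flag(profiles, {"user": "alice", "country": "CN", "hour": 3}))  # → two reasons
```

The 'living policy' idea corresponds to rebuilding the baseline continuously as new logs and HR data arrive, rather than an administrator hand-editing rules.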
Machine learning could also improve the automation of controlling unused access and excessive privilege, which would ideally reduce the possible harm when malicious attackers do gain access. With privileged users a frequent concern because of their access to sensitive material, a model that curtails this access when it is not needed could dramatically reduce the harm caused. It would also be useful where malware has obtained identity details and is operating unchallenged on the system.
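The curtailing of unused access could be as simple as a periodic sweep over privilege grants. The field names and the 90-day grace period below are illustrative assumptions; the point is only that "last used" data already in access logs is enough to drive automatic revocation.

```python
# Sketch of automatically curtailing unused access: privileges not
# exercised within a grace period are marked for revocation. The field
# names and the 90-day window are illustrative assumptions.
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(days=90)

def stale_privileges(grants, now):
    """Return (user, privilege) pairs unused for longer than GRACE_PERIOD."""
    return [
        (g["user"], g["privilege"])
        for g in grants
        if now - g["last_used"] > GRACE_PERIOD
    ]

now = datetime(2018, 6, 1)
grants = [
    {"user": "bob", "privilege": "db_admin", "last_used": datetime(2018, 1, 5)},
    {"user": "eve", "privilege": "payroll",  "last_used": datetime(2018, 5, 20)},
]
print(stale_privileges(grants, now))  # → [('bob', 'db_admin')]
```

A learned model would go further, tightening or relaxing the window per role and per user rather than applying one fixed threshold.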
One of the main selling points of implementing machine learning in IAM solutions is its ability to harness far greater amounts of data than traditional systems. As Alex Simons at Microsoft's Identity Division has highlighted, identity systems will be able to constantly mine data about users not only to authenticate them, but to build a profile of their behaviour, perhaps even based on keystrokes and working habits. Though it may seem invasive, the potential of these systems to reduce insider threats could be dramatic.
Barriers to entry
The difficulties in implementing machine learning in analytics will be similar to the issues current IAM projects face. Enterprises are becoming more aware of the necessity of secure identity systems and the importance of staying ahead of cyber threats, but as IAM does not directly generate revenue, the costs of implementation can often mean projects are side-lined. Machine learning itself is also some time away from full adoption in the enterprise. But as the rapid progress of machine learning algorithms in areas such as vehicle automation has shown, leaps forward can be made in fields more complicated than IAM analysis in short periods of time. Indeed, 75 percent of executives queried in an Economist Intelligence Unit survey say they expect machine learning to be in place in their organisations within three years.
Other risks of AI in identity remain more hypothetical but equally important. Humans struggle to understand how machine-learned models reach their conclusions once sufficient data has been absorbed by their algorithms. Are we happy to leave the important task of verifying employees and granting access to a model that may be biased against certain employees? Similarly, if certain employees find they are unfairly discriminated against by the algorithm, it may not be possible to correct this behaviour, creating frustration for those users.
Though machine learning in IAM is still some way off, trends in other areas and the vast amount of funding channelled into AI suggest it could become reality sooner than we expect. The IAM sector should be open to the benefits it could bring while remaining wary of the potential risks.