AI is not a security cure-all, says CEO of Identity Automation

With the start of the 2020s, many security industry experts are reflecting on the technologies that emerged over the past decade and looking ahead to what is in store. Artificial Intelligence (AI) is a buzzword that has infiltrated everyday nomenclature throughout the past decade. Every industry, from healthcare to banking to security, has implemented some form of AI touted as the hidden key to maximising productivity and/or security. However, AI technology is still in its infancy and is not the panacea that many cybersecurity experts claim.

Today, AI is a nascent technology with limited practical applications, because it is still difficult to understand the rationale machine learning algorithms use to make their decisions. As a result of this limited understanding, these technologies are only leveraged for pinpointed functionality. For example, AI can be designed to analyse data for specific threats, such as malware, but AI is only as good as the data it analyses and cannot be fully trusted to discover new threats on its own. Furthermore, even when AI detects a threat, humans are still needed to confirm that a real risk is present.

AI undoubtedly has its place in today’s security space. Government agencies and the military use AI to comb through hundreds of hours of call data to try to isolate terrorist or criminal activity. In its current state, AI can successfully review large amounts of data and automate repetitive tasks. The results are helpful as additional data points, but at this time it makes more sense to use AI as a helper technology rather than relying on it to make decisions for organisations.

Furthermore, AI can be utilised in access management to watch the habits of individuals and identify actions that stray outside the norm. For example, if a user logs into a system outside normal working hours, AI can identify this anomaly and utilise step-up authentication to further validate that the user should be granted access. AI can also be leveraged for access certification campaigns by providing scoring that helps approvers prioritise their efforts on certifications with low scores. However, algorithms should not be trusted to make higher-stakes decisions. This is because AI cannot currently be taught or programmed with the intuition that humans naturally possess, making the risk of a breach or other security threat considerably higher.
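The login-hour example above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual product logic: the function names, the fixed two-standard-deviation threshold, and the use of a simple statistical baseline in place of a learned model are all assumptions made for the sketch.

```python
import statistics

def anomaly_score(login_hour, history):
    """Score how far a login hour deviates from the user's history,
    in standard deviations (a naive stand-in for a learned model).
    Ignores midnight wrap-around for simplicity."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(login_hour - mean) / stdev

def requires_step_up(login_hour, history, threshold=2.0):
    """Trigger step-up authentication when a login looks anomalous
    relative to the user's established pattern."""
    return anomaly_score(login_hour, history) > threshold

# A user who normally logs in between 09:00 and 11:00
history = [9, 10, 9, 11, 10, 9, 10, 11]

print(requires_step_up(10, history))  # in-pattern login -> False
print(requires_step_up(2, history))   # 2 a.m. login -> True
```

Note that the anomaly only triggers additional verification; consistent with the article's point, the decision to deny access outright would still rest with policy set by humans.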

In the next decade, AI will continue to grow from its infancy into a more useful and robust tool that companies can utilise to keep their assets and people safe. There will be a point at which AI matures enough to truly think and learn on its own; computing power has grown exponentially over the last decade and will only continue to grow in the next. This increase in computing power opens up a limitless number of possibilities for AI usage, especially as humans perfect and refine AI’s algorithms.

For now, AI has a place in today’s security industry and has already proven adept at identifying threats and making society safer. While this is helpful, humans remain a critical factor in evaluating threats. I believe that the future of AI is bright, and I fully expect that our capabilities around explainable AI will rapidly advance, providing many opportunities to leverage these technologies in a fully autonomous capacity in the not-too-distant future.