The CrowdStrike incident, the widespread IT outage last July that caused millions of Windows systems to crash, can be likened to hiring a security team to protect your house only for them to accidentally set it on fire, said NASSCOM president Rajesh Nambiar.
Speaking at a panel discussion at an event hosted by India Global Forum in the UAE, Nambiar reflected on the incident, which triggered the infamous "blue screen of death" and disrupted businesses, governments, and daily life across the globe. Key sectors, including aviation, healthcare, banking, and retail, were severely affected. Nambiar emphasised that the outage highlighted the critical need to strike a balance between speed and quality in IT deployments.
Nambiar noted that artificial intelligence has a crucial role to play in minimising issues in Continuous Integration and Continuous Deployment (CI/CD) cycles, in which product vendors continuously push updates into production. "However, achieving a balance between speed and quality remains essential," he said. "Security and stability are equally important, and systems must be resilient enough to recover quickly when things go wrong. This concept of cyber resilience—building enough redundancies into systems—is vital."
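One common way vendors gate continuous deployments against exactly this kind of failure is a staged (canary) rollout: an update reaches a small slice of the fleet first, and an automated health check decides whether to widen the wave or roll back. The sketch below is a minimal illustration in Python; the host names, the error-rate probe, and the thresholds are hypothetical placeholders, not a description of any particular vendor's pipeline.

```python
import random
import time


def error_rate(hosts):
    """Hypothetical health probe: fraction of hosts reporting a failed check."""
    if not hosts:
        return 0.0
    return sum(random.random() < 0.01 for _ in hosts) / len(hosts)


def rollback(hosts):
    """Revert every host that received the update to the last known-good version."""
    print(f"rolling back {len(hosts)} hosts")


def staged_rollout(hosts, stages=(0.01, 0.10, 0.50, 1.00), max_error_rate=0.02):
    """Push an update in widening waves, rolling back if a wave degrades."""
    deployed = []
    for fraction in stages:
        wave = hosts[len(deployed):int(len(hosts) * fraction)]
        deployed.extend(wave)                      # deploy the next slice of the fleet
        time.sleep(0.1)                            # stand-in for a real bake period
        if error_rate(deployed) > max_error_rate:  # quality gate on the partial fleet
            rollback(deployed)
            return False
    return True


if __name__ == "__main__":
    fleet = [f"host-{i}" for i in range(1000)]
    print("rollout succeeded" if staged_rollout(fleet) else "rollout halted")
```

The gate embodies the balance Nambiar describes: a bad update is contained to a fraction of machines rather than the entire installed base, trading a little deployment speed for a lot of stability.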
He observed that there is a tendency to minimise redundancies and maximise performance. "But redundancy, though often undervalued, is crucial for resilience," Nambiar said. He pointed out that the COVID-19 pandemic highlighted the need to prioritise resilience alongside efficiency.
"Don’t put all your eggs in one basket. Systems must incorporate redundancies to ensure stability and rapid recovery during disruptions. Resilience, stability, and efficiency must all be equally prioritised to create robust systems,” he emphasized.
Nambiar also noted that cybersecurity for AI and AI for cybersecurity are fundamentally different but equally critical areas. AI systems inherently deal with sensitive enterprise data, often combined with large language models (LLMs) and other advanced technologies, making them prone to risks such as data leaks and security breaches. The large-scale deployment of AI further amplifies these challenges, necessitating robust cybersecurity measures.
Generative AI introduces additional complexities. "When training LLMs, there’s significant potential for biases to creep in, and during inference, these systems may exhibit hallucinations," Nambiar explained. Ensuring security for AI systems requires addressing these vulnerabilities during their development, not as an afterthought. Implementing strong AI governance models is key to building secure, reliable systems that deliver value without compromising safety.
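What an AI governance layer looks like in practice varies widely; one narrow, illustrative slice is screening generated output against policy rules before it reaches a user. The Python sketch below is a toy example under that assumption only; the rule names, patterns, and blocking behaviour are placeholders and not a reference to any specific framework Nambiar endorses.

```python
import re

# Illustrative policy patterns only; a real governance layer would cover far more
# (bias audits, grounding checks, jailbreak detection, human-review escalation).
POLICY_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def review_output(generated_text):
    """Screen model output against simple policy rules before it reaches a user."""
    violations = [name for name, pattern in POLICY_PATTERNS.items()
                  if pattern.search(generated_text)]
    if violations:
        # Block rather than silently redact, so the incident can be logged and reviewed.
        return {"allowed": False, "violations": violations}
    return {"allowed": True, "violations": []}


print(review_output("Contact me at jane.doe@example.com for the report."))
```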
On the flip side, AI is becoming indispensable for cybersecurity, said Nambiar, noting that the volume of data in modern cybersecurity exceeds what traditional IT systems or human oversight can handle. "One thing that people are concerned about is the ability for you to have very seamless cyber security management, whether it is identity management, whether it is access management, whether it is in terms of having the cyber security on the data," he said.
"All of that is better managed through an AI system rather than human beings or the traditional IT system because of the volume of data. The ability for you to detect an incident, the ability for you to process such a volume of data—it's only possible with AI. And hence, AI is much more important for cyber security than anything else."