NEW DELHI : India is home to the second-largest developer base on GitHub, the world’s largest code storage, hosting and sharing platform. Data shared exclusively with Mint by GitHub revealed that India has over 11.4 million individual developers on the platform, while over 440,000 Indian companies also host and share their code through it. All this has contributed to the creation of almost 30 million code repositories on GitHub by Indian users. In an interview, Mike Hanley, Chief Security Officer and SVP of Engineering at GitHub, explained why, despite these numbers and the advent of generative AI, the cybersecurity talent shortage remains acute in India and around the world, what is being done to combat the rise in cyberattacks, and why training professionals is not the only way to address the talent shortage. Edited excerpts:
What is GitHub doing in terms of contributing to the cybersecurity developer community?
We try to ensure that developers around the world, with a focus on open source, achieve better security outcomes with us. To do this, we provide them with educational resources and free security training. Our Security Lab spends a lot of time finding vulnerabilities in open source software and then working with the communities that create that software to improve it or fix bugs. We are also closely associated with the Open Source Security Foundation (OpenSSF).
Beyond that, we are improving the security of our own platform: we will require everyone who contributes code on GitHub to use two-factor authentication, a step that raises the security of the overall ecosystem.
Despite these contributions, major geographies such as India and the United States are experiencing significant cybersecurity talent shortages. Why is this so?
India has a huge developer community, the largest for GitHub outside the US, with 11.4 million developers. However, the vast majority of them are not security experts. This talent shortage poses a major challenge because these developers create open source software, and we depend on it for everything from smartphones to cars and even smart coffee makers.
Through GitHub, we’re trying to improve this by giving developers the right security experiences and equipping them with the right educational resources and sponsorships, whether those resources are built on our own work or on someone else’s. On the tooling side, we seek to ensure that GitHub’s products are designed so that developers benefit from good security standards without being security experts. Our developer products have advanced features like security code analysis, which mean a developer does not need to be a security expert to write secure code. We’re trying to design this for every developer who interacts with these products. But there is nothing we can do to magically generate interest or attract more people to cybersecurity.
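To make that concrete, here is a minimal sketch of the class of issue automated security code analysis catches. The database, table and function names are invented for illustration; this is not GitHub’s own tooling:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern a code scanner flags: user input formatted into the SQL
    # text allows injection (e.g. username = "x' OR '1'='1").
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The suggested fix: a parameterized query keeps data out of the SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchone()
```

A tool that surfaces the first pattern and points to the second as the fix is the kind of feedback a developer can act on without any security background.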
Are Indian developers interested in code security?
India is a huge market and the developer talent there is exceptional. There are also exceptional security practitioners across India. The level of interest among Indian developers, and the shortfall of cybersecurity talent relative to demand, are broadly similar to what we see in the United States. The gap is structural: the cybersecurity landscape is very dynamic, and significant challenges multiply over time.
What India and other countries need to recognize is that we are unlikely to solve the talent shortage through training alone. This is where AI, public-private partnerships and bodies such as the OpenSSF can help. Broader solutions that draw on public- and private-sector resources to address some of these challenges will be important. We are not going to get out of a talent shortage of this magnitude through training.
So, will AI be the answer?
I believe AI will bring a fundamental transformation in preventing software vulnerabilities in code. On the talent shortage, the situation is difficult mainly because, most of the time, we don’t have enough people to find the bugs. The vast majority of software defects are written and then persist for years before we encounter them. For example, the flaw behind Log4j, one of the most notorious cybersecurity incidents in recent times, sat in the code for years before it hit almost two years ago.
If the problem is not having enough people to find and fix vulnerabilities, then AI is going to help us prevent vulnerabilities from being written in the first place, which marks a massive shift. Typically, developers receive security feedback only at some point after building an app. With AI, we’re talking about security feedback at the moment the code is written. This is a massive change.
Breaking things down within AI itself, what impact do you think generative AI will have on cybersecurity?
In February this year, we introduced a capability in Copilot’s underlying AI models that emulates static analysis tools, a fairly traditional class of security tooling that any developer would have. Because we can emulate those tools, we can identify vulnerable code patterns and improve the code suggestions over time. This helps developers stay focused on their core work without needing to be security experts. This is how generative AI will help with cybersecurity.
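GitHub has said this kind of filtering targets patterns such as hard-coded credentials, SQL injection and path injection. Below is a hedged sketch of the hard-coded-credentials case; the variable names and value are invented, and this illustrates the pattern rather than Copilot’s internals:

```python
import hashlib
import hmac
import os

# Insecure suggestion a filter would screen out: a secret embedded in
# source code, where it leaks into version control and every clone.
# API_KEY = "sk-live-1234abcd"  # invented value, shown for contrast

# Safer completion: read the secret from the environment at runtime.
API_KEY = os.environ["PAYMENTS_API_KEY"]  # variable name is illustrative

def sign_request(payload: bytes) -> str:
    # Authenticate the payload with HMAC-SHA256 using the runtime secret.
    return hmac.new(API_KEY.encode(), payload, hashlib.sha256).hexdigest()
```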
But can all this also be used by attackers to enhance their capabilities?
It’s a good question. We have a Malware and Exploits Policy on GitHub, in which we recognize that many security research tools can be dual-use. In fact, security professionals will tell you that offensive toolkits can actually make you a better defender and, in many cases, are used to help train or simulate cyber defenses.
The challenge is that you can’t necessarily infer intent from a piece of code alone; intent depends on the user. Obviously, our policy does not allow code to be used to facilitate an attack, but we understand that much code may be dual-use.
As for the generative AI platform, Copilot is doing a better job of filtering out code suggestions that aren’t secure, even though it’s still in its early stages. We’re constantly improving the quality of suggestions, but it’s important to remember that models are trained on code written by humans. While the AI’s code suggestions are better than what you get from an average developer, it’s still being trained on code that has bugs because humans write bugs literally for a living.
These are things we will continue to work on over time. As for whether someone could actually use it to write malicious code, that’s where AI security concerns come in. To address this issue, we’re working closely with Microsoft and OpenAI to determine what the right safeguards are.