Garry Kasparov and the New Code Generation
Man vs. Machine in the Era of AI and Secure Coding
The match between Garry Kasparov and IBM’s Deep Blue in 1997 captured the world’s imagination as the ultimate “man versus machine” showdown. It wasn’t just about chess. It symbolised humanity grappling with the rise of artificial intelligence.
Kasparov’s initial victory over Deep Blue in 1996 highlighted the enduring dominance of human intuition and strategy over machine calculation. However, his loss in the rematch a year later marked a turning point, demonstrating the rapid evolution of machine capabilities.
This shift symbolised the potential of machines to not only challenge but also surpass even the brightest human minds in specific contexts. In the broader narrative, it underscores how humans and machines, while distinct in strengths, are increasingly intertwined. It’s a theme that resonates strongly in the modern era of AI-assisted software development and secure coding.
Fast forward to today, and the narrative of “man versus machine” has transformed. We are no longer pitting individuals against algorithms in isolated competitions; instead, we’re entering an era where humans and AI must collaborate, particularly in critical areas like software development and cybersecurity.
However, while organisations continue to invest heavily in these once-in-a-generation technologies, investment in the developers who use and manage them hasn’t kept pace.
This imbalance has historically been difficult to address, but today, new platforms and solutions make it easier than ever to equip people with the skills needed to keep up with rapid technological advancements that will ultimately keep our businesses safe and secure.
The State of Developers in the Modern Era
Developers are the architects of our digital world, and their role has never been more pivotal or complex. As organisations increasingly adopt AI-assisted tools like GitHub Copilot, ServiceNow’s Now Assist¹ and Atlassian’s Rovo², the focus is shifting toward augmenting human capabilities rather than replacing them.
However, this shift brings a new set of challenges. While tools like these promise greater efficiency and better support for development teams, the trustworthiness of the underlying AI models, in terms of both security and reliability, should still be questioned.
Based on a few conversations I've had, Anthropic's Claude and OpenAI's ChatGPT are often regarded as the most dependable models from a security perspective. This implies that other models can pose significantly higher risks, depending on the underlying development languages being used. What it doesn’t change is the importance of thoughtful selection and implementation of AI tools in development workflows.
For organisations, the key challenges in developing secure code are relatively straightforward to identify. One major issue is the lack of visibility into their developers' actual skill levels. While they may know which programming languages their teams are using, they often don’t have a clear understanding of whether those developers have been properly trained or possess the necessary expertise in those languages, such as specific accreditations or certifications that validate their proficiency.
Secondly, boards and executive leadership teams consistently rank cybersecurity as a top priority, yet there is often a disconnect at the operational level. Secure coding practices, which are critical to preventing vulnerabilities, frequently lag behind because this level of detail is rarely addressed strategically, and CISOs may not realise the extent of these gaps until insecure practices surface in code commits. Without adequate training and tools, developers are left to navigate these challenges on their own, increasing the risk of vulnerabilities.
Finally, the wide range of programming languages and frameworks commonly used by development teams significantly complicates efforts to maintain secure coding practices. With potentially dozens of languages in use within a single organisation, standardising training, enforcing security protocols, and ensuring consistent expertise across teams becomes an increasingly complex task. Without closing these gaps in developer skills and secure coding practices, every language or framework in use becomes a potential threat vector. Each instance of inadequate training or oversight creates an opportunity for vulnerabilities to emerge, broadening the organisation’s attack surface.
A Fundamental IT Management Challenge
These skills gaps are not a new problem, nor should they be overshadowed by the excitement around AI and other emerging technologies. They stem from a foundational IT management principle that has been a cornerstone of frameworks like ITIL for decades: Workforce and Capacity Management. Addressing these gaps has always been critical, but the stakes are even higher in today’s rapidly evolving technological landscape.
Workforce and Capacity Management is an essential part of service design. It ensures that employees have the skills to meet both current and future demands. Yet, in many organisations, this critical ITIL principle is overlooked, particularly within IT departments, where skill gaps often go unnoticed the further you move from the boardroom.
Leadership teams frequently assume their technical staff are fully equipped, but this assumption is rarely backed by the necessary investment in understanding or addressing skill deficiencies. Instead of focusing on training and development, organisations tend to hire for talent and leave existing gaps unaddressed, creating a significant risk for both operational efficiency and security.
The Skills Framework for the Information Age (SFIA) offers a structured approach to assessing and developing skills, but it’s under-utilised at an operational level.
If we were to strip it all back, insecure code, regulatory risks, and the absence of secure-by-design practices simply stem from a failure to align developer skills with organisational needs.
Secure Code Warrior: A New Paradigm
Enter Secure Code Warrior (SCW), a platform company seeking to redefine developer training and risk management. Historically viewed as a developer training tool, Secure Code Warrior has evolved into a comprehensive solution for securing development practices at scale. Whether you are an organisation with your own large DevOps team or one with a significant outsourced or offshore capability, this kind of platform presents two clear cost-benefit opportunities for developer-reliant organisations:
The first is that, unlike traditional security training, the platform focuses on role-based and context-specific training. Developers learn to embed security into their work from the outset, an approach known as Secure by Design.
The second is that the platform enables organisations to shift the risk conversation from “which product is at risk?” to “which developer is at risk?” By using trust scores and integration into Git repositories, organisations can gain insights into developer skills and repository security, and address problems literally at the source (i.e. the developer’s fingertips).
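To make the idea of catching issues “at the source” concrete, here is a minimal, hypothetical sketch of a Git pre-commit hook, written in Python, that blocks a commit when staged files contain string-built SQL. It is purely illustrative and is not Secure Code Warrior’s actual integration; the pattern-matching is deliberately naive.

```python
#!/usr/bin/env python3
"""Hypothetical pre-commit hook: flag string-built SQL in staged Python files.

Illustrative only; not Secure Code Warrior's integration. It simply shows
what a "shift the check to the developer's fingertips" control can look like.
"""
import re
import subprocess
import sys

# Naive pattern: an f-string or %-formatted string starting with a SQL verb.
RISKY_SQL = re.compile(r"""f?["'](SELECT|INSERT|UPDATE|DELETE)\b.*(\{|%s)""",
                       re.IGNORECASE)

def staged_python_files() -> list[str]:
    """Return the .py files staged for the current commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]

def main() -> int:
    findings = []
    for path in staged_python_files():
        with open(path, encoding="utf-8", errors="ignore") as fh:
            for lineno, line in enumerate(fh, start=1):
                if RISKY_SQL.search(line):
                    findings.append(f"{path}:{lineno}: possible string-built SQL")
    if findings:
        print("\n".join(findings))
        print("Blocking commit: use parameterised queries instead.")
        return 1  # non-zero exit aborts the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Saved as .git/hooks/pre-commit and made executable, a check like this runs on every commit, which is the same point in the workflow where repository-level integrations aim to surface developer-level risk before code ever leaves a developer’s machine.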
A third significant cost-benefit opportunity for “end-user” client organisations lies in the indirect systems integrator (SI) sector.
Global Scale Body Shops
The SI sector operates differently from traditional outsourcing contracts. SIs typically focus on delivering large-scale, custom-coded projects that require significant programming efforts. They are big projects that may or may not extend into managed services relationships. These projects often involve vast pools of developers, many of whom the client never directly interacts with. Instead, the client only sees the outputs of their work at various development milestones.
When bidding for work, SIs often showcase “resource pools” with expertise in specific technologies. However, clients typically lack visibility into the individual developers or teams responsible for the work, either because these resources are based in offshore centres or because clients simply don’t ask. As a result, the quality of the final product has traditionally been the sole measure of success, while the skills and performance of those building it remain largely unexamined. Until something goes awry.
This lack of visibility creates a significant gap in the SI business model. Clients are unable to assess the skills or readiness of the developers involved at the point of contracting, leaving them reliant on the SI’s assurances about resource capabilities³. Overcoming this challenge would offer tremendous value to clients, enabling them to make more informed decisions and ensure higher-quality outcomes.
Platforms like SCW now make this possible. By providing insights into developer skills, coding practices, and even trust scores, SCW would allow clients to evaluate the capabilities of individual developers or teams within a prospective SI before work even begins.
This shift would transform how clients and SIs approach project resourcing, offering unprecedented transparency, enabling higher confidence in project delivery, and creating significant points of differentiation amongst the SIs themselves.
The Role of AI and the Future of Development
AI-assisted development tools are reshaping the industry, but they come with caveats. Large language models (LLMs) like OpenAI’s GPT and Anthropic’s Claude are revolutionising secure coding, but they are only as reliable as their training data. Languages with comparatively sparse representation across the internet, such as COBOL or Salesforce’s Apex, provide less feedstock for the LLMs and therefore remain higher-risk areas.
Despite these challenges, the opportunities are immense. In five years, AI tools may well eradicate entire classes of vulnerabilities, such as SQL injection. That is a very good thing. Developers’ roles should also start to shift from writing code to architecting secure, scalable systems. That is a very good thing. And secure-by-design should no longer be a niche practice but a baseline expectation.
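To illustrate the class of vulnerability in question, here is a minimal sketch using Python’s built-in sqlite3 module (the table, column, and input values are purely illustrative), showing the string-concatenation pattern that creates a SQL injection risk alongside the parameterised query that removes it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced directly into the SQL string,
# so the attacker's quote characters rewrite the query logic.
unsafe_query = f"SELECT id FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe_query).fetchall())            # returns every row

# Secure by design: a parameterised query treats the input purely
# as data, so the injection attempt matches nothing.
safe_query = "SELECT id FROM users WHERE name = ?"
print(conn.execute(safe_query, (user_input,)).fetchall())  # returns []
```

Parameterised queries are exactly the kind of secure-by-design habit that role-based training aims to make second nature, and the kind of fix AI coding assistants are increasingly able to suggest automatically.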
Kasparov’s experience against Deep Blue taught us an important lesson: machines excel at specific tasks, but humans bring strategic thinking and adaptability to the table. In the age of AI and secure coding, this dynamic remains true.
Developers and AI must coexist, leveraging each other’s strengths while maintaining rigorous checks and balances.
The future isn’t about taking sides in the man-versus-machine debate or rushing toward quick, yet risk-prone solutions. Instead, it’s about fostering a thoughtful collaboration between human expertise and machine capabilities to achieve secure and sustainable outcomes. It’s about creating a symbiotic relationship where developers are empowered to innovate securely and AI serves as an enabler, not a replacement.
By embracing platforms like Secure Code Warrior and focusing on the fundamentals of IT management, organisations can turn the challenge of secure development back into the transformative opportunity it has always been.
1. ServiceNow's Now Assist for Code is integrated directly into the ServiceNow script editor and provides real-time, context-aware code suggestions based on natural language prompts. It can generate code snippets, optimise existing code, and offer recommendations tailored to the specific context within the ServiceNow environment.
2. Rovo is designed to enhance team productivity within the DevOps environment, with additional support for developers through integration with AI coding assistants like Codeium. Codeium can be embedded directly into developers' IDEs and connected to Bitbucket repositories, providing contextual assistance, AI-generated code suggestions, and chat support. This integration is intended to boost coding efficiency and streamline workflows within the Atlassian ecosystem.
3. This issue highlights a fundamental flaw in most labour-hire contractual models, which often rely on third parties providing personnel to address client challenges without sufficient emphasis on expertise. While Independent Software Vendors (ISVs) and PaaS providers attempt to mitigate this by implementing their own badge, certification, and accreditation programs, these assurances are seldom requested or reviewed by clients, leaving a critical gap in verifying the quality and suitability of the talent provided.