[00:00:00]
[00:00:06] Pratik Roychowdhury: Hello and welcome to ProdSec Decoded where we break down complex product security topics into actionable insights. I’m Pratik Roychowdhury, and today we are doing something special. We will do a year-end review of 2025 and we will be speaking with our very own Chiradeep Vittal, who is the CTO at AppAxon and the co-host of this podcast. Chiradeep has been in the trenches of distributed systems and security for over 25 years from building cloud infrastructure Cloud.com and Citrix, to leading engineering and security at Lendistry and now helping companies navigate the AI product security landscape at AppAxon. In today’s episode, as I said, we will do a year in review and we’ll talk about the trends in product security that shaped 2025.
We will also get Chiradeep’s take on what are the expected trends and predictions for 2026. Alright, so let’s jump in.
So [00:01:00] Chiradeep, welcome back to the show, or should I say Welcome to your own show.
[00:01:03] Chiradeep Vittal: Yeah, thanks Pratik. It’s a bit meta doing a year-end review on our own podcast, but honestly, 2025 was really impactful, too significant, not to unpack. We’ve been talking about AI and security all year. But this was the year it all came together or came crashing together, depending on how you look at it.
[00:01:23] Pratik Roychowdhury: So that’s exactly what we’ll be diving in today. We will be calling it the AI trifecta. So there are three forces that we often talk about that come together and collided in 2025 to reshape product security. Maybe, if you don’t mind setting the stage for it, what exactly are we talking about here?
[00:01:41] Chiradeep Vittal: Yeah, sure. The AI Trifecta is this. First, developers are using AI to build and deploy applications at machine speed or much, much faster than they used to. What we are calling vibe coding, maybe that term is getting a makeover right now, but apps that [00:02:00] used to take weeks are, you know, people show that it can be built in hours and but this was leading to, you know, a different class of security problems because too much code, not enough review.
And then that’s leading to a new class of security vulnerabilities. Second, attackers are using AI to breach systems autonomously. I think Anthropic has been pretty vocal about this , demonstrating a bunch of attacks that they have detected. And we, they told us about these AI attacks, which are happening without human oversight in autonomous attacks. And third, from the security industry, obviously we are all responding with AI driven reactive and proactive tools from autonomous SOC to continuous red teaming. So lot of change in this year. And so each force accelerates the other. Faster development creates more vulnerabilities.
More vulnerabilities attract AI powered attacks. AI attacks force AI defenses. And then the cycle [00:03:00] continues.
[00:03:01] Pratik Roychowdhury: I’m assuming you cannot solve this with traditional security approaches. Right. That’s the, that’s the key insight. Maybe we’ll talk about manual security processes like manual code review, manual threat modeling, manual incident response. They can’t keep pace with the speed at which things are happening anymore. So in today’s so-called we, we have been referring to this as the agentic battlefield in some of our blogs during the year, in this agentic battlefield, the speed at which AI development and attacks are happening require proactive security controls, require more autonomous way of thinking.
So, so let’s break down each of these trifecta that you touched upon. Maybe let’s start with the development side, the AI powered development that you’re talking about. You’ve mentioned vibe coding. When I first heard this term, I thought it’s gonna be another Silicon Valley buzzword, which will come and go.
But the term I believe was coined earlier this year, sometime in Feb. [00:04:00] And it took the whole development world by a storm with Cursor , Claude code, etc. becoming mainstream in 2025. So let’s talk a bit more about what you’re seeing in the vibe coding space and also your own experiences. You have been doing a lot of coding yourself, your experiences in AI assisted coding using Cursor and some of the other tools.
[00:04:21] Chiradeep Vittal: Yeah, the term was coined by Andrej Karpathy, and it describes a fundamental shift in how software gets built. And in all on, in, in all fairness to Andrej, he was talking to this this lifecycle where you just describe what you want to the software and then it, the, to the AI, AI builds it. You don’t look at the code and then you test it, you say, “Hey, hey, Mr. AI, there’s something’s wrong here. Can you fix it?” The AI fixes it, you test it, and then, you know, you never ever look or touch the code because the AI is doing everything. And so he, he said that, “I enjoy doing this for, you know, throwaway code”. But [00:05:00] increasingly as the AI models get better and better it’s encroaching into mainstream software development, right?
And so, for good or bad and typically this works better in the hands of more experienced software engineers, the people who are new to software engineering obviously can’t guide the AI in the right manner, but senior engineers are actually having a, a good time using these AI assistants.
And that applies to me too, that I’m able to be much, much more productive, in my job.
[00:05:37] Pratik Roychowdhury: So.
[00:05:37] Chiradeep Vittal: We, we’ve seen some, you know, just a throwaway examples here. There’s some guy made a full flight simulator just vibe coding, and he put it on the app marketplace and he started earning money from it. And so, and of course he didn’t actually touch the code. And so that shows that things can be put into production and make money with so-called vibe coding.
[00:05:59] Pratik Roychowdhury: Wow. So [00:06:00] maybe, maybe talk a little bit more about the security problem. If apps work fine, you’re putting it in production. Why does it matter how it was built? And maybe give some concrete examples. If you can.
[00:06:12] Chiradeep Vittal: So this is the critical insight, right? So these models are trained to make code, but they’re extremely eager to get you working code, not necessarily the most maintainable or the most secure code, right? So even if you prompt it, make it secure there’s just too many choices the AI has to make. And it’s, as I said, it’s prime objective is to get something to work. And so when you vibe code, typically you’re only checking of the application functions. You’re not reviewing line by line or system level or architecture level, you’re not thinking about those issues, and then you’re not also testing the code for security issues. You’re just trying to make sure that the code works like you expected it to work. And I think that’s where these silent vulnerabilities creep in. The app works perfectly, passes your functional tests, but it could be wide [00:07:00] open to attack.
[00:07:03] Pratik Roychowdhury: And maybe if you don’t mind also giving a concrete example of where this was seen, if at all.
[00:07:09] Chiradeep Vittal: Yeah, so, there was a, you know, widespread example. Somebody used Lovable earlier this year to publish an app, and he started charging money for it. And then people found that the AI had checked in his API keys into the code and then they were able to, you know, hack his application.
So, pretty simple example. It was a inexperienced software developer that did it. But Databricks, their AI red team did an even more interesting experiment where they asked the AI to build a multiplayer game. And the AI chose Python’s Pickle module for networking because it is the most direct functional path.
Now even, you know, junior software engineers may not pick up on this, but Pickle is just notoriously vulnerable to remote code execution attacks. And, you know, it’s, it’s, it’s a no-no to [00:08:00] not to use this thing. So, if it had gone through security review, of course it would not have passed security review. And so they were able to show that, you know, once it was built, they were able to hack it and then make the sure that the app was vulnerable.
[00:08:15] Pratik Roychowdhury: So, the app worked perfectly fine. It was vibe coded, but it had an RCE remote code execution vulnerability, which was not caught right as you said, because of the security reviews.
[00:08:28] Chiradeep Vittal: Yeah, and this pattern repeated throughout, you know, 2025. There’s a bunch of these categories of silent killer vulnerabilities. One is the most obvious one. Like I said, hard coded secrets. The AI says, “Hey, I gotta get this to work. This developer has not figured his environment variables correctly. I’m just gonna check in this code to make it work,”. And so the developers also, or the vibe code, is also trying to make, get it to work, and AI says, “can you gimme the API key”? And he gives them the API key, and then [00:09:00] the AI checks it into the repository. So that’s, the most obvious one.
Then there’s, insecure defaults. You’re not checking the inputs, sanitizing the inputs. Maybe passwords are default, you’re not encrypting the password. So basic security hygiene, is not being done. And that security audit of the Lovable platform found that, almost 10% allowed access to PII, because the developers never thought about asking the AI to hide the PII.
And lastly, this is the most critical. I think this is also applies to h uman developed code, but it’s, it’s magnified or intensified by the AI, is logic flaws and vulnerable patterns like privilege escalation, unchecked inputs, unsafe deserialization, missing rate limits. These are things, even senior engineers can overlook. But this is going at 10 x the previous pace. The code works, but it’s exploitable.
[00:09:59] Pratik Roychowdhury: And, [00:10:00] and as you said , we saw this repeated in 2025, so I’m guessing it scaled quite a bit across organizations this year, right.
[00:10:08] Chiradeep Vittal: Yeah over 70% of the organizations reported using AI assisted development by the end of the year. And so, there are vibe coding platforms like Replit and Lovable that people are generating thousands of applications. Some of them are generating dozens of applications a week just by a single developer.
And here’s, and here’s what’s really concerning. Many of these are deployed to production without any security review. And so if I was a senior engineer, or a senior security engineer I’m probably being, you know, asked to fix vulnerable AI code, AI generated code. This is not really fun part of the code. It’s, it’s okay to fix my own code, but to fix somebody else’s code is, is a big pain. So, things are going to need to change, but that’s for 2026, hopefully.[00:11:00]
[00:11:00] Pratik Roychowdhury: So, maybe let’s talk about the solution. Are we saying that, you know, vibe coding, given that it’s inherently very insecure, or AI assisted coding, is it, is it gonna go away or, or are there some changes that might happen?
[00:11:16] Chiradeep Vittal: So Pratik, we have talked to a bunch of product leaders, engineering leaders, security leaders, and they’re all saying that it’s being adopted, right? So it’s, the genie cannot be put back into the bottle. And then it, for good reason, the productivity gains like I’m seeing in my own day-to-day is just too significant. I think we need to fundamentally change how we approach security for AI generated code. And the irony is that with a little bit of care and oversight, aka, prompting, AI can generate code that doesn’t have the usual landmines that we as programmers step into SQL injection, XSS, and so on. But it’s not so easy to get completely secure code in one shot. Just like you [00:12:00] ask to generate a LinkedIn post or a blog post and you’re like, “this is not it and I gotta refine it”, right? So, human coders need to get another pair of eyes on their code. And so also AI is generated code needs another of review. And the good news is that the AI can do this at scale. And if humans are vibe coding, then the machines can be vibe checking as well. And so currently AI review tools are pretty good, but we need to improve them. They’re showing about 46% accuracy in detecting runtime bugs, there’s actually an improvement on traditional SaaS tools. And so we need this approach of building a security right from the get go shift even more left, if you will. Secure prompting so explicitly instructing the AI to include security controls, just like prepared statements in SQL. This needs to become standard practice.
[00:12:54] Pratik Roychowdhury: Basically it’s not about stopping vibe coding. Or vibe coding or AI assisted coding is not going [00:13:00] away anywhere . Like you said, we’ve spoken with a number of product and security leaders, and that’s the general consensus. But it’s more about building all these guardrails, moving security, as you said, more towards the left very, very early on, and ensuring that security is beefed up around these AI assisted coding practices.
Right?
[00:13:20] Chiradeep Vittal: Exactly, and you need, obviously, runtime controls, just like as before. But perhaps they need to be even more stronger and maybe even get the AI to review it and recommend more runtime controls. Attack paths that you may not have thought about, I think the AI can do it at scale and, generate a number of plausible attack paths.
And then you know, you should be able to increase your security based on the AI’s recommendation. But you have to assume that the AI generated code has flaws, and we should design systems that can survive this insecure code.
[00:13:52] Pratik Roychowdhury: Perfect. That actually talks quite a bit about the first part of the trifecta that you were talking about, which is the development side of the [00:14:00] trifecta. So let’s move on to the second part and talk a bit about attackers using AI or bad actors using AI. So while developers were using AI to build faster, the attack landscape transformed in fundamental ways as well.
So maybe walk us through what happened and give us some concrete examples there.
[00:14:19] Chiradeep Vittal: So, yeah. While you know, developers are using AI, obviously the attackers are not sitting by. They’re also using AI to improve their tools, improve their approaches. But then let’s distinguish between two different kinds of attacks, right? One is attackers using AI tools, maybe they’re vibe coding, using AI to orchestrate attacks. And the second part is AI systems that use LLMs in the backend to, let’s say, parse documents or give you answers about, various topics, are themselves susceptible to attacks, new kinds of attacks what we call prompt injection, prompt leaks , and these are [00:15:00] extremely new types of attacks, which both security teams and engineers are not fully well-versed with.
And so, and both are happening in 2025. And, the implication for security is sobering because one is there’s a new kind of attack that they don’t have adequate defenses against. And two, the scale and then the intensity of the attacks is just increasing. And so let’s look at the first one, which is the AI as an autonomous operator.
And so I think in 2024 it was obvious that, you know, the kinds of phishing emails we got were just, the quality was far better. And you could see that the, the people who were using AI to generate these phishing emails. But in 2025,, they started using AI agents not to just generate one-off attacks, but to orchestrate attacks over many days, many, many weeks even. So because these [00:16:00] agents are so good at programming, they understand systems or machines very well, and so they’re able to use tools like SSH, any kind of hacking tools, which are command line based, and attack a system. And, and so they, use whatever regular red teamer would use as a script and they can orchestrate those scripts and attack systems. And so Anthropic told us about the GTG 1002 incident in September. This was attributed a state sponsored group, and it was the first documented large scale cyber attack executed by autonomous agents. And what these. Attackers did was the manipulated Claude Code, which was Anthropic’s coding assistant, and weaponized it. The AI agent, autonomously mapped the attack surfaces for over 30 global targets, tech companies, financial institutions, government agencies, and scanned for exposed services that identified entry points.
And [00:17:00] instead of then, you know, handing it over to humans or, you know, trying well-known exploits, it generated custom code for each target. And if one exploit failed, it would just think about why it failed then try a different path. And then once it got in, it was able to move laterally, you know, get credentials from one machine and use it to attack a different machine. And so this is even like a super, you know, elite level hacker, but at scale. And so the implication is profound, right? So, so the threat actors can deploy these agents persistently probe, and exploit the defenses around the clock. Relying on traditional controls or reactions or incident response is probably not gonna be adequate. The other side of the, the other thing that makes it scale up is that [00:18:00] incompetent, or, you know, people, hackers who are not that great at coding, can code up very sophisticated ransomware attacks and then, you know, bring down systems. So scale is moving both in terms of what an elite hacker can do and what the not so elite hackers can do.
[00:18:22] Pratik Roychowdhury: So we have AI conducting attacks autonomously while AI systems themselves are being attacked. Right. So that’s, as you said, is a two front war. And I guess almost all organizations have been deploying or at least tinkering with AI in 2025. We have spoken with a lot of them. They are starting to build some apps, some are even in production, and so they are more likely to be exposed on these both of these fronts, right?
AI as attacker and AI as victim so to speak. And obviously most were, weren’t prepared for it either, right?
[00:18:55] Chiradeep Vittal: Yeah, yeah,
[00:18:56] Pratik Roychowdhury: So,
[00:18:57] Chiradeep Vittal: yeah.
[00:18:57] Pratik Roychowdhury: so we, yeah.
[00:18:58] Chiradeep Vittal: So the second [00:19:00] category is, of course, the, the attacks against the AI systems themselves. We are not talking about attacks against OpenAI or Anthropic, but you know, everybody’s we talk to has at least CoPilot deployed. It’s a different question whether employees are using it or not, but these systems, which are, you know, assist the, the employees or make them more productive, are vulnerable to what we call prompt injection, right? So unlike SQL injection or XSS, prompt injection exploits, the AI’s inability to distinguish between instructions and data. So the AI can’t tell the difference between what the developer told it to do so, and what the attacker smuggled into the data stream. And as an example, there was the disclosure about the Microsoft CoPilot vulnerability. So the attackers could craft malicious content that CoPilot later consumed - documents, messages, whatever -then the internal system prompt of CoPilot was manipulated. [00:20:00] And so then CoPilot was, didn’t realize it, but it would exfiltrate sensitive data from the tenant, with zero clicks. So the user didn’t do anything wrong, the organization didn’t do anything wrong, it’s just that even Microsoft put in a lot of guardrails, a lot of defenses to make sure that the prompts weren’t hijack able. But but it was possible. So so it’s a, it’s a tough problem and, and, this is a problem that will continue to grow and will have to deal with it.
[00:20:38] Pratik Roychowdhury: And I’m also assuming that maybe enterprises have not even begun to properly measure the cost of AI system compromises. Because many organizations obviously did not realize that they were being attacked this way. I know every year IBM has got this cost of breach report that they come up with.
I, I believe in [00:21:00] 2025 or maybe, yeah, 2025. It was about 4.4 million is the, is the cost of breach? I’m, I’m just waiting for them to update it with all these newer types of potential attacks and AI-based attacks that we’ll come across.
So, we talked about the development side of the trifecta.
We talked about the, the bad actors. Now let’s talk about how did security industry respond to this? So this is about the third part of the trifecta, which is AI powered defense. You, you talked about, you know, a bunch of things Agentic SOC and some of the other items. So security industry is obviously trying to fight fire with fire, so using AI to defend against AI powered threats.
And I, I guess there are three fronts in which this, we have seen some of these solutions. One is the reactive controls, the agentic SOC related stuff. We have also seen evolution of some of the regulatory frameworks. And of course, the most important part of it is moving in a proactive manner [00:22:00] because you cannot do reactive controls and keep pace with this fast you know, AI powered breaches that are, or attacks that are happening.
So maybe let’s start with reactive controls and specifically let’s talk about the agentic SOC Talk to us more about agentic SOC and what we saw in 2025.
[00:22:19] Chiradeep Vittal: Yeah, so you know, traditional security operation centers have been plagued by alert fatigue and skills gap for years. You got tier one analysts drowning in alerts, trying to figure out what’s real and what is noise. And this is a problem even before AI came along, right? And so now with AI power attacks, this is just gonna make this worse.
And so the security vendors and startups are coming out with agentic SOCs. So replace or augment your tier one analysts with AI agents that can autonomously take a look at an alert, verify that it’s, you know, a real alert by, you know, maybe they write a SIEM query, [00:23:00] maybe they go and check a different system for correlation autonomously, or check for, you know, known attacks known patterns autonomously, and then mitigate, at least, you know filter out the, the noise maybe escalate the priority of the, the genuine ones and the promise is maybe even, you know, dynamically apply, go to a tier two analyst level by, you know, mitigating the attack. The jury, I think, is still out there, whether it’s making an impact. But like everything, these tools will mature. The kinks will be ironed out and it should become a, you know, de facto or it should be, you know, just like people deploy EDR today, it’ll just become natural part of the security posture to deploy agentic SOCs.
[00:23:55] Pratik Roychowdhury: And I remember during the early days of AppAxon you used to talk about, you know, you [00:24:00] led security in your previous organization and this alert fatigue was a, was a big deal for you. So you, you were talking about agentic SOC quite a bit in the early days. Maybe, let’s talk about the second front, which is more about regulation. I mean, there are lots of things going on in the, on the regulation side. On one hand you’ve got the EU AI Act, which went into enforcement. On the other hand, just last week itself president Trump signed an executive order seeking to limit states regulation of artificial intelligence . So talk to us about what are the impacts of these two and others if, if at all on, on, on the AI front.
[00:24:38] Chiradeep Vittal: Yeah, basically we have two approaches, right? So the EU AI Act and the US executive order. The EU is taking a really strong stance think of it as a forcing function. They’re outright banning “unacceptable risk” AI like social scoring starting in February, 2025. Plus by August, 2025, any general purpose AI model has to [00:25:00] meet strict requirements, technical docs, providing copyright compliance, and red teaming for safety. It’s a high bar.
The US on the other hand, is trying to keep things flexible. The focus is more on lighter federal oversight to prioritize how fast innovation can come. They want consistency and not super prescriptive rules. And this is, part of it is based on you know, there is competitive pressure from open source models from China, for example, where an open source model does not have any regulatory oversight. It’s just released. Anybody can use the weights. And so as these open source models become better and better, the proprietary models which are burdened by compliance could fall behind. So that’s the fear in the US. So we are seeing a split world. Europe is setting a high risk based compliance standard while the US is emphasizing flexibility. But you know, like most cases, the EU standard is, tends to be globally [00:26:00] applicable. And so everybody’s trying to scramble to meet, meet that standard.
[00:26:05] Pratik Roychowdhury: I guess to say, to kind of summarize it, one forces transparency and accountability and the other is trying to make it more realistic and fostering AI innovation , and keeping pace with AI competition that is coming from China and other places. Right.
[00:26:20] Chiradeep Vittal: Yeah, yeah, yeah. So it’s, yeah, it’s remains to be seen how how many other count, how other countries follow EU often sets the standards, you know. Pollution, social media, copyrights, and so on and so forth, and other regions typically follow. And so here we are seeing something of a split world.
[00:26:40] Pratik Roychowdhury: Got it. So, all right, so we spoke about the regulation. We spoke about the agentic SOC, the reactive controls. Let’s talk a little bit about proactive security. So that seems to be a big story going into 2026, right? So the argument we are making is that reactive security, even with [00:27:00] AI powered SOC or agentic SOC isn’t enough.
Why do we need proactive security? What’s the reason?
[00:27:07] Chiradeep Vittal: Well, that’s the main reason is the speed and scale of these AI driven threats, right? And makes reactive approaches insufficient. By the time you detect and respond to attacks, you know, significant damage could have be done. And so, these agents, autonomous agents can operate 24/7. And it’s not a lone hacker probing here and there, or even a team of hackers.
It’s swarms of agents probing and attacking your system. And then if you have been vibe coding or AI assistants carelessly then that increases your attack surface. And so you need to make sure that your coding standards or your coding controls are much more rigorous. And we have seen a number of attacks against NPM packages and, and so on and so forth this year. And so, those, supply chains are being threatened as well. So you can’t handle this battlefield manually. You need security that’s [00:28:00] baked in from the start, and security that operates at the same speed as that of the threats.
[00:28:05] Pratik Roychowdhury: So, so maybe talk to us a bit about what does proactive security look like in practice in the context of product security, of course. We have been talking a lot about proactive product security. Maybe walk us through how it looks like in practice.
[00:28:20] Chiradeep Vittal: Several things. First thing is, you know, secure by design and secure by default. And so we have, the industry has been talking about ‘shift left’ for a long time. But this is taken to the extremes. This is building security into the products from the start, but not adding it later and shipping products with safe configurations out of the box and not leaving it to, you know customer configuration. And second is automated threat modeling. So traditional threat modeling was too slow for the AI era of development. but we have seen AI powered threat modeling tools emerge. And these tools can, you know, understand complex architectures, [00:29:00] reason about attack vectors, and then cross-reference known attack techniques. And they can provide you exhaustive analysis that would be impractical, manually.
[00:29:09] Pratik Roychowdhury: Yeah. And I guess this is, this looks like, as you said, shifting way left. It’s not just shifting in the development cycle, shifting all the way to the design cycle, so that we can catch any kind of security issues earlier in the design phase itself, right? So, and, and also it’s, it’s about making security as an enabler, not a bottleneck, or a blocker, which is what it is often perceived to be when security checks are fast and automated developers don’t see that as obstacles is, is what we are thinking. Right. So maybe, maybe let’s talk a bit about threat modeling. You talked about automated threat modeling.
There are different kinds of frameworks that are in existence, STRIDE, LINDUN, Pasta, and other frameworks that have been used in a while. But I guess there are also some new frameworks that are coming up, right? So maybe talk to us a bit about these newer [00:30:00] frameworks.
[00:30:01] Chiradeep Vittal: Yeah, STRIDE and PASTA, et cetera, they’re designed for the pre AI era, obviously. AI brings in this non-determinism. When you talk to an LLM, you’re never certain about, you know, their responses. It’s not like any other API call. So this is where new frameworks come in. There’s the MAESTRO framework the ‘modeling agent system threats, risks and operations’ adopted by the Cloud Security Alliance. MAESTRO addresses threats specific to AI agents, things like goal misalignment cascades, where one agent’s misaligned goal spreads through a multi-agent system or agent to agent trust where one agent might socially engineer another. Pretty advanced stuff, but people need to start thinking about this today.
[00:30:47] Pratik Roychowdhury: What I’ll do is I’ll go back to something that you mentioned earlier, which is red teaming, right? So let’s double click on that. And talk to us a bit more about the red teaming and some of the tools that are used today.
[00:30:58] Chiradeep Vittal: Yeah. One, one is [00:31:00] about bringing AI to, traditionally manual effort to scale it up. So AI to red team your applications, whether they’re have AI in them or not. The second is red teaming applications which have AI in them. AI, as we discussed earlier is, is unpredictable. It’s a statistical system and so traditional red teaming does not fit into this, so we need to test it differently. And so the, this GenAI red teaming went into mandatory compliance, partly driven by the EU AI act, and the scope expanded beyond penetration testing to include what’s called responsible AI risk. So bias hallucination, harmful content, and it had to scale. You can’t manually red team this volume of these AI systems being deployed, and so automated red teaming tools became critical.
[00:31:52] Pratik Roychowdhury: So, you talked about secure by design, secure by default, threat modeling, red teaming. So all of these are part of the [00:32:00] proactive security bundle, and all of this is about catching vulnerabilities before attackers find them, right? So you’re essentially simulating attacks to find weaknesses for red teaming.
You’re creating, figuring out what are all the threats early in the design cycle using threat modeling, and you’re doing it continuously at, at scale, using against AI systems, right?
[00:32:23] Chiradeep Vittal: We saw that speed and security are, you know, traditionally at loggerheads. Security is always seen as a speed bump. But they’re not opposites. I think with the judicious application of AI, we can do both. And so organizations that adopt AI based tooling to improve the security will win. And they’ll build resilient systems designed to survive the AI era. And they don’t just react faster, they’ll build systems where security was impossible to skip because it is baked into every layer.
[00:32:59] Pratik Roychowdhury: Fantastic. [00:33:00] So Chiradeep, we talked a good amount of stuff. We have done the retrospective in what happened in 2025. So let’s just look forward. You’ve been in the industry long enough to see different kinds of patterns. Let’s talk about some of the predictions. What do you predict will happen in 2026 when it comes to all these.
The three AI track, the, the three parts of the AI trifecta that we were talking about. So let’s start with AI powered development. Vibe coding, as we discussed, is obviously not going away. It’ll mature. So talk a little bit about that side of the, of the trifecta.
[00:33:35] Chiradeep Vittal: So I think vibe coding will mature significantly. The first prediction is that frontier models’ coding abilities will be enhanced to generate secure code. The way these models are trained is by reinforcement learning and during this training, the model gets a reward if it generates correct code. And this makes the model eager to please by generating code that works, but it’s not necessarily secure. I believe by adjusting the [00:34:00] rewards these labs will enable the models to bias towards secure generation. There’s many problems to solve in this you know, multidimensional reward, framework, but I’m sure the labs will get, get there. And because these problems are, you know, applicable to wide range of capabilities that they’re eager to build, and they’re spending billions of dollars trying to do it.
So quite hopeful that some of these most common security errors will be eliminated or mitigated by right when the code is generated.
[00:34:29] Pratik Roychowdhury: So basically what you’re saying is even vibe coders could generate more secure code.
[00:34:34] Chiradeep Vittal: Exactly, but not perfectly secure. That’ll be too hard. So second prediction is that enterprises will be emboldened to rewrite and replace some of those creaky legacy code that nobody understands and security treats like radioactive garbage. And these things generate tremendous amount of in revenue, but they’re also landmines. And so the idea is that you use AI assistants to reason about the existing code and then reproduce parts and [00:35:00] the functionality in a secure manner so that it just strengthen the security. I hope we’ll see the beginnings of this moment.
The third prediction is that AI driven security review will be mandatory. So just, this is just a scaling question. Not every piece of code can be reviewed by humans at the current rate of code generation. So it’ll just be required that security teams will say that, “Hey, at least you know, the first pass should be done by an AI reviewer. And then maybe a second pass to, you know, flag false positives or take a deeper look through humans”.
And, and fourth prediction is that the tech and security debt created by AI coding assistants just increase faster than the attempts to fix it. And part of it is that developers are still not shifted left, and so they will not prompt the AI to generate code securely. And part of it is that there are non-developers generating code and deploying code.
So it just makes the [00:36:00] problem worse.
[00:36:02] Pratik Roychowdhury: So, so maybe talk about the attacker side. Where do you see in 2026 AI powered attacks going?
[00:36:09] Chiradeep Vittal: So this is where I think things will get a little worrying. And so we’ll see scale up on the attacker side. And even if the frontier labs improve their security or, you know, start detecting that people are using their tools to develop attacks, we are seeing that open source models are getting very, very good.
And so the, the hackers will just switch to open source models and go even more in the dark. Right? And so the first prediction is that the scale will knock security back on its feet and they’ll scramble to put in emergency measures. Mostly it’ll be just you know, locking down existing controls even more, and then more employee restrictions.
[00:36:48] Pratik Roychowdhury: That’s, that’s definitely terrifying.
[00:36:51] Chiradeep Vittal: Second prediction, open source vector. So attackers will mine open source for zero days at scale using AI. And we have, [00:37:00] I use AI to identify bugs in my code, and it’s sometimes the things that it finds or surprises me and things that I have not thought about. And no doubt there are tons and tons of these kind of latent bugs, which, just humans have not been able to find, even hackers are not able to find because it’s not feasible. And so they’ll, they’ll find these bugs and start using them to attack. So think about Heartbleed and Log4j and Structs and WannaCry all at once in maybe compressed a few weeks and maybe double that.
Third, prediction development teams will continue to get targeted, whether it’s malicious or hacked MCP servers, poisoned open source libraries, hijack coding agents or even fake remote employees.
[00:37:47] Pratik Roychowdhury: Wow. So, so it looks like the sophistication and scale both increases and I think the scary part of it is that the barrier to entry drops and you won’t need kind of elite hacking skills anymore, right? [00:38:00] You’ll need the right tools and some creativity and that dramatically expands the threat actor tool, right?
So, so maybe, maybe let’s talk about the AI power defense, the, the proactive revolution. So development is maturing, attacks are escalating. What about security? Is there a silver lining there?
[00:38:19] Chiradeep Vittal: I think 2026 will be year, you know, proactive security finally becomes the norm driven by AI. I think security teams will finally care about the AI apps that developers are deploying. They’ll scramble to find solutions to defend these apps. And we have seen that CISOs claim to be concerned, but, you know, you’ve to rank it below the problems.
And real time threat modeling will become reality. And so we talked about AI powered threat modeling tools. In 2026, this will evolve into continuous real time threat modeling. Every time you change infrastructure, add a new microservice, modify API, deploy new AI agent, your threat model should automatically update and flag new risks. And [00:39:00] so organizations will have living threat models that evolve with their system.
And then there’ s that autonomous red teaming, we’ll see AI agents that can both attack and provide defense simultaneously. And so these agents will continuously probe your system, you know, find vulnerabilities, automatically generate and deploy fixes. Its like having a persistent purple team or ethical hacker that never sleeps, constantly testing and hardening your defenses. And then, you know, it just this is a key enabler in AI, is that to have context and memory. And so these agents will learn from whatever they’ve done before and then continuously improve the system.
And the prediction is that compliance automation with the EU AI Act and so on, the compliance will become too complex or manual processes. And so these agents will monitor your assistants for compliance drift, automatically generate audit reports, flag non-compliant AI [00:40:00] usage, and even recommend remediation steps. And so just like security, compliance becomes automated and continuous, rather than a quarterly nightmare.
Security will struggle to get control over shadow AI agents. Unapproved agents launch from employee laptops will have access to whatever the employee has. Employees would make mistakes, but, you know, maybe a mistake a week.
But these agents can go blindingly fast and, and you wouldn’t even be aware of these mistakes. And so the CEOs are pushing their employees to go AI native and this will clash directly with the cowboy tendencies of employees to try the latest agents.
[00:40:41] Pratik Roychowdhury: So, so what you’re saying is there is a silver lining. There is, there is gonna be proactive security controls, which will strengthen and that’ll balance out the other parts of the trifecta, right? Developers moving really fast with AI. Attackers moving really fast with AI. And so how do you bring in the security part using AI, [00:41:00] right?
So that that’ll, that’ll also strengthen up. So, so maybe, let’s move into you, you did a lot of predictions. We will obviously, you know see how these predictions go along in 2026, but maybe let’s, let’s try something fun. Maybe let’s try seeing what is the boldest prediction that you can have in 2026.
[00:41:20] Chiradeep Vittal: Honestly, this is very, very hard to predict. I mean, I, nothing in that happened in 2025, I could have predicted. So, what I can say is that the trends will continue. AI will become ubiquitous. It’ll show up everywhere and in the most unexpected places. And then you’ll, you’ll see that it makes the most unexpected, illogical decisions, and we won’t even realize it until months later.
And so that’s gonna be a challenge to keep, you know, governing these systems and keeping an eye on them.
[00:41:51] Pratik Roychowdhury: Actually honestly, like you said beginning of 2025, if somebody had told me the amount of progress we made on the AI front, I would not, [00:42:00] obviously, believe it. So things are moving at immense speed. That’s been the pattern of AI anyways, right? So things are moving at really, really fast speed. So maybe, maybe if you were to sum up 2026 in in a sentence or two, how, what would that look like?
Just, just a, just to summarize all the things that you talked about.
[00:42:21] Chiradeep Vittal: I think 2026 will be the year we learn whether we can govern AI faster than the adversaries can weaponize it. And that’s the race technology exists on both sides. Can t he good guys deploy proactive AI defenses, establish effective governance, and mature the security practices before the attackers overwhelm the traditional defenses. I think we can, but it requires urgency. Organizations that wait until they’re breached will be too late.
[00:42:50] Pratik Roychowdhury: And I guess that’s why conversations like these matter where we talk about the, the trifecta or the so-called agentic battlefield, right?
[00:42:59] Chiradeep Vittal: Yeah, [00:43:00] these decisions that organizations will make in the first quarter of 2026, will be key for the rest of the year. So start soon. Start now.
[00:43:11] Pratik Roychowdhury: Alright, perfect. So that’s a segue to essentially wrap up. It’s, it’s, it’s, it’s obviously been a, been a lot of things that we have unpacked. So Chiradeep it, it, it was a incredibly comprehensive look at 2025 and where we are heading to 2026. Maybe before we wrap up, any final thoughts you have?
[00:43:32] Chiradeep Vittal: Yeah, I think 2025 proved that AI is just not, not only the future product security, it is already here. So the AI trifecta, AI powered development, AI powered attacks, AI powered defense, it’s, it’s here. And this cycle accelerates, right? So faster development means more vulnerabilities, more vulnerabilities mean more AI attacks and AI attacks force AI defenses.
And so this is just going to accelerate, going to consume [00:44:00] everything. And so the organizations that organize, that understand this and adapt now with proactive security, automated controls and proper governance will be positioned to thrive in 2026. And those they don’t will find themselves behind fighting yesterday’s battles with yesterday’s tools.
[00:44:18] Pratik Roychowdhury: Perfect. Well, that was a perfect place to end, Chiradeep. Thanks a lot for breaking all this down.
And to our listeners, thank you for joining for this special year in Review episode. If you found this valuable, please subscribe to ProdSec Decoded where you get your podcasts. We’ll be diving deeper into many such topics in the coming episodes and in 2026.
Until next time, stay secure out there and have a happy holidays and see you in 2026.