Raise Your Level of AI Ambition - Microsoft's AI Strategy for Developers with Jay Parikh

Microsoft's Jay Parikh explains how AI is fundamentally transforming software development from linear coding to parallel agent orchestration, challenging developers to raise their ambition beyond amazement.

Fresh out of the studio, Jay Parikh, Executive Vice President of Core AI at Microsoft, joins us to explore how Microsoft is fundamentally transforming software development by placing AI at the center of every stage of the development lifecycle. He shares his career journey from scaling the internet at Akamai Technologies during the dot-com boom, to leading infrastructure at Facebook through the mobile revolution, and now driving Microsoft's AI-first transformation where the definition of "developer" itself is rapidly evolving. Jay explains that Microsoft's Core AI team is moving beyond traditional tiered architecture to a new paradigm where large language models can think, reason, plan, and interact with tools—shifting developer time from typing code to specification and verification while enabling parallel project execution through specialized AI agents. He highlights how organizations like Singapore Airlines cut project timelines from 11 weeks to 5 weeks using GitHub Copilot and challenges both individuals and enterprises to raise their level of ambition: moving from being amazed by AI to being frustrated it can't do more, while building cultural experiments that unlock this exponential technology. Closing the conversation, Jay shares what great looks like for Microsoft's Core AI to enable AI transformation for every organization around the world.


"There's this set of people that are using these AI-powered tools and they're like, 'Wow, that's amazing!' Stunned as to how incredible the response is from AI. Then there's another set of people that have these experiences when they work with AI—they're frustrated with it because they're just like, 'Why can't it do this for me yet?' And they're pushing the envelope of what this LLM or what this system can do, what this tool can do. If you are in the former group, then you need to raise your level of ambition. You need to delegate harder things to it. And if you're in the second group, then you need to learn more about how these things work." - Jay Parikh

Profile: Jay Parikh, Executive Vice President, Core AI, Microsoft (LinkedIn)

Here is the edited transcript of our conversation:

Bernard Leong: Welcome to Analyse Podcast, the premier podcast dedicated to dissecting the pulse of business technology globally. I'm Bernard Leong, and today we are diving deeply into Microsoft's Core AI strategy and what it means for developers, enterprises, and startups in Asia and beyond. With me today is Jay Parikh, Executive Vice President of Core AI at Microsoft, now at the heart of Microsoft's AI transformation. So Jay, welcome to the show.

Jay Parikh: Thank you. It's a pleasure being here.

Bernard Leong: Of course. We always begin with trying to hear the origin stories about guests. My first question is, how did your career journey begin and what eventually led you into Microsoft?

Jay Parikh: I started my career at the beginning of basically the dot-com boom. I think the arcs that kind of define my career: one is working at a company, Akamai Technologies, back in the day that really helped scale the internet. From there I moved to Facebook where we were scaling social networking and also mobile, because we had the mobile challenge as well. Now I'm at Microsoft. I joined last year really to help the company and to help our customers and partners transform their companies around AI.

Bernard Leong: In looking back at the career journey, given that you started in the dot-com era, we went through Web 2.0, then after that mobile, and now AI. If you look back through your career, what are the key lessons from your journey that you can share with my audience?

Jay Parikh: For me, I think it's always just been this pursuit of hard problems and, in some ways, not getting stuck in the status quo or things that are maybe linear in growth. When I go back and I look through, I guess, just the rise of the internet and how that scaled, and how quickly and exponentially that was in terms of impact and change in the world, you look at then social media, then mobile, and now AI. I would just say for me, being comfortable was always the thing that got me to pick up and move and go somewhere else and get onto what I would call the steep part of the S-curve in terms of learning.

Bernard Leong: So there's always this unlearning and then relearning again.

Jay Parikh: For sure. I mean, there's always unlearning, maybe how an organization worked or a business model, and then learning and transforming and changing and evolving on the new thing. That's why I always like to push myself to be someplace where you're uncomfortable, where you're learning from the people around you. You're building, you're creating, you're solving hard problems—not just technical problems, but business, society, organizational. The whole system needs to evolve quickly.

Bernard Leong: So I want to really get to the subject of the day, because there's a lot that we want to cover on Microsoft Core AI. I'm quite curious. For our audience, can you share your role as the Executive Vice President of Core AI in Microsoft? What does it entail and what do you cover?

Jay Parikh: We started the Core AI team in January of this year, and what we did is we said we need to take a different approach to how we're going to help the world build software for the future with AI. As you know, it's changing so fast. Really, if you look at the old way of building software, you and I would go talk to customers, partners, we'd get requirements, we'd come back, we'd design probably some database schema to be the store of record, build some application logic, and then design some UI—maybe a mobile app, a website. That was traditionally how we did things. This tiered architecture.

Now in the new world, when you think about these models—large language models, multimodal, et cetera—these things can think, they can reason, they can plan, they can interact with other tools, they can do lots of different things that we are still very much scratching the surface of. So the way we build software and applications with the model at the center needs to completely change. That is the main focus of the Core AI team: empowering every developer to shape the future with AI. And really, I think that term "developer" is also changing rapidly.

Bernard Leong: So how does Microsoft now uniquely support developers and startups to be globally competitive with other ecosystems? I think you point out the software development cycle has actually drastically changed. People are now doing vibe coding, and I think business owners actually get this instant gratification of seeing the front end. But they don't think about the real heavy lifting that is done in the backend.

Jay Parikh: I think in our Core AI focus, we think about basically helping developers reinvent every part of the software development lifecycle. So it's not just the code part of it. It's the specification, the ideation, the creation side of it, then it's the code generation side of it, then it's the code review, the unit test, the deployment, the operationalization of it, the monitoring, the fixing of it when something goes bad in production, updating your security vulnerabilities. So we're thinking about bringing AI-powered tools through that entire lifecycle for developers. That does include front end, backend, middleware, everything.

Then these applications get deployed on what we call our AI and agent factory, known as Foundry. That's where these applications run in the backend, and where we then bring in a number of different other components, with security and trust woven through from the start. Then we provide a flexible deployment model: most of this today is done in the cloud, but you can also deploy these applications on the edge. So that's the stack of what we're trying to do to really change the paradigm of how we build software, with these models, with AI, at the center of it.

Bernard Leong: If I were to dive a bit deeper, what really stays the same and what changes in the software development cycle at this stage? When you talk to developers today—I think you rightfully pointed out that the job nature has also changed—how is the conversation with your customers?

Jay Parikh: I think it's changing, so I don't know that we are at the end state. I know we're at the beginning stage; there's no way we are at the end state yet. What it was a year ago versus what it is today versus what it'll be six months from now—we're still on a moving train in terms of that process evolution.

What I would say, though, is that when I see our teams building software with access to all of this AI-powered capability, you see the emphasis, where your time is being spent, shifting. Instead of lots of time spent typing in code, all the characters and syntax, the time is actually moving to the upfront creation, specification, and requirements side of it: getting all the context right, and then letting the LLM, the coding agent, go and create this thing. But then you're also spending time as a human on the verification side of it.

So your time moves from what was in the middle—we used to spend a little bit of time writing a spec, a lot of time writing code, a little bit of time testing. In reality, now it's sort of going to the ends, where we're spending a lot more time in that specification, that ideation. Ideation is way more free now because I can kick off lots of different prototypes. I can actually have the code gen come out and now I can see these things, I can react to them, and then I can create those evals, those tests that then feed back in and do that iterative process.

Now, the only other thing I want to mention is how we used to do these projects. If you think of a classic development team, they were tracked on some schedule. You would have different tasks scheduled out, but they were kind of linear in terms of what humans could work on, because you would go and say, "Hey, I have to go refactor this component," and that takes me a week to go refactor it.

Now, if I can employ these agents, these software development agents that can do different tasks, I can bring in the right agents, put them together, give them the specification, and say go. Basically this coding process is now merging with compilation. Before, you'd say make, and it would go off and do the thing. Now I just say, "Here's the spec," and it's going to code, it's going to iterate, it's going to do the research, it's going to test things. Then you're going to be able to validate and verify the end result, and you're going to be able to have many different projects running in parallel.

We see that happening already today, where people will use, say, a CLI and have one project going on here, a different project going on there, another project going on over there. That, I think, is really the transformation that we're seeing early on.

Bernard Leong: I'm glad you mentioned this because my team and I were using Visual Studio and we were running Claude Code. Now there's a very good integration of Claude Code with GitHub Actions. Our time used to be linear, but now, within a single instance, we can work on multiple features in parallel. I think that's the kind of automation that we didn't see in the past.

Jay Parikh: Correct. The way we would do that is we would just take longer to do these things, or we would have to not experiment as much. I think there are two things that are interesting here. One is it's still changing. I think this is very early on and it's super exciting. This may be in some ways one of the biggest and most profound shifts in how software is made in a very, very, very long time.

With that, you could also then assume that we may have only written a very, very tiny, tiny portion of the software in the world compared to what is going to be created in the future. If you think about all of the excitement around what humankind could go discover and make and build and create, now when you get this type of automation, this type of gain from a throughput perspective, you think about all the different types of problems and businesses that can be created going forward.

Bernard Leong: So the CEO of Microsoft, Satya Nadella, has often spoken of mobile first, cloud first, and now evolving to AI first. How is Microsoft's vision under his leadership influencing how you're thinking in Core AI?

Jay Parikh: I think in Core AI, everything's about AI. It is AI-first in everything we do in the team. That's, I think, implicit in the name as well. A few things here. One is that we are building that infra, that platform, those tools so that we can help our first-party product teams as well. So you think about M365, you think about the security team, you think about all of these different products that our customers globally use.

We want to help our internal teams be able to unlock that creativity, that value in these products and remake these products with AI really being this shift in value and creativity that the end user, the knowledge worker, gets to take out of it or experience. Then there's the platform and the tools that we provide to our third-party customers. So you think about all of the startup companies, the bigger, larger enterprises that are all AI-first, or they're trying to become AI-first and do the transformation. We are producing and packaging and creating that platform, those tools to enable that transformation for these companies.

So that is everything we do. It's how our end-user products are experienced with AI everywhere. It's also how we are working to drive our own creativity and productivity with AI.

Bernard Leong: I noticed a big shift now with Office 365. I was using Microsoft Word in preparation for this interview, and I had to prompt it a few times to get the interview laid out in the format I wanted, with the flow of questions the way I would like to examine them. I find it's actually a very different way of working when I use Microsoft Word now. You talk about redefining how software is built, operated, and evolves. What does this look like for developers today? How far are we? I don't want to ask about the future, but what is the next intermediate state that these developers will move towards?

Jay Parikh: There are a few things here. One is, in how we think about building this end-to-end system for developers, the first thing we should note is that the definition of software developer, I think, is expanding, is changing. You see how much more approachable coding is today; it's not something that is so scary or hard to grasp anymore, because literally anybody can come in and start writing a prompt. Just persistence, curiosity, and iteration are all it takes today to build something in software.

For us, the things that we are focused on from a product perspective: first, make sure that we are bringing these AI-powered tools to wherever the developer is from a workbench perspective. So in Visual Studio Code, in Visual Studio, in the different IDEs—maybe IntelliJ or Xcode, you name it. Now we also announced the GitHub Copilot CLI last week. So we have a lot of different surfaces that we want to bring this AI power to the developer.

The second is actually building up the platform. Now GitHub is long past just being a place where you store your code. There's GitHub Actions, there's issues, there's a ton of context that we have other than just the code base: documentation, security vulnerabilities, all of this sits in this platform. You think about it as context that a developer can interact with, and how you can hook up the LLMs to generate and do even more for you. So there's that platform, in terms of the context that we have, and also then providing the runtime for these different types of agents—so there's a coding agent, there's a code review agent, there's a testing agent. You think about all the different types of specialized agents that are required across that software development lifecycle; they all need a coordination mechanism to orchestrate these different projects.

Then the third area is the enterprises—and listen, I would say that everybody requires these things, but there's a whole range of scale-critical things like identity and security, and this is all part of our enterprise infrastructure that needs to scale as well. So those are the three areas we generally think about in terms of how we invest, how we make product, and how we iterate rapidly.

Bernard Leong: I'm so glad you talked about all the new developments with the CLI and the GitHub stuff, because when I was teaching large language models in the universities—to enterprise customers of yours—they had difficulty using the modern tools because those bypass a lot of what is called the security piece. Now, with the integrations with GitHub Actions, I think it has made more experienced developers much more willing to convert to an AI-first world.

So GitHub and Azure AI Foundry are pretty central to enabling AI-powered development. How are these platforms helping organizations go beyond what we call first-generation AI applications into what we call dynamic, intelligent agents?

Jay Parikh: There are lots of great examples here. I think that we're moving, at least in the tools, beyond just doing code completions. We're delegating entire pretty big tasks to these agents to go and work on, and then I have to keep an eye on the work and make sure that it's not just stuck or that it produces something that has lots of problems with it. So there's still lots of work in terms of how to craft the tools and the models and the evals and the whole system to get accuracy and speed, and also make them cost efficient as well.

I think for the organizations out there, the benefits are obviously time saved. This is a productivity gain, but other organizations out there are actually being able to drive top-line creativity, top-line growth. One of our customers here, Singapore Airlines, had a feature, a major feature they needed to add to the consumer app. Originally the team, when they specced this out, that feature or that project was going to take 11 weeks. 11 weeks, almost a quarter. Then they got this team together. They used GitHub Copilot, they designed it, they did all the work. They got this project done in five weeks.

That was a pretty big time savings; it's not very common to save that much time on a project. Then later, the team came back to the partner and the lead, and they had actually finished and drained their entire backlog. So they got everything done that was in their backlog too, which is pretty astonishing: a small team with these AI-powered tools, GitHub Copilot, able to get projects done much quicker and to get through the entire backlog much quicker.

So then you start to think, oh wow, I'm always used to being behind schedule or having infinite things in my backlog. Now wait, I'm getting stuff done faster and I'm able to keep up with the load that I have—customer support tickets, bugs, et cetera, et cetera. Then I'm able to modernize. Listen, the enterprise out there, all of us deal with code that's not just vibe-coded stuff. It's stuff that's been here for 10, 20, or more years that still runs most of these enterprises. So these AI-powered tools will help us modernize this code faster as well, so that we can get out of the past, get to the present, and start then looking forward and investing in building for the future.

Bernard Leong: So what's the one thing you know about building AI platforms at scale that very few people do, but they should know?

Jay Parikh: I would say, for me, it's: be more ambitious. I see these two different mindsets when I talk to people, and I'm curious where you are and what you see here in Singapore as well. There's this set of people that are using these AI-powered tools, whether it be GitHub Copilot or another copilot. They do something, they're prepping a doc or summarizing an email, and they're like, "Wow, that's amazing." Kind of stunned as to how incredible the response is from AI. They're surprised and they're happy with the results.

Then there's another mindset or another set of people I think that have these experiences when they work with AI. They're frustrated with it because they're just like, "Why can't it do this for me yet? It's not... it's broken. It needs too much help. It needs too much context." They're pushing the envelope of what this LLM or what this system can do, what this tool can do.

That's where I see it, and I have these conversations: which one of these mindsets, which one of these experiences, do you typically live in most days? Are you in the former mindset, amazed by what LLMs or GPT can do? Or are you frustrated that it can't do more for you? What I try to get people to see is this: if you are in the former group, then you need to raise your level of ambition. You need to delegate harder things to it. You need to use LLMs and copilots more.

If you're in the second group, then you need to learn more about how these things work. You need to experiment with more models, try different tools, learn more of these techniques: maybe fine-tuning, RLHF, better context engineering, all of the above. So stay curious and go deeper and deeper, because chances are that in a week or a month, that task you're giving it today that it's not doing so well, it will be able to do. That's the rate of scale advancement that we're on. It's an exponential curve, and you can't stay close to it if you're being surprised by it. You should be pushing yourself, pushing these tools.

Bernard Leong: So my observation, to answer what you were asking me about the trend between both groups: I find there's one group that gets very excited with the technology. They can do what's called the zero to 70 percent, meaning they get something built up, software coded very quickly, without any knowledge of the programming language. Then they end up in a state where they try to push the boundaries and get more and more frustrated. This is where I would tell them: maybe take a full-day class on the language you want to learn, learn how to develop properly, learn the actual software engineering, so you can understand how this is being done.

Then there's the second group, who are really very good software engineers, and of course now with things like Copilot and the GitHub CLI, they probably have their ready-made GitHub Actions done already. They've become hundred-x, thousand-x engineers. Where it's really tricky is that middle group working on the remaining 70 to 100 percent: very established engineers who have difficulties because, as you rightfully pointed out, they operate in the enterprise space. They want to use these tools, but the protocols of the company don't allow them to experiment the way they'd like. Then walls go up: no, DevOps cannot allow it, et cetera.

But I think that group is starting to diffuse, because the boundaries are being pushed from both ends. I don't know, does that gel with how you see it?

Jay Parikh: I think so. It just depends on the organization. I mean, you work with a ton of startups, and so do I, and it's very different in startups in terms of what's happening. Then you get to the enterprises and you'll see certain groups that look and resemble startups, and others that are still in the earlier or mid stages of diffusion. They're still maybe dabbling with code completion or thereabouts, and they haven't really delegated; again, maybe because of corporate culture, corporate policies.

I think those things are changing though, because companies are starting to realize that this has to change. Their competitors... it's what these developers do on their nights and weekends for their hobby projects. They're using these tools, and they want access to them in the enterprise, in their day job, too. So it's coming. I agree that it is getting smashed together from both sides, which is good, because then the overall adoption, productivity, and creativity of these organizations should go up, and that should push on the advancements of the tools too.

Bernard Leong: I just came back from the Philippines, teaching business owners of several conglomerates about generative AI, and I think one of the biggest difficulties, even for developers or the other support units, is: what tools are we allowed to use? Even agreeing on the tools seems to be the first level of difficulty. Of course, given what I've seen in the last couple of months and weeks, with so many Office 365 changes, I'm beginning to see that organizations that have already adopted Microsoft will gradually find their lives easier.

I don't know whether it is because there were always a lot of point solutions and tools, with no one actually bringing them end to end. I think the end-to-end is coming. From your point of view, I think that's where you are driving towards as well.

Jay Parikh: For sure. I think it's the entire team and the company across all of our products—the Office products: Outlook, Excel, PowerPoint, Word, everything there. Then you look at everything that we're doing in security with Security Copilot, you look at our Dragon Copilot for healthcare, you look at everything that's happening in the platform for developers. You talk to Doug and the Azure team as well, and you know how much they're investing in and helping out the rest of the company. So that thread of transforming all of our products to really unlock that creativity, that productivity, is our major focus.

Bernard Leong: So since I have you here, I want to talk about leadership and also something on trends in AI. What leadership principles have guided you in building high-performing and cross-cultural teams? Specifically, AI is moving so fast, I'm sure there's always this moment when you have to talk to the teams and say, "We've got to move faster." And the teams are like, "Yeah, we are going to move faster too."

Jay Parikh: I think for me it's always been, I would say even before AI, with the boom of the internet and scaling that, then seeing social networking and mobile, and now AI: it's always been, again, that pace of learning. So I have this notion of a learning loop. As you get into an organization, that organization gets bigger because it's more successful. There are more people, more customers, more code, more products. That loop, that circle, gets bigger, and you travel around it slower. So being able to do one cycle of learning just ends up being slower and slower as you grow and as you scale.

So when I think about the organization, the culture, the way we work, the people we hire, how we develop people, how we communicate and share context, it is all about trying to scrunch that wheel, that circle, back down into a smaller circle and to propel ourselves as an organization around that loop faster. Ultimately you want to learn faster. Despite your scale, with your people spread globally, that doesn't mean you should learn slower. You should have this mindset of, "Hey, what do I do to take advantage of all of these smart teams around the world, these different product teams, to help us learn faster?"

So for me, obviously, because I've done a lot of startups and then scaled and seen scale: small teams moving fast is the sweet spot. Small teams are multidisciplinary; you have different functions in there. They have a ton of context, and they are given a high amount of agency to go and drive that part of their business.

Now they do need to iterate quickly. That's learning. So being able to ship and try things out, experiment with things—that is and has got to be a way of working. Then the other thing is making sure that you have this ability to self-reflect as an organization, as an individual. So you're not going to get it right every loop, every iteration, every version that you put out there. You're not going to get it right every time. But if you're able to quickly learn and then tweak whatever it is in the product and then do another iteration, and you do that faster and faster than most or all of your competitors, then you will deliver the best experience, the most value for your customers and partners.

Bernard Leong: Then how do you maintain alignment and momentum across a global organization like Microsoft, given the speed of these distributed teams? There's also making sure they have that growth mindset within these distributed teams.

Jay Parikh: I think you have to maintain a certain level of alignment, and that alignment is achieved by a few things. One is making sure that everybody subscribes to the same mission and values. In a company like Microsoft, there's such a diverse product set. While we're working hard to drive this AI transformation in our products and for the customers using those products, we really want to make sure that the products also come together where they need to come together. So for example, in Core AI, the developer tools and the developer platform do need to actually be aligned.

Now for us, it doesn't mean that we need to tightly couple everything. For us, it's going to be: make sure that these teams have the right amount of context, that they know what the North Star is, and that they're heading in the same direction. Then you really want to make sure you share context, you learn as you go, and you have these sync points where the teams are learning from each other. So alignment is important, but not everything needs to be synchronized. I think we can do this in a way where people are working quickly and learning, without getting tightly stuck together.

Bernard Leong: So how would you think about where we are now in the AI world with open versus closed foundation models? That's become quite a hot debate. Given that you're building the platform and tools for a lot of developers, how do you see this shaping the ecosystem? What guidance do you have for startups navigating across all these different models?

Jay Parikh: I think the choice for startups is fantastic. You have to stay curious about what's happening with these models. Sometimes there are going to be use cases in your product that require maybe a really big closed model, because it gives you some unlock for your product idea that you can't achieve anywhere else. Then there are going to be places where, "Hey, I can actually produce this part of the experience with something that is open source, get it to be pretty good, and it's much cheaper and faster for me." With the open source models, I can also do more advanced processing of those models: maybe I need to fine-tune it, maybe I need to RL it, maybe I need to do something else.

So I think right now it's really about understanding, as a startup, as a builder, that the product you're building may not require only one model to serve its entire surface area. You may be able to split the experience into multiple sub-experiences, and each one of those sub-experiences could be a different agent or a different model. Building that capability, that intuition, into your product experience gives you long-term flexibility, because it's going to be both.

From a platform perspective, from a tool perspective, we want to give developers choice. We want to give the folks that are building AI applications model choice as well. So that is just going to be something where if you think about that and build that into your startup idea, then you give yourself that flexibility and that ability to ride these different innovation curves for both open source models as well as the closed source models.

Bernard Leong: I think in Azure AI Foundry, you can actually select which models you want, and there are a lot of open models there. Actually, one of my favorite models, Claude, has just been turned on there, and because we deploy across multiple cloud platforms, having that choice of models is better for us in the era of generative AI.

What would be your advice to startups in Asia Pacific, where I live, on how to build and differentiate effectively?

Jay Parikh: One of the things that has come to mind in many of the conversations I have with startups is being able to tell the story. It's not a technical answer—you might have been expecting a technical answer.

Bernard Leong: No, I expect that. This is a great point.

Jay Parikh: But I find that today (and you advise and invest in a ton of startups) at some point, the technology, and how you explain it in your elevator pitch, can sound very similar to lots of other competitors or companies in the space. So how do you spend enough time really telling that captivating story about why your product is better, how it has affected and improved things, the ROI it has demonstrated, the level of taste in it? Whatever it is, that story, as a founder, as the early team, you have to make sure that everybody on the team really iterates on it and builds it, and has the conviction and the passion so that it comes through.

It can't just be, as I say, a bunch of whats, like a bunch of what your thing does. It has to have the why. It has to have the why and the how. Then you can get to the what. But I think as technologists, we often lead with the what, what, what. Then we forget the why and the how, and then you just end up blending in with everybody else.

Bernard Leong: That's very good advice. Given that AI is changing almost every two weeks, what are the concrete trends in AI and software development that you think leaders and startups, whether in Asia or across the world, should be paying attention to right now?

Jay Parikh: So I would say, and we talked a little bit about this already: use these tools. If you're building a new application, a new product, make sure your team has access to these things. Go and use these tools. If you have projects that are, let's say, modernization efforts, where you have technical debt somewhere, or you have to upgrade some version of Java or something like that, use these tools. Microsoft is putting a lot of effort into helping you build new, greenfield applications. But we're also investing a lot in making sure these tools are helpful for the brownfield applications that power all the enterprises today. So use these tools.

I think the other part of this that is really important is to understand what is happening from a model development perspective. Don't just take these models off the shelf and use them. Think about how your own data, your startup's data, your enterprise data, can really help you build more intelligent agents. You can take some base model, an open source model, an OpenAI model, an Anthropic model, whatever, and from there, how do you start to give it more context? How do you think about fine-tuning certain models or using RL to give this model, this agent, more intelligence so that it delivers a higher ROI for the task or the workflow that it's designed to do?

So that work to understand how these models can be tuned and tweaked further, given further intelligence, I think is another trend, because it's not just about prompt engineering and context engineering anymore. It's not just about fine-tuning a model anymore. There's a lot of excitement, a lot of startups, a lot of continued research going on in RL [reinforcement learning]. So pushing that into your organization's KPIs, building that talent, et cetera, I think is really important.

Then I'd say the third thing is making sure that you are cross-connecting different parts of the org. That's one of the things I'm observing when I talk to the larger enterprises. They're all curious about how to transform their cultures. In addition to building AI applications, when they talk, they're always trying to get some advice, some guidance, some stories around, "We have to change our people and how they work. We have to change our organizations and our processes." This has nothing to do with the LLM or which model they pick. It's the cultural change. So it's really about trying these cultural experiments, if you're a leader in a larger enterprise, and not just waiting for it to figure itself out. You're going to have to go and make changes and try different ways of working to really unlock the power that's there with AI today.

Bernard Leong: Very interesting point. I want to get to the enterprise side. Security and trust are built into Microsoft's AI stack, because you obviously have a large pool of enterprise customers. How are you approaching responsible AI at scale, especially with so many developers now orchestrating increasingly autonomous agents?

Jay Parikh: It is baked in from the start. We've always thought in building out the platform and building out the tools that we don't want this stuff to be an afterthought or a feature that we do later. So there's a huge amount of focus in making sure that when we build out the platform Foundry or we're building out GitHub and VS Code and these tools, that security is there all along the way.

I'll give you some examples here. One is that anything you use in GitHub has all of these enterprise features to control identity, security, and entitlements. So as an enterprise or as a small startup, you can tweak where you sit on that slider of risk. If you're a larger enterprise, here's how it flows into your compliance programs and your security audit programs. If you're a startup and you don't have any of that, then you can move fast, and hopefully you'll have those problems when you get a million times bigger in scale and you're super successful.

In Foundry, the platform where these AI applications and agents are built and run, responsible AI is a huge part of the user experience. For example, we have a red teaming agent that can actually red team your application and then help you score and understand where your application might go off into places that you don't want your customers to experience. So it can help you find those problems from a responsible AI perspective early.

We have all of this observability and monitoring that you can tap into as well to see how your AI applications are performing in the wild, in real life—not just performance and cost and those things, but correctness—and being able to then go and tweak them and update and change the evals so that you can keep the system really on the right set of rails that you prefer.

Then the last part of this is making sure that the platform and these agents can tie into the rest of Microsoft's security portfolio. So for example, when you build and deploy an agent, it automatically gets an agent ID that is tracked in Entra, which is our identity system. When these agents need to access data, they first go through checks in a system called Purview, which is another important system that enterprises deploy. So everything that we're doing has that connection to security and to responsible AI.

Bernard Leong: So I'm very lucky to have you here. You're in Singapore to meet with startup founders over the next two days. What inspires you about the innovation you're seeing in Asia?

Jay Parikh: It's just a really, really fast-paced part of the world. I mean, two-thirds of the world's developers are here in Asia. I believe 75% of the world's AI patents have been created or published here. So you just start to look at all of the developments that are happening here. You look at what's happening in open source and LLM development. And I'm curious what you're seeing in your startup conversations, what the exciting trends from the startups are. There's just a lot of startup energy here, with the number of developers, the technology, the open source work, and that's why I'm excited to spend time with a ton of companies. I'm going to be spending time with Pantheon Labs, which is building digital human-like avatars for customer service use cases. Then the Manus folks as well. They have built an incredibly cool application that you can give very complicated and hard challenges and tasks to, to help you solve them.

Bernard Leong: So one question I have: given that the US is also at the forefront of AI development, from your conversations with ecosystem players there, what do startups here need to learn from and think about when they scale with AI, and how do they work with Microsoft?

Jay Parikh: So we have a number of different programs, and our team here is happy to spend time with the different teams to find what they need and how best to work together. Part of it is programs to help startups get going, building on Azure and on Foundry and with our tools. Then, as they scale and become more advanced, how does that turn into more co-building together? How do we build our products together, and are there integrations that help us solve that customer problem, that customer opportunity, better together?

Then I'd say the next phase is how we co-sell and how we co-market together. How do we use Microsoft's connections with enterprises around the world to bring better combined solutions to the enterprises we serve, along with these different startup partners that we have?

Bernard Leong: I'm very curious. What is the one question you wish more people would ask you about Microsoft's approach to AI, but they don't?

Jay Parikh: I would say for me, I talked a little bit about this, but it's this notion of having enough ambition. It's like, what could we do together? It's maybe less about Microsoft, but how do we raise our level of ambition in co-building these applications, these agents, these solutions together, so that we really unlock the next levels of human creativity, of human collaboration, of discovery. The amount of potential intelligence stored in all of these models, closed, open, plus whatever is happening in the enterprise to fine-tune and RL these models, is just massive.

I just really want, and I encourage our teams and push myself too, to really try to do more ambitious things with these models in terms of our applications. We have to get out of this linear way of thinking of software development and application development that we've been doing for a really long period of time. Because I think the future of where this is all going is just going crazy fast.

Bernard Leong: I was listening to the history of Microsoft recently, and I remember there was a very important part of the story when Bill Gates and Paul Allen were talking about the trends of the microchip. When commenting on this, Bill said, "This is an exponential technology. That means it's going to be very important and we have to get involved in that technology." We're still talking about the same thing even today.

So I have a very traditional closing question. What does great look like for Microsoft's Core AI in the next couple of years?

Jay Parikh: I think what great looks like for the Core AI team, and for Microsoft at large, is really delivering the right tools, the right products, the right platform, all with security and responsible AI in mind, to help this transformation in all of the organizations out there that we serve. Ultimately, we want to partner with the startups we're all working with. We want them to help us build better products, to help us push what's possible at the state of the art in terms of how we scale and how we build the technology. But then we also want to support the larger organizations so they can move faster in this AI world.

So really, I would say, it's a focus on delivering this and unlocking the value of AI in every organization around the world.

Bernard Leong: Jay, many thanks for coming on the show. In closing, just two quick questions. Any recommendations that have inspired you recently?

Jay Parikh: It's a book I read a while ago. I'm sure you're familiar with it. It's the book ["Technological Revolutions and Financial Capital"] written by Carlota Perez on how these technological transformations happen and the cycle and the boom-bust and stuff. Those cycles are just... I think a lot about that book given what we're experiencing right now.

Bernard Leong: How can my audience connect with you and maybe learn more about Microsoft's work on Core AI?

Jay Parikh: We publish a lot of updates. Pay attention to the GitHub blog. There's a lot of new stuff coming out there weekly, almost daily. We are obviously communicating a ton in terms of where the platform is going, where the products are going, on the Microsoft blog, and send us any feedback. We're always looking to do better.

Bernard Leong: You can find us anywhere: YouTube, Spotify, wherever you are in the world. Jay, thanks for coming on the show, and thank you for sharing your beautiful Sunday with me here in Singapore. Thank you so much.

Jay Parikh: Thank you.

Podcast Information: Bernard Leong (@bernardleongLinkedin) hosts and produces the show. Proper credits for the intro and end music: "Energetic Sports Drive" and the episode is mixed & edited in both video and audio format by G. Thomas Craig (@gthomascraigLinkedIn). Here are the links to watch or listen to our podcast.
