Why Data Streaming Is the Secret Weapon for AI Success with Kamal Brar
Kamal Brar reveals why data streaming infrastructure is the critical foundation enterprises need to scale AI applications and unlock fragmented data silos.

Fresh out of the studio, Kamal Brar, Senior Vice President of Worldwide ISV and Asia Pacific/Middle East at Confluent, joins us to explore how data streaming platforms are becoming the critical foundation for enterprise AI across the regions. He shares his career journey from Oracle to Confluent, reflecting on his passion for open source technologies and how the LAMP stack era shaped his understanding of real-time data challenges. Kamal explains Confluent's evolution from the category creator of Kafka to a comprehensive data streaming platform combining Kafka, Flink, and Iceberg, emphasizing how real-time data infrastructure enables businesses to harness both public AI models and proprietary enterprise data while maintaining governance and security. He highlights compelling customer stories from India's National Payments Corporation processing billions of UPI transactions daily to healthcare AI applications serving patient needs, showcasing how data streaming solves fragmentation challenges that plague 89% of enterprises attempting AI adoption. Addressing implementation hurdles, he stresses that data infrastructure is the most critical piece for AI success, advocating for standards-based interoperability through Kafka's protocol and Confluent's extensive connector ecosystem to unlock siloed legacy systems. Closing the conversation, Kamal shares his vision for Asia Pacific becoming Confluent's largest growth region, powered by massive-scale innovations in payments, mobile transformation, and AI on the edge for autonomous vehicles and next-generation interfaces.


"In the context of where Confluent can play a critical part, it's also the interoperable integration with all the respective AI ecosystems. If you think about what AI is doing, it's working across microservices, working across data lakehouses, databases - could be a different endpoint service. Bringing all that together in a secure and consistent manner, constantly serving that information, is where I think it plays the most pivotal role." - Kamal Brar

Profile: Kamal Brar, Senior Vice President WW ISV [Independent Software Vendor] & Asia Pacific/Middle East, Confluent (LinkedIn)

Here is the edited transcript of our conversation:

Bernard Leong: Welcome to Analyse Asia, the premier podcast dedicated to dissecting the pulse of business, technology and media in Asia. I'm Bernard Leong. The ability to harness data from real-time streaming has become a strategic necessity for generative AI. With me today is Kamal Brar, Senior Vice President of Worldwide Independent Software Vendor [or ISV, in short] and Asia Pacific and Middle East at Confluent, to help me unpack the emerging role of data streaming platforms in the world of AI and how it's gonna shape the future of Asia Pacific. Kamal, welcome to the show.

Kamal Brar: Thank you, Bernard. Thank you for having me.

Bernard Leong: I always want to hear origin stories about our guests. How did you start your career?

Kamal Brar: It's an interesting one. I'm sure some folks are still thinking through what they want to do in school and deciding what they want to actually end up doing. For me, it was pretty clear. I had a strong interest in electronics and computing at a very young age. I wanted to pursue a career in computing, and back then, computing was taking off in the late nineties. A lot of the curriculum in universities was around software development. It wasn't really around mobile and all the other amazing technologies that came later; it was fundamentally software engineering and databases. Those were the two core areas. I ended up taking a Bachelor of Computing, which turned into a Bachelor of Computing Science when it was rebranded. I was very much focused on learning all about software programming. Back then it was Perl, Apache, MySQL, all that type of good stuff - the LAMP stack, if you recall. I was definitely into that. I did that as my foundational educational studies.

I was very fortunate. As I wrapped up my tertiary education, I got an opportunity to work for Oracle. Even prior to Oracle, I was working at Optus, which was a telco in Australia. Then I shifted to Oracle, and Oracle was an amazing company - it still is - but back then it was 25,000 employees. It was a very different company to what it is today. If you're tracking Oracle, it's making a lot of interesting investments in AI. For me it was just a great learning playground. I got to learn a lot about Oracle's core technologies, then I evolved from core tech to the applications and understood Oracle's applications. From there I decided to take a leap of faith. Having spent six, seven years at Oracle, I wanted to experience something different. I had worked in the Oracle machinery, where you learn a lot and you learn about really interesting technologies - it was the number one database company and technology company at that period of time.

But there was so much disruption at the onset. I got to learn from working in some of the most exciting open source technology companies. My career has been a mix of data, but I've also worked in four open source companies.

Bernard Leong: Prior to this conversation, I took a look at your background. You worked for Oracle, Hortonworks and now Confluent. What drew you to data streaming in your present role, and what is it about all these open source technologies that gives you this common thread in your career?

Kamal Brar: Remember I referred to the LAMP stack. I actually worked for MySQL for five, five and a half years. That's where I got the bug. Literally, MySQL was a very small company - about 350 employees when we got acquired by Oracle. That was my second stint at Oracle. It was just a phenomenal cult following. Back then when Facebook was coming together, it was all powered by MySQL and the database. You think about it right now, it's a given. But back then, when Facebook was building its architecture for scale, for mobile and web, there was no database of choice. I think that was the era where we were in a very fundamental part of the transition on the web. Mobile hadn't taken off, but web was truly there. Social media was truly taking off.

What attracted me to the Confluent story - I always knew Confluent was a great company, a great brand, done tremendously well. It created the category of Kafka and data streaming. For me it was going back to the core. I always wanted to come back to open source. I'd worked in other technology companies, so it was more for me to go back to something I enjoyed. I'd been in MySQL, I'd been in Hortonworks as you talked about. It was just coming back to open source again.

Bernard Leong: Always open source enterprise tech. Now, as you reflect on your career journey, what are the kinds of lessons you would share with my audience?

Kamal Brar: I think the fundamental lesson I would say is there are no limits in what you can achieve in any role or, in particular, any company you go work for or be part of. Every company has very unique elements of learning. There are different growth stages in the company. There are different technical challenges. There are different go-to-market challenges. When you think about all the things that have to go right to make a company so successful, it's amazing. You see very few companies cross the chasm of getting to a billion dollar run rate in open source. MySQL was acquired quite early. Hortonworks - we got acquired as well, as part of Cloudera. Most companies don't get to that billion dollar run rate. Obviously Red Hat was one of them and did really well and continues to do well. I think Snowflake and Databricks as well - and if you look at where Databricks came from, Spark, that was very much open source.

That's pretty exciting. Open source - and definitely the foundational projects in Apache and the community - has been really pivotal in defining the tech landscape.

Bernard Leong: Do you think the community is also part of the core of what makes open source software such an interesting part of your career journey?

Kamal Brar: Yeah. Hidden secret - I used to be a contributor a long time ago. I don't write code anymore. But I think the community angle is absolutely pivotal. Without the community, it's very difficult to get the adoption. Adoption defines the opportunity for us to do some of these more disruptive technologies. Otherwise, you'll be stuck in the most obvious, safer choice. The community enabled us to do that.

Bernard Leong: Let's get to the main subject of the day, because we are gonna talk about data streaming and AI readiness in Asia Pacific and also the Middle East, which we cover on the show as well. Maybe just to baseline, to help my audience understand Confluent better, because not everyone knows about all these enterprise tech companies: what's Confluent's mission and how does it fit into the enterprise AI and data infrastructure landscape?

Kamal Brar: Maybe it's just worthwhile for those of your viewers who may not be familiar with Confluent as a company. It started over 10 years ago, and the project was around Kafka. Kafka today is very much the de facto standard - people refer to Kafka as pretty much a standard - but that's when it was defined. Our founders, including Jun and Jay, who's our current CEO and co-founder, had this project out of LinkedIn, and out of this project they ended up spinning out a company. The challenge, of course, if you're familiar with LinkedIn, is I only want to subscribe to information that's relevant to me. To be able to consume that information efficiently - just imagine the total volume of people who are on that platform - and then to do that efficiently and to be able to publish that data continuously as a feed that's relevant, without having to keep making a call. In the world of databases, it used to be you'd query something and you'd get a response. That's request-based access, whereas this was a continuous feed of information in real time.
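For readers who want to see that contrast concretely, publish/subscribe looks roughly like the sketch below: producers keep publishing events to a topic, and any consumer that subscribes receives them as they arrive, instead of polling a database with one-off queries. This is a generic illustration with the open-source confluent-kafka Python client; the broker address and topic name are made up for the example, not taken from the conversation.

```python
# Minimal publish/subscribe sketch with the confluent-kafka Python client.
# Broker address and topic name are illustrative.
from confluent_kafka import Producer, Consumer

TOPIC = "user-activity"

# Producer: publish events continuously instead of waiting to be queried.
producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce(TOPIC, key="user-42", value='{"action": "viewed_profile"}')
producer.flush()  # block until the event has been delivered to the broker

# Consumer: subscribe once, then receive new events as they arrive.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "activity-feed",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
while True:
    msg = consumer.poll(1.0)           # returns None if nothing new yet
    if msg is None or msg.error():
        continue
    print(msg.key(), msg.value())      # e.g. b'user-42' b'{"action": ...}'
```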

That's how the company formed. Kafka became one of the most popular Apache projects, and is still today a growing Apache project. The company over the last 10 years has pivoted from being the de facto category creator in Kafka and data streaming to becoming the DSP company. For those of you wondering what a DSP [data streaming platform] is: fundamentally, you have real-time event streaming. We know that's number one with Confluent and Kafka. In addition to that, you need to be able to process these streams and be able to manipulate or transform them. That's where Flink became the standard. About three years ago, I believe, Confluent acquired a company called Immerok. That's where we essentially brought the founders of Flink into our ecosystem, and Flink and Kafka became one. To this day, that plays a very important role in the context of AI, and I'll explain a bit later.

More recently, what we've done is introduce Iceberg. Iceberg is becoming the de facto open table format - essentially the interoperability layer for the lakehouse - where you can share information between the data stores or data lakehouse and the platform itself.
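To give a feel for the processing layer Kamal mentions, stream processing with Flink typically treats a Kafka topic as a table that never stops growing and runs SQL over it continuously. The sketch below is a generic Apache Flink (PyFlink) example, not a description of Confluent's managed Flink service; the topic, fields, and broker address are illustrative, and it assumes the Flink Kafka connector is available on the classpath.

```python
# Sketch: treat a Kafka topic as a streaming table and aggregate it with Flink SQL.
# Assumes PyFlink is installed and the Flink Kafka connector jar is available.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Expose the (illustrative) 'payments' topic as a table that grows as events arrive.
t_env.execute_sql("""
    CREATE TABLE payments (
        txn_id   STRING,
        merchant STRING,
        amount   DOUBLE,
        ts       TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'payments',
        'properties.bootstrap.servers' = 'localhost:9092',
        'scan.startup.mode' = 'latest-offset',
        'format' = 'json'
    )
""")

# Continuously computed result: per-merchant totals over one-minute windows.
t_env.execute_sql("""
    SELECT merchant,
           TUMBLE_START(ts, INTERVAL '1' MINUTE) AS window_start,
           SUM(amount) AS total_amount
    FROM payments
    GROUP BY merchant, TUMBLE(ts, INTERVAL '1' MINUTE)
""").print()
```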

Bernard Leong: I also had a stint heading AI and machine learning at AWS for Southeast Asia. One of the key things in data streaming - F1, for example, where you need a lot of events - is that even for basic video analytics, a lot of people always talk about Kafka streams and actually use Flink as part of the ingestion engine so that they can get that information.

Kamal Brar: That's a great use case, by the way.

Bernard Leong: Can you explain why data streaming is so important to AI now? I can understand it because I use ChatGPT every day. Essentially there are so many conversation streams going on in my chat window, but it's only just one stream. But if I start thinking about 500 million users suddenly all using the same thing, that's a very different conversation.

Kamal Brar: That's very true. In the context of a data streaming platform and AI, there are multiple facets to it. The most obvious is you want to be able to do inference. When you're querying or asking questions in ChatGPT - the prompts, as you call them, prompt engineering - the model wants to understand the context of what you're asking for. In that scenario, the majority of the technology, or the majority of the models, in some shape or form want to be able to store feedback loops and understand: how did we make a decision? How did we get that data? Where was that data, and how frequently was it accessed? Even things such as the metrics associated with that particular question, so that they can learn and train and be more efficient at the next particular ask. You and I may ask the same question in different ways; it learns and becomes more efficient in how it can provide that data. The inference part is super important. Being able to do that efficiently, being able to do that across multiple data sources, I think is very logical. That's where Flink comes into play. That's a very important part.

The other part, linked to that, is the model feedback loop. To enhance the models, you want to be able to store that data for a long period of time, or at least be able to store it to understand how the models can improve. The most obvious point is that these models look at not just web data. If you think of the enterprise context, the models are generative AI, so they understand the web, they understand the machine learning training that's already happened on those models. ChatGPT or any of the models will tell you when their training cuts off, or their news will be a little bit behind. But in the context of the enterprise, they know nothing about your business, and for good reasons. That data is very much specific to your business requirements, and it's probably a competitive differentiator. That data cannot be publicly shared. So how do you harness the power of the public or generative AI models and the enterprise models? That's where, with Confluent, Kafka and Flink, we want to be able to bring the inference so you'd essentially be able to bring together and query these different data sources and endpoints - it could be an endpoint that's external - then bring that data together very quickly and efficiently and provide that combined knowledge of both your enterprise data and the generative data, and serve that to your user.
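The pattern Kamal describes - combining fresh enterprise events with a general-purpose model at inference time - can be sketched roughly as follows. This is a hedged illustration, not a Confluent API: the call_llm function is a placeholder for whatever model endpoint is actually used, and the topic and field names are made up.

```python
# Sketch of enriching a generative-model prompt with fresh enterprise events.
# The model call is a placeholder; topic and field names are illustrative.
import json
from collections import deque
from confluent_kafka import Consumer

def call_llm(prompt: str) -> str:
    """Placeholder for whichever hosted or private model you actually use."""
    raise NotImplementedError

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "ai-context-builder",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["customer-events"])

recent_events = deque(maxlen=50)   # keep only the freshest enterprise context

def answer(question: str) -> str:
    # Drain whatever has arrived since the last call, then build the prompt.
    while (msg := consumer.poll(0.1)) is not None:
        if not msg.error():
            recent_events.append(json.loads(msg.value()))
    context = "\n".join(json.dumps(e) for e in recent_events)
    prompt = f"Company context (latest events):\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)
```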

Bernard Leong: A lot of people do not appreciate that data streaming is quite important at the production level. We're not talking about just the ChatGPT use case. When you think about even a customer service chatbot servicing a million queries, that's when the strength of the data streaming provided by Flink or Kafka becomes very essential for the customer. There was a recent 2025 Data Streaming Report published by Confluent, and I actually got a chance to read it. Can you talk about some of the key takeaways from the report itself?

Kamal Brar: I encourage everyone to go look at the report, but across the key areas, I think the willingness to drive or leverage AI-based data technologies is very high. Of course there are fundamental challenges with it. 89 to 90% of our respondents said they're very eager to use DSP technologies and leverage them for AI.

Bernard Leong: But they also cited that fragmented data is one of the big issues.

Kamal Brar: Correct. That has been just tech debt. A lot of the systems that you have in play today - if you think about the nature of these systems, you look at a bank, you look at anything that's significantly large in terms of complexity and size - you'll realize that it's actually a lot of work. You can't turn those systems off. There are a lot of legacy technologies that are built around them. You still have the challenge of data fragmentation, data silos, where departments or certain parts of the business may operate under very specific requirements simply because of a high level of governance or compliance.

Bernard Leong: Regulatory separation of controls, access controls.

Kamal Brar: Yeah. That makes it very difficult for you to unlock. For example, a new FinTech division or a digital bank may be able to move much faster because, whilst they still have the regulatory compliance, they don't have the technology debt and they don't have the legacy systems - they would've built something from the ground up, which would've been more cloud native and would've leveraged more of these emerging technologies. In that context, it makes it much easier for them to go and accelerate. What tends to happen is the fragmentation exists in large enterprises; for more digital-native or digital-born companies, it's less of a problem, just because they have a new digital stack or a new definition. If you look back at the old days of the LAMP stack, that's evolved - Node.js and so forth. The new tech stack defines the pace of innovation. But the data fragmentation, the siloed nature of that data, continues to be a challenge.

Fundamentally, that's one area where, if you have standards-based interoperability or a standard communication protocol, Kafka becomes one viable option: if you have the ability to talk Kafka and stream in and out of Kafka, it becomes a much easier way to share data. That's one challenge. The other one is, of course, the skill shortage. Not everyone's got the skills.

Bernard Leong: I think you made the point first about the tech debt, and tech debt is also partially because of the skill shortage - I think about 91% was cited in the report. Why is real-time visibility becoming such a critical differentiator for today's enterprises?

Kamal Brar: Look at how you interact with your applications today. Everything you and I do is probably through a mobile device. I don't really need to log onto my laptop to engage with my mobile banking app. If I want to make a transfer, in the good old days I'd have to log into an internet browser, log in, use my HSBC key or whatever bank I'm with, get a time-based authentication login, and do my transfers. Today, everything is based on mobile: a notification comes to my device in real time and I authorize transactions there. The whole interaction has changed to real time. In the past, the nature of the applications was that they were very responsive, they were web-based applications, but they didn't have the same demands in terms of real-time capabilities.

If you look at the interactions with your bank or interactions with your applications 15 years ago, 10 years ago, we didn't really have ride sharing. Now, being able to track where my Uber is or where my Grab is, is kind of a given. It's a given. We assume that this is normal. It wasn't normal 10 years ago. Or whatever the time when Uber took off and Grab took off. I think in that context, the nature of how we interact with applications, how these applications actually feed data and just the relevance of that data becomes critical. Just imagine if I'm looking for my Uber and that information's delayed by five minutes, it's probably not a very good use of data for us. That doesn't provide me the data I need in real time that I want for the context, which is I want to understand where my Uber is, what's my ETA to my destination and so forth.

I think that relevance of context of time and the real time nature - we live in this society where it's right here, right now. Everyone wants things today now. There's no concept of "I'll wait for five minutes or 10 minutes." If you ask ChatGPT a question, you probably want that pretty quickly. You don't want to wait 10 minutes for an answer.

Bernard Leong: But then with reasoning, you still need one minute and a half or maybe even three minutes to get an answer.

Kamal Brar: I think with the right models, they talk about fidelity and velocity in terms of the AI models. It's not just important to have the speed, but also the accuracy of the response. On many occasions, if you actually learn to use these tools properly, you realize you need to frame the questions the right way. In some cases you actually need to go use the right models. When I'm using some of these tools, I'll actually force deep search so that I get a better, more accurate response versus getting the faster response.

Bernard Leong: I think another interesting part of that report - let me make sure I get the numbers correctly - was that 94% of respondents said AI use in business analytics is set to grow. What are the more promising applications you see gaining traction from your perspective?

Kamal Brar: I think if you look at enterprise applications, everyone's shifting towards AI-enabled applications. Most of the enterprise players - or I would say the larger enterprise application or technology providers - are AI-enabling their stack. They're like, "Hey, how can we leverage AI to serve our existing customers better? Can we improve interaction?" They may introduce AI chatbot services, or they may introduce better AI through support, so logging tickets or engaging in that. If I'm a customer service agent, can that experience be made better through an automated telephone system or an AI agent? That's happening in the enterprise. They're all modernizing - I would say improving the quality of their interactions, improving the quality of their applications with their existing install base.

Then you have this whole new category of disruptors. You have interesting companies building voice-to-text, and companies building all types of interesting use cases around health science and so forth. If I'm an outpatient of a hospital, can I make that more efficient, versus a nurse calling me and checking up on my drug usage and making sure I've taken my medication? How can I automate that process? Even things like counseling, believe it or not, where I want to be able to reach out to elder care patients who may have certain emotional or mental requirements that they need additional support on, and how can I make that an AI-led approach? You'd be surprised - so much training has gone into getting these models trained on certain drug types and medications, and not just that, even things such as understanding suicidal behavior. They look at all these things as an opportunity to serve these patients better, and to do it through a completely AI-led model has been quite radical. Those companies have gone from being very small to significantly large businesses. I think that's the interesting space. You've got a whole new category, which tends to happen in tech every 10 to 15 years. We see the next innovation, and I think AI, definitely, in my lifetime, will be the most important generational change for us.

Bernard Leong: I think we went through probably three, four technological revolutions over the last three decades - since we talked about software engineering. Then web, mobile, now AI. It's always moving so quickly. What's the one thing you know about Confluent and the role of data streaming in AI that very few do?

Kamal Brar: I think the Kafka part is a given. Confluent is a category leader in that space. We've defined it and we have probably the most impressive cloud offering on the planet, and the ability to scale it. However, making that a seamless process - being able to consume it as a service, as a true hybrid offering - is something I think our larger audience may not be aware of. Being able to integrate that across the DSP, where you have the challenge of scaling Kafka and making Flink a very easy-to-use, consumable service. If you haven't experienced Flink - I'm sure you have from your experience with AWS - Flink is a very complex engineering deployment. Just the operational overhead of running Flink is challenging. To consume that as a service seamlessly across all of our cloud offerings - across GCP, Azure, and AWS - is an engineering marvel. It's actually a lot of work. Because we just consume services, we don't realize the complexity, but the complexity of how we're running these systems at scale, for just our customers - we serve 5,000 customers globally - is quite amazing.

That whole seamless experience, being able to consume it through a cloud and then go back to on-prem and hybrid to address sovereign requirements, I think is pretty unique.

Bernard Leong: Can you walk me through a real world use case in AsiaPac or Middle East where you actually helped the business to unlock that value through Confluent?

Kamal Brar: I would say the largest use case that we've seen in probably a long time is around payments. Think about the world of payments and how payments are being disrupted. In Singapore, we have QR-based payment codes - we call it PayNow or PayLah, depending on which bank you're with, but PayNow is the standard. In the world of payments, if you think of some of the larger emerging economies, in particular countries like India or Brazil, they have pretty unique standards and they have a very large population to serve. In India, we've been working very closely around some of these areas, in particular with the banks, around payments through the UPI interface. NPCI, the National Payments Corporation of India, leverages a lot of Kafka. If you look at the backbone of their stack, they have the digital India stack of course, which is largely open source. But if you peel the onion on that and look through what's powering the payment stack, it's a lot of Kafka.

Being able to disrupt, but also serve a new way of accepting payments - for a guy who's serving tea for a couple of rupees and for someone who's doing large transactions - and to be able to settle those transactions across intermediary banks in real time at a scale of 1.3 billion people, I think is something that's been really amazing for that team to achieve. We played our small part, of course, being one of the largest contributors to Kafka, and we work closely with the respective agencies in making sure we build secure, highly scalable governance around that. It becomes mission-critical infrastructure when you're serving the entire country's payments. If you're down, the economy doesn't function. Consumers can't transact.

Bernard Leong: I would just ask this - you position the data streaming platform as essential for AI readiness, and it can simplify things like data access, ensure data quality, and maybe even enable governance. Can you elaborate a little more on how these capabilities actually translate to faster AI deployment? Because enterprises usually have a lot of compliance conversations. They have a lot more governance conversations. How do you actually navigate that?

Kamal Brar: On the areas you talked about - we have a schema registry. We have a pretty stringent focus on role-based access. We want the data that's streaming to be relevant and to have strict security and governance in the context of the applications or the access that you have to the data. If you think of the world of AI, it becomes important not only to have that data in real time and to do it in a secure manner - in some cases, a lot of the AI data has sensitivity around it. Maybe when I give the example of the patient outpatient calls -

Bernard Leong: They're hyper compliant and they need to handle PII [personally identifiable information].

Kamal Brar: Hyper compliant. There may be certain other areas in which you're treating this patient. Being able to serve that data securely and in context - I think the most challenging part is the context, where it understands: when I spoke to patient Joe, Joe had the following conditions and follow-ups, and now, in the current context of the conversation, the relevant medications that he's on, or the challenges he's articulating in real time. He may be saying, "Look, I need additional help in the following areas, or I'd like to speak to a doctor." Being able to serve that in real time, to do it securely in a context of localized sovereignty, and also in the context of scalable systems that could be on public infrastructure, I think is the challenge.
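As a concrete illustration of the schema registry and governance point: each event a producer publishes can be checked against a registered schema, so downstream AI services only ever see well-formed, agreed-upon fields. The sketch below uses the confluent-kafka Python client with an illustrative Avro schema; the topic, fields, and registry URL are assumptions for the example, and real PII handling would layer encryption and access policies on top.

```python
# Sketch: enforce a registered Avro schema on every event a producer publishes.
# Schema, topic, and endpoints are illustrative; not a compliance recipe.
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

PATIENT_EVENT_SCHEMA = """
{
  "type": "record",
  "name": "PatientCheckIn",
  "fields": [
    {"name": "patient_id", "type": "string"},
    {"name": "medication", "type": "string"},
    {"name": "taken",      "type": "boolean"}
  ]
}
"""

registry = SchemaRegistryClient({"url": "http://localhost:8081"})
serialize = AvroSerializer(registry, PATIENT_EVENT_SCHEMA)

producer = Producer({"bootstrap.servers": "localhost:9092"})
event = {"patient_id": "p-001", "medication": "metformin", "taken": True}

# Serialization fails if the event does not match the registered schema,
# so malformed or unexpected fields never reach downstream consumers.
producer.produce(
    "patient-checkins",
    value=serialize(event, SerializationContext("patient-checkins", MessageField.VALUE)),
)
producer.flush()
```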

Bernard Leong: It's almost like the magic, and also the added complexity, of the AI pulling the correct information across different sources and then streaming it back to the person trying to receive the information.

Kamal Brar: Yeah. If you remember, I initially said that in the context of where Confluent can play a critical part, it's also the interoperable integration with all the respective AI ecosystems. If you think about what AI is doing, it's working across microservices, working across data lakehouses, databases - it could be a different endpoint service. Bringing all that together in a secure and consistent manner, constantly serving that information - that's where I think it plays the most pivotal role.

Bernard Leong: What would be your advice to business owners of enterprises now thinking about things like data streaming, thinking about how to enable their AI applications to run across these kinds of production workloads? What do you think would be the do's and don'ts you'd tell them?

Kamal Brar: Look, every application and every use case has very different requirements. Be very clear in articulating what you are actually trying to solve, and whether it has been done before. In many cases, if you look at the AI capability and what people are trying to do, I think in every context there are very good learning opportunities if you go look at what's happening in the Valley, and even in the Middle East, which I think is a little bit further ahead in some of the use case adoption. But the learning opportunities are the same. The challenge of sovereignty doesn't change between Singapore and the UAE; it's still the same challenge. We talked about that patient example, where you'd have an outpatient service which was AI-led. Those requirements would not change in Singapore. What would change, potentially, is how the data sensitivity and PDPA [Personal Data Protection Act] are managed, versus HIPAA and so forth in the US. The regulatory compliance may change, but fundamentally the use cases will be very similar.

I would say the do's and don'ts: really understand the use case, define it really well, and it's okay to have an iterative process on that. You're never gonna nail the use case upfront. You're gonna have the ability to redefine it, because what may be a very challenging use case today, Bernard, in two or three months may no longer be the case. That's just the nature of what we're dealing with in AI. Every three to six months there's been innovation that didn't exist. If you look at even the models, the models themselves have dramatically improved. I think NVIDIA talked about some of their new chips - Blackwell. The new ones are actually smaller, and there are bigger ones coming up. This is basically what powered ChatGPT-4, I think. It's amazing to see that the innovation is not just consistently happening on the chipset side; the fact that all of this translates to how the models have changed and how the models will perform is pretty remarkable.

I would say sometimes we spend a ton of time defining the use cases and making them perfect. You want to learn from the context of what's happened and how it's been relevant in the US and other places where it may have already been done. But then being able to iteratively drive it and do it better, I think, is a process of constantly accepting that things are gonna change and it's not gonna be this way forever. I think that's something people have to adjust to. We almost think, "Hey, the problem I'm solving now is gonna be the same 12 months, 18 months down the track." It may not be the case - or it may. The hardest problem you're solving now may actually be easily solved in six months' time.

Bernard Leong: But the fundamentals don't change. One interesting thing I find when talking to CEOs, when they ask me about AI applications or tell me they want to move fast and try to get the whole team to move with them - the first thing I always ask is: can you tell me where your data is? Can you tell me how the different data streams work? Then they suddenly stop. It's like, "Oh." Then you start to probe deeper and they realize that they need to make investments in those areas. Do you see that happening all the time when you talk to customers, where they just tell you, "Hey, I would like to do this fast," but then, when you start to look at the underlying infrastructure, you really need to help educate them on why this is important?

Kamal Brar: I would say fundamentally, for anything around AI, the data infrastructure is probably the most important piece. If there's a takeaway for your audience today, it should be: without the data infrastructure piece being clearly defined and having standards around that, you really struggle to build a scalable offering. You're spot on. Most of the challenges we see are that they want to move really fast, but, as we've talked about, there's tech debt, there are skills gaps, and there's just a lack of consistency in some cases, which is what the report highlighted very clearly. I think where we're lucky as an organization, as a company, is that we're redefining some of those areas. It's a journey. It's a journey that we talk about - our customers going through a five-phase process, where today they may have just a very small deployment, and then it becomes the central nervous system, where the data streaming platform is core and central to all elements of the business. That could be in the context of AI, or it could be in the context of just servicing existing applications or modernizing applications. That entire journey takes a period of time. It doesn't happen overnight. That's been our learning - our customers usually start with departmental or siloed, individual developer projects that scale to a department, from a department to part of the company, and then enterprise-wide. That's how we work with it as well.

Bernard Leong: When it comes to the modernization part of enterprise applications, let's say they want to change a particular enterprise application - say a chatbot may be running on legacy infrastructure and they decide, "Hey, now I want to move to data streaming because of the way conversational AI is moving," because you're trying to have this high-speed, real-time communication between the agent and the client. How much rearchitecting is required?

Kamal Brar: It's a really good question. Within Confluent Cloud, which is our offering, we've built predefined connectors, or managed connectors. We have about 80 managed connectors out of the box. We make it seamless for you to interact and interoperate with some of these systems out of the box.

Bernard Leong: Like Salesforce or SAP connections.

Kamal Brar: Exactly. The CRMs of the world, the database platforms of choice, and so forth. Then there's also a whole ecosystem around this. Those are fully managed or co-managed, and there are connectors built as part of the community, which may be self-managed. There are probably 150 to 180 connectors in total.
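For a feel of what a connector looks like in practice, it is usually declarative configuration handed to the Connect runtime rather than new application code. The sketch below registers an illustrative JDBC source connector against a self-managed Kafka Connect REST endpoint; the database details, table, and Connect URL are placeholders, and fully managed connectors in Confluent Cloud are configured through its own interface instead.

```python
# Sketch: register a source connector with a self-managed Kafka Connect cluster.
# Connection details, table names, and the Connect URL are placeholders.
import json
import requests

connector = {
    "name": "legacy-orders-source",
    "config": {
        # Confluent's JDBC source connector; other systems have their own classes.
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://legacy-db:5432/orders",
        "connection.user": "connect_user",
        "connection.password": "********",
        "table.whitelist": "orders",
        # Stream only new rows, keyed on an ever-increasing id column.
        "mode": "incrementing",
        "incrementing.column.name": "order_id",
        # Rows land on the topic 'legacy.orders' for any downstream consumer.
        "topic.prefix": "legacy.",
        "tasks.max": "1",
    },
}

resp = requests.post(
    "http://localhost:8083/connectors",          # Kafka Connect REST API
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
resp.raise_for_status()
print(resp.json())  # the created connector definition, echoed back
```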

Bernard Leong: Do you also have a marketplace or some kind of third-party providers that can help customize Confluent for certain specific use cases?

Kamal Brar: What we find is there's a whole ecosystem of community-based connectors out there as well. Confluent has its own, and we want to make sure our connectors are scalable, enterprise-grade and so forth. But there are a ton of community connectors, which cover all sorts of use cases. Someone's found a particular system they want to connect to and interoperate with. When you talk about these legacy systems, it's about unlocking them. We may not have a native Kafka client, but someone's built a custom connector that talks to that protocol and serves it as Kafka. There are multiple ways around it. I think that's been a really interesting way for our organization and our customers to benefit. They don't have to necessarily rip and replace; there's interoperability through a protocol, built by someone in the community.

Bernard Leong: One of the untold stories of AI applications, or generative AI, is actually modernization. You can think about COBOL programmers - now you can use a lot of generative AI to move them from COBOL to Java. That also comes together with the financial data streams that we're talking about in new banking, transaction banking. That's why the whole concept of thinking about modernization, where data streaming is involved, is actually not an easy task, and why I thought that question was very interesting when you told me, "Actually, there are connectors to pull it together." But I have this question for you: what's the one question that you wish more people would ask you about Confluent or DSPs?

Kamal Brar: The most obvious is around interoperability.

Bernard Leong: I'm gonna ask you the question: What about interoperability?

Kamal Brar: How do we better work across - if you think about it, one of the biggest challenges is: I have all these disparate systems, I have all this fragmented data. How do I interlock? How do I unlock this data, and how do I make it easy and seamless to share that data? I think there's probably a lack of awareness around some of the amazing things we're doing around how we provide that interoperability through Kafka as a standard protocol. We have our entire Connect ecosystem - we actually have a program called Connect with Confluent, CwC, where you can come and even build custom connectors, as I mentioned earlier, or you can just leverage our pre-built connectors. But to do that across the enterprise domain, or across your entire infrastructure, is pretty impressive. In many cases customers don't realize the power of our connectors. They understand Kafka because they've already built something from the ground up with Kafka, but you may have other stuff which doesn't work as well. That's where we could really unlock and harness the power of data.

Bernard Leong: Wow. I didn't realize. What does great look like for Confluent within your region of interest? What does success really mean for the business in the next five years for you?

Kamal Brar: We've been experiencing probably some of the most aggressive growth in the region. I've been with the company for just over three and a bit years, and in that time I've seen the business in Asia Pacific grow: our Southeast Asia business continues to hum along, our India business has been probably one of our strongest-performing businesses, and Australia has obviously been a foundational part of our growth journey. If I was to look at the next five years, I think our business will become even larger. The interesting part is the adoption of some of these technologies and just the scale of these technologies - what we're serving in Indonesia and India, and in China for that matter. These are very large economies which have a tremendous appetite for some of these data systems. We talked about NPCI and we talked about payments. That problem is in the billions. When you think about it, it's gonna be in the trillions. They're gonna get to a hundred billion transactions a day. How many systems in the world do that?

When you think about the context of what we're solving and the scale of the systems, I think there's just so much to be done over the next five years. I think you'll see more and more emergence of that in this region, more than any other region. Europe's heavily regulated, but still doesn't have the same level of scale and, in some cases, sophistication on the tech. I would argue that some of the innovation that we're seeing in Asia is actually a little bit further ahead of Europe now. That's not because Europe's not capable; it's just that heavily regulated industries make it a little bit more challenging for them to move. But then we also have a very unique problem when we're solving for billions of people. The fact that we can move a little bit quicker makes this a very unique market. Five years down the track, I think Confluent's Asia Pacific business will probably be one of the largest components of growth for the company.

Bernard Leong: Do you think significantly it's gonna be powered mainly by generative AI applications or AI applications for the enterprise?

Kamal Brar: On the whole, I have no doubt. I think over the next five years, we'll see a massive transformation across how we interact with applications. Today we are very much mobile led and I think that mobile experience will become more AI, generative led.

Bernard Leong: Maybe we can go to glasses [form factors, for example Meta's Ray-Ban glasses or the Apple Vision Pro], or maybe even you wave your hand. It's already happening everywhere [like in New York].

Kamal Brar: Today you have the glasses. Maybe there'll just be more interaction there. Or if we can get these neural chips working correctly, I think that's gonna be a really exciting era. I don't think it's too far away. We're talking in the context of robotaxis coming out by 2027 in Dubai, where I spend time as well - they're going to start that next year, I think. It's actually not too far away.

Bernard Leong: Almost everything that we just talked about in the last minute - data streaming will enter the lens so people can see real-time information, and self-driving cars need to be able to access all of that. It's actually the data streaming platform that's very important.

Kamal Brar: It's what I refer to as AI on the edge. You have to be able to serve the data on the edge, and that's again, where Kafka is so important. Your decisions are made on the edge. You can't wait for something to go back to the cloud. If I'm driving my Tesla and I need to be able to process all that information within my own processor on the car, I'm not gonna send the query back to the cloud and say, "Hey, can you make a decision while I'm driving?" That would be a bad outcome. AI on the edge is a very important part, and that's where a lot of these embedded systems actually leverage Kafka as a protocol.
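As a rough sketch of what "AI on the edge" with Kafka tends to look like: the device makes its decision locally and immediately, while telemetry is batched, compressed, and shipped to a broker asynchronously. The configuration values and names below are generic confluent-kafka producer options chosen for illustration, not an automotive reference design.

```python
# Sketch: an edge device acts locally, then ships telemetry to Kafka in batches.
# Broker address, topic, and tuning values are illustrative assumptions.
import json, time
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "edge-gateway:9092",  # placeholder broker address
    "compression.type": "lz4",    # cheaper to send over a constrained uplink
    "linger.ms": 100,             # batch events for up to 100 ms before sending
    "enable.idempotence": True,   # avoid duplicates if the link flaps and retries
})

def decide_locally(reading: dict) -> str:
    """The safety-critical decision happens on the device, not in the cloud."""
    return "brake" if reading["obstacle_distance_m"] < 5 else "cruise"

while True:
    reading = {"obstacle_distance_m": 12.4, "speed_kmh": 48, "ts": time.time()}
    action = decide_locally(reading)            # act immediately on the edge
    producer.produce("vehicle-telemetry",       # telemetry flows upstream async
                     key="vehicle-7",
                     value=json.dumps({**reading, "action": action}))
    producer.poll(0)                            # serve delivery callbacks
    time.sleep(0.1)
```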

Bernard Leong: Kamal, many thanks for coming on the show; I really appreciate this conversation. You bring a very interesting dimension. My audience may not have known why data streaming platforms, specifically on the data side, are so important for powering all the generative AI applications, or even the next few form factors, from glasses to self-driving cars. In closing, I always have two quick questions. Any recommendations which have inspired you recently?

Kamal Brar: I worked for an amazing guy who is still one of my mentors, and I spend a lot of time with him - someone called Bipul Sinha. He is the co-founder of a company called Rubrik. They're in the cyber data, cybersecurity space. He always inspires me to do amazing things. He moved to America straight out of college. He failed a few times before he got into IIT in India, but then he ended up becoming an American and built this amazing company. I think the way he defines it is that ultimately everyone has a vision or a dream; how big you make that is really the limitation you enforce on yourself. It's an interesting perspective. He really exemplifies the approach of just having no limits, and he continues to do amazing things. I learned a lot from him. He has interesting posts - I encourage you guys to go check those out. If you want a good read, "The Hard Thing About Hard Things" by Ben Horowitz [of Andreessen Horowitz, which he co-founded with Marc Andreessen] is great - a book on the journey of building a company and some of the hard decisions that you have to make when you're building companies. Both, I would say, are very aspirational and good reads.

Bernard Leong: How does my audience find you? Please feel free to tell me - I'm probably gonna put a link to the Data Streaming Report, but what else do we need to know about you?

Kamal Brar: Of course. I'm on LinkedIn, so you can find me on LinkedIn. It's pretty easy. But also check out the Confluent website - confluent.io. You'll learn a lot more about data streaming. We, in fact, just launched the Data Monster as a theme. It's a fun way for us to describe the challenges of data fragmentation and silos and so forth.

Bernard Leong: You can definitely find us on any channel from Spotify to YouTube, and of course LinkedIn. Kamal, many thanks for coming on the show, and thank you for sharing.

Kamal Brar: My pleasure. Thank you so much.

Podcast Information: Bernard Leong (@bernardleongLinkedin) hosts and produces the show. Proper credits for the intro and end music: "Energetic Sports Drive" and the episode is mixed & edited in both video and audio format by G. Thomas Craig (@gthomascraigLinkedIn). Here are the links to watch or listen to our podcast.
