How Elastic Bridges Enterprise Data and LLMs with Ken Exner
Fresh out of the studio, Ken Exner, Chief Product Officer at Elastic, joins us to explore how Elastic evolved from the world's most popular open-source search engine into the context layer powering modern AI applications and agent systems. He shares his career journey from database programming during the early internet era, to spending over 16 years at Amazon building the internal tools and resilience practices behind AWS, and now leading product strategy at Elastic, where search, observability, and security converge into a unified AI infrastructure platform. Ken explains that context engineering – not just prompt engineering – is the defining discipline of the AI age, where developers are becoming managers of agents focused on goal-setting and relevance rather than writing code, while enterprises need retrieval and relevance layers to bridge their messy, siloed data stores with LLMs. He highlights how Elastic's 15-year head start in enterprise-grade capabilities – from vector search since 2017 to a two-orders-of-magnitude improvement in memory efficiency – positions it ahead of purpose-built competitors, while its ability to deploy on battleships, Mars rovers, and air-gapped government environments makes it uniquely suited for public sector AI transformation. Closing the conversation, Ken shares what great looks like for Elastic: becoming the foundational relevance and retrieval layer between enterprise data and LLMs while disrupting observability and security from within.
"I like to think of the future of software development isβdevelopers will be managers of agents. They're no longer going to be ICs [Individual Contributors], theyβre going to be managers. Every developer is going to be a manager of agents and theyβre going to be doing context engineering. Theyβre going to be figuring out how to pass context and data to an LLM or an agent. And theyβre going to be goal setting. Theyβre going to have their team of agents, and theyβre going to give them goals, and theyβre going to review the output." - Ken Exner
Profile: Ken Exner, Chief Product Officer, Elastic (LinkedIn, Blog)
Here is the edited transcript of our conversation:
Bernard Leong: Welcome to the Analyse Podcast, the premier podcast dedicated to dissecting the powers of business, technology, and media globally. I'm Bernard Leong, and one of the biggest challenges for enterprise AI today is not just models, but how organizations connect those models to their data systems and workflows. Elastic started as the company behind Elasticsearch, one of the most widely used open-source search technologies in the world today. The company has expanded into observability, security, and increasingly into the infrastructure layer powering AI systems and agents. With me today is Ken Exner, Chief Product Officer of Elastic, to discuss how Elastic evolved from search infrastructure into what it now calls the context layer for AI applications and agent systems. Ken, welcome to the show. Welcome to Singapore as well.
Ken Exner: Thank you. Love coming to Singapore, so it's good to be back.
Bernard Leong: You just came straight from the keynote [from ElasticON Singapore 2026].
Ken Exner: I just came off stage, so hopefully my voice survives through this. Good to be with you, Bernard.
Bernard Leong: Just relax and we will start. The first important thing is I want to start with your origin story, as we always do. How did you start your career in technology?
Ken Exner: How did I start my career? Well, this is a long time ago because I'm old, but I actually began as a database programmer and statistician. I worked as a database programmer and statistician at a marketing company. It was a little bit boring, but I thought it was pretty cool. After that, the internet happened. I had been learning HTML and CGI scripting, and ended up pivoting from database programming into web programming. That took me to SGI and a bunch of startups, and I eventually found my way to Amazon, where I spent a lot of my career. I was at Amazon for over 16 years.
Bernard Leong: You are definitely one of the true ex-Amazonians that I know. What attracted you to Elastic, and how did you come to your present role as Chief Product Officer?
Ken Exner: From AWS, I could see which segments were growing and interesting. I started getting curious about some of the observability and security spaces. But I also saw what was starting to happen with generative AI. This was right when I started at Elastic. You could see that a lot of what was happening with the new models from OpenAI – this was even before ChatGPT – was very promising. In Elastic, I found a company that did all three. They were poised to do well in the AI age with their vector search capabilities, and they were leaders in observability and security as well. I was like, well, this is great. This is perfect. Let's do this.
Bernard Leong: If you were to look back at your career journey across all these large-scale infrastructure platforms, what lessons would you share with my audience about building products that operate at enterprise scale – or internet scale, as we call it today?
Ken Exner: You always learn from mistakes and outages. We had a few of those at AWS over the years. Part of my job, at least in the middle part of my AWS career, was building all the internal tools and development practices that Amazon uses. I was responsible for the software development lifecycle, including how we operate software. I was constantly trying to figure out how to make us more resilient and how to solve the problems of outages β reducing their impact not only for Amazon, but for AWS customers.
Initially, we focused a lot on preventing issues through testing, but eventually you start to realize that's not enough. Mistakes will get through. It was only when we started also focusing on two things – MTTR [Mean Time To Repair], making sure you can remediate an issue as quickly as possible, and blast radius reduction – that things clicked. The combination of the three: preventing issues through better testing, reducing impact through blast radius mitigations like cellular architectures, and being able to resolve things quickly by rolling back – that was core to why AWS was able to have such resilient operations. They focus on all three: not just prevention, but reducing impact, and if there is an impact, rolling back as quickly as possible.
Bernard Leong: I'm going to come to the main subject of the day. We are going to talk about Elastic, and thanks to the pre-call we had, I now understand that Elastic has evolved from search to AI infrastructure. To baseline my audience, can you explain what Elastic originally set out to do and why search infrastructure has now become so fundamental in today's modern AI applications?
Ken Exner: As you mentioned before, we're one of the most popular open-source projects of all time – Elasticsearch. The original reason that Shay Banon, the founder, created it was actually to help his wife manage her recipes. It was a search engine and data store for recipes. But it ended up becoming a very popular search engine for all kinds of different use cases.
Initially, people were using it to add search to websites or search to products. But one of the things people realized is that search was a good foundational technology. You can actually build all kinds of applications on top of search. People started using us to build things like Uber matchmaking – matching a car to a rider – matchmaking sites, recommendation engines, signal intelligence systems, and a lot more. Search became a foundational building block, a foundational platform.
A couple of things led to where we are today. Early on, people wanted to do more than just search through text. They wanted to search through images or videos. Adobe, for example, was an early customer. They wanted to search through images. So we needed to be a vector search engine as well. We were actually one of the very first vector search engines out there. That poised us to be ready for the age of generative AI, because we were a vector search engine as early as 2017 – way before generative AI was even a thing. We were supporting vector search in Elastic.
The other thing that happened is people started using us to search through logs. That's what took us into the observability space, and that's what took us into the security space as well. Once you start searching through logs, it's a natural evolution to start searching through metrics and everything else. We became a full observability platform. Searching through logs and security event data took us into the security space, where we are one of the most popular SIEMs β security analytics platforms β out there. From there, we expanded to cover other areas of security, including endpoint protection and more.
Today, we are more than just a search engine. We are a solution for context engineering and building AI applications. We are also a full observability platform. Then finally, we are a complete solution for the modern-day SOC [Security Operations Center], helping with security operations.
Bernard Leong: Today, Elastic offers the full platform across search, observability, and security. How do these three areas actually reinforce each other in the enterprise stack when customers adopt Elastic?
Ken Exner: Security and observability are easy because they're different sides of the same coin. You typically have the same log data being used for observability and for security. A lot of things we do in one area tend to improve the experience for the other. When we introduced Discover and Timelines for one solution, that became useful to the other. When we introduced ES|QL, our piped query language for security, that became super useful for observability as well. They're self-reinforcing. They use the same collection techniques. They have a lot of the same analytics capabilities but support slightly different workflows, slightly different users, and they benefit from the improvements we make to the other.
With search and our AI capabilities, this is interesting because we started developing technologies to help people build AI applications. The interesting thing is, within observability and security, we get to use those same tools. One of the reasons we've been early pioneers in leveraging AI to transform the observability and security space is because we build tools for AI practitioners and AI developers, and we get to use those same things ourselves. I'm often surprised when I look at the products out there – it's as if we've shown them the path for how to use AI in security, for example, and they still haven't caught up. I think a lot of it is because of the natural advantage we have. We get to use our own tools. We leverage being a vector database and a context engineering solution. All this is built into the platform, so we get to move faster.
Bernard Leong: Given these three big areas, how does it reshape the company's product strategy moving ahead? Is it about integration points, or is it more like you're still discovering innovations along the way that will take it to the next level?
Ken Exner: One thing is, I fundamentally believe that observability and security are going to be transformed by AI. There's no doubt in my mind. The experience for observability and security practitioners in a couple of years, maybe even sooner, is going to be very different than what it is today.
Even the term "observability" is quaint and antiquated – it's describing the act of observing infrastructure. What do we do in observability? We look at graphs, we try to visually look for anomalies. All these things are going to be better done by machines than by people. You're going to start seeing a lot of the work that goes into discovering issues – whether it's a security issue or an observability issue – all driven by AI. As runbooks and playbooks for remediation get automated, a lot of that's going to be handled by AI as well. The experience is going to be fundamentally transformed.
Bernard Leong: You alluded earlier that Elastic implemented vector search pretty early for semantic search and recommendation use cases. I want to dive deeper into this. How did that prepare Elastic for the generative AI wave? Was it very natural β you started doing vector search for images and videos, and then discovered the next stage of where AI is going together with what you already had?
Ken Exner: I'd love to lie and say that we knew exactly where the AI industry was going, but I think a lot of it was just that we saw what customers were doing and were able to start providing additional capabilities. It started with being a vector database. People wanted to start augmenting prompts with data, which led to retrieval-augmented generation. You augment a prompt with data rather than just passing a prompt.
We started providing additional capabilities to make that easier for our customers. As you moved into agentic architectures, things changed a bit. People wanted to not just pass data into a context window. They wanted to develop MCP tools on top of data, or they wanted to develop skills on top of data. We provided those capabilities. We've been evolving and staying ahead of what people are wanting to do with AI, and also broadening our platform – covering not just retrieval techniques, but helping people get data into our ecosystem. We have tons of different connectors into different data systems. We have embedding models, vision models, and OCR models for helping parse data. We've been broadening our capabilities to cover all the different aspects of context engineering – from ingesting and pulling data from different data sources, to retrieval techniques, to LLM observability, to evaluation frameworks, to agent building and more. There's a whole set of capabilities that we now have for context engineering.
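To make the retrieval-augmented generation pattern Ken describes concrete, here is a minimal, self-contained Python sketch. The term-overlap scoring is a toy stand-in for a real search engine like Elasticsearch, and all names and documents are illustrative:

```python
# Minimal sketch of RAG: instead of sending a bare prompt to an LLM,
# retrieve relevant documents first and prepend them as context.
# The ranker below is a toy term-overlap score, not real search relevance.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive term overlap with the query; return top k."""
    q_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user's question with retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Elasticsearch has supported vector search since 2017.",
    "Kibana is a visualization layer for Elastic data.",
    "Logstash ingests and transforms event data.",
]
prompt = build_prompt("When did Elasticsearch add vector search?", docs)
print(prompt)
```

In a production system the `retrieve` step would be a call to a search engine combining lexical, vector, and re-ranking techniques; the prompt-assembly shape stays the same.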
Bernard Leong: One interesting thing with respect to retrieval-augmented generation is dealing with a lot of unstructured data. There's still the structured data part that needs certain integration, like transactional data. Where does Elastic sit within this modern AI architecture stack? Or is it still evolving?
Ken Exner: We're agnostic. It doesn't matter to us – if data is messy and unstructured, we work with it. If it's structured, we work with it. We can help pull data from across different data sources. We have connectors into every type of major enterprise system. We can run retrieval techniques across these different data sources, both for ingesting and indexing information, but also for federated search. We're helping with the hard problem of how you find relevance across different data sources through things like re-ranking models.
Where search is going today is interesting. Up until now, it's been using search to power AI – the consumer of search is AI. You're building a search result for an AI-based system rather than for a human. But the other thing happening right now is we're starting to use AI to power search. It's AI powering search, which is powering AI.
Bernard Leong: It's a flywheel loop.
Ken Exner: It's inception.
Bernard Leong: Many companies are experimenting with vector databases today. What differentiates Elastic's approach to vector search and retrieval for enterprise workloads? One thing I know is that across the largest enterprises, when they want to do something as important as RAG, the vector database has to be enterprise-grade. What is the differentiation?
Ken Exner: You're setting me up. Well, first of all, to be a great vector database, you have to be highly performant and highly efficient. I'm always going to be trying to make sure that we are the most efficient, most performant vector database. Period, bar none.
But one of the things that really distinguishes us, and one of the reasons why so many enterprises and governments use us, is that we have all the enterprise capabilities they need – whether it's the security model or permissions model: RBAC [role-based access control], ABAC [attribute-based access control], attribute-level and field-level permissions. We have audit logging, all the enterprise-grade capabilities you would need for a database in an enterprise. The reason is that we built all this into our search engine and data store, which we've been working on for 15 years.
I sometimes get asked, what do you think Pinecone or Weaviate or some of these smaller purpose-built vector databases are going to be working on over the next 10 years? I said, I know what they're going to work on. It's going to be what we worked on the last 10 years β making sure that they can provide enterprise capabilities. We already do that.
The other thing we do that's different than most vector databases is that we know it's not just about vectors. It's not just about how you store and query vector embeddings. There's a lot more you need to do as part of that typical workflow for RAG or for agent building. It's figuring out a chunking strategy, figuring out how to parse data, running embedding models – multilingual, multimodal embedding models – combining different retrieval techniques. There's a lot more that's part of that workflow, and we're solving all of it, not just the storage and querying of vectors.
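As a concrete example of the chunking strategy Ken mentions, a common baseline is fixed-size windows with overlap, so a sentence cut at one chunk boundary still appears intact in the neighboring chunk. This is a generic sketch; the window and overlap sizes are illustrative, not Elastic defaults:

```python
# Baseline chunking for a RAG pipeline: overlapping fixed-size windows.
# Overlap preserves context that would otherwise be severed at boundaries.

def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows covering the whole text."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("x" * 500, size=200, overlap=50)
print(len(chunks), [len(c) for c in chunks])
```

Real pipelines usually chunk on semantic boundaries (sentences, paragraphs, headings) rather than raw characters, but the overlap idea carries over unchanged.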
Bernard Leong: I speak to many business leaders in enterprises and work with them on generative AI. One thing that clearly stands out is sometimes they have this notion that if ChatGPT can do it, so can their engineering leaders. I have to tell them, in order to do the retrieval part, you must have a proper vector database and you must know how to chunk. Your engineers need to know how to chunk the documents, but they have no context of the business problem. So you have to help them. Are there other things that may also be important when building the retrieval piece?
Ken Exner: Well, the embedding – you need to create the vector embeddings, which means you need an efficient, accurate, performant way of running inference on that data to create them. Being a great vector database means you store them efficiently. We want to make sure that we're always the most efficient place to store vector embeddings while delivering performance at the same time. I'm very proud that over the last 18 months, we have introduced a two-orders-of-magnitude improvement in memory utilization – much more efficient than any other vector database. I think we're seven times more efficient than OpenSearch, while still delivering really good performance.
Then it's combining different retrieval techniques to make sure we're helping customers tune for relevance. At the end of the day, you're not just trying to store and query vector embeddings. You're trying to create relevant results so that the AI system you're building gives you relevant answers and takes relevant actions. All that takes work to make sure you're tuning for relevance.
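"Combining different retrieval techniques" is often done with reciprocal rank fusion (RRF): merge the ranked lists from, say, lexical (BM25) and vector search by summing 1 / (k + rank) for each document. Elastic supports RRF-style hybrid ranking; this standalone sketch just shows the scoring math, with hypothetical document IDs:

```python
# Reciprocal rank fusion: fuse several ranked lists of doc IDs into one.
# A document ranked highly by multiple retrievers accumulates more score.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Return doc IDs ordered by fused RRF score, best first."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["doc_a", "doc_b", "doc_c"]   # e.g. BM25 order
vector  = ["doc_c", "doc_a", "doc_d"]   # e.g. kNN order
print(rrf([lexical, vector]))
```

The constant k dampens the influence of any single list's top result; 60 is the value commonly used in the RRF literature.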
Bernard Leong: I come to one of my favorite questions. What is the one thing you know about Elastic the company that very few people do?
Ken Exner: One is that we like to use our own capabilities. Our InfoSec team uses our security products. Our SRE teams use our observability products. We also use our own AI capabilities as I've been talking about. I think this has made us a better product development team because we are our own customers. But we also are able to move faster because we're leveraging these phenomenal capabilities for doing observability, for doing security, for building AI applications. As I mentioned before, being able to use our own tools for AI has made us really good at AI in observability and security. I think it's a powerful story to talk about being customer zero.
Bernard Leong: I want to come to the discussion on context engineering. Recently, I've been reading a lot of articles on Elastic talking about context engineering. From your standpoint, can you explain what context engineering means today and why it's now becoming critical in building reliable AI systems?
Ken Exner: This is something we've been hearing in the industry a lot over the last six to twelve months, and I think we're going to hear the term context engineering a lot going forward. It's critical to doing AI right. It started with prompt engineering and expanded into RAG, but as the different tools and techniques for passing data to an LLM or an agent kept evolving – whether through the use of MCP tools, skills, or CLI – you now have a number of different techniques for how to do this. That broader discipline is called context engineering. It's everything from how you get data – connectors into data, federated search – to running inference on data, to different retrieval techniques. All of it is context engineering. It's important because it's what differentiates a good AI application from one that isn't performing as well.
It's also increasingly how we are going to build and use AI-based systems. I was talking to a developer on my team who has been embracing AI coding tools. I asked him, now that you're using AI coding tools for everything, what is software development like for you these days? What do you do? His answer was: context engineering. I said, what do you mean? He said, that's all I do all day long – context engineering. I figure out what prompts and what data to pass to an LLM. I found that fascinating. In some ways, context engineering is the new application architecture. It is how we build AI systems. It's how we drive the results we want from an AI-based system.
Bernard Leong: If I look at the big AI companies – Anthropic, OpenAI, and Google – the maximum context window today is probably about two million tokens. I used to work in the construction space, so I have this fondness for dealing with building information models, which are all gigabyte files with very large token counts. Do you see the ability to increase context scaling further? Or is it still a matter of figuring out, as your engineer says, dealing with different context?
Ken Exner: You don't want to pass all the data in the world into an LLM via the context window, even if you could. Because it's expensive, it will take time, and you still need to get to the right information. The other part is that data is dynamic. Data is constantly changing. Data is also variable based on the user. When you interact with an AI system, you have a different set of permissions, and access to different information, than I do. How do you model that? You still need to make sure you are using search and retrieval techniques to get to the most accurate information relevant to that user – not shoving every possible thing into the context window.
Bernard Leong: So even for very large data, you can split it into different parts and use that as part of the context to drive the application itself.
Ken Exner: Some of it is because you're trying to reflect the user's permissions. Some of it is you're trying not to pass an entire product catalog into a context. You can pass all of an eCommerce store's product catalog into a context window. That's not a smart thing to do. Or you can figure out how to dynamically present relevant information and get the LLM to focus on the right context. There's also a narrowing part of this. You need to scope the result and not just provide all the information. You need to make sure you're driving focus on a particular corpus of data. There's a narrowing effect to search that gets the LLM to focus on the right thing.
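The permission-aware narrowing Ken describes can be sketched as a filter applied before anything reaches the context window: results are restricted to what this particular user may see. The group model and document set below are hypothetical:

```python
# Permission-aware retrieval sketch: filter documents to the requesting
# user's groups BEFORE ranking and prompt assembly, so the context window
# never contains data the user isn't allowed to see.

from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    text: str
    allowed_groups: set[str]

DOCS = [
    Doc("d1", "Q3 revenue summary", {"finance"}),
    Doc("d2", "Public product FAQ", {"everyone"}),
    Doc("d3", "Incident postmortem", {"sre", "security"}),
]

def retrieve_for_user(query: str, user_groups: set[str]) -> list[Doc]:
    """Return only documents visible to the user's groups."""
    visible = [d for d in DOCS if d.allowed_groups & (user_groups | {"everyone"})]
    # A real system would now rank `visible` by relevance to `query`.
    return visible

print([d.doc_id for d in retrieve_for_user("revenue", {"finance"})])
```

In Elastic this kind of filtering maps to document-level security enforced by the data store itself, rather than application code, but the ordering of the steps (filter, then rank, then assemble context) is the point.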
Bernard Leong: How does Elastic help organizations bridge the gap between enterprise data – which is very large – and large language models? When you think about just having five MCP connections, it's already making AI agents slow down significantly because most MCP connections are single-threaded. From your standpoint, from how Elastic has been working with customers, where are you with that?
Ken Exner: Stepping back, we are the bridge between companies' data and LLMs. That's the role we play. We're the search and relevance layer that helps them connect all their different data sources to these LLMs. Typically, companies have a lot of data tucked away in different applications, different data silos. As we mentioned before, some of it's structured, some of it's unstructured. We have connectors into all those different systems and we can help make sure we provide relevance and retrieval layers in between all that data tucked away inside an enterprise and the LLM or agent they're trying to connect to.
Bernard Leong: I want to get into the agents part because I know you've introduced Agent Builder, which allows developers to build AI agents grounded in enterprise data. You've also introduced Workflows to orchestrate actions across systems. What are the problems you're trying to solve with these two solutions?
Ken Exner: The interesting thing about Agent Builder is that we were originally trying to solve an internal problem. We had been starting to develop lots of agent-based applications inside Elastic – like Attack Discovery in security, for example. Each of them used a slightly different architecture, and we wanted to create a unified way of building these agents. So we started creating the platform that became Agent Builder.
What we realized is that this wasn't just useful for us. This was going to be a useful way for anyone to build an agent application on top of Elastic. We had customers who wanted to do more than just RAG. They wanted to build MCP tools on top of data, or they wanted to build skills on top of data. So how do you do that? We created Agent Builder to help customers do all the configuration management, prompt management, tool building, memory management, knowledge graphs – all the things you need for a modern agentic architecture. It even comes with a default conversational agent you can use for talking to your data, but you can use it as a reference architecture for building your own agentic applications.
Workflows was a sister release to Agent Builder because it helped with building agentic workflows. It also helps with a bunch of other things – it's a core primitive in Elastic. You can use it for observability workflows, like ticket routing or alert deduplication.
Bernard Leong: So you're trying to get deterministic behavior as well?
Ken Exner: Workflows is about deterministic, rules-based workflows. In the case of AI, sometimes you want deterministic behavior. You don't want the LLM applying reasoning or judgment to something like a payment system. You don't want the LLM making a decision about how often to pay you a paycheck. You want it to use a deterministic tool. So you give the agent a deterministic workflow for something like that.
But the combination of Workflows and Agent Builder is interesting because it works in both directions. It's about creating deterministic tools that an agent or LLM can use, but it's also the other way around – you can now embed reasoning steps and the capabilities of LLMs into workflows as well. It's powerful that you can do both. You can add reasoning to workflows, and you can add workflows to reasoning-based systems.
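The two directions Ken describes can be sketched in a few lines: a deterministic workflow exposed as a tool an agent can call, and a deterministic workflow that embeds one LLM-style reasoning step. The "LLM" here is a stub function, and none of these names are Elastic's Agent Builder API:

```python
# Direction 1: a deterministic tool for an agent to call.
# No model judgment is involved in computing pay.
def payroll_workflow(employee: str, salary: float) -> dict:
    return {"employee": employee, "monthly_pay": round(salary / 12, 2)}

# Stand-in for a reasoning step (a real system would call an LLM here).
def fake_llm_classify(ticket: str) -> str:
    return "billing" if "invoice" in ticket.lower() else "general"

# Direction 2: a deterministic workflow with one embedded reasoning step.
def ticket_workflow(ticket: str) -> dict:
    queue = fake_llm_classify(ticket)          # reasoning inside a workflow
    return {"ticket": ticket, "queue": queue}  # routing itself stays rule-based

print(payroll_workflow("ada", 120000.0))
print(ticket_workflow("Invoice #42 is wrong"))
```

The design point is where judgment is allowed to live: the paycheck math is pure rules, while the ticket router delegates exactly one classification decision to the model and keeps everything else deterministic.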
Bernard Leong: The Elastic framework is focused a lot on data, tools, and context. Some frameworks also talk about a reasoning layer. Why is the approach of data, tools, and context important from Elastic's perspective?
Ken Exner: Tools are a way to express data to an LLM or an agent. I always say that this is the latest hotness – actually, the latest hotness is probably skills and CLI. But this will keep changing. The techniques for how we pass data to an LLM or how we pass context to an LLM will continue to evolve. Context engineering is an evolving discipline. It started with prompts, moved into RAG, moved into tool building, resource building, skills, and CLI. We will continue to evolve the architectures and ways of passing information. The great thing for Elastic customers is we will continue to evolve and be leaders in defining the new architectures and techniques for how you pass data and context to an LLM.
Bernard Leong: Many governments and regulated industries today are still running on-premises in air-gapped environments. How important is this capability for Elastic when thinking about deploying AI infrastructure – when they need their large language model within their on-premises system and it needs to be fine-tuned most of the time?
Ken Exner: Public sector is actually our largest sector at Elastic. One of the reasons is that we support cloud-based deployments as well as a serverless architecture on cloud, but we also support on-premise, self-managed deployments. Customers can use Elastic their way. If they want to run it in an air-gapped environment, they can do that. We have people running Elastic on battleships. We have a rover on Mars that uses our endpoint. You can use this in all kinds of different contexts – not just our cloud-based services but in all kinds of unique public sector use cases as well. This applies not just to our AI-based capabilities, but to our security product and our observability product. You are able to use any of them in air-gapped or on-premise environments as well as cloud.
Bernard Leong: You mentioned earlier your developer told you that everything he's doing is context engineering. That really makes me think about how AI tools are transforming software development. From your perspective today, how is software engineering going to change – with entry-level engineers and very experienced engineers coming in at the same time, how are they going to use AI coding assistants and development tools? I'm of the camp that the software engineering job is not going to go away, but it's going to be transformed. Where do you think that transformation will be?
Ken Exner: It's happening fast. The last few months have been disorienting. It started around November, early December. The new coding models – like Opus 4.5 and the new Codex model – were remarkably better. That, combined with the new AI tools that were now agentic in nature, suddenly everything came together and clicked. One of the things people started reporting over December and January is that these tools had just gotten remarkably better – not just incrementally, but orders of magnitude better. Suddenly, these AI coding tools were better at coding than people. It's been an interesting couple of months in the software development industry.
Everything we've taken for granted about software development – that developers code, that developers comment and write documentation – these things are now better handled by these new models and AI coding tools. Where we go in software development: I like to think of the future as developers will be managers of agents. They're no longer going to be ICs – they're going to be managers. Every developer is going to be a manager of agents. They're going to be doing context engineering. They're going to be figuring out how to pass context and data to an LLM or an agent. They're going to be goal-setting. They're going to have their team of agents and they're going to give them goals and review the output.
The work is going to change. How we do software development is going to change. But I do think all of it will be additive, not subtractive. It'll allow us to uplevel what we do and focus on creating goals for these agentic-based systems. But it'll be disorienting because a lot of the things we've taken for granted about software development are changing.
Bernard Leong: A lot of the popular AI coding tools are still operating locally in user environments. What are the challenges in bringing these agent systems into enterprise environments where security, governance, and reliability really matter? I think you're running one of those organizations, so I don't assume you'd just say, "We use the user environment AI coding tools."
Ken Exner: A lot of these new autonomous AI systems and AI coding tools operate in user space, which is convenient. But that convenience is also the problem if you're an enterprise. This is a problem we've had for a long time. We used to have personal access tokens and delegation approaches to security, but we don't like those because you get credential leakage. They're not very secure ways to manage your enterprise. You want to move to things like service roles and ephemeral service identities.
The challenge we're going to have with these autonomous AI assistants is that you want to have differently scoped permissions and roles than what the user has. Right now, they all operate in user space, and that's going to be a long-term problem. You're also going to have to figure out how to determine whether this agent is really the representative of this person, and how to separate those permissions out.
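The pattern Ken describes — an agent holding its own identity with permissions scoped separately from (and narrower than) the user it represents — can be sketched roughly as follows. This is a minimal illustration, not a real API: the names (`AgentToken`, `issue_agent_token`) and the scope model are hypothetical stand-ins for what an ephemeral, on-behalf-of credential system might look like.

```python
import secrets
import time
from dataclasses import dataclass

# Hypothetical sketch of an ephemeral, down-scoped agent credential.
# All names here are illustrative, not any vendor's real API.

@dataclass(frozen=True)
class AgentToken:
    token: str            # opaque bearer secret
    subject: str          # the agent's own identity
    on_behalf_of: str     # the human user it represents
    scopes: frozenset     # intersection of user and agent permissions
    expires_at: float     # short TTL, so leaked tokens age out quickly

def issue_agent_token(user: str, user_scopes: set,
                      agent: str, requested_scopes: set,
                      ttl_seconds: int = 300) -> AgentToken:
    # The agent never inherits the user's full permissions: it is
    # granted only the scopes it requested that the user actually holds.
    granted = frozenset(requested_scopes & user_scopes)
    return AgentToken(
        token=secrets.token_urlsafe(32),
        subject=agent,
        on_behalf_of=user,
        scopes=granted,
        expires_at=time.time() + ttl_seconds,
    )

def is_allowed(tok: AgentToken, scope: str) -> bool:
    # Authorization checks both the scope and the token's lifetime.
    return scope in tok.scopes and time.time() < tok.expires_at
```

Because the token records both `subject` and `on_behalf_of`, an audit log can distinguish "the agent did this" from "the user did this" — the separation Ken says today's user-space tools lack.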
Bernard Leong: The authentication protocol.
Ken Exner: How do you authenticate that this agent is actually acting on behalf of this person? How do you authenticate a real person versus an agent? There are some big challenges ahead of us.
Bernard Leong: I have two more questions. The first one is, what's the one question you wish more people would ask you about the future of AI infrastructure?
Ken Exner: Where is the knowledge? The reason I say that is because everyone always thinks the knowledge is in the LLM. Kind of. But where does the LLM get it? They get it from data systems. For most enterprises, the knowledge is in all their different data stores. It's in all their logs. It's in all the operational data they have. It's in the security events. The knowledge is in all these different messy data stores across the enterprise. Figuring out how to get value from these is about figuring out how to provide the right retrieval and relevance capabilities that pass this information to an LLM. The LLM does not have that knowledge. It needs to get that knowledge.
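The retrieval-and-relevance layer Ken describes can be sketched in miniature: rank enterprise documents against a question, then pass only the relevant ones to an LLM as context. The term-overlap scoring below is a toy stand-in for a real relevance engine (BM25 or vector search in a system like Elasticsearch), and no LLM call is made — the sketch stops at building the context-stuffed prompt.

```python
# Minimal retrieval-augmented sketch: the LLM does not hold the
# enterprise's knowledge, so we retrieve it and supply it as context.

def score(doc: str, query: str) -> int:
    # Toy relevance: count query terms appearing in the document.
    # A real system would use BM25 or vector similarity instead.
    terms = set(query.lower().split())
    return sum(1 for word in doc.lower().split() if word in terms)

def retrieve(docs: list, query: str, k: int = 2) -> list:
    # Keep the top-k documents that match the query at all.
    ranked = sorted(docs, key=lambda d: score(d, query), reverse=True)
    return [d for d in ranked[:k] if score(d, query) > 0]

def build_prompt(docs: list, query: str) -> str:
    # Package the retrieved knowledge as context for an LLM.
    context = "\n".join(f"- {d}" for d in retrieve(docs, query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

For example, given logs, planning documents, and security events as the document set, a question about payment errors would pull in only the matching log line — the relevance layer filters the "messy data stores" down to what the LLM actually needs.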
Bernard Leong: My traditional closing question: what does great look like for Elastic in the next five years?
Ken Exner: One is that we become a foundational part of the modern generative AI and agentic AI tech stack. We have a huge opportunity to be that layer in between companies' data and LLMs β that relevance and retrieval layer. That is a huge opportunity for us.
On the observability and security side, as I mentioned, these areas are going to be fundamentally transformed by AI, and I want to be the one disrupting ourselves. I want to be the one disrupting the observability and security space and transforming what observability and security tools look like going forward.
Bernard Leong: Ken, many thanks for coming on the show. I really enjoyed geeking out about where software development is going, where generative AI is going, and most importantly, I learned so much about Elastic through this conversation with you. In closing β any recommendations that have inspired you recently? Books, articles, movies, ideas?
Ken Exner: You know what I watched recently? I watched WarGames again. I don't know if you remember this.
Bernard Leong: That's my all-time favorite too.
Ken Exner: Is it really? It was amazing because it's an AI-based system that controls the nuclear arsenal for the United States. I remember watching it when I was a kid and I was fascinated by it. I watched it again as a teenager and said, this is ridiculous β autonomous weapon systems, that's crazy. Well, autonomous weapon systems are actually possible now, and these conversational AI-based experiences are very real. Now, with a lot of the political conversations going on around Anthropic and others, these are very real conversations that we need to have. It's interesting to see that the thing that seemed so fanciful 20 or 30 years ago actually seems very real.
Bernard Leong: How can my audience find you and everything about Elastic?
Ken Exner: You can find me on LinkedIn β LinkedIn is my social media channel of choice. Follow me or add me as a friend and we can have a conversation there. Elastic.co is our website.
Bernard Leong: You can find this podcast anywhere. Subscribe to us and drop me any feedback possible. Ken, many thanks for coming on the show and I look forward to geeking out again.
Ken Exner: Sounds good.
Podcast Information: Bernard Leong (@bernardleong, LinkedIn) hosts and produces the show. Proper credits for the intro and end music: "Energetic Sports Drive". The episode is mixed & edited in both video and audio format by G. Thomas Craig (@gthomascraig, LinkedIn). Here are the links to watch or listen to our podcast.