Open Source AI: Faster Innovation Through Community Across Asia Pacific with Simon Milner
Simon Milner, Vice President of Public Policy for Asia Pacific at Meta, joins us to explore how Meta's deliberate commitment to open source AI is reshaping innovation across the world's most diverse and dynamic region. He shares his journey from the BBC to nearly 14 years at Meta, where he built policy teams from the ground up to lead Meta's Asia Pacific strategy. Simon unpacks Meta's open source philosophy behind the Llama models, explaining how openness accelerates innovation through community scrutiny, provides governments greater control over sensitive data, and enables local developers to fine-tune models for languages like Korean, Vietnamese, and Bahasa Indonesia. He highlights compelling use cases across the region, from AiSee helping the visually impaired navigate Singapore to JAMSTEC advancing scientific discovery in Japan and AIM Intelligence strengthening AI safety in Korea. Looking ahead, Simon reveals why the future of AI isn't on our phones but in wearables like AI-enabled glasses that create always-on assistants seeing what we see and hearing what we hear, enabling us to be more present in the world while Meta supercharges its family of apps serving billions globally. Last but not least, he shares what great looks like for Meta in Asia Pacific on open source AI.
"We believe that openness is actually a really key feature of accelerating innovation because it fosters inclusion, it builds trust, and it ensures that the benefits of AI are more evenly distributed around the world.The openness of models allows other people to, as they were, push and pull and prod at the models at a fundamental level in order to see where might the problems be. And so that kind of community, the developer community scrutiny around open source is fundamental to spotting issues and addressing them quickly.Actually, the story of AI is about yes... that is important. The investments that companies like Meta and others are making is important, but actually, it's really about local ownership and local innovation." - Simon Milner
Profile: Simon Milner, Vice President of Public Policy for Asia Pacific at Meta (LinkedIn)
Here is the edited transcript of our conversation:
Bernard Leong: Welcome to Analyse Asia, the premier podcast dedicated to dissecting the pulse of business, technology, and media globally. I'm Bernard Leong, and with me today is Simon Milner, Vice President of Public Policy for Asia Pacific at Meta. As Asia rises in the global AI conversation, we want to explore Meta's vision for open source AI, the Llama incubator's recent launch in Singapore, and how openness might shape the future of AI policy, innovation, and governance across the region. So, Simon, welcome to the show.
Simon Milner: Thanks very much, Bernard. It's a real pleasure to be with you and your listeners.
Bernard Leong: Before we begin, I would also like to get to know my guest better. Can you share how your journey in public policy led to your current role at Meta?
Simon Milner: This is a bit of a long story because I am so old compared to probably a lot of the other people that you interview. But I guess there were three big companies that have been important to me in my career. The first was the BBC, which I think everybody knows. The second was BT, which is an old British telecoms company.
Then I joined Facebook almost 14 years ago, and obviously we became Meta more recently. One thing I've done, particularly in this company, is build teams because when I joined this company, we were only 3,000 people and now we are many tens of thousands of people.
I've been there through this journey and our policy team has also grown. For instance, I grew the first team in the company to handle policy relationships for the Middle East and Africa. It wasn't a part of the world I knew well, but I did know the company well. Then when I moved to this region to live and work in Singapore in 2018, I built our team, effectively most of our team, for the Asia Pacific region. So that's how I got here. I think one thing I've learned in my career is that when opportunities come along, yes, of course every career opportunity involves risk, but unless you take risk, you will never move.
So taking, not just anything that comes along, but taking relatively risky opportunities has been part of what my career has been about.
Bernard Leong: You have worked across different markets, starting from the United Kingdom and Europe, and now Asia Pacific and the Middle East as well. What are some of the defining lessons in navigating technology policy across very diverse regulatory landscapes?
Simon Milner: Great question. Look, one thing I would say is that the policy issues that myself, my team, and colleagues are typically talking with policy makers about are pretty much the same everywhere. I mean, if you think about the issues that we're going to be discussing during the course of this podcast, those are the issues that technology-focused policy makers are asking Meta and other companies about around the globe, whether they're in Latin America, the Middle East, Africa, Europe, or here in Asia Pacific.
I think in terms of what I've learned, it is really important to have empathy for policy makers. These are hard jobs, right? Trying to figure out what's the best way of enabling our people and our economy and our society to get the best out of technology whilst also managing the downside risk is not easy. There's no template. So we need to be empathetic towards policy makers. I think it's really important to be excited about the technology and the products that you're building and about the benefits they can bring.
Frankly, if you're not excited about them, then you shouldn't be working in this space. I mean, because if you are not excited, how can you possibly get other people excited? The third thing I'd say is it's really important to turn up. So when people are talking about your company or the technologies or the products that you are building, if you are not in the room, how can you ensure that that conversation's informed?
So I think a big part of what I've learned is even if you're going to get a hard time, people are going to think, well, you know, you've shown up here, therefore I'm going to have a real go at you. At least you can try to inform the conversation and show empathy for the people you are engaging with. Those are some of the things I've learned over the years.
Bernard Leong: That's really great, and I think you're very excited about the intersection between AI policy and economic development in Asia. I can hear it from your enthusiasm, but I just want to ask, for the youngsters out there who are listening to this podcast, what are the core lessons from your career journey that you can share with the audience?
Simon Milner: Well, I've shared some of them. One thing is you can never quite know what's going to come along. When I reflect on my career, and I expect you may be the same, Bernard, and many of the other people that you interview, there's probably nobody who has been able to say, all right, when I was 20, this is what I'm going to do for the next 30 years. Right? In my case, nearly 40 years.
I didn't know I was going to go work at the BBC when I was an academic like you are, working at the London School of Economics. I didn't expect to end up at the BBC. I certainly thought when I left the BBC I would never work for a more important organization. I now think I do work for a more important organization in terms of the sheer number of people that use Meta services. You know, many times the number of people that consume the BBC. Doesn't mean the BBC is not important. Of course it is. But Meta, I would suggest, is more important to more people more of the time in more countries.
Frankly, the range of policy issues that we therefore have to handle as a policy team is much greater and frankly has more significance across quite a range of sectors and cultures and economies. So take opportunities, and don't expect the plan you have when, say, you're 20 to play out quite the way you imagined. Be open to opportunities and recognize that some things may not work out. But if you don't try them, you'll never know.
Bernard Leong: Great career advice. So we get to the main track of the day, which is talking about Meta's open source AI strategy. I think open source AI is core to Meta's strategy. As a university lecturer teaching large language models, the Llama models are among the most important ones to me. Of course, in our daily lives we're either using Facebook, WhatsApp, or Instagram to communicate with friends across the world.
So maybe to start off, let's talk about the AI strategy. What does openness mean to Meta, and why is it a deliberate choice in a landscape currently shaped so much by the closed models out there?
Simon Milner: I mean, look, there's a lot to unpack here, and I know we're going to talk further about this. So let me just try and frame it for you. We believe that openness is actually a really key feature of accelerating innovation because it fosters inclusion, it builds trust, and it ensures that the benefits of AI are more evenly distributed around the world.
So this is a commitment that we brought not just to AI, but to a number of different technologies that we built over the years, including, for instance, around infrastructure and data centers. It's a very deliberate choice to try to democratize access to AI and really to drive positive impact at scale. So it's about making our really very advanced AI models and tools and research widely accessible to developers, to businesses and communities.
When we started our original research lab, FAIR, more than 10 years ago, you can see that over the years it has produced hundreds, if not thousands, of different research papers and models and all kinds of different technologies which have been open. As you know as an academic, that's a big part of the academic environment too, because how do you collectively learn if you're not sharing your research? So we think it's fundamental. It's a deliberate choice to, as I say, accelerate innovation and provide equitable access, to build trust, transparency, and security, and also to help drive global standards and responsible development of AI.
I might also say it's not just Meta. I know you said it's a world built around closed models, but actually there are other companies that have open source models, and some of the companies that we associate with being closed have also produced open source versions of their models. So I think it's actually much more of a mosaic now than it was perhaps a year ago. We're pleased about that. We certainly don't want to be the only company that's got open source models. I think it's now generally accepted that this is a very important part of the AI landscape in these still relatively early years of its development.
Bernard Leong: I think it's because Meta started the trend of committing to open models that it pushed, for example, OpenAI to recently release their open-weight models. Not truly open, but the Chinese AI companies are also doing the same. One interesting thing, of course, is that everybody has recently heard that Meta's superintelligence team made big news by bringing in very influential AI leaders like Alexandr Wang and Nat Friedman, which basically signalled the acceleration of your AI ambitions. One question I have is how do we now think about the evolution of Meta's open source AI strategy in light of these new additions? What kind of shifts or refinements do you see in taking this openness forward?
Simon Milner: Look, I mean, I appreciate that you and others have seen that as a really pretty significant additional investment by Meta, the creation of Meta Superintelligence Labs.
Bernard Leong: That's right. And also Mark's open essay on personal superintelligence as well.
Simon Milner: Yes. So there's a lot there that demonstrates what we're about is this core focus on building the best AI models and products. That's what we're about. As is typically the case in a company like ours, where we are really at the bleeding edge of innovation, of course we're looking for the best talent. We are sometimes reshaping the way that talent works within the company. We don't take the view that, well, we've always had this structure, so we're going to keep it. Mark has always shown himself to be a very future-focused CEO, and I've worked for him all my time in this company. I've seen it time and again: Mark is looking much further ahead than anybody else that works for him. Right?
So this is part of that. Part of that is about who we hire and who we put in charge of things. But at the heart of it is this sense of, look, we want to build the best models and the best products that enable us to fulfil our ambition to bring the benefits of these technologies to the world and to everybody. That means both incorporating these into our existing family of apps, really supercharging the benefits that those apps bring to individuals, to communities, to small businesses, to whole economies and to the world, but then also looking to build wholly new products.
I don't know what's coming. Right? I'm excited about what's coming because I can see that Mark has continued to focus on having the best possible teams and talent and structure within the company to enable people to do their best work. Certainly, my experience of working in this company is that one of our ambitions is that people do the best work of their career here. So when you hire the best people who've already done amazing work in their careers to come and work here, you can be really sure that what they produce will be world-changing. So I'm excited about what's to come.
But just to be clear, it builds on the success of the company, right? This doesn't come out of thin air. So that's what we're really about in hiring great talent.
Bernard Leong: So let me focus more on the policy perspective. What are the key differences now in thinking about innovation, safety, and control between open source and closed large language models, given the landscape out there?
Simon Milner: Sure. Thanks, Bernard, and I appreciate you coming back to my field. I'm the policy guy. I'm not a product leader or an engineer. But equally, look, one of the reasons why I'm doing my job as opposed to those people is that I can speak in the language of policy makers, because most policy makers are also not deep technologists. Some of them are technologically experienced; many of them are not, and they're generalists, like me. So it's also about interpreting between the deep product, engineering, and AI specialists and them.
So, look, the way we explain this, and I think we've been reasonably successful given where we are now in the policy discussion, is that what open source models allow on the innovation side is for things to be driven faster, because more people, not just the people employed by Meta, but many thousands, if not hundreds of thousands, of people who then use those open source models can help to make them better. Their very openness and the transparency around that is a positive thing from an innovation perspective. Whereas closed models, to some extent, can limit innovation because the only people who work on them are the people employed in that company.
On safety, the openness of models allows other people to push and pull and probe the models at a fundamental level in order to see where the problems might be. So that kind of developer community scrutiny around open source is fundamental to spotting issues and addressing them quickly, whereas closed models might rely more on internal processes within the company.
Then when it comes to control, one of the main things we've seen, particularly from governments about why they have been enthusiastic about open source is you can take that model and deploy it on your technology, on your systems. You do not have to keep doing callbacks to the original company owning the closed model. So that openness gives much more control to whoever wants to use it. That's particularly attractive for governments, for instance, who may be dealing with very sensitive citizens' data, say in a health setting or education setting. All kinds of reasons why they might want to do that.
So we see some real benefits in open models, and I'd say particularly in open source models. I've seen that in my dialogue over the last couple of years, and my team's dialogue, with policy makers across the Asia Pacific region. There are many examples of that in practice.
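To make that control point concrete, here is a minimal sketch of what running an open-weight model entirely on your own infrastructure can look like, using the Hugging Face transformers library. The model ID, prompt, and hardware setup are illustrative assumptions, not a description of any particular deployment; it assumes transformers, torch, and accelerate are installed and the checkpoint has been downloaded locally.

```python
# Minimal sketch: serving an open-weight model on your own hardware, so
# prompts and outputs never leave your infrastructure and there are no
# callbacks to the model vendor. Model ID and prompt are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # example open-weight model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Sensitive data (say, a health or education record) stays on-premises.
prompt = "Summarise the key follow-up actions in this patient note: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```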
Bernard Leong: Then I think there are usually two sides to it. There are also critics who argue that open source large language models could accelerate misuse or the proliferation of unsafe AI. There's no right or wrong here, but from your viewpoint, how do you address criticisms of that nature?
Simon Milner: I would say those were particularly vociferous, loud criticisms two years ago, maybe two and a half years ago, when the world was becoming very interested in this whole question about the growth of AI, open source versus closed, and safety issues. There were a lot of doomsayers who were saying, well, we clearly have to regulate open source models much harder because there's greater risk. That has not proved to be the case. We have not seen really extensive or well-evidenced examples of that being true at any mass scale.
It's also important to note that closed models can be, and have been, compromised by bad actors as well. What crowdsourced security expertise enables, which is what you get with open source models, is that you can identify security issues much more quickly, whereas in closed models that may not be the case. Our experience, and this is the key thing, is that the catastrophists, the doomsayers, have been wrong so far. That's not how things have played out, certainly in respect of Meta's Llama models.
Bernard Leong: One interesting thing is that I teach some AI governance courses, and we usually work through different kinds of responsible AI documents, Meta's Responsible AI document being one of them. I'm quite curious to hear it from the horse's mouth: what are the governance mechanisms or guardrails that can actually make open source AI both safe and sustainable, from your perspective?
Simon Milner: I'm happy to give you my perspective, but I would also love to hear yours. I mean, you're teaching this. It's always good to get feedback, and you're a real expert, so maybe I'll give you my perspective and then—
Bernard Leong: That one I can share, but what I think—
Simon Milner: I think the great thing about a podcast conversation is it's a conversation, right? So I'd love to hear yours. Look, in terms of some of the mechanisms and guardrails we have in place, we're pretty careful about release. We only release our open source models where we're confident in their quality and that we can deliver them in a way which is valuable for people. So one of our highest priorities is to ensure that they meet the highest standards for safety and reliability before we consider them for open release. We build in safeguards, whether that's red teaming, feedback tools, or enhanced transparency within the models themselves.
Ongoing collaboration, as I talked about in response to other questions, is an important part of this. It's not just release a model and then get on with the next one. That's certainly one thing with my team: I have a team of people based right across the Asia Pacific region, and we do a number of different events with developers and governments and academics on our models and ask them to tell us about their experience of using our products. That's incredibly valuable in telling us about things they see where we might have some vulnerabilities.
So overall it's about transparency, accountability, and community engagement. Those are, you know, genuine ways in which we provide governance and guardrails around our open source models as we release them. What's your experience?
Bernard Leong: My experience with Llama's responsible AI document is that the transparency piece is a key and important part, from my viewpoint, when I explain it to, say, the government leaders and business leaders that I teach. The ability to decipher AI as a black box is relatively important to them: what are the guardrails to deal with the inputs and the outputs, securing the information at the core while innovating around the edges? I think where everybody is unsure is how sophisticated these models are.
So everybody has very varying degrees of visibility into the models. Even just by laying out the lay of the land, comparing how different companies think about responsible AI policy, the general reaction from my students would be, oh, you mean they actually have documents like this? They're very surprised, right? So it's often just a question of education. Then I say, before you pass any judgment on a company and their AI policy, maybe you should take a look at how they explain their responsible AI policy. That will be a more fruitful conversation, as you have also had with the communities who use Llama in your open source efforts. Given that the source code and the weights have been open sourced, it gives people a lot of confidence to know exactly what kind of AI model they're using. That's something that I think has benefited different regions over the last two years of accelerated AI development.
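The input/output guardrail pattern Bernard describes is easy to sketch in code. The snippet below is a deliberately simplified illustration, not Meta's actual safeguard stack: the toy classifier and blocklist stand in for whatever safety model or rules engine a real deployment would use.

```python
# Sketch of the guardrail pattern: screen the prompt before it reaches the
# model, and screen the model's answer before it reaches the user.
from typing import Callable

def guarded_generate(prompt: str,
                     generate: Callable[[str], str],
                     is_unsafe: Callable[[str], bool]) -> str:
    if is_unsafe(prompt):                      # input-side guardrail
        return "Sorry, I can't help with that request."
    answer = generate(prompt)
    if is_unsafe(answer):                      # output-side guardrail
        return "Sorry, I can't share that response."
    return answer

# Toy stand-ins so the sketch runs end to end; a real system would call an
# actual language model and a dedicated safety classifier here.
BLOCKLIST = ("build a weapon",)

def toy_classifier(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKLIST)

def toy_model(prompt: str) -> str:
    return f"Echo: {prompt}"

print(guarded_generate("Summarise this policy paper.", toy_model, toy_classifier))
```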
Simon Milner: That's terrific. I really appreciate you explaining that. Again, to me, that's great feedback. I would say as well, look, we've been around quite a while as a company; we turned 20 last year. We've had to learn a lot about what it means to provide services to billions of people around the world and to enable them to communicate. So we've learned a lot about what the standards should be that you apply in these different communities, and then how you enforce those standards.
AI has been a very important part of that, not in setting the standards but in the enforcement. So as a company, we've learned a lot about the importance of transparency, about explaining things, and about recognizing there's a whole-of-society, or whole-of-AI-community, approach that's required. We have absolutely some really core responsibilities as a company providing models and products. But it's not just down to Meta; however hard we try, somebody could still try to create harm or cause abuse. So it is about working with others, working with developers, working with governments, thinking about what's the right framework of regulation, and just being open-minded and open in our approach to that.
So some of the things that you've described come from our history as a company and what we've learned from that. Some come from looking at other companies and seeing how they're doing it. A lot of the time when we're talking about new issues within the AI world, we'll ask, well, how are the other companies approaching this? The fact that there is actually movement between the companies, people who may have worked at OpenAI coming to work here and vice versa, et cetera, that's also a good thing. We learn from each other. We all keep trying to grapple with the same problems. Moving collectively together and working with governments on some of these issues is tremendously important, just as it's been in the previous age of technology development that we've been part of.
Bernard Leong: Do you have any examples of how open models like Llama are being adopted by, say, startups or developers in Asia? I'm sure you speak to some of the community and get their points of view on how they're using the technology. Do you have any stories to share?
Simon Milner: I do. I mean, look, one thing that I think is incredibly exciting about being based in Singapore and working across the vast and diverse APAC region is the incredible enthusiasm that most people I meet have for new technology and innovation. That includes policy makers, that includes people running small businesses, and that includes, for want of a better phrase, ordinary people who are just using our products.
This is a part of the world where generally people will only assume things might not be great if they've got some good evidence for it, whereas there are certainly some other parts of the world that are generally a bit more Luddite, I would say, in their approach towards regulation. So to give you some examples: we see in Singapore, where you and I both are, an organization called AiSee which is using open source AI to help blind and visually impaired people navigate the world with greater confidence. Indeed, this touches on an area I think we're going to talk a bit more about later, the hardware that people will use AI in. We believe in our wearables, and we've seen a number of examples of our wearables being used by people who are visually impaired to help them experience the world better.
In Japan there's an organization called JAMSTEC which is using open source models to advance scientific discovery and environmental research. In Korea, where I was recently at the APEC conference, there's an organization called AIM Intelligence which is using open source AI to make technology safer and more culturally aware. As you can imagine, in a country like Korea where there's a unique language, fine-tuning models for the Korean language to address sensitive topics and strengthen AI safety is really hard, and we see AIM Intelligence doing that. So that's just three examples. We've got many others around the region, and we're always fascinated when we meet developers or small businesses and academics who are using our technologies to help communities they understand better than we do.
Bernard Leong: Sometimes the better stories on AI are exactly the ones you just mentioned, right? The blindness situation is clearly an area where you can actually help people, and so is accelerating scientific discovery. I have a question: what's the one thing you know about public policy on open source AI, or its potential in Asia, that very few people do, but should?
Simon Milner: One aspect of this is that a lot of people think open source AI, indeed AI generally, is all about big American and big Chinese companies creating hugely expensive models. Which they are, just to be clear. But the idea that everybody else is just waiting around for one of these big companies to produce the latest model, and that without it everybody's stuck, is just not true.
Actually the story of AI is, yes, that is important. The investments that companies like Meta and others are making are important. But actually it's really about local ownership and local innovation. We see tremendous examples of this: local companies and developers coming up with amazing products, like the ones I mentioned earlier, which are incredibly well-tuned for their communities. Think about the blind or visually impaired community; that's a global group of people that can benefit from those technologies.
When it comes to the Korean language, most of the people who speak Korean are in Korea, so that's really about people in Korea. But I think that local innovation and the local usefulness of AI being deployed are incredibly important, and Asia is where this is happening more than anywhere else. Now, that could be just because I'm based here and that's where I focus. But certainly from a company perspective, there's tremendous interest at the leadership level in what is happening in APAC, and in making sure that we are not just creating for APAC but listening and learning about what's happening here and what others are doing. So I think that's a fundamental part of the story that most people aren't aware of.
Bernard Leong: I've also seen local universities, not just within Singapore but in the surrounding Southeast Asia, using Llama to fine-tune for languages like Vietnamese, Bahasa Indonesia, or Malay, right? So open source AI is actually quite crucial to how they can fine-tune for their languages within these different cultures. Maybe from your viewpoint, how is Meta thinking about AI development in this diverse, geopolitically fragmented world? Especially as we are now seeing more national champions in different countries, though I think there's still some interconnectedness between us to share knowledge.
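As a rough illustration of the local-language fine-tuning Bernard describes, here is a minimal LoRA sketch using the Hugging Face transformers, peft, and datasets libraries. The model ID, the corpus file, and the hyperparameters are illustrative assumptions, not any particular university's recipe.

```python
# Minimal sketch: LoRA fine-tuning an open-weight Llama model on a local-
# language text corpus (e.g., Vietnamese or Bahasa Indonesia). Assumes
# transformers, peft, datasets, torch, and accelerate are installed; the
# model ID, data file, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "meta-llama/Llama-3.1-8B"  # example open-weight base model
corpus = load_dataset("text", data_files="vietnamese_corpus.txt")["train"]

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# LoRA keeps the fine-tune cheap: only small low-rank adapter matrices are
# trained while the base model's weights stay frozen.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama-local-lora",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```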
Simon Milner: Yes, look, it's something that we are very focused on as a company. The world has genuinely seen benefits from the open internet: the ability of people with an internet connection, provided they're not in a country that blocks our services, to use our products very easily and pretty much for free, apart from the cost of their data, which hopefully in most cases is pretty cheap, and to connect with people all over the world, whether that's people they already know or many others they'll get to know. We want to ensure that is also true of AI. The benefits of AI are undoubtedly going to be greater the more open the world is to enabling people to use different AI models and products everywhere, just like they use internet-based services today.
So one of the issues that myself and my policy team are very focused on is trying to ensure that we don't get new barriers to openness in the AI space. For instance, one of the issues that's talked about is sovereign AI, this idea that each country should somehow own, frame, or protect the AI space within their country to be purely sovereign. That's a pretty dangerous concept in terms of where it might lead: a world of separated-off AI bubbles is going to be a much less beneficial environment than one where AI is allowed to flourish and be connected across borders, just like the internet that we've all grown up with over the last three decades.
So that's an area of pretty significant focus for us. Now, I'm not suggesting anyone is doing that just yet, but it's certainly an important area that is going to be debated, for instance, at the upcoming AI Impact Summit in India next February.
Bernard Leong: You can see how the development of AI and regulation are going hand in hand across this region. Do you have any perspective on what governments in Asia Pacific can do better to support open innovation while at the same time safeguarding their national interests?
Simon Milner: It's something that we are very focused on in trying to work with governments. Look, we are an American company, just to be clear about that. We're proud of that, and we're proud to be able to bring the investments we make in the U.S., which is our home, to the world. That's something we've been doing for the first 21 years of our existence, and that is absolutely the case when it comes to AI. We want to help ensure that the benefits of AI are widely shared, not concentrated, and that the U.S. and its allies can set the standards for responsible, transparent, and inclusive AI development.
In this region, there are a lot of allies of the U.S. There are some countries that are a bit more neutral, if you like, and we also want to help them set good policies: having a national AI strategy, being focused on AI skills and AI literacy. We have a number of programs of our own which are focused on that. Then there's ensuring that policy making is well informed. That's where we play a role. As I mentioned right at the start of our conversation, we need to be in the room when these conversations are happening, and whether it's in Vietnam or Australia or Korea or here in Singapore, that's how we show up.
In general, we find that most countries are just keen to figure out: how do we prepare ourselves? How do we make the most of these technologies now? How do we prepare for what's to come? How do we ensure we can adapt, because there are going to be some pretty significant transitions in economies as a result of these technologies, and that we're ready for that? So being open to listening to companies like Meta and many others, and experts like yourself, Bernard, is incredibly important. There's a whole range of things involved in that, but these big summits that have happened around the world, including in Korea last year and next year in India, are an important part of that dialogue. So we'll be there in force and really want to contribute positively to—
Bernard Leong: I think it's really—
Simon Milner: That conversation.
Bernard Leong: To be part of that conversation. I recently had a guest on my show who wrote a book on AI, and his view was that everybody should be part of that conversation. I always have this question on my list: what's the one question you wish more people would ask you about Meta's current AI strategy or public policies, but they don't?
Simon Milner: Aside from all the investments we've made to build great models and products, the conversation I actually want to have is about how we are going to use these technologies in the future. At the moment, I think most people just assume, all right, it's going to be apps on my phone, because that's how we think about most of the ways that people use online technologies. In our view, that is way too prosaic. It's way too much current-day thinking. Experience shows that if you assume technology is going to be just like it is today, you're going to be wrong. It's not going to be like that. So actually being open-minded about how else I might be using AI, other than on my phone, to me feels like the most important question.
Bernard Leong: Then if I were to follow up and ask you: how are we going to be using these technologies? Where are you going to point me? Wearables?
Simon Milner: In here, in our [Meta AI] glasses. As it happens, just because I'm talking to you, it feels easier not to wear my AI-enabled glasses, but when I do, they enable a completely different experience. Rather than me typing into my phone what it is I'd like some help with, the AI in my glasses sees what I'm seeing and hears what I'm hearing.
Now, we're still at very early stages. We've started to prototype some of these things, which we demonstrated at our Connect conference a couple of months ago. But we envisage a world, not too far off, where you could have always-on AI which can see what you see, listen to your conversations, and give you advice. It gives you the answers, maybe warns you about something that you haven't seen yourself. It can just be that kind of always-on assistant, which is completely different from looking down at a phone, and it enables you to actually be more present in the world than is currently feasible.
So we think that wearables are a fascinating area for AI use, and we're already seeing some terrific examples of what they can achieve. When I was at the APEC conference a couple of weeks ago in Korea, we had a booth with some of our devices. It was amazing to see the enthusiasm, yes, from some young folk, but also from people my age, policy makers in countries like Korea or Japan that are perhaps seen as rather more traditional, who were incredibly enthusiastic about our devices, wanting to experience the new versions of them, and very excited about bringing these technologies to their own countries.
So I think we've hit a really rich vein of excitement here around a new computing platform and a new way of experiencing AI. Not just experiencing AI, but AI being useful to me. That's the core of what this should always be about.
Bernard Leong: You know, when you talk about the glasses, one of the things I really thought about is that every time I'm in Japan, I'm lost in translation, because everything is spoken in Japanese and all the signs are in Japanese characters. Sometimes I really wish there were something like glasses that just translate exactly what I see, so that I know exactly where I'm going, just at a very basic level.
Simon Milner: It's very basic, but also much easier than getting your phone out and pointing it at something. To just look at that sign, or listen to the announcement on a subway train, and be able to understand it without having to get your phone out and find the relevant app, et cetera. I mean, we're still building some of the technology, but some of it is already integrated into products which are being used at a reasonably high level. There's so much more to come in this space. It's incredibly exciting, and as somebody who's been wearing glasses for, you know, more than 50 years, I'm pretty excited about it.
Bernard Leong: You got to show me the glasses next time when I pop by Meta.
Simon Milner: Next time. Absolutely. We'll do that.
Bernard Leong: Okay. So I have a traditional closing question. What does great look like for Meta's AI initiatives in the Asia Pacific for the next few years?
Simon Milner: The key thing for us is that, on the one hand, we want to supercharge the benefits that our family of apps already provides for people by further integrating AI features within them: for individuals, well, we're all individuals, for communities, for businesses, for whole economies. So there's a lot more to come on that front, and I'm incredibly excited about that.
Then there's also the whole new product area. I've already talked about wearables, and I think we're all excited to see what Alexandr Wang and his team at Meta Superintelligence Labs are going to produce. So look, we're going to build on success. We're going to build on what we've done as a company, a company that is very focused on bringing the benefits of these technologies to billions of people, and in the Asia Pacific region we've been, I think, pretty successful so far. But wow, there's so much more that we can achieve over the next three to five, and I think 10 to 15, years. Bernard, let's not stop at five. If you're interviewing Mark, and I know you'd love to do that, he would say it's not about five years. Let's talk about 15 and what—
Bernard Leong: 30 years I think.
Simon Milner: It's an incredibly exciting period to be working for a company like Meta and to be focused on these technologies. I love the fact that you're a real champion for being excited about technologies while being realistic that we've got to manage some of the risks associated with them. By being engaged together in a conversation about this, we can all move forward and ensure, as far as practically possible, that everybody gets to benefit. That's certainly what we're about, and I know that's what you are about with this podcast. I appreciate you having me.
Bernard Leong: So Simon, many thanks for coming on the show, and thank you for spending quality time with me to really help me understand a little bit more about Meta's policy thinking on AI and all the different initiatives. Of course I'm excited about the wearables, but in closing, I have two more questions for you. Any recommendations that have inspired you recently?
Simon Milner: Look, there's a book called From Strength to Strength by Arthur C. Brooks, which I read recently and which was recommended to me by a former colleague. It's all about how our intelligence changes as we get older. As I've mentioned, I'm in my fifties now and thinking about aspects of that, and it's a really interesting book. I know you're very focused on youngsters, but if you have any listeners who are also in that generation, I'd really encourage them to pick it up. I thought it was a great read.
Bernard Leong: I've read that book, actually. I'm also in my fifties, and I'm also thinking about the transition of roles.
Simon Milner: Bernard. No way. I can't believe it's true.
Bernard Leong: Last but not least, how can our audience follow your work or stay updated on Meta's initiatives in the AI space across Asia?
Simon Milner: Okay, well, look, I'm pretty active on LinkedIn, so that's a good place to get updates on what I'm involved in. We also have an active newsroom where we post about what we're doing, so I'd encourage people to check that out. We're also pretty active on social media, so follow some of our leaders, like Mark; whenever we launch something new, Mark will post about it on Instagram or Threads or Facebook. So check those out. I'm really delighted that you and your audience are interested in what Meta is up to.
Bernard Leong: You can definitely find the podcast anywhere, so subscribe to us at Analyse Asia. We're on LinkedIn, YouTube, Spotify, and everywhere else, including Meta's platforms as well. So Simon, many thanks for coming on the show, and I look forward to speaking to you soon.
Podcast Information: Bernard Leong (@bernardleong, LinkedIn) hosts and produces the show. Proper credits for the intro and end music: "Energetic Sports Drive". The episode is mixed and edited in both video and audio format by G. Thomas Craig (@gthomascraig, LinkedIn). Here are the links to watch or listen to our podcast.