The AI Industry Is Building Modern Empires with Karen Hao

Karen Hao, author of "Empire of AI," exposes how AI companies function as modern empires, extracting resources and exploiting labor in a quasi-religious quest for AGI that demands public accountability.

Fresh out of the studio, Karen Hao, investigative journalist and author of "Empire of AI," joined us in a conversation to unravel how companies like OpenAI, Anthropic, and xAI have become modern empires reshaping society, labor, and democracy itself. Karen traces her journey from mechanical engineering at MIT to becoming one of the tech industry's most critical voices, sharing how Silicon Valley's innovation ecosystem has been distorted toward self-interest rather than the public good. She unpacks the four characteristics that make AI companies mirror colonial empires: resource extraction through data scraping, labor exploitation of annotation workers, knowledge monopolies where most AI researchers are industry-funded, and a quasi-religious quest to build an "AI God." Throughout the conversation, Karen reveals OpenAI's governance dysfunction stemming from its contradictory non-profit-for-profit structure and shares the inspiring story of Chilean water activists who successfully blocked Google's data center from draining their community's freshwater resources. She explains how Sam Altman's plans for 250 gigawatts of data center capacity, equivalent to the power demand of roughly four dozen New York Cities, would be environmentally catastrophic, while showing how U.S. export restrictions on China paradoxically spurred more efficient AI innovation there. Last but not least, she argues that empathy-driven journalism remains irreplaceable and calls for global citizens to hold these companies accountable to the broader public interest.


"These empires are amassing extraordinary amounts of resources by dispossessing a majority of the world. That includes like the data that they're extracting from people by just scraping it from online or intellectual property that they're taking from artists and creators. Most AI researchers now work for the AI industry and/or are funded in part by the AI industry. Even academics that have stayed within universities are often funded by the AI industry, and the effect that that has had on knowledge production is akin to the effect we would imagine if most climate scientists were bankrolled by the fossil fuel industry. I cannot stress enough how much they genuinely believe that they are on the path to creating something akin to an AI god, and that this is going to have cataclysmic shifts on civilization." - Karen Hao, Author of Empire of AI

Profile: Karen Hao, Author of Empire of AI and Investigative Journalist (LinkedIn, Personal Website)

Here is the edited transcript of our conversation:

Bernard Leong: Welcome to Analyse Asia, the premier podcast dedicated to dissecting the power of business, technology, and media globally. Today we explore one of the most consequential questions of our era: Who owns the empire of AI? With me is Karen Hao, investigative journalist and author of Empire of AI, a sweeping examination of how companies like OpenAI, Anthropic, and xAI have become modern empires shaping the future of society, labor, geopolitics, and democracy itself. Karen, welcome to the show.

Karen Hao: Thank you so much for having me.

Bernard Leong: I think we crossed paths through a couple of mutual friends, and they all recommended you to me. So I have to thank Grace Shao and Jing Yang for getting you onto the show. Let's start by understanding how your career journey began. What led you from mechanical engineering at MIT into journalism, and now into deep coverage of artificial intelligence?

Karen Hao: It was very much by accident that I got into this career path. I studied engineering thinking that I would go work in the tech industry and have a very traditional trajectory in tech. But when I started working in San Francisco at a startup and being part of Silicon Valley, I realized that the Silicon Valley model of innovation incentivizes profitable technologies, but not necessarily technologies in the public interest. That's why I went into tech—I wanted to build tech in the public interest. When I didn't really see a pathway forward within San Francisco, I thought maybe I could just switch to a more publicly aligned industry, and that's when I went into journalism and started covering the tech industry. It just so happened that in my first job I was assigned to cover artificial intelligence.

Bernard Leong: I think you've covered the tech industry for years. You've seen both idealism and its dysfunction as well. What are the biggest lessons you've taken away from reporting on AI within Silicon Valley and the culture that shapes AI today?

Karen Hao: My biggest takeaway is that the initial feelings I had working in tech have largely played out and gotten worse. This industry has become more and more powerful over time and has wielded that power in ways that continue to distort the ecosystem of innovation towards making technologies that primarily benefit them, rather than benefit a broad base of people around the world. One of the important functions of journalism is to hold power to account. Whereas before, I think tech journalism was seen more as similar to science reporting, where primarily the role is for the journalist to explain a technology to a lay audience, now I think of tech journalism as more akin to political reporting where you are trying to hold a core power center of the world accountable to other people.

Bernard Leong: Maybe we should just get straight into the main subject. I want to start with the core of your book. Why do you propose that companies like OpenAI, Anthropic, and xAI can be understood as empires of AI? I think that's the core part of how this book is built.

Karen Hao: Very related to what we were just talking about, these companies have consolidated an extraordinary amount of not just economic power, but also political power. The way that they're going about consolidating this power is very similar in terms of tactics to what I call the empires of old. There are basically four parallels that I lay out in my book, but you could broadly summarize them as this: these empires are amassing extraordinary amounts of resources by dispossessing a majority of the world. That includes the data that they're extracting from people by just scraping it from online, intellectual property that they're taking from artists and creators, and the economic value that they're extracting from labor that rarely sees proportionate economic value in return. It also involves this ideological quest that undergirds the AI industry's expansion. If you look at what the AI industry does purely from a profit perspective, you actually wouldn't get a full picture of why it's doing what it is, because the business logic is not 100% sound. But there is this kind of quasi-religious quest that also pushes the industry towards building larger and larger models and consuming more and more resources to do so. It's because they believe that they're ultimately trying to build an artificial general intelligence, an AI system—a theoretical AI system—that would be akin to a human or what some people colloquially call an AI God. So the fusion of capitalism and ideology, the extractivism and the expansionism—all of these things are essentially very fundamental characteristics of empire.

Bernard Leong: The four traits you mentioned are resource extraction, labor exploitation, knowledge monopolies, and this good versus evil narrative, coupled with this obsession with building AGI. Which of these do you think the world still underestimates most when they see these AI companies at work?

Karen Hao: It's a good question. I would say the third one is probably the knowledge monopolies. Most people don't realize how much AI research as a scientific discipline is no longer truly scientific because most AI researchers now work for the AI industry, or are funded in part by the AI industry. Even academics that have stayed within universities are often funded by the AI industry. The effect that has had on knowledge production is akin to the effect we would imagine if most climate scientists were bankrolled by the fossil fuel industry. We would not get a clear picture of the climate crisis, and we're not getting a clear picture of the true limitations and capabilities of AI, especially these frontier models. When I speak with policymakers, this is central to their inability to consult independent experts to put together sensible policy for holding the companies accountable, or even just fact-checking the companies' claims. Most of the experts that they can consult are in some way financially tied to the industry. So there's just this echo chamber of self-reinforcing opinions that all lead to people agreeing with what the AI industry says and its agenda. That is a very shaky foundation upon which the industry has been allowed to continue doing what it's doing without really having many people challenge what it says.

Bernard Leong: Truth be told, I was working in AI in the Human Genome Project, and when I recently looked at all the LinkedIn profiles of the people I worked with, including myself, none of us escaped working for a tech company. One curious question: what's the one thing you know about the global AI industry now that very few people do, but they should?

Karen Hao: Different from the previous answer, I guess I would pick the fourth parallel that I draw, which is this ideological aspect. The thing that people don't really pick up on is the degree to which there are what I describe as quasi-religious movements in Silicon Valley. It's really hard sometimes to explain the cultural aspects of San Francisco in particular to someone who's never been there, who's never talked with these people. Because anything that you say to describe these people sounds crazy if you haven't been exposed to their ideology directly. I cannot stress enough how much they genuinely believe that they are on the path to creating something akin to an AI God, and that this is going to have cataclysmic shifts on civilization. They've developed an entire vocabulary around this. When you listen to them talk, if you have not been exposed to their fanaticism in the past, you wouldn't even understand what they're saying because their vocabulary has become so specialized.

Bernard Leong: A lot of it is actually due to science fiction. But science fiction also gives you parallels. Like in Foundation, they end up destroying all the robots, and in Dune, the same thing happened—they destroy all the AI as well.

Karen Hao: There is definitely a science fiction element to it. A lot of the people who work in the AI industry and are big fanatics of creating AI were huge science fiction buffs growing up. There's this kind of tantalizing premise of now getting to be part of the quest to turn these science fiction dreams into reality.

Bernard Leong: One thing I picked up from the book, and also from looking at all the different AI books I've read, is that the founding of OpenAI—the origin story—is actually quite interesting. Particularly the Sam Altman and Elon Musk dynamics and that unusual non-profit-for-profit structure. How did it create the kind of governance problems that we see today? Also, all the offshoots—the Anthropic team came out of OpenAI and broke away to start their own company, and now we have SSI and all the different companies that evolved from that.

Karen Hao: At the time, Musk and Altman created OpenAI as a nonprofit specifically because Musk in particular really did not like the fact that Google was dominating AI development and Google was a for-profit. So they thought, let's create something that's the antithesis to Google. After reporting out the inside story of OpenAI's first 10 years, in hindsight, my guess is that there was not high conviction from Altman in keeping OpenAI a nonprofit even back then. I think he probably saw high utility in starting it as a nonprofit to accumulate some goodwill, create a positive narrative around the org, and also to recruit talent, because they couldn't compete with Google on salaries, but they could compete with Google on a sense of mission. That is in fact what allowed them to recruit Ilya Sutskever, who became the chief scientist of OpenAI and was the first major hire for the company. But of course, within a very short amount of time, OpenAI decided they wanted to pursue a path of AI development that was very capital intensive. They couldn't remain a pure nonprofit, so they created this for-profit arm within the nonprofit, and that is the heart of all of the governance challenges that OpenAI then faced. They recruited some people on the basis that OpenAI is a nonprofit and they recruited other people on the basis that OpenAI is a for-profit. It's one of the only organizations I've ever covered where there's a fundamental confusion among employees about what the organization actually is—is it mission driven or is it profit driven? Because of that clash, there are different factions within the company that believed in one or the other, and they would have these really big fights where decisions about building this technology or that technology, or deploying it faster or slower, were tied up in their philosophical differences about what they believed OpenAI was. Anthropic was founded by former OpenAI executives because they disagreed on this fundamental premise.

Bernard Leong: I guess this was one of the aspects. Two years back we had that firing of Sam Altman, which was also quite dramatic in tech history. Looking back now, two years later, how do you read the board's decision then? In hindsight, does how things played out then versus today reveal that the governance of AI has actually accelerated, compared to when it seemed to be within control at that point in time? Or was this whole AI train always just going to go in the trajectory it was on?

Karen Hao: I don't think the board makeup at that time was in any way effectively holding the company accountable. Clearly it did not, because when it tried to hold the company accountable, it failed. I think part of it is because inherently even that board was acting in a very anti-democratic way. They weren't broadly consulting lots of different stakeholders about the direction of the company. They were deciding amongst themselves what the direction of the company should be, just in the same way that OpenAI's current executives are deciding amongst themselves without actually consulting with the public about what the direction of OpenAI should be. It's just that each group has a different idea of what the direction should be. Basically what happened was that the board was leaning much more towards the doomer ideology, which—as part of these quasi-religious movements—believes that AGI is imminent and that the cataclysmic shift that arises will potentially be devastating to humanity. OpenAI's actions at the time were more boomer leaning, which is the belief that AGI, the cataclysmic shift that comes, will be extremely utopic. So they clashed. The board was deeply concerned about the direction that Altman was taking the organization, and they were also deeply concerned about feedback that they were getting about Altman from his senior lieutenants, who were flagging that they felt he had a pattern of abusive behavior, of manipulative tactics, of being untrustworthy. That is what led them to decide that they should oust him. But you could say that OpenAI has since then accelerated more because the board is no longer philosophically opposed to the CEO and executives. The board is now pointing in the boomer direction: pedal to the metal, accelerate the expansion of this technology as quickly as possible. But in terms of the governance of the org, I don't think we were in a better state before than we are now. None of it was a good, robust setup for governing one of the most consequential technologies of this era.

Bernard Leong: Now a lot more has been revealed with the recent deposition from Ilya during the xAI-OpenAI trial. You also have some reporting from The Information, which recently talked about Dario almost coming back to take it over and all that. Had that alternative history happened, it would be really interesting to see what the alternate universe looked like. But let me ask this question: given that many of the founders have now left OpenAI, and parts of the founding team have created their own labs to pursue their own idealized versions of what AI should be, are we witnessing more fragmentation or more consolidation of AI power amongst all these players?

Karen Hao: We're definitely seeing more fragmentation in terms of the number of competitors that there are. It is not a coincidence that every single billionaire has an AI startup. I think this is evidence number one for realizing that the AI race is driven in large part by ego. Each billionaire wants to make AI in their own image and has a story that they tell themselves and the public about why their version of AI is superior to all of the other billionaires' versions of AI. I said fragmented, but I think it would be a mistake for people to view this as somehow power being distributed more evenly across the broader public, because of course it's still all elites that are consolidating their approaches to AI.

Bernard Leong: It's actually maybe concentrated between two countries, which I didn't mention—China and the U.S. There is this point of view that U.S. policymakers and even the American AI builders use China as a rhetorical foil to resist regulation. I don't know how much your understanding of the China AI ecosystem is. How do you compare and contrast the U.S. and China AI ecosystems from your perspective, just seeing how things have played out publicly so far?

Karen Hao: I'm definitely not an expert on the China AI ecosystem, and I would direct people once again to Grace and Jing's reporting. They're my go-to sources as well. But broadly, just from reading some of their reporting and watching the movements in the industry, I would say the biggest difference this year is that for the last few years, Chinese companies did not have access to cutting-edge computer chips because of U.S. export controls, so many of them ended up taking a different path from Silicon Valley. Instead of trying to just scale, scale, scale and build larger models based on the amount of computational resources they have, they innovated around the constraints and found ways to do it much more efficiently. That is what's leading to a blossoming of a lot of different, more efficient models that they are open sourcing. This has led companies in the U.S. as well as academics in the U.S. to increasingly turn to Chinese open source AI models instead of American closed source ones, which has been one of the most fascinating trends that's happened.

Bernard Leong: It's quite interesting to see Anthropic's CEO talking about Qwen as the model that they use, rather than OpenAI's open models. Isn't that ironic from that point of view?

Karen Hao: It's super ironic. I think there's this really big narrative in the U.S. and Europe right now—I don't know if this is also true in Asia—that regulation is antithetical to innovation. Yet this case study shows that with the right regulation you can spur enormous amounts of innovation. Because when the U.S. placed export controls on these Chinese companies, that was a form of regulation that hindered their ability to take the same path as Silicon Valley, and then within that pressure cooker, they figured out a way around it. It's just incredibly funny how things panned out this way.

Bernard Leong: I find it ironic because the most controlled environment is open-sourcing it, and the most democratic environment is trying to control it. So it seems like each is doing what you would expect of the other. But for a global audience, what are the biggest misconceptions governments have when they think about how AI is developed, deployed, and monetized?

Karen Hao: A common misconception that I've heard is people believing that American companies somehow have American values. They think that Silicon Valley, because it is in the U.S., which is on paper a democracy, that somehow these companies are also ultimately trying to build democratic AI. OpenAI perpetuates this narrative by saying in company blog posts that they want to bring democratic AI to countries around the world. This is how they're courting a lot of governments. What I'm trying to help people realize in part through the book is that actually Silicon Valley should not be thought of as part of American democracy. It is its own power center with its own political agenda that is not in line with the American political agenda most of the time. It's actually an inherently very anti-democratic place, which is why you see them really trying to exert more and more control and develop closed source models rather than the open source approach. I think if policymakers understood that more—that they are ultimately interfacing with companies that have their own ideological bent—that would lead to more sound understanding of who they are actually interacting with.

Bernard Leong: I thought one of the most striking parts of your book, which I read and use in a class I teach on generative AI at the university here in Singapore, is the part about the real AI supply chain. We talk about the annotation workers, the content moderators, the communities that are hosting data centers. I do tell one of those stories: the AI that you are using shouldn't be taken for granted because there are a lot of people who put blood, sweat, and tears into it. What does that actually teach you about the things that Silicon Valley—the AI companies—consistently overlook?

Karen Hao: I don't know if it's overlooked so much as deprioritized, or something they do not care about or are negligent of. It's the fact that they're perfectly comfortable with the degree to which their technologies are built on labor exploitation and environmental extraction. There's not any confusion among employees about the fact that this happens. It's more just that they justify it, because they use a utilitarian, "the ends justify the means" kind of logic, where they think that if they're ultimately on the path to AGI, then it doesn't really matter what harms are inflicted along the way. That is the ruthlessness of the empire.

Bernard Leong: Then of course there's the story of the Chilean water activists who resisted a data center. What can a global audience, or other parts of the world, learn from this experience? I thought it was quite an interesting story to talk about.

Karen Hao: There's this group of Chilean water activists called MOSACAT that I profile in the book. In 2019, they realized that there was a Google data center being proposed within their community that would use freshwater resources to cool the facility. When they started looking into the numbers, they were completely alarmed by the degree to which this facility could take up their water resources at a time when Chile was suffering a historic drought. So they just decided not to accept the project and they essentially protested in every form possible. They protested in the streets and canvassed their neighbors. They protested to local politicians and really applied a lot of pressure on local politicians to do something to stop the project. They were so effective at raising the alarm around this particular facility that it escalated all the way to Google Mountain View, which then sent representatives to speak with the communities, and then all the way to the Chilean national government, where the government started thinking about how to actually make sure that other data center facilities that come into the country do not end up engaging in this kind of extraction. The outcome, which is still ongoing, is that thus far Google has agreed with the community that they're not going to use freshwater for cooling the facility anymore, which is a major win. Also, the Chilean government has created a roundtable to consult with residents and activists on all future data center projects to establish a more democratic governance structure around these projects. To me, this was such an inspiring story of how to actually hold these companies accountable. You could say that these Chilean water activists have the least amount of power in the global world order. They are in a developing country and in an impoverished neighborhood within that developing country. They all work on a volunteer basis, they don't have any funding, and yet they're going up against one of the most powerful companies in the world from the most powerful nation in the world. They were still able to establish significant guardrails around that company's activities as well as future activities to come. That is the kind of spirit that I think everyone, no matter where you sit in society, needs to have to hold these companies and technology development accountable to the broader global public. Everyone should feel that they have the agency and voice to push back when these companies do things, whether in their local community or their country or their broader global community, that they don't actually agree with. If we all did what the Chilean water activists did, I think we would get to a much better place with technology development, where we see the innovation ecosystem bend more towards the public interest.

Bernard Leong: There are similar cases that happened in the last couple of months in the U.S., where Microsoft and Google data center projects were blocked by local communities. I think we all know, even as AI people ourselves, that the environmental demands of AI—specifically energy, water, and compute—are rarely part of that conversation. We talk about it, but people don't realize how important it is. How severe are these externalities, and what kind of trajectory are we now on in terms of the natural resource demands of where AI is going?

Karen Hao: When I wrote the book, the numbers that I have in it were based on reporting that had come out at that time that OpenAI was considering building supercomputers that could eventually, within a few years, be powered by five gigawatts. Five gigawatts is almost equivalent to the average power demand of New York City. So they were talking about one facility that could take the power demand of this huge metropolis. That in and of itself was already shocking. Now fast forward a year and Altman is talking about building 250 gigawatts of data center capacity, which he estimates could cost $10 trillion. That is roughly four dozen New York Cities' worth of power, because New York's average power demand is about 5.5 gigawatts. If we actually follow through with what the AI industry wants—and that's just one CEO, one company—then think about all of the other companies that would then try to compete to build similar degrees of infrastructure. That would be incredibly environmentally devastating, because the energy used to power those facilities would largely come from fossil fuels around the world: there is not enough nuclear capacity, we wouldn't be able to build nuclear capacity fast enough, and renewable energies are just not sufficient for running these facilities 24/7. That would mean huge amounts of emissions going into the atmosphere as well as huge amounts of pollutants going into people's air and water, leading to an exacerbation of existing public health crises. We cannot afford to allow this to come to pass.

Bernard Leong: But then also, do you feel that frontier model research is slowly reaching some kind of asymptotic ceiling and maybe shifting towards products? What does that really mean for AI development in the next five years?

Karen Hao: What we've seen is that scaling the base models specifically is approaching an asymptote. But the companies are now shifting to continuing to scale the inference compute, and that hasn't approached an asymptote yet in terms of how Silicon Valley wants to approach things. Just to highlight again, even though we've already talked about this, this scaling is actually completely scientifically and technically unnecessary, which is why we've seen Chinese companies be able to do the same exact thing with significantly less compute resources. But from Silicon Valley's perspective, this is what they know. This is how they've been making gains over the last few years. It is the most intellectually lazy yet, at least it seems for now, the most guaranteed way of achieving the kinds of gains they hope to achieve. So they're just going to continue pursuing that. That's why even though we've seen scaling base models reach an asymptote, Altman is still talking about building $10 trillion of data centers. It's because the overall compute scaling has not reached its end from Silicon Valley's perspective.

Bernard Leong: But then there's also the problem with evaluation. Not all AI evaluation is accurate because we don't know what the training data is. Whether it's an open source or closed source model, the companies don't reveal their training data. One of the questions even I have myself is: what should policymakers or even the public understand about the limits of the current AI benchmarks? Is there a reliable benchmark, actually?

Karen Hao: What they should understand is that benchmarks are extremely faulty, in part because even determining what benchmark you should use to measure a model is more art than science. Why do we decide to measure some model's capabilities on the LSAT versus the AP biology exam versus a game versus complex logic puzzles? That's why benchmarks are constantly shifting again and again in ways that are not exactly helping improve public clarity. But the other aspect which you were getting at is that we literally don't know what is being put into the models, and therefore any benchmark measuring the capabilities—the results—cannot be relied upon because literally the essence of the science of deep learning is that you need to know what goes in to know whether the capabilities on the way out are actually learned capabilities or just memorization and regurgitation of the training data. All of the benchmarks of closed source models are effectively useless.

Bernard Leong: If I were to ask you to look at it from the other side and, just for the sake of steelmanning them, provide an argument in favor of AI development, what would the argument be? What would be the most powerful argument for AI advancement?

Karen Hao: For scaling? Their best argument is that this is an approach that has worked, that is the most predictable because it follows scaling laws. When you are able to advance AI based on certain types of observable measures, then you can know roughly in advance how capabilities might shift and change. That in and of itself is helpful to figure out what outcomes—societal or economic or whatever—might come of the next generation of models. The other one being that if AGI is going to arrive at the end of this path, then it's all going to be worth it. So I think there are multiple different arguments and tactics that they usually take to justify why they're doing this thing.

Bernard Leong: What is the one question you wish more people would ask you about your book, Empire of AI, but they don't?

Karen Hao: I've actually been surprised by the diversity of questions that I've been asked about the book; I feel like I've covered pretty much every aspect of it. But one of the things that I don't cover as often is the journalistic process that went into producing the book.

Bernard Leong: So if I ask you what's the journalistic process that produced the book, can you talk about it now?

Karen Hao: I think this is part of why I'm not very bullish on the idea that AI is ultimately going to take away a ton of jobs and somehow produce the same quality of work in those jobs, especially when it comes to information processing. Some people say, "Well, journalism is ultimately information processing, so why are we not automating journalism away?" The core thing about my reporting process was that the number one tool for me to connect with people, to ultimately get all of these details and find all these stories, was empathy. It required me traveling to a lot of different places and sitting with people for long stretches of time to slowly give them a listening ear and to slowly tease out all of the different lived experiences they had in grappling with the AI industry. I don't think that can be overstated. Some of the Kenyan workers, for example—I spent a week in Kenya and my interviews with these workers were hours and hours long, sometimes the entire day. With OpenAI researchers, there were many who sat with me for 10 hours to recount all of the different things that had happened during their time at the company. It was ultimately because I took the time to ask them, to build a connection with them, and then to listen and just keep listening and listening, and give them that sense that they finally were able to talk about something that was on their mind and that they had lived through. I hope that ultimately it's not just the content of the book that is a celebration of the importance of centering people and centering humanity, but that the process was as well.

Bernard Leong: I think of the book as a reflection of where the acceleration is going, some of the human costs it carries, and what we need to do to address those costs. That's my view after reading it. Even though I'm in the AI space myself as an entrepreneur and even teach it, I keep those costs in mind, knowing that in order for us to get here, these are the costs that we have to bear. But maybe my last and traditional closing question: what does success look like for you with your book, Empire of AI?

Karen Hao: Sparking that conversation that you just mentioned. Helping people realize that there is another side to the equation that is often not talked about. Hopefully once we have a more nuanced picture of all of the costs and benefits, then we can really map out how to make sure that AI broadly benefits the global public rather than just a tiny few.

Bernard Leong: That's a good place to end. Karen, many thanks for coming on for the conversation. I truly enjoyed your book; it took me an entire weekend to think through some of the things, and it even fed into some of the things I teach at the university, so that people understand that with all these advances that AI has brought about, just because you can summarize text and generate a picture doesn't mean it comes without actual human cost. In closing, I have two questions for you. Any recommendations that have inspired you recently?

Karen Hao: I really love the book Hope in the Dark by Rebecca Solnit, which on the surface most people would think is not related to my book, but it is really about how to continue having human agency in a world that often feels like it's trying to extract that agency away, and how to build progressive movements that lead to social progress, not just technological progress. I really enjoy reading Brian Merchant's Substack Blood in the Machine, as well as this newsletter from Latin America that's in Spanish called La Neta. It's a Latin American perspective on AI development.

Bernard Leong: How can my audience find you and your book and your ongoing work?

Karen Hao: I'm on Twitter, BlueSky, LinkedIn. That's where I post all of my work. I have a website, karendhao.com or empireofai.com.

Bernard Leong: You can definitely find us on any of the podcast distribution channels and subscribe to us at AnalyseAsia.com. Karen, many thanks for this conversation. I'm truly glad to have been able to talk to the author of a book that I've enjoyed reading so much. Thank you very much.

Karen Hao: Thank you so much, Bernard.

Podcast Information: Bernard Leong (@bernardleong, LinkedIn) hosts and produces the show. Proper credits for the intro and end music: "Energetic Sports Drive." The episode is mixed and edited in both video and audio format by G. Thomas Craig (@gthomascraig, LinkedIn). Here are the links to watch or listen to our podcast.
