The Future of AI Trust: Why Guardrails Actually Accelerate Innovation with Sabastian Niles
Fresh out of the studio, Sabastian Niles, President and Chief Legal Officer at Salesforce, joins us to explore how trust and responsibility shape the future of enterprise AI. He shares his journey from being a high-tech corporate lawyer and trusted advisor to leading AI governance at a company whose number one value is trust, reflecting on the evolution from automation to agentic AI that can reason, plan, and execute tasks alongside humans. Sabastian explains how Agentforce 3.0 enables agent-to-agent interactions and human-AI collaboration through command centers and robust guardrails. He highlights how organizations are leveraging trusted AI for personalized customer experiences, while Salesforce's Office of Ethical and Humane Use operationalizes trust through transparency, explainability, and auditability. Addressing the black box problem in AI, he emphasizes that guardrails provide confidence to move faster rather than creating barriers. Closing the conversation, Sabastian shares his vision of what great looks like for trusted agentic AI at scale.
"You can try to develop self-awareness and take a beginner's mind in all things. This includes being open to feedback and truly listening, even when it might be hard to receive. I think that's been something I've really tried to practice. The other area is recognizing that just like a company or country, as humans we have many stakeholders. You may wear many hats in different ways. So as we think of the totality of your life over time, what's your portfolio of passions? How do you choose—as individuals, as society, as organizations, as humans and families with our loved ones and friends—to not just spend your time and resources, but really invest your time, resources, and spirit into areas, people, and contexts that bring you meaning and where you can build a legacy? So it's not so much advice, but more like a north star." - Sabastian V. Niles
Profile: Sabastian V. Niles, President & Chief Legal Officer, Salesforce (LinkedIn)
Here is the edited transcript of our conversation:
Bernard Leong: Welcome to Analyse Asia, the premier podcast dedicated to dissecting the pulse of business, technology and media in Asia. I'm Bernard Leong. How can trust and responsibility be incorporated into enterprise AI? With me today is Sabastian Niles, President and Chief Legal Officer of Salesforce, to tell me how AI governance and digital transformation can be put together with AI today at Salesforce. First of all, welcome to the show, Sabastian, and congratulations on the launch of Agentforce 3.0. I just saw the announcement. Thank you for hosting me in the Salesforce Singapore office.
Sabastian Niles: Delighted to be here and happy to join you today.
Bernard Leong: So, prior to this, I did a lot of research on you to make sure that I get a good origin story from my guest. How did you start your career?
Sabastian Niles: The origin story? Well, I had always had a passion for the connections between topics, areas, and potential areas of impact. One in particular was the intersection of law, business, and technology. Actually, after I graduated from high school, my mother found an old article, a high school paper piece, the kind that asks, oh, what are these people going to do? What are their plans?
Plans can change, but the reason she pulled it up, after I had joined Salesforce in this capacity as President and Chief Legal Officer, is that there was a line in this article asking, what do you want to do when you grow up? I was in high school, and there is a very specific phrase: it says that I want to be a high-tech corporate lawyer one day. She was chuckling about that, but I do keep coming back to that intersection: how do I think about law, about policy, about impact, particularly in the context of the role of business and the private sector, whether in helping to solve some of our most complex problems, problems of other businesses, of people, of the planet, of communities, and how businesses can work together with the public sector and civil society. I very much believe, and it's been a key part of our ethos and our founders' at Salesforce, that business can be one of the greatest platforms for change and positive impact.
Bernard Leong: You had a long and illustrious career as a partner at Wachtell, Lipton, Rosen & Katz, where you stayed for a long time after graduating from Harvard Law School. I really admire that kind of tenacity and loyalty to the firm. How did you end up in your current role with Salesforce?
Sabastian Niles: I had one career for almost 20 years or so, and when I transitioned to this role at Salesforce, a close friend of mine who is the CEO of another company sent me a text message. He goes, oh, Sabastian, you finally have a real job. I said, what do you mean, I finally have a real job? But it is a transition. What's been fascinating for me, though, is that as a partner at Wachtell, Lipton, Rosen & Katz, leading various practice groups, I had the privilege of serving as a trusted advisor at scale to companies and governments in different contexts around the world, either on top-of-mind critical opportunities they were seeking to achieve or challenges they were seeking to navigate.
In joining Salesforce, I went from being a trusted advisor in that one context to joining a global company whose number one value is trust. There's a very meaningful and genuine through line there, because what I continue to find at Salesforce is that in so many ways we are serving as trusted advisors to so many different companies, industries, governments, and nonprofits, as each of these organizations and institutions seeks to transform or reimagine itself with AI, or to achieve its mission or purpose. They're really partnering with Salesforce. So that's been an interesting element: how to bring to bear, particularly for customers, partners, and the broader ecosystem, the kind of change they're trying to achieve with our solutions. So much of it revolves around, okay, how do you bring trust to it? How do you think about responsibility? How do you think about impact? That through line was a little more than I was expecting, if that makes sense.
Bernard Leong: Before we get to the intersection of trust and responsibility with enterprise AI today, I want to ask you: from your career journey, what are the valuable lessons you can share with my audience?
Sabastian Niles: Well, I always try to be cautious in giving advice. One size doesn't fit all, and everyone has their own context and situation. So, not so much advice as perspective: as much as you can, try to develop self-awareness and take a beginner's mind in all things. That includes being open to feedback. Are you able to really listen, even when that feedback might be hard to receive? That has been something I've really tried to practice, and I think it's an important area. The other is recognizing that, just like a company or a country, as humans we have many stakeholders. You may wear many hats in different ways. So as you think of the totality of your life over time, what's your portfolio of passions?
How do you choose, as individuals, as a society, as organizations, as humans and families with our loved ones and friends, to not just spend your time and resources, but really invest your time, your resources, and your spirit into areas, people, and contexts that bring you meaning and where you can build a legacy? So, not so much advice, but a north star that I think is important.
Bernard Leong: That's very good advice, and it leads into the main subject of today: Salesforce, agentic AI, trust, and responsibility. To baseline my audience, can you briefly introduce Salesforce and its global mission? How is it currently evolving in an AI-first world?
Sabastian Niles: Sure. At Salesforce, for about 26 years now, we have really focused on how we deliver on five core values. I start with values because they really inform how we think about supporting our customers and organizations in reimagining themselves with AI. Of our five core values, the number one value is trust. Then we have customer success, or customer and stakeholder success. Then innovation, then equality and sustainability. When we think of that portfolio of values, trust, customer success, innovation, equality, and sustainability, each of them has guided how we approach helping our customers connect with their customers in a whole new way.
We've been a pioneer, initially around predictive AI and now around generative and agentic AI. People would call us the number one AI CRM, but more broadly, we're very much led by our customers: what are our customers' priorities? What feedback are our customers giving us about what they need and what their challenges are? Some of them may be dealing with siloed, fragmented data, and may be siloed themselves, so we need solutions for that, such as our Data Cloud solution.
I highlight that because all of our solutions across our deeply unified platform are designed to enable our customers to connect more effectively with their own customers, or achieve their own objectives, in various ways. You mentioned Agentforce earlier, which we can talk about as well, if you'd like.
Bernard Leong: This is so interesting. One of the other things I want to come to, and to help my audience understand, is the concept of agentic AI. How do you currently define agentic AI, and how is it different from traditional automation or AI tools? I think you have a pretty unique definition.
Sabastian Niles: As noted, Salesforce comes at this specializing in enterprise-grade, enterprise-ready solutions, built in partnership. When we think of artificial intelligence, particularly the current era of agentic AI, people have said that Salesforce is really the one that defined it, or is leading it, through Agentforce.
The idea is that we will no doubt live in a future of humans with AI, advancing whatever the priorities may be: humans with agents driving stakeholder success together. These AI agents, under the supervision, control, and direction of the organization, of the teams, of the humans, are able to act, to reason, to plan, and to execute tasks to achieve the objectives that are set. As you can imagine, artificial intelligence is evolving in exactly that way.
So you really have this robust partnership between humans and AI agents. It raises the stakes around the need for responsible AI. When we listen to our customers, they're telling us they really want safe, trusted, responsible AI services that deliver at scale, with speed and trust. And we think it will apply across all sorts of very practical use cases. Singapore Airlines is actually one really incredible customer that's been using a number of our solutions to reimagine how they take their customer experience to the next level of personalization and impact, along with all their other goals. The reason I mention Singapore Airlines also goes to your AI governance point.
Singapore Airlines is partnering with us, and others, on thought leadership: how can we think of AI trust, AI responsibility, and AI impact to create, in that industry but really in any industry, new motions and new operating models, so that they are defining the future in a world of humans with AI and agents?
Bernard Leong: Coming from that point of view, what are your mental models for thinking about trust and responsibility in the era of agentic AI? You brought up a very good example with Singapore Airlines: the intersection of governance and ethics across the organization on AI itself.
Sabastian Niles: Well, when it comes to mental models or operating models, I come at it as follows. I think people in organizations and different contexts can wrongly view trust and innovation as somehow at odds.
Bernard Leong: They're not mutually exclusive.
Sabastian Niles: Exactly, and I think that's really fundamental. One of the key mental or operating models is that, rather than trust and innovation being at odds, particularly for the types of impact, change, and transformation that are possible now, trusted approaches, bringing trust into product design, development, deployment, and adoption at the outset and not as an afterthought, actually become an accelerant.
Bernard Leong: Yeah.
Sabastian Niles: It's more that trust is almost a propulsion: it gives people not just faith but actually enables you to go faster on these topics.
I think that's a critical piece of this, across whatever the use cases may be.
Bernard Leong: Do you have any thoughts about the use cases that show the shift from automation to agency, where trust and responsibility are clearly visible today?
Sabastian Niles: I think it's an important point. Ultimately it's what we find with our customers, and even with ourselves, because at Salesforce we say, to use a couple of clichés, that we need to eat our own cooking. We said we need to be customer zero. Our customers tell us they want to be Agentforce companies, they want to be agent-first, they want to be AI-first. So we say, okay, we need to show that model and be in the arena with them.
So we've been deploying agents, deploying Agentforce, whether it's help.salesforce.com, which is external-facing, or a whole slew of internal agents that our human teams can rely on.
Sometimes it is just providing 24/7 engagement and support. Sometimes it's really unlocking capacity to tackle projects that we otherwise wouldn't have the capacity to tackle. You've heard the concept we've been talking about at Salesforce: when you think of humans with AI driving success, it means you're going to have this mix of human labor with digital labor. Take the shortfalls in the healthcare system; we see that so much.
There are real shortfalls of people, and you have experts working around the clock. How could you unlock their capacity? It could be automating certain tasks or taking on routine administrative work. That's a critical part of all this. But then there's also: what does augmentation look like? How do you improve decision-making using AI around these issues? Or take a consumer example, OpenTable.
OpenTable is using Agentforce around the world to help people who want to find connection, community, good food, and reservations. They're doing that at scale to execute what is sometimes complex logistics.
Just as we have governments working on these things, and very large enterprises, financial institutions, manufacturing companies, a whole slew of folks. That brings up one piece you asked me about earlier, and apologies, I didn't really get to it: if we think of humans with AI agents working together, our vision for what trusted agency means is that it's going to have to have robust governance. It's going to need the right ethical guardrails, and it's going to have to have compliance at scale. That's a critical piece for any institution or enterprise, and not just of size or scale, because this also applies to small and medium-sized businesses; any institution or company of seriousness actually needs the right governance around it. What I mean by that is they need visibility.
Bernard Leong: Yeah.
Sabastian Niles: That's right. They need control. They need observability. What does it actually mean? What are the agents doing? When are the agents doing handoffs to humans? And when are humans able to say, okay, let's have the AI agent do it? By the way, here's the thing: we have these weekly AI summits where we go deep with our incredibly brilliant chief AI scientists and our technology and product teams. I and the rest of the C-suite are in the thicket with them every week going through all these items.
As we look to the future, but really the present that we're building, it's not just human-to-agent handoff interactions. It's actually agent to agent.
Bernard Leong: Yes.
Sabastian Niles: You can have agent to agent to agent, and then go back to the human. So when you think of these guardrails, there are going to be different sets of guardrails. Some are preventative, some are runtime guardrails, and some are self-improving guardrails. I get pretty excited about this because of the impact. But please.
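To make that taxonomy concrete, here is a minimal sketch in Python of the three kinds of guardrails Sabastian names. The class, the rules, and the thresholds are illustrative assumptions for this article, not Salesforce's implementation.

```python
class GuardrailViolation(Exception):
    """Raised when an agent action is blocked by a guardrail."""

class Guardrails:
    def __init__(self) -> None:
        # Preventative: checked before the agent is allowed to act at all.
        self.blocked_tasks = {"medical_diagnosis", "legal_advice"}
        # Runtime: checked on every tool call while the agent executes.
        self.max_refund_amount = 500.0

    def preventative_check(self, task: str) -> None:
        if task in self.blocked_tasks:
            raise GuardrailViolation(f"Task '{task}' must be handled by a human.")

    def runtime_check(self, tool: str, amount: float) -> None:
        if tool == "issue_refund" and amount > self.max_refund_amount:
            raise GuardrailViolation("Refund exceeds the limit; hand off to a human.")

    def learn_from_incident(self, task: str) -> None:
        # Self-improving: audited failures feed back into the preventative rules.
        self.blocked_tasks.add(task)
```

The point of the split is where each check runs: preventative rules gate what an agent may even attempt, runtime rules inspect each action as it happens, and self-improving rules grow from what the audit trail later reveals.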
Bernard Leong: It's interesting, because the guardrails are determined by humans; we have to set certain rules to make sure that things don't get out of hand. You also rightfully point out things like observability within the AI agents: what are they really doing? I also noticed in the Agentforce 3 launch that a lot seems to be centered on the command center and dashboard to help monitor the different agents and what they're doing. Because think about it: if you're an agent boss.
Sabastian Niles: If you yourself are running, commanding, managing a whole fleet of agents, you have your own agent span of control.
Bernard Leong: That's right.
Sabastian Niles: You need a command center.
Bernard Leong: That's right. Also, I know Model Context Protocol is a sexy topic now, but people are not thinking about the trust level, where you need to have some trust before the AI agent goes off and does something else. So if I were to ask this: we often talk about the promise of artificial intelligence, but what are the real-world consequences of untrusted AI from your point of view? Particularly, as you rightfully pointed out, in healthcare and financial services, these are domains where high trust is really required, because we hold so much confidential data about customers.
Sabastian Niles: You've touched on it, and that's exactly right: we're going to be seeing artificial intelligence deployed in contexts where the stakes are very high. Again, there are upsides and promise to that. There are a lot of areas where, unfortunately, medical mistakes have occurred, including things that affect people with fewer resources. So part of the promise is: how could you use artificial intelligence to raise the floor of that quality, so that every individual human essentially has access to a virtual team of experts? Humans, human physicians, absolutely, but also AI supporting those diagnoses and supporting the follow-up. And sometimes reducing burnout, because mistakes in healthcare and elsewhere sometimes come from humans who are tired and burned out; maybe AI can help around that. But to your point, what are the stakes? You highlighted some of them, and we still see this with generative AI in the issues that come up: accuracy, hallucination.
I think trusted GenAI does rely on being grounded in trusted data, but also on having visibility and understanding: where are the data sources coming from, and how do we understand them? How do you also have consistency across different data domains and areas?
But let me step back a little. When we think of AI agents, we think of the different apps, the different versions of AI, the metadata, and adding agent layers to these things. I highlight all this because something we also think about quite a bit in these areas is that there are five attributes of an AI agent, of what it is supposed to do. One is the role: what's the task they're doing, working with humans? Another is the data or the knowledge: what knowledge are they supposed to have access to, or maybe what knowledge or data are they not supposed to have access to? That's a guardrail. Then you have a whole set of guardrails: what should they do and not do? Then, and I'll get to this in a moment, the channels: where are they going to work? Maybe in Slack.
Slack has been such an incredible channel. I was just speaking to a customer the other day, and they said Slack is basically their operating system, which is really exciting. My company runs on Slack; your company runs on Slack. If you have feedback, I want to make sure you're successful.
Bernard Leong: With AI, by the way.
Sabastian Niles: Oh, that's terrific. Great to hear that. But when we think of agents: what are the roles? What are the capabilities, the tasks, what should they be able to do? What are the guardrails? What are the channels in which they work? What's the data, the knowledge they have access to? Here's the interesting and complex element. Take you and me: we've got jobs. As humans, what are the elements of our jobs? We have guardrails: codes of conduct, requirements, things we're supposed to do and not do. We have responsibility. We have roles.
What's the knowledge, the data we're supposed to access? What are the channels, and the confidentiality? Where do we work? Do we do our work in Slack? On the phone with voice? In person? That, I think, is an interesting element when you think of these five dimensions of how you would design and craft an AI agent.
They're actually very analogous to the dimensions of how we approach our own work, not to overly anthropomorphize. Yet I think there are important insights there when we think of that future human-AI workforce, human labor coming together with digital labor. It gives us different insights, including how you think of trust, responsibility, and impact, so that you can achieve the results you want to achieve.
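As a concrete illustration of those five dimensions, a hypothetical agent "job description" might look like the following in Python. The field names and example values are invented for this article; they are not an Agentforce schema.

```python
from dataclasses import dataclass

@dataclass
class AgentSpec:
    role: str                       # what the agent is for
    tasks: list[str]                # capabilities it may exercise
    knowledge: list[str]            # data sources it may read
    forbidden_knowledge: list[str]  # data it must never touch (itself a guardrail)
    guardrails: list[str]           # behavioral rules and escalation policies
    channels: list[str]             # where it works, e.g. Slack, web chat, voice

support_agent = AgentSpec(
    role="customer support triage",
    tasks=["answer FAQs", "open tickets", "hand off to a human"],
    knowledge=["help articles", "order history"],
    forbidden_knowledge=["payment card numbers"],
    guardrails=["ask before recording", "escalate refunds over $500"],
    channels=["Slack", "help site chat"],
)
```

Just as with a human job description, the spec says what the agent does, what it may know, what it must not do, and where it works.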
Bernard Leong: How would you think about situations where the agentic AI is a bit like a black box, where you really do not know what it is thinking or doing? How do you put guardrails into that kind of situation?
Sabastian Niles: I think this is critical. For enterprise-grade, enterprise-ready agentic AI systems, there are going to be three elements that are fundamental, and this is how we build and develop with our customers.
We're going to need transparency. We're going to need explainability. We're going to need auditability.
Bernard Leong: Yes.
Sabastian Niles: Meaning: what occurred, and why? Being able to go back and see the different steps. That's different from what may be consumer-facing and consumer-relevant, but within the enterprise there's a heightened expectation of understanding exactly what occurred, and we actually do believe it's achievable if you build the technology in the correct way, so that you actually deal with this black box.
Bernard Leong: The audit trail is important, because one of the things about agents is that they could make 1,000 decisions and 999 will work. Then there is just one decision that misses something, and you need a way to go back and audit the trail: why did it do this wrongly? Just like human beings.
Sabastian Niles: Just like human beings. This is exactly right. When we think of the roles, the jobs, and the needs of the future, what does quality control look like in an agentic era? How do you think about errors and checking? Because it's not just, okay, were mistakes made, or did it fully align with what we wished to occur. It's also: how do we drive better effectiveness?
I think the other piece is: what are the new roles, new tasks, and new jobs that get created around this? Some of them involve the people who are going to really understand and unpack what exactly the AI did: experts in that collaboration between human and AI. Then, obviously, you can imagine legal and policy issues, but also, for any member of a C-suite, a CFO, a chief operating officer, all these different folks: how will they lead effectively and reimagine some of their own functions, priorities, and teams when you have the possibility of AI helping you accelerate, but also make better decisions?
Ultimately, presumably, achieving your goals faster. The other element there, though, is that if we get to better decision-making between humans and AI, maybe the AI can help you or your organization come up with the right goals in a way it hadn't before. So it doesn't just have to be about what we automate and how we get rid of the tedious tasks, absolutely, but it can also serve as a strategic partner, a thought partner, on the priorities, and help you organize those priorities into something actionable.
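Picking up the audit-trail point from a moment earlier: a minimal sketch of what "go back and audit the trail" can mean in practice is an append-only log of every agent step. The field names here are illustrative assumptions, not a Salesforce schema.

```python
import json
import time
import uuid

def log_step(trail: list, agent: str, action: str,
             inputs: dict, output: str, rationale: str) -> None:
    """Record one agent decision with enough context to reconstruct it later."""
    trail.append({
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent": agent,
        "action": action,
        "inputs": inputs,
        "output": output,
        "rationale": rationale,  # explainability: why the agent took this step
    })

trail: list = []
log_step(trail, "support-agent", "issue_refund",
         {"order": "A-1042", "amount": 35.0},
         "refund issued", "order met the auto-refund policy")

# Auditability: the one bad decision in a thousand can be replayed afterwards.
print(json.dumps(trail, indent=2))
```

Transparency, explainability, and auditability then fall out of the same record: what happened, why, and in what order.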
Sabastian Niles: Well, now what about you, though? Tell me, how are you tackling AI? What do you see as the opportunities or the risks?
Bernard Leong: I work in an enterprise AI company, and since it's a startup, I have the perfect architecture to do a lot of things using AI. What I've discovered is that for a lot of repeated tasks, I've found a way to automate them using AI.
Sabastian Niles: Wow.
Bernard Leong: Think of a customer call, where we still require the guardrail of asking the customer, can I record this? The biggest difference now is that I can capture most of the requirements through that conversation. The beauty is that I just have to press one button and ask the large language model: hey, can you retrace the conversation? Can you bring out all the features that the customer wants? Then can you organize it into a statement of work for me? That is something that used to take a consultant about three to five days, and it is now reduced to minutes. I think that is one of the biggest changes in how to work productively. But I have to ask you this question: what is the one thing about trust and responsibility in the era of agentic AI that very few people understand?
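Bernard's one-button workflow can be sketched as a single function around whatever LLM API is in use. The prompt wording and the `complete` callable below are illustrative assumptions, not a specific product's API.

```python
from typing import Callable

def transcript_to_sow(transcript: str, complete: Callable[[str], str]) -> str:
    """Turn a recorded customer call into a draft statement of work via an LLM."""
    prompt = (
        "Retrace the following customer conversation. "
        "List every feature the customer asked for, then organize them into "
        "a draft statement of work with scope, deliverables, and timeline.\n\n"
        f"Transcript:\n{transcript}"
    )
    return complete(prompt)

# Usage: wrap any chat-completion client as a string-in, string-out callable.
# sow = transcript_to_sow(call_transcript, lambda p: my_llm_client.generate(p))
```

The consent guardrail Bernard mentions, asking before recording, sits upstream of this function, before any transcript exists.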
Sabastian Niles: Oh, that's a great question. Just that it requires vigilance, and it requires taking trust seriously. In the generative AI era, I used to have a phrase: trust begins long before the prompt and continues long after the output. It's that element of thinking about end-to-end, trust-first processes, and taking responsibility and impact seriously.
Bernard Leong: That's a good one. I'm actually going to be teaching a class after we have this conversation. I like that line.
Sabastian Niles: Feel free to use it.
Bernard Leong: Coming back to this conversation: given that you talked about working with customers, how is Salesforce working to ensure transparency and accountability in the way AI agents make decisions, while at the same time putting the guardrails there? We talked about Agentforce, for example. How is this being done in reality?
Sabastian Niles: Sure. Look, it does depend on the use case and what is needed; we try to design and deliver solutions that are responsive to customer needs. As you can imagine, we have a lot of customers who really want to benefit from our deeply unified platform and the full force and power across our sets of solutions. Sometimes it involves trust patterns: really understanding the different pieces of how it comes together, so that, again, you achieve the outcomes that you want: effectiveness, relevancy, accuracy. Sometimes that also evolves. Again, we have a trust layer at the base. One thing is that at Salesforce, we're not just releasing agents into the wild. The agentic technology that customers expect from us and ask us to build and deliver is grounded in what can be decades of trusted protocols of respect for the enterprise, say, access controls, and built on the customer-specific workflows and data that they've been using and relying on with us for a long time.
Those are, I think, some of the elements around that. But again, companies need to ask: what's their data strategy? What's their data governance? Making sure the agents, and all the different solutions, are grounded in the relevant data and the workflows that are most important to that customer. Look, we also try to make things easy for people. As we've always believed, when we think of business as the greatest platform for change and impact, we believe in democratizing access to the technology and empowering people. So we have out-of-the-box use cases across every single industry: governments thinking about how to serve their constituents better, financial services, healthcare, manufacturing, consumer-facing retail.
I could go down all these different buckets, but really it's: here are the workflows that we know are most impactful and aligned to your business objectives and your actual needs. Then we can build it together. I was with a customer the other day, the CTO, and she said, this just works.
Similarly, we do a lot of work on enablement and democratizing access to technology. One of our enablement engines is something called Trailhead. We drive a number of business councils around AI impact and AI ethics, but also AI acceleration, and Trailhead is one of the methods where we think about how communities and companies can build capacity, AI literacy, and digital literacy around these items. So if you ever want to, go on Trailhead; it's available.
You can become what we call an Agentblazer. For years we've had this thing called Trailblazers, and we have millions around the world. We want there to be millions of Agentblazers. We care about this because we actually don't believe that AI and agentic AI should be wholly implemented and delivered fully top-down. The point is, the actual people, the teams, the companies, at any level, you yourself, or any of your team members, can come build on our platform, and we'll give you a button: build my first agent. Having your hands in the soil not only gives you confidence and control, showing that this technology is real, it's present, it can be developed, but it's actually very empowering, because you may come up with use cases that are even more impactful than the ones we have out of the box. Then we can build something new for you, particularly as you think of bringing together disparate and siloed data and the like, so that ultimately all the different enterprise areas of importance are brought together and you have effective agents and other AI tools.
Bernard Leong: That's an interesting point. There was a very good study showing that 70% of AI implementation work actually goes into the wrangling and harmonizing of data, and you brought that out very specifically. I also know that, foundationally, you have an Office of Ethical and Humane Use. How does that office work, and how does it partner with your product and engineering teams? I think this is one of the most unique things, something we haven't seen elsewhere.
Sabastian Niles: Well, look, rooted in our number one value of trust is the question of how we operationalize it. So yes, we have our Office of Ethical and Humane Use of technology. Someone said to me the other day, you guys are the very rare company that is actually willing to say that a core value of yours is trust. Similarly, you're one of the rare companies that would actually say, we're going to have an Office of Ethical and Humane Use of technology to really grapple with these different topics. As noted, the Office of Ethical and Humane Use, OEHU, is an office I oversee, and it is also engaged within our technology and product organization.
We want to have both of those deep areas covered. So there's the work of that group, whether it's on policies or otherwise. I mentioned earlier that there's a concept in cybersecurity of shifting left; are you familiar with it?
Similarly, when we think of trust, ethics, responsibility, and observability: how do you build them into the product design early on? And then testing, really testing, and being willing to assess: is this doing the right things? Are there issues being created? How do you resolve that? I encourage you, and anyone who's listening or watching, to read our inaugural, first-ever Trusted AI and Agents Impact Report. We really seek to engage with and think about these issues.
We're a publicly traded company. We have a board of directors, we have our internal governance systems, and we deeply care about and think through what oversight of artificial intelligence looks like as a company, and across our different stakeholders, so that we're navigating and calibrating opportunity and risk effectively.
So we engage at every level, whether it's our board of directors on these issues, our management teams, or each of the different functions, both as customer zero and in making sure we have really rapid feedback loops. Just like with customers: we had a CIO advisory board meeting two days ago, in there with these incredible leaders who are building, architecting, and deploying, and making sure we're getting all that feedback.
Even the tough feedback, because it's really important: we are committed to that north star of customer success.
Bernard Leong: When it comes to customers, how do you approach building trust with enterprise customers? They are becoming increasingly nervous about handing control away to autonomous AI systems, and AI agents are also evolving from rules to reasoning. How do you help or guide them in thinking about handing over part of that control, while still allowing them to function in this new world?
Sabastian Niles: Thank you for posing that. In the enterprise-grade context, in a sense similar to how cybersecurity evolved, when we think of AI systems and agentic AI, the concept we coined of trusted agency actually hits this point: what's the shared responsibility, the shared trust, the shared discipline around these areas? The customers want to have control, and they should. They want to have confidence in these systems, and they want to make sure they're achieving compliance at scale.
The reason I mention that is this: the customer will have certain specifications for what they want the agents, or their other AI systems, to do, but they also want the control and flexibility to fine-tune different areas. We may have tools, solutions, and trust-related items for that, and then it's the customer's decision where they really want to turn the dial, across the different issues, use cases, and contexts.
So a big piece of this is making sure the customers are empowered, whether through the command centers for humans and agents, or through that visibility, observability, and governance, but also through how they think about the guardrails, and their view on guardrails may also evolve.
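A hypothetical sketch of "turning the dial": the vendor ships the mechanism, and the customer decides where each dial sits. The keys and thresholds below are invented for illustration; they are not Agentforce settings.

```python
guardrail_dials = {
    "autonomy": "supervised",         # "suggest_only" | "supervised" | "autonomous"
    "human_approval_over_usd": 1000,  # actions above this always hand off
    "log_every_step": True,           # feeds the command center's observability
    "allowed_channels": ["slack"],    # widen as confidence grows
}

def needs_human(action_value_usd: float, dials: dict) -> bool:
    """Return True when the dials require a human to approve this action."""
    if dials["autonomy"] == "suggest_only":
        return True   # every agent action is a recommendation only
    if dials["autonomy"] == "autonomous":
        return False  # the customer has turned the dial all the way up
    return action_value_usd > dials["human_approval_over_usd"]
```

The customer's view on these settings can loosen or tighten over time, which is Sabastian's point about guardrails evolving.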
Bernard Leong: This is a question I usually get from CEOs when I teach corporate classes or consult for them. They ask me, what should I do with my kids and AI? My first answer is, you should let them use it, but you should be there with them as a guide. What they're really worried about is that the kids outsource their thinking to the AI. I'm going to take this analogy and bring it to our conversation. In the future, how would you envision humans and AI agents collaborating effectively without one side being overly dependent on the other? Humans have that tendency of being overly reliant, on other humans or on AI, and of outsourcing our thinking away.
Sabastian Niles: Hey, look, don't humans also sometimes have a tendency to rely too much on other humans in a given context? We all do that. I share that because I think there are similar principles when we think of what's the society we want to have, hopefully the society we deserve to have. Humanity is very much in control of defining these things. I used to think that the secret to impact or success was having a really first-class view of the future. I don't really think that anymore. I think the secret is more about having a first-class view of scenarios around the future. We've had a futures team at Salesforce for a very long time; I bring them in and we debate, and we see the differences across the scenarios. When you think as parents or educators, it all starts with embracing and understanding that the future is one of humans with AI, humans and AI agents together. So what that means is: how do you educate? My own view, and I have three children, blessed to have them, is that I actually really want them to be engaging with the technology. I say, here, take some of these AI fluency courses.
Engage with it. Debate it. Sometimes I get emails and I wonder, did they write this? But the key thing is using that to then engage in the human conversation, because ultimately we need AI to help us spend more of our time on that which makes us more uniquely human.
Think of the Sustainable Development Goals, for example: all these issues of clean water, access to resources, healthcare. These are all areas that humanity has yet to solve and get right.
That's one of the reasons we've had this series of global AI summits, and Salesforce was very engaged in each of them. They started at Bletchley Park in the UK around safety. Then I was in Korea for the next summit, which talked about innovation, safety, and inclusivity. Then in Paris we were talking about AI action, and there will be one in India that we're working on with the government and the team there. We really need to embrace the technology so that, one, we understand it, and two, we're able to deploy it in the ways that are most meaningful and useful to us. I used to do work around the digital divide, and similarly, going forward: are there going to be divides? Will certain companies in certain industries embrace this in a trust-first way and then accelerate and leap past their competitors? People worry about the same thing in their communities, their neighborhoods, their families: what does competition look like? At Salesforce, every year we run Futureforce.
We have the pleasure of having these interns come in, so I've got a whole host of interns at Salesforce. For example, I was just with some of them, the ones who chose to join our legal and corporate affairs team, chatting with them about AI and how they use it. They just started showing me: here's how we're using AI to work through this problem, here's that. We actually made a little clip, because I said, wait, this is really exciting. They're just so natural and comfortable with it.
I said, hey, you've got this wisdom here; we need to make sure we're sharing it with everyone. We've got some 75,000 people at the company. We have to be learning from each other constantly. Just earlier today I was doing a global town hall, and we did it from here in Singapore, because the region is very important to us globally. One of the areas we were really talking about is: what does AI transformation mean, at a very hands-on level?
A key piece of it was that we're going to handle it in a very empowering way, because each of these folks on our teams knows their own work best. They know what excellence and effectiveness look like. So we want these teams to be the ones that start to say: how might I use AI to augment myself, to automate away the things I really don't want to do, to free me up for additional tasks? That's what it means to democratize access to these technologies, not just in an abstract way, but with the people you're working with, leading, and managing, at every level, and to be open to just learning.
I think that's going to be one of the most important areas going forward for our families and our communities: curiosity.
Bernard Leong: So what's the one question that you wish more people would ask you about trust and responsibility with agentic AI?
Sabastian Niles: What's the one question? Well, maybe it's a little silly, but I think you'll get it: it's more of, how can governance, guardrails, and guidance be a catalyst and an accelerant for innovation? Rather than the clichéd view of, oh, it's a blocker, or, oh, I don't want to deal with these issues. I honestly think that's a key area.
Bernard Leong: How would you give them some high-level guidance on how to approach it? If I were to ask you that question, what would be, say, three very important principles?
Sabastian Niles: Look, when you're dealing with enterprise-grade needs and enterprise-ready solutions, guardrails provide confidence to move faster. I believe that effective governance, common-sense guidance, and ethical, common-sense guardrails provide permission, and they provide clarity, because then folks know: how do we move forward? How do we make this safe, responsible, and ethical? I think that's part of why so many people think of it, in personal relationships or other contexts, as the speed of trust.
Similarly, I think a key piece of this is that in an organizational context, in order to have adoption at scale, which is what you need with AI, in order to really be comfortable using agents to drive your workflows forward and to partner humans with AI on these issues, it will only occur with a strong sense of where trust, responsibility, and impact come into play. We're not just playing around; it's good to play, it's good to have games, but in these more serious contexts people want effectiveness. They want to know the business value, but they also want that visibility, observability, and governance around it, so that they can stay in control, have that command center, and course-correct along the way. Does that address what you're asking?
Bernard Leong: I think that's very good advice. So, my traditional closing question: what does great look like for trust and responsibility in agentic AI for Salesforce in the next couple of years?
Sabastian Niles: What does great look like? Look, I think we are in a really once-in-a-generation moment here. People have said that Salesforce was the first to move on agents at scale. What does great look like? It means we're also known as the first to move with integrity, with trust at the forefront. It's going to mean even wider global adoption, and more use cases, whether in regulated industries or elsewhere.
The other thing, honestly, is that great is going to look different at the regional level, in each country and each context; it's not one-size-fits-all. One big area we're very committed to and focused on is small and medium-sized businesses and commercial businesses, even as we serve the largest enterprises. We really see this incredible opportunity for the small and medium-sized businesses here to scale up. There's a common statistic about small businesses, really the engine of so much growth: most small businesses fail.
Our vision, with Agentforce and our different solutions, with small businesses really embracing humans with AI and bringing the agentic layer into small and medium-sized businesses, is: what if that was no longer true? What if, instead of most small businesses failing, more small businesses succeeded, and were able to grow their impact and grow their revenue? Imagine what that would mean for communities, for families. So I think that's a big piece. The other thing about what great looks like: Salesforce has always been committed to an ecosystem approach to success.
So, an ecosystem of agentic trust, an ecosystem of what customer and stakeholder success looks like at scale when it comes to AI, an ecosystem approach to innovation, to equality, and to sustainability. All of these areas.
That's how we think. We have Salesforce Ventures, where we have the privilege of partnering with and investing in a lot of these companies, and we're so committed to that. Customer success is what great looks like: really incredible levels of continued customer success, at scale, with speed, but always with trust.
Bernard Leong: I'm looking forward to seeing what's going to come out in the next Agentforce 4.0; you're probably already thinking about it. Sabastian, many thanks for coming on the show. In closing, a very quick one: any recommendations that have inspired you recently?
Sabastian Niles: Oh, books? Oh my gosh. Movies, food, books, movies. Well, there are so many terrific books out there around AI, AI 2027, this or that, but there's actually a book I've been rereading called The Future Self. It's by a friend of mine, a professor. What it really talks about is that when we want to have impact, when we want to have hope, when we want a view of where we're going, it's not actually the past that drives us; it's a view of the future, of our own futures as individuals, as humans, and of our organizations. It's, I think, just such a powerful book, particularly now, for the view we're going to have of the future. Because once you have a first-class view of at least a set of scenarios around the future, you're empowered to decide which future scenario you want to avoid and which future scenario you actually want to accelerate and have manifest.
Bernard Leong: Great book recommendation. How can my audience find you?
Sabastian Niles: Oh, online. You can just type in my name: Sabastian Niles, Salesforce. But I would also say, try help.salesforce.com and give us feedback. And hopefully they'll find us on this podcast.
Bernard Leong: Of course, you can definitely follow this podcast; you can find us on Spotify, YouTube, every channel. I really enjoyed this conversation, and I took away a lot of great lessons before going back to teach my class. So, Sabastian, many thanks for coming on the show.
Sabastian Niles: Wonderful. Invite me to your class one day, perhaps. It's been a real pleasure.
Podcast Information: Bernard Leong (@bernardleong, LinkedIn) hosts and produces the show. The intro and end music is "Energetic Sports Drive", and the episode is mixed and edited in both video and audio format by G. Thomas Craig (@gthomascraig, LinkedIn). Here are the links to watch or listen to our podcast.