In the latest episode of Le Podcast on Emerging Leadership, I sat down with Sebastian Cao, a user experience and technology leadership visionary. Sebastian, with experience at Red Hat, Tesla, and a recent teaching role at Stanford, shares insights from his Stanford course on the future of user experience (UX). This episode covers Sebastian’s philosophy of blending advanced technology with human insight to foster truly impactful UX.

Key Takeaways

  • Empowering Human Judgment Alongside AI
    Sebastian stresses the importance of allowing human intuition and experience to complement AI’s predictive capabilities. He shares how Tesla’s technicians combine machine-provided data with their own observations, showing how AI can augment human skills rather than replace them.
  • Fostering Empathy in Tech
    To drive meaningful user adoption, empathy is essential. Sebastian discusses how understanding user motivations and barriers, particularly in industries where users fear automation, can help build trust in new tools. His focus on “service augmentation” rather than “automation” at Tesla reflects his emphasis on enhancing human capability.
  • From Ghost in the Machine to Human Trust
    The “ghost in the machine” concept highlights the need for transparency in AI. Sebastian advocates for AI solutions that clearly explain how predictions are made. He shares the importance of helping users understand the underlying data and reasoning, which fosters trust and mitigates fears of AI as an inscrutable “black box.”
  • Empathy as an Engineering Skill
    With his diverse career path, Sebastian suggests leaders in technology must balance technical skills with psychological insights to understand and anticipate user needs. He emphasizes user interaction as essential to understanding how technology serves real-world functions, especially when AI is part of the equation.
  • Open Source and AI Ethics
    Sebastian calls for transparency and an open-source approach to AI, especially in environments where frontline workers use technology for critical decision-making. He advocates for AI models that are accessible and accountable, helping companies balance innovation with trust.

This conversation with Sebastian provides a deep dive into the future of UX, balancing AI innovation with ethical considerations. Leaders in tech, product managers, and anyone interested in the evolution of user-centered technology will find actionable insights to enhance their approach.

Tune in to hear the full conversation on Le Podcast on Emerging Leadership!

Here is the transcript of the episode:

Alexis: [00:00:00] Welcome to Le Podcast on Emerging Leadership. I’m your host, Alexis Monville. Today, we have a fascinating conversation with our guest, Sebastian Cao. Sebastian is a visionary leader with a wealth of experience at the intersection of technology and user experience, having held pivotal roles at Red Hat and Tesla, among others.

He recently delivered an insightful course at Stanford on the future of user experience, where he explored critical topics like AI’s role in augmenting human capabilities. So we are thrilled to dive into these topics with him today. Sebastian, welcome to the podcast. How do you typically introduce yourself to someone you just met?

Sebastian: Thank you, Alexis. I love to be here. I usually say that I’m an engineer who can talk about [00:01:00] problems that could be solved with technology. I like to talk about problems that are worth solving, real problems that we have as a species, worldwide, globally, and what are good options and good ideas to solve them with technology.

So that’s how I introduce myself.

Alexis: I love those kinds of introductions, where I need to ask more questions to learn a little bit more about your background and all those things. But we will go through that at some point. You taught a Stanford course about the future of user experience, and you emphasized the balance between prediction and judgment.

Can you tell us more about that? Because I’m very curious about that thing. 

Sebastian: Yeah, as I mentioned, I’m an engineer, a computer science engineer, but I’m certainly not the kind of guy who would go and hack on a model, make an LLM better, or squeeze a little more incremental performance out of a model.

I’m more [00:02:00] concerned and interested in how that model could really solve human problems. We can go into that. I moved to Silicon Valley a few years ago, and you always read about the history and the heritage around this area, and you see all these companies, but it’s still a really engineering-led culture.

So everyone right now is competing; it’s an arms race, right? Who has the biggest, boldest, most expensive model, in a way. But I was concerned, and we can certainly touch on that with my experience of two years at Tesla, about how we can solve real, everyday human problems for frontline workers.

So, when I was discussing that with the engineers who were coding these big models inside Tesla: computers will always be better than us at doing prediction. By that we mean taking a huge amount of historical data and seeing patterns, things that [00:03:00] are repetitive, and saying, okay, this will happen with a high probability, right? With a high degree of chance, 90 percent, 80 percent, this will happen, because the data shows us it happened before. And we should be doing that. In the case of a car company, any company, you get all the historical repairs, and you know that certain types of cars, in certain weather, when driven with certain patterns, usually break this part of this subsystem at X amount of kilometers or miles or whatever.

So that’s great; we were doing that. We were doing predictive diagnostics through machine learning. What I wanted to add to the equation, because it’s the part that’s easy to forget, is: what about the judgment? What about the human judgment? The mechanic or technician will see the car coming, and by touching it, by feeling it, by looking at it, might see that something else is off.

Maybe there was, I don’t know, a hard brake, or there was actually a crash, something that happened before; we’re getting all [00:04:00] this data and all these signals coming from the car that might also come into play. And that’s what we humans are good at. We remember seeing that before. We made a decision back in the day, the outcome was a particular outcome, and that becomes part of your knowledge base and your experience.

So how can we merge both? How can we merge the cold data coming from the car, in this case, while leaving the opportunity for the technician to make a judgment call? I was pushing not for an automated kind of result, but for a guided diagnosis process, where the machine provides all the probabilities by looking at the car and getting all the data coming from the sensors.

But the technician will actually use that to make their own judgment and say, yes, I will do that, or maybe I’ll do the other thing instead. So that’s a great concept that I’ve been talking about and that I’m in love with right now, and I think it’s really important with how much development we’re seeing in AI. It’s [00:05:00] thinking about AI not only as artificial intelligence, not even as artificial intelligence, but more like augmented intelligence or amplified intelligence, where we make humans better. They can do more because we’re feeding them with pre-analyzed data, with a lot of prediction, but you’re leaving room for judgment.
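
[To make the guided-diagnosis idea concrete, here is a minimal sketch in Python. Everything in it — the function names, data fields, thresholds, and numbers — is a hypothetical illustration of the pattern Sebastian describes, not Tesla’s actual system.]

```python
from dataclasses import dataclass

@dataclass
class RepairSuggestion:
    component: str       # subsystem the model suspects
    probability: float   # share of similar historical cases
    evidence: list[str]  # signals and past repairs that drove the score

def guided_diagnosis(suggestions: list[RepairSuggestion],
                     min_probability: float = 0.2) -> list[RepairSuggestion]:
    """Rank the model's predictions but never auto-apply them:
    the technician sees the ranked list and makes the final call."""
    ranked = sorted(suggestions, key=lambda s: s.probability, reverse=True)
    return [s for s in ranked if s.probability >= min_probability]

# The machine proposes, with probabilities drawn from historical repairs...
for s in guided_diagnosis([
    RepairSuggestion("brake pads", 0.82, ["city driving", "hard-braking pattern"]),
    RepairSuggestion("suspension", 0.35, ["rough-road telemetry"]),
    RepairSuggestion("coolant pump", 0.05, ["no strong signal"]),
]):
    print(f"{s.probability:.0%}  {s.component}  (based on: {', '.join(s.evidence)})")

# ...and the technician disposes: they confirm, reorder, or override the
# list based on what they see, hear, and feel on the actual car.
```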

Alexis: Okay, so a large place for the human, but not only the human: the experience they have in a particular field, and that could be any field.

Sebastian: In this case, we’re talking about mechanics, technicians, people who turn wrenches with their hands. They’re not coding, they’re not engineers, they don’t care at all about machine learning.

How can you give them more? Especially there, for example, how they’re compensated. Then we get into compensation and incentives; that’s a cold, economic kind of analysis, and there’s a lot of research about that. They were incentivized by the number of cars. So why don’t we tell them a story that they [00:06:00] will be able to use these augmented tools, machine learning or whatever, it’s not about machine learning, to actually get through more cars in the day, in the week? So they’re more productive, they get a better paycheck, and we’re all happy. But sometimes I feel that as an industry, and again, I’ve been a software engineer all my life, and we met in a software company, we tend to fail to explain. We go into, okay, this is cool.

This is the latest, this is the latest model, the latest LLM. But do we understand who the customer is, who’s on the other side consuming that technology, how they’re incentivized, and what it is they’re trying to solve? We certainly fail at telling that story sometimes.

Alexis: I feel there’s something deeper that we can grasp there.

And so AI is not a replacement for human intelligence; human intelligence, people’s experience and how they understand the world, is definitely something important. So they could be augmented. How do you do that, practically?

Sebastian: [00:07:00] That’s where I learned a lot and made a lot of mistakes. And I think that’s what I’m looking for: which company, which software provider, is actually going to crack that code? I think right now everyone is fighting over releasing the biggest model and spending billions of dollars on training. There might be companies doing that, but we haven’t seen it in the headlines and the stories in the media.

It’s all about the biggest model and the competition between the providers. No one is saying: this model, whether it’s the biggest or not, is actually increasing productivity for frontline workers, technicians, insurance clerks, customer support operators, airlines, whatever. We’re still not seeing that, because we’re failing at deploying those models alongside human beings, working side by side, the whole copilot idea, whatever we’d like to call it.

So where am I seeing failings? First, as engineers, we make it too [00:08:00] complicated. We throw a lot of technology and a lot of explanation at people, and we tend to use the words automation and artificial intelligence a lot. And if on the other side you have someone who doesn’t come from our industry, the first thing that comes to mind is: okay, this is automation, this is going to replace me.

No matter what your intention is, that comes first, because it’s the human reaction. One of the first things I did at Tesla: the team that I inherited when I joined was called service automation, and we were tasked with, okay, let’s create more tools for internal customers, for employees, more tools for them to become better.

I said, okay, the first thing I want you to change is the name, because every time I present myself as service automation, they say, okay, you’re coming for my job. It was super quick, and everyone said, okay, that’s cool, that’s a good idea. So we changed it to service augmentation, and I started sharing a lot of research, not papers, but just

headlines and professors’ [00:09:00] analyses, with both sides of the aisle, as they say here in the US: the engineers building the software and the consumers on the other side, the technicians. Saying, hey, this is what we’re trying to build. We’re trying to build something that will help you get through your day, that is going to augment you.

Augmentation is still like a twenty-dollar word, as they say here; it’s too complicated. Maybe amplification. But just change the name, because words carry a lot of weight. Right now in the media, it’s much more interesting to publish a story about automation or AI taking away jobs than to talk about AI making people better.

Alexis: It’s very interesting how we oscillate between a one-dollar word and a five-thousand-dollar word, and we mix them in one sentence, and it scares everybody.

Sebastian: It’s human behavior; we go from “this is going to be a great feature” to Skynet and Terminator, and we’re all going like The Matrix. And that sells. So I think it’s for people like us, like you, to understand the technology, but I think you need to [00:10:00] go further and explain the technology, explain what’s going on, explain why you’re using it.

Alexis: And it’s a very good point. You need to care about the users themselves, the people who will really use the technology, and go a little bit further in understanding how they work and what they are trying to achieve. It’s not a game about features, or not only that; it’s really about what they need to accomplish, even if they don’t really know what kind of features they would need. That empathy with users, I feel, is very important.

Absolutely. You picked an example about the ghost in the machine, and I was very curious about that, because, yeah, I’m probably old enough to know about that album from the Police.

Sebastian: 1981, great songs. I always heard, I mean, I always listened to music like that. Yeah, I got a t-shirt about that album by the Police called Ghost in the Machine.

And I wore that t-shirt when I went to a meeting where I wanted to explain the [00:11:00] concept. The ghost in the machine is that idea; it’s a phrase that has been around forever. You can also talk about the Turing test and all that. If a machine is new enough or strange enough, and I think most people had that experience with ChatGPT like two years ago, you do feel that there’s something else there beyond a program.

That is the ghost: there’s a soul, there’s a human touch. Now, at the end of the day, if you delve into it, you get into the research, and you understand the transformer model and all that, okay, it’s a pretty big program choosing the next word to use. But at the beginning, it feels magical.
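
[As a rough illustration of that “choosing the next word” idea, here is a toy sketch in Python — nothing close to a real transformer. It just counts, in a tiny made-up corpus, which word historically follows which, and predicts the most frequent one.]

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "huge amount of historical data".
corpus = "the car needs repair the car needs service the car is fine".split()

# Count which word follows each word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> tuple[str, float]:
    """Return the most likely next word and its historical frequency."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

word, p = predict_next("needs")
print(f"after 'needs', predict '{word}' ({p:.0%} of past occurrences)")
# after 'needs', predict 'repair' (50% of past occurrences)
```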

I think that is the idea: people will tend to think, okay, this is actually sentient, this is actually thinking by itself. So with the ghost in the machine, I tell people: if we don’t explain to them what’s going on, it is a black box, a concept that we also use a lot in software. You’re not explaining where the data is coming from for you to make the decision, the prediction, who selected the data, [00:12:00] who labeled the data, and then you don’t explain how you use that data to make a decision.

And then you explain, in simple terms, something like the confidence interval: okay, we’re pretty sure, up to 80 percent. You don’t even need to use percentages; I was pushing a lot for graphical representations, easy to understand, showing that this is a recommendation based on all this data. I think we also need to get better at that with all these models: you ask a question, you get a response, but there’s nothing that explains to you

how that response got constructed, how that response came to be. And then we get into, and we already saw, a lot of crazy stuff, really dangerous stuff, about labeling and biased data and all that. So I think that is also an important concept when we’re dealing with frontline workers: sharing with them,

okay, we’re giving you this recommendation because A, B, and C or D happened before. In my case, what I was pushing was a pretty simple concept: to fix [00:13:00] an issue, you’re relying on millions and millions of rows of data coming from previous repairs, historical repairs across ten years of experience, and those repairs were done by other technicians. Just by sharing that with the technicians:

hey, this is actually recommending what you should do, but it’s trained on, based on, what your peers have done in the past. It’s like the shared knowledge of all the peers that you look up to. It’s not the machine deciding; the machine is just sorting the data and going through it really fast.

That was a good example of how I was pushing back on the ghost in the machine. We need to explain, because then they will embrace it much more; they will be much more open than if it is, again, just a black box. Otherwise, if it’s, hey, the machine is telling you to do this, then they’ll say, you know what, I’m doing it the other way around.
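
[A hedged sketch of what that kind of explanation could look like on screen: a simple graphical confidence bar instead of a raw percentage, plus the provenance Sebastian describes. The wording, fields, and numbers are invented for illustration.]

```python
def render_recommendation(component: str, confidence: float,
                          similar_cases: int, source: str) -> str:
    """Show a recommendation with a bar instead of a bare percentage,
    and say where the data came from, so it isn't a black box."""
    filled = round(confidence * 10)
    bar = "#" * filled + "-" * (10 - filled)
    return (f"Suggested check: {component}\n"
            f"Confidence:      [{bar}]\n"
            f"Why: based on {similar_cases} similar repairs {source}")

print(render_recommendation(
    component="front brake pads",
    confidence=0.8,
    similar_cases=1243,
    source="done by fellow technicians over the last 10 years",
))
```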

Alexis: What I really like there: it’s not just trying to explain how the feature works, but basically showing the work that was done, explaining all the reasoning [00:14:00] that got us to the conclusion. So you need to explain a few things. You did it very well: okay, basically the model is just trying to predict the right word to use after the previous one, based on historical data.

That’s probably a rough explanation, but that’s pretty cool, because then, okay, I understand that this is the data, this is how it works, and I can trust that thing. In addition to that, I have a kind of confidence level that is shown to me. I can really rely on it, or I can say, ah, okay, the confidence is very low.

There’s maybe not a lot of historical data on my current situation; you probably need to pay attention a little bit more. That’s very interesting, I feel.

Sebastian: It’s spot on, and you mentioned such a key word, Alexis: trust. And the other word that I kept using in all my meetings with product managers and engineers is empathy.

You need to increase the empathy, for them to say: hey, this is actually helping me, and [00:15:00] I’m actually rooting for the software, rooting for this solution, because as the solution becomes better, as the machine learning, the AI, becomes better, I become better. We’re all peers. We’re all partners.

If I think you’re trying to replace me, then I will do my best to hijack and kill your project.

Alexis: You have quite a fascinating career trajectory: founding companies in Latin America, working with Red Hat in Latin America and in the U.S., working in Silicon Valley for Tesla. How do you see the role of technology in customer experience? How have you seen it evolve, and how do you see it in the future?

Sebastian: That’s a good question. You know, Alexis, one thing that I keep repeating to myself, just not to forget, and I keep telling friends that I have in Latin America, who go, okay, Tesla, Silicon Valley, that’s great, and you can relate to that being in France: at the end of the [00:16:00] day, here you will probably see bigger ammunition, bigger weapons, bigger things being built for a global scale, but we’re solving the same kinds of problems.

Cultural change, resistance to change, human behavior: it’s the same in Silicon Valley, in Paris or France, in Buenos Aires, in Argentina, in Brazil, or in Africa. Take this problem, let’s say, at Tesla: if you throw a fully automated, machine-learning, whatever, diagnostic at a technician at Tesla in Silicon Valley without explanation, they will resist it.

You do that in France, they will resist it too. You do that in Turkey, they will resist it. And the same in Latin America. That again was an insight, a realization for me. I finally understood: okay, I’m here because you get exposed to solving problems at global scale, and you probably have bigger resources and tools to solve them.

But at the end of the day, the problem that you’re solving is still a human problem, and it’s the same no [00:17:00] matter what language you speak or the color of your skin or whatever. To be honest, this is amazing, because even with everything that we’re discussing about AI and all that, at the end of the day, human beings at the core are still the same: we fear the same things, and we need the same kinds of help.

So that’s probably what I think is the biggest outcome of my journey so far. But yeah, as I mentioned, here in Silicon Valley you see that we go really fast, and sometimes too fast. So I like being here and seeing everything that’s going on with AI and everything that we’re thinking about building. At the same time,

I’m super interested in how we are going to build all of that with good adoption and with empathy. So this is a great place to try all of that.

Alexis: So that’s the right balance of technology and human touch: the empathy that you build with the users, to foster the adoption of technology, or foster the idea of innovation itself.

Sebastian: I read as many psychology books as [00:18:00] coding or AI and machine-learning algorithm books. I think we need both, especially with AI right now. And you, with what you’re working on in your consultancy: we need people who can talk technical, because you’re going to be exposed to a technical discussion, or code, or a solution, or a diagram.

Okay, this is what we’re building. But I think we need more people who can understand: we build this and we ship this product, and this is what is going to happen. And if you don’t know, at least catch a bus or a taxi and go there with your user; you sit with them and see them in action.

In my case, it was going to the places where they were actually wrenching on cars and working with them. You have to work with them, understand what they’re doing. If you’re just shipping code, pushing code into production, without ever talking to, touching, and feeling your customers, it’s going to be hard.

Alexis: I had a conversation with a really high-performing team. I was looking at what they were doing every week, to have a sense of the [00:19:00] things that were important to them. I noticed that team members had interviews with real users every week. Not all of them, but every week there was contact with a user,

at least one, and it was different people on the team. They had a user interview guide that was constantly evolving, because they were testing their assumptions with different users. And I was looking at it and said, oh, okay, so a successful team probably needs to be in touch with their users at least weekly; that showed up in their work.

Of course.

Sebastian: I agree with you. I think in really successful B2C, consumer companies, the product management teams have been doing that; they know that. And for AI, where we’re trying to, hopefully, not replace but augment decisions, it’s even more important that you be there and understand how that person is making decisions,

if [00:20:00] you’re trying to build something that person is going to use in their day-to-day. Otherwise you end up with Clippy from Office in the 90s: hey, what do you need to do? Do you need to print? Hopefully we’ll become better than that.

Alexis: Yeah. The first question everybody asked was how to turn that thing off.

Absolutely. Finally, as a leader who has worked in different high-tech environments, what advice would you give to a new leader who wants to evolve effectively in that world?

Sebastian: Advice. Okay. For leaders, I would say maybe what we’ve been discussing: I think today you need to have exposure to the technical part of things, understand everything that is going on, how it’s being created, why it’s being created, and by whom. And there are, we know, political things at stake, and companies competing against each other, so you need to understand them. We probably need another podcast to discuss open source versus closed source for [00:21:00] things like AI and all that. But you need to understand where everything is coming from. And those, again, are tools in your tool belt.

What I would like from leaders, and I think it’s very important, is what we’ve been discussing: be empathetic, understand who’s on the other side. Who’s your customer? Who’s consuming that? Is it B2C, is it B2B? Are your users experienced with AI, or whatever technology you’re using, or are they not? Do they trust it or not?

And if you ask those questions and you get answers, work with those answers. One thing that I see a lot here is, again, we are shipping code without asking any questions, and we think that code is the best and that adoption will follow. I think we need a little more human touch on that.

So that would be my recommendation for leaders: again, human touch and empathy.

Alexis: Excellent. Oh, I cannot resist. People will not see this on video, but I can see it on your wrist: you have an interesting message. Tell me more about that.

Sebastian: All right. Yeah, it was just lying around. [00:22:00] This is a wristband that I got from one of my favorite places in the U.S., the Air and Space Museum in Washington, D.C., where you have all these historical planes and the Apollo missions, all that. This is a wristband that says failure is not an option, and it was created for the Apollo team before sending someone to the moon. And then you see how much was achieved in collaboration between the private and public sectors, between different political views and all that, how much was achieved in like six years. That’s amazing.

Alexis: I like the story, I like the message. And at the same time, you mentioned open source a second ago, so don’t we say fail often, fail fast, or something like that?

Sebastian: Oh, you got me there. Okay, we’d need another hour and a half to keep this discussion going. With AI and everything that’s going on, I don’t like the idea of fail often, fail fast.

I mean, just releasing whatever it is, because now this is something that is talking to you, and many people are making decisions based on the responses [00:23:00] they get, the answers that they have. So if it’s not curated, if it’s biased, a lot of things can go wrong, and we have seen examples. So the ethos of just move fast and break things, I’ve never liked that much,

and especially not here, right now, with AI. And the other part that you asked me about, and we share that background: I think there needs to be much more open source involved. Again, the ghost in the machine and the black box: if that model is answering my questions, I want to understand who built the model and who made the initial training, the initial answers.

And that’s what I love about what a lot of companies are doing in France: they’re taking a more human approach, and they’re mostly based on open source. And this is a personal opinion, so I don’t care about all the comments that we may get: if this is handled by one big corporation with all the data, and it’s closed,

we have seen that before, and it’s never a good story. So I will push for, maybe, what we have seen with companies competing against each other: maybe you have your closed source, that’s good, and you have your open source that is good enough too, and kind of similar. It’s your choice, but at least you have an open-source choice.

I wouldn’t trust my frontline workers to make decisions that will affect customers based on a model when I don’t know exactly how it was built. Personal opinion, 120 percent.

Alexis: Totally agree. And we are back to the trust aspect: transparency is the foundation on which you build trust. So I love that. I’m happy that I asked the question about failure.

Thank you for joining the podcast, Sebastian.

Sebastian: Thank you, Alexis. It has been great. Let’s do this once again in the future, okay? Pleasure. Take care.

