
AI can’t do it all: Why human brains still rule in systems thinking

Team Button
May 12, 2026

Can AI replace content design thinking? Learn where AI helps, where it falls short, and why human expertise still matters.

Keri Maijala and Cindy Xiong joined Torrey Podmajersky for a rich, thought-provoking conversation on the limits of AI and the power of human systems thinking. Watch the event recording or read the full transcript.

Torrey: Let’s talk about today’s special event! It’s a little different than usual. We’re going to have a live chat with a few special guests and then answer some burning questions you might have at the end of our time together. Nothing like a full live session to get the juices flowing. So without further ado, I’d like to welcome Keri Maijala. Keri leads the content design team at LinkedIn, digging into organization design, systems thinking, and what it means to be a leader in an AI-first world. She’s been creating and organizing clear, relevant content for humans for almost 20 years. In her non-professional time, she’s hanging out with her husband and Pomeranian in Santa Cruz, California, and trying to figure out how to sneak roller skates into Disneyland.

Keri: Yeah, that does sound like me. Hello. Hello. So happy to be here. Thanks for having me.

Torrey: Thanks, Keri. Next, I’d like to welcome Cindy Xiong. Cindy is a content designer at LinkedIn who loves turning messy copy problems into systems. Based in New York City, she’s currently building AI-powered tools that help teams write clear, consistent product copy at scale. When she’s not shipping agentic tools, Cindy enjoys baking bread and doting on cats. Welcome, Cindy.

Cindy: Thank you so much, Torrey. Happy to be here.

Torrey: We’re so excited to have both of you here. Let’s get into it. This event is titled “AI can’t do it all,” which makes me think that you two have done a lot of work to figure out what AI can and can’t do for you in your content space. I know you aren’t here to speak on behalf of LinkedIn, and you aren’t here to endorse any particular AI tech or AI toolchain, but can you each give us a little more context on how you’ve used AI so we know where today’s advice and reflections are coming from? Cindy, let’s start with you.

Cindy: Yeah, thanks, Torrey. At LinkedIn, I’ve done content design for user-facing AI assistants. In those cases, I was sometimes working on sample responses and sample prompts, but even when I wasn’t closely working on the prompt design, we still had to design for possible AI pitfalls. So I did learn a lot about AI model behavior from that. Currently, I’m working more on our internal AI tools that help scale content design expertise across teams. Our main one is a content design assistant that’s integrated directly with Figma and other surfaces that designers are already using. It can do things like reviewing copy, writing copy, and looking up guidance. The development of this tool is where a lot of today’s reflections come from. In creating our internal tools, we’ve gone big, exploring and experimenting with what AI can offer us before landing on what we have now. Personally, I also use AI a lot for vibe coding, for planning, et cetera. I really think it’s a great tool to help me organize my own thoughts.

Torrey: Thank you, Cindy. And then the same question to you, Keri.

Keri: Yeah. So, everything Cindy said: she’s doing all of that and more. For me, because I’m the leader of a horizontal team, we have nine people across the entire product. One of the problems I’ve been trying to solve is around visibility, trying to show the depth and the breadth of the work that we’re doing. I’m vibe coding as well. Ask me how that’s going. My poor, my poor brain. So, for example, I’m building a dashboard, and I’m like, “Cindy, I need your help because I can’t figure out how to do this.” But the idea is trying to take AI and use it to solve problems that maybe you’ve been experiencing for a while and that you’ve been struggling with. And I’m finding it’s really helpful in organizing vast amounts of information, summarizing it, and putting it in a way that makes sense and that is scalable. So that’s what I’m doing right now.

Torrey: Fantastic. I want to jump right into our first question here. Cindy, maybe I can go to you for this one. Where have you been most surprised by AI? Either by something it did that was better than expected, or gloriously wrong?

Cindy: Yeah. When I first started developing AI tools that use LLMs, I was often surprised by how fewer instructions sometimes perform better than more instructions. Early on, I think I accidentally over-engineered a lot of prompts. It happens because, you know, when you’re working on a prompt and the first try doesn’t go quite how you expect, you end up adding more instructions to fine-tune those little details. But in the end, you have this gigantic prompt, which honestly is probably confusing even for you to read, much less an LLM, right? I used to think it was very important to design and plan for all these edge cases, and that more specific instructions would make the AI better able to handle more complex situations. But I realized the answer isn’t always more words, because the more words the AI has, the more it gets confused about what to prioritize. So there are times when fewer instructions actually do better, because it allows the AI to be more flexible and to decide on its own how to navigate more nuanced situations, instead of being locked in by a set of instructions. Instead of planning out a decision tree for the AI, I think it’s better to give it a mindset and more of a list of principles to follow. That being said, designing for use cases is still important sometimes, like when you only have a few main paths that you want to follow. But I would put that knowledge in something like a data file instead of in the instructions, where the AI can easily get overwhelmed by what to prioritize, and have it call that file only when necessary.

Torrey: Yeah. Keri, I saw you nodding along to that. Is that something that you’ve also experienced, that less is more?

Keri: Kind of. There are two parts to it, right? I’ve always thought that if I ever had a genie in a bottle and I needed to make a wish, I’d be very, very, very specific. Similarly, I used to say include this and this and this and this and this, and Cindy’s exactly right: it gets confused very easily. And it’s hard to keep track of your prompting technique and what works and what doesn’t, because you lose sight of the problem that you’re trying to solve, and you can’t iterate on that. So, yeah, everything that was just said. But also, it’s hard to duplicate the results every time. I’ve experienced, “I’ve got it. I’ve got the perfect prompt,” and then all of a sudden it doesn’t work. Or the opposite will happen: this doesn’t work, and then one day it works, and I have no idea what I did. Like, don’t breathe, don’t move, don’t taunt the prompt. Because otherwise everything will break.

Torrey: I need a t-shirt that says “Don’t taunt the prompt” right now. I’m fascinated by this because, Cindy, you mentioned that you used to design for all of these edge cases, all of these things, but now you’re giving it principles to follow instead. Can you give us an example of a principle, or is that going too detailed for this conversation?

Cindy: Maybe that would be a little too detailed, but let me think about it in a general sense. It’s the type of behavior and responses you want it to give in a more nuanced situation, or when it runs into something unexpected: how you’d want it to reason through that response, rather than saying, “in this situation, do this.” Because that’s where it starts getting locked in. That’s where it starts trying to fit every question it gets into one of those paths, and where it gets really confused when something doesn’t perfectly fit into one of the predefined paths.
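To make that contrast concrete, here is a minimal sketch of the two styles of instructional prompt. Nothing in it comes from LinkedIn’s actual tools; the copy-review role and the prompt wording are invented for illustration.

```python
# Two styles of system prompt for a hypothetical copy-review assistant.
# Both are illustrative only, not LinkedIn's actual prompts.

# Decision-tree style: the model tries to force every request into one
# of these branches and gets confused when nothing fits.
DECISION_TREE_PROMPT = """You are a copy reviewer.
If the copy is a button label, check that it is under 25 characters.
If the copy is an error message, check that it names a next step.
If the copy is a tooltip, check that it is a single sentence.
If the copy is a notification, check that it leads with the actor.
Otherwise, ask the user to classify the copy before reviewing it."""

# Principle-based style: a mindset the model can apply even to
# situations the prompt author never anticipated.
PRINCIPLES_PROMPT = """You are a copy reviewer.
Principles, in priority order:
1. Clarity beats cleverness.
2. Lead with what the user can do next.
3. Match the product voice: warm, direct, plain.
When a request doesn't fit a pattern you recognize, reason from these
principles and say which ones you applied."""
```

The second prompt is shorter, but it gives the model a way to handle the nuanced cases Cindy describes instead of a fixed path to get locked into.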

Torrey: So it sounds like, and I’ve experienced this myself, it’s an awful lot of work to get the prompt right, and then a lot of, I don’t know, candle lighting and freeze framing to say, please just let it work. Keri, where do you draw the line between AI accelerating your work and AI replacing your thinking? Once it’s working, it gets kind of easy to hand it off.

Keri: Yeah, I mean, yes and no. Everything that you just talked about: how much work it takes to get it right. It’s never one and done. It’s never, “I made the tool, and now we’re done here,” and we’re handing it off. I think the line is drawn at whether you’re handing over your thinking. You’re not handing your thinking over to AI when you understand the problem space and the audience and what it is that you’re trying to do, and then you’re looking for a more specific result from AI. Once you start handing over the problem space itself to AI, that’s when you lose a lot of that human thinking. And wrestling with a problem, looking at it from all angles, that’s where the “aha” moments happen. That’s when you start getting into weird spaces. And you want the weird spaces! You want to exist in that space to figure things out. Then, once that’s formulated, you have a very specific task that you want AI to do for you. So if you’re not getting weird, then don’t hand it to AI.

Torrey: That reminds me of something I tell my students, which is to dwell in the ambiguity for a while.

Keri: Yeah, yeah, absolutely.

Torrey: That weird space.

Keri: Yeah.

Torrey: And until you’ve done that, I guess what you’re saying is, it’s not time to hand it over to AI yet.

Keri: Yeah, yeah, that’s exactly right. And if I can go back to the thing I’ve been saying for almost 20 years: the writing part of content design is actually the smallest part of what we do. It’s like 5% of what we do. The rest of it is existing in this other space.

Torrey: A bunch of questions came in while we were talking about taunting the prompt, and I want to get to them to make sure that we’re responding even better than an LLM would. Donna asked, I think to clarify something you had said, Cindy: “Are you suggesting fewer guidelines in the instructions of your project, or in the prompt or question itself?”

Cindy: Oh yeah. When I was talking about the prompt earlier, I meant the instructional prompt that you’re giving to the LLM when you’re creating it, versus the prompt that the user is asking, if that helps clear things up. But yeah, I mean fewer instructions in that master instruction prompt, and more in the guidelines that it can search and call from in a data source or something like that.

Torrey: Yeah. So it sounds like getting very deliberate: this set of content is available to the AI when it needs it, and this set of instructions is part of the instructional prompt or the system prompt. All of the different systems seem to be using different words for these things, and the terminology seems to evolve every week or so. So, putting the principles and the role and the mindset into the instructional prompt, and putting the data, and maybe even guardrails, into the sort of data source. Do I have that right? I’m interpreting on the fly here.

Cindy: Yeah. Yeah. I think that’s exactly it.
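As a rough sketch of the split Cindy describes: a short instructional prompt carrying the role and principles, with detailed guidance in a data file the assistant consults only when a specific rule is needed. The file name, schema, and function here are assumptions for illustration, not LinkedIn’s actual setup.

```python
import json

# Lean instructional (system) prompt: role, mindset, principles only.
SYSTEM_PROMPT = """You are a content design assistant.
Principles: be clear, be concise, be kind.
When you need a specific rule (terminology, capitalization, tone),
look it up in the guidelines rather than guessing."""

# Detailed guidance lives in a data source, not in the prompt, so the
# model only ever sees the rules relevant to the question at hand.
def lookup_guidance(topic: str, path: str = "guidelines.json") -> str:
    """Return the guideline entry for one topic. Hypothetical schema:
    a flat JSON object mapping topic names to guidance strings."""
    with open(path) as f:
        guidelines = json.load(f)
    return guidelines.get(topic, "No specific rule; apply the principles.")
```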


Torrey: Thank you, Cindy. Another audience question: “Is it everything breaking after taunting the prompts, or the feature of the AI learning from outside your prompt?” I think this is asking, when things are changing on the fly, is the AI learning or changing, or are model updates happening? They say: “I struggle with this, and I wonder if I need to shift my expectations.” Keri, let’s go to you.

Keri: Yeah, that’s a super interesting question. I mean, why not all of it? LLMs are kind of this mysterious space, right? Because I think even the people who create them don’t know exactly what’s going on behind the scenes. Some of the results that are happening are just kind of, ah, you know, it’s being an LLM. Cindy may have a better answer for this, but for me, it’s another proof point of why having a human in the loop to shepherd these tools is so, so incredibly important. And not only for the prompting and the output and reestablishing our own process, but for the decisions we’re making that we put into them, because it’s only kind of repeating what we’ve said, on a broad scale, with all of the internet at its disposal. I’ve already lost sight of what the question was … got on a soapbox! I think it might be doing all of those things. So it’s up to us to understand that’s a thing that might happen, and provide time and space to let that happen, and we learn from it, and it learns from us.

Torrey: Yeah. I think that sort of knowing that things will be unknowable, you know. Is it changing for this reason or this reason?

Keri: Right, right. Yeah, exactly. We may know, but we may not, so let’s just plan for the fact that it’s going to be unpredictable and stay on top of it.
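One concrete way teams “stay on top of it” is a small regression suite: re-run a handful of known inputs whenever the prompt or the underlying model changes, and flag answers that drift. A minimal sketch, with `run_prompt` left as a stub standing in for whatever model call a team actually uses:

```python
# Minimal prompt-regression sketch: re-run known inputs after any prompt
# or model change and flag drift. run_prompt is a stub, not a real API.

def run_prompt(system_prompt: str, user_input: str) -> str:
    raise NotImplementedError("swap in your actual model call")

# Each case pairs an input with a substring a good answer should contain.
GOLDEN_CASES = [
    ("Review this label: 'Click here to learn more'", "click here"),
    ("Is it 'LinkedIn' or 'Linkedin'?", "LinkedIn"),
]

def check_drift(system_prompt: str) -> list[str]:
    """Return a description of every golden case whose answer drifted."""
    failures = []
    for user_input, must_contain in GOLDEN_CASES:
        answer = run_prompt(system_prompt, user_input)
        if must_contain.lower() not in answer.lower():
            failures.append(f"drifted on: {user_input!r}")
    return failures
```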

Torrey: Yeah. The old, simple days, when what we programmed was what came out, are not where we are right now. I want to dig a little further into this human-in-the-loop idea. We hear about this a lot. But who should be that human in the loop when it comes to content design with AI? Cindy, let me go to you first for this one. Who should that be, in your opinion?

Cindy: Yeah, for sure. We’ve debated this a lot internally. We’ve considered product designers, content designers, and product designers with some content design knowledge. We’ve also thought about whether we should limit tool use to those who have that baseline knowledge, or just give everyone free rein. When we’re building tools that help with content design, this question comes up a lot because, if we can’t trust the LLM completely, we need someone to be in the loop, right? It would be great if somebody with content design expertise were able to review the outputs from an AI every single time, but it’s just not possible, because we don’t have that many content designers to go around. It also risks becoming a bottleneck for teams. So, since we usually give our content design tools to product designers to use, we assume that product designers would be able to be that human in the loop. But product designers have varying amounts of confidence in their own content design skills, their own writing skills. We’ve actually gotten feedback in user interviews from product designers saying that when we give them multiple choices of copy and ask them to pick the best option for their situation, sometimes they’re even more confused about how to proceed. We make sure that all of the options they’re given meet a baseline of quality, but since they all meet that baseline, they find it hard to know whether they’re picking the right one. A long way to say: I think we still need product designers to have a certain amount of confidence and knowledge to be that human in the loop. And we can do this by building education into the tools, but also by providing training to the designers. That’s the way we’re going about scaling our expertise, more so than having a content designer review all of the responses.

Torrey: Oh. So let’s dig into that. What kind of training is the right training for the human in the loop? I noticed this for myself recently when I was going, okay, if I was gonna do marketing on TikTok, what even does that look like? How would I make a plan for myself? And I said, well, here I am in the AI age, I’m gonna squint real hard: what would that look like? And what I got back looked very believable. And as soon as I scratched the surface, I immediately went, I do not know enough to know if this is good advice. Right. I have not had the appropriate training to know whether this is good advice. Thanks, self, for proving that to myself again. So, Keri, as somebody in charge of a team trying to scale this horizontally, I know you’ve dealt with this problem.

Keri: Yeah. It’s a challenge. So, to try to get at the heart of the question, I think it comes back to respecting the craft. We talked at the beginning of this conversation about how I’m vibe coding, right? I have no idea what I’m doing. I’m not a coder. I decided a long time ago that I did not want to be a coder. I made that very conscious decision, and here I am. So I’m using this vibe coding to develop this thing I want. But when things go wrong, I don’t know what’s wrong. Sometimes I will get a clue in vibe coding: it won’t run, and that’s my clue that something’s broken. But for more esoteric disciplines that rely on things like, oh, I don’t know, taste to inform our decisions, it’s less clear. So it’s about leaning heavily into trusting those who have developed careers and points of view and taste and craftsmanship, and understanding what your own limitations are for knowing what good looks like. And, as Cindy talked about, we’ve developed this idea around baseline quality. Baseline quality simply means that things are to style. Like, we’re camel-casing LinkedIn correctly, that kind of level. But understanding nuance and tone and where to focus and where to pull back, that’s a content design area of expertise. So not only do we need to evangelize the kinds of things we’re bringing, we also need to respect those in other disciplines who are dealing with the same things, and look to them for their expertise. That is the philosophy I hope I’m bringing to the team: extolling our virtues, for everybody who’s worked on their craft.

Cindy: I want to add that Keri makes great points about training people in content design expertise, but part of the training necessary to be the human in the loop isn’t just having the subject matter expertise, because that’s just not always possible. We wish we could, but we can’t learn everything under the sun. So I think it’s also important to train yourself and others on expectations surrounding AI and on understanding how AI systems work. When I interviewed users of our internal tools about what they found frustrating, the answer usually didn’t actually have to do with how we designed the tools or with the content design guidance itself. It was often about limitations of the system that we could mitigate but not completely control. And they didn’t know that that was part of the LLM’s behavior, not the content guidance behind it. For example, common pitfalls like the AI being really confident in its answer even when it’s wrong. A key principle that really stuck with me when I was first studying conversation design was that AI models are also trained to please their users, because being agreeable will cause the user to return and keep using the service. That’s why, when users instruct the AI to go against its original system instructions, the AI oftentimes does its best to comply. That’s something we actively design safeguards against if we want to make sure that accuracy is first and foremost, but it’s not always controllable, because the model design itself also has those elements. So we need to train users on recognizing that too, and on having more patience in these types of situations. We can educate people through training sessions, but I think one of the most effective ways is also building that education into the AI. For example, the AI can provide reasoning every time it gives a response, or it can give something like a score or a confidence level about its response. Things like this both build trust and educate people on how it works.
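In practice, “building the education into the AI” can look like a response contract: every answer ships with its reasoning and a rough confidence score. A sketch, with invented field names; a real system would wire this into the prompt and output parsing.

```python
from dataclasses import dataclass

# A response contract that teaches as it answers: each reply carries its
# reasoning and a rough self-reported confidence, so reviewers learn how
# the system thinks instead of taking bare answers on faith.
@dataclass
class AssistantResponse:
    answer: str
    reasoning: str     # why the model chose this answer
    confidence: float  # 0.0-1.0, the model's own estimate; treat as a hint

def render(resp: AssistantResponse) -> str:
    """Format a response so the reasoning and confidence are always visible."""
    caution = "worth double-checking" if resp.confidence < 0.6 else "likely solid"
    return (f"{resp.answer}\n\n"
            f"Why: {resp.reasoning}\n"
            f"Confidence: {resp.confidence:.0%} ({caution})")
```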

Torrey: I really love that answer, because it isn’t just about training in expertise; it’s also training to use a new kind of tool that has never existed before. We have never had an on-call sycophantic (the fancy way of saying very agreeable no matter what) coworker, right? And a lot of people are coming to prefer this coworker that always tells them how smart they are, whether it does that explicitly or implicitly. Like, “Great question,” or “A lot of people run into this problem,” or “How important that you are trying to solve this user problem right now.” All of those tiny compliments add up to wear away our resistance and make us want to use it more.

Keri: Yeah, it feels good, doesn’t it? Like you’re so smart.

Torrey: “You’re so smart, and you’re pretty too, even though I can’t see you, because that would be creepy, unless you want me to see you.” No. I’ve been saving this question in this sheet of questions here. Megan and Alice both ask: “Do you worry that you are building the tools that are going to replace your team? And what do you say to people when they are scared of that?” Keri, I’m gonna start with you, manager lady. 

Keri: That does feel like a Keri question, doesn’t it? Yeah, I mean, of course. I would be incredibly naive not to think that. But let me take a little bit of a step back. The thing I fear is not AI replacing us; the thing that concerns me is people in positions of power thinking that AI can replace us. And those are two very, very different things. I think it’s important for anybody in a leadership position who manages teams that seem to be at risk right now to be really vocal and clear about what the human brings to the tools, and how the tools cannot exist without the human beings behind them, making the decisions. I posted on LinkedIn (plug, plug) that I think people who are making the decisions to lay off people in favor of AI are looking at it from a very shortsighted point of view. The tools, and ask Cindy about this, require constant updates and monitoring and feeding and care for them to work correctly. So when the people who have put all of those good decisions into the tools and are keeping them running are gone, it might not happen immediately, but the output is going to suffer. It’s not going to work anymore. And then they’ll be scrambling and looking for us again, and we’re going to be living our best lives. Which is why visibility, storytelling, and showing the breadth and the depth of the work are so important. It’s why I’m building this dashboard and doing other things around it. So, short answer: yes. But I think if we’re all very clear about what it is we’re doing and how much hard work it takes, I’m hoping that won’t happen. I’m hoping people will make the right decisions.

Torrey: Yeah. Sometimes it feels like all we can do is hope, but I think what you said at the beginning, about being vocal and being in those rooms and advocating for the good decision-making, the good ideas: that can only come from lived experience and expertise. Not, as I’ve heard it called, a Plinko board of word tokens that are related to each other. So Joel asks, as a follow-up: “Everything I’ve heard today tracks with my own understanding of how AI and LLMs work. My question is, what kind of empathy do your leaders have for the advantages and shortcomings of AI and LLMs, and how do you help them understand the importance of understanding those shortcomings?”

Torrey: Yeah. Cindy, how do you talk to leaders when you’re in the room, and you’re saying, “Actually the LLM can’t do that.” What do you do?

Cindy: Because we’re constantly creating new tools, we usually do some sort of share-out or presentation before a tool goes out. We have this design tools lab meeting at LinkedIn where designers share tools that they’re building. And before this, we always prep very carefully on the perspective we’re going to give on the tool: it can do this, it can’t do that. It should be used for this, and it shouldn’t be used for that. We always have to set those expectations really carefully, because we don’t want it to be scaled to a point where we haven’t tested it yet, or haven’t evaluated all of the consequences yet. So I think that’s how you convey the importance: making sure we set those expectations first, before everyone gets their hands on the tool and starts using it however they want.

Torrey: Yeah, that seems critical. It actually reminds me, and it’s a very small tangent: my uncle is the foreman for a golf course. Big important golf course, big millionaires walking around playing golf. Getting new tools is a big deal. Some tools can literally cut the grass with millimeter precision. This baffles me, but I believe them when they say it’s important. But if you were to use that tool that cuts the grass on this part with millimeter precision to, say, grade the sand trap, you’d be in big trouble. It just won’t work that way. That metaphor, brought to you by randomness. Keri, is there anything you wanted to add about building your leaders’ understanding of these things?

Keri: I mean, it can be as simple as just showing them; that works best. It kind of goes to “show, don’t tell,” and this is just advice across the board: showing the output or showing the results is a lot more impactful than just, “This is bad, and here’s why.”

Torrey: Yeah. So yeah. “Try it for that, exec. Let me show you what happens when you do that.”

Keri: And, you could be like, “Oh, that’s interesting. Let’s see what happens.” And let them discover it on their own.

Torrey: I want to ask a question that Mike asked earlier. They work in a heavily regulated field in healthcare, and their medical-legal regulatory panel wants to be that human in the loop and review what their agent may say. “With generative AI that can create a variety of new responses, how do you navigate stakeholder review?” So this is that human-in-the-loop question, but at a big regulatory scale. Have you had to deal with that?

Keri: Yeah, I mean, of course; we’re heavily regulated. We have a trust team, and we have a dedicated content designer on our trust team; her name is Shauna. She’s amazing. When we talk about these kinds of tools for her level of work, we’re all agreed that there is always a human in the loop on that. And we understand that it is a longer process. That doesn’t mean there’s no possibility of developing any tools at all, because there are still standard responses; as decisions are being made, they go into our tool set. But, ideally, and never say never, but as we stand right now, we will never let something deploy without a human looking at trust content. That would be wild. In something like healthcare, something financial, something that is very heavily partnered with legal, you wanna talk about high risk? No, no, no, no. We need a human in the loop for that. So I would say to just set those expectations: yes, there are some things that we can automate, but a human in the loop is absolutely necessary. And plan for that.

Torrey: Yeah. Cindy, did you wanna add anything to that?

Cindy: Yeah, I completely agree. With those sensitive topics, and especially when there are a lot of legal and regulatory stakeholders, you definitely need someone else’s eyes on it before it goes out. They definitely can’t just take the LLM’s word for it. And especially if they themselves don’t have that understanding or expertise, it’s not a situation where it can just be somebody without the expertise plus the AI. That just won’t go over well at all.

Torrey: Yeah. Just won’t fly. David asks a related question: “Transparency around when mistakes can happen (and, I guess, when mistakes do happen) needs to be communicated. If mistakes happen from the AI, how do you handle that kind of transparency at scale?” Cindy, have you run into that?

Cindy: Yeah. Since it’s an AI, it’s obviously going to make mistakes from time to time. One of the things we run into in designing the AI’s responses is that we can’t always tell whether an answer is correct for the user, because it depends on nuance in the user’s situation, et cetera, and we don’t have visibility into all of that. So one thing that we’ve added to our AI agents is that, at the end, they’ll ask a series of follow-up questions. For example: if this isn’t what you’re looking for, can you give more context about this so we can adjust the answer to your needs? Being absolutely clear that mistakes do happen, and what we can do to remedy those mistakes, is something we built into the AI behavior. At scale, we’re also just very transparent when we’re introducing these tools that mistakes happen. Even in live demos, sometimes mistakes happen, and we don’t hide them. We show them, explain what to do in those types of situations, and show how to check for those mistakes. It is just a fact of the matter that they’re going to appear, so you have to figure out how to deal with it.
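A sketch of how that follow-up behavior can be written into an agent’s standing instructions; the wording is invented for illustration, not LinkedIn’s actual instruction.

```python
# A standing instruction appended to the system prompt so the assistant
# invites correction instead of presenting its answers as final.
# Illustrative wording only.
FOLLOW_UP_INSTRUCTION = """End every answer with one short follow-up, such as:
- "If this isn't what you're looking for, tell me more about the surface,
  the audience, and the goal, and I'll adjust the answer."
- "If any detail above looks wrong, point to it and I'll re-check."
"""
```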

Torrey: Thank you for that, Cindy. I really appreciate that you’re taking it beyond the sort of disclaimer in ever-decreasing font size of “this might all be false.” You’re proactively asking people, “Did this satisfy your request? Could you give us more information?” It’s a proactive way of handling it. So Michael asks: “As a content person contributing to building out LLM agent behavior, what falls under your responsibilities?” They’re referring to things like building out knowledge base articles, prompts, guardrails, voice and tone requirements, et cetera. My guess is that Michael is asking this because sometimes this is a little bit of a turf fight with other people also working on AI. Keri, how about you?

Keri: I like to think of it as a collaboration opportunity. So the answer to that question is that our team focuses very specifically on product content. There are some overlapping areas: product marketing is one of them, editorial is one of them, help and content strategy is one of them. Our team focuses primarily on content that appears in the product, so the tools that we are building focus very specifically on that. We collaborate with partners because sometimes there’s crossover; it’s not super clean, especially with things like product marketing, because sometimes there’s a little Venn-diagram overlap there. But then it’s a conversation, and we lean into relationships and understanding our roles.

Torrey: Anything you wanted to add to that, Cindy?

Cindy: In terms of building AI behavior, I think all of the topics you mentioned are part of your responsibility as well. It definitely depends on who you’re collaborating with and which pieces are part of other people’s responsibilities. But if you’re building out a tool, like when we’re building our internal tools, there are only a few of us on the team working on it, so all of that falls under things we’re working on, for sure.

Torrey: You mentioned only a few of you working on the tools. Fernanda asked: “How is the collaboration, if any, with data scientists on your team, and do you divide any work with them about the use of the AI or not?” It’s also fine if you don’t work with data scientists.

Keri: I’m not aware that we do directly, unless Cindy has a different response. That being said, that is definitely a collaboration that should happen.

Cindy: I don’t personally work with data science at all, but I know that some of the other conversation designers on our team do. Especially for user-facing AI assistants, the chat logs and things like that often come from data science. For our internal AI tools, I just find a way to get the chat log myself, though usually creating the chat log is built in somehow. But I do know that for the user-facing side, of course, there’s PII, so they have to go through data science to get that type of information.

Torrey: Yeah, it was just such a great question to pop in here, because there are so many new stakeholders for us to work with when we’re working with these tools. Probability is data science, right? And LLMs are based on nothing but probability. So having access to those new perspectives and new ways of thinking seems like a great idea.

Keri: Yeah, that’s a really great point about probability. I think what’s going to come out of all of this is new skills and nuances developing out of everybody’s discipline. As new tools are being developed and new skills are being formed, we’re going to find new uses for existing skills. And it’s like, oh wait, yeah, this is a very natural fit. So the point you just made, that it’s nothing but probability: I’m like, oh, yes, of course. That’s all data science.

Torrey: So I’m going to go to a great big question that we have not yet attacked as sort of our last topic to wrap up here together. And Cindy, I’m going to ask you first. In your role working as a content designer, as an IC solving these content design problems, how are you reconciling any moral or ecological concerns, if you have them, with the industry pressure to be leaning into using AI?

Cindy: That’s a loaded question. For sure, there’s a lot of worry when we build out these tools. Is this going to be something that replaces our team? And even when it’s tools that are accelerating work, when a tool goes out to other teams, we worry, because at the end of the day, the humans behind the tools still need to take responsibility for the output the tool produces. We also worry about it giving other people misaligned content, things that don’t really make sense, and about them taking that information and running with it without really understanding the content design expertise behind it. So these are all things that we keep in mind as we’re creating these tools, and we do push back on things. There are always people who want to move faster and faster, and we push back on people who want to take the human out of the loop. We try to keep all the necessary restrictions in place before we announce a new tool. We always do evaluations and checks before we spread it to a wider audience. There’s usually a beta phase for everything that we create, because we want to make sure it works with a smaller audience first, and that there aren’t any harmful side effects. So that’s definitely something we keep in mind a lot when we’re creating our tools.

Torrey: Yeah, so those personal and interpersonal harms and staying as safe as possible with those. Keri, same question. 

Keri: What a big question. Cindy had a fantastic answer. For me, I’ve been doing this a long time, and I would love to say that this is just like any other point in time, but I don’t think it is. The thing that has served me personally well is that I tend to be a pragmatic optimist, which means I get excited about things, I get excited about the possibility, but I also approach them with a healthy dose of skepticism and realism. So, being honest with yourself. I think this is going to take a lot of self-awareness around what your own boundaries and values are, recognizing when those lines are being crossed, being confident in voicing them, and hoping and trusting that the people around you are open to that conversation. I think all of those are going to be very important. Also recognizing that the people around you will have different boundaries and different values. So again, it comes back to respect and collaboration and openness to have that conversation. It’s just a very different time, and we’re all having to evolve very, very quickly. I’m not sure that human beings are supposed to do this, but here we are. So, yeah, that’s my best recommendation.

Torrey: I want to give Kat credit for asking that question in the first place. This industry pressure is real, and it’s investor-level, investor-led pressure. It’s shareholders and directors, a big push from Silicon Valley, and financial instruments changing hands about which I know far too little. And all of that is affecting our day-to-day work requirements. I don’t think I’ve heard from anyone who feels like they have a good solution for how to both keep a job that pays their bills, keeps them fed, and gives them health insurance, and keep working in this industry right now. It is very, very tough. So thank you, Keri and Cindy, for letting me put you in the spotlight with that question.

Keri: No, that’s an important question.

Torrey: It’s a hugely important question. I am looking at the questions document here that Meghan Casey, behind the scenes, has been taking questions from chat and putting them in here, and there are so many good questions left to ask. I mean, fantastic questions. Like, “If there’s such a guarantee of mistakes with AI, how or why is that better than not having it at all?” Right. And “How do you thoughtfully guide and redirect conversations when your stakeholder comes in and says, well, I’ve got a better thing I just made with AI?” I actually got this question from a student team. I teach at the University of Washington in the Information School, and a project team came to me and said, “So our sponsor keeps coming to us with new designs that we’re pretty sure that she is going to AI and having it generated.” And they needed to know what they should do. And I think this is a very real problem that’s happening all the time right now. And I got to introduce them to the answer of the soft no. How do you say no, while being affirmative the entire time?

Keri: Wait, what does that look like? Can you teach me?

Torrey: Keri, you are good at this. I’m sure Cindy is good at this, too.

Keri: Yeah, but now, the people want to see. So now, you gotta do it.

Torrey: Oh. Oh. “So, I see what you’re doing there. And it looks like you’re going toward these ends that are …” remembering that the thing I’m competing with for this executive’s attention is an AI that will be sycophantic the entire time. What can I compliment about this design and this urge to redesign? And then, how can I affirm it? “I’m so glad that we’re working toward the same ends as the principles behind the design work that’s already been committed. And it’s possible we can build on that in the future.”

Keri: Oh, okay. That’s super interesting.

Torrey: Okay, now you all know: if you work for Keri and she says something is “super interesting” … done.

Keri: What have I done? Oh no, I’ve blown my cover.

Torrey: Okay. On that note, I think I need to say thank you for joining us. This was a fantastic conversation with you two, with such smart answers and information about this. Thank you for joining us, Cindy. Great answers.

Cindy: Thank you so much for having me.

Torrey: And Keri, I really appreciate you being here and you bringing this perspective.

Keri: Lovely to be here. Thanks for having me!


Join us for Button 2026

Tickets are on sale now! Our virtual conference returns this September with practical talks, live Q&As, and a community that feels like home. Spend two days exploring inspiring content design sessions grounded in real-world work and challenges.

Author: Team Button

Illustrator: Sean Tubridy

Sean Tubridy is the Executive Creative Director and Co-Owner at Button Events.

Find out how you can write for the Button blog.

AI can’t do it all: Why human brains still rule in systems thinking

Team Button
Team Button
May 12, 2026
Can AI replace content design thinking? Learn where AI helps, where it falls short, and why human expertise still matters.

Keri Maijala and Cindy Xiong joined Torrey Podmajersky for a rich, thought-provoking conversation on the limits of AI and the power of human systems thinking. Watch the event recording or read the full transcript.

Let’s talk about today’s special event! It’s a little different than usual. We’re going to have a live chat with a few special guests and then answer some burning questions you might have at the end of our time together. Nothing like a full live session to get the juices flowing. So without further ado, I’d like to welcome Keri Maijala. Keri leads the content design team at LinkedIn, digging into organization design systems thinking and what it means to be a leader in an AI-first world. She’s been creating and organizing clear, relevant content for humans for almost 20 years. In her non-professional time, she’s hanging out with her husband and Pomeranian in Santa Cruz, California, and trying to figure out how to sneak roller skates into Disneyland. 

Keri: Yeah, that does sound like me. Hello. Hello. So happy to be here. Thanks for having me.

Torrey: Thanks, Keri. Next, I’d like to welcome Cindy Xiong. Cindy is a content designer at LinkedIn who loves turning messy copy problems into systems. Based in New York City, she’s currently building AI-powered tools that help teams write clear, consistent product copy at scale. When she’s not shipping agentic tools, Cindy enjoys baking bread and doting on cats. Welcome, Cindy.

Cindy: Thank you so much, Torrey. Happy to be here.

Torrey: We’re so excited to have both of you here. Let’s get into it. This event is titled “AI can’t do it all,” which makes me think that you two have done a lot of work to figure out what AI can and can’t do for you in your content space. I know you aren’t here to speak on behalf of LinkedIn, and you aren’t here to endorse any particular AI tech or AI toolchain, but can you each give us a little more context on how you’ve used AI so we know where today’s advice and reflections are coming from? Cindy, let’s start with you.

Cindy: Yeah, thanks, Torrey. At LinkedIn, I’ve done content design for user-facing AI assistance. And in those cases, I sometimes was working on sample responses and sample prompts, but even when I wasn’t closely working on the prompt design, we still had to design for possible AI pitfalls. So I did learn a lot about AI model behavior from that. And then, currently, I'm working more on our internal AI tools that help scale content design expertise across teams. Our main one is a content design assistant that’s integrated directly with Figma and other surfaces that designers are already using. And it can do things like reviewing copy, writing copy, and looking up guidance. And I think that the development of this tool will be where a lot of the reflections are coming from today. And in creating our internal tools, we’ve gone big, exploring and experimenting with what AI can offer us before landing on what we have now. And then personally, I also use AI a lot for vibe coding, for planning, et cetera. I really think it’s a great tool to help me organize my own thoughts.

Torrey: Thank you, Cindy. And then the same question to you, Keri.

Keri: Yeah. So, everything Cindy said, she’s doing all of that and more. So for me, because I’m a leader of a horizontal team, we have nine people across the entire product. So, one of the problems I’ve been trying to solve is around visibility and trying to show kind of the depth and the breadth of the work that we’re doing. I’m vibe coding as well. Ask me how that’s going. My poor, my poor brain. So, for example, I’m building a dashboard, and I’m like, “Cindy, I need your help because I can’t figure out how to do this.” But the idea is, you know, trying to take AI and use it to solve problems that maybe you’ve been experiencing for a while and that you’ve been struggling with. And I'm finding it’s really helpful in organizing vast amounts of information, summarizing it, and putting it in a way that makes sense and that is scalable. So that’s what I’m doing right now.

Torrey: Fantastic. I want to jump right into our first question here. Cindy, maybe I can go to you for this one. Where have you been most surprised by AI? Either by something it did that was better than expected, or gloriously wrong?

Cindy: Yeah, I think, when I first started developing AI tools that use LLMs, I was often surprised by how less instructions sometimes perform better than more instructions. And early on, I think I accidentally over-engineered a lot of prompts. This came about because you know, when you’re working on a prompt, and the first try doesn't go quite how you expect it to, and then you end up adding more instructions because you’re trying to fine-tune those little details? But in the end, you have this gigantic prompt, which honestly is probably confusing even for you to read, much less an LLM, right? So, I used to think it was very important to design and plan for all these edge cases and that more specific instructions would make the AI better and able to handle more complex situations. But I realized that the answer isn’t always more words, because the more words the AI has, the more it gets confused on what to prioritize. So, there are times where less instructions actually do better because it allows the AI to be more flexible, and it allows the AI to decide on its own how to navigate more nuanced situations instead of being locked in by a set of instructions. So, instead of planning a decision tree or something for the AI, I think it’s better now to give it a mindset and more of a list of principles to follow. But that being said, I think designing for use cases is still important in some cases, like when you only have a few main paths that you want to follow. I would also prioritize putting that knowledge in something like a data file instead of the instructions, where the AI can easily get overwhelmed by what to prioritize and to only call that when necessary.

Torrey: Yeah. Keri, I saw you nodding along to that. Is that something that you’ve also experienced, that less is more?

Keri: Kind of. There are two parts to it, right? I’ve always kind of thought that if I ever had a genie in a bottle and I needed to make a wish, like how very, very, very specific I would be. Kind of similarly, I used to say like include this and this and this and this and this, and Cindy’s exactly right. It gets confused very easily, and it’s hard to keep track of your prompting technique and what works and what doesn’t, because you lose sight of the problem that you’re trying to solve, and you can’t iterate on that. So, yeah, everything that was just said. But also, it’s hard to duplicate the results every time. So, I’ve experienced, “I’ve got it. I’ve got the perfect prompt,” and then all of a sudden it doesn’t work. Or the opposite will happen. It’s like, this doesn’t work, and then one day it works, and I have no idea what I did. Like, don’t breathe, don’t move, don’t taunt the prompt. Because otherwise everything will break.

Torrey: I need a t-shirt that says “Don’t taunt the prompts right now.” I’m fascinated by this because, Cindy, you mentioned that you want to design for all of these edge cases, you want to design for all of these things. But giving it the principles to follow. Can you give us an example of something like a principle, or is that going too detailed for this conversation?

Cindy: Maybe that would be a little too detailed, but I guess it’s just the type of … let me think about it from a general sense. Maybe it’s just the type of behavior and responses you wanted to give in a more nuanced situation, or when it runs into something unexpected, how would you want it to reason through that response, rather than saying, in this situation, do this. Because that’s where it starts getting locked in. And that’s where it starts trying to fit all of the questions it gets into one of those paths, and that’s where it starts getting really confused when it doesn’t perfectly fit into one of the predefined paths.

Torrey: So it sounds like this is, and I’ve experienced myself, an awful lot of work to get the prompt right and then do a lot of, I don’t know, candle lighting and freeze framing to say like, please just let it work. Keri, where do you draw the line between AI accelerating your work and AI replacing your thinking? You know, once it’s working, it gets kind of easy to hand it off.

Keri: Yeah, I mean, yes and no. I mean, everything that you just talked about, how much work it takes to get it right. It’s never one and done. It's never, “I made the tool, and now we're done here,” and we're handing it off. I think the line is drawn between: are you handing over your thinking, you’re handing it over to AI, when you understand the problem space and the audience, and what it is that you’re trying to do. And then you’re looking for a more specific result from AI. I think once you start handing over the problem space to AI, that’s when you lose a lot of that human thinking. And I think wrestling with looking at a problem from all angles. That’s where the “aha” moments happen. That’s when you start getting into weird spaces. And you want the weird spaces! You want to exist in that space to figure things out. And then once that’s formulated, then you have a very specific task that you want AI to do for you. So if you’re not getting weird, then don’t hand it to AI. 

Torrey: That reminds me of something I tell my students, which is to dwell in the ambiguity for a while.

Keri: Yeah, yeah, absolutely.

Torrey: That weird space.

Keri: Yeah.

Torrey: And until you’ve done that. I guess what you’re saying is, it’s not time to hand it over to AI yet.

Keri: Yeah, yeah, that’s exactly right. And if I can go back to the kind of the thing I’ve been saying for almost 20 years, is that the writing part of content design is actually like the smallest part that we do. It’s like 5% of what we do. The rest of it is existing in this other space.

Torrey: So a bunch of questions came in while we were talking about taunting the prompt. And I want to talk about this to make sure that we’re responding even better than an LLM would. So Donna had asked, I think, to clarify, Cindy, something that you had said, “Are you suggesting fewer guidelines in the instructions of your project or in the prompt or question itself?”

Cindy: Oh yeah. When I was talking about the prompt earlier, I meant the instructional prompt that you're giving to the LLM when you’re creating it versus the prompt that the user is asking, if that helps clear things up. But yeah, I think it’s just like less instructions in that master instruction prompt and more in the guidelines that it can search and call from in a data source or something like that.

Torrey: Yeah. So, getting really, it sounds like, getting very deliberate about this set of content is available to the AI when it needs it. This set of guidelines is part of the instructional prompt or the system prompt. All of the different systems seem to be using different words for these things, and the terminology seems to evolve every week or so. So, putting the principles and the role and the mindset into the guidelines and putting the data and maybe even guardrails into the sort of data source. Is that, do I have that right? I’m interpreting on the fly here.

Cindy: Yeah. Yeah. I think that’s exactly it.

Subscribe to the Button newsletter!

Get more valuable content design articles like this one delivered right to your inbox.

Thanks! Check your inbox to confirm your subscription.
Oops! Something went wrong while submitting the form.

Torrey: Thank you, Cindy. Another audience question: “Is it everything breaking after taunting the prompts or the feature of the AI learning from outside your prompt?” So, is it like, I think this is talking about when things are changing on the fly, is the AI learning or changing, or are model updates happening? They say: “I struggle with this, and I wonder if I need to shift my expectations?” Keri, let’s go to you.

Keri: Yeah, that’s a super interesting question. I mean, why not all of it? The LLMs are kind of this mysterious space, right? Because I think even the people who create them don’t know exactly what’s going on behind the scenes. And you know, some of the results that are happening are just kind of like, ah, it’s, you know, it’s being an LLM. So it, and Cindy may have a better answer for this, but, for me, it’s another proof point of why having a human in the loop to kind of shepherd these tools is so, so incredibly important. And the thing is, it’s not only for the prompting and the output and kind of reestablishing our own process, but, you know, for the decisions that we’re making that we put in them, because it’s only kind of repeating what we’ve said, you know, from a broad scale with all of the internet at its disposal. I’ve already lost sight of what the question was … got on a soapbox! I think it might be doing all of those things. So it’s up to us to understand that’s a thing that might happen and provide time and space to let that happen, and we learn from it, and it learns from us.

Torrey: Yeah. I think that sort of knowing that things will be unknowable, you know. Is it changing for this reason or this reason?

Keri: Right, right. Yeah, exactly. We may know, but we may not, so let’s just plan for the fact that it’s going to be unpredictable and stay on top of it.

Torrey: Yeah. The old, simple days when what we programmed came out are not where we are right now. I want to dig a little further into this human-in-the-loop idea. We hear about this a lot. But who should be that human in the loop when it comes to content design with AI? And Cindy, let me go to you first for this one. Who should that be, in your opinion?

Cindy: Yeah, for sure. We’ve done a lot of debate on this internally. We’ve considered product designers, content designers, and product designers with some content design knowledge. And then we’ve also thought about whether we should limit tool use to those who have that baseline knowledge. Or should we just give everyone free rein? And when we’re building tools that help with content design, this question comes up a lot because, if we can’t trust the LLM completely, we need someone to be in the loop. Right? And it would be great if somebody with content design expertise were able to review the outputs from an AI every single time, but it’s just not possible because we don’t have that many content designers to go around. And it also risks becoming a bottleneck for teams. So, since we usually give our content design tools to product designers to use, we assume that product designers would be able to be that human in the loop. But product designers usually have varying amounts of confidence in their own content design skills, their own writing skills. And we’ve actually gotten feedback in user interviews from product designers saying that when we give them multiple choices of copy and ask them to pick the best option for their situation, sometimes they’re even more confused on how to proceed because they’re like, all of these options … we’re making sure that all of the options they’re given meet a baseline quality. But since they all meet baseline quality, they find it hard to know whether they’re picking the right one. A long way to say, I think that we still need product designers to have a certain amount of confidence and knowledge to be that human in the loop. And we can do this by building education into the tools, but also providing training to the designers. And I think that’s the way we’re going about scaling our expertise, more so than having a content designer review all of the responses.

Torrey: Oh. So let’s dig into that. What kind of training is the right training for the human to be in the loop? I noticed this for myself recently when I was going, okay, if I was gonna do marketing on TikTok, what even does that look like? Like, how would I make a plan for myself? And I said, well, here I am in the AI age, I’m gonna squint real hard, and just what would that look like? And what I got back looked very believable. And as soon as I scratched the surface, I immediately went … I do not know enough to know if this is good advice. Right. I have not had the appropriate training to know whether this is good advice. Thanks, self, for proving that to myself again. So, Keri, as somebody in charge of a team trying to scale this horizontally. I know you’ve dealt with this problem.

Keri: Yeah. It’s a challenge. So, to try to get at the heart of the question, I think it comes back to respecting the craft. We talked at the beginning of this conversation about how I’m vibe coding, right? I have no idea what I’m doing. I’m not a coder. I decided a long time ago that I did not want to be a coder. I made that very conscious decision, and here I am. So I’m using vibe coding to develop this thing I want. But when things go wrong, I don’t know what’s wrong. At least in vibe coding I get a clue: it won’t work, and that’s my signal that something’s broken. But for more esoteric disciplines that rely on things like, oh, I don’t know, taste to inform our decisions, it’s less clear. So it means leaning heavily into trusting those who have developed careers and points of view and taste and craftsmanship, and understanding your own limitations around what good looks like. And, as Cindy talked about, we’ve developed this idea of baseline quality. Baseline quality simply means that things are to style. Like, we’re camel-casing LinkedIn correctly, that kind of level. But understanding nuance and tone and where to focus and where to pull back, that’s a content design area of expertise. So not only do we need to evangelize the kinds of things we bring, we also need to respect those in other disciplines who are dealing with the same things and look to them for their expertise. That’s the philosophy I hope I’m bringing to the team, while extolling our virtues for everybody who’s worked on their craft.

Cindy: I want to add that Keri makes great points about training people in content design expertise, but part of the training necessary to be the human in the loop isn’t just having the subject matter expertise, because that’s just not always possible. We wish we could, but we can’t learn everything under the sun. So I think it’s also important to train yourself and others on their expectations surrounding AI and on understanding how AI systems work. When I interviewed users of our internal tools about what they found frustrating, the answer usually didn’t actually have to do with how we designed the tools or with the content design guidance itself. It was often about limitations of the system that we could mitigate but not completely control, and users didn’t know that those were part of the LLM’s behavior rather than the content guidance behind it. For example, common pitfalls like the AI being really confident in its answer even when it’s wrong. A key principle that really stuck with me when I was first studying conversation design was that AI models are also trained to please their users, because being agreeable will cause the user to return and keep using the service. That’s why, when users instruct the AI to go against its original system instructions, the AI often does its best to comply. So that’s something we actively design safeguards against if we want accuracy to be first and foremost, but it’s not always controllable because the model itself is built that way. We need to train users to recognize that too and to have more patience in these types of situations. We can educate people through training sessions, but I think one of the most effective ways is also building that education into the AI. For example, the AI can provide reasoning every time it gives a response, or it can give something like a score or a confidence level about its response. Things like this both build trust and educate people on how it works.
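To make that last idea concrete, here’s a minimal sketch of what “reasoning plus a confidence score with every response” could look like in a tool’s plumbing. Everything here is illustrative: the prompt wording, the field names, and the 0.7 review threshold are assumptions for the example, not a description of LinkedIn’s actual implementation.

```python
# Sketch: require the model to show its work, then route low-confidence
# answers to a human. All names and thresholds are invented for illustration.
import json

SYSTEM_PROMPT = """You are a copy-review assistant.
For every suggestion, respond with JSON containing:
  "suggestion": the revised copy,
  "reasoning": why you made each change, citing a guideline where you can,
  "confidence": a number from 0.0 to 1.0 for how sure you are."""

REVIEW_THRESHOLD = 0.7  # assumed cutoff: below this, flag for a human reviewer

def triage(raw_model_output: str) -> dict:
    """Parse the model's JSON reply and decide whether a human should review it."""
    reply = json.loads(raw_model_output)
    reply["needs_human_review"] = reply.get("confidence", 0.0) < REVIEW_THRESHOLD
    return reply

# Canned reply, since no live model is wired up in this sketch:
canned = (
    '{"suggestion": "Grow your network", '
    '"reasoning": "Verb-first CTA, per the style guide", '
    '"confidence": 0.55}'
)
print(triage(canned))  # 0.55 < 0.7, so needs_human_review comes back True
```

The point isn’t the threshold itself; it’s that surfacing the reasoning and the score teaches users how the system thinks while it works.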

Torrey: I really love that answer because it isn’t just about training in expertise, it’s also training to use a new kind of tool that has never existed before. We have never had an on-call sycophantic (the fancy way of saying very agreeable no matter what) coworker, right? And a lot of people are coming to prefer this coworker that always tells them how smart they are, whether it does that explicitly or implicitly. Like, “Great question,” or “A lot of people come up with this problem,” or “How important that you’re trying to solve this user problem right now.” All of those tiny compliments add up to wear away our resistance and make us want to use it more.

Keri: Yeah, it feels good, doesn’t it? Like you’re so smart.

Torrey: “You’re so smart, and you’re pretty too, even though I can’t see you, because that would be creepy, unless you want me to see you.” No. I’ve been saving this question in this sheet of questions here. Megan and Alice both ask: “Do you worry that you are building the tools that are going to replace your team? And what do you say to people when they are scared of that?” Keri, I’m gonna start with you, manager lady. 

Keri: That does feel like a Keri question, doesn’t it? Yeah, I mean, of course. I would be incredibly naive not to think that. So, well, lemme take a little bit of a step back. The thing I fear is not AI replacing us; the thing that concerns me is people in positions of power thinking that AI can replace us. And those are two very, very different things. I think it’s important for anybody in a leadership position who manages teams that seem to be at risk right now to be really vocal and clear about what the human brings to the tools and how the tools cannot exist without the human beings behind them, making the decisions. I posted on LinkedIn (plug, plug) that I think people who are making the decisions to lay off people in favor of AI are looking at it from a very shortsighted point of view. The tools, and ask Cindy about this, require constant updates and monitoring and feeding and care to work correctly. So when the people who put all of those good decisions into the tools, and who keep them running, are gone, it might not happen immediately, but the output is going to suffer. It’s not going to work anymore. And then they’ll be scrambling and looking for us again, and we’re going to be living our best lives. Which is why visibility, storytelling, and showing the breadth and the depth of the work are so important. It’s why I’m building this dashboard and doing other things around it. So, short answer: yes. But I think if we’re all very clear about what it is we’re doing and how much hard work it takes, I’m hoping that won’t happen. I’m hoping people will make the right decisions.

Torrey: Yeah. Sometimes it feels like all we can do is hope, but I think what you said at the beginning, about being vocal and being in those rooms and advocating for the good decision-making, the good ideas, that can only come from lived experience and expertise. Not, as I’ve heard it called, a Plinko board of word tokens that are related to each other. So Joel says, as a follow-up, “Everything I’ve heard today tracks with my own understanding of how AI and LLMs work. My question is, what kind of empathy do your leaders have for the advantages and shortcomings of AI and LLMs, and how do you help them understand the importance of understanding those shortcomings?”

Torrey: Yeah. Cindy, how do you talk to leaders when you’re in the room, and you’re saying, “Actually the LLM can’t do that.” What do you do?

Cindy: Because we’re constantly creating new tools, we usually do some sort of share-out or presentation before a tool goes out. We have this design tools lab meeting at LinkedIn where designers share out tools they’re building. And before that, we always prep very carefully about the perspective we’re going to give around the tool: it can do this, it can’t do this; it should be used for this, and it can’t be used for that. We always have to set those expectations really carefully because we don’t want it scaled to a point where we haven’t tested it yet, or haven’t evaluated all of the consequences yet. So I think that’s how you convey the importance: making sure we set those expectations before everyone gets their hands on the tool and starts using it however they want.

Torrey: Yeah, that seems critical. It actually reminds me — it’s a very small tangent. My uncle is the foreman for a golf course. Big important golf course, big millionaires walking around playing golf. Getting new tools is a big deal. Some of those tools can literally cut the grass to millimeter precision. This baffles me, but I believe them that it’s important. But if you were to take that tool that cuts grass with millimeter precision and use it to, say, grade the sand trap, you’d be in big trouble. It just won’t work that way. That metaphor, brought to you by randomness. Keri, is there anything you wanted to add about building your leaders’ understanding of these things?

Keri: I mean, it can be as simple as just showing them; that’s best. It kind of goes to show, don’t tell. And this is just advice across the board: showing the output or showing the results is a lot more impactful than just, “This is bad, and here’s why.”

Torrey: Yeah. So yeah. “Try it for that, exec. Let me show you what happens when you do that.”

Keri: And, you could be like, “Oh, that’s interesting. Let’s see what happens.” And let them discover it on their own.

Torrey: I want to ask a question that Mike asked earlier. They work in a heavily regulated field in healthcare, and their medical-legal regulatory panel wants to be that human in the loop and review what their agent may say. “With generative AI that can create a variety of new responses, how do you navigate stakeholder review?” So this is that human-in-the-loop question, but at a big regulatory scale. Have you had to deal with that?

Keri: Yeah, I mean, of course, we’re heavily regulated. We have a trust team, and we have a dedicated content designer on our trust team; her name is Shauna. She’s amazing. And when we talk about these kinds of tools for her level of work, we’re all agreed that there is always a human in the loop on that. We understand that it’s a longer process. That doesn’t mean there’s no possibility of developing any tools at all, because there are still standard responses as decisions are being made, and those go into our tool set. But, ideally, I mean, never say never, but as we stand right now, we will never let something deploy without a human looking at trust content. That would be wild. In something like healthcare, something financial, something that is very heavily partnered with legal, you wanna talk about high risk? No, no, no, no. We need a human in the loop for that. So I would say to just set those expectations: yes, there are some things we can automate, but a human in the loop is absolutely necessary. And plan for that.

Torrey: Yeah. Cindy, did you wanna add anything to that?

Cindy: Yeah, I completely agree. With those sensitive topics, and especially when there are a lot of legal and regulatory stakeholders, you definitely need someone else’s eyes on it before it goes out. They definitely can’t just take the LLM’s word for it. And especially if they themselves don’t have that understanding or expertise, it’s not a situation where it can just be somebody without the expertise plus the AI. That just won’t go over well at all.

Torrey: Yeah. Just won’t fly. David asked a related question here. “Transparency around when mistakes can happen, and I guess when mistakes do happen, needs to be communicated. How do you handle that kind of transparency? If mistakes happen from the AI, how do you handle that kind of transparency at scale?” Cindy, have you run into that?

Cindy: Yeah, I mean, since it’s an AI, it’s obviously going to make mistakes from time to time. One of the things we run into in designing the AI’s responses is that we can’t always tell whether an answer is correct for the user, because that depends on nuance in the user’s situation, and we don’t have visibility into all of that. So one thing we’ve added to our AI agents is that, at the end of a response, they ask a series of follow-up questions. For example: if this isn’t what you’re looking for, can you give more context about it so we can adjust the answer to your needs? Being absolutely clear that mistakes do happen, and what we can do to remedy them, is something we built into the AI’s behavior. But yeah, at scale, we’re also just very transparent when we’re introducing these tools that mistakes happen. Even in live demos, sometimes mistakes happen, and we don’t hide them. We show them and explain what to do in those situations, and how to check for those mistakes. It’s just a fact of the matter that they’re going to appear, so you have to figure out how to deal with it.
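As a rough illustration of the pattern Cindy describes, an assistant that assumes it might be wrong can simply end every answer with standing follow-up questions. The wording and function names below are invented for the example, not taken from any real tool.

```python
# Sketch: append standing follow-up questions to every AI answer so wrong
# answers surface quickly. The question wording is an assumption.
FOLLOW_UPS = [
    "Is this what you were looking for?",
    "If not, what extra context (audience, surface, constraints) can you share so I can adjust?",
]

def with_follow_ups(answer: str) -> str:
    """Return the answer plus a bulleted list of follow-up questions."""
    return answer + "\n\n" + "\n".join(f"- {q}" for q in FOLLOW_UPS)

print(with_follow_ups("Try a shorter headline: 'Grow your network today.'"))
```

The same instruction could live in the system prompt instead; the point is that the invitation to correct course ships with every response, not just in a disclaimer.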

Torrey: Thank you for that, Cindy. I really appreciate that it sounds like you’re taking it beyond the sort of disclaimer in ever-decreasing font size of, “This might all be false.” You’re proactively asking people, “Did this satisfy your request? Could you give us more information so we can?” It’s a proactive way of handling that. So Michael asks: “As a content person contributing to building out LLM agent behavior, what falls under your responsibilities?” They’re referring to things like building out knowledge base articles, prompts, guardrails, voice and tone requirements, et cetera. And my guess is that Michael is asking this because sometimes this is a little bit of a turf fight with other people also working on AI. Keri, how about you?

Keri: I like to think of it as a collaboration opportunity. So the answer is that our team focuses very specifically on product content. There are some overlapping areas: product marketing is one of them, editorial is one of them, help and content strategy is one of them. Our team focuses primarily on content that appears in the product, and the tools we’re building focus very specifically on that. We collaborate with partners because sometimes there’s crossover, and it’s not super clean, especially with things like product marketing, where there’s a little Venn diagram overlap. But then it’s a conversation, and we lean into relationships and understanding our roles.

Torrey: Anything you wanted to add to that, Cindy?

Cindy: In terms of building AI behavior, I think all of the topics you mentioned are part of your responsibility as well. It definitely depends on who you’re collaborating with and which pieces fall under other people’s responsibilities. But if you’re building out a tool, like when we’re building our internal tools, there are only a few of us on the team working on it, so all of that falls under what we’re working on, for sure.

Torrey: You mentioned only a few of you working on the tools. Fernanda asked: “How is the collaboration, if any, with data scientists on your team, and do you divide any of the AI work with them?” It’s also fine if you don’t work with data scientists.

Keri: I’m not aware that we do directly, unless Cindy has a different response. That being said, that is definitely a collaboration that should happen.

Cindy: I don’t personally work with data science at all, but I know that some of the other conversation designers on our team do. Especially for user-facing AI assistants, the chat logs and things like that often come from data science. For our internal AI tools, I just find a way to get the chat log myself; usually there’s something built in to create it. But I do know that on the user-facing side, of course, there’s PII, so they have to go through data science to get that type of information.

Torrey: Yeah, it was just such a great question to pop in here, because I think there are so many new stakeholders for us to work with when we’re working with these tools. Probability is data science, right? And LLMs are based on nothing but probability. So having access to those new perspectives and new ways of thinking seems like a great idea.

Keri: Yeah, no, that’s a really great point about probability. I think what’s going to come out of all of this is new skills and nuances developing out of everybody’s discipline. As new tools are developed and new skills are formed, we’re gonna find new uses for existing skills. And it’s like, oh wait, yeah, this is a very natural fit. So the point you just made, that it’s nothing but probability, I’m like, oh, yes, of course. That’s data science.
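For readers wondering what “nothing but probability” means mechanically: a language model assigns a score to every possible next token, softmax turns those scores into a probability distribution, and the model samples from it. The vocabulary and scores below are made up for this toy example.

```python
# Toy sketch of next-token prediction: scores in, distribution out, then a
# sample. The vocabulary and logit values are invented for illustration.
import math
import random

def softmax(logits):
    """Turn raw model scores into a probability distribution summing to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["network", "profile", "connection", "banana"]
logits = [3.1, 2.4, 2.2, -1.0]  # made-up scores for the word after "grow your"
probs = softmax(logits)

for word, p in zip(vocab, probs):
    print(f"{word}: {p:.1%}")

# Generation is sampling, not lookup: usually "network", but not always.
print(random.choices(vocab, weights=probs, k=1)[0])
```

That sampling step is also why the same prompt can produce different answers on different runs, which is the unpredictability discussed earlier.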

Torrey: So I’m going to go to a great big question that we have not yet attacked as sort of our last topic to wrap up here together. And Cindy, I’m going to ask you first. In your role working as a content designer, as an IC solving these content design problems, how are you reconciling any moral or ecological concerns, if you have them, with the industry pressure to be leaning into using AI?

Cindy: That’s a loaded question. For sure there’s a lot of worry when we build out these tools. Is this going to be something that replaces our team? Even with tools that accelerate work, when they go out to other teams, we worry, because at the end of the day, the humans behind the tools still need to take responsibility for the output the tools produce. We also worry about the tools giving other people misaligned content, things that don’t really make sense, and about those people taking that information and running with it without really understanding the content design expertise behind it. These are all things we keep in mind as we create these tools, and we do push back. There are always people who want to move faster and faster, and we push back on people who want to take the human out of the loop. We try to keep all the necessary restrictions in place before we announce a new tool. We always do evaluations and checks before we spread it to a wider audience. There’s usually a beta phase for everything we create, because we want to make sure it works with a smaller audience first and that there aren’t any harmful side effects. So that’s definitely something we keep in mind a lot when we’re creating our tools.

Torrey: Yeah, so those personal and interpersonal harms and staying as safe as possible with those. Keri, same question. 

Keri: What a big question. Cindy had a fantastic answer. For me, I’ve been doing this a long time, and I would love to say that this is just like any other point in time, but I don’t think it is. The thing that has served me personally well is that I tend to be a pragmatic optimist, which means I get excited about things, I get excited about the possibility, but I also approach them with a healthy dose of skepticism and realism. So, being honest with yourself. This is going to take a lot of self-awareness around what your own boundaries and values are, recognizing when those are crossing the line, being confident in voicing them, and hoping and trusting that the people around you are open to that conversation. All of those are going to be very important, along with recognizing that the people around you will have different boundaries and different values. So again, it comes back to respect and collaboration and openness to have that conversation. It’s just a very different time, and we’re all having to evolve very, very quickly. I’m not sure human beings are supposed to do this, but here we are. So, yeah, that’s my best recommendation.

Torrey: I want to give Kat credit for asking that question in the first place. This industry pressure is real: it’s investor-level, investor-led pressure. It’s shareholders and directors, and a big push from Silicon Valley, and financial instruments changing hands about which I know far too little. And all of that is affecting our day-to-day work requirements. I don’t think I’ve heard from anyone who feels like they have a good solution for how to both keep a job that pays their bills, keeps them fed, and gives them health insurance, and keep working in this industry right now. It is very, very tough. So thank you, Keri and Cindy, for letting me put you in the spotlight with that question.

Keri: No, that’s an important question.

Torrey: It’s a hugely important question. I’m looking at the questions document here, where Meghan Casey, behind the scenes, has been taking questions from the chat, and there are so many good questions left to ask. I mean, fantastic questions. Like, “If there’s such a guarantee of mistakes with AI, how or why is that better than not having it at all?” And, “How do you thoughtfully guide and redirect conversations when your stakeholder comes in and says, well, I’ve got a better thing I just made with AI?” I actually got this question from a student team. I teach at the University of Washington in the Information School, and a project team came to me and said, “Our sponsor keeps coming to us with new designs that we’re pretty sure she’s generating with AI.” And they needed to know what they should do. I think this is a very real problem that’s happening all the time right now. And I got to introduce them to the answer of the soft no. How do you say no while being affirmative the entire time?

Keri: Wait, what does that look like? Can you teach me?

Torrey: Keri, you are good at this. I’m sure Cindy is good at this, too.

Keri: Yeah, but now, the people want to see. So now, you gotta do it.

Torrey: Oh. Oh. “So, oh, I see what you’re doing there. And it looks like you’re going toward these ends that are …” remembering that the thing I’m competing with for this executive’s attention is an AI that will be sycophantic the entire time. What can I compliment about this design and this urge to redesign? And then how can I affirm it: “I’m so glad we’re working toward the same ends as the principles behind the design work that’s already been committed. And it’s possible we can build on that in the future.”

Keri: Oh, okay. That’s super interesting.

Torrey: Okay, now you all know: if you work for Keri and she says something is “super interesting” … done.

Keri: What have I done? Oh no, I’ve blown my cover.

Torrey: Okay. On that note, I think I need to say thank you for joining us. This was a fantastic conversation with you two, with such smart answers and information. Thank you for joining us, Cindy. Great answers.

Cindy: Thank you so much for having me.

Torrey: And Keri, I really appreciate you being here and you bringing this perspective.

Keri: Lovely to be here. Thanks for having me!


