Minimizing user research fraud in the age of agentic AI

Abby Bajuniemi
February 3, 2026
With valid and reliable data to work from, research helps us to de-risk our content and UX decisions.

What is fraud in user research, and why should we care about it? 

Before we get into the nitty-gritty of how LLMs and agentic AI are affecting our ability to run valid and reliable user research, let’s get a shared understanding of what fraud actually means in the context of research.

Participant fraud is when someone misrepresents who they are in order to qualify for research, even though they aren’t actually part of the target demographic. Typically, they don’t do this to bung up your study, but because they need or want the income. They may not realize what fake results will do to the study’s reliability or validity; they just want the incentive, which can be a significant amount of money. For some participants, the incentive could be more than they make in a week, a month, or even a year.

There are typically two kinds of fraud: the individual “professional tester” and the fraud ring. Folks who are working alone usually rely on distributed networks to learn how to “game” screeners to qualify for studies. They’re often members of several online panels on platforms like Respondent.io, Usertesting.com, etc., and use them as a source of income. Fraud rings are similar, except that they operate as a group, pooling their earnings to share among their members.

So, why should we care about this? Well, we use data from user research to make business decisions that shape design direction and strategy. If the data we’re collecting doesn’t accurately reflect the customers and segments we’re hoping to serve, it introduces significant risk to the business.

According to dtect, “The cost of preventing fraud up front is much less than dealing with bad data because it leads to rework, lost trust, and poor business decisions.” 

Fraud in user research isn’t particularly new. We’ve had to be on the lookout for fraudulent participants for quite some time. But if you’re new to user research or UX in general, this may be upsetting news. Before we get into what to do about it, let’s talk a bit about what it looks like in practice, both before and after the introduction of LLMs and agentic AI, so you know what to look out for.

What did fraud look like before LLMs and agentic AI? 

Fraud was very manual in the “before times.” Individuals had to work harder to learn how to game screeners. They had to do a lot of their own work to find others who could teach them how to be “professional testers.”

Fraud rings had to gather in person, usually in a conference-style room, with multiple devices at the ready. Everything was limited by the number of people and devices you had at your disposal and the amount of time you had.

When social media channels like Twitter, TikTok, and YouTube arrived, things got a bit easier, because folks could broadcast their tips and tricks for gaming screeners and getting accepted for studies to a wider audience. More and more creators began making content on how to qualify for research.

Individuals and fraud rings also began to use social media to pick up on scheduling links and blast them to their networks. Calendly links, while super useful, were (and are) really vulnerable to being picked up by bad actors when shared publicly.

So, how did we catch fraudulent participants back then? There were several reliable signals we could use to determine whether a participant was fraudulent or not: 

  • IP addresses
  • Browser signals 
  • SMS verification
  • Speed traps or attention checks 
  • “Human-sounding” responses 

IP addresses used to be a reliable way to catch someone who wasn’t in your target demographic. If you were looking for folks from Canada but a participant’s IP address was in Germany, for example, it would be an immediate flag that they might not be who they said they were.

We could also check for duplicate IP addresses and block any that showed up more than once. This used to be a sign that someone was using different browsers on the same device to complete a study more than once. We could use other browser signals as well, like screen resolution, browser and OS versions, or installed fonts and plugins, to make sure each respondent was a unique user with a genuine identity. SMS verification was also a reliable tool, as it was harder to spoof and therefore great for authentication.
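If your platform lets you export respondent metadata, a short script can do this kind of cross-checking for you. Here’s a minimal sketch in TypeScript, assuming a hypothetical export with an IP address and a geolocated country per respondent; the field names are illustrative and will differ by platform.

```typescript
// Hypothetical sketch: flag duplicate or out-of-market IP addresses in an
// exported list of respondents. Field names are illustrative; adapt them to
// whatever your research platform actually exports.
interface Respondent {
  id: string;
  ipAddress: string;
  country: string; // from an IP geolocation lookup
}

function flagIpIssues(respondents: Respondent[], targetCountry: string): string[] {
  const seen = new Map<string, number>();
  const flagged: string[] = [];

  for (const r of respondents) {
    // Flag anyone whose resolved country doesn't match the target market.
    if (r.country !== targetCountry) {
      flagged.push(`${r.id}: IP resolves to ${r.country}, expected ${targetCountry}`);
    }
    // Flag any IP address that shows up more than once.
    const count = (seen.get(r.ipAddress) ?? 0) + 1;
    seen.set(r.ipAddress, count);
    if (count > 1) {
      flagged.push(`${r.id}: IP ${r.ipAddress} has now appeared ${count} times`);
    }
  }
  return flagged;
}
```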

Attention checks and speed signals were helpful in making sure you were dealing with an authentic, human response. If the completion time was way too fast or they “failed” the attention check, it was a signal that maybe the respondent was a “speeder,” or someone who was racing through your test just to get to the incentive.

And finally, we could look at the “humanness” of the responses. Were the responses relevant to the questions and answered in a sensible way? Was the response coherent, or was it a keysmash (something like “euhfkusdfuhiuw”)? Were there “human” typos, like “teh” instead of “the”? Was the person responding to your questions in a consistent manner, and were the answers across all the tests for one question consistent with what you might expect?
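If you have open-ended responses and completion times in an export, even crude heuristics can surface candidates for a closer look. The sketch below is illustrative only: the 60-second floor and the keysmash check are assumptions you’d tune for your own study, and anything it flags should be reviewed by a human rather than rejected automatically.

```typescript
// Crude heuristics for flagging "speeders" and keysmash-style answers for
// manual review. The threshold and the keysmash check are illustrative
// assumptions, not established cut-offs.
interface OpenResponse {
  id: string;
  completionSeconds: number;
  answerText: string;
}

const MIN_PLAUSIBLE_SECONDS = 60; // assumed floor for this hypothetical study

// Flags single unbroken strings of 10+ characters (e.g. "euhfkusdfuhiuw").
// Real keysmash detection would need more signals; this catches the simplest case.
function looksLikeKeysmash(text: string): boolean {
  const trimmed = text.trim();
  return trimmed.length >= 10 && !/\s/.test(trimmed);
}

function flagForManualReview(responses: OpenResponse[]): OpenResponse[] {
  return responses.filter(
    (r) =>
      r.completionSeconds < MIN_PLAUSIBLE_SECONDS || looksLikeKeysmash(r.answerText)
  );
}
```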

These were all reliable ways to prevent and detect fraud in online research during the early days of the Internet. We didn’t see a ton of fraud in remote moderated research; it was the worst with unmoderated and survey research.

LLMs and agentic AI have changed that, though. 

What has changed since LLMs and agentic AI became widespread? 

LLMs and agentic AI have enabled fraud to scale to levels we haven’t seen before. Fraudsters can more easily game your screener, especially if they figure out what it is you’re recruiting for. They can create personas that will answer screening questions in the way you’d expect someone who is actually a member of your target demographic to answer them. They can also help folks obscure themselves and circumvent some of the ways we used to be able to identify fraudulent identities.

We can’t rely on IP addresses or browser signals anymore. Today, fraudsters use residential proxy networks to route their activity through real household IP addresses so that they appear to live in the area you’re targeting. These tools allow a lot of customization, too. You can choose what device, ISP, location, etc., shows up for you. And they’re very cheap, making spoofing super fast, seamless, and virtually undetectable. They can rotate through thousands of legitimate-looking IP addresses in a short amount of time.

Cheap, anti-detect browsers can spoof thousands of unique browser environments, each with its own digital footprint. Every session looks like a different person on a different device, and each person running this kind of spoofing can have multiple spoofed environments running concurrently, allowing them to run multiple tests at the same time and scale the incentives they’re earning.

SMS verification isn’t a reliable authentication method anymore, either. Again, there is now cheap and easily accessible software that allows people to rent or buy huge banks of phone numbers that are active for only a few minutes at a time, allowing them to “authenticate” themselves for multiple tests at once.

Speed traps are also less effective. Fraudsters can use LLMs with scripts to create more realistic completion times, or they can use LLMs and intentionally slow the response rate so that “time on task” appears normal. This one is especially insidious because your data is still fake, yet there are no obvious red flags when folks do this.

LLMs and agentic AI are being used in both moderated and unmoderated research, creating fraudulent data that often looks very human. Agentic AI, in particular, is problematic for user research because it can:

  • Generate fake personas, spinning up thousands of identities instantly 
  • Generate scripts to handle entire workflows 
  • Tailor responses to the context of the item/question 
  • Auto-create emails and user accounts 
  • Navigate surveys using browser automation 
  • Feed inputs into models to craft context-aware responses 

This data can be extremely difficult to identify as fake and could potentially put you at risk of making poor business decisions. Fraudsters can do a lot more, a lot faster, in ways that are so much harder to track.

For one thing, there are no physical limitations anymore. With cloud infrastructure, there’s no need to be in a room with a bunch of devices. Individuals and fraud rings can work much more easily from anywhere.

Fraudsters using agentic AI can rely on “decision-making” from the agents, reducing their cognitive load and allowing them to “participate” in concurrent tests while these agents reproduce human-like variability in responses.

Ultimately, there’s an ability to scale without ceilings. What once required time, people, devices, and effort can scale seamlessly with one person and automation. 

How can content designers catch user research fraud in action?

At this point, you might be feeling frustrated and wondering how you’re supposed to run research with all these obstacles. Or maybe you’re thinking, “Well, since my leadership wants me to ‘do research’ using Claude or Gemini or ChatGPT anyway, what does it matter?”

I’ll start by saying that Chris Chapman’s piece on why synthetic survey data cannot mimic real human responses is a must-read. He talks about survey data, but the reasons he gives, especially in the section about how synthetic data fails empirically, should give you a good idea as to why businesses shouldn’t rely on synthetic data from LLMs to drive strategy decisions.

I’d also point you to Dr. Abeba Birhane’s short article, “Cheap science, real harm: the cost of replacing human participation with synthetic data.”

If you want to learn from people, you need to talk to people. I’m going to give you some strategies to suss out whether the person participating in your unmoderated study or remote session is who they say they are, as well as how to prevent them from taking part in the first place.

Before getting into it, I want to acknowledge that these strategies can feel at odds with our values. As UX professionals, we value human-centered design principles like inclusivity, data privacy, and security, so some of these strategies may feel uncomfortable. There’s a risk of creating a less inclusive environment for people like neurodivergent folks who might want to turn the camera off to reduce stimulation, to give one example. Fraud puts us all in a bad position, but by practicing these strategies over time, we can hone our ability to identify red flags that warrant further investigation, reducing the likelihood of fraud and ultimately creating a safer space for everyone.

I’ve recently run a few studies, both moderated and unmoderated, that had quite a bit of fraud. I typically saw a combination of things like:

  • IP addresses that were not from my target market 
  • Overly pushy respondents who were very focused on getting the incentive  
  • Immediate responses to invitations to participate, especially at odd hours for the target market
  • Responses to surveys that still had cut-and-paste artifacts (e.g., “translated by DeepL”)
  • Responses that did not answer the question
  • Speeding and keysmash responses 
  • Respondents using Gmail email addresses 
  • Unusual names that look like two first names (e.g., “Mary John” or “Cindy Jack”)

Each one of these, in isolation, isn’t necessarily a firm sign that the person isn’t who they say they are (though IP addresses can still be a strong indicator — especially since fraudsters sometimes forget to use their VPN or other spoofing device). You’ll want to look holistically at a participant’s behavior and clues to make a determination, layering your prevention and detection strategies to minimize fraud.
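One way to operationalize that layering is to treat each signal as an input to a simple review score instead of an automatic rejection. The sketch below is a toy example: the signal names, weights, and threshold are invented for illustration and would need calibrating against participants you’ve already verified.

```typescript
// Illustrative sketch of layering signals into a single "needs review" score.
// Signal names and weights are made up for the example; the point is that no
// single signal rejects a participant on its own.
type Signal =
  | "ipOutsideTargetMarket"
  | "instantResponseAtOddHours"
  | "pasteArtifactsInAnswer"
  | "answerOffTopic"
  | "speeding";

const WEIGHTS: Record<Signal, number> = {
  ipOutsideTargetMarket: 3, // still one of the stronger indicators
  instantResponseAtOddHours: 1,
  pasteArtifactsInAnswer: 2,
  answerOffTopic: 2,
  speeding: 2,
};

const REVIEW_THRESHOLD = 4; // arbitrary; calibrate against participants you've verified

function needsManualReview(signals: Signal[]): boolean {
  const score = signals.reduce((sum, s) => sum + WEIGHTS[s], 0);
  return score >= REVIEW_THRESHOLD;
}

// An off-topic answer alone doesn't trigger review; combined with an
// out-of-market IP address, it does.
needsManualReview(["answerOffTopic"]); // false
needsManualReview(["answerOffTopic", "ipOutsideTargetMarket"]); // true
```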

Biometric & language indicators

Check whether your platform has a way to track biometric signals, and keep an eye on writing style and vocabulary to spot unusual patterns or formal language that feels auto-generated. Specifically, you can watch for:

  • Typing rhythm: Things like backspacing, edits, pauses, or hesitations that humans naturally exhibit. 
  • Scrolling style: Natural/unpredictable vs. steady, robotic pacing.
  • Repetitive phrasing: Saying things like “All that said, …” repeatedly across responses and projects (see the sketch after this list)
  • Uncharacteristic language: Sentence structure that feels too polished or jargon your target demographic wouldn’t likely use (e.g., “The quality of the customer experience was highly consistent across all touchpoints” or “Recent financial data from public companies suggests…”)
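As a concrete (and deliberately simple) example of checking for repetitive phrasing, the sketch below counts how often the same opening words recur across one participant’s responses. It’s only a starting point; real analysis would look at much more than sentence openers.

```typescript
// Illustrative check for repetitive phrasing: count how often the same
// opening words recur across one participant's responses.
function repeatedOpeners(responses: string[], openerWords = 3): Map<string, number> {
  const counts = new Map<string, number>();
  for (const response of responses) {
    const opener = response
      .toLowerCase()
      .replace(/[^\w\s]/g, "") // strip punctuation so "said," matches "said"
      .split(/\s+/)
      .slice(0, openerWords)
      .join(" ");
    counts.set(opener, (counts.get(opener) ?? 0) + 1);
  }
  // Keep only openers that appear more than once.
  return new Map([...counts].filter(([, n]) => n > 1));
}

// Three answers that all start with "All that said" would surface here:
repeatedOpeners([
  "All that said, the checkout flow felt smooth.",
  "All that said, I would use this feature again.",
  "All that said, the pricing page confused me.",
]); // Map { "all that said" => 3 }
```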

Behavioral cues

Along with language, certain behaviors and formatting choices can add to the holistic picture of the participant’s responses. Look for things like:

  • Tab changes: These can signal that respondents are going to another tab to look things up or use an LLM to get answers. 
  • Copy/pasting: Some platforms offer copy/paste detection or allow developers to write a script to detect or prevent it (a minimal example follows this list). 
  • Use of bulleted lists: This can be a tell that a human didn’t create the response you’re seeing.
  • Eye contact and gaze: In moderated interviews, watch for participants who look away from the camera or sound as if they’re reading something.
  • Typing: If you hear typing during moderated interviews, it may indicate the use of an LLM.
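If your platform does let you add custom scripts to an unmoderated study, paste detection can be as simple as listening for the browser’s paste event. The sketch below only logs what happened so you can review it later; whether you can embed something like this at all depends entirely on the tool you’re using.

```typescript
// Minimal browser-side sketch of copy/paste detection for an unmoderated
// study form. It logs paste events for later review rather than blocking them.
document.addEventListener("paste", (event: ClipboardEvent) => {
  const target = event.target as HTMLElement | null;
  const pastedLength = event.clipboardData?.getData("text").length ?? 0;

  // Record which field received a paste and how much text arrived, so large
  // pastes into open-ended answers can be reviewed alongside other signals.
  console.log(
    `Paste detected in ${target?.tagName ?? "unknown element"}: ${pastedLength} characters`
  );
});
```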

These indicators aren’t foolproof, of course, especially if you’re targeting folks who are highly educated or have a deeper level of domain knowledge. But if, for example, you see bulleted lists and the time to complete was very short, these combined indicators may signal a red flag worth further investigation.

What can we do to prevent user research fraud?

We can still use some of the tried-and-true methods — like attention checks or confirmatory questions — in our screeners and unmoderated studies. I've had some luck repeating screener questions in an actual test and cross-checking the similarity of the responses.
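To make the cross-checking part concrete, here’s one rough way to compare a screener answer with the same question repeated inside the test, using simple word overlap. The 0.3 threshold is an arbitrary assumption, and a low score is a prompt to look closer, not proof of fraud.

```typescript
// Rough sketch: compare a screener answer with the same question repeated
// in the test, using word overlap (Jaccard similarity). The threshold is an
// illustrative assumption.
function wordSet(text: string): Set<string> {
  return new Set(
    text.toLowerCase().replace(/[^\w\s]/g, "").split(/\s+/).filter(Boolean)
  );
}

function jaccard(a: string, b: string): number {
  const setA = wordSet(a);
  const setB = wordSet(b);
  const intersection = [...setA].filter((w) => setB.has(w)).length;
  const union = new Set([...setA, ...setB]).size;
  return union === 0 ? 0 : intersection / union;
}

function answersDiverge(screenerAnswer: string, inTestAnswer: string): boolean {
  return jaccard(screenerAnswer, inTestAnswer) < 0.3;
}
```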

For moderated studies, we can also take some extra steps to ensure our participants are who they say they are:

  • Require a photo ID at the start of a session
  • Require that the camera be on for the entirety of the study (and attempt to reschedule if they say they can’t have their camera on) 
  • Include confirmatory questions during the interview as well, paying attention to the environment when possible. I had one person show up to an interview on camera in what was clearly the evening, even though they were supposed to be in Michigan, where it was about 2 pm in the middle of summer. 

We can also take steps to make it more difficult for fraud rings to hijack our studies. Don’t advertise on social media if you can avoid it — it’s far too easy for your links to be spread by fraud rings, leaving you with the pain of figuring out which participants are legitimate and which should be rejected.

Finally, you can work with your vendor or participant panel provider to see what preventive strategies they have in place. Some great questions to ask them include:

  • What signals do you monitor? 
  • How often do you update your detection logic? 
  • How do you address evolving fraud tactics (e.g., AI-generated personas, AI agents)?
  • What happens if the tactics evolve faster than your detection logic can keep up? How do you adapt to that? 

Your vendor should be happy to answer those questions for you, and if they’re not, you may want to rethink the relationship. Their answers will help you understand their limitations so you can build complementary layers of your own.

To wrap up

Fraud is a big and growing problem in UX research, and it’s likely only going to become more difficult to deal with. With valid and reliable data to work from, research helps us to de-risk our content and UX decisions. Some of these fraud detection strategies can feel as if they run counter to our human-centered values, but in the long run, dealing with fraud effectively allows us to better meet the needs of our audiences and create better products for everyone.

Keep learning with us at Button workshops

If this article got you thinking about how you practice content design, our upcoming workshops go deeper. In 2026, we’re hosting live, hands-on sessions where content designers work through real challenges together, guided by experienced facilitators. You’ll leave with practical skills you can use right away.

Abby Bajuniemi

Author

Abby Bajuniemi is a Staff UX Researcher with expertise at the intersection of language and society, and a former Professor of Spanish and Linguistics. Her PhD is in Applied Linguistics, specializing in sociolinguistics, language learning, and technology-mediated communication. She’s led UX research and content strategy across multiple industries and has worked with companies including Google, Calendly, Princeton University, Medtronic, and Best Buy. She currently leads UX research for New_ Public’s Public Spaces Incubator, and she was previously the UX Research Lead for Android’s Expression and Emoji team. She is deeply curious about new ways of thinking about and evaluating the success of social platforms.

Headshot - Sean Tubridy

Illustrator

Sean Tubridy is the Executive Creative Director and Co-Owner at Button Events.

Find out how you can write for the Button blog.
