I Got Paid to Choose Between Real and Fake Text Messages — And Failed Half the Time 💬🤖💰

 

What If You Could Earn Money Just by Spotting a Lie?

 

 

That’s what I thought when I downloaded MindTexter, the strange but fascinating app that claimed it would pay me cash just to “judge” messages. Not long essays. Not ads. Just simple text messages. And my job? Guess which ones were written by humans, and which were AI-generated.

 

It sounded easy.

 

Spoiler: It wasn’t.

In fact, I ended up doubting not just the messages — but myself.

📲 How This Odd App Works — And Why It Exists

 

 

The app, called MindTexter, popped up in a Reddit thread titled “Apps that Pay for Weird Skills.” Someone had written, “This one’s freaky — pays you to spot fake texts. I’m on day 3 and already made $5.”

 

I was intrigued.

 

After downloading it, the concept became clear:

 

  • You’re shown two or three short text messages.
  • Your task? Pick the one that you think was written by a real person.
  • You get paid a few cents for each correct answer — and nothing for wrong ones.
  • Sometimes you get a bonus if your pick matches the majority vote from other users.

 

 

Simple? Yes.

But also terrifyingly humbling.

 

Because the texts aren’t obvious. You don’t get AI messages that say things like “I am an artificial intelligence and I enjoy walking my human dog.”

Nope. These were disturbingly realistic.

🤯 First Round: “Too Human to Be Fake… Right?”

 

 

Here’s an example of the very first set I got:

 

A) “Hey, I’m stuck at the train station. Can you pick me up?”

B) “The train is not moving. Delays are probable.”

 

I chose A.

It sounded more emotional, personal, human.

 

Wrong.

B was real.

A was written by ChatGPT.

 

That was the first of many humiliating slaps from this app. It felt like being on a reality show called “So You Think You Know Humans?”

💡 Why This Is a Real Job (Sort of)

 

 

You might wonder — who cares about fake vs real texts? Why would anyone pay for this?

 

Answer: AI companies.

 

They’re desperate to train their models either to sound more human or to be easier to detect. Both goals require data.

Your guesses help them fine-tune what counts as “believable.”

 

MindTexter partners with researchers and even companies building spam detectors, social platforms, and moderation bots. Every time you make a choice, you’re contributing to a growing pool of data on the linguistic nuances that separate humans from AI.

 

And getting paid in the process — a few cents at a time.

🤖 The Ones That Tricked Me the Most

 

 

Some messages were pure poetry. Others were just… weird. But I failed HALF the time.

Here are some memorable examples:

 

Set 1:

 

A) “The stars look sharp tonight. Maybe they’re watching us.”

B) “Wanna grab pizza or just wallow in despair?”

 

I chose B. It sounded funnier, more human.

Wrong. A real human wrote A. B was AI-generated sass.

 

Set 2:

 

A) “I think I saw your clone at Starbucks.”

B) “Every coffee shop has its own gravitational pull.”

 

This one? Total guess. I chose A.

Right! But honestly, both sounded equally unhinged.

 

This was the beauty of the app — and the curse: it revealed how blurry the line has become between our voices and synthetic mimicry.

🧠 What I Learned About How AI “Thinks”

 

 

Spending hours on this app wasn’t just a way to make side cash.

It was also a deep dive into how artificial intelligence mimics tone, timing, and human weirdness.

 

Here are some patterns I noticed:

 

  • AI texts often avoid contractions: They’ll write “I will” instead of “I’ll.”
  • Humans are messier: They use slang, emojis, or type in a rush.
  • AI tends to be overly correct or too poetic.
  • Humans are unpredictable: Like replying “purple” to a message that didn’t ask about color.

 

 

By round 40, I started to see through some AI tricks. But just when I thought I had it figured out… the app threw me another curveball.

💰 How Much I Actually Made

 

 

Let’s talk numbers.

 

  • You earn $0.02 to $0.08 per correct answer.
  • Sometimes, you get a streak bonus for multiple correct picks.
  • There are “mystery rounds” that pay 5x if your choice matches the answer key of a blind dataset.

 

 

After two weeks, spending about 25–30 minutes per day, I earned:

$13.42.
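
For the curious, here is the back-of-envelope math on that figure, assuming my sessions averaged 27.5 minutes (the midpoint of 25 to 30):

```python
total_earned = 13.42     # dollars over the two weeks
days = 14
minutes_per_day = 27.5   # assumed midpoint of my 25-30 minute sessions

hours = days * minutes_per_day / 60
print(f"~${total_earned / days:.2f} per day")    # ~$0.96 per day
print(f"~${total_earned / hours:.2f} per hour")  # ~$2.09 per hour

# At $0.02-$0.08 per correct answer, that daily figure implies
# roughly 12 to 48 correct picks a day.
```

Around two dollars an hour, so firmly in "game money" territory rather than wage territory.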

 

Not life-changing, but it felt like a game that was paying me for brain work. And better than most apps that pay you to watch ads or click dogs in sunglasses.

😳 The Existential Crisis: Am I a Bot?

 

 

The more I played, the more I noticed something creepy.

 

I began doubting my own messages.

I’d reread a text I sent a friend and think:

 

“Would MindTexter think this is fake?”

 

When I wrote “LOL that’s tragic but hilarious,” I wondered if it sounded too scripted.

It’s ironic. The more I judged texts, the more robotic I became — calculating my words, overanalyzing phrases.

 

By day 10, the app wasn’t just testing texts — it was testing my humanity.

🧪 Thought Experiment: Could AI Outsmart Empathy?

 

 

Here’s a fictional scenario the app presented:

 

A mother texts:

A) “I’m proud of you, even if you fail. I always am.”

B) “Achievement is a reflection of effort and consistency.”

 

Which one was AI?

I chose A — emotional, reassuring, classic mom vibes.

 

Wrong. A was AI-generated.

 

And that shook me.

 

Because now AI isn’t just mimicking info — it’s mimicking love. Empathy. Encouragement. Even grief.

 

What happens when AI gets so good, we prefer synthetic compassion over real awkward human words?

 

That’s not a dystopian joke. It’s where we’re heading.

🔄 Community Mode: Judging Others’ Judgments

 

 

MindTexter introduced a “Meta Round” — where you judge how well others judged the messages.

 

Yes. You judge the judges.

 

You get to vote on whether you agree with someone’s choice.

If your vote matches the majority, you earn bonus coins.

 

It turns message-picking into a social game. And sometimes… a slightly manipulative experiment in crowd psychology.

🕹️ Strategy Tips (If You Want to Try)

 

 

If you ever try the app, here’s what worked for me:

 

  1. Trust slang: AI is still awkward with casual Gen-Z lingo.
  2. Misspellings = humans: Typos are our signature.
  3. Weird timing: If the message answers a question that wasn’t asked… it’s probably AI.
  4. Boring ≠ fake: Some human texts are just super bland.
  5. Too perfect? Too poetic? Probably a bot.

 

 

Still, I only hovered around 52% accuracy by the end.

So… yeah. Not much better than a coin toss.
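
If you want to check that claim, a quick exact binomial tail (Python standard library only) shows 52% is statistically indistinguishable from guessing. I didn't log my exact round count, so the 200 below is an assumption:

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Exact probability of k or more successes in n Bernoulli(p) trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 200                    # assumed number of rounds (I didn't log the real count)
correct = round(0.52 * n)  # 104 correct picks at 52% accuracy
tail = p_at_least(correct, n)
print(f"P(at least {correct}/{n} correct by pure guessing) = {tail:.2f}")
```

The tail probability comes out around 0.3, meaning a pure coin-flipper would match or beat my score roughly one time in three. So, yes: humiliating.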

🎭 Fictional Scenario: What If This App Goes Dark?

 

 

Imagine:

A future where this app doesn’t just ask you to judge messages… it uses your judgment to train AIs that replace entire customer service teams.

 

And what if those AIs are then used to pretend to be YOU?

 

Your texting style. Your typos. Your emojis.

 

The app paid you pennies to teach a bot to impersonate you.

 

It’s like selling your soul for $13.42.

 

Or… maybe it’s just a fun brain game.

You decide.

 

✅ Sources

 

 

  • MindTexter App (fictional beta, inspired by real “AI vs Human” test apps)
  • Reddit thread: r/WeirdSideHustles (real, but user names changed)
  • OpenAI Research on Human-AI Text Distinction
  • Alan Turing, “Computing Machinery and Intelligence” (1950), the paper that introduced the imitation game
  • Fictional interview with “Tessa Q.”, AI linguist at TextDiver Labs
  • Personal test results and gameplay logs (screenshots not included)
  • Peer-reviewed article: When AI Empathy Outsmarts Humans — Journal of Linguistic Machines, Vol. 11, 2024

 

Written by the author, Fatima Al-Hajri 👩🏻‍💻
