Designing a Demand Research Workflow for Lean Product Teams
4/3/2026


Most teams “research demand” with gut feel and random screenshots from Reddit. This article shows how to set up a simple, repeatable demand research workflow that turns social noise into a ranked list of product opportunities you can actually plan around.

Most product teams and indie builders have a loose sense of “demand”: a few Reddit threads, some DMs, a couple of loud customers. Then roadmap time comes and everything quietly falls back to gut feel.

You can do better without turning into a research department.

A lightweight demand research workflow gives you a repeatable way to turn messy Reddit and X conversations into a short, ranked list of product opportunities you can actually act on every week.


This article walks through how to set that up, manually, with simple tools—and where something like Miner fits in if you want to automate the noisy parts.


What “Demand Research” Really Means For Small Teams


For lean product teams and indie hackers, demand research is not a 40-page market report. It is:

  • Regularly scanning where your buyers hang out (Reddit, X, niche forums, communities).
  • Capturing concrete, verbatim signals of pain, desire, and buyer intent.
  • Logging and scoring those signals in a simple system.
  • Using that system to prioritize what you build next (and what you kill).

The alternative is ad-hoc, “I saw a thread last month that seemed big” thinking. That is risky because:

  • Loud ≠ large: A spicy thread with 10 loud comments might not reflect a real, repeated pain.
  • Recency bias: Whatever you saw this week dominates planning conversations.
  • Pet projects: Founders and PMs can cherry-pick anecdotes that justify what they already want to build.
  • No audit trail: Six months later you cannot remember why a feature exists, or what problem it was meant to solve.

A demand research workflow replaces vibes with a light, recurring system: same inputs, same scoring, same review cadence. Nothing heavy, just consistent.


The Core Components Of A Lightweight Demand Workflow

Think of your workflow as a simple loop with four parts:

  1. Inputs: where you listen
  2. Logging: where you capture signals
  3. Scoring: how you compare opportunities
  4. Review cadence: how often you decide and adjust

1) Inputs: Where You Listen

For most SaaS builders and indie hackers, your highest-signal inputs will be:

  • Reddit: subreddits for your audience and problem domain.
    • Examples: r/smallbusiness, r/Entrepreneur, r/SaaS, r/devops, r/Notion, r/obsd.
  • X (Twitter): timelines and lists around your niche, search operators for specific pains.
    • Examples: "churn is killing us", "spreadsheet is killing me", "is there a tool that" + keyword.
  • Niche forums / communities: Discord, Slack groups, specialized forums, private communities.
  • Customer/lead conversations: sales calls, support tickets, onboarding calls, DMs.

You do NOT need to track everything. Pick 3–5 high-signal places where your buyers actually complain and ask for help.

2) Logging: Where You Capture Signals

You need one place to paste and structure what you see.

Start with something simple:

  • A single Google Sheet or Notion database called Demand Log.
  • One row/entry = one distinct demand signal (not one thread).

Example columns:

  • ID (autonumber or simple integer)
  • Date
  • Source (Reddit, X, support, call)
  • Link (to the thread/tweet/ticket)
  • Audience (who is speaking)
  • Problem summary (your 1–2 sentence paraphrase)
  • Verbatim quote (copy-pasted)
  • Type (Pain, Workflow hack, Tool request, Switch story, Buying intent)
  • Frequency score (1–5)
  • Pain severity score (1–5)
  • Buying intent score (1–5)
  • Fit with us score (1–5; how close to your domain)
  • Priority score (auto-calculated)
  • Status (New, In review, In roadmap, Rejected, Parked)

You can add nuance later. The goal is to make it trivial to add 3–10 signals per week.
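If you ever outgrow the spreadsheet, the same row shape translates directly to code. Here is a minimal sketch in Python; the field names mirror the example columns above and are illustrative, not a fixed schema:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """One row of the Demand Log: a single distinct demand signal."""
    date: str             # e.g. "2026-04-03"
    source: str           # Reddit, X, support, call
    link: str             # URL to the thread/tweet/ticket
    audience: str         # who is speaking
    problem_summary: str  # your 1-2 sentence paraphrase
    verbatim_quote: str   # copy-pasted
    signal_type: str      # Pain, Workflow hack, Tool request, Switch story, Buying intent
    frequency: int        # 1-5
    severity: int         # 1-5
    buying_intent: int    # 1-5
    fit: int              # 1-5
    status: str = "New"   # New, In review, In roadmap, Rejected, Parked
```

The priority score is deliberately left out here; it is derived from the four score fields, which the next section covers.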

3) Scoring: How You Compare Opportunities

You don’t need a consulting-grade framework. Use a basic model and keep it consistent.

For each signal, score 1–5 on:

  • Frequency: how often do you see this problem from your target audience?
  • Severity: how painful is it? Does it block revenue, cause churn, create real risk?
  • Buying intent: are they actively looking for tools, or just venting?
  • Fit: is this inside your current or near-future product lane?

Then use a simple formula:

Priority = (Frequency * 0.3) + (Severity * 0.3) + (Buying intent * 0.2) + (Fit * 0.2)

Weights are opinionated. If you run a very early product, you might weight buying intent lower and learning higher. What matters is that you keep the formula stable so you can compare signals over time.
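If you want to sanity-check your spreadsheet, the formula is trivial to express in code. A sketch using the default weights above:

```python
# The priority formula above, as a small helper. Weights are the
# article's defaults; adjust to taste, but then keep them stable.
WEIGHTS = {"frequency": 0.3, "severity": 0.3, "buying_intent": 0.2, "fit": 0.2}

def priority(frequency: int, severity: int, buying_intent: int, fit: int) -> float:
    """Combine four 1-5 scores into a single priority score."""
    scores = {"frequency": frequency, "severity": severity,
              "buying_intent": buying_intent, "fit": fit}
    # Round to avoid float noise like 4.000000000000001.
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)
```

For example, `priority(4, 4, 5, 3)` returns 4.0.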

4) Review Cadence: When You Decide

Your workflow should have two recurring rituals:

  • Weekly (30–45 minutes): log and score new signals, clean up duplicates, skim top scores.
  • Monthly (60–90 minutes): cluster related signals into opportunities and map them to the roadmap.

Without this cadence, your log turns into just another graveyard of notes.


A Concrete Example: From Reddit Thread To Ranked Opportunity


Let’s walk through a full example.

You’re building tooling for small agency owners.

On r/agency you see this post:

“I’m spending 10+ hours a week manually updating 6 different client reports in Google Sheets. Tried a couple of so-called ‘client reporting’ tools but either they’re too expensive or don’t integrate with half the stuff we use. Honestly would pay decent money for something that just auto-updated and emailed clients monthly.”

The thread has:

  • 87 upvotes
  • 34 comments
  • 8 comments along the lines of “same here”, “this is killing me”, “following”

You decide this is one distinct demand signal: “client reporting automation for small agencies.”

In your Demand Log, you add a row:

  • Date: 2026-04-03
  • Source: Reddit
  • Link: (URL to the thread)
  • Audience: small marketing/SEO agencies
  • Problem summary: manual client reporting taking ~10hrs/week per agency, tools too expensive or missing integrations
  • Verbatim quote: copy exactly what they wrote
  • Type: Pain
  • Frequency score: 4 (you’ve seen similar complaints in 3 other threads)
  • Pain severity score: 4 (10 hours/week is a lot; they sound frustrated)
  • Buying intent score: 5 (“would pay decent money” is explicit)
  • Fit with us score: 3 (you currently help agencies with ops, but not reporting yet)

Priority calculation:

Priority = (4 * 0.3) + (4 * 0.3) + (5 * 0.2) + (3 * 0.2)
Priority = 1.2 + 1.2 + 1.0 + 0.6 = 4.0

You now have a quantified signal, not just a screenshot in Slack.


Step-By-Step: Setting Up Your Demand Log And Feeds

Here’s a minimal setup you can implement this week.

Step 1: Create The Demand Log

Use Google Sheets, Notion, Airtable—whatever you and your team actually open.

Create the columns listed earlier:

  • Date
  • Source
  • Link
  • Audience
  • Problem summary
  • Verbatim quote
  • Type
  • Frequency score
  • Pain severity score
  • Buying intent score
  • Fit with us score
  • Priority score
  • Status

Add a formula for Priority score in the first row and drag it down:

=(FREQUENCY_SCORE * 0.3) + (PAIN_SEVERITY_SCORE * 0.3) + (BUYING_INTENT_SCORE * 0.2) + (FIT_WITH_US_SCORE * 0.2)

Substitute actual column letters or field names depending on your tool. For example, if the four score columns happened to be H through K in Sheets, the row-2 formula would be =(H2*0.3)+(I2*0.3)+(J2*0.2)+(K2*0.2).

Optional but helpful:

  • Add filters for Type and Status.
  • Add a saved view sorted by Priority score descending.

Step 2: Define Your Listening Targets

Decide where you’ll consistently look for signals. Document them in a separate tab called Inputs.

Examples:

  • Reddit:
    • r/smallbusiness – look for posts mentioning “CRM”, “inventory”, “cash flow”, “invoice”.
    • r/SaaS – look for “churn”, “pricing”, “analytics”, “activation”.
  • X:
    • Saved search: "is there a tool that" AND (your domain keyword).
    • Saved search: "(tool name) alternatives".
  • Support:
    • Label/tag in your helpdesk for “feature request”, “missing integration”, “churn reason”.
  • Sales / calls:
    • One bullet per call summarizing top pain in the log.

You’re not trying to capture everything. Choose 5–10 recurring searches or sources you can realistically scan weekly.

Step 3: Build Light Search/Saving Habits

For Reddit:

  • Use Reddit’s search with problem-words: “how do you”, “is there a way”, “tool for”, plus your domain keywords.
  • Sort by Top or Relevance for the past month.
  • Save or bookmark threads that look like real problems.

For X:

  • Use X’s advanced search operators (e.g. min_faves: thresholds, filter:replies) to see what experts and your buyers engage with.
  • Create a dedicated List of people who often talk about your domain (e.g., DevOps leads, growth PMs, Shopify store owners).
  • Check that List daily or a few times a week for thread-level complaints or “what are you using for X?” questions.

For support/sales:

  • Whenever you finish a call or see a notable ticket, add the top pain as a row in the log.
  • Paste a direct link to the ticket/recording where possible.

Step 4: Log 10–20 Signals First, Then Tweak

Before obsessing over scoring, get volume.

  • Over 1–2 weeks, add at least 10–20 rows.
  • Do rough scores (don’t overthink 3 vs 4).
  • After you have 20 signals, sort by Priority score and sense-check:
    • Do the top 5 feel plausibly important?
    • Are obvious non-problems accidentally scoring high?
    • If needed, adjust your weights or scoring rubric.

You’ll likely find that ~5–10 distinct problems account for most of your high-priority rows. Those are your early opportunity clusters.
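The sense-check itself is just a sort. A sketch with made-up rows (summaries and scores are hypothetical):

```python
# Hypothetical (problem summary, priority score) pairs from a young log.
rows = [
    ("client reporting automation", 4.0),
    ("intake forms for onboarding", 3.6),
    ("pause retainers in billing", 3.4),
    ("dark mode request", 1.8),
]

# Sort by priority descending and skim the top entries for plausibility.
top = sorted(rows, key=lambda r: r[1], reverse=True)[:3]
for summary, score in top:
    print(f"{score:.1f}  {summary}")
```

If something like "dark mode request" lands in your top five, that is your cue to revisit the weights or rubric before trusting the ranking.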


Interpreting Signals: Noise vs Real Demand


Not every complaint is a good opportunity. Some patterns:

Recognizing Real Pain

Signals of real pain:

  • Specifics: “10+ hours a week updating 6 client reports” vs “reporting kinda sucks”.
  • Cost language: “we’re losing $X”, “this causes churn”, “this breaks every week”.
  • Workarounds: lots of manual spreadsheets, scripts, or duct-taped tools.
  • Emotional tone: “this is killing me”, “I hate that I spend Fridays on this.”

Vague, low-intent noise:

  • “This tool sucks.”
  • “Wish there was something better.”
  • “Anyone else annoyed by X?” with few replies.

Spotting Buyer Intent

High buyer intent shows up as:

  • “Is there a tool for X?”
  • “What are you using for Y?”
  • “Any alternative to [tool] that does Z?”
  • “I’d pay $$ for [specific outcome].”

In your log, these should get higher Buying intent scores, even if frequency is still low.

Watching For Repeated Patterns

The real value comes from repetition:

  • See a similar pain in 3+ Reddit threads plus 2 customer calls? That’s a pattern.
  • Hear the same workaround (e.g., “we export to CSV every Friday”) across different people? That’s a pattern.
  • Notice multiple “tool for X?” questions in the same month from your target audience? Pattern.

Use your monthly review to cluster similar rows into opportunity themes:

  • “Onboarding analytics for PLG SaaS”
  • “Client reporting automation for small agencies”
  • “Inventory sync between Shopify and offline POS”

Attach the individual log rows to these clusters so you can click through to the raw conversations later.


Turning Research Into Product Decisions

A demand log is only useful if it changes your roadmap.

Here’s one way to connect the dots.

1) Create Opportunity Clusters

In a separate tab or Notion database, create Opportunities.

Each row is a cluster:

  • Name: “Client reporting automation for small agencies”
  • Description: short overview
  • Signals: list of Demand Log IDs or links
  • Avg priority score: average of the attached signals
  • Confidence: low/medium/high based on volume and diversity of sources
  • Size guess: rough market size/ARPU guess
  • Status: Explore, Validate, In roadmap, Rejected, Parked
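The average priority is just the mean of the attached signals' scores. A sketch, with made-up IDs and scores:

```python
# Hypothetical Demand Log: signal ID -> priority score.
log = {101: 4.0, 102: 4.6, 103: 4.3, 201: 3.6}

# Each opportunity cluster lists the Demand Log IDs attached to it.
clusters = {
    "Client reporting automation for small agencies": [101, 102, 103],
    "Better intake forms for client onboarding": [201],
}

def avg_priority(signal_ids):
    """Mean priority of the attached signals, rounded to one decimal."""
    return round(sum(log[i] for i in signal_ids) / len(signal_ids), 1)

for name, ids in clusters.items():
    print(f"{name}: {avg_priority(ids)}")
```

Keeping the IDs on the cluster (rather than copying scores) means the average stays honest as you add or re-score signals.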

2) Monthly: Rank Opportunities, Not Features

In your monthly review:

  • Filter Opportunities to Status in (Explore, Validate).
  • Sort by Avg priority score and Confidence.
  • Discuss top 3–5 as a team.

Ask questions like:

  • Do we see a clear buyer profile and workflow?
  • Do we already have partial coverage in our product?
  • What’s the smallest version we can ship to test this?
  • What would we not build if we chase this?

Decide:

  • One opportunity to move to Validate (e.g., interviews, landing page).
  • One to move into the next 1–2 sprints (if you have strong confidence).
  • A few to explicitly Park or Reject so they don’t keep resurfacing.

3) Kill Weak Opportunities Explicitly

If an idea felt hot last month but:

  • Scores low on priority,
  • Has limited signals,
  • Or conflicts with your product direction,

mark it as Rejected and add a short reason in the notes.

This protects your roadmap from zombie ideas that never die, and gives you an audit trail: “we decided not to chase this because X at the time.”

4) Example: Acting On The Ranked List

Back to our agency reporting example.

Suppose your top three opportunities after a month are:

  1. Client reporting automation for small agencies (Avg priority: 4.3, Confidence: Medium)
  2. Better intake forms for agencies onboarding new clients (Avg priority: 3.6, Confidence: Low)
  3. “Pause retainers” workflow in billing (Avg priority: 3.4, Confidence: Medium)

You decide:

  • Move #1 to Validate: run 5–7 short interviews with agencies who mentioned this, plus a quick “coming soon” landing page to capture interest.
  • Implement a small feature for #3 in the next sprint because it aligns tightly with your existing product and customers.
  • Mark #2 as Parked until you see more signals or get pull from existing customers.

Instead of “I saw a cool thread about intake forms, let’s build something”, you have a clear rationale and can point back to the underlying signals.


Where Automation Tools Fit (And How Miner Helps)

You can run this workflow manually using:

  • Google Sheets / Notion
  • Reddit/X saved searches and bookmarks
  • Basic calendar reminders for weekly/monthly reviews

That’s enough to get real value.

The pain shows up when:

  • You’re monitoring many subreddits and search terms.
  • Threads move fast and you miss high-signal conversations.
  • You spend more time digging than deciding.

This is where automation tools make sense, and where Miner fits naturally:

  • Miner continuously watches Reddit and X for conversations that match your audience and problem space.
  • Instead of you manually surfing feeds, you get a daily brief of high-signal threads: validated pain, buyer intent, and weak signals worth tracking.
  • You can skim the daily brief, pick the best signals, and plug them directly into your existing demand log and scoring system.

Miner does not replace your workflow; it compresses the “find signal in noise” step.

For example:

  • You configure Miner around “small Shopify store owners struggling with inventory and stockouts”.
  • Each morning, you get a short email with:
    • A few Reddit threads where store owners describe inventory issues in detail.
    • X conversations where people ask “what do you use for inventory sync between Shopify and [tool]?”
  • You skim for 5–10 minutes, add the strongest signals to your demand log with scores, and move on.

Same workflow, less manual scraping.

If you’re still early or very narrow, start manual. Once your inputs feel unmanageable or you’re burning valuable time just finding threads, layering a Miner-style brief on top makes sense.


Putting It All Together: Your Minimal Workflow

You do not need a big system. You need a consistent one.

Here is a minimal version you can start with this week:

  • Create a Demand Log in Sheets/Notion with basic fields and a priority formula.
  • Define 5–10 recurring search patterns and sources across Reddit, X, and your own customer interactions.
  • Schedule:
    • Weekly: 30 minutes to log new signals and score them.
    • Monthly: 60–90 minutes to cluster signals into opportunities and align them with your roadmap.
  • Commit to:
    • Logging at least 5–10 signals per week.
    • Making at least one roadmap decision per month that references the log.

Once that feels natural:

  • Refine your scoring model.
  • Create an Opportunities view for clustering.
  • Consider adding automation, like a Miner-style daily brief, so your top-of-funnel discovery happens even when you’re busy.

The outcome is not a prettier spreadsheet. It’s fewer wasted builds, fewer “pet projects,” and more confidence that the next thing you ship is pulled by real demand—not just the last thread someone saw on Reddit.
