The Nudge: AI-Driven A/B Testing Flywheel

Published 13 March 2026 · Source: Inbox/thenudge-ab-testing-flywheel.md

Exec Briefing | March 2026


The Opportunity

Three conversion moments drive all revenue. Each is under-optimised:

| Stage | Benchmark | What’s at stake |
| --- | --- | --- |
| Visitor → email subscriber | ~2% average; best-in-class 8–15% | Top-of-funnel volume |
| Subscriber → free trial | Typically 5–15% of list | Acquisition cost |
| Trial → paid (£4.99/mo) | Industry avg 40–60% trial conversion | Direct revenue |

A 20% improvement at each stage compounds multiplicatively (1.2³ ≈ 1.73) to roughly a 70% revenue uplift with no additional traffic spend.


Keep what’s there. Add one layer.

| Tool | Role | Status |
| --- | --- | --- |
| Omniconvert | Front-end A/B tests, personalisation, overlays, surveys | Already installed |
| DataHappy | Attribution integrity, ad platform sync | Already installed |
| PostHog | Funnel analytics, session replay, retention cohorts | Add — free tier sufficient to start |
| Claude API | Experiment analysis + hypothesis generation | New (the AI layer) |
| n8n | Orchestration — cron, triggers, Slack notifications | New |

Omniconvert handles what to show. PostHog answers why users behave that way. Claude closes the loop.


The Flywheel

  1. PostHog detects that the significance threshold has been met
  2. Claude reads the results, session replay summaries, and the learnings library
  3. Claude writes a conclusion: what moved, why, and at what confidence level
  4. Claude generates 3 ranked hypotheses for the next test
  5. The agent posts to Slack: "Test concluded. Proposed next experiments ↓"
  6. The editor approves (one click) → Omniconvert experiment auto-created
  7. Repeat

Human input: ~10 minutes per week to approve hypotheses. Everything else is autonomous.
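
A minimal sketch of the analysis-and-notify steps (2–5), assuming an n8n cron or webhook node invokes a script like this. The model name, prompt, and payload shapes are illustrative assumptions, not a finished implementation:

```python
import json
import os

import anthropic  # official Anthropic SDK
import requests

SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]  # Slack incoming webhook (assumed configured)


def conclude_and_propose(results: dict, replay_summary: str, learnings: list[str]) -> str:
    """Steps 2-4: read results + context, write a conclusion, propose next tests."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    prompt = (
        "You are the experiment analyst for a CRO flywheel.\n"
        f"PostHog results: {json.dumps(results)}\n"
        f"Session replay summary: {replay_summary}\n"
        f"Learnings library: {json.dumps(learnings)}\n"
        "Write (1) what moved, why, and your confidence level, then "
        "(2) three ranked hypotheses for the next test."
    )
    message = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; pin whichever model is current
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text


def notify_editor(conclusion: str) -> None:
    """Step 5: post to Slack for the one-click approval."""
    requests.post(
        SLACK_WEBHOOK_URL,
        json={"text": f"Test concluded. Proposed next experiments ↓\n\n{conclusion}"},
        timeout=10,
    )
```

The approval click (step 6) routes back through an n8n webhook that creates the Omniconvert experiment via its API, per the week-6 build item.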


Experiment Roadmap

Prioritised by revenue leverage, not technical complexity.


Priority 1 — Trial → Paid Conversion

Highest leverage. This is where money is lost.

Test 1.1 — Pricing transparency on CTA

  • Hypothesis: Hiding the price (£4.99/mo) until mid-signup surprises users and causes drop-off
  • Variant: Add “Then £4.99/month, cancel anytime” under “Start Free Trial” button
  • Expected: +15–25% trial completion rate
  • Effort: 30 minutes in Omniconvert

Test 1.2 — Trial length

  • Hypothesis: 7 days isn’t long enough to experience the best perks (events are weekly)
  • Variant A: 14-day trial | Variant B: £1 first month
  • Note: £1 trial often beats free — signals intent, reduces tyre-kickers
  • Expected: +20–35% trial→paid (monitor churn at day 60)
  • Effort: Low (Stripe config + Omniconvert)

Test 1.3 — Specificity of benefit bullets

  • Current: “Discounts at 100+ restaurants and venues”
  • Variant: “Average 40% off at restaurants like [3 recognisable names]”
  • Expected: +15–30% click-through to trial
  • Effort: Copy change only

Test 1.4 — Urgency via member events

  • Hypothesis: Abstract perks don’t convert; concrete upcoming events do
  • Variant: Replace static benefit list with “Next member event: [real upcoming event] — 48hr early access for members”
  • Dynamic, updated weekly via Omniconvert personalisation
  • Expected: +20–40% on trial starts
  • Effort: Medium (requires content automation)

Priority 2 — Referral

Word-of-mouth is the natural distribution channel for a brand built on insider knowledge. The audience self-identifies as taste-makers — that’s the asset to activate. In an era of declining SEO discovery, referral is the most defensible acquisition channel. Rewardful is already installed.

The mechanic to test first: double-sided referral

  • Referrer gets 1 month free. Referred friend gets 14-day trial (vs. standard 7)
  • Both sides win → share rates typically 2–3x single-sided programmes
  • Rewardful handles tracking; Omniconvert handles the in-product prompts

Test R.1 — Trigger timing

  • Hypothesis: Most referral prompts fire too early (at signup), before the user has experienced value
  • Variant A: Prompt after first perk redeemed (“You just saved £X — know someone who’d love this?”)
  • Variant B: Prompt at day 10 of trial (post-value, pre-renewal decision)
  • Control: Prompt at trial signup
  • Expected: +40–80% share rate vs. control — timing is the biggest lever in referral
  • Effort: Medium (Omniconvert trigger + Rewardful link injection)

Test R.2 — Reward framing

  • Current default assumption: free month
  • Variant A: “Free month” (monetary framing)
  • Variant B: “Unlock a secret member dinner for a friend” (experience framing)
  • Hypothesis: Experience rewards resonate more with this audience than cash-equivalent discounts — identity consistency with the brand
  • Expected: +20–35% on referral conversion rate
  • Effort: Low (copy + reward config)

Test R.3 — WhatsApp-first share flow

  • Hypothesis: London young professionals share via WhatsApp, not email
  • Current: Generic share link
  • Variant: Primary CTA = “Send via WhatsApp” with pre-written message (“I’ve been using this for London restaurants — you’d love it, here’s 2 weeks free”)
  • Pre-written copy removes the blank-page friction that kills shares
  • Expected: +50–100% share completion rate on mobile
  • Effort: Low (Omniconvert overlay, WhatsApp click-to-chat link; see the sketch below)
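
For reference, the share mechanism here is WhatsApp’s click-to-chat URL scheme rather than a formal API. A minimal sketch of building the pre-filled link (the referral URL is a hypothetical Rewardful link):

```python
from urllib.parse import quote


def whatsapp_share_link(referral_url: str) -> str:
    """Build a wa.me click-to-chat link with the pre-written message filled in."""
    message = (
        "I've been using this for London restaurants — you'd love it, "
        f"here's 2 weeks free: {referral_url}"
    )
    return f"https://wa.me/?text={quote(message)}"


# The overlay's "Send via WhatsApp" CTA points at this URL:
print(whatsapp_share_link("https://thenudge.example/r/abc123"))  # hypothetical link
```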

Test R.4 — Gifting flow

  • Hypothesis: “Give a friend a trial” converts better than “refer a friend” — gifting frame removes self-interest perception
  • Variant: Seasonal or occasion-based (“Give someone the best of London this month”)
  • Particularly relevant around Valentine’s Day, birthdays, “new to London” moments
  • Expected: Incremental acquisition channel, hard to benchmark — treat as new channel test
  • Effort: Medium (requires gift redemption flow)

Test R.5 — Email forward optimisation

  • The weekly newsletter already reaches 500k+. Most referral programs ignore this channel.
  • Add a single line at the bottom of every newsletter: “Forward this to someone who needs better London plans →”
  • Variant: Include a dedicated friend-referral link (tracked via Rewardful) vs. plain forward
  • Expected: 1–3% forward rate on 500k list = 5–15k new exposures per send, compounding weekly
  • Effort: Very low (template footer change)

Referral flywheel note for the AI layer: Referral experiments have a longer feedback loop than on-site CRO (need to track referred-friend trial → paid conversion, not just share clicks). The AI agent should flag referral tests as requiring 30-day minimum runtime and weight conclusions against downstream paid conversion, not just share rate — a reward that drives shares but attracts low-intent users is a negative outcome.
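
As a sketch of how the agent could encode that rule (the field names and outcome labels are assumptions; only the 30-day minimum and the downstream-conversion weighting come from the note above):

```python
from dataclasses import dataclass

MIN_RUNTIME_DAYS = 30  # referral tests have a longer feedback loop


@dataclass
class ReferralResult:
    days_running: int
    share_rate_lift: float         # vs. control, e.g. 0.4 = +40%
    referred_trial_to_paid: float  # downstream conversion of referred friends
    baseline_trial_to_paid: float  # existing trial→paid rate


def conclude(r: ReferralResult) -> str:
    """Weight the conclusion against downstream paid conversion, not share clicks."""
    if r.days_running < MIN_RUNTIME_DAYS:
        return "inconclusive: below the 30-day minimum runtime"
    if r.referred_trial_to_paid < r.baseline_trial_to_paid:
        # More shares but lower-intent users downstream: a negative outcome.
        return "negative: reward drives shares but attracts low-intent users"
    return "positive" if r.share_rate_lift > 0 else "neutral"
```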


Priority 3 — Visitor → Email Subscriber

Volume play. More emails = more trial opportunities.

Test 3.1 — CTA copy (highest ROI test in this category)

  • Current: “Sign Up”
  • Variants: “Get This Week’s Edit” / “I Want In” / “Send Me The Good Stuff”
  • First-person and specificity consistently outperform generic signup copy
  • Expected: +80–150% (this category has the widest variance — big wins available)
  • Effort: 20 minutes

Test 3.2 — Exit-intent overlay

  • Trigger: User about to leave without subscribing
  • Offer: “Before you go — get London’s best 3 things this week”
  • Single field (email only)
  • Expected: Capture 2–5% of otherwise-lost visitors
  • Effort: Low (Omniconvert overlay, already capable)

Test 3.3 — Inline vs. modal signup

  • Hypothesis: Modal interrupts discovery; inline at article end captures high-intent readers
  • Test placement: end of every article vs. current modal timing
  • Expected: +30–60% on article-sourced signups

Priority 4 — Email → Trial

The leakiest pipe. Most subscribers never start a trial.

Test 4.1 — Subject line personalisation

  • Test: Generic (“This week’s edit”) vs. location-personalised (“What’s on in Soho this week”) vs. curiosity gap (“You haven’t been here yet”)
  • Expected: +10–20% open rate; benchmark for this audience should be 35–45%
  • Effort: Low (email platform A/B, not Omniconvert)

Test 4.2 — Email CTA framing

  • Current: Content-forward (article links)
  • Variant: One email per month with primary CTA = trial start, framed around a specific upcoming member event
  • Expected: +25–50% trial starts from email channel
  • Effort: Editorial + template change

Test 4.3 — Re-engagement sequence for non-openers

  • Segment: Subscribers who haven’t opened in 60 days
  • Trigger: Automated 3-email sequence: “Are we still right for you?” + best content + specific perk offer
  • Expected: 15–25% reactivation rate
  • Effort: Medium (automation setup)

Priority 5 — Retention (longer horizon)

Test 5.1 — Onboarding personalisation

  • Ask one question at trial start: “What are you most interested in?” (restaurants / events / both)
  • Personalise first 3 emails to that preference
  • Expected: +10–20% trial→paid, +15% 90-day retention
  • Evidence: Personalised onboarding is the single highest-impact retention lever for subscription products

Test 5.2 — Perk notification timing

  • Hypothesis: Members who use a perk in the first 14 days retain at 2x the rate
  • Variant: Proactive “Here’s a perk you can use this weekend” at day 3 of trial
  • Expected: +20–30% trial→paid
  • Effort: Medium

AI Experiment Prioritisation Logic

The AI layer ranks hypotheses using:

  1. Funnel value — which stage has the most £ at stake
  2. Historical signal — what themes have worked before (learnings library)
  3. Session replay signal — where users are visibly confused or dropping
  4. Effort score — copy changes before layout changes before technical changes
  5. Time since last test — avoid testing fatigue on same page

It never runs two tests affecting the same conversion moment simultaneously. Referral tests are flagged for 30-day minimum runtime.
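
A sketch of how those five inputs could combine into a single rank score. The weights, the 14-day cooldown, and the field names are illustrative assumptions; only the factors and the one-test-per-moment constraint come from the plan:

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    text: str
    conversion_moment: str     # e.g. "trial_to_paid"
    funnel_value: float        # £ at stake at this stage, normalised 0-1
    historical_signal: float   # learnings-library theme match, 0-1
    replay_signal: float       # visible confusion / drop-off evidence, 0-1
    effort: float              # 0 = copy change ... 1 = technical change
    days_since_last_test: int  # on the same page


def score(h: Hypothesis) -> float:
    """Higher is better: value and evidence push up, effort and fatigue pull down."""
    fatigue = 1.0 if h.days_since_last_test < 14 else 0.0  # assumed cooldown window
    return (
        0.40 * h.funnel_value
        + 0.25 * h.historical_signal
        + 0.20 * h.replay_signal
        - 0.10 * h.effort
        - 0.05 * fatigue
    )


def rank(candidates: list[Hypothesis], running: set[str]) -> list[Hypothesis]:
    """Never queue a test on a conversion moment that already has one running."""
    eligible = [h for h in candidates if h.conversion_moment not in running]
    return sorted(eligible, key=score, reverse=True)
```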


90-Day Build Plan

| Week | Work |
| --- | --- |
| 1 | PostHog instrumentation (3 key events: email signup, trial start, payment) |
| 2 | Learnings library schema + first 5 tests live in Omniconvert |
| 3–4 | Claude analysis agent (reads PostHog → writes conclusions → updates library) |
| 5 | Hypothesis generator + Slack approval workflow (n8n) |
| 6 | Omniconvert auto-creation via API |
| 8+ | Flywheel running autonomously |
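
Week 1’s instrumentation is deliberately small. A sketch with the posthog-python library, using the three event names from the plan (the key, host, and properties are placeholders):

```python
import posthog

posthog.project_api_key = "phc_..."      # from PostHog project settings
posthog.host = "https://eu.posthog.com"  # assumed EU Cloud; adjust to your region

# The three events the flywheel's funnels depend on:
posthog.capture("user_123", "email_signup", {"source": "exit_intent_overlay"})
posthog.capture("user_123", "trial_start", {"variant": "14_day"})
posthog.capture("user_123", "payment", {"plan": "monthly_4_99"})
```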

Expected Outcome

Conservative case (20% improvement at each stage): ~70% revenue uplift from existing traffic. Optimistic case (best-in-class execution): 2–3x.

The compounding effect is the point — each concluded experiment makes the next hypothesis smarter.