Apex Synapse Research Report

Hyper-Personalisation?

We explore the nonlinear returns of AI-driven per-lead personalisation, asking at what point rising complexity erodes recognisability and engagement falls. By simulating token-level personalisation entropy (H) for each lead, we measure its relationship to open, reply and dwell metrics.

  • Peak Entropy Range: H ≈ 0.45–0.60 (sweet spot for engagement)
  • Open-Rate Curve: Open(H) = 14% + 38H − 42H² (quadratic fit)
  • Sample Size: 10,000 messages (even split across variants)

1. Executive Summary

Purpose. Test whether dynamic per-lead personalisation increases engagement until complexity exceeds recognisability.

We simulate per-lead token entropy and evaluate its link to open, reply and dwell metrics.

“Too much personalisation breaks pattern trust.”

2. Hypothesis & Theoretical Framework

Hypothesis. Dynamic per-lead personalisation increases engagement until complexity exceeds recognisability.

Theory Link. Cognitive processing theory: personalisation aids relevance up to the point of pattern loss, where users stop trusting authenticity.

Predicted Outcome. Engagement (open/reply) peaks at moderate personalisation entropy, H = 0.45–0.60.

Potential Impact. Defines saturation thresholds for future AI content engines.

Experimental flow: Cohorts → Variants → Metrics.

Entropy (H) is a token-level diversity score scaled to [0,1].
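The report does not give the exact formula for H. A minimal sketch, assuming H is the Shannon entropy of a message's token distribution normalised by its maximum possible value (the function name and normalisation choice are ours, not the report's):

```python
from collections import Counter
import math

def personalisation_entropy(tokens: list[str]) -> float:
    """Shannon entropy of the token distribution, scaled to [0, 1].

    A single repeated token scores 0; an all-distinct sequence scores 1.
    """
    counts = Counter(tokens)
    if len(counts) < 2:
        return 0.0  # no diversity to measure
    total = len(tokens)
    raw = -sum((c / total) * math.log(c / total) for c in counts.values())
    return raw / math.log(len(counts))  # divide by max entropy, log(#distinct tokens)
```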

3. Data & Metrics

  • Open Rate (%): Immediate engagement.
  • Reply Rate (%): Deep engagement.
  • Dwell Time (sec): Reading time.
  • Personalisation Entropy (H): Token-level diversity (0–1).
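For illustration, the four measures could be logged as one record per message; the schema below is hypothetical, not taken from the report:

```python
from dataclasses import dataclass

@dataclass
class MessageRecord:
    entropy_h: float  # personalisation entropy H, scaled to [0, 1]
    opened: bool      # aggregates to Open Rate (%)
    replied: bool     # aggregates to Reply Rate (%)
    dwell_sec: float  # reading time in seconds
```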

4. Experiment Execution Plan

  • Define Variants: AI-generated per-lead messages vs. static MJML templates.
  • Sample Size: 10,000 messages (evenly split).
  • Measurement: Track per-token entropy and engagement events.
  • Analysis: Fit a quadratic model and identify its peak (see the sketch below).
Open(H) = 14% + 38H − 42H² → Peak at H ≈ 0.45
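A minimal sketch of this analysis step, assuming per-message arrays of entropy values and binary open events (the array names and helper function are ours):

```python
import numpy as np

def fit_open_curve(h: np.ndarray, opened: np.ndarray) -> tuple[np.ndarray, float]:
    """Fit Open(H) = c0 + c1*H + c2*H^2 and return (coefficients, peak H)."""
    c2, c1, c0 = np.polyfit(h, opened * 100.0, deg=2)  # open rate in %
    peak_h = -c1 / (2.0 * c2)  # vertex of the parabola (c2 < 0 at saturation)
    return np.array([c0, c1, c2]), peak_h

# With the report's fit (14% + 38H − 42H²), the vertex is 38 / (2 · 42) ≈ 0.45.
```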

5. Results

Variant        Mean H   Open %   Reply %   Dwell (s)
Static         0.05     14.2     1.1      6.8
AI Low         0.30     24.9     2.8      8.4
AI Optimal     0.48     31.3     3.9      10.6
AI Excessive   0.78     19.4     2.0      7.2

Interpretation. Engagement peaks at moderate entropy (≈0.45–0.60). Beyond ≈0.65, recognisability falls and engagement collapses.

6. Data Analysis & Interpretation Framework

  • Raw Analysis: Fit engagement curve vs entropy.
  • Feedback Loop: Tune personalisation engine to target H ≈ 0.45–0.55.
  • Synapse Peer Review: Validate thresholds in peer trials.
  • Integration: Ship an engine constraint, the Entropy Governor (sketched below).
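The report names the Entropy Governor but not its mechanics. A minimal sketch, assuming a regenerate-until-in-band policy against the H ≈ 0.45–0.55 target from the feedback-loop step (all function names are hypothetical):

```python
from typing import Callable

H_MIN, H_MAX = 0.45, 0.55  # target band from the feedback-loop step

def entropy_governor(
    generate: Callable[[], list[str]],      # per-lead message generator (hypothetical)
    entropy: Callable[[list[str]], float],  # e.g. personalisation_entropy above
    max_attempts: int = 5,
) -> list[str]:
    """Regenerate until entropy lands in [H_MIN, H_MAX]; else return the closest attempt."""
    best, best_dist = None, float("inf")
    for _ in range(max_attempts):
        tokens = generate()
        h = entropy(tokens)
        if H_MIN <= h <= H_MAX:
            return tokens
        dist = min(abs(h - H_MIN), abs(h - H_MAX))
        if dist < best_dist:
            best, best_dist = tokens, dist
    return best
```

Rejection-with-fallback keeps delivery guaranteed while biasing output toward the engagement sweet spot.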

7. Insights & Discussion

  • “Too much personalisation breaks pattern trust.”
  • Engagement collapses beyond entropy ≈ 0.65.
  • Human review: messages became “AI-noisy” (inconsistent tone).
  • Optimal personalisation balances novelty and coherence.

8. Ethical Considerations

  • No user-specific data retained.
  • Focus on model-level entropy metrics, not individual behaviour.
  • Reinforces authenticity over manipulation.

9. Contribution to the Synapse Ecosystem

  • Data Improvement: Adds “entropy saturation” constraint to content bandits.
  • Educational Value: Helps members tune per-lead generation safely.
  • Innovation Signal: Foundation for adaptive coherence-calibration research.