Analyzes A/B test results with statistical rigor, segment analysis, and clear ship/don't ship recommendations.
You are a data scientist who analyzes A/B test results with statistical rigor and makes clear, defensible ship/don't-ship recommendations. Analyze the test results below.
[PASTE A/B TEST DATA]
Context:
- Test hypothesis: [WHAT WE EXPECTED]
- Primary metric: [WHAT WE'RE MEASURING]
- Secondary metrics: [OTHER METRICS]
- Test duration: [HOW LONG]
- Sample sizes: [CONTROL AND TREATMENT N]
- Minimum detectable effect: [MDE]
Provide:
**Test Summary**
| Metric | Control | Treatment | Difference |
|--|---------|-----------|------------|
| Sample size | | | |
| Primary metric | | | |
| Secondary metric 1 | | | |
| Secondary metric 2 | | | |
**Statistical Analysis**
**Primary Metric**
- Control: X ± confidence interval
- Treatment: X ± confidence interval
- Relative difference: +X%
- P-value: X
- Statistical significance: YES/NO (at α = 0.05)
- Practical significance: YES/NO
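To sanity-check the p-value and confidence interval the model reports, you can run the standard two-proportion z-test yourself. A minimal SciPy sketch, assuming a binary conversion metric; the counts below are hypothetical:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conv_c, n_c, conv_t, n_t, alpha=0.05):
    """Two-sided z-test for the difference of two proportions,
    plus a Wald confidence interval for the absolute difference."""
    p_c, p_t = conv_c / n_c, conv_t / n_t
    diff = p_t - p_c
    # Pooled standard error for the hypothesis test (H0: p_t == p_c)
    p_pool = (conv_c + conv_t) / (n_c + n_t)
    se_pool = sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_t))
    z = diff / se_pool
    p_value = 2 * norm.sf(abs(z))
    # Unpooled standard error for the confidence interval
    se = sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    z_crit = norm.ppf(1 - alpha / 2)
    ci = (diff - z_crit * se, diff + z_crit * se)
    return diff, p_value, ci

# Hypothetical counts: 1,000/20,000 control vs 1,100/20,000 treatment
diff, p, (lo, hi) = two_proportion_ztest(1000, 20000, 1100, 20000)
print(f"lift={diff:+.4f}, p={p:.4f}, 95% CI=({lo:+.4f}, {hi:+.4f})")
```

Note the test uses a pooled standard error (correct under the null) while the interval uses an unpooled one, so a borderline p-value and a CI that barely crosses zero can occasionally disagree.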
**Confidence Interval Visualization**
```
Control: |----[====]----|
Treatment: |-------[====]-------|
0% +10%
```
**Power Analysis**
- Observed power: X%
- Was test adequately powered? YES/NO
- Sample size for conclusive result: X
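The "sample size for a conclusive result" line can be cross-checked with the textbook two-proportion sample-size formula. A sketch with hypothetical inputs (5% baseline rate, 0.5 percentage-point absolute MDE):

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_arm(p_base, mde_abs, alpha=0.05, power=0.80):
    """Approximate n per arm to detect an absolute lift of `mde_abs`
    over baseline rate `p_base` with a two-sided test at `alpha`."""
    p_alt = p_base + mde_abs
    z_a = norm.ppf(1 - alpha / 2)   # critical value for significance
    z_b = norm.ppf(power)           # critical value for power
    p_bar = (p_base + p_alt) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p_base * (1 - p_base) + p_alt * (1 - p_alt))) ** 2
         / mde_abs ** 2)
    return ceil(n)

print(sample_size_per_arm(0.05, 0.005))  # per-arm n for a 0.5pp lift
```

Be wary of "observed power" computed from the measured effect; it is a deterministic function of the p-value and adds no information. Power should be judged against the pre-registered MDE, as this function does.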
**Segment Analysis**
| Segment | Control | Treatment | Significant? |
|---------|---------|-----------|-------------|
| Segment A | | | |
| Segment B | | | |
- Notable segment differences
- Interaction effects
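Per-segment significance can be checked with the same z-test run segment by segment. A sketch with hypothetical segment counts; note that testing many segments inflates false positives, so treat segment "wins" as exploratory unless you correct for multiple comparisons (e.g. Bonferroni):

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical per-segment counts: (conversions, n) for control and treatment
segments = {
    "new users":       ((400, 8000),  (480, 8000)),
    "returning users": ((600, 12000), (620, 12000)),
}

def ztest(control, treatment):
    """Pooled two-proportion z-test; returns (absolute lift, p-value)."""
    (x_c, n_c), (x_t, n_t) = control, treatment
    p_c, p_t = x_c / n_c, x_t / n_t
    p_pool = (x_c + x_t) / (n_c + n_t)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_t))
    z = (p_t - p_c) / se
    return p_t - p_c, 2 * norm.sf(abs(z))

for name, (control, treatment) in segments.items():
    lift, p = ztest(control, treatment)
    print(f"{name}: lift={lift:+.4f}, p={p:.4f}")
```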
**Secondary Metrics**
- Metric 1: [Result and interpretation]
- Metric 2: [Result and interpretation]
- Guardrail metrics: [Any concerning movements]
**Validity Checks**
- [ ] Sample ratio mismatch (SRM) check
- [ ] Novelty/learning effects
- [ ] Selection bias
- [ ] Instrumentation issues
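The SRM check in the list above is a chi-square goodness-of-fit test of the observed arm counts against the intended split. A minimal sketch (the counts are hypothetical; a p-value below roughly 0.001 is the conventional red flag):

```python
from scipy.stats import chisquare

def srm_check(n_control, n_treatment, expected_ratio=0.5):
    """Chi-square test of observed assignment counts vs the planned split.
    A very small p-value flags a sample ratio mismatch."""
    total = n_control + n_treatment
    expected = [total * expected_ratio, total * (1 - expected_ratio)]
    stat, p = chisquare([n_control, n_treatment], f_exp=expected)
    return p

print(srm_check(10000, 10000))   # balanced split: no mismatch
print(srm_check(10000, 10500))   # ~2.5% imbalance on a 50/50 split
```

An SRM failure invalidates the comparison regardless of the primary-metric p-value, so run this check before interpreting any results.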
**Business Impact Projection**
- If rolled out to 100%:
  - Expected lift: $X / X%
  - Annual impact: $X
  - Confidence range: $X to $X
**Decision Framework**
| Outcome | Criteria | Recommendation |
|---------|----------|----------------|
| Clear winner | Significant and practically meaningful | Ship it |
| Significant but small | Significant, below practical threshold | Don't ship / iterate |
| Unclear | Not significant, positive trend | Extend test |
| Negative | Significantly negative | Don't ship |
**Recommendation**
- Decision: [SHIP / DON'T SHIP / EXTEND / ITERATE]
- Rationale:
- Confidence level:
**Next Steps**
- If shipping: rollout plan
- If not: what to test next
- Follow-up analysis needed

This prompt is released under CC0 (Public Domain). You are free to use it for any purpose without attribution.