
How to Interpret MMM Results Without a Data Scientist

MMM outputs can be confusing. Response curves, decomposition charts, and ROI tables mean nothing if you can't translate them into budget decisions.

Go Funnel Team · 8 min read

The interpretation gap

You ran a media mix model. Maybe you used Robyn, Meridian, or hired a consulting firm. Either way, you now have a pile of charts, tables, and statistical outputs. Response curves. Decomposition waterfall charts. Marginal ROI tables. Confidence intervals.

If you're an agency owner presenting these results to clients, you need to translate statistical output into business decisions. Here's how to read every major MMM output without a statistics degree.

Channel decomposition: who gets credit

The decomposition chart is the most important output. It shows what percentage of total revenue or conversions each channel contributed over the analysis period.

A typical decomposition looks something like this:

  • Baseline (non-media factors): 45%
  • Paid Search: 18%
  • Meta Ads: 15%
  • TV: 8%
  • Email: 6%
  • Direct Mail: 4%
  • Organic Search: 4%

What the baseline means. The baseline represents revenue that would have occurred without any marketing -- driven by brand equity, word of mouth, repeat purchases, seasonality, and organic demand. A high baseline (50%+) means the brand has strong organic demand. A low baseline (under 30%) means the business is heavily dependent on paid media.

What to watch for. If a channel shows 2% contribution but receives 15% of the budget, that's a red flag. Either the model is wrong about that channel (validate with holdout tests) or you're significantly over-investing. Conversely, a channel showing 15% contribution on 5% of budget is a growth opportunity.

The common mistake. Don't confuse contribution share with ROI. A channel can have high contribution (it drives a lot of revenue) but low ROI (it costs a lot to do so). These are different metrics that answer different questions.
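The distinction is easy to see with a few lines of arithmetic. A minimal Python sketch -- every figure below is invented for illustration, not output from any real model:

```python
# Contribution share answers "who drove the revenue?";
# average ROI answers "what did each dollar return?".
# All figures are hypothetical.

total_revenue = 1_000_000  # baseline plus all channels

channels = {
    # name: (attributed_revenue, spend)
    "Paid Search": (180_000, 60_000),
    "Meta Ads":    (150_000, 90_000),
    "TV":          (80_000,  20_000),
}

for name, (revenue, spend) in channels.items():
    share = revenue / total_revenue  # contribution share
    roi = revenue / spend            # average ROI
    print(f"{name}: {share:.0%} of revenue, {roi:.1f}x ROI")
```

In this toy example Meta has nearly twice TV's contribution share but far lower ROI -- the two metrics rank the channels differently, which is exactly why they answer different questions.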

Response curves: where the money runs out

Response curves show the relationship between spend and incremental outcome for each channel. They're almost always concave -- meaning each additional dollar of spend produces less additional revenue than the previous dollar.

Reading the curve. The steep part of the curve (low spend levels) is where the channel is most efficient. As spend increases, the curve flattens. The point where it begins to flatten significantly is the saturation point -- beyond this, you're paying more for each marginal conversion.

What to tell clients. "This curve shows that your first $50K per month on Meta generates strong returns. Between $50K and $80K, returns diminish but are still positive. Beyond $80K, you're in deep diminishing returns territory -- each additional dollar generates less than half the return of the first dollar."

The practical implication. If a client is spending $100K on a channel that saturates at $70K, you can shift $30K to an unsaturated channel and increase total results without increasing total spend. This is the single most actionable insight MMM provides.
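To make that reallocation arithmetic concrete, here is a minimal sketch with two hypothetical response curves. The curve shapes, caps, and saturation parameters are invented for illustration -- a real MMM estimates them from your data:

```python
import math

def resp(spend, cap, half_sat):
    # Saturating (concave) response: revenue approaches `cap`
    # as spend grows; `half_sat` controls how fast it flattens.
    return cap * (1 - math.exp(-spend / half_sat))

def total_revenue(meta, tiktok):
    # In this toy setup, Meta is deep into saturation by $100K
    # while TikTok still has headroom.
    return (resp(meta, cap=250_000, half_sat=30_000)
            + resp(tiktok, cap=300_000, half_sat=60_000))

before = total_revenue(100_000, 20_000)  # current allocation
after = total_revenue(70_000, 50_000)    # shift $30K, same total spend

print(f"before: ${before:,.0f}  after: ${after:,.0f}")
```

Same $120K of total spend, more modeled revenue after the shift -- that is the reallocation logic expressed in code.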

Marginal ROI vs. average ROI

MMM reports two types of ROI, and confusing them leads to bad decisions.

Average ROI is total channel revenue divided by total channel spend. If Meta generated $300K in revenue on $100K in spend, the average ROI is 3.0x.

Marginal ROI is the return on the last dollar spent. Because of diminishing returns, marginal ROI is always lower than average ROI at meaningful spend levels. Meta's average ROI might be 3.0x, but the marginal ROI at $100K spend might be 1.2x.
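The gap between the two numbers is a direct consequence of the concave response curve. A toy calculation, using an invented saturating curve with illustrative parameters:

```python
import math

def revenue(spend):
    # Hypothetical saturating response curve for one channel.
    return 400_000 * (1 - math.exp(-spend / 80_000))

spend = 100_000
avg_roi = revenue(spend) / spend                              # return per dollar overall
marg_roi = (revenue(spend + 1_000) - revenue(spend)) / 1_000  # return on the next $1K

print(f"average ROI:  {avg_roi:.2f}x")
print(f"marginal ROI: {marg_roi:.2f}x")
```

On this curve the channel looks healthy on average (around 2.9x) while the next dollar returns only around 1.4x -- the same pattern as the Meta example above.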

Why marginal ROI matters more. Budget optimization is about where to put the next dollar, not where the first dollar went. If Meta's marginal ROI is 1.2x and TikTok's is 2.8x, the next dollar should go to TikTok -- even though Meta's average ROI is higher.

The mistake agencies make. Presenting average ROI to clients and recommending "keep spending on the channels with the highest ROI." This ignores saturation entirely. The channel with the highest average ROI is often already over-invested. The channel with the highest marginal ROI is where growth lives.

Adstock effects: yesterday's ads still working

Adstock measures how long a channel's impact persists after the ad runs. Different channels have very different adstock profiles:

  • Paid Search: Near-zero adstock. The effect is almost entirely same-day.
  • Social Media Ads: Short adstock, typically 1-3 days of carryover.
  • Display/Programmatic: 3-7 days of carryover.
  • TV/CTV: 2-4 weeks of carryover. Brand ads can have even longer effects.
  • Out-of-Home: 1-2 weeks of carryover.

What this means for budget decisions. Channels with long adstock are undervalued by platform attribution, which typically uses 7-day or 28-day windows. If TV's effect persists for 4 weeks but your attribution window is 7 days, you're only capturing a fraction of TV's true impact. MMM corrects for this.
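Geometric decay is a common way MMMs model this carryover. A minimal sketch -- the decay rates are illustrative stand-ins for the profiles listed above:

```python
def geometric_adstock(spend, decay):
    # Each period carries `decay` times the previous period's
    # accumulated effect, so a single burst keeps working.
    out, carry = [], 0.0
    for x in spend:
        carry = x + decay * carry
        out.append(carry)
    return out

burst = [100, 0, 0, 0, 0]  # one week of spend, then nothing

tv = geometric_adstock(burst, decay=0.7)      # long tail, TV-like
search = geometric_adstock(burst, decay=0.1)  # near-zero carryover, search-like
print("TV-like:    ", [round(x, 1) for x in tv])
print("search-like:", [round(x, 1) for x in search])
```

The TV-like series is still at roughly a quarter of its initial effect four weeks later, while the search-like series is effectively gone after one period -- which is precisely the effect a 7-day attribution window misses.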

How to explain it to clients. "When you ran that TV campaign in January, the model shows it continued generating conversions for 3 weeks after the last spot aired. Platform attribution missed this entirely because it only looks back 7 days. The true ROI of TV is 40% higher than what platform reporting shows."

Confidence intervals: how sure are we

Every MMM estimate should come with an uncertainty interval. If the model says Meta's ROI is 2.5x with a 90% interval of [1.8, 3.4], the practical reading is: the model believes there's a 90% probability the true ROI falls between 1.8x and 3.4x. (Strictly, that probability reading applies to the Bayesian credible intervals tools like Meridian report; for frequentist confidence intervals it's a loose but workable shorthand.)

Wide intervals mean low confidence. If the interval is [0.5, 5.0], the model is essentially saying "I don't have enough data to give you a useful estimate." Don't make major budget decisions based on wide intervals.

Overlapping intervals mean unclear winners. If Meta's ROI interval is [2.0, 3.5] and Google's is [2.2, 3.8], the model can't confidently say which is better. They're statistically indistinguishable.

How to present this. Frame uncertainty as a range of scenarios. "In the best case, shifting $20K to CTV would increase revenue by $60K. In the worst case, it would increase revenue by $10K. The most likely outcome is around $30K. Even the worst case is positive, so this is a low-risk move."
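With a Bayesian model you can compute those scenarios directly from posterior draws. A sketch where simulated draws stand in for real model output -- the distribution here is invented purely for illustration:

```python
import random

random.seed(7)

# Pretend these are posterior draws of the incremental revenue
# from shifting $20K to CTV; a real MMM would supply them.
draws = sorted(random.gauss(30_000, 12_000) for _ in range(10_000))

worst = draws[500]     # ~5th percentile
likely = draws[5_000]  # median
best = draws[9_500]    # ~95th percentile

print(f"worst ~${worst:,.0f}, likely ~${likely:,.0f}, best ~${best:,.0f}")
```

Reading off three percentiles turns an abstract interval into the worst/likely/best framing clients can act on.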

Optimization recommendations: what to actually do

The final output of most MMM tools is an optimized budget allocation. The model takes your total budget and distributes it across channels to maximize total revenue, subject to the response curves and diminishing returns it estimated.

Don't blindly follow the optimizer. The optimizer doesn't know about contractual commitments, strategic priorities, or channel-specific constraints. If the model says cut TV spend by 50%, but you have a $500K annual TV commitment, that's not actionable.

Make incremental moves. Even if the model suggests a radically different allocation, shift budgets by 10-20% at a time and observe results. This validates the model's predictions while limiting downside risk.
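One simple way to enforce that discipline is to clamp the optimizer's suggested allocation to a maximum step size per channel. A sketch -- channel names and budgets are made up:

```python
def constrained_step(current, target, max_shift=0.20):
    # Move each channel toward the optimizer's target,
    # but by at most `max_shift` of its current budget.
    step = {}
    for ch, cur in current.items():
        lo, hi = cur * (1 - max_shift), cur * (1 + max_shift)
        step[ch] = min(max(target[ch], lo), hi)
    return step

current = {"meta": 100_000, "search": 80_000, "ctv": 20_000}
target = {"meta": 60_000, "search": 70_000, "ctv": 70_000}  # optimizer's suggestion

print(constrained_step(current, target))
```

Here Meta only drops to $80K and CTV only rises to $24K this cycle; after observing results, you re-run and take the next step. (The clamped totals need not match the original budget exactly -- renormalize if total spend must stay constant.)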

Re-optimize quarterly. Market conditions change. Creative fatigue sets in. Competitor behavior shifts. A budget allocation that's optimal in Q1 may not be optimal in Q3. Treat MMM as a living tool, not a one-time answer.

FAQ

How do I know if the MMM results are trustworthy?

Check three things. First, does the baseline make directional sense? If the model says 80% of revenue is baseline for a brand that does no organic marketing, something is off. Second, do the channel rankings align with any incrementality tests you've run? Third, run a holdout validation -- withhold the last 4-8 weeks of data, fit the model on the rest, and check if it accurately predicts the holdout period.
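The holdout check in the third step is only a few lines of code. A sketch using MAPE (mean absolute percentage error), with made-up weekly revenue figures:

```python
def mape(actual, predicted):
    # Mean absolute percentage error across the holdout weeks.
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical: 6 holdout weeks of actual revenue vs. the model's
# out-of-sample forecast (model fit on everything before these weeks).
actual    = [120_000, 115_000, 130_000, 125_000, 118_000, 122_000]
predicted = [112_000, 119_000, 124_000, 131_000, 115_000, 126_000]

print(f"holdout MAPE: {mape(actual, predicted):.1%}")
```

As a rough rule of thumb, holdout MAPE in the low single digits to low teens suggests the model tracks reality well enough to inform budget moves; much above that, treat its estimates with caution.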

Should I share raw MMM outputs with clients?

No. Clients need business insights, not statistical charts. Translate every output into a decision: "Shift $15K from Search to Meta because Search is saturated and Meta has room to scale." Include the charts as supporting evidence, but lead with the recommendation.

How often should I rerun the MMM for clients?

Quarterly is the standard cadence for most agencies. Monthly if the client's media mix changes frequently or if spend levels are shifting significantly. Always rerun before a major budget planning cycle so the recommendations reflect current market dynamics.


Go Funnel uses server-side tracking and multi-touch attribution to show you which ads actually drive revenue. Book a call to see your real numbers.

Want to see your real ROAS?

Connect your ad accounts in 15 minutes and get attribution data you can actually trust.

Book a Call
