
What Is Media Mix Modeling: A Plain-English Guide

Media mix modeling uses historical data to estimate how each channel contributes to revenue. Here's how it works, what it costs, and when it's worth it.

Go Funnel Team · 7 min read

Media mix modeling answers the question every CMO asks:

"If I move $100K from Channel A to Channel B, what happens to revenue?"

Attribution models can't answer this. They describe past customer journeys but can't predict what would happen under different conditions. Incrementality tests can answer it for one channel at a time, but can't model portfolio-level reallocation.

Media mix modeling (MMM) is the only measurement approach designed to answer this question across all channels simultaneously. It estimates how much each marketing channel contributes to business outcomes, controls for external factors, and predicts what would happen at different spend levels.

How media mix modeling works

At its core, MMM is a regression analysis. It takes your historical data -- weekly or daily spend on each marketing channel, total revenue or conversions, and external factors -- and builds a statistical model that estimates the relationship between each input and the output.

The inputs

Marketing spend by channel: Weekly spend on Meta, Google, TikTok, TV, radio, email, influencer, and any other channel. The more granular the better (campaign-level data is ideal, but channel-level works).

Business outcomes: Weekly revenue, conversions, new customers, or whatever metric you want to optimize. Use a consistent metric -- don't switch between revenue and conversions mid-dataset.

External factors (controls): Variables that affect your outcomes but aren't marketing:

  • Seasonality (month, week, holidays)
  • Pricing changes (promotions, discounts)
  • Economic indicators (consumer confidence, unemployment)
  • Competitive activity (competitor launches, promotional periods)
  • Distribution changes (new retail partners, Amazon listings)
  • Weather (for weather-sensitive products)
  • Organic growth trend (baseline business growth independent of marketing)

The model

The regression estimates coefficients for each input. A simplified version:

Revenue = Baseline + (Meta coefficient x Meta spend) + (Google coefficient x Google spend) + (TV coefficient x TV spend) + (Seasonality effect) + (Promotion effect) + Error

The coefficients tell you how much each dollar of spend contributes to revenue. The baseline represents revenue you'd generate without any marketing (organic demand).
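The regression step can be sketched in a few lines. This is a toy illustration with synthetic weekly data, not a production MMM -- every channel name, coefficient, and dollar figure below is invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 104  # two years of weekly data

# Illustrative inputs: weekly spend per channel plus a seasonality control
meta_spend = rng.uniform(60_000, 100_000, weeks)
google_spend = rng.uniform(40_000, 80_000, weeks)
seasonality = np.sin(2 * np.pi * np.arange(weeks) / 52)  # annual cycle

# Synthetic "true" relationship the model should recover
baseline = 500_000
revenue = (baseline
           + 3.0 * meta_spend
           + 2.0 * google_spend
           + 80_000 * seasonality
           + rng.normal(0, 20_000, weeks))  # noise the model can't explain

# Design matrix: intercept column (baseline) plus one column per input
X = np.column_stack([np.ones(weeks), meta_spend, google_spend, seasonality])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)

# coef ~ [baseline, Meta coefficient, Google coefficient, seasonal effect]
print(coef)
```

Because the synthetic data has real spend variation, the fitted coefficients land close to the true values -- which is exactly the property a real MMM depends on.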

Adstock and saturation

Two critical adjustments make MMM realistic:

Adstock (carryover effect): Marketing doesn't have a one-day effect. A TV ad running this week generates awareness that drives purchases next week and the week after. The model applies a decay function to capture this lingering effect.

Different channels have different carryover rates:

  • TV/radio: 3-6 week carryover (slow decay)
  • Meta/Google: 1-3 week carryover (moderate decay)
  • Email: 1 week or less (fast decay)
  • Influencer: 2-4 week carryover
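Carryover is commonly modeled as geometric decay: each week's effect is this week's spend plus a retained fraction of last week's effect. A minimal pure-Python sketch (the retention rates are illustrative, not benchmarks):

```python
def adstock(spend, retention):
    """Geometric adstock: effect[t] = spend[t] + retention * effect[t-1]."""
    out, carry = [], 0.0
    for s in spend:
        carry = s + retention * carry
        out.append(carry)
    return out

# A one-week $100 burst keeps working in later weeks
print(adstock([100, 0, 0, 0], retention=0.5))  # [100.0, 50.0, 25.0, 12.5]
print(adstock([100, 0, 0, 0], retention=0.2))  # [100.0, 20.0, 4.0, 0.8]
```

A higher retention rate corresponds to a TV-like long carryover; a lower rate to an email-like short one.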

Saturation (diminishing returns): Each channel follows an S-curve or logarithmic curve where additional spend produces progressively less additional revenue. The model estimates these curves, showing the point where each channel hits diminishing returns.
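The S-curve is often parameterized as a Hill function. A sketch with illustrative parameters -- the half-saturation point and slope would be estimated by the model, not chosen by hand:

```python
def hill(spend, half_sat, slope):
    """Hill saturation curve: returns a normalized effect in [0, 1).
    half_sat is the spend level where the curve reaches half its maximum."""
    return spend ** slope / (half_sat ** slope + spend ** slope)

# Doubling spend far past the half-saturation point adds little extra effect
for spend in (50_000, 100_000, 200_000, 400_000):
    print(spend, round(hill(spend, half_sat=100_000, slope=2.0), 3))
```

Each doubling of spend buys a smaller and smaller increase in effect -- the diminishing-returns shape the model estimates per channel.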

What MMM tells you that attribution can't

Channel decomposition

"Of your $5M in quarterly revenue, $1.8M came from organic demand, $1.2M from Meta, $800K from Google, $400K from TV, $300K from email, and $500K from other factors."

This breakdown shows the absolute contribution of each channel, including channels that attribution can't track (TV, radio, billboards, PR).

Response curves

"Meta generates $3.50 per dollar at $80K/month spend, $2.80 at $120K/month, and $1.90 at $160K/month."

Response curves show the diminishing returns trajectory for each channel. This tells you the optimal spend level for each channel and where reallocation produces the biggest gains.
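Given a fitted response curve, per-dollar return at any spend level is the channel's modeled revenue divided by spend. A toy sketch using a logarithmic curve with made-up parameters (a real model would estimate these from your data):

```python
import math

def channel_revenue(spend, scale=280_000, half_sat=40_000):
    """Illustrative logarithmic response curve for one channel."""
    return scale * math.log1p(spend / half_sat)

for spend in (80_000, 120_000, 160_000):
    roas = channel_revenue(spend) / spend
    print(f"${spend:,}/month -> ${roas:.2f} revenue per dollar")
```

Because the curve is concave, revenue per dollar falls as spend rises -- the same pattern as the Meta example above.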

Scenario planning

"If you shift $50K from Google Branded to Meta Prospecting, the model predicts a $35K increase in monthly revenue."

MMM can simulate hypothetical budget reallocations before you commit real money. This is invaluable for annual planning and major strategic shifts.
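A reallocation scenario is scored by evaluating each channel's fitted response curve before and after the shift. A toy sketch with two hypothetical curves (all parameters invented for illustration):

```python
import math

def response(spend, scale, half_sat):
    """Illustrative logarithmic response curve for one channel."""
    return scale * math.log1p(spend / half_sat)

def total_revenue(google_spend, meta_spend):
    # Hypothetical fitted parameters: Google Branded is near saturation,
    # Meta Prospecting still has headroom
    return (response(google_spend, scale=150_000, half_sat=20_000)
            + response(meta_spend, scale=400_000, half_sat=120_000))

current = total_revenue(100_000, 80_000)
shifted = total_revenue(50_000, 130_000)  # move $50K between channels
print(f"Predicted change: ${shifted - current:+,.0f}/month")
```

The shift wins in this toy example because the dollars leave a saturated curve and land on one with headroom -- the core logic behind every MMM reallocation recommendation.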

External factor impact

"Seasonality accounts for 22% of your Q4 revenue spike. Marketing accounts for 14%. The remaining 64% is organic demand growth."

Understanding the non-marketing drivers of your business prevents you from over-crediting (or under-crediting) your marketing during peak and trough periods.

What MMM requires to work well

Data requirements

Minimum 2 years of weekly data. The model needs to observe at least two full seasonal cycles to separate marketing effects from seasonal patterns. Three years is better.

Spend variation. If you've spent exactly $80K on Meta every week for two years, the model can't estimate Meta's effect because there's no variation to learn from. You need periods of higher and lower spend. Natural variation from seasonality, tests, or budget changes works.

At least 104 data points (2 years of weekly data). More data points improve model stability. Monthly data (24 points over 2 years) doesn't have enough observations for reliable estimation.
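A quick pre-flight check is each channel's coefficient of variation (standard deviation divided by mean): a near-zero value means the model has nothing to learn from. A plain-Python sketch (the spend series are invented, and any pass/fail threshold would be a rule of thumb, not a standard):

```python
from statistics import mean, pstdev

def spend_cv(weekly_spend):
    """Coefficient of variation of a channel's weekly spend series."""
    avg = mean(weekly_spend)
    return pstdev(weekly_spend) / avg if avg else 0.0

flat = [80_000] * 104                            # identical spend every week
varied = [60_000, 80_000, 120_000, 90_000] * 26  # natural budget variation

print(spend_cv(flat))               # 0.0 -- effect is unidentifiable
print(round(spend_cv(varied), 2))
```

Running this per channel before modeling flags the "exactly $80K every week" problem early, while there's still time to create variation.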

Organizational requirements

A data team or analytics partner. Someone needs to collect, clean, and maintain the data, configure the model, run diagnostics, and interpret results. This is a technical exercise.

Consistent data infrastructure. Spend data needs to come from the same sources in the same format every week. Revenue data needs a consistent definition. Gaps or inconsistencies in the data reduce model accuracy.

Willingness to act on results. The model will tell you to reallocate budget in ways that may feel uncomfortable. If leadership isn't willing to shift budget based on model recommendations, the investment in MMM is wasted.

The cost and timeline

Open-source (lowest cost)

Meta's Robyn and Google's Meridian are free, open-source MMM tools. Cost:

  • Implementation: 40-80 hours of data science time for setup, data preparation, and initial model run
  • Ongoing: 10-20 hours quarterly for model refresh and analysis
  • Total annual cost: $15K-$40K if using contractor data science rates

Managed services (mid-range)

Agencies and analytics firms that run MMM as a service:

  • Setup: $20K-$50K for initial model build
  • Ongoing: $5K-$15K per quarter for refreshes and reporting
  • Total annual cost: $40K-$110K

Enterprise platforms (highest cost)

Dedicated MMM platforms (Nielsen, Ekimetrics, Analytic Partners):

  • Setup: $50K-$150K
  • Ongoing: $30K-$80K annually
  • Total annual cost: $80K-$230K

When MMM is worth the investment

At minimum $500K/year in total ad spend, MMM starts to pay for itself. A 10% budget optimization on $500K is $50K -- enough to cover the cost of an open-source implementation.

At $1M+/year, MMM is strongly recommended. The optimization opportunities at this spend level routinely exceed the cost of even premium implementations.

Below $300K/year, the cost of MMM is disproportionate. Use incrementality testing and basic marginal analysis instead.

Common MMM pitfalls

Overfitting

A model with too many variables relative to data points can fit the historical data perfectly while predicting future performance terribly. With 104 weekly observations and 15 marketing channels plus 10 control variables, the model has more parameters than it can reliably estimate.

Fix: Reduce the number of input variables. Group small channels together. Use regularization techniques (Ridge, Lasso) that penalize model complexity.
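Ridge regression, one of the regularization techniques mentioned above, shrinks coefficients by adding a penalty to the least-squares objective; in closed form that means solving (XᵀX + λI)β = Xᵀy. A minimal numpy sketch (λ is illustrative and would normally be chosen by cross-validation):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: solve (X'X + lam*I) b = X'y.
    (Conventionally the intercept is not penalized; omitted here for brevity.)"""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(104, 25))    # 104 weeks, 25 inputs: overfitting risk
true_coef = np.zeros(25)
true_coef[:3] = [3.0, 2.0, 1.5]   # only a few inputs truly matter
y = X @ true_coef + rng.normal(0, 1.0, 104)

ols = ridge_fit(X, y, lam=0.0)    # lam=0 is plain least squares
ridge = ridge_fit(X, y, lam=10.0)

# The penalty shrinks the coefficient vector, damping spurious effects
print(np.linalg.norm(ols), np.linalg.norm(ridge))
```

The shrinkage trades a little bias for much lower variance, which is usually the right trade when you have 25 inputs and only 104 observations.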

Multicollinearity

If you always increase Meta and TikTok spend together (same seasonal pattern, same budget cycle), the model can't separate their individual effects. It might attribute all the combined effect to one channel and zero to the other.

Fix: Look for periods where the channels moved independently. If none exist, create variation by deliberately shifting spend between channels. Report the combined effect rather than attempting to separate inseparable signals.
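You can diagnose this before modeling by computing pairwise correlations between channel spend series; correlations above roughly 0.8-0.9 are a warning sign (a rule of thumb, not a hard cutoff). A sketch with synthetic spend data:

```python
import numpy as np

rng = np.random.default_rng(2)
weeks = np.arange(104)
season = np.sin(2 * np.pi * weeks / 52)

# Hypothetical spends: Meta and TikTok follow the same seasonal budget cycle
meta = 80_000 + 30_000 * season + rng.normal(0, 2_000, 104)
tiktok = 40_000 + 15_000 * season + rng.normal(0, 1_000, 104)
email = rng.uniform(5_000, 15_000, 104)  # moves independently

r_meta_tiktok = np.corrcoef(meta, tiktok)[0, 1]
r_meta_email = np.corrcoef(meta, email)[0, 1]
print(round(r_meta_tiktok, 2), round(r_meta_email, 2))
```

A near-1.0 correlation, as between Meta and TikTok here, means the model will struggle to separate the two channels' effects, no matter how good the rest of the data is.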

Ignoring long-term effects

Standard MMM captures short-term effects (spend this week affects revenue in the next 2-6 weeks). It doesn't capture long-term brand building effects (a TV campaign this quarter increases baseline demand permanently).

Fix: Add a brand health metric (aided awareness, brand search volume) as a mediating variable. This lets the model capture both the direct response effect and the brand equity effect.

Frequently Asked Questions

Can a small ecommerce brand use MMM without a data scientist?

Yes, with caveats. Meta's Robyn has extensive documentation and can be run by someone with intermediate R skills. Google's Meridian uses Python. Several agencies offer "MMM as a service" for $3K-$5K per quarter, which makes it accessible for brands spending $50K+/month. The key is having clean, organized data. If your spend data is in platform dashboards, revenue is in Shopify, and external data is scattered across spreadsheets, the data preparation step alone takes 20-30 hours. Investing in a clean data pipeline first makes MMM implementation much smoother.

How accurate are MMM results?

MMM accuracy depends on data quality, model specification, and the amount of spend variation in the historical data. A well-built MMM with 3 years of varied data typically estimates channel contributions within 15-25% of incrementality test results. That's not perfect, but it's dramatically better than platform-reported attribution, which can be off by 100-500%. The key is calibrating MMM with incrementality tests. When you use holdout test results as validation data, you can identify and correct model biases. An uncalibrated MMM is a useful starting point. A calibrated MMM is a reliable decision tool.

How often should I refresh my MMM?

Quarterly at minimum. Each refresh incorporates the latest 13 weeks of data and recalculates channel contributions and response curves. Monthly refreshes are ideal for brands in rapidly changing environments (new product launches, competitive shifts, seasonal businesses). The refresh should be automated as much as possible -- data pipeline, model run, and basic diagnostics. The human time should focus on interpreting results and translating them into budget recommendations. If your model takes a data scientist 40 hours to refresh, the process needs streamlining. A mature implementation should refresh in under 10 hours of analyst time.


Go Funnel uses server-side tracking and multi-touch attribution to show you which ads actually drive revenue. Book a call to see your real numbers.

Want to see your real ROAS?

Connect your ad accounts in 15 minutes and get attribution data you can actually trust.

Book a Call
