How Media Mix Modeling Works for Mid-Market Brands
Media mix modeling used to be for enterprise brands only. Open-source tools and better data now make it accessible for brands spending $50K+/month on ads.
MMM isn't just for Fortune 500 companies anymore
For decades, media mix modeling was a tool only available to brands spending $50M+ on advertising. The models required expensive proprietary software, teams of PhD statisticians, and months of data preparation. CPG giants like Procter & Gamble, Unilever, and Coca-Cola used MMM to allocate billions in media spend. Everyone else relied on gut feel and platform ROAS.
That changed in 2022 when Meta released Robyn, an open-source MMM tool. Google followed with Meridian in 2024. Suddenly, the same modeling approach that informed billion-dollar ad budgets was available to anyone with historical marketing data and basic technical skills.
For mid-market ecommerce brands spending $50K-$500K/month on ads, MMM went from "impossible" to "advisable" almost overnight.
What makes mid-market MMM different from enterprise MMM
Fewer channels, simpler model
Enterprise brands might have 15-20 marketing channels including TV, radio, print, outdoor, cinema, sponsorships, and multiple digital platforms. Mid-market brands typically have 4-7: Meta, Google, TikTok, email, maybe influencer, maybe Amazon Ads.
Fewer channels means a simpler model with fewer parameters to estimate. This is actually an advantage: simpler models are more robust and require less historical data.
More digital data, better granularity
Enterprise brands have a mix of digital and offline channels, with offline data often coming in monthly or weekly aggregates. Mid-market brands are primarily digital, with daily spend and revenue data available from platform APIs.
More granular data (daily instead of weekly) gives the model more observations to learn from. Two years of daily data provides 730 data points, compared to 104 weekly observations. This extra data compensates for the shorter track record that mid-market brands typically have.
Less organizational complexity
Enterprise MMM projects involve coordinating data from 10+ teams, navigating organizational politics around budget allocation, and managing a 6-month implementation timeline. Mid-market brands can often pull all necessary data in a day, run the model in a week, and implement changes the following month.
Setting up MMM for your brand: a practical walkthrough
Step 1: Gather your data (1-2 days)
You need three categories of data, ideally at daily or weekly granularity for 2+ years:
Marketing data:
- Spend per channel per day/week (pull from Meta Ads Manager, Google Ads, TikTok Ads, etc.)
- Impressions and clicks per channel (optional but helpful for some model configurations)
Outcome data:
- Daily/weekly revenue
- Daily/weekly conversions (new customers is ideal; total orders works too)
- Daily/weekly website sessions (optional, useful as a mediating variable)
External data:
- Holiday flags (Black Friday, Cyber Monday, Christmas, Valentine's Day, etc.)
- Promotion flags (when you ran discounts or sales)
- Pricing changes (if you changed pricing during the period)
- Any major business events (new product launches, stockouts, website redesign)
For Shopify brands, revenue and order data can be exported directly. For ad platforms, most offer CSV export or API access to historical data.
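As a minimal sketch of what the assembled daily dataset can look like, here is one way to lay it out in pandas. The column names, date range, and holiday dates are illustrative assumptions -- in practice each channel's spend column comes from a platform export or API pull merged on date:

```python
import pandas as pd

# One row per day: spend per channel, outcomes, and external flags.
dates = pd.date_range("2023-01-01", "2024-12-31", freq="D")
df = pd.DataFrame({"date": dates})

# Per-channel spend (stubbed with zeros here; in practice, merge each
# platform's export on the date column).
for channel in ["meta_spend", "google_spend", "tiktok_spend"]:
    df[channel] = 0.0

# Outcome data from the store backend (e.g. a Shopify export).
df["revenue"] = 0.0
df["orders"] = 0

# External regressors: holiday and promo flags as 0/1 dummies.
holidays = pd.to_datetime(["2023-11-24", "2023-12-25", "2024-11-29", "2024-12-25"])
df["holiday"] = df["date"].isin(holidays).astype(int)
df["promo"] = 0  # set to 1 on discount/sale days

print(len(df), int(df["holiday"].sum()))  # 731 daily rows, 4 holiday flags
```

The point of the flat daily layout is that every tool option below (Robyn, Meridian, or hand-rolled regression) consumes essentially this shape.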
Step 2: Choose your tool (1 day)
Meta's Robyn (R-based):
- Pros: Well-documented, large community, automated hyperparameter tuning, built-in budget allocation optimizer
- Cons: Requires R (which some teams don't use), can be slow on large datasets
- Best for: Brands that want a guided experience with extensive documentation
Google's Meridian (Python-based):
- Pros: Bayesian approach (provides uncertainty estimates), accepts prior information from incrementality tests, Python (more common in data teams)
- Cons: Newer, smaller community, requires more statistical knowledge
- Best for: Brands with data science resources who want calibrated uncertainty
Lightweight regression (any language):
- Build your own regression with adstock transformation and saturation curves
- Pros: Full control, educational, lightweight
- Cons: No automation, no built-in diagnostics, easy to make mistakes
- Best for: Learning purposes or very simple channel mixes (2-3 channels)
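For the lightweight-regression route, the core ideas are only a few lines: a geometric adstock transform (carryover of past spend), a Hill saturation curve (diminishing returns), and an ordinary least squares fit where the intercept plays the role of the organic baseline. The sketch below runs on synthetic data; the decay rates, half-saturation points, and "true" coefficients are all illustrative assumptions, not fitted values from any real brand:

```python
import numpy as np

def adstock(x, decay):
    """Geometric adstock: each period carries over `decay` of the prior stock."""
    out = np.zeros_like(x, dtype=float)
    carry = 0.0
    for t, v in enumerate(x):
        carry = v + decay * carry
        out[t] = carry
    return out

def hill(x, half_sat, slope=1.0):
    """Hill saturation: maps spend to a diminishing response in [0, 1)."""
    return x**slope / (x**slope + half_sat**slope)

rng = np.random.default_rng(0)
n = 104  # two years of weekly observations
meta = rng.uniform(10_000, 40_000, n)
google = rng.uniform(5_000, 20_000, n)

# Transform each channel, then fit OLS with an intercept (the baseline).
X = np.column_stack([
    np.ones(n),
    hill(adstock(meta, 0.3), half_sat=25_000),
    hill(adstock(google, 0.2), half_sat=12_000),
])
true_coefs = np.array([50_000, 30_000, 15_000])  # baseline, Meta, Google
y = X @ true_coefs + rng.normal(0, 2_000, n)

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coefs, -2))  # recovered baseline and channel effects
```

This is exactly the "easy to make mistakes" territory the cons list warns about: pick the wrong decay range or skip the saturation transform and the fit will still converge, just to wrong answers.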
Step 3: Configure and run the model (2-5 days)
Using Robyn as the example (most common for mid-market brands):
1. Set hyperparameter ranges. Robyn searches across combinations of adstock decay rates and saturation parameters for each channel. Set reasonable ranges based on your business knowledge:
   - Digital channels: 0.1-0.5 adstock decay (1-3 week carryover)
   - Email: 0.0-0.2 decay (minimal carryover)
   - Influencer: 0.2-0.6 decay (moderate carryover)
2. Run the model. Robyn generates thousands of model candidates and clusters them into distinct solutions. Each solution represents a different interpretation of how your channels contribute.
3. Select the best model. Evaluate candidates based on:
   - Statistical fit (R-squared, NRMSE)
   - Business plausibility (does the model's channel decomposition match your intuition?)
   - Predictive accuracy (hold out the last 8 weeks and check whether the model predicts them correctly)
4. Review channel decomposition. The model outputs the percentage of revenue attributable to each channel plus baseline (organic demand).
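The holdout check in the model-selection step can be sketched as follows. The revenue series and the "model" here are synthetic stand-ins (a seasonal sine wave plus noise, predicted by the training mean), and NRMSE is computed as RMSE normalized by the range of the held-out actuals -- one common normalization:

```python
import numpy as np

# Synthetic weekly revenue: baseline + annual seasonality + noise.
rng = np.random.default_rng(1)
weeks = 104
y = 100_000 + 20_000 * np.sin(np.arange(weeks) * 2 * np.pi / 52) \
    + rng.normal(0, 5_000, weeks)

# Hold out the last 8 weeks, as in the walkthrough.
holdout = 8
train, test = y[:-holdout], y[-holdout:]

# Stand-in prediction: the training mean (a real MMM would predict from
# spend, seasonality, and flags).
pred = np.full(holdout, train.mean())

# NRMSE: RMSE normalized by the range of the held-out actuals.
rmse = np.sqrt(np.mean((test - pred) ** 2))
nrmse = rmse / (test.max() - test.min())
print(round(float(nrmse), 3))
```

A candidate that fits the training period well but posts a poor holdout NRMSE is memorizing noise, which is exactly what the business-plausibility check is meant to catch from the other direction.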
Step 4: Extract actionable insights (1-2 days)
Channel contributions: How much revenue does each channel actually drive?
A typical mid-market ecommerce brand's decomposition:
| Source | Revenue Contribution |
|--------|---------------------|
| Organic/Baseline | 45-60% |
| Meta | 15-25% |
| Google (non-brand) | 8-15% |
| Google (branded) | 3-5% |
| Email/SMS | 5-10% |
| TikTok | 2-8% |
| Other | 2-5% |
The baseline (organic) contribution often surprises founders. For brands that have been around 3+ years with decent brand awareness, 45-60% of revenue happens without any active advertising. This means your marketing budget is competing for the remaining 40-55%.
Response curves: How does each channel perform at different spend levels?
These curves show the diminishing returns trajectory. A common finding for mid-market brands: Meta hits meaningful diminishing returns around $60K-$100K/month (depending on audience size), Google Search hits it around $30K-$50K/month, and TikTok often has more headroom because it's typically underspent relative to its efficient frontier.
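To make the diminishing-returns point concrete, here is a hypothetical Hill-shaped response curve and its marginal ROAS (extra revenue per extra dollar) at a few monthly spend levels. The curve parameters are assumptions chosen for illustration, not fitted values for any channel:

```python
def response(spend, top=300_000.0, half_sat=60_000.0):
    """Assumed Hill response: revenue attributable to a channel at a
    given monthly spend, saturating toward `top`."""
    return top * spend / (spend + half_sat)

# Marginal ROAS at each spend level, via a $1 finite difference.
for spend in [20_000, 60_000, 100_000, 150_000]:
    marginal = response(spend + 1) - response(spend)
    print(f"${spend:,}/mo -> marginal ROAS {marginal:.2f}")
```

On this assumed curve the first dollars return nearly $3 each, while dollars past $100K/month return well under $1 -- the shape behind statements like "Meta hits meaningful diminishing returns around $60K-$100K/month."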
Budget optimizer output: Where should you reallocate?
Both Robyn and Meridian include budget optimizers that recommend the optimal spend allocation given your total budget and efficiency targets.
A typical optimization finding: the model recommends shifting 20-30% of budget from mature, saturated channels (often Meta at high spend) to undersaturated channels (often TikTok, YouTube, or Google Shopping).
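The core idea behind these optimizers can be sketched as greedy marginal allocation: repeatedly give the next budget increment to whichever channel has the highest marginal return at its current spend, until the total budget is placed. The response-curve parameters below are assumptions for illustration (real optimizers use the fitted curves and add constraints):

```python
def hill_response(spend, top, half_sat):
    """Assumed Hill response curve for one channel."""
    return top * spend / (spend + half_sat)

channels = {
    # name: (top, half_sat) of an assumed response curve
    "meta":   (250_000.0, 70_000.0),
    "google": (120_000.0, 30_000.0),
    "tiktok": (100_000.0, 50_000.0),
}

total_budget = 150_000.0
step = 1_000.0
alloc = {name: 0.0 for name in channels}

spent = 0.0
while spent < total_budget:
    # Marginal revenue of the next $1k in each channel right now.
    gains = {
        name: hill_response(alloc[name] + step, *p) - hill_response(alloc[name], *p)
        for name, p in channels.items()
    }
    best = max(gains, key=gains.get)  # fund the highest marginal return
    alloc[best] += step
    spent += step

print({name: int(v) for name, v in alloc.items()})
```

At the optimum, marginal returns are roughly equalized across channels -- which is why the recommendation is usually to pull budget from saturated channels (low marginal return) and push it to undersaturated ones, even when the saturated channel's average ROAS still looks better.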
Real-world results: what mid-market brands find
We've reviewed MMM outputs from 40+ mid-market ecommerce brands ($50K-$500K/month in ad spend). The most common findings:
Finding 1: Organic baseline is higher than expected
Brands spending $200K/month assume their marketing drives most of their revenue. MMM typically shows that 50-60% of revenue would happen without any paid advertising. This doesn't mean marketing is unimportant -- it means the incremental value of each marketing dollar is lower than platform ROAS suggests, which affects optimal budget levels.
Finding 2: One channel is significantly oversaturated
In 75% of models, at least one channel is spending past its efficient frontier. The most common: Meta at high spend levels, branded search at any spend level, and retargeting when it represents more than 15% of total budget. Reducing spend on the oversaturated channel and reallocating to the highest marginal return channel produces a 10-20% improvement in total marketing efficiency.
Finding 3: Email is under-credited
MMM consistently shows email/SMS contributing 5-12% of total revenue -- more than most attribution models report. This is because email drives conversions that happen through other channels (a customer reads an email, then searches on Google and buys). Attribution credits Google. MMM credits email. The implication: invest more in email list growth and segmentation.
Finding 4: Seasonality matters more than you think
For seasonal businesses, 20-35% of revenue variation comes from seasonality alone. This means a significant portion of your "high-performing" Q4 ad campaigns aren't performing well because of your ads -- they're performing well because it's Q4. Conversely, your "underperforming" Q1 campaigns may be perfectly efficient, just running during a slower season.
Common objections and honest answers
"Our brand is too small for MMM"
If you spend more than $50K/month and have 2 years of data, you have enough for a basic model. The model won't be as precise as one built on 5 years of data across 15 channels, but it will be more accurate than platform attribution for strategic decisions.
"We don't have a data scientist"
You don't need a full-time data scientist. A contractor with R or Python experience can set up Robyn or Meridian in 40-60 hours. Several agencies now offer "MMM as a service" for $3K-$8K per quarter. The initial setup is the heaviest lift -- quarterly refreshes take 10-20 hours.
"The model might be wrong"
It will be wrong in some ways. All models are approximations. The question is whether it's less wrong than your current approach (platform ROAS comparison). The answer is almost always yes. MMM typically estimates within 15-25% of incrementality test results, which is far better than platform reporting that can be off by 100-500%.
"We change our mix too often for the model to keep up"
Ironically, frequent mix changes are good for MMM. The model needs spend variation to estimate channel effects. Brands that constantly test budgets, launch new campaigns, and shift allocations provide the variation that makes MMM more accurate, not less.
Frequently Asked Questions
How long does the first MMM project take for a mid-market brand?
From data collection to actionable recommendations, plan for 3-4 weeks. Week 1: data gathering and cleaning. Week 2: model configuration and initial runs. Week 3: model selection, validation, and sensitivity analysis. Week 4: insight extraction, scenario modeling, and recommendation development. If your data is well-organized (consistent exports, clean formats), weeks 1-2 compress significantly. The biggest time sink is always data cleaning -- inconsistent naming conventions, missing weeks, currency/timezone issues, and reconciling numbers across platforms.
Can I use MMM to measure the impact of organic social media?
Partially. If you can quantify organic social activity (number of posts, engagement rate, or impressions), you can include it as a variable in the model. The challenge is that organic social is hard to separate from brand effects -- a brand with strong awareness gets more organic engagement regardless of posting frequency. What MMM can measure is the incremental impact of increased organic activity above the baseline. If you doubled your posting frequency for 3 months and the model captured a corresponding revenue increase, that's a real signal. But organic social data tends to be less variable than paid media, which makes the model's estimates less precise.
What should I do if MMM results conflict with what my ad platforms report?
This is expected and normal. Platform-reported numbers include non-incremental conversions (organic buyers who saw an ad), cross-platform overlap, and modeled conversions. MMM strips these away by estimating each channel's marginal contribution to total revenue. When MMM says Meta drives 20% of revenue but Meta reports driving 40%, the gap represents organic conversions that Meta is claiming credit for. Use the MMM result for strategic budget allocation and the platform data for tactical, within-channel optimization. If the gap seems implausibly large, validate with an incrementality test on the channel in question.
Go Funnel uses server-side tracking and multi-touch attribution to show you which ads actually drive revenue. Book a call to see your real numbers.
Want to see your real ROAS?
Connect your ad accounts in 15 minutes and get attribution data you can actually trust.
Book a Call