Shopping Campaign Structure & Principles
How our Google Shopping campaigns are structured, why, and the principles that should guide any future changes. This is a living document — update it as the account evolves and new evidence emerges.
Current Structure
| Campaign | ID | Products | Bidding | Daily Budget | Status | Purpose |
|---|---|---|---|---|---|---|
| Shopping Catch All >£20 | 71 | ~700+ SKUs (all >£20) | Campaign-level tROAS | £150 | Active | Primary Shopping campaign — all products above £20 |
| Shopping Catch All <£20 | 78 | ~115 SKUs (all <£20) | Campaign-level tROAS | TBC | Active | Low-AOV products (hooks, handles, knobs, accessories) |
| Shopping Top Performers | 67 | 10-14 SKUs | Per-ad-group tROAS | £80 | Active — recommended for consolidation into C71 | Manually curated top sellers; overlaps with C71 |
| Shopping Testing | 69 | — | — | — | Paused (Jul 2025) | Predecessor to C71. Proved broad approach works when executed properly |
| Brand Shopping | 72 | — | — | — | Active | Brand terms — separate analysis |
Why Price-Based Segmentation
The >£20 / <£20 split exists because products at different price points need different return expectations. A £7 hook and an £80 table leg frame cannot share a tROAS target — the revenue per conversion is too different for one target to optimise both.
Within each price band, the algorithm distributes spend well across categories. C71’s 2025 data shows every major product type receiving meaningful spend — knife racks, table legs, hooks, desk frames, hairpin legs, and shelving — with ROAS ranging from 3.04x to 5.70x. No category was starved. This is the strongest argument against further segmentation by product type.
| C71 Category (2025) | Spend | ROAS | Starved? |
|---|---|---|---|
| Knife Racks | ~£8,670 | 4.50x | No |
| Table Legs (all types) | ~£5,184 | 4.39x | No |
| Hairpin Legs | ~£5,674 | 5.70x | No |
| Wall Hooks | ~£4,884 | 5.03x | No |
| Desk & Table | ~£4,917 | 4.58x | No |
| Shelving | ~£1,117 | 3.04x | No |
| Stools/Seating | ~£865 | 11-13x | No |
The Core Principle
Campaigns are for different economics, not different products.
A campaign is two things: a budget and a bidding target. Create a new campaign only when you need a different combination of those two things that cannot coexist in an existing campaign.
Product diversity within a campaign is a strength, not a problem. The algorithm handles it — C71 proves this with 700+ products across 25+ categories. What the algorithm cannot handle is products with fundamentally different break-even economics sharing a single tROAS target.
When to Create a New Campaign
1. The break-even economics are fundamentally different
Not 10-20% different — fundamentally different. A £7 knob and an £80 table leg frame need different tROAS targets because the revenue per conversion is so different that one target systematically overbids or underbids for one group.
Test: Would the existing tROAS target cause the algorithm to systematically overbid or underbid on this product category? If yes, split. If it’s within a range the algorithm can handle (3-6x ROAS across categories), keep them together.
Example from this account: The >£20 / <£20 split. Products at £7-15 AOV need a tROAS around 150-200% to be profitable at 40% margins. Products at £30-80 AOV can sustain 300%+ tROAS. One target cannot serve both.
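The break-even arithmetic behind these targets can be sketched in a few lines. This is the generic relationship, not account data: it assumes “margin” means fractional gross margin on revenue, and the example margins in the comments are illustrative.

```python
def break_even_roas(gross_margin: float) -> float:
    """Break-even ROAS for a given fractional gross margin.

    At break-even, ad spend equals gross profit:
    spend = revenue * margin  =>  revenue / spend = 1 / margin.
    """
    if not 0 < gross_margin <= 1:
        raise ValueError("gross_margin must be in (0, 1]")
    return 1.0 / gross_margin


def troas_percent(roas: float) -> int:
    """Express a ROAS multiple as a Google Ads tROAS percentage."""
    return round(roas * 100)


# Illustrative margins (assumptions, not account figures):
# a 50% gross margin breaks even at 2.0x ROAS (tROAS 200%);
# a 40% gross margin breaks even at 2.5x ROAS (tROAS 250%).
# A profitable target must sit above the break-even tROAS.
```

Because the break-even point scales with 1/margin, two price bands with different effective margins or conversion economics genuinely cannot share one target, which is the point the split above makes.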
2. You need to guarantee minimum spend on something the algorithm would otherwise ignore
This is the legitimate “the algorithm is starving my category” scenario — and it should almost always be time-limited. You’re forcing exposure to gather data, not permanently subsidising an underperforming category.
Run it for 6-8 weeks with a defined kill threshold, then either fold successful products back into the parent campaign or accept they don’t work.
Important: If the algorithm is ignoring a category, ask WHY before forcing spend. If it’s because the category doesn’t convert (see: handles/knobs below), a dedicated campaign just forces money into losing auctions.
3. The targeting is structurally different
Different country, different language, different feed, different audience. You can’t mix a UK feed and a US feed in one campaign.
4. You want clean measurement for a time-limited test
Launching a genuinely new product line where you have zero conversion history and want to measure performance in isolation. Time-limited, with a defined end date and success criteria before launch.
When NOT to Create a New Campaign
“These are different products”
Knife racks and table legs are completely different products — but they have similar economics (£30-80 AOV, 3-5% CVR, 3.5-5.5x ROAS). One tROAS handles all of them. Product diversity is what the algorithm is designed to manage.
“I want more control”
Campaign 67 (Top Performers) is the cautionary tale. Per-ad-group tROAS across 78 ad groups, constant human tuning, 7+ bidding interventions in 2025 alone — all delivered worse results than C71’s single campaign-level tROAS on autopilot. More control means more opportunities to make mistakes. See the March 2025 disaster below.
“This category isn’t getting enough impressions”
Almost always a feed quality or market demand problem, not a campaign structure problem. Fix the product titles, images, and pricing first. If the offer is competitive, the algorithm will find it. If it’s not competitive, a dedicated campaign just forces spend into losing auctions.
“We should segment by product type”
This fragments data without changing the bidding economics. Each campaign’s algorithm gets fewer signals to learn from. Unless hooks need a 150% tROAS and knife racks need a 400% tROAS, they belong together.
Learnings & Evidence
Campaign 67 vs Campaign 71: Simplicity Wins
The strongest evidence in this account. Campaign 67 (Top Performers) ran per-ad-group tROAS across 78+ ad groups. Campaign 71 (Catch All) ran campaign-level tROAS on a single ad group. Over 14 months of direct competition (Jan 2025 - Feb 2026):
| Metric | C67 (Complex) | C71 (Simple) |
|---|---|---|
| ROAS | 2.48x | 4.50x |
| CPC | £1.48 | £0.81 |
| Head-to-head ROAS wins | 0 of 14 months | 14 of 14 months |
| Products served | 10-14 | 700+ |
| Management overhead | 7+ interventions/year | Set tROAS, adjust budget |
C71 won on ROAS in every single month. Its worst month (3.32x) was higher than C67’s lifetime average (2.84x). The entire performance gap came from CPC — C71 bought the same quality click for 45% less.
Why C67 lost: Per-ad-group tROAS fragments the algorithm’s data. Each ad group’s bidding algorithm has fewer conversion signals to learn from. Campaign-level tROAS gives the algorithm the full dataset — every click, every conversion, every product — to optimise against one target.
Full analysis: Shopping Catch All vs Top Performers
Cannibalization: ~£12,784/Year Wasted
Running C67 and C71 simultaneously cost ~£12,784/year in duplicate spend on 42-142 overlapping search terms per month. Core terms like “table legs”, “hairpin legs”, and “metal table legs” appeared in both campaigns every month. C71 consistently won lower CPCs on these overlap terms and often generated more revenue from the same queries.
Every pound C67 spent competing with C71 on “table legs” was a pound that could have funded C71 bidding on a long-tail term it couldn’t reach due to budget constraints.
Lesson: Two campaigns bidding on the same products in the same auctions always cannibalises. The only way to avoid it is non-overlapping product sets (like the >£20 / <£20 price split).
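A minimal sketch of how overlap waste like this can be estimated from two search-term reports. The term lists and spend figures below are hypothetical, not account data; a real version would pull each campaign’s search terms report from the Google Ads UI or API and attribute duplicate spend to the campaign that loses on CPC.

```python
# Hypothetical per-campaign search-term reports: {search_term: spend}.
# Illustrative numbers only — real data would come from a search terms
# report export for each campaign over the same date range.
c67_terms = {"table legs": 120.0, "hairpin legs": 45.0,
             "metal table legs": 30.0, "desk frame": 18.0}
c71_terms = {"table legs": 95.0, "hairpin legs": 60.0,
             "metal table legs": 25.0, "wall hooks": 40.0}

# Terms both campaigns bought clicks on in the same period.
overlap = sorted(set(c67_terms) & set(c71_terms))

# Treat the losing campaign's spend on shared terms as duplicated:
# since C71 wins these auctions at lower CPCs, C67's spend on the
# overlap is the waste estimate.
duplicate_spend = sum(c67_terms[t] for t in overlap)
```

Summed over a year of monthly reports, this is the kind of calculation behind the ~£12,784 figure above.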
The March 2025 Disaster
Campaign 67 was switched to Maximize Clicks on March 3, 2025, during a conversion tracking disruption. Result: £6,420 in spend, £4,485 in revenue — 0.70x ROAS. The worst month in the dataset.
Campaign 71, running campaign-level tROAS through the same disruption, delivered 4.99x ROAS. A 7:1 efficiency ratio on the same auctions.
Why it matters: tROAS has a built-in feedback loop — it pulls back spending when conversion signals degrade. Maximize Clicks has no feedback loop for conversions at all. This isn’t an edge case; it’s the expected behaviour of these two strategies under signal loss.
Lesson: Never switch a Shopping campaign to Maximize Clicks. If conversion tracking breaks, tROAS self-regulates. Human intervention (switching strategy) made it worse, not better.
The tROAS Yo-Yo Problem
Campaign 71’s tROAS was changed 7 times between Jun 2025 and Feb 2026, including three changes in one week (Nov 3-9, 2025: 250% → 200% → 250% → 280%).
Google’s tROAS algorithm needs 2-4 weeks of stable conditions to optimise. Each change resets the learning window. Despite this, C71 maintained above-3.32x ROAS throughout — evidence of the underlying campaign’s strength even when actively handicapped.
Lesson: Set the tROAS and leave it alone for at least 6 weeks. The campaign historically delivered 3.9-5.8x ROAS at 250-290% targets. Frequent changes prevent the algorithm from converging on an optimal strategy.
PMax: When It Works and When It Doesn’t
Three PMax campaigns were tested in late 2025/early 2026. All are documented in detail in the audit folder.
| Campaign | Period | ROAS | Status | Key Learning |
|---|---|---|---|---|
| Knobs Only (75) | Dec 2025 | 0.37x | Paused | Niche product, low search volume. 25% of spend went to Display (zero conversions). Even Search-only returned 0.54x |
| Knife Rack (76) | Nov 2025-Jan 2026 | 1.93-3.49x | Paused | Better than Knobs but underperformed C71’s 4.50x. Trending down |
| Table Tops (77) | Jan-Feb 2026 | Volatile | Active — monitoring | 5.83x → 0.79x in two months. Too early to judge |
When PMax Works
- Products with enough search demand to keep the algorithm focused on Search
- MAXIMIZE_CONVERSION_VALUE bidding (not MAXIMIZE_CONVERSIONS)
- Categories where Google can find enough data to learn
When PMax Fails
- Niche products with low search volume: The algorithm can’t find enough Search auctions, so it fills budget with remnant Display inventory at £0.04-0.07 CPC. Zero conversions from Display across all PMax campaigns.
- MAXIMIZE_CONVERSIONS bidding: Makes Display waste worse. The algorithm chases cheap clicks regardless of channel quality.
- Products that overlap with Standard Shopping: PMax takes auction priority and suppresses Shopping. When PMax Knobs was paused, Shopping took 5 weeks to rebuild its learning history. Running both doesn’t improve coverage — it prevents Shopping from learning.
The PMax Lesson for Campaign Structure
PMax on niche categories is a structurally worse version of “create a dedicated campaign.” It gives you less control (can’t exclude Display), fragments data, and suppresses Shopping for the same products. For this account, Standard Shopping consistently outperforms PMax.
Detailed analyses: PMax Knobs Analysis, Knobs & Handles Cross-Campaign Analysis, PMax Is It Worth It
Case Study: Handles & Knobs — When the Problem Isn’t Campaign Structure
The handles/knobs category is the clearest example of misdiagnosing a campaign structure problem when the real issue is the product offer.
What happened
Handle/knob search term impressions collapsed from 60,000 (Q1 2024) to 242 (Q1 2026) — a 99.6% decline over 2 years. This happened across every campaign that served them:
| Campaign | Period | What the algorithm did |
|---|---|---|
| C67 (Top Performers) | Q1-Q4 2024 | Spent £8,485 on handles. ROAS ranged 0.91-2.31x. Impressions halved each quarter |
| C69 (Shopping Testing) | Q1-Q3 2024 | Spent £1,971 on handles. ROAS ranged 0.42-1.43x. Deprioritised |
| PMax Knobs (dedicated campaign) | Dec 2025 | 0.37x ROAS overall, 0.54x Search-only. Paused after 12 days |
| C71 (Catch All) | 2025 | Algorithm spent £25 total on handles, then stopped |
| C78 (<£20) | Jan-Feb 2026 | £29 spend, 31 clicks. Algorithm finding hooks instead |
Four different campaign structures across 2 years. Every algorithm that encountered handles reached the same conclusion: stop spending.
Why a dedicated handles campaign won’t fix it
The algorithm isn’t “starving” handles by allocating budget elsewhere. It’s testing handles, finding they don’t convert, and moving on. A dedicated campaign would force spend into auctions the algorithm has already determined are unprofitable. The PMax test proved this — even with a dedicated budget, Search-only returned 0.54x.
What the investigation should focus on
Handles reportedly performed well when first launched (before this 2-year dataset). Something changed. The question isn’t “which campaign structure?” but:
- Competition: Who’s showing up for “copper kitchen handles” on Google Shopping? At what price? If B&Q or Amazon are competing at £3-5 per handle vs £9-20, no campaign structure fixes that.
- Product page conversion: The real AOV is £46-86 (customers buy 4-8 units). Is it clear from the Shopping listing and product page that this is a multi-unit purchase? Does the page make it easy to buy 6 handles at once?
- Pricing position: Are the products competitively priced for their quality tier?
- Shopping landscape changes: Google has been adjusting Shopping to favour higher-value items. A £7 knob may simply not get served against £40+ products regardless of campaign structure.
The right re-testing strategy
If the offer is updated (better images, competitive pricing, improved product pages):
- Don’t create a new campaign. The products are already in C78.
- Update the feed (better titles, better images). The algorithm re-evaluates products when listing quality changes.
- Set a review date 6-8 weeks after changes go live. Check: did impressions increase? Did any handle terms start converting?
- Define a kill threshold upfront: “If after £200 in handle-attributable spend we’re still below 1.5x ROAS, we stop.”
- Only if C78 shows no improvement despite a genuinely better offer: run a time-limited test campaign with forced minimum spend (£10-15/day) for 6-8 weeks to distinguish “algorithm hasn’t noticed” from “market genuinely gone.”
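The kill-threshold logic in the steps above can be written down explicitly. The £200 spend cap and 1.5x ROAS floor come from the example threshold in this section; the function itself is an illustration of the rule, not an existing tool, and the defaults should be treated as starting points.

```python
def keep_testing(spend: float, revenue: float,
                 spend_cap: float = 200.0, min_roas: float = 1.5) -> bool:
    """Decide whether a time-limited re-test should continue.

    While attributable spend is under the cap, keep gathering data.
    Once the cap is reached, continue only if the category has
    cleared the minimum ROAS floor; otherwise stop.
    """
    if spend < spend_cap:
        return True  # still inside the data-gathering budget
    roas = revenue / spend if spend else 0.0
    return roas >= min_roas  # past the cap: only continue if it works
```

Defining this rule before the test starts is the point: it removes the temptation to keep subsidising a category because “it might turn around.”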
Contrast with hooks
Hooks tell the opposite story. They declined through 2024 as C67 narrowed and C69 wound down, then recovered strongly once C71 picked them up:
- Q4 2024: 688 impressions (no campaign serving them well)
- Q4 2025: 16,378 impressions (C71 serving them at 5.00x ROAS)
The algorithm found hooks, tested them, discovered they convert, and allocated budget. No dedicated campaign needed. This is what “let the algorithm work” looks like when the offer is competitive.
Decision Flowchart
When considering a new Shopping campaign, work through these questions in order:
1. Are the product economics fundamentally different from the existing campaign?
- Different enough that the existing tROAS would systematically overbid or underbid → New campaign with appropriate tROAS
- Within the range the algorithm can handle (e.g. 3-6x ROAS across categories) → Keep in existing campaign
2. Is there a genuine need for guaranteed minimum spend?
- Yes, and the offer has been validated (good images, competitive pricing, strong product page) → Time-limited test campaign (6-8 weeks, defined kill threshold)
- Yes, but the offer hasn’t been validated → Fix the offer first, then reassess
- No → Keep in existing campaign
3. Is the targeting structurally different?
- Different country, language, or feed → New campaign
- Same targeting, just different products → Keep in existing campaign
4. Is this a time-limited test with clean measurement needs?
- Yes, genuinely new product line with zero conversion history → New campaign with fixed end date and success criteria
- No, or the products already have history in another campaign → Keep in existing campaign
If you reach the end without triggering a split, the products belong in an existing campaign.
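The flowchart can be made explicit as a sketch. The boolean inputs are human judgment calls (this function does not make the decisions, only encodes the order of the checks), and the return strings paraphrase the outcomes above.

```python
def shopping_campaign_decision(economics_fundamentally_different: bool,
                               needs_guaranteed_spend: bool,
                               offer_validated: bool,
                               targeting_structurally_different: bool,
                               clean_measurement_test: bool) -> str:
    """Walk the four flowchart questions in order and return the outcome."""
    # 1. Fundamentally different break-even economics
    if economics_fundamentally_different:
        return "new campaign with its own tROAS"
    # 2. Guaranteed minimum spend — only after the offer is validated
    if needs_guaranteed_spend:
        if offer_validated:
            return "time-limited test campaign (6-8 weeks, kill threshold)"
        return "fix the offer first, then reassess"
    # 3. Structurally different targeting (country, language, feed)
    if targeting_structurally_different:
        return "new campaign"
    # 4. Clean measurement for a genuinely new product line
    if clean_measurement_test:
        return "new campaign with fixed end date and success criteria"
    # No trigger fired: the default outcome
    return "keep in existing campaign"
```

The default branch matters most: absent a specific trigger, products stay in an existing campaign.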
What Can Be Improved
Immediate
- Consolidate C67 into C71. The data is unambiguous — 14/14 months, 45% CPC advantage, ~£12,784/year cannibalization. Full evidence: Shopping Catch All vs Top Performers
- Stabilise C71’s tROAS at 300%. Stop changing it. 6 weeks minimum before reassessing.
- Product feed titles. Long-tail table leg terms are surging but product titles don’t match. “Square Industrial Legs | 71cm Table” doesn’t appear for “dining table legs”. Desk products at 71cm height don’t include “desk” in the title. Full evidence: Shopping Category Gap Analysis
Short-term
- Redirect C67’s budget (~£80/day) to C71. Push impression share from ~67% toward 75-80%.
- Monitor C78 through Q1 2026. Review at £2,000 cumulative spend. The price-based segmentation is working — overlap with C71 is shrinking.
- Keep paused PMax campaigns paused. Monitor Table Tops — review by end of March or at £2K cumulative spend.
Medium-term
- Evaluate category-based segmentation after consolidation is stable (4-6 weeks). The data currently supports price-based over category-based, but this should be reassessed as the account evolves.
- Investigate handles/knobs offer. Competition, pricing, product page UX — before any campaign structure changes.
Data Sources
All findings in this document are backed by analysis from the following:
- Shopping Catch All vs Top Performers: Consolidation Analysis — 26-month head-to-head comparison, cannibalization data, change history
- Shopping Category Gap Analysis — SEO vs Shopping impression trends, feed title gaps, desk legs opportunity
- Product Performance Decision Framework — per-product diagnostic process (complementary to this document)
- PMax Knobs Analysis — 12-day PMax test, Display waste, Search-only performance
- Knobs & Handles Cross-Campaign Analysis — 2-year handle/knob performance across all campaigns
- PMax Is It Worth It — PMax vs Standard Shopping comparison
- Google Ads Operational Log — events affecting campaign performance