TL;DR: Google's advertising advice is not neutral. Their $264 billion revenue depends on advertisers spending more, not spending smarter. A US federal court ruling and an EU antitrust decision have confirmed this conflict. Internal systems incentivize reps to increase your spend. The Optimization Score measures compliance, not performance. Performance Max weighs engagement at 90% and conversions at only 10%. This article lays out the evidence so you can make informed decisions about which advice to follow — and which to override.
When Google tells you to "increase your budget," "switch to broad match," or "enable auto-apply recommendations," they present it as performance optimization. The data tells a different story.
This is the most important article in our Google Ads Efficiency Playbook series. Everything else — from auction mechanics to default settings to bidding strategies — builds on this foundational understanding: Google's incentives are structurally misaligned with yours.
The Revenue Model: How Google Makes Money From You
Google's primary revenue source is advertising. In 2024, Google's ad revenue reached $264.59 billion, making it by far the most dominant advertising platform in history.
The business model is simple:
- You pay per click. Every time a user clicks your ad, Google charges you.
- Google earns nothing from impressions. If a user sees but ignores your ad, Google makes zero.
- Higher CPCs = more revenue. Google's auction system creates competitive bidding that naturally drives up keyword prices.
- More clicks = more revenue. Any feature, setting, or recommendation that increases total click volume increases Google's income.
This creates an inherent tension. Google needs your ads to be relevant enough that users click them (otherwise Google earns nothing), but beyond that baseline of clickability, every dollar you spend on non-converting clicks is pure profit for Google and pure loss for you.
The Structural Problem
Google simultaneously operates:
- The auction platform where advertisers compete
- The ad exchange where inventory is bought and sold
- The publisher ad server that serves ads to websites
- The demand-side platform that agencies use to buy ads
- The recommendation engine that tells you how to spend your money
When the same company runs the marketplace, controls the supply, manages the demand, and advises the buyers, the conflict of interest is not theoretical. It is structural.
And authorities on both sides of the Atlantic have agreed.
The Legal Evidence: Courts and Regulators Confirm the Conflict
US DOJ Ruling (April 2025)
On April 17, 2025, Judge Leonie Brinkema of the U.S. District Court for the Eastern District of Virginia ruled that Google illegally monopolized open-web digital advertising markets.
Key findings from the ruling:
- Google unlawfully tied its publisher ad server (DFP) to its ad exchange (AdX), forcing publishers who wanted access to high-demand inventory to adopt Google's tools
- Google employed auction mechanisms like "First Look" and "Last Look" to give AdX preferential treatment over competing exchanges
- Google's exclusionary conduct "substantially harmed Google's publisher customers, the competitive process, and, ultimately, consumers of information on the open web"
This was the second major antitrust blow for Google in under a year, following the search monopoly ruling.
EU Decision and Fine (September 2025)
The European Commission determined that Google violated Article 102 TFEU by favoring its own ad tech services between 2014 and 2025. The findings:
- Google held 91% market share in publisher ad servers via DoubleClick for Publishers
- Google held 60-70% dominance in ad exchanges through AdX
- EUR 2.95 billion fine imposed on September 5, 2025
EU Executive Vice-President Teresa Ribera stated: "At this stage, it appears that the only way for Google to end its conflict of interest effectively is with a structural remedy, such as selling some part of its Adtech business."
The Self-Preferencing Mechanisms
The EU investigation uncovered five specific mechanisms Google used to advantage itself:
| Mechanism | How It Worked | Impact |
|---|---|---|
| First Look | AdX received exclusive first access to ALL inventory before competitors could bid | Cherry-picked premium impressions |
| Last Look | AdX could see competing exchanges' sealed bids before submitting final offers | Undermined auction integrity |
| Dynamic Revenue Share (2015) | Covert pricing that varied take rates on individual impressions | Obscured manipulation from publishers |
| Project Poirot (2017) | Reduced bids from Google's DV360 into competing exchanges by up to 90% | Caused 20-30% revenue drops for competitors |
| Unified Pricing Rules (2019) | Prohibited publishers from setting differential price floors between AdX and competitors | Eliminated publishers' ability to incentivize alternatives |
The Demand Lockout
A Digiday analysis of trial evidence found that in 60% of AdX auctions examined, Google's tools were the only bidders. This means Google had effectively sealed off demand from competing platforms, creating a closed system where it controlled both supply and demand.
Google maintained approximately 20% ad exchange fees for over a decade — rates the court determined were "significantly above competitive levels." Despite technological advances that should have driven fees downward, Google's monopoly position allowed it to extract supracompetitive pricing year after year.
The Apollo System: How Google Reps Are Incentivized
Google Ads reps are not neutral consultants. A former Google quality analyst revealed the internal system that drives their recommendations.
What "Apollo" Is
Apollo is Google's internal system that:
- Recommends "improvements" to reps that are designed to increase advertiser spend, not advertiser performance
- Assigns points to reps for successfully implementing these recommendations
- Functions as a commission-like incentive structure
As Ten Thousand Foot View documented: Google reps are "commissioned salespeople" whose "goal is to secure your ongoing business and upsell you on more ads."
What Reps Actually Recommend
Based on documented examples from multiple agencies and PPC specialists:
Budget manipulation: "If you're running Max Conversions bidding and increase your budget, you may get some more leads, but the cost per lead will often go up." Reps push budget increases that inflate CPAs.
Lowering ROAS targets: Reps suggest lowering target ROAS to "boost performance" — which increases volume while reducing profitability. More volume means more Google revenue.
Broad match conversion: Reps push to "swap out all your keywords for 100% broad match," expanding traffic volume (and Google's revenue) at the expense of targeting precision.
Display Network enablement: Reps recommend adding the Display Network to Search campaigns. Michelle Morgan of Paid Media Pros notes that in 99 out of 100 cases, Display Network performance in Search campaigns is "pretty terrible."
Auto-generated creatives: Rep-pushed automated ads sometimes include content "way off topic from your offer" — and have been documented to violate Google's own policies.
The Three Tiers of Google Reps
| Rep Type | Account Size | Quality | Risk Level |
|---|---|---|---|
| Third-party reps | Small (<$5K/month) | Lack training and practical PPC experience | Highest |
| General Google reps | Medium | More trained but not PPC specialists | Medium |
| Accelerated Growth Team | Large | Most experienced; push rapid 3-month scaling regardless of fit | Medium-High |
How Reps Undermine Agencies
Multiple agencies have documented a pattern where Google reps:
- Bypass agencies to contact clients directly
- Make statements implying the agency underperforms
- Make unauthorized account changes without agency involvement
- Turn over frequently (every 1-2 quarters), losing institutional knowledge
SearchRank documented a rep "going behind our back and dealing directly with our mutual client." Ten Thousand Foot View estimated reps consume "50 hours a month of time that probably provides no value" for experienced PPC professionals.
What This Means for You
"Google's recommendations are sometimes in your best interest. They are always in Google's best interest." — Brad Geddes, industry veteran, Search Engine Land
If a Google rep contacts you, treat their advice the way you would treat advice from any salesperson. They may occasionally suggest something useful. But their compensation structure rewards your spending, not your converting.
The Optimization Score: A Compliance Metric Disguised as Performance
Google's Optimization Score appears throughout the Google Ads interface as a percentage (0-100%) suggesting how well your account is performing. The implication is clear: higher score = better performance.
This is false.
What Optimization Score Actually Measures
The Optimization Score measures how extensively you have reviewed and complied with Google's recommendations. It does not measure conversion rate, ROAS, cost per acquisition, or any actual business outcome.
As Austin LeClear of Grow My Ads states: "Whether it is at 10% or 100%, your overall optimization score does not change a single thing as to how your ad campaign, ad groups, keywords, or anything in the account actually performs."
The Dismissal Experiment
Grow My Ads conducted a revealing experiment:
- Starting score: 72.8%
- Action taken: Dismissed all recommendations without implementing any of them
- New score: 83.3%
- Campaign changes made: Zero
The score increased by 10.5 percentage points from literally doing nothing but clicking "dismiss." This proves the score measures review activity, not account health.
Performance Disconnection
The same experiment revealed another critical data point: The client whose score was "only" 72.8% was simultaneously growing 50% year-over-year and hitting all target ROAS goals. By any business metric, this account was performing exceptionally — yet Google's score implied it needed significant improvement.
A Marlin SEM case study found a similar pattern: a limo company client had a "low Optimization Score" while achieving $20 CPAs in a highly competitive Los Angeles market with strong call volumes and bookings.
The Five Most Harmful Score-Boosting Recommendations
Google's recommendations are structured so that the highest score-boosting actions are often the most wasteful:
| Recommendation | Score Boost | Why It's Harmful |
|---|---|---|
| AI Max for Search | +24.9% | Targets extremely low-intent searches based on website text |
| Display Network expansion | Variable | Display converts at 0.77% vs. Search at 4%+ — 5x worse |
| Broad Match keywords | +13 points | Triggers irrelevant, costly searches in competitive industries |
| Additional sitelinks | Variable | Increases ad prominence and costs without guaranteed conversions |
| Image additions | Variable | Recommendations lack precision; may suggest images already active |
How the Score Is Used Against Agencies
This is where the Optimization Score becomes a coercive tool. Google Partners — agencies certified by Google — must maintain a 70% optimization score across all managed accounts to keep their status.
This creates a direct conflict: agencies must choose between implementing recommendations that harm their clients or losing their Google Partner certification. The 70% threshold means agencies are forced to accept some harmful recommendations to stay above the line.
The Google Partner Program: Structural Pressure to Waste
Google's Partner Program is presented as a quality certification for agencies. In practice, it functions as a mechanism to channel Google's revenue-maximizing recommendations through agencies to their clients.
The Requirements That Create Conflict
To maintain Google Partner status, agencies must:
- Maintain 70%+ Optimization Score across all accounts — forcing implementation of harmful recommendations
- Enable 15+ auto-applied recommendations — allowing Google to make changes without client consent
- Meet spend thresholds in non-Search categories (Display, Video, Apps, Shopping) — pushing client budgets toward lower-ROI channels
Premier Partner metrics specifically incentivize "spend in non-Search categories" rather than client outcomes. This means the agencies Google certifies as its best partners are the ones directing the most client money into Google's lower-performing inventory.
Agencies That Quit
Rocket Clicks, a well-known PPC agency, publicly quit the Google Partner Program, calling the requirements "used car sales tactics. Shady, deceptive, manipulative, and not in any way focused on what was beneficial for their advertisers."
Their specific grievances:
- Zero promised benefits delivered. No product beta access, no support escalation, no client introductions.
- Google contacted clients directly to bypass agencies, making unauthorized account changes.
- Re-enabling recommendations was required for credit. Google suggested re-enabling previously applied recommendations as a way to maintain partner scoring.
Market Vantage reached a similar conclusion: "Agencies could soon be in the position of having to choose whether to follow Google's guidelines and reduce the efficiency and accuracy of their advertisements or lose their Partner certification."
What This Means for Advertisers
When you see "Google Partner" on an agency's website, it may not mean what you think. It means the agency maintains Google's recommended score threshold, enables auto-apply features, and spends across multiple Google product categories.
It does not mean the agency is optimizing for your ROI.
"Google's only goal is for advertisers to spend more money. An agency's goal is to help the advertiser hit their goals. Agencies are aligned with client goals, Google is not!" — Rocket Clicks
Performance Max: 90% Engagement, 10% Conversion
Performance Max (PMax) is Google's flagship campaign type, designed to run ads across all Google properties simultaneously: Search, Display, YouTube, Gmail, Discover, and Maps.
Google promotes it as the future of advertising. The data reveals a structural problem.
The Algorithm Weighting
GROAS.ai conducted a technical analysis of Performance Max's asset selection algorithm and found the following weighting:
| Factor | Weight | Who Benefits |
|---|---|---|
| Click-Through Rate | 45% | Google (more clicks = more revenue) |
| Expected View Duration | 25% | Google (engagement on Google properties) |
| Engagement Rate | 20% | Google (more interaction = more data) |
| Conversion Probability | 10% | Advertiser (actual business results) |
90% of the algorithm's weighting serves engagement metrics. Only 10% directly serves advertiser conversion goals.
This means PMax is structurally designed to select and display assets that generate clicks and engagement — not assets that drive your sales.
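To make the weighting concrete, here is a minimal scoring sketch using the 45/25/20/10 split reported by GROAS.ai. The scoring function and the asset metric values are illustrative assumptions, not Google's actual implementation:

```python
# Weights from GROAS.ai's reported analysis; everything else is a
# simplified illustration, not Google's real scoring code.
WEIGHTS = {
    "ctr": 0.45,              # click-through rate
    "view_duration": 0.25,    # expected view duration
    "engagement": 0.20,       # engagement rate
    "conversion_prob": 0.10,  # conversion probability
}

def asset_score(metrics):
    """Weighted sum of normalized (0-1) asset metrics."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

# A "clicky" asset: strong engagement signals, weak conversion probability.
clicky = {"ctr": 0.9, "view_duration": 0.8, "engagement": 0.85, "conversion_prob": 0.2}
# A converting asset: modest engagement, strong conversion probability.
converter = {"ctr": 0.4, "view_duration": 0.5, "engagement": 0.45, "conversion_prob": 0.9}

print(f"{asset_score(clicky):.3f}")     # 0.795 -- ranked higher
print(f"{asset_score(converter):.3f}")  # 0.485 -- ranked lower despite converting better
```

Under these assumed weights, the engagement-heavy asset outranks the better converter, which is exactly the "Best"-rated-but-worst-ROAS pattern described below.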
When "Best" Means Worst
Google rates PMax assets as "Best," "Good," "Low," or "Learning." The ratings are based on the algorithm's 90/10 engagement/conversion weighting. This creates a counterintuitive outcome:
Case Study: Fashion Retailer
| Google Rating | Actual ROAS | Google's Assessment |
|---|---|---|
| "Best" | 1.8x | Highest engagement metrics |
| "Learning" | 5.1x | Highest actual returns |
The asset Google rated as its "Best" delivered 1.8x ROAS — the worst conversion performance. The asset rated as "Learning" delivered 5.1x ROAS — nearly 3x better returns for the advertiser.
This is not an anomaly. GROAS.ai found that assets rated "Best" by Google often achieve the worst conversion rates, while "Low" rated assets frequently drive the highest-quality sales.
The Budget Allocation Problem
PMax allocates budget across channels automatically, and advertisers cannot override these allocations even when the data clearly shows waste.
Case Study: TechStart
| Channel | Budget Allocation | Conversions Generated |
|---|---|---|
| Search | 28% | 67% |
| Display | 45% | 18% |
| YouTube | 19% | 12% |
| Discovery | 8% | 3% |
Display received 45% of the budget but generated only 18% of conversions. Search received 28% of budget but drove 67% of conversions. Despite this clear evidence, TechStart could not reallocate spending — the algorithm maintained its engagement-optimized distribution.
They ultimately shifted budgets to traditional Search campaigns to regain control.
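One way to quantify this kind of mismatch is a simple efficiency index: each channel's share of conversions divided by its share of budget. Values below 1.0 mean a channel consumes more budget than it earns. The figures are the case-study numbers from the table above; the index itself is our illustration, not part of the study:

```python
# Case-study allocation: (budget %, conversion %) per channel.
channels = {
    "Search":    (28, 67),
    "Display":   (45, 18),
    "YouTube":   (19, 12),
    "Discovery": (8, 3),
}

for name, (budget_pct, conv_pct) in channels.items():
    efficiency = conv_pct / budget_pct
    flag = "over-funded" if efficiency < 1 else "under-funded"
    print(f"{name:10s} efficiency {efficiency:.2f} ({flag})")
```

Search scores about 2.4 (starved of budget relative to its output) while Display scores 0.4 (funded at more than double its contribution), which is the waste the advertiser could see but not correct.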
The Seven Critical Algorithm Biases
GROAS.ai's analysis identified seven systematic biases in the PMax algorithm:
1. Engagement Trap: High-click assets prioritized regardless of conversion quality. A B2B company saw 67% traffic cost increases with 34% fewer qualified leads.
2. Broad Appeal Priority: Generic creative performing "adequately" everywhere receives priority over specialized assets performing exceptionally on specific placements.
3. New Asset Penalty: Potentially high-performing new assets receive suppressed impressions for 4-8 weeks while inferior established assets dominate.
4. Platform Revenue Optimization: During revenue shortfall periods, high-CPC assets receive 280% more impressions despite 45% worse ROAS.
5. Cross-Account Learning Contamination: Your asset selection gets influenced by unrelated businesses. A luxury watch retailer found ads optimized based on mass-market electronics retailers' data.
6. Creative Homogenization: The algorithm favors generic stock photo aesthetics over distinctive brand positioning. A professional services firm discovered Google showed generic stock photos instead of specialized infographics that converted 340% better.
7. Attribution Manipulation: Video and dynamic assets receive inflated conversion credit while static images are under-credited.
The "Transparency Without Control" Admission
An anonymous former Google engineer told GROAS.ai: "The decision to provide transparency without control was deliberate. Google's data shows advertisers make suboptimal decisions with control, so reporting satisfies demands without enabling budget misallocation."
This quote reveals the philosophy behind PMax: Google believes it allocates budgets better than you do, so it shows you the data while preventing you from acting on it.
A professional industry survey found:
- 84% of advertisers appreciate the visibility
- 91% wish they could adjust channel budgets
- 83% continue using traditional Search campaigns for clients requiring control
The Search Advertising Self-Preferencing
The conflict of interest extends beyond recommendations into the search results themselves.
The Wall Street Journal Investigation
A Wall Street Journal research project examined 25,000 searches and found:
- Hardware products from Google and Alphabet subsidiaries appeared in the top ad slot of 91% of searches related to those products
- In 43% of searches, the top TWO ad slots were taken by Google products
- All 1,000 searches for laptops began with Chromebook advertisements
- Google Pixel phones dominated phone-related search results
When the Wall Street Journal published these findings, the aftermath was revealing: within days, most of the biased ads disappeared from search results. Google declined to comment.
The rapid removal suggests Google knew the self-preferencing was indefensible. They fixed it only when exposed.
How Google Reduces Advertiser Control Over Time
The conflict of interest is not static. Google has systematically reduced advertiser control through a series of platform changes:
Match Type Erosion
- Exact match is no longer truly exact — it now shows for close variants and meaning matches
- Phrase match now behaves like old broad match modifier — it was retired and its function absorbed
- Broad match gives Google maximum flexibility with keyword interpretation
- 20-40% of actual search queries remain invisible to advertisers due to "low-volume" privacy restrictions
Hidden Search Terms
North Country Consulting found that hidden queries (terms Google doesn't show in the Search Terms Report) have:
- 52% higher CPCs than visible terms
- 44% lower CTRs than visible terms
An independent analysis of $20 million in ad spend found that up to 85% could be wasted on queries advertisers can't see or control.
New Automation Features
Recent changes that shift control toward Google:
| Feature | What It Does | Advertiser Impact |
|---|---|---|
| AI Max for Search | Replaces keyword targeting with "keywordless" targeting | Loss of granular keyword control |
| Smart Bidding Exploration | Temporarily lowers ROAS targets to "capture new users" | Reduces profitability without consent |
| AI Overviews Integration | Requires compliance with Google's automated tools | Makes manual control less relevant |
| Brand Guidelines | Introduced because automation was creating "off-brand" creatives | Admission that automation operated outside advertiser preferences |
| Language Targeting Removal | Eliminates manual language targeting in bilingual markets | Reduces control in complex markets |
The Pattern
Each change follows the same pattern:
- Google introduces a feature that reduces manual control
- The feature is positioned as "smarter" or "more efficient"
- Advertisers who resist are penalized through lower Optimization Scores
- Partners who don't implement are threatened with status loss
- Eventually, the old manual option is deprecated entirely
The Broad Match Push: Revenue Disguised as AI Innovation
One of the clearest examples of Google's revenue-first approach is its aggressive promotion of broad match keywords.
How Google Sells Broad Match
Google positions broad match as an AI-powered advancement: let the algorithm interpret user intent, reach more potential customers, and simplify your keyword management. The pitch sounds compelling.
The reality, documented by TG Digital: "Broad Match is a classic example of false efficiency. It feels like it's working because your cost-per-click drops, but your cost-per-acquisition skyrockets."
How Broad Match Benefits Google
The mechanics are straightforward:
- Broad match enters your ads into more auctions. More auctions = more impressions = more clicks = more Google revenue.
- Lower CPCs mask higher CPAs. You see cheaper clicks and think the campaign is efficient. But those cheap clicks don't convert, so your actual cost per customer goes up.
- Google controls the matching logic. With exact match, you decide which searches trigger your ads. With broad match, Google decides. The algorithm is not transparent about how it interprets "semantic relevance."
The Data
- Between June 2023 and June 2025, broad match CPCs rose 29% (TG Digital)
- A plumbing business using broad match for "plumber near me" pays $2 for the low-intent search "how to do your own plumbing" — which will never convert — alongside $12 for the high-intent search "plumber near me open now"
- 84% of advertisers use fewer than 50 negative keywords (GROAS.ai), meaning most lack the defensive infrastructure to make broad match safe
- Accounts with proper negative keywords convert at 13% vs. 4.6% without them — but broad match requires extensive negatives to function efficiently
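The masking effect is simple arithmetic: CPA equals CPC divided by conversion rate, so a cheaper click with a much lower conversion rate still produces a pricier customer. A sketch with hypothetical numbers (not figures from the sources above):

```python
# Hypothetical numbers illustrating the "lower CPC, higher CPA" trap:
# broad match cuts the average click price but mixes in low-intent
# clicks that rarely convert, so cost per acquisition rises.

def cpa(cpc, conversion_rate):
    """Cost per acquisition = cost per click / conversion rate."""
    return cpc / conversion_rate

exact = cpa(cpc=12.00, conversion_rate=0.10)  # $12 clicks, 10% convert
broad = cpa(cpc=6.00, conversion_rate=0.03)   # $6 clicks, 3% convert

print(f"Exact match CPA: ${exact:.0f}")  # $120 per customer
print(f"Broad match CPA: ${broad:.0f}")  # $200 per customer
```

The broad-match clicks cost half as much yet each customer costs two-thirds more, which is why a falling CPC chart can hide a deteriorating account.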
The Push Through Every Channel
Google promotes broad match through:
- Recommendations: "Switch to broad match" recommendations add significant points to your Optimization Score
- Google reps: Reps routinely recommend "swapping all keywords to 100% broad match" (Ten Thousand Foot View)
- Partner requirements: Premier Partner metrics incentivize broader targeting
- Platform changes: Exact match has been loosened to include "close variants" and "meaning matches," pushing even exact match toward broad-match behavior
The pattern is consistent: expand targeting, increase click volume, grow Google's revenue.
The Independent Expert View
"When you rely solely on broad match, you're putting all your faith in Google's AI to capture those high-converting search terms. And while Google's system is pretty smart, it's not perfect." — Grow My Ads
The recommended approach from independent PPC specialists: start with exact and phrase match, build 30-50+ monthly conversions, then add broad match paired with Smart Bidding only on campaigns where the data supports it.
The Hidden Query Problem
Google doesn't show you every search query that triggers your ads. This invisible traffic represents one of the most significant and least-discussed conflicts in the platform.
What Advertisers Cannot See
Due to "low-volume" privacy restrictions, Google hides 20-40% of actual search queries from the Search Terms Report. You pay for these clicks, but you cannot see what searches triggered them.
The Cost of Invisibility
North Country Consulting's analysis found:
- Hidden queries have 52% higher CPCs than visible terms
- Hidden queries have 44% lower CTRs than visible terms
- An independent analysis of $20 million in ad spend found that up to 85% could be wasted on queries advertisers cannot see or control
Think about what this means: the searches Google hides from you are more expensive and less relevant than the ones it shows you. And because you cannot see them, you cannot add them as negative keywords or optimize against them.
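A back-of-envelope calculation shows how these two findings compound. Assuming 30% of clicks come from hidden queries (our assumed midpoint of the 20-40% range) and applying the reported 52% CPC premium, hidden queries absorb a disproportionate share of spend:

```python
# Normalized CPCs: the absolute price level cancels out of the share.
hidden_click_share = 0.30        # assumed midpoint of the 20-40% range
visible_cpc = 1.00
hidden_cpc = visible_cpc * 1.52  # 52% CPC premium on hidden queries

hidden_spend = hidden_click_share * hidden_cpc
visible_spend = (1 - hidden_click_share) * visible_cpc
hidden_spend_share = hidden_spend / (hidden_spend + visible_spend)

print(f"{hidden_spend_share:.0%} of spend goes to queries you cannot see")
```

Under these assumptions, 30% of clicks translate into roughly 39% of spend, all of it outside your ability to review or negative-match.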
Why This Benefits Google
Hidden queries increase total click volume without advertiser oversight. Every hidden, irrelevant query that generates a click is revenue for Google and waste for the advertiser. The privacy justification may be legitimate, but the economic impact is one-directional: it benefits Google at the advertiser's expense.
What You Can Do
- Review your Search Terms Report weekly to catch and negative-match visible irrelevant queries
- Use N-Gram analysis every 2-3 months to identify patterns across queries
- Build proactive negative keyword lists — start with 200+ universal negatives (GROAS.ai recommends categories covering free/cheap, job/career, educational, geographic, and informational terms)
- Monitor CPC trends by keyword — unexpected CPC increases may indicate hidden queries driving up auction costs
- Consider third-party query monitoring tools that can capture more query data than Google's native reporting
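The N-Gram analysis above can be sketched in a few lines: split each search term into word pairs and aggregate cost and conversions per pair, so wasteful patterns surface even when no single query is expensive on its own. The row fields here are illustrative; adapt them to the column names in your Search Terms Report export:

```python
from collections import defaultdict

def ngrams(text, n=2):
    """All n-word sequences in a search term."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def ngram_report(rows, n=2):
    """Aggregate cost and conversions per n-gram, worst offenders first."""
    stats = defaultdict(lambda: {"cost": 0.0, "conversions": 0})
    for row in rows:
        for gram in ngrams(row["query"], n):
            stats[gram]["cost"] += row["cost"]
            stats[gram]["conversions"] += row["conversions"]
    # Zero-conversion n-grams sort first, highest cost first: these are
    # the strongest negative-keyword candidates.
    return sorted(stats.items(),
                  key=lambda kv: (kv[1]["conversions"] == 0, kv[1]["cost"]),
                  reverse=True)

rows = [
    {"query": "plumber near me open now", "cost": 12.0, "conversions": 1},
    {"query": "how to fix plumbing", "cost": 2.0, "conversions": 0},
    {"query": "how to unclog drain", "cost": 1.5, "conversions": 0},
]
gram, totals = ngram_report(rows)[0]
print(gram, totals)  # "how to" accumulated $3.50 with zero conversions
```

No individual "how to" query looks expensive, but the pattern across queries does, which is exactly what a weekly eyeball of the report tends to miss.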
The Ad Fraud Layer
Beyond Google's structural conflicts, there is an additional cost layer that Google's incentive structure is poorly aligned to address.
The Scale of the Problem
- 5.1% of all ad clicks were fraudulent in 2024, according to North Country Consulting
- Global ad fraud losses reached approximately $38 billion annually
- Of Google's 2024 ad revenue, an estimated $16.59 billion was lost to invalid traffic
Why Google's Incentives Are Misaligned
Google does refund some detected invalid clicks. But its detection is partial, and there is a structural disincentive to be too aggressive about fraud detection: every fraudulent click that slips through is revenue for Google and cost for the advertiser.
Google maintains that it has sophisticated invalid traffic detection. Independent fraud monitoring services consistently find additional fraud that Google's systems miss. The gap between Google's detection and independent detection represents the gray area where Google's revenue incentive conflicts with advertiser protection.
What You Can Do
- Monitor for suspicious patterns: sudden spikes in clicks without corresponding conversions, traffic from unusual geographic locations, or repeated clicks from the same IP ranges
- Consider third-party fraud detection: Services like FraudBlocker, Clixtell, and ClickCease provide additional protection layers
- Review placement reports: For Display and PMax campaigns, check where your ads appear and exclude suspicious sites
- Set up IP exclusions for known fraudulent sources in your campaign settings
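The IP-pattern check above can be automated with a simple heuristic: flag IPs with many clicks and no conversions. The log format and threshold are illustrative assumptions; real data would come from click tracking or server logs, and thresholds should be tuned to your traffic volume:

```python
from collections import Counter

def flag_suspicious_ips(clicks, converting_ips, min_clicks=5):
    """IPs with at least min_clicks clicks and zero conversions."""
    counts = Counter(c["ip"] for c in clicks)
    return sorted(ip for ip, n in counts.items()
                  if n >= min_clicks and ip not in converting_ips)

# Toy click log (documentation IP ranges used as placeholders).
clicks = (
    [{"ip": "203.0.113.7"}] * 9     # high volume, never converts: suspect
    + [{"ip": "198.51.100.2"}] * 6  # high volume but converts: fine
    + [{"ip": "192.0.2.1"}] * 2     # low volume: ignored
)
print(flag_suspicious_ips(clicks, converting_ips={"198.51.100.2"}))
# ['203.0.113.7']
```

Flagged IPs become candidates for the campaign-level IP exclusions mentioned above, after a manual sanity check (shared corporate or carrier IPs can look similar).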
The Evidence Summary
| Claim | Evidence | Source |
|---|---|---|
| Google's revenue depends on advertiser spending | $264.59 billion ad revenue (2024) | CNBC/Alphabet earnings |
| Google illegally monopolized ad tech | Federal court ruling | DOJ, April 2025 |
| EU confirmed systematic self-preferencing | EUR 2.95 billion fine | European Commission, Sept 2025 |
| 60% of AdX auctions had only Google bidding | Trial evidence analysis | Digiday |
| Google maintained 20% fees for a decade | EC investigation | PPC Land / EC decision |
| Reps incentivized via "Apollo" to increase spend | Former Google quality analyst | Pixsym |
| Optimization Score unrelated to performance | 50% YoY growth client scored 72.8% | Grow My Ads |
| Dismissing recommendations increases score | 72.8% to 83.3% with zero changes | Grow My Ads |
| PMax algorithm only 10% weighted to conversions | Technical algorithm analysis | GROAS.ai |
| "Best" rated assets had worst ROAS | Fashion retailer: "Best" = 1.8x, "Learning" = 5.1x | GROAS.ai |
| Display gets 45% of PMax budget for 18% of conversions | TechStart case study | GROAS.ai |
| Partners must maintain 15+ auto-applied recommendations | Program requirements documentation | Rocket Clicks |
| Google's own products in 91% of top ad slots | 25,000 search analysis | WSJ / Cornell Blog |
| Hidden queries: 52% higher CPC, 44% lower CTR | $20 million spend analysis | North Country Consulting |
What to Do About It: The Counter-Strategy
Understanding the conflict is the first step. Acting on it is what separates efficient advertisers from those funding Google's revenue growth.
The Defensive Playbook
1. Disable all auto-apply recommendations immediately
Navigate to: Recommendations > All Campaigns > Auto-Apply Settings > Uncheck all boxes.
Of 24 auto-apply options, only 2 are safe to leave enabled. We detail this in Auto-Apply Recommendations: Why Only 2 of 24 Are Safe.
2. Ignore the Optimization Score
It measures compliance, not performance. A client growing 50% year-over-year scored 72.8%. Dismissing all recommendations raised it to 83.3% with zero campaign changes.
Track these instead: Conversion rate, Cost per acquisition, ROAS, and Quality Score.
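The first three of those metrics are simple ratios over campaign totals (Quality Score is reported by Google, not computed). A quick sketch with made-up numbers:

```python
def kpis(clicks, cost, conversions, revenue):
    """Business metrics derivable from raw campaign totals."""
    return {
        "conversion_rate": conversions / clicks,  # did clicks become customers?
        "cpa": cost / conversions,                # what did each customer cost?
        "roas": revenue / cost,                   # revenue per ad dollar
    }

result = kpis(clicks=1000, cost=5000.0, conversions=50, revenue=20000.0)
print(result)  # 5% conversion rate, $100 CPA, 4.0 ROAS
```

None of these numbers move when you click "dismiss" in the Recommendations tab, which is the point: they measure your business, not your compliance.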
3. Override default settings
Every new Google Ads account starts with settings optimized for Google's revenue. The 10 most damaging defaults and their fixes are detailed in The 10 Default Settings Draining Your Google Ads Budget.
4. Treat Google reps as salespeople
If they call: listen, take notes, but never implement recommendations without independent verification. Their incentive structure (Apollo) rewards increasing your spend, not your conversions.
5. Start with Search campaigns, not Performance Max
Search delivers median ROAS of 5.17 vs. PMax's 2.57. For service businesses, Search campaigns provide the keyword-level control and conversion focus that PMax's engagement-weighted algorithm cannot.
6. Build your own data infrastructure
Google optimizes based on data it can see. When you feed downstream conversion quality data (from CRM, call tracking, and offline conversions), you force the algorithm to optimize for what actually matters to your business.
7. Evaluate agencies on results, not Google Partner status
Partner status indicates Google compliance, not client performance. Ask agencies: "What is your average client ROAS?" not "Are you a Google Partner?"
The Mindset Shift
The fundamental shift is this: Google is a vendor, not a consultant.
You do not take pricing advice from the car dealer. You do not take usage recommendations from the utility company. You should not take optimization advice from the platform that profits when you spend more.
"Google Ads may have access to data — but you have access to knowledge. Take every recommendation with a grain of salt!" — Claire Jarrett, Google Ads consultant
Google provides a powerful advertising platform. The auction mechanics work. The reach is unmatched. But every recommendation, every default, and every automated feature should be evaluated through this lens: does this serve my conversion goals, or does this serve Google's revenue goals?
When those two align, accept the recommendation. When they don't — and the evidence shows they frequently don't — override it.
Key Takeaways
The Conflict in 5 Points
1. Google's $264 billion revenue depends on you spending more. Every recommendation is filtered through this incentive.
2. Courts and regulators have confirmed the conflict. The US DOJ ruled Google illegally monopolized ad tech. The EU imposed a EUR 2.95 billion fine for systematic self-preferencing.
3. The Optimization Score is a compliance metric, not a performance metric. Dismissing all recommendations can increase your score. Growing 50% YoY can leave you at 72.8%.
4. Performance Max is weighted 90% toward engagement, 10% toward conversion. Google's "Best" asset ratings are inversely correlated with actual advertiser ROAS.
5. Google reps are incentivized by "Apollo" to increase your spend. They are commissioned salespeople, not consultants.
What This Means for Your Strategy
| Google's Priority | Your Priority | Action |
|---|---|---|
| More clicks | Better conversions | Focus on Quality Score, not bid volume |
| Broader targeting | Precise targeting | Use exact/phrase match, build negative keyword lists |
| Automated control | Strategic control | Disable auto-apply, manually review all recommendations |
| Cross-channel spending | Highest-ROI channels | Start with Search, add others only with proven data |
| Higher budgets | Efficient budgets | Optimize Quality Score before increasing spend |
What's Next in This Series
Now that you understand why Google's advice is structurally unreliable, the next step is learning exactly which settings to change.
In The 10 Default Settings Draining Your Google Ads Budget, we walk through every default setting that costs you money, with step-by-step instructions for fixing each one.
Then, in Auto-Apply Recommendations: Why Only 2 of 24 Are Safe, we categorize all 24 auto-apply options by risk level and show you the precise navigation path to disable each dangerous one.
The platform is powerful. The mechanics reward quality. But the advice? The advice rewards Google.
Your job is to use the platform while ignoring the salesperson.
This guide is part of the Google Ads Efficiency Playbook 2026 series. Evidence sourced from US DOJ court ruling (April 2025), European Commission decision (September 2025), GROAS.ai algorithm analysis, Grow My Ads optimization score experiments, Rocket Clicks partner program documentation, Wall Street Journal (25,000 search analysis), North Country Consulting ($20M spend analysis), PPC Land (15,000 accounts), Digiday trial evidence analysis, and multiple independent PPC specialists.