Combining ML Predictions in the Auction

How to combine multiple ML predictions to make optimal auction decisions.

The Expected Value Formulation: pCTR × bid, pCTR × pCVR × bid

Basic Formulation

For CPC campaigns:

  • eCPM = pCTR × bid × 1000: Expected revenue per thousand impressions (eCPM is conventionally quoted per mille; drop the ×1000 for per-impression expected value)

For CPA campaigns:

  • eCPM = pCTR × pCVR × bid × 1000: Expected revenue per thousand impressions, accounting for both click and conversion probability

Why This Matters

The auction should select ads that maximize expected value, not just highest bid or highest CTR.
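
The ranking logic above can be sketched as follows. The candidate list, predictions, and bids are illustrative assumptions:

```python
# Sketch of expected-value ranking across mixed campaign types.

def ecpm(ad: dict) -> float:
    """Expected revenue per 1000 impressions for one candidate ad."""
    if ad["type"] == "CPC":
        return ad["pctr"] * ad["bid"] * 1000
    if ad["type"] == "CPA":
        return ad["pctr"] * ad["pcvr"] * ad["bid"] * 1000
    raise ValueError(f"unknown campaign type: {ad['type']}")

candidates = [
    {"id": "a", "type": "CPC", "pctr": 0.02, "bid": 0.50},
    {"id": "b", "type": "CPA", "pctr": 0.03, "pcvr": 0.10, "bid": 5.00},
    {"id": "c", "type": "CPC", "pctr": 0.05, "bid": 0.10},
]

# "c" has the highest pCTR, but "b" maximizes expected value (eCPM 15 vs 10 vs 5).
winner = max(candidates, key=ecpm)
```

Note that neither the highest raw bid nor the highest pCTR decides the auction; only the expected value does.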

Quality Scores and How They're Computed

Quality Score Concept

A composite metric combining:

  • Predicted CTR: How likely the ad is to be clicked
  • Ad relevance: How well ad matches user/context
  • Landing page quality: User experience after click
  • Historical performance: Past performance signals

Computation

Quality scores are typically:

  • Learned: ML model predicts quality
  • Calibrated: Adjusted to match actual performance
  • Normalized: Scaled for use in scoring functions
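
One way these pieces could fit together is sketched below: blend calibrated, [0, 1]-valued components in log-odds space, then squash back to (0, 1) so the result is normalized for scoring functions. The component names and weights are illustrative assumptions, not a production formula:

```python
import math

# Assumed component weights (would normally be learned, not hand-set).
WEIGHTS = {"pctr": 0.5, "relevance": 0.3, "landing_page": 0.2}

def quality_score(components: dict) -> float:
    """Weighted log-odds blend of calibrated component scores, mapped to (0, 1)."""
    logit = sum(WEIGHTS[k] * math.log(v / (1 - v)) for k, v in components.items())
    return 1 / (1 + math.exp(-logit))

qs = quality_score({"pctr": 0.04, "relevance": 0.7, "landing_page": 0.8})
```

Blending in log-odds space keeps a very low pCTR from being washed out by strong relevance and landing-page signals.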

Usage

  • Score = quality_score × bid: Incorporate quality into ranking
  • Reserve prices: Higher quality scores can lower reserve requirements
  • Advertiser feedback: Help advertisers understand why ads rank where they do

Blending Multiple Objectives: Clicks, Conversions, Engagement

The Challenge

Different advertisers optimize for different objectives:

  • Some want clicks
  • Some want conversions
  • Some want engagement (video views, time spent)

Approaches

Multi-Objective Scoring

Combine multiple predictions:

  • Score = α × pCTR × bid_click + β × pCTR × pCVR × bid_conv + γ × pEngagement × bid_eng
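
A minimal sketch of that blended score follows; α, β, γ and all inputs are illustrative assumptions (the weights would normally be tuned per surface):

```python
def blended_score(pred: dict, bids: dict,
                  alpha: float = 1.0, beta: float = 1.0, gamma: float = 0.5) -> float:
    """Score = alpha*pCTR*bid_click + beta*pCTR*pCVR*bid_conv + gamma*pEng*bid_eng."""
    return (alpha * pred["pctr"] * bids.get("click", 0.0)
            + beta * pred["pctr"] * pred["pcvr"] * bids.get("conv", 0.0)
            + gamma * pred["peng"] * bids.get("eng", 0.0))

s = blended_score({"pctr": 0.02, "pcvr": 0.10, "peng": 0.30},
                  {"click": 0.50, "conv": 10.0, "eng": 0.05})
# 1.0*0.02*0.50 + 1.0*0.02*0.10*10.0 + 0.5*0.30*0.05 = 0.01 + 0.02 + 0.0075
```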

Advertiser-Specific Objectives

Use advertiser's stated objective:

  • CPC campaigns: Optimize for clicks
  • CPA campaigns: Optimize for conversions
  • Engagement campaigns: Optimize for engagement

Unified Value Function

Convert all objectives to common currency (revenue):

  • Estimate value of click, conversion, engagement
  • Score = Σ(probability × value × bid_multiplier)
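
The unified value function, Score = Σ(probability × value × bid_multiplier), can be sketched directly; the event values and multipliers below are illustrative assumptions:

```python
EVENTS = [
    # (event, probability, estimated value in $, advertiser bid multiplier)
    ("click",      0.020, 0.40, 1.0),
    ("conversion", 0.002, 25.0, 1.2),
    ("video_view", 0.100, 0.01, 0.8),
]

# Everything collapses into one revenue-denominated number per impression.
score = sum(p * value * mult for _, p, value, mult in EVENTS)
# 0.008 + 0.060 + 0.0008 = 0.0688 expected dollars per impression
```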

The Cold Start Problem: Bidding with Uncertain Predictions

The Problem

New ads have no historical data:

  • No CTR history
  • No CVR history
  • High prediction uncertainty

Solutions

Exploration

  • Lower reserve prices: Give new ads opportunities
  • Guaranteed impressions: Reserve some inventory for exploration
  • Multi-armed bandits: Formal exploration framework
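
The bandit approach can be sketched with Thompson sampling over Beta posteriors; the Beta(1, 1) priors and the simulated true CTRs are illustrative assumptions:

```python
import random

random.seed(0)
true_ctr = {"new_ad": 0.05, "incumbent": 0.03}   # hidden from the policy
posterior = {ad: [1.0, 1.0] for ad in true_ctr}  # Beta(alpha, beta) per ad

for _ in range(5000):
    # Sample a plausible CTR for each ad from its posterior, serve the argmax.
    sampled = {ad: random.betavariate(a, b) for ad, (a, b) in posterior.items()}
    chosen = max(sampled, key=sampled.get)
    clicked = random.random() < true_ctr[chosen]
    posterior[chosen][0] += clicked        # alpha accumulates clicks
    posterior[chosen][1] += 1 - clicked    # beta accumulates non-clicks

impressions = {ad: a + b - 2 for ad, (a, b) in posterior.items()}
```

Early on, the wide posterior gives the new ad real traffic; as evidence accumulates, impressions concentrate on the genuinely better ad, which is exactly the exploration/exploitation trade-off the framework is for.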

Prior Information

  • Advertiser history: Use advertiser's past performance
  • Category priors: Use category-level statistics
  • Creative features: Use ad creative attributes
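
Category priors are commonly folded in via Beta-Binomial smoothing: a new ad falls back on the category-level CTR and moves toward its own data as it accrues. A minimal sketch, where the prior strength of 100 pseudo-impressions is an illustrative assumption:

```python
def smoothed_ctr(clicks: int, impressions: int,
                 prior_ctr: float, prior_strength: float = 100.0) -> float:
    """Blend the ad's own counts with a category prior of given strength."""
    alpha = prior_ctr * prior_strength          # pseudo-clicks from the prior
    beta = (1.0 - prior_ctr) * prior_strength   # pseudo-non-clicks
    return (clicks + alpha) / (impressions + alpha + beta)

cold = smoothed_ctr(0, 0, prior_ctr=0.02)       # brand-new ad: exactly the prior
warm = smoothed_ctr(40, 1000, prior_ctr=0.02)   # (40 + 2) / (1000 + 100) ≈ 0.038
```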

Uncertainty Quantification

  • Confidence intervals: Estimate prediction uncertainty
  • Conservative bidding: Adjust bids based on uncertainty
  • Bayesian approaches: Model uncertainty explicitly
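
Conservative bidding can be sketched as bidding on a lower quantile of the pCTR posterior rather than its mean, so sparse-data ads are automatically discounted. The Normal approximation to a Beta(clicks+1, misses+1) posterior and the z = 0.674 (~25th percentile) choice are illustrative assumptions:

```python
import math

def conservative_pctr(clicks: int, impressions: int, z: float = 0.674) -> float:
    """Approximate lower quantile of the Beta posterior over CTR."""
    a, b = clicks + 1, (impressions - clicks) + 1
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return max(0.0, mean - z * math.sqrt(var))

# The discount shrinks as evidence accumulates at a similar observed CTR:
sparse = conservative_pctr(2, 50)      # heavy discount, little data
dense = conservative_pctr(200, 5000)   # small discount, lots of data
```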

Bid Shading with Quality Knowledge

Bid Shading

Bid shading means bidding below your true value, most commonly in first-price auctions, to avoid overpaying when you hold an information advantage:

  • Platform perspective: If quality is high, can accept lower bid
  • Advertiser perspective: If quality is low, should bid lower

Quality-Adjusted Bidding

  • Platform: Adjust reserve prices based on quality
  • Advertisers: Adjust bids based on predicted performance
  • Both: More efficient matching when both sides have quality information
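
In a first-price setting, shading can be sketched as choosing the bid that maximizes expected surplus, (value − bid) × P(win | bid). The logistic win-rate curve and its parameters are illustrative assumptions; in practice the curve would be fit on auction logs:

```python
import math

def p_win(bid: float, mid: float = 1.0, steepness: float = 5.0) -> float:
    """Assumed win-rate curve as a function of bid."""
    return 1.0 / (1.0 + math.exp(-steepness * (bid - mid)))

def shaded_bid(value: float, step: float = 0.01) -> float:
    """Grid-search the bid that maximizes expected surplus (value - bid) * P(win)."""
    candidates = [i * step for i in range(1, int(value / step) + 1)]
    return max(candidates, key=lambda b: (value - b) * p_win(b))

b = shaded_bid(2.00)  # lands well below the $2.00 true value
```

Better quality information sharpens the value estimate and the win-rate curve, which is what makes shading safer for both sides.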

How Prediction Errors Propagate to Revenue Impact

Error Propagation

Prediction errors compound:

  • CTR error: Affects ranking and pricing
  • CVR error: Affects conversion-optimized campaigns
  • Bid error: Advertisers who set bids on top of miscalibrated predictions compound the error

Impact Analysis

  • Revenue loss: Wrong ads win auctions
  • Advertiser dissatisfaction: Poor ROI from incorrect predictions
  • User experience: Irrelevant ads shown
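
A toy illustration of the revenue effect: a one-point pCTR overestimate on a single ad flips the auction winner, and realized revenue drops because revenue accrues at the true CTR, not the predicted one. All numbers are illustrative assumptions:

```python
ads = [
    {"id": "a", "true_ctr": 0.030, "bid": 1.00},
    {"id": "b", "true_ctr": 0.020, "bid": 1.20},
]

def rank_score(ad: dict, pctr_bias: float = 0.0) -> float:
    """Ranking uses the (possibly biased) prediction."""
    return (ad["true_ctr"] + pctr_bias) * ad["bid"]

def realized_revenue(ad: dict) -> float:
    """Revenue accrues at the true CTR."""
    return ad["true_ctr"] * ad["bid"]

calibrated = max(ads, key=rank_score)  # "a" wins: 0.030 > 0.024
biased = max(ads, key=lambda ad: rank_score(ad, 0.01 if ad["id"] == "b" else 0.0))
loss = realized_revenue(calibrated) - realized_revenue(biased)  # 0.030 - 0.024
```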

Mitigation

  • Calibration: Ensure predictions are well-calibrated
  • Uncertainty modeling: Account for prediction uncertainty
  • Monitoring: Track prediction accuracy and revenue impact
  • A/B testing: Validate improvements before full rollout
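
Calibration monitoring can be sketched with a simple expected-calibration-error check: bucket predictions and compare mean predicted CTR to observed CTR in each bucket. The simulated data below is an illustrative assumption:

```python
def calibration_error(preds, outcomes, n_bins: int = 5) -> float:
    """Impression-weighted gap between mean prediction and observed rate per bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    total, err = len(preds), 0.0
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            mean_y = sum(y for _, y in b) / len(b)
            err += len(b) / total * abs(mean_p - mean_y)
    return err

# A model predicting 10% CTR while observed CTR is 5% shows a 0.05 gap:
ece = calibration_error([0.10] * 100, [1] * 5 + [0] * 95)
```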

Understanding these interactions is crucial for building robust systems.
