Blog

The Perplexing Diminishing Returns of Facebook Ads.

TL;DR – We show that dividing an ad budget across many small ad instances obtains unique users at a better price than putting the entire budget into a single ad instance.

At Two Six Technologies, we use social media advertisements to recruit survey participants. Frequently, we use the Meta ad platform to serve advertisements on Facebook, Instagram, Messenger, and WhatsApp. As such, we are concerned with the best way to structure our campaigns to obtain the most survey responses.

We tune many aspects of our campaigns, including campaign structure, incentive structure, and targeting, to encourage users to respond. For this blog post, we will focus on the advertising budget. Meta allocates ad space through a bidding system: ad space is auctioned, and each campaign's bids compete against other advertisers' bids in the auction.

For the examples below, our campaign objective was traffic, our performance goal was to maximize the number of link clicks, and we chose spend-based bidding with the highest-volume strategy. According to Meta, spend-based bidding “focus[es] on spending your full budget and getting the most results or value possible,” with highest volume “maximiz[ing] delivery and conversions you can get from your budget” (Meta Business Help Center). In summary, Meta should use our budget to get the most link clicks possible.

However, in our campaigns, we noticed that advertisements to which we allocated a smaller budget had higher clicks per dollar. We decided to test this. Below, we give the results of two campaigns in which we ran copies of the same advertisement at different budgets.

In our first campaign, we ran two ads for $10.31, two ads for $20.62, two ads for $30.93, and two ads for $41.25. The ads ran for a little over 24 hours. Below we plot the impressions per dollar, reach per dollar, and clicks per dollar for the eight ads.

We see that as the budget (and thus total cost) increases, impressions per dollar, reach per dollar, and clicks per dollar all decrease. In the next table, we show how the higher-cost ads compare to the $10.31 ads; each row reports the average of the two ads at that cost.

Percent change compared to the $10.31 ads

Total cost (USD)   Impressions per USD   Reach per USD   Clicks per USD
20.62              -3%                   -16%            -16%
30.93              -4%                   -26%            -19%
41.25              -4%                   -33%            -22%

While impressions per dollar stay relatively constant, reach per dollar and clicks per dollar fall off considerably relative to the $10.31 ads and then level off: the clicks per dollar for the $20.62, $30.93, and $41.25 ads are relatively close to one another. Facebook may be serving the ads first to the “highest-propensity” users and then moving on to lower-propensity users. However, that alone does not explain what we observe: the higher-cost ads have a similar number of impressions per dollar but much smaller reach per dollar, which indicates that Meta is showing these ads to the same people more times.
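The “same people more times” interpretation can be checked directly from the table: the implied ad frequency (impressions per person reached), relative to the $10.31 baseline, is the ratio of the surviving impressions per dollar to the surviving reach per dollar. A minimal sketch in Python, using only the percent changes reported above:

```python
# Implied relative frequency (impressions per person reached) versus the
# $10.31 baseline, computed from the table's percent changes. If impressions
# per dollar barely fall while reach per dollar falls sharply, each reached
# person must be seeing the ad more often.
changes = {  # total cost (USD) -> (impressions/$ change, reach/$ change)
    20.62: (-0.03, -0.16),
    30.93: (-0.04, -0.26),
    41.25: (-0.04, -0.33),
}
for cost, (imp, reach) in changes.items():
    rel_freq = (1 + imp) / (1 + reach)  # frequency relative to baseline
    print(f"${cost:.2f}: ~{rel_freq:.2f}x the baseline frequency")
```

For the $41.25 ads this works out to roughly 1.4x the baseline frequency, consistent with the ads being shown repeatedly to the same users.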

One limitation of our first experimental approach is that these ads were competing with each other and could have been shown to the same people. We account for that in our next experiment. 

In our second campaign, we ran one ad each at $10.06, $20.13, $30.20, $40.27, and $50.34. The ads ran for a little over 48 hours. In this case, we ran the ads in Meta’s A/B testing framework so the ads would not compete against each other and would be shown, to the best of Meta’s ability, to different users. Below we plot the impressions per dollar, reach per dollar, and clicks per dollar for the five ads.

We see a similar pattern here. However, in this case, there is a large drop in impressions, reach, and clicks per dollar between the $20.13 ad and the $30.20 ad, after which the efficiency mostly levels off. Below, we tabulate the percent change of each ad compared to the $10.06 ad.

Percent change compared to the $10.06 ad

Total cost (USD)   Impressions per USD   Reach per USD   Clicks per USD
20.13              -3%                   -17%            -17%
30.20              -21%                  -36%            -41%
40.27              -24%                  -43%            -40%
50.34              -31%                  -50%            -44%
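The clicks-per-USD column can be reproduced from the raw click counts reported in this post for the three cheapest ads (the click counts for the $40.27 and $50.34 ads are not given here, so they are omitted). A minimal sketch in Python:

```python
# Reproduce the "Clicks per USD" percent-change column from the raw click
# counts the post reports: 257 clicks at $10.06, 428 at $20.13, 453 at $30.20.
ads = {10.06: 257, 20.13: 428, 30.20: 453}  # total cost (USD) -> clicks

baseline = ads[10.06] / 10.06  # clicks per USD for the cheapest ad
for cost, clicks in ads.items():
    per_usd = clicks / cost
    change = (per_usd - baseline) / baseline * 100
    print(f"${cost:.2f}: {per_usd:.2f} clicks/USD ({change:+.0f}% vs $10.06)")
```

Rounded to whole percentages, this recovers the -17% and -41% entries in the table.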

The drop-off is noticeable, and it corresponds to real-world cost. The $10.06 ad received 257 clicks, while the $20.13 ad received 428 clicks. Combined, that is 685 clicks for $30.19. By comparison, the $30.20 ad received 453 clicks. So, by running two ads at lower budgets, we got 51% more clicks for the same price. (Again, these ads ran in an A/B test design and so, to the best of Meta’s ability, were not competing for the same users.)
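The 51% figure is simple arithmetic on the click counts above:

```python
# Two small ads ($10.06 + $20.13 = $30.19) versus one $30.20 ad.
small_ads_clicks = 257 + 428  # 685 clicks from the two cheaper ads
single_ad_clicks = 453        # clicks from the single $30.20 ad

gain = (small_ads_clicks - single_ad_clicks) / single_ad_clicks
print(f"{gain:.0%} more clicks for roughly the same spend")  # 51%
```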

We posit two possible explanations. First, Meta’s advertising algorithm may simply be good at targeting. Meta claims to model each user’s likelihood of clicking on ads. If so, for a given budget it serves the ads first to the users most likely to click and then moves on to users less likely to click. Through the bidding process, clicks from high-propensity users are cheaper. In this case, our results reflect the click propensity of different segments of the population.

The second is that the bidding system Meta uses becomes less efficient as an ad’s budget increases: the system bids more for clicks than it should when it has more money to work with. If this is the case, the drop-off might be remedied using the many options Meta provides for adjusting the bidding strategy.

In conclusion, we showed that ads with higher budgets, and in turn higher costs, have diminishing returns on the number of users they reach, the impressions Meta serves, and the number of people who click on the ad. We also showed that independently run ads with lower budgets are the better bet for obtaining unique users at a lower price, regardless of target sample size (e.g., for both small- and large-sample designs). This finding has implications for budget allocation and for sampling strategies that rely on river sampling via Meta ads.

Acknowledgements:

This work was supported by the Defense Advanced Research Projects Agency (DARPA) and the Army Research Laboratory under Contract No. [W911NF-21-C-0007] under the HABITUS program (https://www.darpa.mil/program/habitus). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA), the Department of Defense, or the United States Government.

Distribution Statement “A” (Approved for Public Release, Distribution Unlimited)