Attribution is the process of assigning value to advertising activities, and it has been a hot topic in advertising ever since John Wanamaker famously said: “Half the money I spend on advertising is wasted; the trouble is I don't know which half.” With the advent of digital advertising and a world of measurement tools, our ability to answer that question has improved vastly. This post will deep-dive into the recently released Facebook Attribution tool and discuss where this problem is headed in the future.

What is attribution and why is it such a big deal?

Within social psychology, attribution is defined as the process by which individuals assign causes to behaviours or effects. This definition translates very well to digital marketing where the outcome (effect) is most often a purchase and using attribution we try to find the cause of that purchase, which will hopefully be an ad. 

This is, of course, easier said than done – imagine someone buys a pair of Nike sneakers – given the brand it’s quite likely that thousands of ad impressions preceded this purchase. Remember that we are trying to find the cause for an event – which of these ad impressions should we say is the cause?


Traditionally there are two main practical ways of doing this:

  1. Rule-based models
  2. Algorithmic (Data-driven) models


Needless to say, having a reliable way of doing attribution allows you to better understand the entire customer journey and assign value to the touchpoints that have the highest impact. Simply put, understanding attribution is a large competitive edge.

Rule-based models

Rule-based models assign credit to ad touchpoints according to a set of more or less arbitrary rules. There are a number of popular models, but to explain the principle we will use the positional 30-30 model as an example. The positional 30-30 model assigns 30% of the value of a conversion to the first touchpoint in a purchase journey and 30% to the last. The remaining 40% of the credit is allocated evenly across the touchpoints in between.
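To make the rule concrete, here is a minimal Python sketch of the positional 30-30 allocation described above. The function name and the example journey are purely illustrative, not part of any real attribution API:

```python
# Sketch of a positional 30-30 attribution model (illustrative only).
# First and last touchpoints each receive 30% of the conversion value;
# the remaining 40% is split evenly across the touchpoints in between.

def positional_30_30(touchpoints, conversion_value=1.0):
    """Return {touchpoint: credited value} for a single purchase journey."""
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: conversion_value}
    if n == 2:
        # No middle touchpoints: split the value evenly between the two.
        return {touchpoints[0]: conversion_value * 0.5,
                touchpoints[1]: conversion_value * 0.5}
    credit = {tp: 0.0 for tp in touchpoints}
    credit[touchpoints[0]] += conversion_value * 0.30   # first touch
    credit[touchpoints[-1]] += conversion_value * 0.30  # last touch
    middle_share = conversion_value * 0.40 / (n - 2)
    for tp in touchpoints[1:-1]:
        credit[tp] += middle_share
    return credit

journey = ["referral", "facebook_ad", "email", "google_shopping"]
print(positional_30_30(journey))
# referral and google_shopping get 0.30 each; facebook_ad and email get 0.20 each
```

The point of the sketch is how mechanical the rule is: the first and last touchpoints get fixed shares regardless of what they actually contributed.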


The image above shows how we would allocate the credit for a conversion using a rule-based model. In this case, the initial referral and our Google Shopping ad each get 30% of the credit for the conversion simply because they happened to be first and last respectively. Essentially, we give more credit to the touchpoint starting the conversion journey and the one “closing” it.

Data-driven models

These types of models are algorithms that churn through a large amount of data in order to assign value to touchpoints. Paths to purchase can look very different and this way of doing attribution leverages the fact that given enough purchases you can start to analyze the different paths to purchase your customers take and deduce value that way.


Explaining data-driven attribution can quickly get very technical and mathematical so bear with me for a moment. There are a number of different algorithms and methods to process the different paths but on a general level, most models look at the totality of available paths and assign value to a channel based on how often it is present in the journeys. Roughly speaking, if Facebook is present in every converting path then we can be reasonably certain it’s an important channel in our advertising. 
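As a toy illustration of this idea (this is not Facebook's actual algorithm, and the channel names and paths are invented), we can score each channel by how often it appears in converting paths and normalize those scores into credit shares:

```python
# Toy data-driven attribution: credit channels by how frequently
# they appear in converting paths (illustrative only).

from collections import Counter

converting_paths = [
    ["facebook", "email", "google"],
    ["facebook", "google"],
    ["facebook"],
    ["email", "google"],
]

presence = Counter()
for path in converting_paths:
    for channel in set(path):   # count each channel at most once per path
        presence[channel] += 1

total = sum(presence.values())
shares = {ch: count / total for ch, count in presence.items()}
print(shares)
# facebook appears in 3 of 4 paths and ends up with the largest share
```

Real data-driven models (Shapley-value or Markov-chain approaches, for instance) are considerably more sophisticated, but the underlying intuition is the same: channels that show up consistently across converting journeys earn more credit.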

In general, data-driven models tend to be more powerful since they avoid making arbitrary assumptions; however, they come at a price. With data-driven attribution, the end result is only as good as the data you feed the algorithm, and mapping every advertising touchpoint to a conversion is no easy feat. Further complexity is added by the fact that the duopoly of Google and Facebook are both walled gardens in this regard: mapping a customer journey across channels is already difficult, and with growing privacy concerns and ever-increasing competition between the two giants, it’s unlikely to get much easier in the near future.

The above is likely the reason that many companies attempting to build a better understanding of their attribution fail. In pursuing the holy grail of attribution models, many make the mistake of aiming too high; then they realize that not all of the necessary data is available, other practical issues arise, and the project is simply dropped. The key is to focus on the path to better attribution rather than on the output itself. Earlier we defined attribution as the process of assigning value to touchpoints; in advertising, a better definition might be: the process of improving your model for assigning value to touchpoints.

In other words, we shouldn’t be so focused on the outcome itself but rather the focus should be on producing a model that is better than the one we have today.


The last key concept needed to understand how Facebook views attribution is incrementality. Recall that attribution tries to assign partial credit to each touchpoint in a series of ad servings. While this gives us an idea of how much each touchpoint contributes to a purchase, it can’t answer the question: what would have happened if users hadn’t seen any ads from a given source at all? Incrementality is a framework for answering exactly that question: how many conversions would I have gotten without showing any ads?

The way we answer this question in practice closely resembles a scientific study: we create a treatment group (users exposed to ads) and a control group (users not exposed to ads) and measure the number of conversions in each group.
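The comparison between the two groups boils down to simple arithmetic. The numbers below are purely illustrative:

```python
# Minimal conversion-lift calculation (illustrative numbers, not real data).
# Incremental conversions = conversions beyond what the control group's
# rate predicts for an audience the size of the treatment group.

treatment_users, treatment_conversions = 100_000, 1_200
control_users, control_conversions = 100_000, 900

treatment_rate = treatment_conversions / treatment_users  # 1.2%
control_rate = control_conversions / control_users        # 0.9%

incremental = treatment_conversions - control_rate * treatment_users
relative_lift = (treatment_rate - control_rate) / control_rate

print(f"incremental conversions: {incremental:.0f}")  # 300
print(f"relative lift: {relative_lift:.1%}")          # 33.3%
```

In this sketch, 300 of the 1,200 conversions in the treatment group would not have happened without the ads; the rest would likely have occurred anyway.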


This type of experiment is called a conversion lift study, and it answers the question: how many more conversions did I get as a direct result of users being exposed to my ads? The difference in conversions between the test and control groups is the lift in conversions. While the framework is simple, the insights can be extremely useful. There are several reasons why lift-study outputs are so valuable for marketers:

  • Lift studies are attribution agnostic: it doesn’t matter which model you choose; either a conversion happened or it didn’t, and that is the only factor that matters in these studies.
  • Lift studies give you a lower limit for how many conversions a channel generated. Since incremental conversions are ones that would not have happened otherwise, incrementality fails to capture the full value of a channel. It does, however, give you a benchmark for the lowest plausible number of conversions, which is extremely useful when comparing attribution models since it lets you establish a lower bound.

Attribution is an iterative process: we are looking to improve our current understanding, and incrementality is the tool we need to make sure we are making correct decisions. While this sounds neat and simple, incrementality has a few drawbacks that often make it impractical to use by itself:

  • Incrementality misses any interaction effects that might come from users seeing ads on different platforms.
  • Studies can be costly in terms of resources (they require work to set up) as well as time.
  • It’s hard to generalize across campaigns: how can you know this result will hold true for the next campaign, with different creatives or new communication?

It is due to these shortcomings that incrementality has to be used in conjunction with attribution, where incrementality studies are used to confirm insights derived from attribution.

“It is better to be roughly right than precisely wrong.”
– John Maynard Keynes

With all the theory-crafting out of the way, it’s time to come back down to reality. Over 2018–2019, Facebook worked on and rolled out a new tool that is now available to all advertisers: Facebook Attribution. Like other tools provided by Facebook, it is completely free and available to anyone who advertises on Facebook. This is a selling point worth emphasizing, as attribution software has traditionally been expensive and geared toward enterprise solutions.

After going through the necessary setup steps you are met with the performance tab, which shows a basic table of your top sources. You might also notice that the tool itself is still quite limited: reports offer little customizability and can’t be exported, and the number of tabs is small. This is likely because Facebook Attribution is still in its infancy; regardless, we can find useful features on the very first page.


The first interesting feature we come across is somewhat hidden in the top right corner of the interface. Here we can select different attribution models from a series of pre-set rule-based models, as well as configure settings for those models, such as which attribution window to use and whether paid touchpoints should be weighted proportionally heavier.

In addition to the rule-based models, Facebook also uses its own data-driven model to re-calculate conversions that Facebook was credited for. A very interesting exercise to do across all of your analytics platforms is to simply compare the output from different attribution models side-by-side as pictured below.


In this case, we have compared the number of conversions Facebook campaigns are credited with under the different attribution models. We also chose to compare them against Google Analytics last click, which is still the default for many advertisers today.

There are a few interesting things to point out when doing this type of eyeball analysis:

  • The lower limit of conversions credited to Facebook is the output of a conversion lift study, which is a great tool for estimating the value derived from Facebook ads.
  • The upper limit in this example is the default Facebook last-click model which tends to be overly generous.
  • When selecting which model is appropriate for your business, consider the proportion of spend you are putting into the channel (in this case Facebook). If you are spending the majority of your budget on the channel, you should likely pick a model closer to the upper limit; if it’s a minor portion of your budget, a model closer to the lower limit is likely more appropriate. Also consider the amount of data each model makes available, as this will affect your foundation for decision-making.
  • Models that consistently show similar results are in practice equivalent. Consider the three models in the above graph that credit roughly the same number of conversions: even credit, positional 30% and time decay 1D. Regardless of which of these we choose, our actions are going to be the same, hence we should consider these models equivalent in terms of the actions we will take.

This type of analysis, while quite basic, will still provide good insight; remember that we are only looking for a model that is better than what we have today, not perfection. It will help you make a more informed decision about which attribution model to use. A rule-based model is often a good choice: while it may be less accurate than some data-driven counterparts, it’s quicker to arrive at and quite a bit easier to work with and act on.

Turning insights to action

If attribution had commandments written on a stone tablet, there would likely be only two, and they’d read something like this:

  1. Thou shalt not pursue perfection but rather improvement.
  2. Thou shalt not use the power of attribution in vain.

We dealt with the first of these commandments in the previous section; here we will deal with the second: not using attribution in vain. What we mean by this is that attribution should be viewed as a means to an end, not the end itself.

Perhaps the most important question to pose when working on attribution is what actions can we take from this insight? 

Let’s go back to the graph above and pretend that we decide to use the positional 30% model to evaluate Facebook results instead of Google Analytics, which was previously used. This change would cause us to attribute roughly four times greater value to Facebook than before; logically, this should also correspond to a proportionally higher investment.

This brings us to the key question: seeing these results, would you be willing to invest four times more money into a channel? The output of your analysis in this case suggests that is the correct decision, but such drastic decisions can be hard to make, especially without a way to evaluate whether the decision was right. Luckily, Facebook has provided us the tools to answer these types of questions and evaluate whether we are making the correct decisions.

A New Hope

Regardless of the current limitations of Facebook Attribution, we have high hopes for the future. Throughout the last decade, the marketing analytics market has been largely dominated by Google Analytics, which has its own shortcomings that are, for some reason, often overlooked. As an agency we see ourselves as a key partner helping our clients navigate the duopoly of Facebook & Google. Our clients often have representatives from both companies pitching why their solution is better than the competitor’s, and neither platform is likely to incorporate the other in the near future. Our job is to navigate these competing views objectively and generate as much impact for our clients as possible.

To do this, having multiple data sources to rely on, each with its respective biases, makes it easier to uncover platform bias and adjust accordingly. Together with the ability to run experiments at scale, we can generate and test big ideas for every client, not just the ones with the means to run a large-scale data-driven attribution project. For all of our dreams to come true, however, there is one feature from Facebook Attribution we are still eagerly anticipating: the ability to export conversion paths.

Looking at you, Facebook…

Robert Jurewicz Global Social Lead