Welcome to the Luca data science blog! In our inaugural post, we're going to tackle one of the most frequently asked questions we get at Luca: Can you determine optimal prices without using user A/Bs? A/B testing is widely understood to be the gold standard for measuring the causal impact of any facet of the customer experience. Correlation is not causation, after all.
So, why not apply it to pricing?
To a large extent, this question is really "how can you do pricing without user A/Bs?" For most Pricing Operators, running pricing user A/Bs is a bad idea. Prices that appear to fluctuate for no discernible reason make for a poor user experience and can open companies up to accusations of illegal price discrimination – even when the intent was simply to assign prices at random.
If testing prices at the user-level is off the table, should we run price tests at the product-level? For instance, to determine the optimal price, should we test multiple prices for the same product over time and pick the price that yields the best business outcome (like profit or revenue)?
On paper, this sounds like a great idea. In practice, testing every conceivable price point for every product is an expensive and poor business decision. Moreover, designing the right experiment – e.g., one that gets around seasonality issues, natural fluctuations in demand, or even possible spillover effects between products – is a challenge in and of itself.
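To make the seasonality problem concrete, here's a minimal simulation with entirely made-up numbers: one product's weekly demand follows a seasonal cycle, and the price is raised halfway through the year. A naive before/after comparison attributes the seasonal dip to the price change and badly overstates the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 52 weeks of demand with a seasonal cycle.
weeks = np.arange(52)
seasonality = 100 + 30 * np.sin(2 * np.pi * weeks / 52)

# Price is raised in week 26; the true price effect is -10 units of demand.
price = np.where(weeks < 26, 10.0, 12.0)
true_price_effect = -10.0 * (price == 12.0)

demand = seasonality + true_price_effect + rng.normal(0, 5, size=52)

# Naive before/after comparison conflates seasonality with the price change.
naive_estimate = demand[weeks >= 26].mean() - demand[weeks < 26].mean()
print(f"Naive estimated price effect: {naive_estimate:.1f} (true effect: -10.0)")
```

Because the seasonal peak falls in the low-price period and the trough in the high-price period, the naive difference in means comes out several times larger than the true effect.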
Taking a step back, let's think about why we'd want to run A/B tests in the first place; it's not because "someone" (e.g., me, in the first paragraph) said A/B tests are the gold standard! You want to remove bias from the data. As an example, Operators don't randomly set prices for their catalog; they set prices based on past experience with each product's demand. They may also run promotions or marketing campaigns alongside price changes, which makes attributing demand to price alone challenging when all of these choices occur at once.
You also want to remove the noise found in real-world data, like the seasonality and natural fluctuations in demand mentioned earlier. That's the beauty of a clean A/B: randomly assigning otherwise identical products to condition A or B means that any difference in demand between the two groups is attributable to the condition itself. Simply put, bias is no longer an issue since prices are assigned at random, and noise like seasonality impacts both groups equally.
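A toy simulation (again with made-up numbers) shows why randomization works: splitting identical products into groups A and B at random balances the shared noise across the two groups, so a simple difference in means recovers the true price effect:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: 1,000 identical products randomly split into A and B.
n = 1000
group_b = rng.random(n) < 0.5  # random assignment

# A shared demand shock (e.g., seasonality) hits both groups alike;
# group B gets the higher price, with a true effect of -10 units.
common_shock = rng.normal(0, 20, size=n)
demand = 100 + common_shock - 10.0 * group_b + rng.normal(0, 5, size=n)

# Because assignment is random, the shock averages out across groups.
estimate = demand[group_b].mean() - demand[~group_b].mean()
print(f"A/B estimated price effect: {estimate:.1f} (true effect: -10.0)")
```

Note that the shared shock here has twice the spread of the true price effect, yet the randomized comparison still lands close to -10.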
In a nutshell, at Luca, we build econometric and machine learning models to "reconstruct" those benefits of a product A/B test: we leverage each client's historical data to control for potential sources of noise and bias. Take the example of noise given earlier – that's the use case for a controlled regression. That is, we build a demand model that controls for potential sources of noise, like seasonality; and if we're concerned that each product has unique features that impact demand, we can include product features and fixed effects. Removing the bias in pricing decisions is a topic in its own right, but there are a variety of classical econometric as well as newer causal machine learning techniques that we leverage to remove those sources of bias. These are just high-level examples; in practice, we test and build ensembles of models to address the customized needs of each client.
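As an illustration of the controlled-regression idea – a sketch on simulated data, not Luca's actual models – here is a log-log demand regression in plain NumPy with product fixed effects (dummies) and sine/cosine seasonality controls. The data are generated with a known price elasticity of -1.5, which the regression recovers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical panel: weekly demand for 20 products over 52 weeks, with
# product fixed effects, seasonality, and a true price elasticity of -1.5.
n_products, n_weeks = 20, 52
product = np.repeat(np.arange(n_products), n_weeks)
week = np.tile(np.arange(n_weeks), n_products)

product_effect = rng.normal(0, 0.5, n_products)[product]
season = 0.3 * np.sin(2 * np.pi * week / n_weeks)
log_price = rng.normal(2.0, 0.2, size=n_products * n_weeks)
log_demand = (5.0 + product_effect + season
              - 1.5 * log_price
              + rng.normal(0, 0.1, size=n_products * n_weeks))

# Controlled regression: log price plus product dummies (fixed effects)
# and sine/cosine terms to absorb seasonality.
X = np.column_stack([
    log_price,
    np.eye(n_products)[product],          # product fixed effects
    np.sin(2 * np.pi * week / n_weeks),   # seasonality controls
    np.cos(2 * np.pi * week / n_weeks),
])
coef, *_ = np.linalg.lstsq(X, log_demand, rcond=None)
print(f"Estimated price elasticity: {coef[0]:.2f} (true: -1.5)")
```

In a log-log specification, the coefficient on log price is the price elasticity of demand; the dummies and seasonal terms soak up the product-level and calendar-level noise so that the elasticity estimate stays clean.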
That’s not to say there isn’t a place for product-level price experimentation in e-commerce, but, at Luca, we’re intentional about how and when we run price experiments to minimize disruptions to our clients. That is, we extract insights from a combination of historical client data and, in some cases, from limited product pricing experiments.
We've only scratched the surface here – for more resources, check out: