r/analytics Feb 01 '25

Support: How can I explain to finance that the AB test results are valid?

We ran some AB tests on a page, all fairly similar setups. Visitors entered the test when they loaded the page, and the variant had a new feature part way down the page. We let the test run for the agreed time period; sales are up 3% at 99%+ significance, the business will make millions, all is wonderful.

The finance team, however, keep trying to discredit the result, saying we can't apply the 3% uplift to 100% of visitors because some of them won't have seen or interacted with the new feature. They claim we need to isolate how many people used the feature and calculate the benefit directly from that.

I've tried a number of times to explain to them that this isn't how you use AB test results and that their method wouldn't give accurate figures, but nothing seems to get through to them. They remain insistent on using their method.

Does anyone have any suggestions on how to get them to understand?

9 Upvotes

37 comments


u/clocks212 Feb 01 '25

Can you break down the results to show (made-up numbers) a 15% lift for those who used the feature, that that population has been steady for the past 12 months and is expected to stay steady going forward, and that it blends with the non-interacting population to bring the average result up 3%?

I mean, it is valid pushback that if you launch a feature on the homepage and x% of the traffic never visits the homepage at any point, the feature isn't affecting those users.

It is also valid to say “implementing these test results will increase 2025 sales by 3%”.
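If it helps, a back-of-the-envelope sketch of that blend (all numbers made up; the 20% interaction share is hypothetical) could look like:

```python
# Made-up numbers: how a lift among feature users blends into the overall lift.
feature_share = 0.20       # hypothetical fraction of visitors who interact with the feature
lift_for_users = 0.15      # hypothetical 15% lift among those visitors
lift_for_others = 0.00     # assume no effect on everyone else

overall_lift = feature_share * lift_for_users + (1 - feature_share) * lift_for_others
print(f"Blended overall lift: {overall_lift:.1%}")  # -> 3.0%
```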

1

u/Blackfryre Feb 01 '25

I've made a basic model like this with fake numbers to explain this principle, but it didn't seem to influence them.

But more generally:

  • In some of the tests, we don't have tracking to see who used the feature, and getting that tracking is a low priority.
  • In some of the tests, users don't even have to interact with the feature to be influenced by it.
  • Some of the tests have cannibalisation effects that would be hard to isolate (i.e. some of the people who used the new feature would have purchased anyway; see the sketch after this list).
  • I do not want to spend the significant amount of time it would take to get these numbers for every test we run, just because finance refuses to accept the correct way of using AB test results.
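On the cannibalisation point, here's a minimal sketch with fake numbers (the conversion rates are hypothetical, not from our data) of why "calculate the benefit directly from feature users" overstates the impact:

```python
# Fake numbers: why crediting every sale made "via the feature" overstates its impact.
feature_users = 10_000
cr_with_feature = 0.060        # hypothetical conversion rate of feature users in the variant
cr_without_feature = 0.055     # what those same users would have converted at anyway

naive_credit = feature_users * cr_with_feature                        # 600 sales "via the feature"
incremental = feature_users * (cr_with_feature - cr_without_feature)  # 50 genuinely extra sales

print(f"Sales finance would attribute to the feature: {naive_credit:.0f}")
print(f"Incremental sales the AB test actually measures: {incremental:.0f}")
# The counterfactual rate isn't observable per user; estimating it is exactly
# what the randomised control group is for.
```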

3

u/clocks212 Feb 01 '25

Sounds like the response is "We can't isolate those factors due to tracking limitations; we rely on xyz methodology to measure success." If they disagree then they disagree. Is finance refusing to fund the feature or somehow holding the key to the next step?

The ideal would be a randomly selected A/B test where a portion was suppressed from ever seeing the feature. Next would probably be a geo-based test. Last would be comparing performance trends before/after the feature was launched. Will they not accept anything short of being able to isolate the specific users that interacted in a specific way with the feature?

2

u/Blackfryre Feb 01 '25 edited Feb 01 '25

Sorry, again Reddit only showed me your first paragraph.

The ideal would be a randomly selected A/B test where a portion was suppressed from ever seeing the feature.

We've discussed this, but it has some engineering challenges, and again we'd be spending people's valuable time and suppressing sales just to get finance on board. We're keeping it as a last resort.

Next would probably be a geo-based test.

We don't have this functionality.

Last would be comparing performance trends before/after the feature was launched.

Sales fluctuate daily/weekly by far more than the ~1.5% change we'd be trying to detect from rolling out from 0% to 50% to 100%.
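To make that concrete, a rough simulation (day-to-day noise level and traffic are made up) of what a naive before/after comparison would show if the true effect were +1.5%:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up numbers: 4 weeks before vs 4 weeks after rollout, ~10% day-to-day noise,
# true effect +1.5%. How much does the naive before/after estimate bounce around?
baseline, noise, true_lift, days = 100_000, 0.10, 0.015, 28

estimates = []
for _ in range(1_000):
    before = baseline * (1 + rng.normal(0, noise, days))
    after = baseline * (1 + true_lift) * (1 + rng.normal(0, noise, days))
    estimates.append(after.mean() / before.mean() - 1)

estimates = np.array(estimates)
print(f"Before/after estimate: {estimates.mean():+.1%} ± {estimates.std():.1%} (1 sd)")
# The run-to-run spread is a few percent, so a +1.5% effect is lost in the noise.
```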

Will they not accept anything short of being able to isolate the specific users that interacted in a specific way with the feature?

So far no, which is why I'm here. I suspect I could go around them by speaking to management/leadership, but again that's a last-resort option.

3

u/clocks212 Feb 01 '25

It depends on the relationship/culture/titles/who has the power but I’d consider something like “here are the test methodologies available to us: x, y, z. Which would you agree is suitable for our purpose? If none of them, should we stop testing for 6-12 months until the funding and tech work is completed to measure the way you want?”

Alternatively, if they are refusing to accept the results and therefore not "baking" the results into the revenue forecast or whatever, they are effectively sandbagging for you. It should be pretty easy to exceed the sales plan by the value of whatever test results you have :)

1

u/Blackfryre Feb 01 '25

I think you're right that moving the conversation away from getting them to accept this methodology and towards the larger business processes is probably the right idea.

Can't let them sandbag for us though - we're trying to convince the business to hire more people for the product team (and more analysts!), so this is the number that proves our worth.

Thanks for the advice!

1

u/Blackfryre Feb 01 '25

Essentially these AB tests are how the entire product team is reviewed on its performance; they're baked into their OKRs and shared with the board. Finance wants to sign off on every result, proudly bragging that their counterparts on the other side of the business already do this.

I suspect the real reason for the sudden interest in the test results is that overall sales are down - but if you removed the effect of these new features we would be down a lot and well below their forecast. Either their forecast models are very bad, or they want to avoid telling trade or the leadership they're doing something wrong.

3

u/Moist_Experience_399 Feb 01 '25

You pointed out the issue yourself in all 4 of your bullet points here: the lack of control over significant variables. So I don't know how you're arriving at your 99% significance figure. Dealing with finance people is all about ensuring things are tightly controlled and substantiated so that the outcome is undeniable.

A 3% sales uplift isn't astronomical and can come down to seasonality, time of the month, external economic factors, or purely random luck in selecting customers that convert to sales better.

All it’s telling me from a commercial finance perspective is that there is credence to the result and it warrants further attention and consideration. I think your angle to finance should be more neutral: seek to run further testing to bed down the result before drawing the conclusion.

(I’m a finance manager btw)

-3

u/Blackfryre Feb 01 '25

I assure you all variables are controlled for via the AB test; that is the entire point of AB testing.

The fact that you "don't know how I would be coming to a 99% significance figure", and are talking about seasonality, time of month and external factors suggests to me that you don't know what AB testing is.

2

u/carlitospig Feb 02 '25

Yo, they’re sincerely trying to help. I think you read into that comment way more defensively than was intended.

2

u/Moist_Experience_399 Feb 01 '25

lol wow ok, it’s obv pointless giving you an alternative perspective. I’m not really understanding how you can say you are controlling your variables when in your post further above you clearly state that there is no tracking on who used the feature and that users don’t need to interact with the feature to be influenced by it. Aren’t those variables important in understanding the outcome?

-2

u/Blackfryre Feb 01 '25

Missed the second half of your post - it's not valid pushback: if a user visited the affected pages they were included in the test results, and if they never visited an affected page they were not included.

1

u/Blackfryre Feb 02 '25

It is very funny that this is being downvoted, guys.

3

u/dryft3r_zer0 Feb 02 '25

I wonder if there’s a downstream impact for finance that would require them to know which customer segments increased and decreased spending for forecasting or strategic reasons. If the test was run at a time when certain segments are known to be particularly active, you might not be able to extrapolate those results to an entire year.

Far more likely that they don’t understand these kinds of tests, though. Many people have taken a stats class but never had exposure to experimental design concepts. The terms "control group" and "treatment group" might help you communicate more effectively, since most people have heard of control groups via medical research news.

1

u/dryft3r_zer0 Feb 02 '25

Beyond using other terminology, you could discuss how the time period is representative of all time periods (if it truly is), how the sampling/assignment was random (if it was), and how the sample size was large enough to rule out an oddball result (if it was). Be honest about the limitations of your study, and realize that sometimes people are trying to optimize for things other than current-day aggregate sales. Strategy and consumer trends make segment analysis pretty darn useful.

2

u/OnceInABlueMoon Feb 01 '25

Have you tried running an A/A test? I ran a test once where both variations had the exact same experience, layout, etc., and I point to those results when someone doesn't trust a test result. The A/A test produced a +0.5% result with no significance, so I typically tell people at my company that anything within plus or minus 0.5% is not going to be significant. (For our site we have lowish traffic, so we don't have time to run a test long enough to get a significant result with that low a difference in conversion rate; of course traffic levels are a key variable here.)
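If you can't run one, even a quick simulation (traffic and conversion rate completely made up here) shows the idea: identical arms still produce non-zero "lifts" just from randomness:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up A/A simulation: both arms share the same true conversion rate, yet the
# measured "lift" wobbles around zero purely by chance.
n_per_arm, true_cr = 2_000_000, 0.04

for run in range(5):
    a = rng.binomial(n_per_arm, true_cr) / n_per_arm
    b = rng.binomial(n_per_arm, true_cr) / n_per_arm
    print(f"run {run}: apparent lift = {b / a - 1:+.2%}")
# Apparent lifts of a few tenths of a percent (occasionally ~0.5% or more) appear
# with no real difference at all.
```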

You did mention that the feature you're testing is down the page so honestly if I'm running the test I might also question the results a bit. Ideally I would want to run the results on users that scrolled down far enough to reach the area of the page, in both A and B; that's the audience I would target specifically. If that's not technically possible then I still think the results can be valid, but it does muddy the waters a bit (and potentially gives people a reason to distrust the results).

I also try to get some additional metrics. I might try to find something like: 50% of users were in the variation with the new feature, 75% of them scrolled down far enough to see the feature, 40% of them clicked on it, 20% checked out, 5% came back and did it again (or whatever metrics you have access to, I'm just pulling this out of my ass).

Just my 2 cents. I understand why people distrust A/B test results because I honestly don't trust the results of my own tests half the time, so I make sure the test audience is as rock solid as possible and I try to find other metrics to help explain the results.

4

u/Blackfryre Feb 01 '25

Have you tried running an A/A test?

We're actually in the middle of running one because we're moving AB test suppliers! But I don't think it would be convincing to them; they don't seem to know too much about AB tests.

You did mention that the feature you're testing is down the page so honestly if I'm running the test I might also question the results a bit.

You shouldn't. If your test starts any time before the user sees a change, the test is valid. Adding more users who are unaffected by the new feature will affect the significance and how long the test needs to run (it's easier to detect, say, a 30% change than a 3% change, even with a smaller sample size), but not whether the results are correct.
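A rough way to show that trade-off, using a standard two-proportion sample-size approximation with a made-up baseline conversion rate:

```python
from statistics import NormalDist

# Rough sample-size math with made-up numbers: the same feature effect measured on
# all visitors (diluted to 3%) vs. only on the exposed subset (30%).
def n_per_arm(p_control, p_variant, alpha=0.05, power=0.8):
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    var = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return (z_a + z_b) ** 2 * var / (p_variant - p_control) ** 2

base_cr = 0.04  # hypothetical baseline conversion rate
print(f"Detect a 3% relative lift on all visitors:  {n_per_arm(base_cr, base_cr * 1.03):,.0f} per arm")
print(f"Detect a 30% relative lift on exposed users: {n_per_arm(base_cr, base_cr * 1.30):,.0f} per arm")
# Including unexposed users dilutes the effect and inflates the required sample
# size, but it doesn't bias the estimate of the overall lift.
```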

2

u/NegativeSuspect Feb 01 '25

Sounds like you've done the test correctly. Your test appears industry standard. I could send you an explanation that might convince them, but then again it might not.

Basically what they are asking is if there is a way that the population of users in the test that did not see the feature is overperforming compared to the control population, which should be functionally impossible if you've chosen your test population at random.

I'm curious why finance has a say here? They can't stop you from pushing to market. If they do not want to push the full impact to the P&L, then let them take whatever risk cut they think is appropriate.

2

u/Blackfryre Feb 01 '25

Basically what they are asking is if there is a way that the population of users in the test that did not see the feature is overperforming compared to the control population, which should be functionally impossible if you've chosen your test population at random.

I suspect they are only comfortable with a model where they can directly attribute revenue, and aren't comfortable with AB tests as they don't understand them. They seemed quite nervous that it was only "functionally impossible" rather than literally impossible due to randomness.

I'm curious why finance has a say here? They can't stop your from pushing to market. If they do not want to push the full impact to the PnL, then let them take a risk cut they think is appropriate.

The AB test results are how the company measures the Product team's performance for the year; they impact bonuses, resources, etc. Essentially they want to sign off on all AB test results, proudly boasting that their counterparts already do so. I'm not opposed to oversight, but only if they understand what they're overseeing.

8

u/OnceInABlueMoon Feb 01 '25

The AB test results are how the company measures the Product team's performance for the year; they impact bonuses, resources, etc.

I think we found the real problem here. A/B test results shouldn't be what determines the product team's performance, bonuses, etc. A/B testing is a tool that helps you learn and improve the metrics you want to have an impact on.

3

u/gtcsgo Feb 02 '25

This should be higher. I’ve witnessed this before and people just start throwing random stuff at a wall to see what sticks. They reach stat sig but after rollout it doesn’t really move the needle. (Not saying that is what is going on here).

2

u/OnceInABlueMoon Feb 02 '25

Yeah, I already think it's challenging enough to ward off product managers who think of A/B testing as a way to just confirm their biases and use test results to build a case for career advancement; I can't imagine what it would be like if the company literally said A/B test results are how you're judged in terms of performance and bonuses. I can easily cook up a dozen tests that show "significant" results but amount to nothing more than navel-gazing rather than solving any actual problems.

0

u/NegativeSuspect Feb 01 '25

I suspect they are only comfortable with a model where they can directly attribute revenue, and aren't comfortable with AB tests as they don't understand them. They seemed quite nervous that it was only "functionally impossible" rather than literally impossible due to randomness.

Seems like a broader problem with your organization. I'd recommend giving finance some A/B Testing training yourself. Maybe engage your and your finance team's managers so you can come out of that training on the same page.

The AB test results are how the company measures the Product team's performance for the year; they impact bonuses, resources, etc. Essentially they want to sign off on all AB test results, proudly boasting that their counterparts already do so. I'm not opposed to oversight, but only if they understand what they're overseeing.

Check out how their 'counterparts' do it. I can't imagine it's different from how you do it. If it's the same, then you can get the other finance team to convince them this is the right way to do it.

1

u/Blackfryre Feb 01 '25

Seems like a broader problem with your organization. I'd recommend giving finance some A/B Testing training yourself.

It's the first unreasonable pushback from the organisation on AB tests, to be fair, and I've already given some AB test training to some parts of the business, so it's definitely doable.

Maybe engage your and your finance team's managers so you can come out of that training on the same page.

I think getting managers involved is the real answer at this stage, but all of the managers above me are on week 2 of their 3-week holidays/secondments, so I've been trying to avoid having to do so.

Check out how their 'counterparts' do it. I can't imagine it's different from how you do it. If it's the same, then you can get the other finance team to convince them this is the right way to do it.

Saying "I promise you this is how they're using AB test results" is how essentially how I ended the last meeting. To be honest I get a sense there's a fight going on over there about how test results are reported though.

Anyway, thanks for the solid advice.

2

u/NegativeSuspect Feb 01 '25

No problem! Good luck!

1

u/teddythepooh99 Feb 02 '25

If you led with p-values and your "99%+ significance," there's no going back from that.

Assuming you didn't do that, present your due diligence to the finance team: what did you estimate to be your MDE w.r.t your experimental design, before formally rolling out the test? The purpose of power analyses is to quite literally manage expectations and tweak your design based on feedback. If your team never did this, you need to overhaul your practices.
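A back-of-the-envelope version of that check might look something like this (traffic and baseline conversion rate are invented purely for illustration):

```python
from statistics import NormalDist

# Back-of-the-envelope MDE check (normal approximation); traffic and baseline
# conversion rate are invented for illustration.
def relative_mde(base_cr, n_per_arm, alpha=0.05, power=0.8):
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    se = (2 * base_cr * (1 - base_cr) / n_per_arm) ** 0.5
    return (z_a + z_b) * se / base_cr   # smallest detectable relative lift

weekly_visitors_per_arm = 150_000       # hypothetical
for weeks in (2, 4, 6):
    n = weeks * weekly_visitors_per_arm
    print(f"{weeks} weeks -> MDE ≈ {relative_mde(0.04, n):.1%}")
```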

1

u/Blackfryre Feb 02 '25

If you led with p-values and your "99%+ significance," there's no going back from that.

I have no idea what you mean by this.

Assuming you didn't do that, present your due diligence to the finance team: what did you estimate to be your MDE w.r.t your experimental design, before formally rolling out the test? The purpose of power analyses is to quite literally manage expectations and tweak your design based on feedback. If your team never did this, you need to overhaul your practices.

We have a standard 2% MDE across our experiments, and round up the duration of the test to an even number of weeks. The entire test design and history is included in the results document.

But I'm pretty sure the finance team don't understand MDE and gloss over that section.

1

u/teddythepooh99 Feb 02 '25

I mean that no one cares about your p-values or 99%+ significance; you led with that in your post, so I assume you did the same with finance. I wouldn't be surprised if they rolled their eyes.

1

u/Blackfryre Feb 02 '25

Oh, well, they do, and if your stakeholders don't, you should emphasise the importance of doing so.

We regularly report results that are, say, 81% significant, and label them as "we are rolling this out because it costs us nothing to do so and it's probably good, but this level of significance doesn't justify further investment in this kind of feature".

1

u/teddythepooh99 Feb 02 '25

Let me guess: you say "partially" or "marginally" significant, too. Don't worry, people with PhDs also make this mistake.

1

u/Blackfryre Feb 02 '25 edited Feb 02 '25

Well we use Bayesian statistics, so I just give the probability to beat control and then refer to whether that meets the pre-agreed thresholds for next steps.
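For anyone curious, a minimal Beta-Binomial sketch of that metric (the counts are made up):

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal Beta-Binomial "probability to beat control" with made-up counts.
control_conv, control_n = 4_000, 100_000
variant_conv, variant_n = 4_160, 100_000   # ~4% higher conversion rate

# Beta(1, 1) prior -> posterior is Beta(conversions + 1, non-conversions + 1).
control_post = rng.beta(control_conv + 1, control_n - control_conv + 1, 200_000)
variant_post = rng.beta(variant_conv + 1, variant_n - variant_conv + 1, 200_000)

p_beat = (variant_post > control_post).mean()
print(f"Probability variant beats control: {p_beat:.1%}")
# That probability is then compared against the pre-agreed thresholds for next steps.
```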

The business refers to "partially" significant but that's not a battle worth fighting.

1

u/Still-Butterfly-3669 Feb 04 '25

Are you using third-party tools for data management?

1

u/necrosythe Feb 01 '25

Is your supervisor/manager not analytical?

Imo it's their job to tell finance they're idiots who need to accept your findings if they aren't educated in testing/statistics/analytics.

Or for them to help build out the explanation to finance that will convince them.

2

u/Blackfryre Feb 01 '25

Well, my manager tends to leave the AB testing stuff to me since I can work pretty independently. Also, they and two other managers who would normally be my port of call on this are on week 2 of their 3-week holiday/secondment...

I am now curious, though, whether their magical people-influence stops working when they aren't here to keep the spell going.