You’re running an A/B test and observe some data. The sooner you can make the right decision, the better.
Perhaps the biggest suggestion I can make from playing this game (and from running A/B tests) is to take an agile approach to testing. Some people estimate a test duration from a predicted effect size and then set and forget the experiment for a couple of weeks. The problem is that the predicted effect often differs from the real effect, sometimes by a lot. In such cases, precious time is wasted if the test is bound to continue unconditionally (for example, the variation might be losing badly, or the effect might be much larger than predicted). Hence we advise looking at your results often and stopping tests once they begin to show strength.
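As a minimal sketch of what such an interim check might look like, here is a standard two-proportion z-test you could run each time you peek at the results. The visitor and conversion counts are made-up numbers for illustration; note that repeatedly checking against a fixed 1.96 threshold is a simplification.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-score for the difference between two observed conversion rates,
    using a pooled estimate of the conversion probability."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical interim check: after 2,000 visitors per arm,
# A converts at 10% (200/2000) and B at 12% (240/2000).
z = two_proportion_z(200, 2000, 240, 2000)
print(round(z, 2))  # z ≈ 2.02, just past the usual 1.96 threshold
```

If the variation is losing badly, the same statistic simply comes out strongly negative, which is the early signal to discard it rather than let the test run its full scheduled course.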
It may also become obvious quite quickly that there are three factors at play that make the discard-versus-implement decision more difficult. These include:
The more time we give a test to run, and thus the larger the sample size, the tighter the effect ranges of A and B become, making the right decision (implementing or discarding) easier.