You’re running an a/b test and observe some data. The sooner you can make the right decision, the better.
Perhaps the biggest suggestion I can make from playing this game (and from running a/b tests) is to take an agile approach to testing. Some people estimate a test duration from a predicted effect size and then set-and-forget for a couple of weeks. The problem is that the prediction is just that: a prediction, and the real effect is often different, sometimes wildly so. If the test is bound to run its full course no matter what, time is wasted whenever the variation is losing badly or the effect turns out larger than predicted. Hence our advice: check your results often and stop tests once they begin to show strength.
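One way to quantify "showing strength" is a two-proportion z-test on the running conversion counts. Here is a minimal sketch (the counts below are hypothetical, and 1.96 corresponds to a ~95% confidence threshold):

```python
import math

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how strongly do A and B differ so far?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Check the running test periodically, e.g. once a day (hypothetical numbers):
z = z_score(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
if abs(z) > 1.96:
    print("Result shows strength; consider stopping the test.")
```

Note that repeated checks against a fixed threshold do inflate the chance of a false positive, so treat an early crossing as a signal to look closer, not as automatic proof.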
It quickly becomes obvious that three factors are at play that make the discard-vs-implement decision more difficult. These include:
The longer a test runs, and the larger the sample it accumulates, the tighter the effect ranges of A and B become, making the right decision (implement or discard) easier.
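The tightening of effect ranges with sample size can be sketched with the normal approximation for a conversion-rate confidence interval; its width shrinks with the square root of the sample size (the 5% baseline rate here is just an illustrative assumption):

```python
import math

def ci_half_width(p, n, z=1.96):
    """Approximate 95% confidence interval half-width for a
    conversion rate p measured over n visitors."""
    return z * math.sqrt(p * (1 - p) / n)

# A 10x larger sample tightens the range by a factor of sqrt(10) ~ 3.16
for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7}: 5.0% +/- {ci_half_width(0.05, n) * 100:.2f}%")
```

So tripling the precision of the measured effect costs roughly ten times the traffic, which is why waiting for tighter ranges trades time against decision quality.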