
Creating and managing A+ content experiments in Vendor Central

In October, Amazon announced via Vendor Central that they would begin allowing A/B tests on A+ content for vendor listings. With the holiday shopping season in view, Amazon unveiled A+ Content Experiments as a way to potentially drive sales by identifying and presenting the best content possible to customers. 

How to use Content Experiments

 

In Vendor Central, vendors can visit the Experiments Learning Center and find a dashboard to control pages and interpret data and results. “Experiments can help you statistically find the best A+ content for your listings, improve future content, and drive more sales,” notes Amazon in the dashboard. 

Ideas for experiments


A+ content can potentially help customers go from browsing to buying. It goes a step further than fully optimized content and allows you to include detailed images, answer customer FAQs, provide rich details that differentiate you from competitors, add comparison charts, and more. This new program allows vendors to fine-tune these pages even further to make sure they are hitting the mark. Some ideas for content experiments include:

  • Adding or removing a comparison chart
  • Changing the order of existing modules to display information higher up
  • Adapting existing content to a new module
  • Introducing new A+ content modules to your detail pages
  • Using exclusively lifestyle imagery or standard product imagery
  • Highlighting one set of features versus a different set

To achieve conclusive results, vendors should make sure their experiment runs for a long enough duration to provide meaningful insight. Also, make sure the A and B versions are different enough to potentially have an effect on customer behavior.
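To get a rough sense of why duration matters, a standard two-proportion power calculation shows how much traffic each variant needs before a small conversion lift becomes detectable. This is an illustrative sketch using textbook assumptions (~95% confidence, ~80% power, normal approximation), not Amazon's internal methodology; the base rate and lift below are hypothetical numbers.

```python
import math

def sample_size_per_variant(base_rate, min_lift, z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed per variant to detect an absolute lift
    in conversion rate at ~95% confidence and ~80% power (normal approx.)."""
    p1 = base_rate
    p2 = base_rate + min_lift
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / min_lift ** 2)

# Hypothetical scenario: 5% base conversion, hoping to detect a
# half-point absolute lift (5.0% -> 5.5%)
print(sample_size_per_variant(0.05, 0.005))
```

The takeaway: the smaller the lift you want to detect, the more traffic (and therefore time) the experiment needs, which is why short or low-traffic tests rarely produce conclusive results.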

Get your results

Once you have isolated an ASIN to test content variations on and run an experiment, you can check the impact in the dashboard. From the Experimental Details page, you can export results. Amazon calculates A+ experiment results each week based on their cumulative impact and updates the graph weekly until the experiment ends. Results are always aggregated across all enrolled ASINs.

“The gray shading on the graph shows the 95% confidence interval of the result; that means that if you ran the experiment again, there is a 95% likelihood that the result would end up in the shaded region.” In other words, if a vendor were to run the exact same test again, there is a 95% chance the result would land within that shaded range. The interval accounts for external factors, like seasonality or sales spikes, that might have nothing to do with the test.

Running more experiments on the same ASIN may allow you to zero in on specific customer behavior. For instance, certain features, images, or text may work better seasonally or resonate more with select groups of customers.
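To illustrate what a confidence interval like the graph's gray shading represents, here is a minimal sketch of a 95% interval for the difference in conversion rate between two variants, using the standard normal approximation. The numbers are hypothetical and this is not Amazon's exact calculation; it only shows why a result can look positive yet remain inconclusive.

```python
import math

def conversion_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """Illustrative 95% confidence interval for the difference in
    conversion rate between variants A and B (normal approximation)."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    diff = p_b - p_a
    # Standard error of the difference between two independent proportions
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff - z * se, diff + z * se

# Hypothetical example: variant B converts slightly better than A
low, high = conversion_ci(conv_a=500, n_a=10_000, conv_b=530, n_b=10_000)
print(f"95% CI for conversion-rate lift: [{low:.4f}, {high:.4f}]")
```

In this example the interval straddles zero, meaning the observed lift could plausibly be noise; a longer run or a bolder content change would be needed to reach a conclusive result.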

After running a few tests, vendors should get a feel for what hypotheses work or don’t, and can potentially roll these out across existing pages. Amazon noted in the announcement that during a small trial, experiments on A+ content showed sales conversion impacts of up to 6%.

Need more help?

Our team at Vendor Society are experts in creating robust content that gets results. If you need support creating, maintaining, or running experiments on A+ content, please contact us.

Katy Luxem

Katy Luxem is a Salt Lake City-based writer and editor who specializes in online marketing. As a former Amazonian in both the U.S. and U.K. locales, she worked in marketing for several different teams and product lines. Prior to that, she worked at Microsoft and was a journalist. She now enjoys helping businesses succeed and grow with next-level content.
