Ecommerce trends at Paris Retail Week

Physical or digital? We found merchants doubling down on both at Paris Retail Week. At the big event in Paris last month, we met retailers intent on merging the online and offline shopping experience in exciting new ways. See who we met and what the future of digital might hold for global ecommerce.

Representatives from our European team had a great time at the big ecommerce event, one of the 'sectors' at Paris Retail Week. Outside of the event, it was great to catch up with Maukau, our newest Shopify agency partner in France. (Bonjour!) Among the many digital sales and marketing trends we observed throughout the week, a few emerged again and again: mobile-first, phygital experiences, and always-on, multi-channel marketing.

Getting phygital

Phygital? Is that a typo? Hardly. It's the latest trend in ecommerce, and it was everywhere at Paris Retail Week. Phygital combines "physical" and "digital" experiences in a new ecosystem, offering the consumer a full acquisition experience across different channels. From payment providers to marketing agencies, everyone was talking about going phygital.

One of our favourite presentations was by AB Tasty, who focused on how optimising the client experience can boost sales and conversions in the long term. It's not enough to promote your products, nor to link to an influencer for social proof -- you need to create a full customer experience. Starbucks and Nespresso are good examples of how this works offline, ensuring that a customer who comes in to drink a coffee will linger for the next 20-30 minutes. By keeping customers in the shop, they will eventually order more. The goal is to reproduce this immediately sticky experience online too, and focusing on web engagement benchmarks is the best way to track your progress here.
Using the example of conversion rate optimisation (CRO) for mobile apps, AB Tasty's Alexis Dugard highlighted how data-driven analysis of UI performance, at a very detailed level, can help clarify how mobile shopping connects with a wider brand experience. In the end, customer experience means knowing the customer.

81% of consumers are willing to pay more for an optimal customer experience -- the statistic speaks for itself! Brands that are reluctant to invest in customer experience, either online or offline, will hurt their bottom line, even if this isn't immediately apparent. Those that do invest in multi-channel customer experience are investing in long-term growth fuelled by higher Average Order Value (AOV).

Another great talk was from Guillaume Cavaroc, a Facebook Academie representative, who discussed how mobile shopping now overlaps with offline shopping. He looked at experiments in tracking customers across their journeys, with mobile login as a focal point.

In the Google Retail Reboot presentation, Loïc De Saint Andrieu, Cyril Grira and Salime Nassur pointed out the importance of data in retail. For ecommerce sites using the full Google stack, Google data represents the DNA of the company and Google Cloud Platform is the motor of all the services, making multi-channel data more useful than ever for smart targeting and customer acquisition. The Google team also argued that online shopping experiences without enough data will turn to dust, unable to scale, and that in the future every website will become, in one way or another, a mobile app.

In some ways, "phygital" really means mobile-first. This message rang out clearly in France, a mobile-first country where a customer's first encounter with your brand or product is inevitably via mobile -- whether through a browser, a specific app or a social media feed.
Multi-channel experience (and the data you need to optimise it)

Physical marketing is making a comeback. Boxed CEO Chieh Huang and PebblePost founder Lewis Gersh presented the success of using online data for offline engagement, which then converts directly back on the original ecommerce site. Experimenting heavily in this area, they've seen personalised notes on invoices and Programmatic Direct Mail (with the notes based on viewed content) generate a 28% increase in online conversion rate. Our real-world mailboxes have become an uncluttered space, and customers crave the feel of a paperback catalogue or simple postcard, to name just some of the physical collateral that's becoming popular again -- and being produced at a higher quality than in the years of generic direct mail.

However, data is still the backbone of retail. In 2017 Amazon spent approximately $16 billion (USD) on data analysis, and it was worth every penny, generating around $177 billion in revenue. Analysing declarative and behavioural data on the shopper's path-to-purchase is a must for merchants competing with Amazon.

Creating an omni-channel experience for the user should be your goal. This means an integrated and cohesive customer shopping experience, no matter how or where a customer reaches out. Even if you can't yet support an omni-channel customer experience, you should double down on multi-channel ecommerce. When Littledata's customers have questions about the difference, we refer them to Aaron Orendorff's clear explanation of omni-channel versus multi-channel over on the Shopify Plus blog:

Omni-channel ecommerce...unifies sales and marketing to create a single commerce experience across your brand. Multi-channel ecommerce...while less integrated, allows customers to purchase natively wherever they prefer to browse and shop.
Definitions aside, the goal is to reduce friction in the shopping experience. In other words, you should use anonymous data to optimise ad spend and product marketing. For marketers, this means going beyond pretty dashboards to look at more sophisticated attribution models. We've definitely seen this trend with Littledata's most successful enterprise customers. Ecommerce directors are now using comparative attribution models more than ever before, along with AI-based tools for deeper marketing insights, like understanding the real ROI on their Facebook Ads.

The new seasonality

So where do we go from here? In the world of ecommerce, seasonality no longer means just the fashion trends of spring, summer, autumn and winter. Online events like Black Friday and Cyber Monday (#BFCM) now define offline shopping trends as well, and your marketing must match. "Black Friday" saw 125% more searches in 2017, and "Back to School" searches were up 100%. And it isn't just about the short game: our own research last year found that Black Friday discounting is actually linked to next-season purchasing.

Phygital or otherwise, are you ready to optimise your multi-channel marketing? If not, you're missing out on a ton of potential revenue -- and shoppers will move on to the next best thing.
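As a toy illustration of why the choice of attribution model matters, here is a minimal sketch comparing first-click and last-click credit. The journeys, channel names and order values are invented for illustration; a real comparison would use path data from your analytics tool.

```python
# Compare first-click vs last-click attribution over hypothetical customer journeys.
from collections import Counter

# Each journey: (channels touched in order, order value)
journeys = [
    (["facebook", "email", "organic"], 120.0),
    (["organic", "facebook"], 80.0),
    (["facebook"], 60.0),
]

def attribute(journeys, model):
    """Credit each conversion's revenue to a single channel per model."""
    credit = Counter()
    for channels, revenue in journeys:
        channel = channels[0] if model == "first_click" else channels[-1]
        credit[channel] += revenue
    return dict(credit)

first = attribute(journeys, "first_click")
last = attribute(journeys, "last_click")
```

The same three orders credit Facebook with $180 under first-click but only $140 under last-click, which is exactly why comparing models side by side changes how ad budgets get judged.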

2018-10-09

How Pufushop used our ecommerce benchmarks to grow sales

"Is my conversion rate good or bad?" We built Littledata's benchmarking feature to help you say goodbye to guessing games and start automatically benchmarking your site against top performers. Now that our benchmark tool has been around for a while, we've started to get a sense of which ecommerce sites are using it most effectively. In other words, we've seen how benchmarks can help websites increase revenue - not in theory but in actual practice. Littledata has now helped hundreds of companies understand how their performance compares with other websites in their niche, using our benchmarking algorithms and clean user interface. But can benchmarks really help you grow sales? I understand if you want to see the data for yourself. One of our long-term customers makes for an ideal case study.

Case study - Pufushop

Over the course of 2017, we helped Pufushop, a Romanian ecommerce site, understand whether their website changes were helping to increase performance - and where they still had work to do. Pufushop is a retailer of baby goods, with a main focus on baby carriers. The products in their store are all premium quality and from top vendors, so comparing them with just any other baby store wouldn't have been relevant. Instead, we compared their ecommerce metrics with specific benchmark segments that were most relevant to their market landscape and business goals.

Ecommerce benchmark segments

Benchmarking is used to measure and compare the performance of a specific indicator, and it's most useful when you map that data onto your internal KPIs and compare performance against similar sites. Littledata specialises in ecommerce analytics, and our benchmark population now includes Google Analytics data from almost 10,000 sites. We break that data into specific categories, such as Marketing, Ecommerce and Speed (site performance), and within each category you can filter by industry, location, website size, and more.
Littledata aggregates reliable data from those thousands of high-performing websites so that you can focus on results. In this customer's case, we analysed their website and business model to provide 5 relevant benchmark segments:

- Romanian websites, to compare KPIs across the regional market
- Small SEO websites, because 60% of Pufushop's traffic comes from search engines
- SEO-driven online stores (more generally, to see how they compare)
- General online shopping websites across the globe, to get a sense of how their funnel compares
- A specific revenue-per-customer category based on shoppers' average basket spend (sites with a similar average order value, no matter the sector)

Key metrics

Web behaviour is not necessarily consistent across industries. We started Pufushop's analysis by looking at key ecommerce KPIs such as checkout completion rate, ecommerce conversion rate and add-to-cart rate, but we didn't just pull these metrics blindly. Starting with the first month, February 2017, we looked at how other stores with a similar average basket value were performing. This helped our client establish what was working and what could be improved. As we worked with them to make sure everything was tracking correctly (after all, benchmarks are only as useful as your data is accurate), they could also check these benchmarks directly in the Littledata app.

Results

Now, for the first time, both Pufushop's Marketing Director and Senior UX Designer had clarity on which areas of the website could be improved to increase sales. Based on the benchmark data, they could see that the main places to improve were:

- The checkout process (to increase the checkout completion rate)
- Product pages (to increase the add-to-cart rate)

Resolving those two main issues would automatically improve the ecommerce conversion rate and indirectly influence revenue per customer. Pufushop decided to use Google Optimize to improve the checkout completion rate.
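The arithmetic behind these KPIs is simple, so it is worth being explicit about it. A minimal sketch, with invented event counts and benchmark values (real numbers would come from Google Analytics and your chosen benchmark segment):

```python
# Compute the key ecommerce KPIs named above from aggregate session/event counts.

def ecommerce_kpis(sessions, add_to_carts, checkouts_started, orders):
    return {
        "add_to_cart_rate": add_to_carts / sessions,
        "checkout_completion_rate": orders / checkouts_started,
        "conversion_rate": orders / sessions,
    }

kpis = ecommerce_kpis(
    sessions=10_000, add_to_carts=370, checkouts_started=250, orders=132
)

# Hypothetical segment medians, to see where the gaps are
benchmark = {
    "add_to_cart_rate": 0.055,
    "checkout_completion_rate": 0.60,
    "conversion_rate": 0.02,
}
gaps = {k: kpis[k] - benchmark[k] for k in kpis}
```

A negative gap against a well-chosen segment is the signal that a funnel stage (here the checkout) deserves testing effort first.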
Google Optimize is an easy-to-use, fast and scalable tool for A/B-testing different experiences on the checkout page. Pufushop conducted a variety of targeted experiments, including:

- Shortening the checkout process
- Eliminating unnecessary fields
- Testing variants of checkout pages
- Split-testing different product pages
- Testing a variety of shipping costs

After a couple of months of testing, the results were significant:

- The add-to-cart rate grew from 3.7% to 5.5%
- The checkout completion rate jumped from 52.8% to 89.7%

Now those are some real results! Having a direction as well as a target helped Pufushop's digital team to focus on clear, achievable goals. As they continue to grow, we're glad to have them as a part of the Littledata family.

Ready to benchmark your site?

If you're in the same place as Pufushop was a year ago, here's a quick guide for how to use ecommerce KPI benchmarks to improve your store performance:

1. Sign up for Littledata's main app or Shopify app
2. Look at the benchmark data and pick an industry and a set of KPIs - the right sectors and segments will help you optimise campaigns
3. Use tools like Hotjar and Littledata's automated reporting to analyse user behaviour around those benchmarks, and define a short list of actions you're going to take
4. Use Google Optimize, or hire a developer, to put those actions into place
5. Monitor how users are interacting with the changes
6. When you have sufficient data to see a clear relationship between those changes and an increase in traffic, revenue or conversions, make the changes permanent and move on to a new set of KPIs

Keep in mind that sometimes the KPIs will reveal issues of wrong messaging (for example on a product page or in an advertisement) where the change is fairly easy to make. In other cases, you will need to develop a long-term strategy for radical changes to your website, such as altering your checkout process.
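Before calling a result like this "significant", it's worth running the numbers. A sketch of a two-sided, two-proportion z-test using only the standard library; the session and conversion counts below are hypothetical, not Pufushop's real data (the post's KPI values are themselves adjusted for confidentiality):

```python
# Check whether an observed rate change is statistically significant.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-approximation p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# e.g. 370 add-to-carts from 10,000 sessions before vs 550 from 10,000 after
z, p = two_proportion_z(370, 10_000, 550, 10_000)
```

If `p` is below your threshold (0.05 is conventional), the uplift is unlikely to be noise; with small traffic volumes the same percentage change can easily fail this test.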
Ecommerce is a fast-moving industry, so you need to be agile and ready to change accordingly. Either way, we're here to help you scale with data-driven strategies for sustainable growth. Now stop reading this post and start benchmarking your site!

Note: In order to maintain data confidentiality, KPI values have been altered in this case study (the results are real, only the benchmarks have been adjusted).

2018-05-24

What to test with Google Optimize

So you've got a brand new tool in your web performance kit - Google Optimize - and now you want to put it to good use. What can you test with Optimize, and how does it work? Firstly, what are the different options for setting up an experiment?

AB Test

Using the in-page editor, you can create an altered version of the page you wish to test. This could be a change of text copy, different styling, or swapping in a different image. You can also add new scripts or HTML if you're familiar with coding. The way this works is that Optimize adds a script after the page loads to manipulate the page text, images or styles. I recommend not switching header elements or large images using this method as, depending on your website setup, there may be a noticeable flicker - try a redirection test (below) instead. You can create many versions with subtly different changes (C, D and E versions if you want) - but remember you'll need a large volume of traffic to spot significant differences between lots of variations. You can also limit the test to a certain segment of users - maybe only first-time visitors, or those on mobile devices.

Multivariate Test

Similar to an AB test, a multivariate test is used when you have a few different aspects of the page to change (e.g. image and headline text) and you want to see which combination is most engaging. To get a significant result, you'll need a large volume of traffic - even more than when testing many options in an AB test.

Redirection Test

This is where you have two different versions of a page - or a different flow you want to start users on. Optimize will split your visitors, so some see the original page and some are redirected to the B version. A redirection test is best when the page content or functionality is very different - perhaps using a whole different layout. The disadvantage is you'll need a developer to build the B version of the page, which may limit the speed of cycling tests.
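How much traffic is "a large volume"? A common rule of thumb from the online-experiments literature is that each variant needs roughly n = 16 · p(1−p) / δ² visitors for 80% power at 95% confidence, where p is the baseline rate and δ the absolute change you want to detect. A sketch with illustrative numbers:

```python
# Rough per-variant sample size needed to detect a given uplift.
# Rule of thumb: n ≈ 16 * p * (1 - p) / delta^2  (80% power, 95% confidence)

def sample_size_per_variant(baseline_rate, min_detectable_relative_uplift):
    delta = baseline_rate * min_detectable_relative_uplift  # absolute change
    return int(16 * baseline_rate * (1 - baseline_rate) / delta**2)

# Detecting a 10% relative lift on a 3% conversion rate:
n = sample_size_per_variant(0.03, 0.10)
```

This is why every extra variant (or every extra factor in a multivariate test) multiplies the traffic you need: each combination must reach this threshold on its own.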
Personalisation

Personalisation is not officially supported by Optimize right now, but we've found it to be a useful workaround. You can assign 99.9% of the visitors who match certain criteria to see the alternative version of the page. An example is where you have a special offer or local store in a particular city - see our step-by-step local personalisation example. You can ensure that all the visitors from that city see a different version of the page. Unfortunately, on the free version of Google Optimize you are limited to 3 concurrent 'experiments' - so it won't be a good solution if you want to run similar personalisation across lots of cities or groups of users.

Next, the question is where to start with tests...

Start with the landing pages

Landing pages get the greatest volume of traffic, and are where small visual changes (as opposed to new product features) make the biggest difference to user engagement. This greater volume allows you to get a significant result quicker, meaning you can move on to the next test quicker. And keep on improving!

So what exactly could you test using Google Optimize? Here are six ideas to get you going.

1. Could calls-to-action (CTAs) be clearer?

Changing the colour or contrast of a key button or link on the page (within your brand guidelines) usually results in more visitors clicking it. This might involve changing the style of the CTA itself, or removing elements close by on the page - to give the CTA more space to stand out.

2. Are you giving the user too many choices?

In Steve Krug's classic Don't Make Me Think, he explains how any small confusion in the user's mind can stop them making any choice. Every choice the user has to make is an opportunity for them to give up. Try hiding one of the options and seeing if more users overall choose any of the remaining options.

3. Is the mobile page too long?
As many sites move to responsive designs that switch layout on smaller screens, mobile pages have become very long. Users may get 'scroll fatigue' before they get to critical elements on the page. Try cutting out non-essential sections for mobile users, or editing copy or images to make the page shorter. You could also try switching sections so that the call-to-action is higher up the page on mobile - although this is harder to achieve without a redirection test.

4. Is localisation important to your users?

You may have discussed providing local-language content for your users, and been unsure if it is worth the costs of translation and maintenance. Why not test the benefits for a single location? As with the personalisation tests, you can show a different local-language (or local-currency) version of the page to half the users in a single location (e.g. Spanish for visitors from Mexico) and see if they convert better.

5. Does the user need more reassurance before starting to buy?

It's easier to build experiments which remove elements from the page, but you should also consider adding extra explanatory messages. A common problem on ecommerce stores is that visitors are unsure what the shipping charges or timing will be before adding to cart. Could you add a short sentence at the start of the journey (maybe on a product page) to give an outline of your shipping policy? Or maybe some logos of the payment methods you accept?

6. Changing header navigation

If your site has a complex mix of products that has evolved over time, it may be time to try a radical new categorisation - maybe splitting products by gender or price point rather than by type. For this test, you'll want to target only new visitors - so you don't confuse regular visitors until you're sure the change is permanent. You will also need to make the navigation changes on all pages across the site.

Good luck!
Littledata also offers consulting and AB testing support, so please contact us for any further advice.

2017-05-30

4 common pitfalls of running conversion rate experiments from Microsoft

At a previous Measurefest conference, one of the speakers, Craig Sullivan, recommended a classic research paper from Microsoft on common pitfalls in running conversion rate experiments. It details five surprising results which took 'multiple-person weeks to properly analyse' at Microsoft, and which were published for the benefit of all. As the authors point out, this stuff is worth spending a few weeks getting right, as 'multi-million-pound business decisions' rest on the outcomes. This research ultimately points to the importance of A/A testing. Here follows an executive overview, cutting out some of the technical analysis.

1. Beware of conflicting short-term metrics

Bing's management had two high-level goals: query share and revenue per search. The problem is that it is possible to increase both of those and yet create a bad long-term company outcome, by making the search algorithm worse. If you force users to make more searches (increasing Bing's share of queries) because they can't find an answer, they will click on more adverts as well. "If the goal of a search engine is to allow users to find their answer or complete their task quickly, then reducing the distinct queries per task is a clear goal, which conflicts with the business objective of increasing share." The authors suggest a better metric in most cases is lifetime customer value, and that executives should try to understand where shorter-term metrics might conflict with that long-term goal.

2. Beware of technical reasons for experiment results

The Hotmail link on the MSN home page was changed to open Hotmail in a separate tab/window. The naïve experiment results showed that users clicked more on the Hotmail link when it opened in a new window, but the majority of the observed effect was artificial. Many browsers kill the previous page's tracking Javascript when a new page loads - with Safari blocking the tracking script on 50% of pages opening in the same window.
The "success" of getting users to click more was not real, but rather an instrumentation difference. It wasn't that more people were clicking on the link - just that more of the clicks were being tracked in the 'open in new tab' experiment.

3. Beware of peeking at results too early

When we release a new feature as an experiment, it is really tempting to peek at the results after a couple of days and see if the test confirms our expectation of success (confirmation bias). With an initial small sample, there will be a big percentage change, and humans have an innate tendency to see trends where there aren't any. The authors give the example of a chart where most experimenters would see the results and, even though they are negative, extrapolate the trend to a positive result at four days. Wrong. What actually happens is regression to the mean: the chart is actually from an A/A test (i.e. the two versions being tested are exactly the same). The random differences are biggest at the start, and then tail off - so the long-term result will be a 0% difference as the sample size increases. The simple advice is to wait until there are enough test results to draw a statistically significant conclusion. That generally means more than a week and hundreds of individual tests.

4. Beware of the carryover effect from previous experiments

Many A/B test systems use a bucketing system to assign users to one experiment or another. At the end of one test, the same buckets of users may be reused for the next test. The problem is that if users return to your product regularly (multiple times daily in the case of Bing), then a highly positive or negative experience in one of the tests will affect all of that bucket for many weeks. In one Bing experiment, which accidentally introduced a nasty bug, users who saw the buggy version were still making fewer searches 6 months after the experiment ended.
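The A/A effect described above is easy to reproduce yourself. A small simulation sketch, with a hypothetical 5% conversion rate on both "variants": the measured difference swings wildly on small early samples and settles towards zero as traffic accumulates.

```python
# Simulate an A/A test: both variants share the same true conversion rate,
# yet early readings show large spurious differences.
import random

random.seed(7)
TRUE_RATE = 0.05  # identical for "A" and "B"

def observed_difference(n_per_variant):
    """Difference in observed conversion rates between two identical variants."""
    conv_a = sum(random.random() < TRUE_RATE for _ in range(n_per_variant))
    conv_b = sum(random.random() < TRUE_RATE for _ in range(n_per_variant))
    return (conv_b - conv_a) / n_per_variant

early = observed_difference(100)      # can easily look like a "winner"
late = observed_difference(100_000)   # regresses to roughly zero
```

Peeking at `early` and extrapolating is exactly the mistake the paper warns about; only the large-sample reading is trustworthy.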
Ideally, your test system would re-randomise users at the start of every new test, so those carryover effects are spread as widely as possible.

Summary

For me, the biggest theme coming out of this research is the importance of A/A tests - seeing what kind of variation and results you get if you don't change anything. This makes you more aware of the random fluctuations inherent in statistical tests. In conclusion, you need to think about the possible sources of bias before acting on your tests. Even the most experienced analysts make mistakes!

Have any comments? Let us know what you think below!
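One common way to get that re-randomisation (a sketch of the general technique, not of how Bing's system works) is to hash the user ID together with an experiment-specific salt: assignment stays stable for a user within one test, but is independent across tests, so a bad bucket in one experiment doesn't carry over to the next.

```python
# Salted-hash bucketing: stable within an experiment, re-randomised across them.
import hashlib

def assign_variant(user_id: str, experiment_salt: str, variants=("A", "B")):
    """Deterministically assign a user to a variant for one experiment."""
    digest = hashlib.sha256(f"{experiment_salt}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user can land in different buckets for different experiments:
v1 = assign_variant("user-42", "checkout-test")
v2 = assign_variant("user-42", "navigation-test")
```

Because assignment is a pure function of (salt, user ID), no server-side state is needed, and changing the salt for each new test is enough to break any correlation with earlier buckets.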

2016-11-27

Personalising your site for a local event with Google Optimize

Google Optimize (standard edition) will be released publicly at the end of October, allowing free access to powerful AB testing and personalisation features. Here's a guide to launching your first test, assuming you have the Google Optimize 360 snippet installed on your page.

Step 1: Create the experiment

I want to trigger a personalisation on Littledata's homepage, shown only to visitors from London, which promotes a local workshop we have running later this month. It's not a real AB test, as we won't have enough traffic to judge whether the banner is a success, but we can use the 'experiment' to launch this personalisation for a local audience. First, I need a new test (click the big blue plus sign) and select an AB test. I'll name my test and set the editor page as our homepage - which is pre-filled from Google Analytics anyway. Since I have Google Analytics linked, I can select a goal from GA as the objective. In this case, the banner will promote the event (which isn't tracked on our site), so the only sensible goal is more pageviews - but it's possible it will also increase signups for our app, so I'll include that as a secondary objective. Next, I need to add a variant, which is going to load my event banner. I've named it 'add yellow bar'. Clicking on the variant row will take me to the editor.

Step 2: Edit the 'B' version

Note: Optimize's editor works as a Chrome plugin, so you'll need to install that in Google Chrome first. It's easy to select an element on the page to edit or hide, but my variant will load a new snippet of HTML code which is not already on the page. So I'll select the element at the top of the page (with ID 'content') and then go to the select-elements icon in the top left. Now that I've got the right element to use as a building block, I'm going to add an 'HTML' change and set it to insert the HTML 'before' the current element. I've pasted in the HTML I recycled from another page.
Once I click apply, we can see the new element previewing at the top of the page. Next, let's check it looks OK on mobile - there's a standard list of devices I can select from. Yes, that is looking good - but if it wasn't, I could click the '1 change' text in the header to edit the code. Lastly, in the editor, you may have noticed a warning notification icon in the top right. This is warning me that, since Littledata is a single-page Javascript site, the variant may not load as expected. I'm confident Optimize is still going to work fine in this case.

Step 3: Launching the experiment

After clicking 'Done' in the editor, I go back to the experiment setup. Usually we'd split the traffic 50:50 between the original and the variant, but in this case I want to make sure all visitors from London see the message. I'll click on the weighting number, and then set 'add yellow bar' to show 99.9% of the time (I can't make it 100%). Then we want to set the geotargeting. The experiment is already limited to the homepage, so I click 'and' to add a second rule and then select 'geo' from the list of rules. I want the yellow bar to show only for visitors from London. City is a standard category, and it recognised London in the autocomplete. As the final step, I click 'Start Experiment'. I can't edit the rules of any running experiment (as this would mess up the reporting), but I can stop and then copy an experiment which is incorrect.

Conclusion

Google Optimize makes it really simple to set up tests and personalisations, although it is missing a few features such as scheduling. The premium edition (Optimize 360) will allow more analysis of tests using Google Analytics, and also allow the import of custom audiences from other Google 360 products. This is powerful if you want to launch a customised landing page experience based on, say, a DoubleClick display ad campaign. So try it out, and if you have any questions, contact one of our experts!
Get Social! Follow us on LinkedIn, Twitter, and Facebook and keep up-to-date with our Google Analytics insights.

2016-10-18

Google Optimize versus Optimizely

I've been an Optimizely certified expert for a couple of years and have now trialled Google Optimize 360 for a few months, so it seems a good time to compare how they stack up.

Optimizely is the current market leader in AB testing (or content experimentation), due to its ease of use and powerful reporting tools. It gives companies an easy way to run many concurrent tests and manage their setup and rollout without involving developers. That was a big step up from Google Content Experiments, where the only way to set up an experiment was to write some Javascript code. The Guardian had some success with Optimizely, increasing subscriptions by 46%.

Google Optimize is an equivalent testing tool, and has copied much of the user interface that made Optimizely popular: you can click on elements within the page to experiment with, and change their style, hide them or move them. My only complaint is that the interface is so simple it can take a while to unbury powerful features, such as transforming the page via a custom script. There have already been many success stories of companies implementing Google 360.

Technically, Optimize's editor is a bit smoother; using a Chrome plugin avoids some of the browser security issues that plagued Optimizely (since internet browsers confused the Optimizely in-page editor with some kind of script hacking). For example, to load Littledata's homepage in the Optimizely editor I have to enable 'insecure scripts' in Chrome and then navigate to a different page and back to force the editor to re-render.

For reporting, Google Optimize 360 gives the ability to see results either in Optimize or as a custom dimension in Google Analytics - so it's equivalent to Optimizely. Right now Optimize lacks some features for advanced scheduling and user permissions, but I expect those to evolve as the product gathers momentum.
The critical difference is in the targeting options.

Optimizely allows you to target experiments based on the device accessing the page (mobile vs desktop, browser, operating system) and, on enterprise plans only, to target based on geolocation. The limitation is that every time Optimizely needs to decide whether to run the test, the check for the user's location may take a few seconds - and the landing page may flicker as a test rule is triggered or not.

Google Optimize can target any audience that you can build in Google Analytics (GA). This means any information you capture in Google Analytics - the number of previous visits, the pages a user has previously seen, or their ecommerce transactions - can be used in a test or personalisation. For example, in Google Optimize you could serve a special message to users who have previously spent more than $100 in your store. Optimizely has no knowledge of the user's actions before that landing page, so the only way you could run an equivalent personalisation is to expose this previous purchase value via a custom script on the landing page (or in a cookie). The beauty of Google Optimize is that you are targeting based on information already captured in Google Analytics. There is no technical setup beyond what you were already doing for Google Analytics, and it doesn't take a developer to build targeting for a very specific audience.

Pricing

Optimizely starts from under $100/month, but to get access to enterprise features (e.g. geo-targeting) you will need to spend $2,000-plus per month. Google Optimize is currently being sold at a flat rate of $5,000/month for the basic tier of Google 360 customers (those with between 1M and 50M sessions per month), but in future it could be offered at a lower price to smaller companies.

Conclusion

Where you'll appreciate the benefits of Google Optimize is in running personalisations based on complex rules about previous user behaviour, or the campaigns they have seen.
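To make the $100-spend example concrete, here is the targeting rule written out as a sketch. The field names and visitor records are invented; in practice this rule lives in a Google Analytics audience definition (or, on Optimizely, in a custom script or cookie you maintain yourself), not in code you write by hand.

```python
# Sketch of the "previously spent more than $100" personalisation rule.

def show_loyalty_message(visitor: dict, threshold: float = 100.0) -> bool:
    """True if this visitor's recorded lifetime spend exceeds the threshold."""
    return visitor.get("lifetime_spend", 0.0) > threshold

visitors = [
    {"id": "a", "lifetime_spend": 240.0},
    {"id": "b", "lifetime_spend": 35.0},
    {"id": "c"},  # no purchase history captured for this visitor
]
targeted = [v["id"] for v in visitors if show_loyalty_message(v)]
```

Note how the rule silently excludes visitors with no recorded history - which is the practical point: the quality of the targeting is only as good as the behavioural data already being captured.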
The more tests you are running, the more time you will save by building the audience in Google Analytics rather than in custom scripts. Google Optimize 360 is currently in beta, but you can add your email to the invite list. For smaller customers, or those with less complex needs, Optimizely still offers better value – but that might change if Google were to offer a limited version of Optimize at a lower price.

2016-10-05

Tips to optimise your ecommerce landing pages

Are your ecommerce landing pages suffering from a poor conversion rate because people aren't engaging? First impressions are everything, even more so online, so your task is to figure out which on-site improvements will help you towards your goals. Once you start optimising, it's a continuous process of reviewing, changing, testing and refining, aiming to find out what is most appealing to your customers: what they like and care about, what makes them trust you, what encourages them to purchase. There is always room for refinement, so here are some tips on what to consider when reviewing your pages.

What are you trying to achieve?

Before you start testing and implementing changes on your landing pages, you have to be clear about what you want to accomplish. Whilst the end goal for an online store is to increase sales, at times you might also want to get more sign-ups, or improve views of or engagement with product pages. Think about what success will look like, as that will help with planning your optimisation tests.

How are you going to measure it?

If you are clear about what you are trying to achieve, it will be easier to set measurable targets. Are you looking to increase your sales by 10%, or pageviews of products by 15%? Or maybe you want your potential customers to browse further and spend more time reading content? Further engagement can also be demonstrated by the visitor scrolling down the page, if you have long product or category pages – in which case you'll want to track how far down the page they get.

I believe in keeping reporting straightforward, so when testing, focus on tracking the important metrics only: ideally just one, or a few if you have to. That will help you measure what is most important for your business at the time. Assuming you are using Google Analytics, like most people looking after digital performance, set up goals to monitor how customers are converting.
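If you do decide to track scroll depth as an engagement metric, the bucketing logic can be kept as a small pure function. This is one way of doing it, not a standard: the 25% milestones and the event wiring below are assumptions.

```javascript
// Map how far a visitor has scrolled to the deepest 25% milestone
// they have passed (0, 25, 50, 75 or 100). Kept as a pure function
// over numbers so it can be tested outside the browser.
function scrollMilestone(scrollTop, viewportHeight, pageHeight) {
  const scrollable = pageHeight - viewportHeight;
  if (scrollable <= 0) return 100; // whole page visible at once
  const percent = Math.min(100, (scrollTop / scrollable) * 100);
  return Math.floor(percent / 25) * 25;
}

// In the browser you might wire it up like this (the GA event
// category and labels are illustrative):
//   window.addEventListener("scroll", function () {
//     var milestone = scrollMilestone(window.pageYOffset,
//       window.innerHeight, document.body.scrollHeight);
//     ga("send", "event", "Scroll Depth", String(milestone) + "%");
//   });
```

In practice you would also remember which milestones have already been sent, so each one fires only once per pageview.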
Our web-based software also makes it easy to keep track of on-site changes, by reporting on changes in trends, goals and pages.

Who are you targeting?

User-focussed content is more effective at engaging your customers and improving your conversion rates, so you should write up your customer personas to be clear about who you are targeting with your landing pages. This also applies to the general look and feel of your ecommerce site. Most importantly, include in your personas the problems your customers are trying to solve or what they are trying to achieve. Once your team knows who your ideal or typical customers are, it will be easier to focus on creating more relevant and engaging content on those pages.

Do you have a clear value proposition?

A value proposition explains why you’re better than or different from your competitors, and what you can deliver that they can’t. When writing it up, focus on benefits, not features. It’s not always about the product looking top-notch (unless you’re in an industry or company where that matters, of course); it’s more about how you can alleviate your customers’ problems. Check out how to write your value proposition by following Geoffrey Moore’s model.

Does your copy reflect your value proposition?

Once you have your customer personas and value proposition, review the existing content on the site against them. Check whether it fits with what your customers are looking for, and explains how you can solve their problems or fulfil their desires. The copy on your site has to reflect how you can improve your potential customers’ lives through what you offer. Great copy informs, compels, captivates, reflects what people search for and promotes key benefits. Econsultancy have compiled a great set of advice from experts on writing copy for product pages. Also check out Demian Farnworth’s articles on Copyblogger for superb advice on writing copy.

Have you found your winning call to action?
This is very important: test your call to action until you find the best-performing one. Your call to action is like a visual sign that guides the buyer towards a specific action you want them to complete. Different things work for different sites. Start off by trying simple changes, like different text, colour, shape, size or placement of the button, to figure out what is most effective for your page. If small changes aren’t helping, then try a more drastic change to the button or page.

Do your pages load fast?

This is pretty self-explanatory. Slow page loading might drive your potential customers away from your online shop, so you should regularly check whether they can view your products within 3 seconds (source: Radware). If you’re using Google Analytics, you can use the Site Speed reports to check how you’re performing and get advice on where to improve. If you don’t have Google Analytics, you can use Google’s online tool PageSpeed Insights. Another tool worth checking out is GTmetrix, which grades your site’s speed performance and gives you a list of recommendations.

Do you need to optimise for mobile?

It’s well known that more and more people are using mobile devices to browse and buy online. But unless you have an unlimited budget for ensuring that your ecommerce site is optimised for mobile, it is best to check in Google Analytics first whether you need to do it now. If you go to the Google Analytics > Audience > Mobile > Overview report, you will get a breakdown of the device categories that buyers are using to visit your online store. Here you can see that the majority of customers, almost 93%, are using desktop, so in this case (assuming you have a limited budget) you might want to make sure you have a responsive site at the very minimum, and leave full optimisation for mobile devices for later, when there is sufficient need.
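The budget check above is simple arithmetic, but it is worth making explicit. A sketch, where the session counts come from the GA device-category report and the 30% threshold is an assumption rather than a GA recommendation:

```javascript
// Share of sessions coming from mobile or tablet devices, given the
// per-category counts from the GA Audience > Mobile > Overview report.
function mobileShare(sessions) {
  const total = sessions.desktop + sessions.mobile + sessions.tablet;
  return total === 0 ? 0 : (sessions.mobile + sessions.tablet) / total;
}

// Rough rule of thumb: only prioritise full mobile optimisation once
// the mobile share passes a threshold (30% here, an assumption).
function shouldPrioritiseMobile(sessions, threshold) {
  return mobileShare(sessions) >= (threshold || 0.3);
}
```

With a 93%-desktop breakdown like the one above, this rule says a responsive site is enough for now; with a 60%-mobile breakdown it flips.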
Now, if the results were different – let’s say 60% of people visited your site via mobile devices – then you would want to ensure they’re getting the best experience on their device and don’t leave the site to buy from a competitor instead.

Are your test results statistically significant?

Evaluating your AB test results isn’t quite as simple as picking the variation with the highest conversion rate; that would be an incorrect way to interpret the outcome. You want to be confident that the results are conclusive and that the changes you tested will indeed improve your conversion rates (or not, depending on the outcome of testing). That’s where statistical significance comes in. It gives you assurance about the results of your tests, taking into consideration your sample size and how confident you want to be in the results. By reaching over 95% statistical confidence, you can be sure that the winning variation performed better because it actually was an improved version, and not simply due to chance. You can easily find a calculator online that tells you whether your AB testing results are statistically significant and whether you should conclude the test – for example, try the calculators by Kissmetrics or Peakconversion.

There is no one winning formula for making your pages more effective, so you have to be proactive and keep testing until you find what works. Have any questions? Leave a comment below or get in touch with our experts!
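For the curious, most of those online significance calculators run a two-proportion z-test under the hood. Here is a minimal sketch using a standard erf approximation; for low traffic or tests with many variations you would want a proper statistics library rather than this:

```javascript
// Standard normal CDF via the Abramowitz & Stegun erf approximation
// (formula 7.1.26), accurate to roughly 1.5e-7.
function normalCdf(z) {
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741 +
    t * (-1.453152027 + t * 1.061405429))));
  const erf = 1 - poly * Math.exp(-x * x);
  const p = 0.5 * (1 + erf);
  return z >= 0 ? p : 1 - p;
}

// Two-proportion z-test: visitors and conversions for the control (A)
// and the variation (B). Returns the two-tailed confidence that the
// conversion rates really differ (compare against 0.95).
function abTestConfidence(visitorsA, convA, visitorsB, convB) {
  const pA = convA / visitorsA;
  const pB = convB / visitorsB;
  const pPooled = (convA + convB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pPooled * (1 - pPooled) *
    (1 / visitorsA + 1 / visitorsB));
  const z = (pB - pA) / se;
  return 2 * normalCdf(Math.abs(z)) - 1;
}
```

For example, 100 conversions from 1,000 visitors against 150 from 1,000 comes out well above 95% confidence, while identical conversion rates come out near zero.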

2016-07-27
