Creating Control Groups/Hold Out Groups in Engagement Setups

Related products: PX Engagements

Hey all, I'm looking to set up 'hold out/control groups' for some of our new engagements. It's not an A/B test where I'm comparing two different engagements, but rather one group of users who would not see engagements (our baseline) and another group who will. This should help us validate whether our engagements are influencing adoption/behavior of tools. We are new to PX and only have several weeks of data, so looking for an increase in trends would be a hard route for us at this time. We could turn engagements on/off for different months, but seasonality is specific to our user base and would be a flaw in our results. I'm aware there is no built-in tool for this (but there could be in the future, PX product team ;)). Has anyone else found a creative way to create a control group?





Sarah



Hi Sarah,





This is a great question! I've been thinking about it a bit too, but haven't had a chance to experiment yet. I was thinking we might use an Account rule where the name starts with __, and then target perhaps half of our accounts to see the engagement.





I'm curious to hear what others may have tried!




Hey Sarah and Lila, thanks for your good question and ideas.





Sarah, you had a good idea yourself, and while we discussed it on a separate thread, I am sharing it here for the broader audience. You could follow one of these ideas:





1. Use naming conventions ('starts with'/'ends with' certain letters) for the random control group (you were already thinking of this); or





2. Create a numeric user custom attribute holding a sequence number (1 through the total user count) that can then be used for the split: export the users to a CSV, add the numbers, and use the CSV loader utility to update the user records (see the sketch after this list); or





3. We have partly done this internally: identify the areas/categories with a high volume of support tickets/CSM questions (think of this as identifying the biggest pain points for your customers), launch in-app guides in those specific areas, and correlate that with the number of support tickets/questions/CS calls going down. Support tickets are easier to track and quantify.
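If it helps, here is a rough sketch of what the export/update step for idea 2 could look like outside PX. The file, column, and attribute names (px_users_export.csv, identifyId, sequence, controlGroup) are only placeholders for illustration, so adjust them to your own export and custom attributes:

```python
# Sketch for idea 2 (assumptions: your user export has an "identifyId" column;
# "sequence" and "controlGroup" are example attribute names, not PX-mandated).
import csv
import random

with open("px_users_export.csv", newline="") as f:
    users = list(csv.DictReader(f))

random.shuffle(users)  # randomize order so the sequence number itself is random

with open("px_users_update.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["identifyId", "sequence", "controlGroup"])
    writer.writeheader()
    for i, user in enumerate(users, start=1):
        writer.writerow({
            "identifyId": user["identifyId"],
            "sequence": i,  # 1 .. total user count
            "controlGroup": "control" if i % 2 else "treatment",  # ~50/50 split
        })
```

The resulting CSV can then go back in through the CSV loader utility to update the user records.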





So, to track real value/adoption driven by PX Engagements, the broader concept would be as follows (this can be applied to your idea as well):





1. Identify the biggest pain points (e.g., using support tickets),





2. Associate a KPI with it (e.g., decrease support tickets by 10% in month 1 for customers in the onboarding phase); this will be key,





3. During onboarding, target/address those pain-point areas of the product using in-app engagements,





4. Track your metrics (e.g., week over week or month over month, have support tickets decreased for customers in the 'onboarding' stage?). A rough sketch of this tracking follows.
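For step 4, a minimal sketch of that week-over-week tracking done outside PX might look like the following. The file and column names (tickets.csv, created_at, lifecycle_stage) are hypothetical, so swap in whatever your support system actually exports:

```python
# Sketch for step 4: week-over-week support ticket counts for onboarding customers.
# "tickets.csv", "created_at", and "lifecycle_stage" are assumed names.
import pandas as pd

tickets = pd.read_csv("tickets.csv", parse_dates=["created_at"])

# Keep only tickets raised by customers in the onboarding stage.
onboarding = tickets[tickets["lifecycle_stage"] == "onboarding"]

# Count tickets per week and compute the week-over-week change.
weekly = (
    onboarding.set_index("created_at")
    .resample("W")
    .size()
    .rename("ticket_count")
    .to_frame()
)
weekly["wow_change_pct"] = weekly["ticket_count"].pct_change() * 100
print(weekly)
```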





This will help you see whether PX engagements are driving value/adoption and increasing CSAT/NPS.





@denver_pm I understand you had a very good idea about randomization on user ID for a control group; it would be great if you shared it here for others as well. Thanks!




These are all good options, and I am pretty sure there will never be a "one size fits all" approach for every PX customer.













It will be important to get the sample size/construct of each grouping right too, so roughly a 50/50 split picked "randomly" from all possible users/accounts makes the most sense and will ensure that each sample is valid and not biased.













As Harshi said, you could pull these accounts out of PX, do your sorting/filtering/data science outside to evenly apply some classification (group1, group2, groupN) or numbering code (1, 2, N), and then re-upload to update them. However, this process would likely need to be repeated periodically to account for new or removed users.
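One wrinkle with repeating the process: if you re-randomize on every refresh, users can hop between groups. A deterministic assignment (my own suggestion, not a PX feature) derived from a hash of the user ID keeps existing users stable and only classifies new ones. A minimal sketch, assuming the export has an identifyId column and using controlGroup as an example attribute name:

```python
# Repeatable (non-random) assignment: the group is derived from a hash of the
# user ID, so re-running on a fresh export never reshuffles existing users.
import csv
import hashlib

def assign_group(user_id: str) -> str:
    """Deterministic ~50/50 split based on a hash of the user ID."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return "group1" if int(digest, 16) % 2 == 0 else "group2"

with open("px_users_export.csv", newline="") as src, \
     open("px_users_update.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["identifyId", "controlGroup"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "identifyId": row["identifyId"],
            "controlGroup": assign_group(row["identifyId"]),
        })
```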













Harshi is also right that focusing on specific target sets of users/accounts and then applying the 50/50 split against those would be very important in many cases, so that you can do targeted apples-to-apples comparisons. This will certainly be a good next step after finding a good way to apply a 50/50 split across the full set of users/accounts.













Without knowing all the data available to you, alphanumeric "String" type fields seem like a very good option, as you could use the "starts with" or "ends with" Audience filtering options. If they are already available in your dataset, string fields containing only numeric values would work especially well, since you could break them up quickly by sets:













group1 - starts/ends with [0,1,2,3,4] or [0,2,4,6,8]





group2 - starts/ends with [5,6,7,8,9] or [1,3,5,7,9]
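Before building those audience filters, you could sanity-check how even a chosen digit split would actually be against your own data. A quick sketch, assuming you can export the account numbers to a plain one-per-line text file (account_numbers.txt is just an example name):

```python
# Check how evenly a "starts with"/"ends with" digit split divides exported IDs.
from collections import Counter

GROUP1 = set("01234")  # or set("02468"); everything else falls into group2

with open("account_numbers.txt") as f:
    ids = [line.strip() for line in f if line.strip()]

ends = Counter("group1" if i[-1] in GROUP1 else "group2" for i in ids)
starts = Counter("group1" if i[0] in GROUP1 else "group2" for i in ids)

total = len(ids)
print("ends with  :", {g: f"{n / total:.1%}" for g, n in ends.items()})
print("starts with:", {g: f"{n / total:.1%}" for g, n in starts.items()})
```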













If you need to choose a String field with mixed alphanumeric characters (company name, user ID, email, last name, etc.), it should still work pretty well, although you would need to create many more filters to include every possible letter/number combination across each sample set.













To me, the "ends with" filter option seems like it would lead to more evenly sized samples in all cases, since fields like user/account IDs typically auto-increment and therefore start with the same (or only a few) digits, and there are many more company names that start with "A" than with "X", for example. So, choose your group filter distributions wisely.
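A toy illustration of that point, with a made-up sequential ID range just to show the shape of the two distributions:

```python
# For auto-incrementing IDs, the last digit is close to uniform while the first
# digit is heavily skewed. The ID range below is purely synthetic.
from collections import Counter

ids = [str(n) for n in range(1000, 48000)]

first = Counter(i[0] for i in ids)
last = Counter(i[-1] for i in ids)

print("first digit:", dict(sorted(first.items())))  # '1'-'4' dominate heavily
print("last digit :", dict(sorted(last.items())))   # ~4,700 of each digit
```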













Lastly, you will probably want to be sure to exclude internal and testing users from all sample groups, since they could skew your results unexpectedly.













I hope this helps a little!




Thanks all! I ended up using an 'ends with' filter on our account # field. I did some trial reporting, and groups A and B are within 2 percentage points of each other, giving me the closest thing to a randomized sample I can find. This should do the trick and allow me to hold it constant across several engagements.





Thanks all!




@denver_pm please see this month's release notes: https://support.gainsight.com/PX/Release_Notes/PX_Release_Notes_November_2019 → #3 Improved Advanced Settings in Engagements:

 

Based on your feedback, we made targeting engagements to control groups a product feature.