What can I learn from this page?
FAQs around impact and driver analysis
Who is this guide for?
Account Admins, Survey Admins, Survey Creators, Report Viewers
- What is a driver analysis?
- What does a driver analysis tell me?
- In simple terms (no math) how are the drivers identified?
- How are the high impact (driver) questions identified in Culture Amp?
- Why do some of the questions and also the Engagement questions have N/A as their driver strength?
- Why do all of my departments or teams (demographics) have the same drivers?
- How do we use the driver analysis?
- How is the driver analysis calculated?
- What is the difference between Extreme, Very Strong, Strong, Moderate and Low Drivers?
- Does the driver strength reflect statistical significance?
What is a driver analysis?
Impact is the word we use for a statistical technique called driver analysis. People Intelligence relies on many data and analysis techniques, and driver analysis is one of the most powerful: it lets you move beyond low and high scores and focus on the questions that matter most to outcomes for your culture. Learn how impact is determined.
What does a driver analysis tell me?
The questions identified as the top drivers are the ones most likely having the biggest impact on Engagement (or whatever measure you have set up as your outcome). So if you are able to improve your scores on these questions, you are likely to improve your Engagement score.
In simple terms (no math) how are the drivers identified?
The analysis uses your company's survey data and is not based on other companies' data. We look at how your people responded to the Engagement questions (or other outcome measure, i.e. your Index Factor) and how they responded to all of the other questions.
We look at the people who are most engaged in your company (i.e. those who answer most positively to the Engagement questions) and identify which other questions they are more positive on than other people. We also look at those who are least engaged and see which questions they are less positive about than other people.
By doing this we can identify the questions that more engaged people are more positive about and less engaged people are more negative about.
Putting this information together gives us the top drivers - the questions that seem to have the most impact on engagement levels. This does not mean other questions are unimportant; it just means that the pattern of responses to those questions is less similar to the pattern of responses to the engagement questions.
For example, questions about safety might only be a moderate driver relative to other questions. This may be because safety is at a sufficient level for most employees - this doesn't mean you should allow safety to drop. It is also possible that your drivers of engagement have quite different scores to your engagement scores. This is because the analysis uses rankings and averages.
The key thing to take away is that focusing on the driver questions is more likely to improve engagement than focusing on questions that are not drivers.
How are the high impact (driver) questions identified in Culture Amp?
The strength of each question's correlation with engagement is displayed in Culture Amp's reports using the Impact column.
By default, questions are ordered by driver strength, so you'll see the strongest driver questions at the top of the list and the questions with a very weak driver strength at the bottom. Note that the 'score' for each question is not directly related to the driver strength: the score is an agreement score representing the percentage of respondents who answered either agree or strongly agree to the question.
When comparing the driver strength of questions, keep in mind that it isn't overly important if a question sits one position higher than another question as the driver strength between questions next to each other is often very similar. The main thing is to understand that questions with more overlapping impact circles are having a larger impact on engagement, and questions with circles further apart are having less impact. If you want to see the specific numerical driver strength value, then you can export your data to Excel. Refer to the Geeky FAQs below for more detail.
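To make the distinction concrete, the agreement score described above is just a percentage, entirely separate from driver strength. A minimal sketch, assuming 1-5 Likert responses where 4 = agree and 5 = strongly agree (the function name and data are illustrative, not Culture Amp's code):

```python
def agreement_score(responses):
    """Percentage of respondents answering agree (4) or strongly agree (5)."""
    favorable = sum(1 for r in responses if r >= 4)
    return round(100 * favorable / len(responses))

# Six of ten respondents answered 4 or 5, so the score is 60%.
print(agreement_score([5, 4, 4, 3, 2, 5, 1, 4, 5, 3]))  # 60
```

A question can have a high agreement score and a low driver strength, or vice versa, which is why the Impact column is shown separately from the score.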
Why do some of the questions and also the Engagement questions have N/A as their driver strength?
Some questions may have driver strengths so close to zero that we list them as Not Applicable (N/A). This just means that their relationship to Engagement is weak or inconsistent. They are usually not the best questions to focus on when trying to improve your Engagement score, since the data suggests they will have no consistent impact.
The Engagement questions have N/A listed because they are excluded from the driver analysis. They are excluded because they are being used as part of the outcome index in the statistical analysis. It wouldn't really help to know that Engagement questions are driving or impacting the Engagement index because these questions are actually part of that index. It would be like telling you that happiness causes happiness and that you should act on happiness to improve happiness.
NOTE: You need at least 25 survey responses for any impact to be calculated.
Why do all of my departments or teams (demographics) have the same drivers?
Driver analysis requires much larger samples than calculating % agreement (favorable) scores does. If we ran the driver analysis for smaller teams or departments, the results would be unreliable; they could change a lot from survey to survey and be confusing. For this reason we generally run the driver analysis on the entire company's data so that the results are reliable and statistically valid. It can also be helpful for a company to focus on the same set of drivers, make plans together, and compare notes and actions. Each manager or department will still see their own scores on these drivers, which can help them find something they can improve that is also a driver of outcomes.
How do we use the driver analysis?
Driver analysis is an important extra piece of information to consider when you're deciding what to focus on. It is really just insurance that you won't focus on something that has no relationship with the outcome you have in mind (e.g. increasing Engagement). Don't focus only on the top driver or two; instead look at the top five or ten drivers for something with a lower score than the others that you have the resources and motivation to address. You might also look for themes in the items, such as two or three questions all about leadership or recognition. In that case you might decide to find actions that address the overall area as well as the specific questions. Things are often interrelated.
How do companies action plan using driver analysis?
Some companies may decide on focus questions from the top down with executives deciding on what the focus questions will be and managers below working on the how with their teams. Other companies will opt to let managers or departments decide on their own unique focus questions. It is also possible to combine these approaches and have one or two company wide questions to focus on and also allow local plans to complement these with one or two focus questions they want to act on locally. As mentioned above, you should also identify questions or themes that you believe you have the resources and organizational support to tackle. Sometimes you might find things that you already have initiatives around and it may just be a case of communicating that you'll be addressing the survey results via those initiatives also.
Slightly More Geeky FAQs
How is the driver analysis calculated?
The specific statistic we use in most cases is called Kendall's tau-c. We calculate a tau-c value between the outcome index and every other question in the survey that is not in that index, then use these statistics to rank the questions. Each tau-c calculation compares every respondent's score on the outcome index and their response to a given question against every other respondent's pattern on the same measures. This assesses whether a response on one question can tell us anything about someone's responses to the outcome questions (or not). The calculations for tau-c are quite simple but computationally intensive in terms of the number of comparisons that need to be made.
For larger datasets, where we have more than 3000 responses, we use a Pearson r correlation statistic, because tau-c becomes very slow to calculate and, with the large numbers involved, there is minimal difference between the two statistics.
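A minimal, illustrative sketch of both statistics and the sample-size switch, in pure Python. This is our reading of the description above, not Culture Amp's actual implementation; the function names and the example data are hypothetical:

```python
import math

def kendall_tau_c(x, y):
    """Stuart's tau-c between a question's responses (x) and the outcome index (y)."""
    n = len(x)
    concordant = discordant = 0
    # Compare every pair of respondents: a pair is concordant when the
    # respondent who scored higher on the question also scored higher on
    # the outcome, and discordant when the ordering is reversed.
    for i in range(n):
        for j in range(i + 1, n):
            prod = (x[i] - x[j]) * (y[i] - y[j])
            if prod > 0:
                concordant += 1
            elif prod < 0:
                discordant += 1
    # m is the smaller number of distinct categories on either measure.
    m = min(len(set(x)), len(set(y)))
    return 2 * m * (concordant - discordant) / (n ** 2 * (m - 1))

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def driver_strength(question, outcome):
    """tau-c is O(n^2) in respondents, so switch to Pearson r past 3000 responses."""
    if len(question) > 3000:
        return pearson_r(question, outcome)
    return kendall_tau_c(question, outcome)

# Perfectly aligned responses give the maximum strength of 1.0.
print(driver_strength([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # 1.0
```

The pairwise loop is what makes tau-c "computationally intensive": the number of comparisons grows with the square of the number of respondents, which is why a cheaper statistic takes over for large surveys. Ranking questions by this value gives the ordering shown in the Impact column.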
What is the difference between Extreme, Very Strong, Strong, Moderate and Low Drivers?
We use these terms as a rough guide to the size of the underlying driver strength. The numerical driver strength values are provided when you export data from the main dashboard. We use slightly different labeling conventions for the two statistics because tau-c tends to be more conservative (so its bands sit lower). The underlying value bands that we use are as follows:
Kendall's tau-c:
- > .70 - Extreme
- .50 to .70 - Very high
- .40 to .50 - High
- .30 to .40 - Medium
- .20 to .30 - Low
- < .20 - Insignificant
Pearson r:
- > .80 - Extreme
- .60 to .80 - Very high
- .50 to .60 - High
- .40 to .50 - Medium
- .30 to .40 - Low
- < .30 - Insignificant
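Mapping a numerical driver strength to its label could be sketched as follows, assuming the first set of bands above is the tau-c convention (tau-c being the more conservative statistic); the exact boundary handling in the product isn't specified, so this is illustrative only:

```python
# (threshold, label) pairs for the tau-c banding convention described above.
TAU_C_BANDS = [
    (0.70, "Extreme"),
    (0.50, "Very high"),
    (0.40, "High"),
    (0.30, "Medium"),
    (0.20, "Low"),
]

def band_label(strength):
    """Return the band label for a driver strength, using its absolute value."""
    value = abs(strength)
    for threshold, label in TAU_C_BANDS:
        if value > threshold:
            return label
    return "Insignificant"

print(band_label(0.55))  # Very high
print(band_label(0.15))  # Insignificant
```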
Does the driver strength reflect statistical significance?
No. Driver strength is a direct measure of effect size, and we manage the substantiveness of results by only showing relationships that meet a minimum effect size (e.g. >= .2). We also only calculate the correlations where we have more than 25 respondents, and we recommend focusing on drivers with at least a Moderate (or preferably Strong) relationship to the outcome measure. Statistical significance is sensitive to the number of respondents (sample size): as sample sizes increase, more and more correlations will be deemed significant even where the effect size is minimal. For this reason we prefer an approach based on effect size and the ranking of questions, with a minimum sample size criterion and analysis done at the overall company level.
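To see why effect size is preferred over significance, consider the standard t test for whether a Pearson correlation differs from zero: the t statistic grows with sample size even when the correlation itself stays tiny. A quick illustration (the formula is the textbook t test for a correlation coefficient, not anything specific to Culture Amp):

```python
import math

def t_statistic(r, n):
    """t statistic for testing whether a Pearson correlation differs from zero."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# The same weak correlation (r = .10) looks unremarkable at n = 100
# but highly "significant" at n = 10,000 - yet the effect is no bigger.
print(round(t_statistic(0.10, 100), 2))    # 0.99
print(round(t_statistic(0.10, 10000), 2))  # 10.05
```

This is why a fixed effect-size floor and question ranking give more stable, actionable guidance than significance testing alone.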
- How do I use the reports?
- Participation Rates
- Viewing comparisons in reports
- Demographic spread charts
- How is impact determined?
- Insight Report Overview
- A quick guide to results for Managers