Ratings Preview creates a more effective, fair, and constructive performance evaluation process by addressing common challenges in employee evaluation and calibration.
Bias Reduction and Improving Accuracy
Individual manager evaluations are often susceptible to bias, with studies showing that idiosyncratic rater effects can account for over half of performance rating variance. Ratings Preview helps mitigate this by:
Providing an additional review step: calibrators can review employee ratings and offer opinions, leading to more accurate final ratings before calibration.
Promoting shared understanding: by creating an opportunity for calibrators to align on performance criteria and definitions. Research indicates that post-calibration ratings correlate more strongly with other performance metrics than pre-calibration ratings, and calibration processes have been shown to mitigate evaluation bias (Speer, Tenbrink & Schwendeman 2019).
Manager Development and Accountability
Ratings Preview actively supports manager growth and encourages more thoughtful evaluations by:
Encouraging pre-work: calibrators review data and prepare for calibration discussions, making meetings more efficient and helping flag employees needing deeper conversation.
Fostering accountability: Managers know that their recommended ratings will be reviewed by other calibrators in the Ratings Preview step. This encourages greater care in their recommendations, since they know they will need to justify them.
Building shared frame-of-reference: Engaging in the pre-calibration process helps calibrators develop a shared understanding of performance rating definitions, leading to more consistent evaluations across your organization.
Improving the Calibration Process
Ratings Preview enhances the effectiveness and depth of your calibration discussions:
Promoting constructive dialogue: Three deliberately nuanced response options ("I support," "I abstain," and "I have questions") aim to move beyond simple agreement or disagreement to encourage meaningful feedback and deeper insights.
These responses facilitate discussion and understanding rather than voting, helping you focus on areas that need in-depth conversation, while ensuring all viewpoints are heard.
"I support": Indicates clear agreement and endorsement of the rating.
"I abstain": Offers a neutral stance when calibrators lack sufficient information or prefer not to weigh in, serving as a useful starting point when reviewing multiple employees.
"I have questions": Prompts critical evaluation by encouraging calibrators to seek clarification or additional context, making sessions more informative.
This approach fosters transparency and enhances collaborative decision-making among your managers.
| 🟢 Support | 🟡 Abstain | 🔴 Questions |
| --- | --- | --- |
| I agree with the proposed rating | I don’t have enough context | I need clarification or have concerns |
| Clear alignment | Neutral stance | Prompts further discussion |
Ratings Preview creates a structured, transparent, and collaborative environment that enhances the accuracy and fairness of performance evaluations, while promoting meaningful dialogue among calibrators.
Implementing Ratings Preview
Who is Ratings Preview for?
We recommend Ratings Preview for most organizations conducting calibrations, and also for those considering adding calibrations to their performance management process. This feature improves calibration outcomes by enhancing consistency, fairness, and clarity—particularly valuable for organizations navigating complexity, growth, or changes in their performance management approach.
Ratings Preview may not add as much value to your organization if:
You are a small company with high visibility across individuals’ work and level of impact,
You are running developmental-only performance systems, or
You have teams with very mature, well-aligned performance practices and high trust levels.
What Situations Might Benefit from Implementing Ratings Preview?
| You might consider using Ratings Preview if: | Why? |
| --- | --- |
| You’re working in a large, distributed, or matrixed organization | An additional review step helps keep performance standards consistent across functions and locations |
| Your performance system is new or evolving | Calibrators can align on new frameworks, definitions, and expectations before ratings are finalized |
| DEI is a focus for your organization | Reviewing proposed ratings before calibration helps surface and reduce bias early in the process |
| You have high-stakes performance cycles | Extra scrutiny is valuable when ratings link to compensation, promotion, or employment decisions |
| You have new or inexperienced managers | The pre-calibration step builds confidence and gives managers practice applying your performance framework |
| You are trying to build a feedback culture | The structured response options encourage constructive dialogue and shared accountability |
Checklist for Using Ratings Preview
Unsure if Ratings Preview is right for you? This tool can help you determine if your organization could benefit from adding it to your next performance cycle, and how much support to provide to your managers.
| Question | Yes | No |
| --- | --- | --- |
| 1. Are you using ratings (numeric, label-based, or tiered outcomes) this cycle? | ☐ | ☐ |
| 2. Is there a new or revised framework (e.g. leveling guide, rating scale, values)? | ☐ | ☐ |
| 3. Are managers or employees new to this process or unfamiliar with expectations? | ☐ | ☐ |
| 4. Do multiple functions or locations use the same performance framework? | ☐ | ☐ |
| 5. Have you observed rating inflation, inconsistency, or clustering in past cycles? | ☐ | ☐ |
| 6. Do ratings link to compensation, promotion, or employment decisions? | ☐ | ☐ |
| 7. Is DEI or bias reduction a stated goal of your performance system? | ☐ | ☐ |
| 8. Do you have many new or first-time people managers participating? | ☐ | ☐ |
| 9. Is your organization trying to build shared accountability for performance quality? | ☐ | ☐ |
| 10. Would managers benefit from norming around what “great” looks like in your context? | ☐ | ☐ |
How to Interpret Your Results
5 or more ‘Yes’ responses:
Your organization is likely facing conditions where alignment, fairness, and shared understanding are critical.
We strongly recommend using Ratings Preview as part of your cycle.
We also suggest running pre-calibration training for calibrators. This helps them get familiar with the process before finalizing ratings or reviews and creates consistency across teams. Prioritize this support where stakes are high or frameworks are newly introduced.
3 - 4 ‘Yes’ responses:
Ratings Preview will still add value, especially in surfacing early misalignment and building manager confidence.
We recommend using it, alongside lightweight enablement.
This might include practice using sample employee profiles, short guides, or an online learning module. These resources can build comfort with the process without adding too much lift.
Fewer than 3 ‘Yes’ responses:
Ratings Preview may not be necessary this cycle. If your current context is low risk (for example, you’re not using ratings, your system is well-established, or there is high trust and alignment), focus instead on feedback quality and clarity in your performance expectations.
You might still support managers with feedback coaching or quick check-ins to ensure consistency where needed.
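The scoring bands above amount to a simple threshold rule. The sketch below is purely illustrative (the function name and return labels are made up, not part of the Culture Amp platform), and it treats exactly 5 ‘Yes’ responses as falling in the top band so every count is covered:

```python
def ratings_preview_recommendation(yes_count: int) -> str:
    """Map a 'Yes' count from the 10-question checklist to a
    suggested level of Ratings Preview adoption.

    Illustrative only: thresholds mirror the guidance above,
    with a count of 5 grouped into the top band.
    """
    if not 0 <= yes_count <= 10:
        raise ValueError("yes_count must be between 0 and 10")
    if yes_count >= 5:
        # High-need context: alignment, fairness, and shared
        # understanding are critical.
        return "strongly recommended: pair Ratings Preview with pre-calibration training"
    if yes_count >= 3:
        # Moderate need: still valuable, with lightweight enablement.
        return "recommended: use Ratings Preview with lightweight enablement"
    # Low-risk context: focus on feedback quality instead.
    return "optional: focus on feedback quality and clear performance expectations"
```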
Using Ratings Preview for the First Time
When implementing Ratings Preview for the first time, careful consideration of communication, training, and integration into your existing performance process is essential for a smooth and effective rollout.
What to Consider in Your Communications Plan
Transparency and visibility: Communicate upfront that Ratings Preview (and Calibration) will be used as part of the performance cycle. Managers should know that their manager review and proposed ratings will be visible to other calibrators. This transparency builds trust and encourages thoughtful submissions. Calibrators will be able to view and comment on proposed ratings, including those from other managers. Facilitators will be able to see all responses, including comments.
Process overview and timeline: Inform calibrators upfront where Ratings Preview sits within the overall performance process and calibration timeline. Build in appropriate time within your overarching performance process for calibrators to complete the Ratings Preview. While it's designed to be light-touch (around one minute per person), allowing sufficient time prevents rushed reviews.
Role of Ratings Preview vs. the calibration meeting: Emphasize that Ratings Preview is a complementary tool to the calibration meeting, not a replacement. Ratings Preview helps align standards and surface areas for discussion, while the calibration meeting is the forum for making final rating decisions and ensuring overall distribution fairness.
Streamlining calibration meetings: Explain that insights from Ratings Preview will directly inform and streamline calibration meetings. Discussions can focus on employees where questions were raised or significant variations exist, making sessions more efficient and impactful.
Encourage pre-work: Reinforce the importance of managers completing their individual performance reviews and preparing clear explanations for their proposed ratings before Ratings Preview.
Training
Ensure calibrators are well-trained on both the overarching calibration process and the specific functionalities and expectations of Ratings Preview. The aim is to help calibrators develop a shared understanding of performance rating definitions, leading to more consistent evaluations and norming around what “great” looks like in your context.
This training should ideally cover:
The purpose of calibration and Ratings Preview (e.g. bias reduction, creating greater accuracy, and a shared understanding of ‘great’).
How to effectively use the "I support," "I abstain," and "I have questions" response options.
Why the response options are designed to promote constructive feedback and facilitate discussion, not to count votes.
What level of detail and evidence is expected when providing comments or raising questions. If a calibrator disagrees with or changes a rating as a result of the Ratings Preview, they should provide clear, observable evidence.
How to access and review employee profiles for additional context.
Using Ratings Preview Data in Your Calibration Session
We recommend that the completed Ratings Preview is used to help facilitators effectively prepare for the calibration process, rather than as a shared view to use with your calibrators during the session. This approach ensures that comments made by calibrators are reviewed by the facilitator, rather than being seen by the entire calibration group during the session.
However, if there is a high degree of trust between your calibrators, or your group is very familiar with calibrations, you may choose to share all comments.
Steps:
Identify areas of alignment: Determine where "support" and/or "abstain" responses indicate broad alignment.
Identify employees for focused discussion: Pinpoint individuals who have received the most "questions" from other calibrators, as these will be the primary focus of the session.
Facilitators may wish to follow up directly with calibrators before the session to clarify any unclear questions.
Utilize the flag feature: Highlight the employees you wish to focus on during the session:
Enhance discussion focus: These flags will be visible in the main calibration overview to help guide the discussion.
Moving from a Manual Pre-Calibration Process to Using Ratings Preview
If you've been using spreadsheets or other manual processes for pre-calibration, transitioning to Ratings Preview will feel different at first. The shift from manual tracking to a platform-based approach brings clear advantages: centralized data eliminates version control issues, automated audit trails track changes and feedback, and streamlined workflows save time for both HR and managers.
Getting Started with the Shift
Acknowledge the change: Moving from familiar spreadsheets to a new platform requires adjustment. Focus on what stays the same: the goal of ensuring consistency, fairness, and alignment before calibration meetings, while highlighting the efficiency gains.
Provide clear enablement: Ensure HR facilitators and calibrators understand how to navigate Ratings Preview, input ratings and feedback, and generate reports. Show them how the feature integrates with the broader calibration process within Culture Amp.
Start with key features: Point out capabilities that improve on manual methods, such as the "I support," "I abstain," and "I have questions" response options that guide discussion more effectively than spreadsheet comments, easy access to Employee Profiles, and built-in guidance.
Address common concerns: Be ready to explain functionality, emphasize the platform's user-friendly design, and provide support during the transition period.
Consider Running a Pilot
Consider piloting with a smaller group of calibrators / facilitators before organization-wide rollout. This allows you to refine the process and address questions before full implementation.
After your first cycle using Ratings Preview, gather feedback from your users to identify what's working well and where further refinement might help. The Culture Amp platform should make the process more efficient, while maintaining the same focus on alignment and fairness that made your manual process valuable.
FAQs
How do we decide if Ratings Preview is right for us?
Ask yourself:
Are we seeing inconsistencies in ratings or feedback quality across teams?
Are managers confident in what "good" or "great" looks like?
Is fairness a known concern from previous cycles or engagement surveys?
Are we introducing any new frameworks, tools, or expectations this cycle?
If you answer yes to any of these, Ratings Preview likely has value for your organization.
When would Ratings Preview not be needed?
Ratings Preview may not add as much value if:
You are a small company with high visibility across individuals’ work and level of impact,
You are running developmental-only performance systems, or
You have teams with very mature, well-aligned performance practices and high trust levels.
Should I action/review every person in the preview, or just ones where I have something to say?
We currently recommend you action every person, and then only comment when you have specific feedback or questions to raise.
This ensures comprehensive coverage, while helping to efficiently focus discussion time in the following calibration session.
How much time do we recommend allowing for Ratings Preview before the calibration session? What level of detail should calibrators be going into?
Ratings Preview is not a re-review of performance. It is a light-touch process designed to align understanding of performance expectations for each level and category, and to check for alignment, consistency, and fairness before ratings are finalized.
The focus is on surfacing areas that may benefit from discussion, not on debating every rating.
Most calibrators can complete Ratings Preview in around one minute per person.
For a small number of employees, for example those where expectations or outcomes may need more discussion, a few extra minutes can go a long way.
In these cases, calibrators can review the Employee Profile for more detail, and add questions that they want to surface during the calibration session.
Here’s a time-planning guide:
| Employee context | Suggested time | Focus |
| --- | --- | --- |
| Higher- or lower-performing employees | 2–3 min | Ensure the rating reflects contribution, context, and impact |
| Promotion or performance outcomes under consideration | 5 min | Review the rationale, consider feedback and context, and prepare any questions |
| Well-understood, steady contributors with no rating concerns | <1 min | Skim for alignment; no detailed review needed unless something stands out |
Is Ratings Preview enough on its own? Do I still need to run a calibration meeting?
No, we do not recommend only running a Ratings Preview. Ratings Preview and calibration serve different purposes and work best together:
Ratings Preview helps calibrators to align on standards and definitions of performance.
The calibration is the forum for making final rating decisions, adjusting for fairness, and checking distribution across teams.
Even with a well-run Ratings Preview, we recommend holding a calibration session to finalize outcomes.
If we have used Rating Preview, how long should we spend discussing ratings in the calibration session?
Ratings Preview is designed to make calibration sessions faster and more focused. It helps calibrators align on performance before the meeting, so time can be spent where it matters most.
The time needed will depend on how many employees need discussion and how confident managers are in their ratings. For example, is this their first time using the framework? Are they managing in a fast-paced or complex part of the business?
As a guide, we recommend allowing around 2 minutes per person who needs to be discussed. That should be enough to clarify intent, raise questions, and confirm alignment.
If Ratings Preview is done well, you won't need to review every rating in detail. Focus your time on the outliers, or the ones where questions remain.
If a calibrator disagrees with or changes a rating, what rationale or evidence should they provide to back up their recommendation?
The purpose of Ratings Preview is to build a shared understanding of what good performance looks like, not to challenge for the sake of debate.
If a calibrator wants to propose a different rating or flag one for discussion, their input should be supported by clear, observable evidence.
Helpful forms of evidence might include:
Specific outcomes achieved: such as progress on goals, delivery against KPIs, or impact on team or business results
Examples of behaviors: stories that illustrate how the person demonstrated key competencies or values
Summaries of feedback: insights from peers, stakeholders, or skip-levels that reflect patterns over time
A useful prompt to guide your input:
“What outcomes or observable behaviors led you to this rating, and how do they align with our expectations for this role level?”
What are the benefits of Ratings Preview?
Ratings Preview helps managers align on what performance standards look like in practice. This improves consistency and fairness across teams.
It can also reduce bias early in the process, before final decisions are made. This creates a stronger foundation for a more equitable calibration session.
It builds manager confidence and helps them practice using your performance framework. Over time, it also lifts the overall quality of feedback.
Done well, Ratings Preview saves time in the final calibration. It helps surface misalignment early and shows employees that your organization is serious about fairness, clarity, and learning.
I see that I can ‘support’ or ‘abstain’ on a rating - does that mean that we are ‘voting’?
No, Ratings Preview is not a voting process. The options you see ("I support," "I abstain," and "I have questions") are designed to guide discussion, not make decisions by majority.
"I support" means you agree with the rating.
"I abstain" means you don't have enough context to weigh in.
"I have questions" flags that you'd like more information or clarification.
These responses help focus the conversation. They show where there’s alignment, where there may be gaps, and where the group needs to dig deeper. The goal is shared understanding, not simply counting opinions.
➡️ Next Steps
Use the Ratings Preview checklist to assess whether it’s right for your organization
Decide when and how to introduce Ratings Preview in your performance cycle
Build awareness through early communication, especially with calibrators and managers
Develop your enablement plan: include training, process overviews, and examples
Pilot with a small group, if helpful, to refine your rollout
Read the facilitator and calibrator guides for practical setup and usage tips
💬 Need help? Just reply with "Ask a Person" in a support conversation to speak with a Product Support Specialist.