36th International Congress on Assessment Center Methods

Since the early 1970s, assessment center practitioners and academics have met on a regular basis to share research and discuss evolving applications of the methodology. The most recent event was held in St. Petersburg, Florida in mid-October. Participants came from around the world (20 different countries were represented) and heard various presentations on assessment center research and case studies. Described below are some key highlights from selected sessions.

Which is best: consensus discussion by observers or statistical algorithm?

Scott Highhouse, a well-known professor of industrial/organizational psychology, brought a fresh perspective to the long-standing controversy over how best to integrate assessment center data: by consensus discussion among assessors or by a mathematical calculation of an overall assessment rating (OAR). The research results are clear: in most cases, three or four of the many dimensions assessed are sufficient to predict the assessors' final rating. The meta-analysis by Arthur et al. (2003) revealed that a statistical combination of just three dimensions (influencing, problem solving, and organization/planning) predicts job performance much more accurately than the OAR reached through assessor discussion.
Given that research clearly favors simply calculating the assessment result over reaching it through discussion, Highhouse speculated about why consensus discussions remain so popular. He believes it goes back to the very beginning of the assessment center method in the 1940s, when Dr. Henry Murray established the assessment program for selecting future spies at the OSS (Office of Strategic Services), the World War II forerunner of the US Central Intelligence Agency (CIA). Murray was trained as a medical doctor and was accustomed to a team approach and doing "grand rounds." He never used statistical formulas in his assessment approach.

So why is this consensus approach still so popular? Highhouse pointed out that many assessors derive a strong sense of satisfaction from putting the evidence together and creating a holistic view of the assessee during the integration meeting. One could argue that simply averaging ratings does not allow assessors to take into account subtle variations and patterns in participant behavior. However, Highhouse argues that while the consensus discussion might increase assessors' confidence in their results, it can actually yield less accurate predictions. He shared several experiments revealing the shortcomings of human decision making that surface in consensus discussions. He concluded that details and nuance increase confidence in a prediction but reduce accuracy. (In other words, detailed discussion of observations in the assessor meeting can create a perception of thoroughness, yet even experts do not reliably separate relevant from irrelevant data.)

Considering the strong evidence in favor of the statistical approach, Highhouse argues strongly against large numbers of assessment dimensions and against the OAR consensus discussion. He recommends combining dimensions with a simple formula: Problem Solving + Influencing + Organization/Planning + Communication = OAR.
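
To make the recommended mechanical combination concrete, here is a minimal Python sketch. The dimension names follow Highhouse's formula; the 1-5 rating scale, the assessor ratings, and the unit weighting are illustrative assumptions, not details taken from the talk.

```python
import statistics

# Hypothetical ratings for one candidate: one score per assessor per dimension.
# The scale and values are invented for illustration.
ratings = {
    "problem_solving": [4, 3, 4],
    "influencing": [3, 3, 2],
    "organization_planning": [4, 4, 5],
    "communication": [3, 4, 4],
}

# Step 1: average each dimension across assessors -- no consensus discussion.
dimension_means = {dim: statistics.mean(scores) for dim, scores in ratings.items()}

# Step 2: a unit-weighted sum of the four dimensions yields the OAR.
oar = sum(dimension_means.values())

for dim, mean in dimension_means.items():
    print(f"{dim}: {mean:.2f}")
print(f"OAR = {oar:.2f}")
```

The point of the sketch is that the integration step is purely arithmetic: once the independent assessor ratings are in, no meeting is needed to produce the OAR.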

Assessment center integrity: Does the word get around?

Assessment center security has always been a concern. Assessment center exercises are typically proprietary and time-consuming to design and develop. Organizations are often faced with the question of whether to develop new exercises for each repeated administration of the assessment center. There is an underlying concern that past participants will discuss their assessment center experiences with future participants, possibly giving them an unfair advantage.
In this study, two groups of participants were run through a one-day assessment center (three exercises) on consecutive days over a four-year period. The research questions were: Do participants on later days have an advantage over those on earlier days? Do participants in subsequent years score higher than those in previous years? The data clearly show that there were no overall differences from one administration to the next. If participants were actually passing along information about the assessment center from day to day (or year to year), it did not seem to be helping. (There was some anecdotal evidence that the participants did indeed talk about the assessment center.) The means and standard deviations remained stable over the four years, and there was no advantage to going second (or third, and so on).
The researchers concluded that as long as the assessment center is valid, there is no compelling reason to revise the exercises. There seems to be something about assessment centers that makes them almost "cheat proof." If the simulations are "real-life" in design, participants are more likely to be themselves (rather than doing what someone else told them to do). They act more naturally, given the realism of the situation.
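
As a rough illustration of the kind of year-over-year comparison the study describes, the sketch below runs a one-way ANOVA across four simulated annual cohorts. The cohort sizes, score distributions, and seed are invented assumptions, not the study's data; a non-significant result would be consistent with the finding that means and standard deviations stayed stable.

```python
import numpy as np
from scipy import stats

# Simulated overall scores for four annual administrations
# (all values are illustrative assumptions).
rng = np.random.default_rng(42)
cohorts = [rng.normal(loc=3.2, scale=0.6, size=50) for _ in range(4)]

# One-way ANOVA: do mean scores differ across administrations?
f_stat, p_value = stats.f_oneway(*cohorts)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# Means and standard deviations per year, as the study compared.
for year, scores in enumerate(cohorts, start=1):
    print(f"Year {year}: mean = {scores.mean():.2f}, sd = {scores.std(ddof=1):.2f}")
```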
An additional finding relates to the impact of developmental feedback from the assessment center on future performance (on a second assessment center, one year later). The higher performers continued to improve, while the low performers (who tended to resist the feedback) showed little or no positive change.

The question of motivation in developmental assessment centers

It is generally accepted that participants in assessment centers are motivated to do their best. If so, there should be very little variation in motivation either between or within participants. However, given the growth of developmental assessment centers, there may be greater potential for motivation to vary, because participants may have different reasons for attending the program. These differences in motivation may affect performance during the developmental assessment center, which may in turn affect the accuracy of the assessment, the quality of the feedback, and the likelihood that participants accept the feedback. It is also possible that motivation varies across exercises within the assessment center. If so, what steps can be taken to get people more engaged?

In a study conducted by Dr. Alyssa Gibbons, there was clear variation in levels of motivation among 222 participants in a four-exercise developmental assessment center, leading to the conclusion that a participant's attitude toward a developmental experience positively affects motivation. Assessors perceive considerable variance in participants' motivation, and some exercises are more motivating than others. Assessors' ratings of performance are closely tied to their perceptions of motivation. The study showed that the best predictor of participants' motivation was their "attitude toward development in general." Individual exercises were seen as more motivating than group exercises.

Research from Zurich: Non-transparent assessment center improves predictive validity

One of the featured speakers at the congress was the assessment center researcher Dr. Martin Kleinmann from the University of Zurich. His research deals with the effect of participants' assumptions and perceptions about what is being measured on the results of the assessments. He found large individual differences in participants' ability to identify the criteria/dimensions being measured in role plays and group discussions. Some assessment center participants could identify the entire set of assessment dimensions, while others could not guess even a single one. He calls this effect ATIC (ability to identify criteria), and it goes beyond the "officially" measured assessment center dimensions.

Ability of AC candidates to look "behind the scenes"

One interesting finding is that this ability strongly predicts performance in the assessment center: those with the capability "to look behind the scenes" are better performers. One might think this is simply cognitive ability, but that is only partially true. After statistically controlling for cognitive ability, ATIC remains a significant predictor of assessment center performance. Likewise, participants who are able to identify what the interviewer wants to assess later receive higher interview evaluations. There seems to be a social competency that involves identifying what is needed in a social situation, and then adjusting one's behavior to meet these requirements.
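
To show what "statistically controlling for cognitive ability" amounts to, here is a hedged Python sketch of a partial correlation on simulated data. The variable names, effect sizes, and sample size are assumptions made for illustration; only the technique (correlating the residuals after regressing out ability) reflects the kind of analysis described.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of z
    from both, i.e., the correlation of the residuals."""
    x_res = x - np.polyval(np.polyfit(z, x, 1), z)
    y_res = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(x_res, y_res)[0, 1]

# Simulated data: ATIC and AC performance both draw on cognitive ability,
# but ATIC carries unique predictive variance (all parameters are invented).
rng = np.random.default_rng(0)
n = 200
cognitive = rng.normal(size=n)
atic = 0.4 * cognitive + rng.normal(scale=0.9, size=n)
performance = 0.3 * cognitive + 0.4 * atic + rng.normal(scale=0.8, size=n)

print(f"zero-order r(ATIC, performance) = {np.corrcoef(atic, performance)[0, 1]:.2f}")
print(f"partial r, controlling ability  = {partial_corr(atic, performance, cognitive):.2f}")
```

If ATIC were nothing but cognitive ability, the partial correlation would drop to near zero; in Kleinmann's data it remains a significant predictor.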
