Answers will vary, but should address all three of the following: (1) changes over time in average levels of group performance, (2) changes over time in validity coefficients, and (3) changes over time in the rank ordering of scores on the criterion. Students can then draw on the information below for evidence, or on other evidence not presented here.

Ghiselli and Haire (1960) followed the progress of a group of investment salespeople for 10 years. During this period, they found a 650% improvement in average productivity, with no evidence of leveling off. However, this increase was based only on those salespeople who survived on the job for the full 10 years; it did not hold for all of the salespeople in the original sample.

Criteria also might be dynamic if the relationship between predictor scores (e.g., preemployment test scores) and criterion scores (e.g., supervisory ratings) fluctuates over time (e.g., Jansen & Vinkenburg, 2006). About 60 years ago, Bass (1962) found this to be the case in a 42-month investigation of salespeople’s rated performance. He collected scores on three ability tests, as well as peer ratings on three dimensions, for a sample of 99 salespeople. Semiannual supervisory merit ratings served as criteria. The results showed patterns of validity coefficients, for both the tests and the peer ratings, that fluctuated erratically over time.

The third type of criterion dynamism concerns possible changes in the rank ordering of scores on the criterion over time. This form of dynamic criteria has attracted substantial attention (e.g., Hofmann, Jacobs, & Baratta, 1993; Hulin, Henry, & Noon, 1990) because of its implications for the conduct of validation studies and for personnel selection in general. If the rank ordering of individuals on a criterion changes over time, future performance becomes a moving target, and it becomes progressively more difficult to predict performance accurately the farther one moves in time from the original assessment.
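The "moving target" idea can be made concrete with a small simulation. The sketch below is purely illustrative (it is not from any of the studies cited above): it assumes a hypothetical sample of employees whose true performance drifts between measurement periods following a simple autoregressive process, so the rank ordering gradually reshuffles. The predictor is measured once at hire, and its correlation with the criterion (the validity coefficient) is recomputed at each later period. All sample sizes and coefficients here are arbitrary assumptions chosen for demonstration.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(42)
n = 500        # hypothetical number of employees (assumed)
phi = 0.8      # assumed stability of performance between periods
periods = 6    # e.g., semiannual criterion ratings over 3 years

# Predictor measured once, at hire; initial performance is
# moderately related to it (validity of about .60 at time 0).
predictor = [random.gauss(0, 1) for _ in range(n)]
perf = [0.6 * p + 0.8 * random.gauss(0, 1) for p in predictor]

validities = []
for t in range(periods):
    validities.append(pearson(predictor, perf))
    # Performance drifts: each period keeps only part of the prior
    # signal, so the rank ordering of employees slowly changes.
    perf = [phi * s + (1 - phi ** 2) ** 0.5 * random.gauss(0, 1)
            for s in perf]

print([round(v, 2) for v in validities])
```

Because each period's criterion shares less variance with the original assessment, the validity coefficient shrinks steadily across periods, which is exactly why rank-order instability makes long-range prediction progressively harder.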