Digitale Bibliotheek
Details of article 6 of 9 articles found
 
 
  Criteria for performance evaluation
 
 
Title: Criteria for performance evaluation
Author: David J. Weiss
Kristin Brennan
Rick Thomas
Alex Kirlik
Sarah M. Miller
Published in: Judgment and Decision Making
Pagination: Volume 4 (2009) no. 2, pages 164-174
Year: 2009
Abstract: Using a cognitive task (mental calculation) and a perceptual-motor task (stylized golf putting), we examined differential proficiency using the CWS index and several other quantitative measures of performance. The CWS index (Weiss and Shanteau, 2003) is a coherence criterion that looks only at internal properties of the data without incorporating an external standard. In Experiment 1, college students (n = 20) carried out 2- and 3-digit addition and multiplication problems under time pressure. In Experiment 2, experienced golfers (n = 12), also college students, putted toward a target from nine different locations. Within each experiment, we analyzed the same responses using different methods. For the arithmetic tasks, accuracy information (mean absolute deviation from the correct answer, MAD) using a coherence criterion was available; for golf, accuracy information using a correspondence criterion (mean deviation from the target, also MAD) was available. We ranked the performances of the participants according to each measure, then compared the orders using Spearman's r_s. For mental calculation, the CWS order correlated moderately (r_s = .46) with that of MAD. However, a different coherence criterion, degree of model fit, did not correlate with either CWS or accuracy. For putting, the ranking generated by CWS correlated .68 with that generated by MAD. Consensual answers were also available for both experiments, and the rankings they generated correlated highly with those of MAD. The coherence vs. correspondence distinction did not map well onto criteria for performance evaluation.
Publisher: Society for Judgment and Decision Making (provided by DOAJ)
Source file: Elektronische Wetenschappelijke Tijdschriften
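The abstract compares participant rankings under different measures using Spearman's r_s. A minimal sketch of that comparison, with hypothetical rankings standing in for the paper's CWS and MAD orders (assuming no tied values, so the classic d-squared formula applies):

```python
def ranks(values):
    # Convert scores to ranks 1..n (smallest value gets rank 1).
    # Assumes no ties; tied data would need averaged ranks.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman_rs(x, y):
    # Spearman's r_s for tie-free data:
    # r_s = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    # where d_i is the difference between the two ranks of item i.
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical performance scores for four participants under two measures:
cws_scores = [2.1, 3.4, 1.7, 4.0]
mad_scores = [0.9, 0.5, 1.2, 0.3]
print(spearman_rs(cws_scores, mad_scores))
```

Identical orderings give r_s = 1, fully reversed orderings give r_s = -1; intermediate values (such as the .46 and .68 reported in the abstract) indicate partial agreement between the two rankings.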
 
 

 
 Koninklijke Bibliotheek - National Library of the Netherlands