Factors Affecting Performance

Factors Affecting Performance
• Motivation
• Performance Environment
• Abilities

Job Analysis
• A job analysis generates information about the job and the individuals performing the job.
  – Job description: tasks, responsibilities, working conditions, etc.
  – Job specification: employee characteristics (abilities, skills, knowledge, tools, etc.) needed to perform the job
  – Performance standards
Job Analysis Methods
• A job analysis can focus on the job, on the worker, or both
  – Job-oriented: focus on work activities
  – Worker-oriented: focus on traits and talents necessary to perform the job
  – Mixed: looks at both

Uses of Job Analysis
• Information from a job analysis is used to assist with
  – Compensation
  – Performance appraisal (criteria)
  – Selection (identifying predictors)
  – Training
  – Enrichment and combination
Some Job Analysis Procedures (Worker-Oriented)
1. PAQ (Position Analysis Questionnaire)
  – Information input (what kind of information does the worker use in the job)
  – Mental processes (reasoning, decision making, etc.)
  – Work output (what machines, tools, or devices are used)
  – Relationships
  – Job context (environment)
  – Other characteristics
2. TTA (Threshold Traits Analysis): measures 33 traits in six areas
  – Physical (stamina, agility, etc.)
  – Mental (perception, memory, problem solving)
  – Learned (planning, decision making, communication)
  – Motivational (dependability, initiative, etc.)
  – Social (cooperation, tolerance, influence)
Occupational Information Network (O*NET)
U.S. Dept. of Labor
  – Worker Requirements (basic skills, knowledge, education)
  – Worker Characteristics (abilities, values, interests)
  – Occupational Characteristics (labor market information)
  – Occupation-Specific Requirements (tasks, duties, occupational knowledge)
  – Occupational Requirements (work context, organizational context)

Other Job Analysis Methods
• CIT (Critical Incidents Technique): collects and categorizes incidents that are critical in performing the job.
• Task-oriented procedures
  1. Task Analysis: compiles and categorizes a list of tasks that are performed in the job.
  2. Functional Job Analysis: describes the content of the job in terms of things, data, and people.
O*NET Basic Skills
• Reading
• Active listening
• Writing
• Speaking
• Critical thinking
• Repairing
• Visioning

Issues to Consider in Developing Criteria for Performance
• Long-term or short-term performance
• Quality or quantity
• Individual or team performance
• Situational effects
• Multidimensional nature of performance at work
• What do we want to foster? Cooperation or competition, or both?
Conceptual versus Actual Criterion
• Conceptual criterion: the theoretical construct that we would like to measure.
• Actual criterion: the operational definition (of the theoretical construct) that we end up measuring.
• We want the conceptual criterion and actual criterion to overlap as much as possible.
[Diagram: overlapping circles for the conceptual and actual criterion, labeling criterion deficiency, criterion relevance, and criterion contamination.]
Criterion Deficiency
• Criterion deficiency: the degree to which the actual criterion fails to overlap with the conceptual criterion.
• Criterion relevance: the degree of overlap or similarity between the actual and conceptual criterion.
• Criterion contamination: the part of the actual criterion that is unrelated to the conceptual criterion.

Types of Performance
• Task performance: generally affected by cognitive abilities, skills, knowledge, & experience.
• Contextual performance: generally affected by personality traits and values; includes helping others, endorsing organizational objectives, & contributing to the organizational climate. Prosocial behavior that facilitates work in the organization.
• Adaptive performance: engaging in new learning, coping with change, & developing new processes.
Criteria
• Criteria should be
  – Relevant to the specific task
  – Free from contamination (do not include factors unrelated to task performance)
  – Not deficient (must not leave out factors relevant to the performance of the task)
  – Reliable

Criteria Used by Industry to Validate Predictors
• Supervisory performance ratings
• Turnover
• Productivity
• Status change (e.g., promotions)
• Wages
• Sales
• Work samples (assessment centers)
• Absenteeism
• Accidents
Validity of Predictors (Personnel Psychology; Schmitt, Gooding, Noe, & Kirsch, 1984)

  Predictor                 No. of Studies   Average Validity
  Special aptitudes               31               .27
  Personality                     62               .15
  General mental ability          53               .25
  Biodata                         99               .24
  Work samples                    18               .38
  Assessment centers              21               .41
  Physical ability                22               .32
  Overall                        337               .28

Classical Model
• An observation is viewed as the sum of two latent components: the true value of the trait plus an error,
  X = t + e
• The error and the true component are independent of each other.
• The true and error components can't be observed directly.
Types of Reliability
• Test-retest reliability
• Alternate-form reliability
• Split-half reliability
• Internal consistency (a.k.a. Kuder-Richardson reliability; a.k.a. coefficient alpha)
• Interrater reliability (a.k.a. interscorer reliability)

Test-Retest Reliability
• Test-retest reliability is estimated by comparing respondents' scores on two administrations of a test.
• It assesses the temporal stability of a measure; that is, how consistent respondents' scores are across time.
• The higher the reliability, the less susceptible the scores are to random daily changes in the condition of the test takers or of the testing environment.
• The longer the time interval between administrations, the lower the test-retest reliability will be.
• The concept of test-retest reliability is generally restricted to short-range random changes (the time interval is usually a few weeks) that characterize the test performance itself rather than the entire behavior domain that is being measured.
• Long-range (i.e., several years) time intervals are typically couched in terms of predictability rather than reliability.
• Test-retest reliability is NOT appropriate for constructs that tend to fluctuate on an hourly, daily, or even weekly basis (e.g., mood).
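In practice, test-retest reliability is just the Pearson correlation between the two sets of scores. A minimal sketch (the respondents and scores below are hypothetical, invented for illustration):

```python
import statistics

def pearson(x, y):
    """Pearson correlation between two score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five respondents, tested two weeks apart.
time1 = [12, 15, 9, 20, 17]
time2 = [13, 14, 10, 19, 18]
print(f"test-retest r = {pearson(time1, time2):.2f}")
```

A high correlation indicates that respondents keep roughly the same rank order across administrations, i.e., the measure is temporally stable.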
Reliability
• How consistent is a measure over repeated applications?
• Consistency is a function of the error in the measure.

Signal to Noise
• If we view an observation as X = t + e, then under the assumption of independence we can define reliability as the ratio of true-score variance to total variance:

  ρ = σt² / (σt² + σe²)
Job Analysis of the Student
• Cognitive skills: analysis, innovation, ability to learn
• People skills: cooperation, conflict resolution, & emotional intelligence
• Communication: written and verbal communication skills
• Motivation and commitment

Sources of Unreliability
• Item sampling
• Guessing
• Intending to choose one answer but marking another one
• Misreading a question
• Fatigue factors
Methods of Estimating Reliability
• Test-retest
• Parallel (alternate) forms
• Split-half (must use the Spearman-Brown adjustment)
• Kuder-Richardson (alpha)
• Inter-rater

Problems With Reliability
• Homogeneous groups have lower reliability than heterogeneous groups
• The longer the test, the higher the reliability
• Most reliability estimates require that the test be one-dimensional
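A sketch of the split-half procedure with the Spearman-Brown adjustment (item scores are hypothetical; halves are formed from odd- vs. even-numbered items, one common convention):

```python
import statistics

def pearson(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x) ** 0.5
                  * sum((b - my) ** 2 for b in y) ** 0.5)

# Hypothetical item scores: rows are respondents, columns are six items.
items = [
    [3, 4, 3, 5, 4, 4],
    [2, 2, 3, 2, 1, 2],
    [5, 4, 5, 5, 5, 4],
    [3, 3, 2, 3, 3, 3],
    [4, 5, 4, 4, 5, 5],
]

odd = [sum(row[0::2]) for row in items]    # half 1: items 1, 3, 5
even = [sum(row[1::2]) for row in items]   # half 2: items 2, 4, 6

r_half = pearson(odd, even)
# Spearman-Brown: projects the half-test correlation up to full test length.
r_full = 2 * r_half / (1 + r_half)
print(f"split-half r = {r_half:.2f}, Spearman-Brown adjusted = {r_full:.2f}")
```

The adjustment is needed because the raw split-half correlation describes a test only half as long as the real one, and shorter tests are less reliable.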
Validity
• 1. Whether a test is an adequate measure of the characteristic it is supposed to measure.
• 2. Whether inferences and actions based on the test scores are appropriate.
• Similar to reliability, validity is not an inherent property of a test.

Establishing Validity
• Content validity: the degree to which the items in a test are a representative sample of the domain of knowledge the test purports to measure.
• Criterion-related validity: the degree to which a test is statistically related to a performance criterion.
  – Concurrent validation
  – Predictive validation
• Construct validity: the degree to which a test is an accurate measure of the theoretical construct it purports to measure.
  – Multitrait-multimethod approach
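Criterion-related validity is typically summarized as the correlation between predictor scores and a performance criterion. A minimal sketch with made-up data (all scores and the predictive-design framing are illustrative assumptions):

```python
import statistics

# Hypothetical data: selection-test scores and later supervisor ratings.
# Because the criterion is collected after hiring, this would be a
# predictive (rather than concurrent) validation design.
test_scores = [78, 85, 62, 90, 70, 88, 65, 74]
ratings = [3.5, 4.0, 2.8, 4.5, 3.0, 4.2, 3.1, 3.4]

mx, my = statistics.fmean(test_scores), statistics.fmean(ratings)
cov = sum((a - mx) * (b - my) for a, b in zip(test_scores, ratings))
r = cov / (sum((a - mx) ** 2 for a in test_scores) ** 0.5
           * sum((b - my) ** 2 for b in ratings) ** 0.5)
print(f"criterion-related validity r = {r:.2f}")
```

Real validity coefficients are far more modest (the Schmitt et al. table above averages about .28); the tiny invented sample here simply shows the computation.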
[Figure: three panels illustrating poor reliability/poor validity, good reliability/poor validity, and good reliability/good validity.]

Performance Appraisal Goals
• Assessment of work performance
• Identification of areas that need improvement
• Accomplishing organizational goals
• Pay raises
• Promotions
Potential Problems
• Single criterion: most jobs require more than one
• Leniency: inflated evaluations
• Halo: one trait influences the entire evaluation
• Similarity effects: we like people like us
• Low differentiation: no variability
• Forcing information: making up our minds too soon

Possible Solutions
• Use of multiple criteria
• Focusing on behaviors
• Using multiple evaluators
• Forcing a distribution
• Important issues:
  – Training the evaluators
  – Rater's motivation
Methods of Performance Appraisal
• Basic rating forms
  – Graphic forms
  – BARS (behaviorally anchored rating scales)
  – BOS (behavioral observation scales)
  – Checklists (based on ratings of critical incidents)
  – Mixed scales
  – 360-degree feedback
• None have shown an overall advantage
• Supervisor's assessment
• Self-assessment: generally people recognize their own strengths and weaknesses, but their ratings are generally a bit inflated.
• Peer assessment: very accurate in predicting career advancement.
Performance Appraisals
• PA systems that have failed in court generally were
  – Developed without the benefit of a job analysis
  – Conducted in the absence of specific instructions to raters
  – Trait-oriented rather than behavior-oriented
  – Did not include a review of the appraisal with the employee
