Mastering Modern Psychological Testing Theory and Methods 1st Edition by Cecil R. Reynolds - Test Bank


Chapter 4 Test Questions

 

  1. In Classical Test Theory, the X represents _______ and the T represents ________.
    1. measurement error; observed score
    2. observed score; stable test-taker characteristics X
    3. observed score; measurement error
    4. stable test-taker characteristics; observed score

 

  1. _________ refers to the consistency or stability of test scores.
    1. Measurement error
    2. Reliability X
    3. Variance
    4. Validity

 

  1. Classical Test Theory focuses our attention on ________ measurement error.
    1. random X
    2. variable
    3. standard
    4. systematic

 

  1. The mean of error scores in a population is equal to ________.
    1. 1
    2. 0 X
    3. -1
    4. 10

 

  1. There is (a) _______ relationship between an individual’s level on a construct and the amount of measurement error impacting their observed score.
    1. no X
    2. weak
    3. moderate
    4. strong

 

  1. ________ is/are usually considered the largest source of error in test scores.
    1. Administrative errors
    2. Clerical errors
    3. Content sampling X
    4. Time sampling

 

  1. On a test comprised of constructed response items, it is important to consider:
    1. administrative errors.
    2. clerical errors.
    3. inter-rater differences. X
    4. time sampling.

 

 

  1. The reliability coefficient (rxx) equals true score variance (s²T) divided by the: (A worked sketch follows this item.)
    1. observed score.
    2. measurement error.
    3. variance due to measurement error.
    4. variance of the total test. X

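A worked sketch of the variance ratio behind this item and the two that follow, using made-up variance values under the classical model X = T + E (the numbers are illustrative only, not from the text):

```python
# Classical Test Theory partitions observed score variance into true score
# variance plus error variance; reliability is the true-score proportion.
true_var = 80.0      # illustrative s²T
error_var = 20.0     # illustrative s²E
observed_var = true_var + error_var   # s²X

r_xx = true_var / observed_var
print(r_xx)  # 0.8 -> 80% of observed score variance is true score variance

# If 6% of observed variance were measurement error, reliability would be .94.
print(round(1 - 0.06, 2))  # 0.94
```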
 

  1. What conclusion could be accurately drawn from a reliability coefficient of .80?
    1. 18% of test score variance is due to true score variance.
    2. 20% of test score variance is due to true score variance.
    3. 64% of test score variance is due to true score variance.
    4. 80% of test score variance is due to true score variance. X

 

  1. If 6% of test scores’ observed variance is due to measurement error, the reliability coefficient of the test would be:
    1. .06.
    2. .36.
    3. .60.
    4. .94. X

 

  1. Test-retest reliability is primarily sensitive to measurement error due to:
    1. content sampling.
    2. content sampling and temporal instability.
    3. factor invariance.
    4. temporal instability. X

 

  1. Alternate form reliability based on simultaneous administration is primarily sensitive to measurement error due to:
    1. content sampling. X
    2. content sampling and temporal instability.
    3. practice effects.
    4. temporal instability.

 

  1. Alternate form reliability based on delayed administration is sensitive to measurement error due to:
    1. content sampling.
    2. content sampling and temporal instability. X
    3. practice effects.
    4. temporal instability.

 

  1. As a general rule, _________ tests produce more reliable scores than ______ tests.
    1. brief; lengthy
    2. intensive; extensive
    3. longer; shorter X
    4. shorter; longer

 

  1. The uncorrected split-half reliability coefficient __________ the reliability of the full test score.
    1. accurately reflects
    2. overestimates
    3. underestimates X
    4. provides an indeterminate reflection of

 

  1. Split-half reliability is primarily sensitive to measurement error due to:
    1. content sampling. X
    2. content sampling and temporal instability.
    3. practice effects.
    4. temporal instability.

 

  1. __________ is sensitive to the heterogeneity of the test content. (A computational sketch follows this item.)
    1. Alternate-form reliability with delayed administration
    2. Coefficient Alpha X
    3. Split-half reliability
    4. Test-retest reliability

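A minimal computational sketch of coefficient alpha, assuming a small made-up matrix of item scores; the `cronbach_alpha` helper and the data are illustrative, not from the text. With dichotomous 0/1 items the same formula reduces to KR 20.

```python
# Coefficient alpha from an examinee-by-item matrix of scores (toy data).
def cronbach_alpha(scores):
    # scores: one row of item scores per examinee
    n_items = len(scores[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([row[i] for row in scores]) for i in range(n_items)]
    total_var = variance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Made-up responses: 4 examinees x 3 items scored 0-3 (polytomous, so alpha applies).
data = [[3, 2, 3], [2, 2, 1], [1, 0, 1], [0, 1, 0]]
print(round(cronbach_alpha(data), 2))  # about 0.88 for this toy data
```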
 

  1. _________ is applicable when test items are scored dichotomously while _________ can be used when test items produce multiple values.
    1. Coefficient Alpha; KR 20
    2. KR 20; Coefficient Alpha X
    3. Split-half reliability; test-retest reliability
    4. Test-retest reliability; split-half reliability

 

  1. On a classroom essay test, __________ is a major concern.
    1. inter-rater reliability X
    2. internal consistency reliability
    3. split-half reliability
    4. test-retest reliability

 

  1. The reliability of composite scores is generally ________ the reliability of the individual scores that contributed to the composite.
    1. equal to
    2. higher than X
    3. lower than

 

  1. Which of the following is a measure of inter-rater agreement that takes into account the degree of agreement expected by chance? (A computational sketch follows this item.)
    1. KR 20
    2. Strong-Campbell Beta
    3. Cronbach’s Coefficient alpha
    4. Cohen’s kappa X

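A minimal sketch of Cohen's kappa for two raters, assuming made-up pass/fail ratings; the `cohens_kappa` helper and the data are illustrative, not from the text:

```python
# Cohen's kappa: observed agreement corrected for the agreement expected by chance.
def cohens_kappa(rater_a, rater_b):
    categories = sorted(set(rater_a) | set(rater_b))
    n = len(rater_a)
    # Observed proportion of agreement.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the two raters' marginal proportions, summed.
    p_chance = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (p_obs - p_chance) / (1 - p_chance)

# Made-up pass/fail ratings from two scorers of the same ten essays.
a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "fail"]
b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "fail"]
print(round(cohens_kappa(a, b), 2))  # 0.58 for this toy data
```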
 

  1. Which of the following methods is necessary when estimating the reliability of a test score intended to predict performance at a future time?
    1. Alternate form reliability with simultaneous administration
    2. Coefficient alpha
    3. KR 20
    4. Test-retest reliability X

 

  1. Which reliability estimate would be preferred for a score derived from a test with heterogeneous content?
    1. Coefficient Alpha
    2. KR 20
    3. Split-Half Coefficient X

 

  1. Which method of rating reliability would be appropriate for scores from a speed test?
    1. Coefficient Alpha
    2. Kuder Richardson 20
    3. Test-retest reliability X
    4. Split-half reliability

 

  1. Reliability coefficients based on a homogeneous sample would likely be ________ coefficients based on a heterogeneous sample.
    1. equal to
    2. larger than
    3. smaller than X

 

  1. As the reliability of a test score_______ the standard error of measurement _______.
    1. decreases; increases X
    2. decreases; decreases
    3. increases; remains the same
    4. decreases; remains the same

 

  1. Sally’s obtained score on a statistics exam is 75. The SEM is 2. What confidence interval would capture her true score 68% of the time? (A worked sketch follows this item.)
    1. 71 to 79
    2. 73 to 77 X
    3. 69 to 81
    4. 70 to 80

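A worked sketch of the standard error of measurement and the 68% confidence interval this item relies on; the standard deviation of 10 and reliability of .96 are made-up values chosen so the SEM comes out to 2:

```python
import math

# SEM = SD * sqrt(1 - r_xx)
sd, r_xx = 10.0, 0.96
sem = sd * math.sqrt(1 - r_xx)
print(round(sem, 2))  # 2.0

# 68% confidence interval: observed score +/- 1 SEM (z = 1 spans ~68% of a normal curve).
observed = 75
print(round(observed - sem, 1), round(observed + sem, 1))  # 73.0 77.0

# Higher reliability -> smaller SEM -> narrower interval (and vice versa).
print(round(sd * math.sqrt(1 - 0.99), 2))  # 1.0
```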
 

  1. Generalizability Theory typically uses which statistical procedure to estimate reliability?
    1. Analysis of Variance (ANOVA) X
    2. Correlation Coefficient
    3. Linear Regression
    4. Multivariate Analysis of Variance (MANOVA)

 

  1. The average of all possible split-half coefficients is known as:
    1. Coefficient alpha. X
    2. correlation coefficient.
    3. alternate form reliability.
    4. Spearman-Brown coefficient.

 

  1. A limitation of the test-retest approach to estimating reliability is the influence of:
    1. administration effects.
    2. content effects.
    3. practice effects. X
    4. temporal effects.

 

  1. The Spearman-Brown formula (illustrated in the sketch after this item) is used to:
    1. correct a split-half reliability coefficient. X
    2. estimate construct reliability.
    3. perform a curvilinear transformation of the scores.
    4. perform a linear transformation of the scores.

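A minimal sketch of the Spearman-Brown correction, assuming the standard prophecy form k*r / (1 + (k - 1)*r); the `spearman_brown` helper and the .70 and .80 values are illustrative, not from the text:

```python
# Spearman-Brown: estimate the reliability of a test changed in length by a factor k.
def spearman_brown(r, k=2):
    return (k * r) / (1 + (k - 1) * r)

# Correcting a split-half (half-length) correlation of .70 to full test length:
# the corrected estimate is higher, which is why the uncorrected split-half
# coefficient underestimates the reliability of the full test score.
print(round(spearman_brown(0.70), 3))  # 0.824

# Prophecy for doubling a test whose current reliability is .80:
print(round(spearman_brown(0.80, k=2), 3))  # 0.889
```

The same formula also bears on the earlier item about test length: for k greater than 1 (a longer test) the estimated reliability rises, and for k less than 1 (a shorter test) it falls.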
 

  1. As reliability increases, confidence intervals:
    1. become smaller. X
    2. do not change.

 

  1. _________ is a result of transient events in the test taker (fatigue, illness, etc.) and the testing environment (temperature, noise level, etc.).
    1. Administration error
    2. Content sampling error
    3. Temporal instability X
    4. Systematic measurement error

 

  1. The reliability of difference scores is typically _______ the reliability of the individual scores.
    1. equal to
    2. higher than
    3. lower than X

 

  1. The reliability index reflects the correlation between:
    1. true scores and observed scores. X
    2. true scores and measurement error.
    3. observed scores and measurement error.
    4. true scores and true scores.

 

  1. If the reliability coefficient equals .81, the reliability index equals: (A worked example follows this item.)
    1. .19.
    2. .81.
    3. .90. X
    4. 0.

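The arithmetic this item relies on, written out: the reliability index (the correlation between true and observed scores) is the square root of the reliability coefficient.

```python
import math

r_xx = 0.81                       # reliability coefficient
print(round(math.sqrt(r_xx), 2))  # 0.9 -> the reliability index
```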
 

  1. What happens to the size of confidence intervals as reliability coefficients increase?
    1. They decrease. X
    2. They increase.
    3. They remain the same.
    4. It is indeterminate – it depends on the construct being measured.

 

  1. The ____________ is an index of the amount of measurement error in test scores and is used in calculating confidence intervals.
    1. Standard Error of Estimate
    2. Standard Error of Measurement X
    3. Spearman-Brown Coefficient
    4. Skew Coefficient

 

  1. ______________________ is a useful index when comparing the reliability of the scores produced by different tests, but when the focus is on interpreting the test scores of individuals, the ________________________ is more practical.
    1. Reliability Coefficient; Standard Error of Measurement X
    2. Standard Error of Measurement; Reliability Coefficient
    3. Standard Error of Estimate; Coefficient Alpha
    4. Standard Error of Estimate; Reliability Coefficient

 

  1. In Item Response Theory, information on the reliability of test scores is typically reported as a:
    1. Test Information Function X
    2. Standard Error of Estimate
    3. Skew Coefficient
    4. Coefficient of Determination
    5. Coefficient of Non-determination

 

Chapter 6 Test Questions

 

  1. An oral examination, scored by examiners who use a manual and rubric, is an example of _________ scoring.
    1. objective
    2. subjective X
    3. projective
    4. validity

 

  1. A fill-in-the-blank question is a ___________ item.
    1. constructed-response X
    2. selected-response
    3. typical-response
    4. objective-response

 

  1. Which of the following formats is a selected-response format?
    1. Multiple-choice
    2. True-false
    3. Matching
    4. All of the above X

 

  1. How many distracters is it recommended that one provide for multiple-choice items?
    1. 2
    2. 2 to 6
    3. 3 to 5 X
    4. 4

 

  1. When writing true-false items, one should include approximately _________ true and ________ false.
    1. 30%; 70%
    2. 50%; 50% X
    3. 70%; 30%

 

  1. When developing matching items, one should keep the lists as ___________ as possible.
    1. heterogeneous
    2. homogeneous X
    3. sequential
    4. simultaneous

 

  1. What is a strength of selected-response items?
    1. Selected-response items are easy and quick to write.
    2. Selected-response items can be used to assess all constructs.
    3. Selected-response items can be objectively scored. X

 

  1. ___________ require examinees to complete a process or produce a project in a real-life simulation.
    1. Projective tests
    2. Performance assessments X
    3. Selected response test
    4. Multi-trait/multi-method tasks

 

  1. A strength of constructed-response items is that they:
    1. eliminate random guessing. X
    2. produce highly reliable scores.
    3. can be quickly completed by examinees.
    4. eliminate “feigning.”

 

  1. You are creating a test designed to assess a flute player’s ability. Which format would assess this domain most effectively?
    1. Performance assessment X
    2. Matching
    3. Selected-response
    4. True-false

 

  1. General guidelines for writing test items include:
    1. the frequent use of negative statements.
    2. the use of complex, compound sentences to challenge the examinees.
    3. the avoidance of inadvertent cues to the answers. X
    4. arranging items in a non-systematic manner.

 

  1. When developing maximum performance tests, it is best to arrange the items:
    1. from easiest to hardest. X
    2. from hardest to easiest.
    3. in the order the information was taught.

 

  1. Including more selected-response and other time-efficient items can:
    1. enhance the sampling of the content domain and increase reliability. X
    2. enhance the sampling of the content domain and decrease reliability.
    3. introduce construct irrelevant variance.
    4. decrease validity.

 

  1. In order to determine the number of items to include on a test, one should consider the:
    1. age of examinees.
    2. purpose of test.
    3. types of items.
    4. type of test.
    5. All of the above X

 

  1. __________ are reported as the most popular selected-response items.
    1. Essays
    2. Matching
    3. Multiple-choice X
    4. True-false

 

  1. When writing multiple-choice items, one advantage to the ______________ is that it may present the problem in a more concise manner.
    1. direct-question format X
    2. incomplete sentence format
    3. indirect question format

 

  1. What would be the recommended multiple-choice format for the stem: ‘What does 10×10 equal?’
    1. Best answer
    2. Correct answer X
    3. Closed negative
    4. Double negative

 

  1. Which multiple-choice answer format requires the examinee to make subtle distinctions among distracters?
    1. Best answer X
    2. Correct answer
    3. Closed negative
    4. Double negative

 

  1. Which of the following is NOT a guideline for developing true-false items?
    1. Include more than one idea in the statement. X
    2. Avoid using specific determiners such as all, none, or never.
    3. Ensure that true and false statements are approximately the same length.
    4. Avoid using moderate determiners such as sometimes and usually.

 

  1. What is a strength of true-false items?
    1. They can measure very complex objectives.
    2. Examinees can answer many items in a short period of time. X
    3. They are not vulnerable to guessing.

 

  1. _________ scoring rubrics identify different aspects or dimensions, each of which is scored separately.
    1. Analytic X
    2. Holistic
    3. Sequential
    4. Simultaneous

 

 

  1. With a _______ rubric, a single grade is assigned based on the overall quality of the response.
    1. analytic
    2. holistic X
    3. reliable
    4. structured

 

  1. One way to increase the reliability of short-answer items is to:
    1. give partial credit.
    2. provide a word bank.
    3. use the incomplete sentence format with multiple blanks.
    4. use a scoring rubric. X

 

  1. What item format is commonly used in both maximum performance tests and typical response tests?
    1. Constructed-response
    2. Multiple-choice
    3. Rating scales
    4. True-false X

 

  1. For typical-response tests, which format provides more information per item and thus can increase the range and reliability of scores?
    1. Constructed-response
    2. Frequency ratings X
    3. True-false
    4. Matching

 

  1. Which format is the most popular when assessing attitudes?
    1. Constructed-response
    2. Forced choice
    3. Frequency scales
    4. Likert items X
    5. True-false

 

  1. What is a guideline for developing typical response items?
    1. Include more than one construct per item to increase variability.
    2. Include items that are worded in both positive and negative directions. X
    3. Include more than 5 options on rating scales in order to increase reliability.
    4. Include statements that most people will endorse in a specific manner.

 

 

  1. Examinees tend to overuse the neutral response when Likert items use ________ and may omit items when Likert items use __________.
    1. an odd number of options; an even number of options X
    2. an even number of options; an odd number of options
    3. homogeneous options; heterogeneous options
    4. heterogeneous options; homogeneous options

 

  1. Which of the following items are difficult to score in a reliable manner and subject to feigning?
    1. Constructed-response X
    2. True-false
    3. Selected-response
    4. Forced choice

 

  1. Guttman scales provide which scale of measurement?
    1. Nominal
    2. Ordinal X
    3. Interval
    4. Ratio

 

  1. Which assessment would best use a Thurstone scale?
    1. Constructed-response test
    2. Maximum performance test
    3. Speed test
    4. Power test
    5. Typical response test X

 

  1. According to a study by Powers and Kaufman (2002) regarding the relationship between performance on the GRE and creativity, depth, and quickness, what were the findings?
    1. There is substantial evidence that creative, deep-thinkers are penalized by multiple-choice items.
    2. There was no evidence that creative, deep-thinkers are penalized by multiple-choice items. X
    3. There was a significant negative correlation between GRE scores and depth.
    4. There was a significant negative correlation between GRE scores and creativity.

 

  1. _________ are a form of performance assessment that involves the systematic collection and evaluation of work products.
    1. Rubrics
    2. Virtual exams
    3. Practical assessments
    4. Portfolio assessments X

 

 

  1. Distracters are:
    1. rubric grading criteria.
    2. the incorrect response options on multiple-choice items. X
    3. words inserted in an item intended to “trick” the examinee.
    4. unintentional clues to the correct answer.

 
