A measure of validity based on an evaluation of the subjects, topics, or content covered by the items in the test
Involves the logical examination and evaluation of the content of a test [including the test questions, format, wording, and processes required of test takers] to determine the extent to which the content is representative of the concepts that the test is designed to measure
Describes a judgement of how adequately a test's sample of behaviors represents the universe of behaviors that the test was designed to sample
1. It measures the extent to which items on a test are representative of the attribute or construct the test claims to measure [representative of construct]
---The clarity of the vision of the construct being measured is normally reflected in the content validity
Researchers using content validity attempt to include relevant factors and exclude irrelevant ones
2. Built during and after test development
During development you define the test universe & identify relevant content areas, matching items with those identified areas.
---In this phase the goal is to create a "test blueprint," and to create items that match the blueprint. A test blueprint is based on the construct to be measured, and its aim is to identify the key aspects of the construct that need to be measured, and their importance, which, for example, could be represented by the number of items in a specific section
After test development you can gather Subject Matter Experts [SMEs] to judge the content validity of your test, and then use the content validity ratio [CVR], which quantifies the degree of content validity
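Lawshe's CVR mentioned above has a standard formula: CVR = (nₑ − N/2) / (N/2), where nₑ is the number of SMEs who rate an item "essential" and N is the total panel size. A minimal sketch in Python (the function name and example panel numbers are illustrative, not from the original notes):

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR for a single item: (n_e - N/2) / (N/2).

    Ranges from -1 (no panelist rates the item essential),
    through 0 (exactly half do), to +1 (every panelist does).
    """
    half = n_panelists / 2
    return (n_essential - half) / half

# e.g. 8 of 10 SMEs rate an item "essential"
print(content_validity_ratio(8, 10))  # 0.6
```

Items with low or negative CVR are candidates for revision or removal; the minimum acceptable CVR depends on panel size.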
Content validity is dependent on human judgement; different people might make different judgements, so the extent to which a test is seen as content valid for its purpose really depends on the group you're working with [their background, culture, etc.]. Even when you use experts, content validity depends on the group of experts you select, or on your own expertise in the area.
Terms in this set [19]
Why must measures be reliable? What is the main consequence of using an unreliable measure in a study?
Measures must measure the same thing consistently and give consistent data. If a test is unreliable, then researchers can't trust the data received.
What is measurement error, and what are some things that cause it?
-the participant's observed score is the result of factors that distort it so that it isn't precisely what it should be
-transient states, stable attributes, situational factors, mistakes
Why is it virtually impossible to eliminate all measurement error from the measure we use in research?
Because we're human and things are constantly changing and reacting
What is the relationship between the reliability of a measure and the degree of measurement error it contains?
-reliability of a measure is an inverse function of measurement error
What does the reliability of a measure indicate if it is .60? .00? 1.00?
-60% of the total variance in scores is systematic or true-score variance
-0% of the total variance in our scores reflects the true variability in whatever we are measuring
-100% of the total variance in a set of scores is the true-score variance
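These interpretations follow from classical test theory, where reliability is the proportion of total observed variance that is true-score variance. A small illustration (the variance figures are made up for the example):

```python
def reliability(true_var, error_var):
    """Reliability = true-score variance / total observed variance,
    where total variance = true-score variance + error variance."""
    return true_var / (true_var + error_var)

print(reliability(6.0, 4.0))  # 0.6 -> 60% of variance is systematic
print(reliability(0.0, 4.0))  # 0.0 -> all measurement error
print(reliability(4.0, 0.0))  # 1.0 -> no measurement error
```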
What does a correlation coefficient tell us? Why are correlation coefficients useful when assessing reliability?
-a statistic that expresses the strength of the relationship between two measures on a scale from -1.00 to +1.00 [reliability coefficients fall between .00 and 1.00]
-it reveals the degree to which two measurements yield similar scores
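As a concrete illustration, a Pearson correlation between two sets of scores (e.g. a test and a retest) can be computed with the standard library alone; the score lists below are made up for the example:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation: covariance of the two score lists
    divided by the product of their standard deviations."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

test_scores = [12, 15, 11, 18, 14]
retest_scores = [13, 16, 10, 19, 15]
print(round(pearson_r(test_scores, retest_scores), 2))  # 0.98
```

A value this close to 1.00 would indicate that the two administrations rank people very similarly, i.e. high test-retest reliability.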
What are three ways in which researchers assess the reliability of their measures?
-test-retest reliability
-interrater reliability
-interitem reliability
When would you calculate Cronbach's alpha coefficient? What does it tell you?
-to measure interitem reliability
-when the alpha exceeds .70, we know that the items on the measure are systematically assessing the same construct and less than 30% of the variance in people's scores on the scale is measurement error
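Cronbach's alpha follows the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores), where k is the number of items. A minimal sketch with made-up scores (the function name and data are illustrative):

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per test item, all the same length
    (one score per respondent). Uses sample variance (n-1 denominator)."""
    k = len(items)
    sum_item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_vars / statistics.variance(totals))

# two items answered by four respondents
print(round(cronbach_alpha([[1, 2, 3, 4], [2, 1, 4, 3]]), 2))  # 0.75
```

An alpha of 0.75 would clear the conventional .70 threshold mentioned above, suggesting the items are assessing the same construct.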
What is the minimum reliability coefficient that researchers consider acceptable? Why do researchers use this minimum criterion for reliability?
-.70
-if no more than 30% of the total variance is due to measurement error, then the measure is reliable enough to use
For what kind of measure is it appropriate to examine test-retest reliability? interitem reliability? interrater reliability?
-test-retest: if the attribute being measured would not be expected to change between the two measurements [intelligence, attitudes, or personality]
-interitem: items on a scale intended to measure a single construct [extraversion, shyness, paranoia]
-interrater: when two people observe and record others' behavior
Why are researchers sometimes not able to test the reliability of their measures?
-b/c the scale could be a single-item self-report measure administered once, leaving no second score or second item to correlate
What steps can be taken to increase the reliability of measuring techniques?
-standardize administration of the measure
-clarify instructions and questions
-train observers
-minimize errors in coding data
What is validity?
-the extent to which a measurement procedure actually measures what it is intended to measure rather than measuring something else or nothing at all
Distinguish among face validity, construct validity, and criterion-related validity. In general, which kind of validity is least important to researchers?
-extent to which a measure appears to measure what it's supposed to measure [done by professionals]
-saying whether a particular measure relates as it should to other measures [hypothetical constructs]-->a measure should correlate w/ other measure that it should correlate w/ [convergent validity] and NOT correlate w/ measures it should not correlate w/ [discriminant validity]
-extent to which a measure allows us to distinguish among participants on the basis of a particular behavioral criterion
-least important-->face validity
Can a measurement procedure be valid but not reliable? Reliable but not valid? Explain.
-it can be reliable but not valid, because all the test questions can relate to each other yet not actually test what you need to find
-it canNOT be valid but unreliable: a measure that fluctuates randomly cannot be consistently measuring the intended construct
Distinguish between construct and criterion-related validity.
-convergent and discriminant validity
-concurrent and predictive validity
Distinguish between concurrent and predictive validity
-main difference between them involves the amount of time that elapses b/w administering the measure to be validated and the measure of the behavioral criterion
-same time vs some time in the future
How can we tell whether a particular measure is biased against a particular group?
-examine the predictive validity of a measurement
How do researchers identify biased test items on tests of intelligence or ability?
-they take test takers who earned the same overall score and compare their answers on a single test item. If one group [race, gender, age] performs worse as a whole or all miss the question, then that test item is biased.