1) The more powerful/robust the statistical test used, the higher the likelihood of a chance result. a) true b) false

2) Types of operationalization: a) self-report b) interviews c) observational d) physiological

3) Different objects receive different scores; arbitrary categories, discrete. a) Property of identity/nominal scale b) Property of magnitude/ordinal scale c) Property of equal size/interval scale d) Property of absolute zero/ratio scale

4) Assigned numbers are greater than or less than one another; order of placement, discrete. a) Property of identity/nominal scale b) Property of magnitude/ordinal scale c) Property of equal size/interval scale d) Property of absolute zero/ratio scale

5) The differences between each number are consistent (equal intervals can be compared); 0 is something; continuous. a) Property of identity/nominal scale b) Property of magnitude/ordinal scale c) Property of equal size/interval scale d) Property of absolute zero/ratio scale

6) The differences between each number are consistent (equal intervals can be compared); 0 means nothing; continuous. a) Property of identity/nominal scale b) Property of magnitude/ordinal scale c) Property of equal size/interval scale d) Property of absolute zero/ratio scale

7) People give consistent scores on every item of a questionnaire. a) Internal reliability b) Split-half reliability c) Test-retest reliability d) Inter-rater reliability e) Cronbach's alpha/coefficient alpha

8) Split a test in half and see if the two halves are similar; same time, different forms. a) Internal reliability b) Split-half reliability c) Test-retest reliability d) Inter-rater reliability e) Cronbach's alpha/coefficient alpha

9) People get consistent scores each time they take the test. a) Internal reliability b) Split-half reliability c) Test-retest reliability d) Inter-rater reliability e) Cronbach's alpha/coefficient alpha

10) Two coders' ratings of a set of targets are consistent with each other. a) Internal reliability b) Split-half reliability c) Test-retest reliability d) Inter-rater reliability e) Cronbach's alpha/coefficient alpha

11) A correlation-based statistic looking at inter-item correlation: the degree to which the various items on a test or scale measure the same thing. a) Internal reliability b) Split-half reliability c) Test-retest reliability d) Inter-rater reliability e) Cronbach's alpha/coefficient alpha

12) Does it look like a good measure? (subjective) a) face validity b) content validity c) criterion validity d) divergent validity e) convergent validity f) construct validity

13) Is it measuring all components of the construct? (subjective; needs an operational definition) a) face validity b) content validity c) criterion validity d) divergent validity e) convergent validity f) construct validity

14) Do two distinct constructs produce unrelated scores? (empirical) a) face validity b) content validity c) criterion validity d) divergent validity e) convergent validity f) construct validity

15) Does it correlate with key behaviors? (empirical) a) face validity b) content validity c) criterion validity d) divergent validity e) convergent validity f) construct validity

16) Is there a strong relationship between two measures of the same construct? (empirical) a) face validity b) content validity c) criterion validity d) divergent validity e) convergent validity f) construct validity

17) Is it measuring what it's supposed to measure? a) face validity b) content validity c) criterion validity d) divergent validity e) convergent validity f) construct validity

18) Question format: endless possible topics. a) open-ended b) forced choice c) Likert scale d) semantic differential

19) Question format: two options, one or the other. a) open-ended b) forced choice c) Likert scale d) semantic differential

20) Question format: specific options, e.g. "On a scale of 1-5, how polite are Canadians?" a) open-ended b) forced choice c) Likert scale d) semantic differential

21) Question format: anchored by two opposing ideas, e.g. "Are Canadians 1 (polite) or 5 (rude)?" a) open-ended b) forced choice c) Likert scale d) semantic differential

22) Observers do not know to which conditions participants have been assigned and are not aware of what the study is about; an attempt to prevent observer bias. a) masked research design (blind design) b) double-blind design

23) Neither the observer nor the participants know what group they are in or what is being observed. a) masked research design (blind design) b) double-blind design

24) Reactivity: a) when a behavior keeps happening consistently b) when a behavior is extinguished and then comes back c) when people change their behavior in some way because they know that someone else is watching them d) when observers react to results differently based on what they expect to find

25) Ways to prevent reactivity: a) unobtrusive observation (blend in) b) wait for the subjects to get used to being observed c) measure traces/results of a behavior d) you don't e) double-blind design f) have someone other than the researcher observe

26) Used with quantitative data for inter-rater reliability (pleased with values over .90). a) ICC (intraclass correlation coefficient) b) Cohen's kappa c) Pearson's r

27) Used with qualitative data; based on the number of times raters were in agreement. a) ICC (intraclass correlation coefficient) b) Cohen's kappa c) Pearson's r
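Two of the statistics above (Cronbach's alpha for internal reliability, items 7 and 11; Cohen's kappa for inter-rater agreement on categorical data, item 27) have simple closed-form definitions. As a minimal illustration, here is a sketch of both in plain Python; the data in the examples are made up for demonstration, not drawn from any study.

```python
# Hand-rolled Cronbach's alpha and Cohen's kappa.
# Illustrative sketch only; the example data below are invented.
from statistics import variance

def cronbach_alpha(scores):
    """scores: one list of item scores per respondent (all the same length).
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)."""
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose: one column per item
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(person) for person in scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical labels.
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from each rater's marginal rates."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    cats = set(r1) | set(r2)
    p_e = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Four respondents answering three questionnaire items (invented data):
alpha = cronbach_alpha([[4, 5, 4], [3, 3, 3], [5, 5, 4], [2, 3, 2]])

# Two raters coding the same five targets (invented data):
kappa = cohens_kappa([1, 1, 0, 1, 0], [1, 0, 0, 1, 0])
```

Note the division of labor matching items 26-27: kappa compares discrete category labels, while alpha (like Pearson's r and the ICC) operates on quantitative scores.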