Suzi Mayhem Outdoes John Cleese

by AlanF · 122 Replies

  • larc

    Dark,

    You don't need to respond to my last question. I posted it before reading the earlier comments. When doing so, I found that Alan had already clearly shown that the assertion of faster-than-light travel was not supported by the equations suggested by someone else. In fact, he demonstrated clearly that they misunderstood the equations. Now, if you have read this earlier material and still assert what you do, then so be it. If you have any comments to make, you should address Alan's original proofs or just drop the subject. I am not interested in poetic dialogue and quotes from novelists. Saying that c squared is a big FxxKin number doesn't impress me either.

  • larc

    Alan,

    I appreciate your depth of knowledge of the subject under discussion. I recently read a book that you might be interested in, "E=mc squared." (I can't do the exponent sign on my WebTV keyboard.) There's nothing new to you in the book about the science, but I think you would enjoy the author's discussion of the history of science over the last 200 years, as well as his description of the personalities and interpersonal interactions of various scientists.

  • larc

    Alan,

    Since you are mathematically gifted, I would like to run an idea by you. It has to do with the degree of accuracy in human judgements. The standard method for determining this is to correlate one person's quantified judgements, e.g., on a rating scale, with another person's judgements. The correlation coefficient is assumed to be the proper index of accuracy. This has been the standard metric for the past 90 years. However, there is a problem here. One person is being compared to an imperfect standard, i.e., another person. Therefore, the correlation is an underestimate of one person's accuracy. If a person could be compared to a "true score", this would be a better index. Empirical tests have been done by correlating one person's score with the sum of a larger number of others. Measurement theory assumes that as a larger sum is used, the error term decreases, since error between people is random; therefore, the sum approaches the theoretical true score. There is a body of research using this line of reasoning. Another approach is to correct the correlation between two people by taking the square root of the correlation. No one, to date, has tied these two methods together. I conducted an experiment and found the correlation between (1) the summed-scores approach and (2) the square-root approach is .982. Not bad, for a result in the "soft science" of psychology. I met great resistance from the journal reviewers and the journal editor. Finally, the editor wrote that if I obtained the empirical results from a second sample and applied them back to the math corrections in the first sample, he would give it another look. I am in the process of doing this within the next two weeks. Anyway, I ran this by you to see if you had any thoughts on my logic or the math involved.
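    The classical-test-theory setup behind larc's two corrections can be checked numerically. Below is a minimal sketch (my own construction, not larc's actual study data; the reliability of 0.64 and the sample sizes are illustrative assumptions): if each rater's score is a shared true score plus independent noise, a rater's correlation with the true score is the square root of the inter-rater reliability, and the sum over many raters stands in for the true score.

    ```python
    import math
    import random

    def pearson(x, y):
        """Plain Pearson product-moment correlation of two equal-length lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / math.sqrt(sxx * syy)

    random.seed(1)
    n_items, n_raters = 2000, 51
    reliability = 0.64                           # assumed inter-rater reliability
    true = [random.gauss(0, 1) for _ in range(n_items)]
    noise_sd = math.sqrt(1 / reliability - 1)    # makes Var(true)/Var(score) = 0.64
    ratings = [[t + random.gauss(0, noise_sd) for t in true] for _ in range(n_raters)]

    target = ratings[0]
    others_sum = [sum(ratings[k][i] for k in range(1, n_raters)) for i in range(n_items)]
    print(round(pearson(target, true), 2))        # ≈ sqrt(0.64) = 0.80
    print(round(pearson(target, others_sum), 2))  # also ≈ 0.80: the sum approximates the true score
    ```

    Under these assumptions both routes agree: correlating against the sum of 50 others and taking the square root of the reliability recover almost the same accuracy figure.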

  • Focus

    larc stated:

    Alan,
    Since you are mathematically gifted, I would like to run an idea by you.

    AlanF has not yet responded, so I'll put my oar in if I may. What I don't comment on, I don't have an issue with.

    It has to do with the degree of accuracy in human judgements. The standard method for determining this is to correlate one person's quantified judgements, e.g., on a rating scale, with another person's judgements. The correlation coefficient is assumed to be the proper index of accuracy. This has been the standard metric for the past 90 years. However, there is a problem here. One person is being compared to an imperfect standard, i.e., another person. Therefore, the correlation is an underestimate of one person's accuracy.

    The last statement is incorrect. The two people could be making the same, though wrong, judgements (could happen for a variety of reasons - to name but one, common socio-economic-educational sources of "error") - in which case the correlation coefficient would be an overestimate (not an underestimate) of the target person's accuracy.

    If a person could be compared to a "true score", this would be a better index. Empirical tests have been done by correlating one person's score with the sum of a larger number of others.

    Please be very explicit here - there are at least two very different ways to interpret what you have just stated.

    Did you mean using a population "average" instead of a second individual for the purpose of comparison?

    Measurement theory assumes that as a larger sum is used, the error term decreases, since error between people is random,

    Please justify the last clause. Most "random" things aren't.

    therefore, the sum approaches the theoretical true score.

    I do not understand this statement - please be specific and use precise terminology. Is regression towards the mean meant?

    There is a body of research using this line of reasoning. Another approach is to correct the correlation between two people by taking the square root of the correlation.

    As a typically-defined correlation coefficient belongs to [-1,1], half the range will produce an imaginary result when square-rooted.

    No one, to date, has tied these two methods together. I conducted an experiment and found the correlation between (1) the summed-scores approach and (2) the square-root approach is .982.

    You do not mention sample size. One cuckoo doth not a spring make, or however the saying goes.

    Not bad, for a result in the "soft science" of psychology. I met great resistance from the journal reviewers and the journal editor. Finally, the editor wrote that if I obtained the empirical results from a second sample and applied them back to the math corrections in the first sample, he would give it another look. I am in the process of doing this within the next two weeks. Anyway, I ran this by you to see if you had any thoughts on my logic or the math involved.

    If you are more specific, I am sure meaningful opinions can be expressed.

    --
    Focus
    (The Generation of Random Numbers is FAR too important to be left to Pure Chance! Class)

  • larc

    Focus,

    Thank you for your thoughts. Let me provide a little more detail and answer your questions. A sample of 51 college students was asked to rate the nutritional value of 30 food items on a zero-to-nine scale. From an intercorrelation matrix it was possible to find the correlation between each person's ratings and each other person's, termed reliability. That is, there were 50 such correlations for each student. From these, each person's average reliability with others was determined. The averages ranged from .54 to .80.

    Also, it was possible to add the fifty other ratings and correlate this total with a student's ratings, termed one versus total. Single-versus-total correlations were plotted against each person's average reliability. The x axis is the person's average reliability; the y axis is the correlation of the person's ratings with the total of the 50 others. The square root of the person's average reliability formed a line through the dots, and the correlation between the dots (empirical data) and the square-root calculation was .982. Thus, the square root of a person's average correlation with others produced almost identical results to the empirical data. For example, if a person's average reliability was .64, the square root would be .80. The one-versus-50 correlation was nearly identical to .80 for a person with an average reliability of .64.

    I will leave the method questions alone until you have a chance to chew on this to see if it is clearer or muddier now.
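    The design larc describes can be mimicked in a toy simulation (my own sketch, with invented noise levels; it reproduces the shape of the analysis, not larc's data): 51 simulated raters score 30 items, each rater's average reliability with the other 50 is computed, and the square root of that average is compared against the rater's one-versus-total correlation.

    ```python
    import math
    import random

    def pearson(x, y):
        """Plain Pearson product-moment correlation of two equal-length lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / math.sqrt(sxx * syy)

    random.seed(2)
    n_items, n_raters = 30, 51
    true = [random.gauss(0, 1) for _ in range(n_items)]
    # each rater gets a different noise level, spreading the reliabilities out
    noise_sd = [random.uniform(0.5, 1.2) for _ in range(n_raters)]
    ratings = [[t + random.gauss(0, s) for t in true] for s in noise_sd]

    avg_rel, one_vs_total = [], []
    for j in range(n_raters):
        others = [k for k in range(n_raters) if k != j]
        avg_rel.append(sum(pearson(ratings[j], ratings[k]) for k in others) / len(others))
        total = [sum(ratings[k][i] for k in others) for i in range(n_items)]
        one_vs_total.append(pearson(ratings[j], total))

    # larc's claim: sqrt(average reliability) tracks the one-versus-total dots
    sqrt_pred = [math.sqrt(max(r, 0.0)) for r in avg_rel]
    print(round(pearson(sqrt_pred, one_vs_total), 3))  # high, though not necessarily .982
    ```

    With a shared true score present, the square-root prediction and the empirical one-versus-total dots line up closely, as larc's plot suggests.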

    I want to congratulate you on your first point of contention. You are absolutely right about similar but wrong judgements producing overestimates. No one among the reviewers or the editor mentioned this.

    For 90 years, measurement theory has assumed that the type of data discussed here contains two components: true variance and random error variance. All research in psychometrics is based on this assumption. The only person to question it was Robert Wherry at Ohio State. In 1952 he wrote a manuscript stating that each score is made up of three parts, not two: true variance, random error variance, and correlated error variance (shared error, or bias). He shared his ideas with colleagues, but his work was not published until after his death, by a student of his, in Personnel Psychology, 1982, p. 521 (Bartlett and Wherry). His work has been ignored. As the article states: "Theorem (38): The reliability of a rating scale tells us very little about its value, since the apparent reliability may be due to bias rather than true score."

    Despite bias, however, the formula and my data should correlate as expected.
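    Focus's overestimate caveat, and Wherry's third component, can be illustrated with a toy simulation (my own construction with made-up variances, not anything from the Bartlett and Wherry paper): give two raters a shared bias term and the apparent reliability stays high, while the correlation with the true score, which the square-root correction is supposed to estimate, is much lower.

    ```python
    import math
    import random

    def pearson(x, y):
        """Plain Pearson product-moment correlation of two equal-length lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / math.sqrt(sxx * syy)

    random.seed(4)
    n = 5000
    true = [random.gauss(0, 1) for _ in range(n)]
    bias = [random.gauss(0, 1) for _ in range(n)]   # error SHARED by both raters
    sd = math.sqrt(0.5)                             # each rater's private error
    x1 = [t + b + random.gauss(0, sd) for t, b in zip(true, bias)]
    x2 = [t + b + random.gauss(0, sd) for t, b in zip(true, bias)]

    rel = pearson(x1, x2)               # apparent reliability, inflated by shared bias
    print(round(math.sqrt(rel), 2))     # square-root "correction" ≈ 0.89
    print(round(pearson(x1, true), 2))  # actual accuracy ≈ 0.63: correction overestimates
    ```

    This is exactly Wherry's point: correlated error masquerades as true variance, so the corrected coefficient can badly overstate accuracy.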

    Thanks again for your comments; they were well thought out.

  • riz
    Nor is it a question of tongue in cheeks!

    As self-appointed Queen of the Double Entendre, I must say that I am impressed. As a wise woman once said, "Adroit minds think alike."

    riz

    Insanity in individuals is something rare - but in groups, parties, nations, and epochs it is the rule. - Nietzsche

  • Focus

    I will work through the math and come back here with an opinion as to whether the stated proximity is to be expected (also, please put an email address in the body of your text). Just a couple of points to be clarified first:

    larc wrote:

    Focus, Thank you for your thoughts. Let me provide a little more detail [..] Also it was possible to add the fifty other ratings and correlate this total

    larc, there are many measures ("metrics") for correlation, as I'm confident you are aware. The additive method of arriving at a "rest of population" score table is IMO most meaningful if the correlation coefficient being computed is one of rank - is it, and what formula is being used? Also, who says a 0 and a 9 score for a particular food item are "worth" as much as a 4 and a 5? Where one is computing a sum (or mean; same end result here) of 50 scores, this point is very moot. Linearity is an assumption.

    square root

    I take it that if the correlation coefficient were -ve, the logic you use would require the "square root" to be defined as the negative square root of the absolute value of the coefficient (rather than i times the positive square root thereof)?

    I want to congratulate you on your first point of contention. You are absolutely right about similiar but wrong judgements producing overestimates. No one among the reviewers or the editor mentioned this. In measurement theory for 90 years, they have assumed that [..] Thanks again, for your comments they were well thought out.

    Note that you can use a Monte Carlo (stochastic) technique yourself (just in Excel, say) to bypass the math and see whether pseudo-random evaluations still produce the closeness you have stated.
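    Focus's suggested stochastic check could look like the following (a bare-bones sketch in Python rather than Excel; all numbers are invented for illustration): generate ratings with no shared true score at all and see that the average inter-rater reliabilities collapse toward zero, which, by larc's own stopping rule, would end the study before any square-root correction is applied.

    ```python
    import math
    import random

    def pearson(x, y):
        """Plain Pearson product-moment correlation of two equal-length lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / math.sqrt(sxx * syy)

    random.seed(3)
    n_items, n_raters = 30, 51
    # "pseudo-random evaluations": no true score shared between raters
    ratings = [[random.gauss(0, 1) for _ in range(n_items)] for _ in range(n_raters)]

    avg_rel = []
    for j in range(n_raters):
        rs = [pearson(ratings[j], ratings[k]) for k in range(n_raters) if k != j]
        avg_rel.append(sum(rs) / len(rs))

    # every average reliability hovers near zero under pure chance
    print(max(abs(r) for r in avg_rel))
    ```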

    --
    Focus
    (Thinking for oneself occasionally is a profitable exercise Class)

  • larc

    Focus,

    Thanks again for your analysis. First, and easiest to answer: my email address is in the yellow mailbox by my name.

    If you "work through the math", you may want to look at Ghiselli's book, Theory of Psychological Measurement, 1964, pp. 234-235, which shows the derivation of the square root as a correction for error of measurement in a correlation. Other books are: Hays, Statistics for Psychologists, 1963, pp. 501-502, and Nunnally, Psychometric Theory, 1978, p. 238.

    The correlational method is the Pearson product-moment correlation, a least-squares formula, as I understand it. The equation assumes interval data. As you correctly point out (hats off to you again), rating scales do not meet this assumption. The numbers are ordinal, not interval, and only rank differences in the numbers can be assumed. Behavioral scientists are fast and loose with assumptions at times, but since their data is pretty sloppy most of the time, it usually doesn't matter.
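    The ordinal-versus-interval point is easy to demonstrate with a hypothetical example (the data below are made up): on scores that are perfectly monotone but nonlinear, a rank-based (Spearman-style) correlation is exactly 1 while the Pearson coefficient falls short of 1.

    ```python
    import math

    def pearson(x, y):
        """Plain Pearson product-moment correlation of two equal-length lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / math.sqrt(sxx * syy)

    def ranks(x):
        """Assign 1-based ranks, averaging ranks over ties."""
        order = sorted(range(len(x)), key=lambda i: x[i])
        r = [0.0] * len(x)
        i = 0
        while i < len(x):
            j = i
            while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    a = [0, 1, 2, 3, 4, 5]
    b = [0, 1, 4, 9, 16, 25]   # monotone in a, but nonlinear
    print(round(pearson(a, b), 3))                  # ≈ 0.96
    print(round(pearson(ranks(a), ranks(b)), 3))    # 1.0: rank correlation sees only order
    ```

    Spearman's coefficient is just Pearson applied to ranks, which is why it respects only the ordinal information that rating scales actually carry.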

    Regarding negative correlations and the square root, this is not a problem, because it is assumed that in a judgement task the range of values is between zero and one. If the correlations are not significantly greater than zero, then the whole study would be dropped, and the conclusion would be that people simply can't make a particular judgement.

    I have to give some thought to the Monte Carlo method and how it could be used. I know of one study that varied reliability distribution means and standard deviations to determine how the outcomes would differ from some expected results. I have to go back and do some homework before I can respond in any more detail.

    Please send a note to my email address so I can contact you. I have a couple of other questions, but I should probably take them offline, so Dark Clouds can get back to quoting novelists' opinions of science.

    Larc (of the Science is more fun than pioneering class)

  • mommy
    I have to give some thought to the Monte Carlo method and how it could be used


    My dream car, Bro. Larc. ...I see it now: baby blue, tee tops, V8 engine, never below 90 on a deserted highway. And Bob Seger playing "Roll Me Away" on the radio. Oh, one day I will see it.
    wendy

  • ZazuWitts

    Hey MOMMY Wendy,

    At one time Larc and I were the proud owners of a 1963 split-rear-window Corvette = dream car!!!!! White, red leather interior. Only car we ever sold for $$$$ more than the purchase price. Just the memories make me feel 'young at heart.' Phrumm, phrumm, phrumm, PHRUMM,
    ...PHrUMMMM, ...poor cyber-simulation of 'sound' while waiting at any red light... okey, dokey, hot shot, this is a demon car... will leave you in the dust, ha, ha. Zazu, now old and hopefully 'wise' enough to let dangerous 'ego' wallow in the dust!!
