Which online Asperger’s tests
I suppose I could do such a thing, but I'm not sure I'm motivated to. I don't exactly want to place an IQ test in Aspie-quiz again, and as I already wrote, Aspie-quiz has minimal loading on IQ, and so do most of the issues used.
I also know which questions / question types are most relevant for autism vs AS/HFA/PDD, but those are diagnostic issues, not natural ones.
It is probably a coincidence that F1 has slightly higher scores than the AQ and gets more "very likely Aspie" results. Remember that in R4 the situation was reversed. It is not even certain that the difference is statistically significant. Besides, both R4 and F1 used exactly the same scoring algorithm and similar scoring factors (as these are averages of previous versions). The difference was in the actual questions only.
OK, but I have no interest in separating ASC diagnoses. I'm primarily interested in natural human variation. I find the DSM labels quite problematic and stigmatizing and will certainly not support giving them any justification.
Given the current respondents in the AS/HFA/PDD group, many of whom do not live up to their diagnosis in either the AQ test or Aspie-quiz, I find this practically impossible to do. It would require clinical diagnostics on respondents taking Aspie-quiz, which I cannot do.
I suppose I could do such a thing, but I'm not sure I'm motivated to. I don't exactly want to place an IQ test in Aspie-quiz again, and as I already wrote, Aspie-quiz has minimal loading on IQ, and so do most of the issues used.
Creating separate categories for PDD-NOS etc. for test-takers to select in future versions would probably be sufficient if a comparison of the AQ and Aspie Quiz were ever to be investigated. My suggestion about having uncontaminated samples was purely in relation to testing the relative ability of the two tests to identify diagnosed AS/HFA, which would have involved creating a sample of diagnosed AS/HFA (i.e., excluding low IQ, the self-diagnosed, and PDD-NOS).
Since you mentioned that the AS/HFA/PDD sample partly consisted of the self-diagnosed in addition to PDD-NOS, and there was no pure AS/HFA sample, those results cannot be used to compare the two tests' ability to identify AS/HFA anyway.
OK, but I have no interest in separating ASC diagnoses. I'm primarily interested in natural human variation. I find the DSM labels quite problematic and stigmatizing and will certainly not support giving them any justification.
OK, but I don't understand not taking advantage of any analysis that could provide interesting or useful information. So many people try to compare the qualities of the Aspie Quiz and the AQ anyway, and analysing their important differences could provide useful information.
Given the current respondents in the AS/HFA/PDD group, many of whom do not live up to their diagnosis in either the AQ test or Aspie-quiz, I find this practically impossible to do. It would require clinical diagnostics on respondents taking Aspie-quiz, which I cannot do.
I recall you mentioning that the AS/HFA/PDD group were not all diagnosed anyway. Or was that just for some versions? Even if they all reported being diagnosed, the mixture of AS/HFA and PDD-NOS would affect the sample's properties. If no uncontaminated control and diagnostic groups have been obtained, aren't some of the analyses that have been performed also questionable?
I suggested a ROC analysis (not knowing the difficulty of obtaining better samples) because the fraction of true positives versus false positives (which could be computed for a broader AS/HFA/PDD sample or just an AS/HFA sample, depending on what you wish to investigate), plotted for various cut-off values, would provide AUC values to determine how well the Aspie Quiz performs in identifying PDDs. It could also be used to examine cut-off values. I'm curious: how was the cut-off difference of 35 arrived at?
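For reference, the mechanics of a ROC analysis can be sketched in a few lines. This is a generic illustration with made-up scores and diagnosis labels, not Aspie-quiz or AQ data: each distinct score acts as a cut-off, the true-positive and false-positive rates are tallied, and the AUC is the area under the resulting curve.

```python
# Minimal ROC sketch with invented toy data (label 1 = diagnosed, 0 = control).

def roc_curve(scores, labels):
    """Return (false-positive rate, true-positive rate) points,
    obtained by sweeping the cut-off from highest score downward."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

# Toy example: the diagnosed group tends to score higher than the controls.
scores = [180, 150, 140, 120, 90, 70, 60, 40]
labels = [1, 1, 0, 1, 0, 1, 0, 0]
print(auc(roc_curve(scores, labels)))  # 0.8125
```

An AUC of 0.5 means the test discriminates no better than chance; 1.0 means perfect separation of the two groups at some cut-off.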
I looked into the history of scoring; here it is:
Version I: A single Aspie score was given. 140 was used as the cutoff for "very likely Aspie" and 60 as the cutoff for "very likely neurotypical". No weights were used, and all questions were Aspie-formulated.
Version II: Still a single Aspie score, but manual score weights were used for questions. Cutoff to "very likely neurotypical" was changed to 50.
Version III: The two-score model was introduced. Factor loadings were used for scoring. Cutoffs were Aspie score - NT score >= 50 ("very likely Aspie") and <= -50 ("very likely neurotypical").
Versions ND and 5: Manual score weights. Cutoffs the same as in version III.
Version 6: Factor loadings used once more for scoring. Cutoffs still the same as in version III.
Version 7: The current scoring was adopted (cutoffs changed to 35).
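The version 7 rule above can be sketched as follows. The function name and the "inconclusive" label for the middle band are my own illustrative choices; only the 35-point difference cut-offs come from the description above.

```python
# Sketch of the two-score cut-off rule (version 7 onward): the difference
# between the Aspie score and the NT score is compared against 35 in either
# direction. "inconclusive" for the middle band is an assumed label.

CUTOFF = 35

def classify(aspie_score, nt_score, cutoff=CUTOFF):
    diff = aspie_score - nt_score
    if diff >= cutoff:
        return "very likely Aspie"
    if diff <= -cutoff:
        return "very likely neurotypical"
    return "inconclusive"

print(classify(160, 55))   # diff = 105  -> very likely Aspie
print(classify(60, 150))   # diff = -90  -> very likely neurotypical
print(classify(100, 90))   # diff = 10   -> inconclusive
```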
The thing is that in the early development of Aspie-quiz, manual scoring produced superior results (that's why the factor-loading method tried in version III was abandoned). The reason for two different scores also lies in the experimental approach of using factor loadings, as two complementary factors are the key finding. However, without the possibility of validating the test in a clinical setting, using manual score weights was considered highly problematic, so in the end that method had to be abandoned for the test to be of scientific value. That's why this change was made in version 6, and it has stayed the same since.
I'm still not sure why 35 was selected. This was 7 versions before the AQ test was tried, so there can be no relation to it. Quite likely it was motivated by the misdiagnosis rate of diagnosed AS/HFA/PDD, setting this to around the 80% found in the AQ test.
Besides, there are statistics about diagnosed AS/HFA/PDD (but not AS/HFA alone), but this group has by now deteriorated considerably and no longer reaches the 80% confirmed-diagnosis rate it once did in early versions. It is possible that announcing something here could temporarily increase the validity of this group, but I don't know.
I'm not sure what ROC analysis is, but if you could provide me a reference I'll look into it.
Now that I have looked at it, it turns out that a different diagnosis scheme was used in R4. R4 did separate autism, AS/HFA and PDD-NOS; it is just the report that merges them to standardize the diagnostics. In other words, it is possible to extract information about the AS/HFA group and the PDD-NOS group.
Here are the diagnostic statistics from R4, using SQL queries directly against the database:
Diagnosed Autism group, non-zero AQ score: 12, 9 (75%) with AQ score >= 32 and 9 (75%) with "very likely Aspie"
Diagnosed AS/HFA group, non-zero AQ score: 68, 56 (82%) with AQ score >= 32 and 51 (75%) with "very likely Aspie"
Diagnosed PDD-NOS group, non-zero AQ score: 0 (nobody who did the AQ test was diagnosed with PDD-NOS)
Additionally:
Diagnosed Autism group: 30, 21 (70%) with "very likely Aspie"
Diagnosed AS/HFA group: 158, 99 (63%) with "very likely Aspie"
Diagnosed PDD-NOS group: 0 (nobody in the whole sample indicated a PDD-NOS diagnosis)
I don't think these results resolve which test is best.
A tutorial with accompanying references can be found here.
Comparing the abilities of the AQ and the Aspie Quiz to identify PDDs can only really be done properly with such an analysis, provided the samples selected are uncontaminated and unbiased. Plotting the fractions of true positives versus false positives for each cut-off gives a far clearer picture of how the data behave. It also allows one to determine whether any differences in discriminant power between two tests are statistically significant. It is therefore a vital analysis to perform.
I recall you included self-diagnosed due to low numbers. Is this still the case?
I agree.
A thorough analysis would be required to determine this (as long as the samples are unbiased/uncontaminated). It cannot be properly determined from the above figures alone.
I read the blog entry; you seem to change your mind about your comment above without indicating why, stating again that the Aspie Quiz seems to be better at discriminating. Without a thorough analysis this is pure speculation, and there is then further speculation built upon it as to which AQ questions make the Aspie Quiz do a better job at discriminating AS/HFA. The discrimination issue has not been settled yet, let alone an analysis of which AQ questions might affect its discriminant power, so the blog entry seems premature.
It's also unclear whether the contamination issues in F1 were resolved.
A tutorial with accompanying references can be found here.
Comparing the abilities of the AQ and the Aspie Quiz to identify PDDs can only really be done properly with such an analysis, provided the samples selected are uncontaminated and unbiased. Plotting the fractions of true positives versus false positives for each cut-off gives a far clearer picture of how the data behave. It also allows one to determine whether any differences in discriminant power between two tests are statistically significant. It is therefore a vital analysis to perform.
This is impossible to do.
1. It is impossible to confirm / reject diagnosis of participants
2. It is impossible to get an unbiased sample
The only solution to this problem is to accept the procedure that has already been carried out on the AQ test, and compare actual participants' scores on the tests. This will tell us about the relative discriminative power of the tests, but not about their absolute power.
I recall you included self-diagnosed due to low numbers. Is this still the case?
No, the groups denoted as "diagnosed" do not contain the self-diagnosed. What you asked about before was the male/female AS/HFA/PDD group, which was not denoted as diagnosed, and thus contained both self- and professionally diagnosed participants.
All the blog data are based only on the professionally diagnosed groups, just like the direct SQL data above.
The relative comparison clearly shows that Aspie-quiz did a better job than the AQ test with these participants. A ROC analysis cannot be carried out on Aspie-quiz (or the AQ test) using the data from Aspie-quiz. However, it is perfectly valid to compare different groups between the tests and draw conclusions about relative discriminant power. These analyses clearly established that Aspie-quiz did a better job in both versions: when the AQ test gave higher scores, the highest score difference was in the control group, while when Aspie-quiz gave higher scores, the highest score difference was in the diagnosed group.
Not sure what the contamination was about, but if you mean the PDD group, nobody even indicated being self-diagnosed with PDD in R4. In F1, PDD was included in AS/HFA.
BTW, I started a new version checking the EQ test instead, as I think the EQ test is much worse than the AQ test.
I'll actually do the ROC test on the only possible configuration: comparing the self- and professionally diagnosed with the rest of the sample. This is the only reasonable way to go about it, since it is impossible to verify whether people who have not indicated a professional diagnosis would be diagnosable or not. This comparison is also interesting because it checks how well people's own perception of being different is picked up by the AQ test and Aspie-quiz.
After reading the description thoroughly, I can also see several problems with Simon Baron-Cohen's claims about the AQ test:
This is a big problem for the AQ test, since the AQ test has been constructed from the gold standard. How did Baron-Cohen handle this problem?
This is similarly a big problem for the AQ test, since the control sample seems to have been selected in an extremely neurotypical environment. All the questionable cases seem to have been absent.
Aspie-quiz also suffers slightly from the first problem: because so many people have done it, much of the inspiration for self-diagnosis could potentially come from the issues in Aspie-quiz.
Some ROC analysis results on the self- plus professionally diagnosed population vs everybody else, for F1 and R4:
F1:
Aspie-quiz: AUC = 0.776, std error = 0.0331, 95% CI = 0.712 - 0.832
AQ: AUC = 0.771, std error = 0.0334, 95% CI = 0.707 - 0.828
R4:
Aspie-quiz: AUC = 0.761, std error = 0.0350, 95% CI = 0.696 - 0.818
AQ: AUC = 0.731, std error = 0.0372, 95% CI = 0.664 - 0.791
This was unfortunately done with only 200 subjects (it would be possible to do it on 715 and 648) because of limitations in the ROC software I downloaded. Given the close match between the tests, it seems inconceivable that a population of 715 would settle the issue of which test is best. At least I won't pay for a MedCalc licence just to do this analysis. If somebody has a ROC application that can handle 715 samples and wants to do this, I can send CSV-formatted data.
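As a possible workaround for the 200-subject limit: the AUC is equivalent to the normalized Mann-Whitney statistic, so the point estimate can be computed for any sample size with a few lines of code rather than dedicated ROC software (standard errors and confidence intervals would still need a proper method such as Hanley-McNeil). The scores below are made-up toy data, not the F1/R4 samples.

```python
# AUC via the Mann-Whitney interpretation: the probability that a randomly
# chosen diagnosed score exceeds a randomly chosen control score, counting
# ties as half. Works for any sample size; data below is invented.

def auc_mann_whitney(pos_scores, neg_scores):
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

pos = [33, 35, 28, 40]  # diagnosed group (toy AQ-style scores)
neg = [20, 31, 15, 22]  # everybody else (toy AQ-style scores)
print(auc_mann_whitney(pos, neg))  # 0.9375
```

The nested loop is O(n*m); for a 715-subject sample that is still only a few hundred thousand comparisons, which runs instantly.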
2. It is impossible to get an unbiased sample
True. 1 is impossible, and I don't see how the issues in 2 can ever be minimized enough. Also, the diagnosed groups would have been evaluated using various instruments, given the lack of consistency in diagnostic practices. If it weren't for 2, I think the most that could be done would be to simulate how various fractions of diagnosable people in an NT control group would affect the results.
This is a big problem for the AQ test, since the AQ test has been constructed from the gold standard. How did Baron-Cohen handle this problem?
In the AQ paper of Woodbury-Smith et al. (2005), which measured an AUC, they do not address this matter. However, there is not much that can be done about it anyway, because the diagnostic criteria are behaviourally based, and tests such as the AQ and Aspie Quiz are also based on behavioural traits rather than being more independent tests such as brain scans or blood tests (which do not exist for identifying PDDs). What researchers should do, however, is mention such drawbacks when publishing results.
This is similarly a big problem for the AQ test, since the control sample seems to have been selected in an extremely neurotypical environment. All the questionable cases seem to have been absent.
Aspie-quiz also suffers slightly from the first problem: because so many people have done it, much of the inspiration for self-diagnosis could potentially come from the issues in Aspie-quiz.
The control group in the original AQ paper was selected randomly anyway, and the test was posted out to them. That is more valid than selecting from narrower populations, such as those present on the internet, which would carry more bias. In the Woodbury-Smith et al. (2005) study, the participants consisted of those who were referred to the CLASS clinic, so they did include many borderline/questionable cases; the clinicians then conducted assessments on that population to gather their true-positive and true-negative data for AS/HFA. The AQ wasn't based on milder PDD cases anyway.
I don't think the above results can resolve the AS/HFA issue. Even ignoring unbiased samples and other issues, the accuracy of the tests at classifying whatever groups you do have still varies with the cut-off values chosen (which can be seen from their curves), so the various cut-offs for both tests would need to be examined, not just the two pre-selected ones. The number of correctly identified cases was found to be greatest at around the 26 cut-off for the AQ in one study, but in the original paper, I think the accuracy was about the same for 26 and 32. For this analysis, the cut-off that correctly identifies the most cases is likely to be different again (especially because the samples are biased and the diagnosed sample is not really AS/HFA). It would be necessary to compare how the two tests perform in identifying the given groups using their most effective cut-offs; i.e., the points at which the numbers of true positives and true negatives produce the most correctly classified cases for each test.
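Sweeping all cut-offs for the one that classifies the most cases correctly, as suggested above, is straightforward to sketch. The scores below are invented AQ-style toy data for illustration only, not results from either study:

```python
# Find the cut-off that maximizes the fraction of correctly classified cases:
# diagnosed subjects scoring at or above the cut-off plus controls scoring
# below it. Data is made up purely for illustration.

def best_cutoff(diagnosed, controls):
    """Return (cutoff, accuracy) with the highest classification accuracy."""
    total = len(diagnosed) + len(controls)
    best = (None, 0.0)
    for c in sorted(set(diagnosed + controls)):
        correct = sum(d >= c for d in diagnosed) + sum(x < c for x in controls)
        acc = correct / total
        if acc > best[1]:
            best = (c, acc)
    return best

diagnosed = [38, 34, 29, 41, 26, 33]  # toy diagnosed-group scores
controls = [14, 22, 27, 18, 12, 30]   # toy control-group scores
cutoff, accuracy = best_cutoff(diagnosed, controls)
print(cutoff, accuracy)
```

Only observed score values need to be tried as cut-offs, since accuracy can only change at those points. A variant of the same sweep using sensitivity + specificity - 1 (the Youden index) is often preferred when the two groups differ in size.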
I don't know which one is more reliable. I know that on every test I've taken online I've gotten the probably/likely/probable results. For the Aspie quiz I think I got 180 or somewhere around that, with an NT score of 20, if I'm remembering correctly.
I think I got 40-42 on the AQ test. On the AS test 'danielismyname' posted, I got a 31 and it said Asperger's was probable.
But I'm still hesitant to say that I have Asperger's out loud, really, until I talk to a psychiatrist. Which will hopefully be soon when I finally get insurance.
ElizabethInDallas
I think there is an over-reliance on testing. After all, Asperger was the one who first identified the ism, and reinventing his diagnostic criteria looks to me like a simple means of trying to profit off us. Numbers are only numbers. As with any other physical condition, tests are a benchmark, nothing else. A positive diagnosis is still the final judgement call of the professional familiar with the patient, and is NEVER error-proof. Witness Exhibit A: I was tested for thyroid disorder by multiple doctors for over a decade and tested 'normal' throughout. Only upon sonogram and CAT scan (at my insistence, since I had all the classic hypothyroid and adrenal failure symptoms) did they discover 90% of my thyroid was gone. As in, never had it... never will. I was born with a tiny fragment on one side, and the hormones they tested for (TSH and T4) tell NOTHING about whether the thyroid itself is functioning. They tell whether the pituitary is sending the thyroid the correct message -- that's ALL. I was very ill when someone finally ran the scans to get me to shut up and leave them alone. Problem is, I was right. If I hadn't insisted, I could very well be dead now.
My son has a positive diagnosis of AS by two highly-qualified doctors, although his school district diagnosticians refused to acknowledge it (anything under the autism umbrella costs $$ to accommodate), and preferred to slap an ED label on him for Anxiety Disorder. Funny... Anxiety is one of the primary symptoms of kids with AS... especially the gifted ones.
As for your diagnosis, GOOD LUCK. There is very little research, and very few individuals qualified to give an adult diagnosis. These tests are based primarily on children... and male children at that.
I suggest reading some of Tony Attwood's literature on female AS, as well as Rudy Simone's book, "Aspergirls: Empowering Females With Asperger Syndrome". Rudy also writes an Aspergirls column for Psychology Today that you should check out.
I personally did not get my own diagnosis, nor could I locate a professional who was familiar with adult AS or the female presentation of AS. My diagnosis came from my son's diagnostic pediatrician, during his evaluation of my son. The doctor also has AS himself. He didn't know of anyone in the metroplex who would be able to do any better.
My opinion is, read Rudy Simone's book, as well as a couple of Tony Attwood's. Once I found out what ism we were dealing with, there was a sinking in my gut as I read these (followed by the tears of recognition, mourning... and relief at finally having answers). It's just too new a condition, and I personally have no trust in questionnaires. They are written mostly by pointy-headed researchers with no true understanding of the condition. They're still learning, too. They read books... and books on books... and very few if any have hands-on experience or a personal investment in being correct. That, and most psychiatrists/psychologists I know are every bit as screwed up as the rest of us -- if not more. Don't take those things as gospel. You know yourself every bit as well as they do. If it doesn't have the ring of truth, don't buy into it. You have to be your own advocate and not let up until you are satisfied with what you find. Every one of us who blindly accepts a questionnaire diagnosis while still having doubts is doing all of us a disservice -- but most critically to themselves.
I wish you the best! I hope you find your answers!