Decoding My Autism

For a long time, I've tried to understand what goes on in my brain. Along the way, I've learned some things and made discoveries that have helped me live a better life.

Dan Bell

How to Separate Fact from Fiction

Recently, in a Facebook group, I was explaining some of the brain chemistry involved in autism. Someone asked me where I was getting my information.


As I explained to him, the information I share, including that which is found in the articles on this blog, comes from pulling together dozens of different sources that I've read. I read books, studies, articles, and sometimes Wikipedia. As a note, I usually only use Wikipedia to help me understand what different chemicals and vitamins do, and will often read the cited sources.


Thankfully, I took a number of statistics and social science classes in college, which taught me about good research practices. It has helped me to recognize which sources to accept and which to doubt.


However, I recently had an experience that reminded me that you have to be cautious about which sources you accept. Some studies misrepresent their findings, or simply use bad research methods. This is especially true when it comes to autism, and the many efforts to try to show that vaccines cause autism.


The other day I got a message from my sister, along the lines of "Have you seen this?"


She shared a link to this article, titled "First-Ever Peer-Reviewed Study of Vaccinated vs Unvaccinated Children Shows Vaccinated Kids Have a Higher Rate of Sickness, 470% Increase in Autism"


My immediate thoughts? "WHA?!??!", followed by "Uh-Oh".


Why did this concern me? Because of a phrase in the title: Peer-Reviewed Study.


Peer-reviewed means that a study has been sent to other experts in the same field to review, to check the quality of the work and make sure the research was done using sound methods. If a study is published in a peer-reviewed journal, it is supposed to mean that the study meets the field's current standards for sound research.


Because of this, I'm reading this article, seriously wondering if it could be true. Was there really evidence that vaccines cause autism? After all, there was a study published just last month, studying over 600,000 children, showing that there was no link between the MMR vaccine and autism.


And I had just written an article a few weeks ago about the non-genetic factors in autism, linking to multiple sources saying vaccines don't cause autism. Did I need to make an edit to this article, saying "Oops, maybe they do?"


So I looked deeper into this. I wanted to read the actual study. Following the links took me to this study, titled "Pilot comparative study on the health of vaccinated and unvaccinated 6- to 12-year-old U.S. children"


Snopes.com has discredited this study, saying: "This study, with its suspect statistics and devil-may-care attitude toward methodological design, is a case-study in how to publish a misleading paper with faulty data".


Even before I found the Snopes report, reading the study and its methods led me to seriously question its validity. Rather than just bashing the article, I'll use it as an example as I talk about good research methods. This blog post is a bit longer and denser than most I write, but it will hopefully help you as you learn more about autism, and help you better spot whether or not a study can be relied on.


Things that are important in a good research study, each of which I will discuss in turn, include the following: a representative sample, a large sample size, good research methods, a low margin of error, and no research bias.


Representative Sample


The study says it looked at 666 homeschooled children. Both this number and "homeschooled" raised red flags for me. Not that I have anything against homeschooling - I was homeschooled myself.


Why did this concern me? One of the important things for a study to be valid is that it have a representative sample - meaning that the people in the study accurately reflect the characteristics of the larger group they are intending to study. Only a fraction of children in the US are homeschooled - it varies by state, but on average about 3% of children in the US are homeschooled.


So, immediately this means that the study subjects don't represent the population as a whole.


Additionally, the study says that of the families of the kids studied, 92.5% were white, 91.2% were Christian, and most came from wealthy households. That's hardly representative of children as a whole; autism is found across all cultures, economic groups, and ethnic groups.


Large sample size


Second, the issue of the number of subjects. Again, this isn't about any taboo about the number 666. It's just that it's a small number. When it comes to a research study, it's best to have a large sample size. The more research subjects you have, the better. It reduces the chances that quirks about the individual people are going to influence the results.


As an example, say you're trying to determine how patient children are when asked to wait. If you only study 3 children, all from the same family, your results aren't going to accurately reflect how children in general respond. They were all raised by the same parents, and are therefore likely to respond the same way. But if you studied the behavior of several thousand children, from different regions, you're much more likely to capture the typical behavior of children.
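You can actually see this effect with a quick simulation. This is just a sketch in Python with a made-up population (I'm assuming every child is patient 60% of the time): tiny samples give wildly different answers from one draw to the next, while large samples settle close to the true value.

```python
import random

random.seed(42)

def estimate_range(sample_size, trials=1000):
    """Draw many samples from a simulated population where each child
    is patient with probability 0.6. Return the lowest and highest
    'percent patient' estimate we got across all the samples."""
    estimates = []
    for _ in range(trials):
        patient = sum(random.random() < 0.6 for _ in range(sample_size))
        estimates.append(patient / sample_size)
    return min(estimates), max(estimates)

print(estimate_range(3))     # tiny samples: estimates swing wildly
print(estimate_range(3000))  # large samples: estimates cluster near 0.6
```

With samples of 3 children, the estimates range all the way from 0% to 100% patient; with samples of 3,000, every estimate lands within a few percent of the true 60%.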


666 is a fairly small number, hardly representative of the millions of cases of autism in the US.


Additionally, according to the data in the study, only 22 children in the study actually had autism - 19 vaccinated, 3 unvaccinated. The entire results and conclusions of the study rested on just 22 cases of autism - 3.3% of the children in the study.


Research Methods


It's also important that a study uses good data collection methods. Say I wanted to know the average income of your neighborhood. How would I do it? Imagine that I did so just by asking you if you live in a wealthy or poor neighborhood. That probably wouldn't give me a very accurate answer. How would you know how much your neighbors actually make? Or how much debt they have? How do you define wealthy or poor?


A more accurate method would be for me to ask each of your neighbors their household income and debt. That would allow me to come up with an average household income for your neighborhood.


However, my method might be met with some resistance. If I showed up at people's doors conducting a survey and asked how much their household income was, there would definitely be people who wouldn't want to tell me. Many would see it as an invasion of privacy.


While our example study did its best to preserve that privacy, it did so in a way that harmed the results of the study. Per the study, they collected their information via anonymous online questionnaires.


Another red flag. This means that the information was self-reported. Per the study, diagnoses were not confirmed with doctors. This can lead to false information. If someone other than a medical professional tries to make a diagnosis, they could downplay or exaggerate symptoms, miss them altogether, or label one thing as another. A child assumed to have autism could, for example, in actuality be dyslexic and socially awkward.


Margin of Error


When a study reports data, it also has to report how accurate that data is. It does so by reporting the margin of error - a measure of how confident the researchers are that their methods were good and that their numbers are therefore accurate. In good research, that margin of error is small.


For example, say I'm reporting that a state gets an average of 40 inches of rain a year, with a margin of error of 1 inch. This means I am saying that, due to potential mistakes in my data collection methods, the actual average could be anywhere from 39 to 41 inches. There's some variation there, but not enough to cause much concern.


Now, what if I said the margin of error was 20 inches? This means that the actual average could be as low as 20 inches, or as high as 60 inches. That's a huge difference! You wouldn't want to rely on a study like that.
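The arithmetic here is simple enough to sketch: a reported average plus or minus its margin of error gives you the range the true value could fall in.

```python
def reported_range(average, margin_of_error):
    """The interval the true value could fall in, given a reported
    average and its margin of error."""
    return (average - margin_of_error, average + margin_of_error)

print(reported_range(40, 1))   # (39, 41) - a tight, trustworthy estimate
print(reported_range(40, 20))  # (20, 60) - far too wide to rely on
```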


In our example study, when they reported how many vaccinated vs unvaccinated children have autism, they reported an odds ratio, a measure that compares the relative odds of two populations with differing medical histories (i.e. vaccinated versus unvaccinated) developing a certain medical condition (in this case, autism).


They report an odds ratio of 4.2, meaning the odds of autism were 4.2 times higher among the vaccinated children. Along with the odds ratio, they also reported its margin of error: the true odds ratio could potentially be anywhere from 1.2 to 14.5! In other words, by their own numbers, vaccinated children could be anywhere from 1.2 to 14.5 times more likely to have autism. An odds ratio of 1.2 means there is hardly any relationship between vaccination and autism at all. This is a huge margin, indicating their methods aren't very accurate.
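For the curious, here's a sketch in Python of how an odds ratio and its standard 95% confidence range are typically computed from a study like this. The autism counts (19 vaccinated, 3 unvaccinated) come from the study's data; the group sizes of 405 vaccinated and 261 unvaccinated children are my assumption for illustration - they sum to the study's 666 and happen to reproduce its reported numbers.

```python
import math

# 2x2 table: autism counts from the study, group sizes assumed
a = 19          # vaccinated, autism
b = 405 - 19    # vaccinated, no autism
c = 3           # unvaccinated, autism
d = 261 - 3     # unvaccinated, no autism

# Odds ratio: (odds of autism if vaccinated) / (odds if unvaccinated)
odds_ratio = (a / b) / (c / d)

# Standard 95% confidence range for an odds ratio (log method)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se)
upper = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.1f}, range = ({lower:.1f}, {upper:.1f})")
# OR = 4.2, range = (1.2, 14.5)
```

Notice what drives that enormous range: the `1/c` term, where c is the 3 unvaccinated autism cases. With so few cases, the uncertainty explodes - which is the sample-size problem all over again.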


Research bias


Another thing to consider is whether those conducting the research have biases about the research topic. Do they have an agenda that could cause them to skew the results of their work? Were they funded by an organization with an agenda?


This information can be a little harder to find. However, in our case, the main author of the study was Anthony Mawson, a Professor of Epidemiology at Jackson State University. He is also a vocal supporter of Andrew Wakefield, the doctor who first published the now-retracted study falsely claiming that vaccines cause autism.


It was also funded by two organizations, Generation Rescue (founded by anti-vax activist Jenny McCarthy), and Children’s Medical Safety Research Institute, founded by vaccine skeptic Claire Dwoskin.


Clearly, there was an agenda in this study.


Credible sources


You also have to watch out for whether the journal an article is published in is credible.


I get most of my studies from PubMed, a database of articles from different journals, created by the National Institutes of Health.


Our example study, however, was published in the Journal of Translational Science, a journal that has been accused of accepting articles for publication without editing, in exchange for a hefty $2000 publishing fee. It is not indexed by PubMed. In other words, this study may not have actually been peer-reviewed, which is exactly what caused me concern in the first place.


Being from a journal that does not edit content is the scientific equivalent of being published in a grocery-store tabloid. If the article you are reading does not cite sources, or give an author name, it is likely not worth your time.


Knowledge is Power


There are a lot of different sources of information out there, especially when it comes to autism. Many can be inaccurate or even intentionally misleading. Hopefully the tools I've shared in this article will help you weed out the good from the bad.


©2019 by Daniel G. Bell