Blinded by Science: Examining the Australian Government’s Sexual Assault Statistics to Expose How Such Science Is Derived, How It’s Applied, and Why It’s Not Really as Scientific as It’s Represented to Be

Posted on December 6, 2014

Here is the Australian government’s Institute of Family Studies’ sexual assault “Facts & Figures” page.

And here is the first thing it says: “Statistics carry significant power and persuasion.”

That’s putting it mildly. That power and that persuasion influence lives on a scale no numbers could quantify. Appreciate that figures concerning sexual assault, and the ways those figures are popularly exploited, influence court rulings in every case that touches on violence or the purported fear of it, including civil and family court matters and cases based on allegations of harassment, stalking, child abuse, and/or domestic violence, among others.

You’ll encounter these statistics bruited ubiquitously on the Internet.

“Sexual assault statistics are based on two main types of data,” according to the Australian government website:

  • victimisation survey data—data collated from surveys conducted with individuals, asking them about their experiences of sexual assault victimisation, regardless of whether they have reported to police; and
  • administrative data—data extracted through the various systems that respond to sexual assault (e.g., police, courts, corrections or support services).

Important to note at the outset of this discussion is that statistics often quoted by advocates and commentators of one stripe or another (including journalists) may originate from survey responses, that is, from “intelligence” that may be unqualified by any corroborating investigation. Though this post looks at Australian statistics, figures cited as originating from the United States, for example, are derived the same way. When a statistic is phrased “[x number] of [men or women] report being the victim of [a given offence],” that figure was derived from survey responses.

The Australian Institute of Family Studies draws its statistics from six national surveys. This number suggests scrupulous science, but no ascertainable accuracy can be ascribed to the raw data, which is anecdotal.

The 2012-13 Crime Victimisation Survey (CVS), for example, one of the six surveys from which the Australian government draws its statistics, is based on interview responses from one member (“selected at random”) of each of 30,749 “fully responding households.” That is, it rests on the personal interpretations and alleged experiences of fewer than 31,000 people, a study sample that represents about a tenth of 1% of the Australian population. What percentage of this sample is male and what percentage female isn’t reported on the CVS webpage (though other surveys, like the Personal Safety Survey, do report gender-specific conclusions).
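
As a rough arithmetic check on that last figure (and assuming, purely for the sake of the sketch, a resident population of roughly 23 million in 2012-13, a number that isn’t taken from the CVS page itself), the sampling fraction works out like this:

    # Rough sampling-fraction check; illustrative only.
    # Assumption (not drawn from the CVS page): about 23 million Australian residents in 2012-13.
    respondents = 30_749        # one randomly selected member per responding household
    population = 23_000_000     # assumed resident population

    fraction = respondents / population
    print(f"{fraction:.4%}")    # prints 0.1337%, i.e. about a tenth of 1%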

Survey-based statistics are among the sorts you’ll encounter broadly promulgated in feminist “fact sheets” and brochures—and consequently everywhere else.

Important to consider, furthermore, is that “administrative data” (police and court statistics), the second data set from which government figures are derived, may itself be influenced by the former sort of data. Survey responses, much touted, may exert a direct influence on how officers of the law and courts are trained to respond to or interpret allegations, or an indirect one by having inspired the direction of the social science research that’s used for training. The former data, survey responses, may in other words determine the conclusions and actions of agents of the justice system to some degree, and possibly to a very considerable one.

“Statistics carry significant power and persuasion,” and neither police officers nor judges are any less susceptible to that power and persuasion than anyone else. In fact, more than almost anyone else, they are required to absorb these statistics.

Granted, survey statistics are probably as comprehensive as it’s practical for them to be, and the contrary statistics with which advocates for disenfranchised groups like battered men rejoin these figures may themselves be based on surveys of even smaller groups of people. All such studies are subject to sampling error, because there’s no practicable means of interviewing an entire population, and sampling error is hardly the only error inherent in such studies, which rest on reported facts that may be impossible to substantiate.
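
To give a concrete sense of what the acknowledged sampling error amounts to, here is a minimal sketch of the textbook margin-of-error calculation for a proportion estimated from a simple random sample. The victimisation rate used is hypothetical, and a real survey’s published error estimates reflect its own sample design and weighting rather than this simplification:

    import math

    # Textbook 95% margin of error for a proportion from a simple random sample.
    # Illustrative only: n is the CVS respondent count; p is a hypothetical rate.
    def margin_of_error(p, n, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    n = 30_749   # CVS respondents (one per "fully responding household")
    p = 0.05     # hypothetical reported victimisation rate of 5%

    print(f"{p:.1%} plus or minus {margin_of_error(p, n):.2%}")   # about 5.0% plus or minus 0.24%

The narrowness of that figure is exactly the point: the “error” such surveys formally acknowledge concerns only who happened to be sampled, not whether what respondents reported can be substantiated.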

What must be appreciated in all of this is that what’s called “science” is far from certain and is no more verifiable or credible than are responses to online petitions like this one: “Stop False Allegations of Domestic Violence.” Both types of data, that is, are anecdotal.

The significant difference is that respondents to petitions aren’t “randomly selected” or interviewed by trained questioners. There are no “controls.”

So-called controls, however, may themselves influence findings.

Government surveys are inherently biased insofar as their aim is to collect information according to specific questions. The questions determine the nature and bounds of the responses to them and are determined by designated topics of interest.

Petitions in contrast place no constraints on respondents’ comments—and indirectly garner uninhibited answers to questions like, “Have you or someone you know been the victim of fraudulent abuse of court or state process?”

They garner answers to questions, that is, that the government doesn’t care to ask.

Copyright © 2014 RestrainingOrderAbuse.com