Stand your ground/castle law studies

Jimboh247

New member
I did not realize that "castle" doctrine, in some states, extends outside one's own home.

I was always under the assumption that as long as I was in my house, or on my property, I did not have to retreat.

If given the choice of standing my ground in public or retreating to a safe area, I'd definitely retreat. If, and only if, the aggressor continued would I consider lethal force.

I understand "stand your ground" on your own property, not so much in public.
 

Jimro

New member
The math from the Texas study is tortured.

Florida's overall violent crime rate per 100k residents, 2005 to 2012:

2005: 702.2
2006: 705.8
2007: 705.5
2008: 670.3
2009: 604.9
2010: 542.9
2011: 519.3
2012: 492.6

Clearly, castle laws do not have a negative impact on the total rate of violent crime.

As far as "murder" and "manslaughter" go, the numbers are all over the chart, and whether they show an increase or a decrease depends on where you set the zero. Setting the zero at 2005, when Florida had the lowest number of homicides in a 20-year span, makes the following years show growth in homicides.

However, if you set the zero at 1992 (1,191 homicides), there is a reduction in the total number of homicides by 2012 (1,009 homicides), despite a 5.5-million-person increase in the population. The authors failed to account for historical variability as a confounding factor in their methodology, and the review board should have caught that.

So violent crime has been on the decline in Florida since 1992, while homicide counts have fluctuated between a high of 1,191 and a low of 881: an average of 977 homicides per year, with a standard deviation of 115. Note, however, that these are total homicides for the state of Florida, not homicides per 100k residents; the per-100k rate has gone down almost continuously (save for a slight bump in one year) because the population grew faster than the homicide count.
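To make that arithmetic concrete, here is a minimal Python sketch. The list of counts is a placeholder, not the real Florida series; only the 1,191 high, the 881 low, and the 2012 figure of 1,009 come from the numbers above, and the population figure is a rough assumption.

import statistics

# Placeholder yearly homicide counts -- only 1191, 881, and 1009 are
# figures quoted in this thread; the rest are invented so the sketch runs.
homicides = [1191, 1103, 1012, 967, 881, 903, 1009]

mean = statistics.mean(homicides)
sd = statistics.stdev(homicides)  # sample standard deviation
print(f"mean {mean:.0f}, std dev {sd:.0f}")

# Convert a raw count to a rate per 100k residents.
population_2012 = 19_300_000  # rough Florida population, an assumption
print(f"2012 rate per 100k: {1009 / population_2012 * 100_000:.1f}")

# A year is plausibly "normal variation" if its z-score is small.
print(f"2012 z-score: {(1009 - mean) / sd:+.2f}")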

So, looking at the methodology makes me think that the authors started with a conclusion, then twisted the math to dress the argument up with a pseudo-scientific justification.

Jimro
 

overhead

New member
I read the first study, and it appears to me that they went to a lot of trouble just to end up with what amounts to an incredibly small sample size and the old "correlation does not equal causation" problem. They did a wonderful job of making it as complicated as possible, though. Of course, that is coming from a guy who barely squeaked by statistics and could not stand econ classes.

The second study might be interesting, but I am not willing to pay $5 to read it, and the summary does not tell me much about their methods.

The premise of the first seems to be that lowering the "cost" of shooting someone in a self-defense situation (removing civil liability and making it easier to justify the use of deadly force in self-defense) would increase the number of people willing to shoot someone in a questionable self-defense situation instead of just "retreating" out of danger. I cannot imagine that, in the heat of the moment, people are weighing this sort of thing. If they are, I would suggest they are probably not under an imminent threat of great bodily harm or death. That being said, I am open to having my opinion changed, but the first study did not do it for me.
 

kevinjmiller

New member
Just reading through the Texas A&M study (I didn't read the Georgia State one because it appeared to require a paid subscription), I noticed that the authors included statistics for burglary and robbery even though such crimes are not addressed as part of SYG or even CD laws. That might be because Texas is one of the few (only?) states that allows, by legislation and common law, the use of deadly force in defense of property, but one would think that competent, unbiased researchers would know this and factor it out accordingly. They did not: "To the extent that criminals respond to the higher actual or perceived risk that victims will use lethal force to protect themselves, we would expect these crimes [burglary, robbery, and aggravated assault] to decline after the adoption of castle doctrine." (4.2 Deterrence, p. 17)

To me this is, at best, a red herring, and more likely it points to logical fallacies in the study. I did not delve into the myriad statistical details of the study (proof by intimidation?) to see how much the inclusion of burglary and robbery data corrupted other aspects of the conclusions, but I lean toward the same conclusions as Jimro and overhead: this study is flawed in composition, execution, and conclusion.
 

2damnold4this

New member
Jimro, there does seem to be a large variation in murder rates from year to year. Some of the SYG states showed a drop in murder rates that beat the regional and national averages, but others lagged behind. The Texas A&M folks claim to have statistically significant results; I'm not good enough with math to tell.


Kevinjmiller, the Texas A&M folks checked to see whether SYG laws might have a deterrent effect on burglary, robbery, and aggravated assault. The reason they gave for checking was that some backers of SYG laws have claimed that these laws help prevent crime.

Overhead, it did seem to be a reach to attribute the increased murder rates claimed by the authors to SYG laws.


One thing the A&M folks mention is the possibility that the "extra" murders were justifiable homicides incorrectly classified in the FBI's UCR. They say they don't think it likely, but it is possible.


What I'd like to know is whether these "extra" homicides occurred in the home, at a place of business, or on the street.
 

Evan Thomas

New member
I also don't intend to spend money to read the Georgia study.

The Texas study uses a definition of "castle doctrine" that's a bit different from what we're used to: they use "expanded castle doctrine" to refer to what we'd call "stand your ground" laws. (See table 1, p. 36, for a summary of laws in the states they studied.)

Be that as it may, a couple of posts seem to reflect a basic misunderstanding of the methodology of the Texas study. The authors are using results across states to compare expected and actual changes in crime rates, with adoption of castle doctrine laws as the independent variable. From the introduction:
. . .we primarily identify effects by comparing changes in castle doctrine states to other states in the same region of the country by including region-by-year fixed effects. Thus, the crucial identifying assumption is that in the absence of the castle doctrine laws, adopting states would have experienced changes in crime similar to non-adopting states in the same region of the country.
The graphs in Figure 1 make this comparison directly for experimental and control states. The data in Figure 2 are before-and-after (adoption of "castle doctrine" laws) comparisons within states, but they're comparing the differences from the control (non-castle-doctrine) states. They show consistent increases in those differences after the adoption of the new laws.
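For anyone who wants to see the shape of that design, here is a minimal difference-in-differences sketch in Python with statsmodels. This is not the authors' code: the tiny state-year panel and its numbers are invented, and a real version would add the paper's controls and cluster standard errors by state.

import pandas as pd
import statsmodels.formula.api as smf

# Invented state-year panel: 'treated' turns on once a castle doctrine
# law is in effect in that state.
df = pd.DataFrame({
    "state":   ["FL", "FL", "GA", "GA", "AL", "AL"],
    "region":  ["South"] * 6,
    "year":    [2004, 2006, 2004, 2006, 2004, 2006],
    "treated": [0, 1, 0, 0, 0, 0],
    "log_homicide_rate": [1.72, 1.80, 1.61, 1.63, 1.68, 1.71],
})

# State dummies absorb each state's baseline level; region-by-year
# dummies absorb region-wide shocks. The 'treated' coefficient is then
# identified by comparing adopting to non-adopting states within the
# same region and year.
model = smf.ols(
    "log_homicide_rate ~ treated + C(state) + C(region):C(year)",
    data=df,
).fit()
print(model.params["treated"])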

It misses the point to critique the study on the basis of changes in crime rates within a particular state. The fact that one doesn't understand the methodology doesn't invalidate it.

As to robbery and burglary, it would be odd if they were excluded, given that muggings and other armed robberies, as well as most break-ins, are committed with the intent of stealing rather than committing mayhem on the victims; robbery and burglary rates are obvious dependent variables in a study of whether these laws deter crime.
 

2damnold4this

New member
The part that I have difficulty with is the math. We obviously have a lot of statistical noise, with murder rates fluctuating across states and regions over time. I don't have the math skills to check and see if the Texas A&M researchers are correct in their assertion that they got a statistically significant result. If they did find something significant, it could have implications for the coming debate over SYG laws. Maybe the NRC will check into the figures.


Perhaps someone who is a faculty member of a university can read the Georgia paper for free and report back.
 

Evan Thomas

New member
I don't have the math skills to check and see if the Texas A&M researchers are correct in their assertion that they got a statistically significant result.
Statistical significance isn't an "assertion"; it's a property of the results that's determined by the research design. Roughly, the significance level is the probability of seeing a result at least as extreme as the one observed if nothing but random variation were at work. So if a difference is significant at the .01 level, data like these would turn up by chance alone less than 1% of the time.

Table A1, p.42, gives the results, with significance levels, for the types of crime the authors studied before and after passage of various "castle doctrine" laws. Note that the study controlled for a number of possible confounding variables, including:
. . . policing and incarceration rates, welfare and public assistance spending, median income, poverty rate, unemployment rate, and demographics.

Statistical significance is a mathematical assessment of the difference between two (or more) sets of data; the meaning of "significance" in this context is a technical one. Whether the results are seen as important (for social policy decisions, for example) is a separate question.
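As a concrete illustration of that technical meaning, here is a toy permutation test in Python. Everything in it is synthetic; the point is only to show where a significance level comes from.

import random

random.seed(1)

# Synthetic before-to-after changes in a crime rate for two groups.
adopters = [1.2, 0.8, 1.5, 0.9]       # invented adopting-state changes
non_adopters = [0.1, -0.3, 0.4, 0.0]  # invented control-state changes

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(adopters) - mean(non_adopters)

# Shuffle the group labels many times. The p-value is the share of
# shuffles whose difference is at least as extreme as the observed one.
pooled = adopters + non_adopters
extreme, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if abs(mean(pooled[:4]) - mean(pooled[4:])) >= abs(observed):
        extreme += 1
print(f"two-sided p-value: {extreme / trials:.3f}")

A small p-value says only that a split this lopsided is unlikely under random labeling; it does not by itself say the effect is large, or that the causal story behind it is right.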
 

2damnold4this

New member
I guess I'm having trouble framing my question about the result. I understand what a statistically significant result means. Other folks have reached different results when looking at states that changed their castle/SYG laws (John Lott, looking at 1977 to 2005). What I'm asking is whether A&M plugged the numbers in right. If they did, why doesn't that jibe with Lott's earlier numbers?
 

Evan Thomas

New member
From the article you linked above: "Starting with Florida in 2005, at least 24 states have adopted some variation of a stand-your-ground law."

As to the comparison with Lott's data, that would seem to be your answer right there. His data end in 2005, while Hoekstra and Cheng are analyzing data from states that changed their laws in 2005; their pre-change period for comparison includes data starting in 2000, but they're interested in data from the point at which Lott's study ends. So there's no reason their data should be consistent with his, other than wishful thinking.
 

Jimro

New member
Aguila Blanca, how does study methodology relate to Vanya's post #4? I recommend you get a copy of "Studying a Study and Testing a Test" (http://www.amazon.com/books/dp/0781774268) to help you understand that not all research papers are well written or well reviewed.

As to the comparison with Lott's data, that would seem to be your answer right there. His data end in 2005, while Hoekstra and Cheng are analyzing data from states that changed their laws in 2005; their pre-change period for comparison includes data starting in 2000, but they're interested in data from the point at which Lott's study ends. So there's no reason their data should be consistent with his, other than wishful thinking.

It all depends on where you set the zero point. The underlying data set is the same, but where you place the zero in time determines whether any number of data sets appear to show a rise or a fall.

For example, if you set the zero point for "global warming" at 1900, you can demonstrate warming. If you set the zero point at the height of the Medieval Warm Period, you show no warming.
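The Florida homicide counts quoted earlier in this thread show the same effect. A minimal sketch, assuming (per the earlier post) that 2005 was the 881-homicide low:

# Homicide counts quoted earlier in the thread; 2005 is assumed to be
# the 881 low described above.
fl_homicides = {1992: 1191, 2005: 881, 2012: 1009}

def percent_change(base_year, year=2012):
    base = fl_homicides[base_year]
    return (fl_homicides[year] - base) / base * 100

print(f"zero at 2005: {percent_change(2005):+.1f}%")  # about +14.5%, a "rise"
print(f"zero at 1992: {percent_change(1992):+.1f}%")  # about -15.3%, a "fall"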

My previous analysis of the A&M study was meant to show that the researchers discounted normal variation in homicides by artificially limiting the scope of their investigation. To arrive at a meaningful confidence value, they have to show that their results are not just normal variation.

Alternatively, their "control" was any state that did not enact SYG laws, which includes a lot more states than any individual graph they showed. This means the control had a larger population to statistically normalize than the "experiment," which is bad design.

Jimro
 

2damnold4this

New member
As to the comparison with Lott's data, that would seem to be your answer right there. His data end in 2005, while Hoekstra and Cheng are analyzing data from states that changed their laws in 2005; their pre-change period for comparison includes data starting in 2000, but they're interested in data from the point at which Lott's study ends. So there's no reason their data should be consistent with his, other than wishful thinking.


We have two sets of data: one from 1977 to 2005 and the other from 2000 to 2010. The first shows that states that strengthened self-defense laws saw a decrease in murder, while the second shows that they saw an increase. If the changes in murder rates were due to the changes in laws, why did one study show a negative change and the other a positive one? Does changing the law have a beneficial effect if it is done in certain years but a detrimental effect if it's changed in other years?
 

carguychris

New member
...their "control" was any state that did not enact SYG laws, which includes a lot more states than any individual graph they showed. This means the control had a larger population to statistically normalize than the "experiment," which is bad design.
IMHO this is one of the two most readily apparent flaws in the study methodology, and it is related to the second: singling out Florida.

The authors justify this because FL was the only state to enact such a law in 2005, but the FL data also seem to show the most mathematically neat surge in homicides after the law passed, suggesting that the authors have fallen victim to the "Texas sharpshooter" fallacy.

http://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy
 

Jimro

New member
We have two sets of data: one from 1977 to 2005 and the other from 2000 to 2010. The first shows that states that strengthened self-defense laws saw a decrease in murder, while the second shows that they saw an increase. If the changes in murder rates were due to the changes in laws, why did one study show a negative change and the other a positive one? Does changing the law have a beneficial effect if it is done in certain years but a detrimental effect if it's changed in other years?

1977 to 2005 is 28 years of data.

2000 to 2010 is 10 years of data.

An intellectually rigorous person would give more weight to the conclusion derived from the larger data set. However, all of these statistical studies need very stringent controls to rule out confounding factors. Over the past 100 years, crime has been "associated with" (meaning someone got two graphs with the same or inverse curve over the same time period) the economy, the climate, population density, the number of first-generation immigrants, etc.

So saying that a specific law in a specific state caused a specific change in the homicide rate is a very risky argument to make with a large data set. For example, instead of running the math through a regression analysis, they could have compared two states of similar population size, density, and urban makeup in a "controlled pair study."

A controlled pair study is useful in medicine and other research, especially when you have a very small control population. It works particularly well with cities of similar size, density, and income distribution when analyzing the effects of laws (although state laws can become confounding factors).
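Here is a rough sketch of how such a pairing might be built. The covariates and numbers are invented, and a real study would standardize each covariate and match on many more of them.

# Invented covariates per state: (population in millions, percent urban).
states = {
    "A": (19.0, 91.0),  # the state that enacted a SYG law
    "B": (18.5, 89.0),
    "C": (6.0, 55.0),
    "D": (26.0, 85.0),
}

def distance(x, y):
    # Euclidean distance over the covariates (unscaled, for brevity).
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

treated = "A"
controls = [s for s in states if s != treated]
match = min(controls, key=lambda s: distance(states[treated], states[s]))
print(f"pair {treated} with {match}")  # "B" is the closest match

The comparison is then made within the pair, which keeps the control roughly the same size as the "experiment" instead of pooling every non-adopting state.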

Jimro
 