Originally Posted by dopushups
I've watched this thread the last couple days and figure it is time to join in.
A couple of inaccuracies here.
First- I'm not really making this an issue. The internet did. I did the study because some in the community were upset by my not allowing one optic in one course.
Second- You suggest here that this was done for publicity. You have been involved with threads on other sites, so you know that I took steps to ensure this was not a point of contention. One of the main ones was releasing a clean report on the other forums, without any links to my website. I also delayed any social media posting on it for 48hrs- which, as most know, is the sweet spot for those looking to generate views.
Third- There are other Unit folks that educate students on this issue. Many consulted on the report. Many well-known competition shooters and instructors did as well. However- they all refuse to publicly discuss the issue they have personally seen and ONLY mention it in courses, many accompanying it with the entire class doing a "story board" live shoot to see the actual effects. The reason for this is comments like this one and the ensuing drama that accompanies the emotional response on some of the more toxic forums. You won't see any public support for the article because most Unit guys in the industry do not enjoy hopping into internet conversations like this, or the forums in general. Before anyone asks- no, I won't drop names. If they want to join the circus, that will be their choice.
Fourth- "Everything with glass has parallax" is not a useful statement- unless you don't care about inaccuracies, or about how they stack with other error sources into a compounding inaccuracy. That would be like saying "all bullets have drop, so it doesn't matter".
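To put rough numbers on that (these are illustrative figures, not anything from the report): say your rifle, ammo, and shooting ability hold 2 MOA, and an off-axis head position moves the dot another 4 MOA. If those errors are independent, they combine roughly root-sum-square: sqrt(2^2 + 4^2) ≈ 4.5 MOA, which at 50 yards is over 2 inches of shift. The parallax error didn't just add a little on top- it became the dominant term.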
So, either way- you should care or you shouldn't. If you think the results of the report are BS- the protocols used are available for download. You can replicate it and produce data that disproves the results. If that isn't your contention and you just think it doesn't matter- that's fine as well. It is simply data. It is no different than battery life, size, weight, form factor, light transmittance, etc. You decide whether the capabilities, limitations, and characteristics meet the requirements for your usage and make the call on the equipment.
I think some are taking this report, which only measures one aspect of an optic, as a buyer's guide. It is not and should not be taken that way. It should also not be taken as an absolute representation of real-world performance. Some of the optics tested consistently with observed performance; some did not. The MRO is a prime example. Its test results were horrible. That being said- I have yet to see a student with an MRO exhibit any significant POI shift between groups. The main reason for this is that the degree of aiming dot deviation as it relates to the degree of viewing angle was not measured. The reason it was not measured is that we wanted the test to be as simple as possible to allow anyone to replicate it. This worked out very well because, as you can see in the report, we had numerous remote testers including: an Army SFG, a few SWAT teams, individual LE officers, and civilians. (In case those in this thread were unaware, to avoid any perception of bias- I did not personally involve myself in any of the testing. The volunteer testers elected their own test administrators within each group and ran the tests. I just entered the results into the spreadsheets to produce the graphs and averages- a quick sketch of what that reduction looks like is below.)

At what point a dot starts to move, and during what degree of viewing angle, is a pretty important consideration. And it does vary. Here is a quick video I did showing the movement of 4 of the optics that were tested in the report- an EXPS 3-0, a T-1, an MRO, and an LCO:
If you slow down the video or pause it at points, you can clearly see some optics will start to deviate immediately after departing from the centered axis of view. Some of the optics were more sensitive to horizontal movement, some to vertical, etc. All were different.
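As promised above, here is a minimal sketch in Python of the spreadsheet step- how the remote testers' raw POI-shift entries get reduced to per-optic averages. The optic labels and every number below are made up for illustration; this is not the report's data or its protocol, just the shape of the reduction.

from statistics import mean, stdev

# Hypothetical raw entries: each optic label maps to the POI shifts
# (in inches) reported by the remote testers. Made-up numbers only.
results = {
    "Optic A": [0.9, 1.1, 0.8],
    "Optic B": [2.8, 3.1, 2.5],
    "Optic C": [1.0, 0.7, 1.2],
}

# Collapse each optic's entries into an average and a spread-
# the same reduction a spreadsheet does before graphing.
for optic, shifts in results.items():
    print(f"{optic}: avg {mean(shifts):.2f} in, spread {stdev(shifts):.2f} in")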
This shouldn't be an emotional point of contention. It is just data. Either it is valid or invalid. Either it matters or it doesn't. It isn't a buyer's guide. There were no "winners". I did not do the report to justify a decision I made for one course, because I don't need to do that. The hope was to change something in the industry, and that thing was this very conversation. People get emotionally attached to gear. Broad statements are made about what people should or shouldn't use without any hard data to back them up, and without knowing the user's specific requirements. A lot of this is due to the fact that not a lot of actual testing data gets released from the .gov or .mil circles. The goal was to raise the bar a bit and encourage guys that didn't agree with the report to actually go out and replicate it to disprove it. I would honestly be very happy if someone did. The simplest way to defuse some of the industry toxicity, brand fanboys, and general marketing hype is to foster an approach to equipment evaluation that removes emotion from the equation.