WOUND BALLISTIC TEST METHODOLOGY:
KOCHER, LaGARDE, & EARLY MEDICAL RESEARCH:
Since the advent of armed conflict eons ago, combatants have attempted to discover methods to measure wounding effects and to increase the lethality of their weapons. The history of scientifically based wound ballistic research begins with Dr. Theodor Kocher’s work and Dr. LaGarde/COL Thompson’s studies in the late 1800s, along with those by DeLorme, Bircher, Stevenson, Longmore, and Makins. The efficacy of the concepts elucidated by these early wound ballistic researchers was proven on the far-flung battlefields of British colonial campaigns, the Spanish-American War, the Russo-Japanese and Turkish-Balkan conflicts, and during the carnage of World War One. Unfortunately, following WWI and for much of the 20th century, wound ballistics entered a metaphorical dark age in which the majority of research was marred by an erroneous emphasis on kinetic energy “deposit”, a failure to fully comprehend the physiologic and anatomic effects of temporary stretch relative to permanent crush along a projectile’s path through tissue, the use of tissue simulants that had no correlation with living tissue, and an over-reliance on flawed computer models.
KE & P(I/H):
Through the 1950s and 1960s, U.S. Army estimates of bullet lethality were obtained by firing projectiles into 20% ordnance gelatin, measuring the kinetic energy (KE) deposited in the tissue simulant, and then relating the “deposited” KE to some previously determined empirical relationship between KE and the probability of incapacitation given a hit, P(I/H). The KE theory postulated that the energy deposit measured in the first 15 cm of 20% gelatin correlated linearly with the volume of damage that would be found in 20.5 cm of tissue penetration. This methodology, used by the Aberdeen Ballistic Research Lab (BRL) and the Biophysics Laboratory of Edgewood Arsenal, was completely flawed for numerous reasons, including the fact that kinetic energy is not a wounding mechanism, that KE does not reflect anatomic and physiological damage from penetrating projectiles, and that the probability of a hit is a training function, not a wound ballistics issue.
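The bookkeeping behind “deposited” KE is simply the projectile’s kinetic energy on entry minus its remaining kinetic energy at the measurement depth. The following minimal sketch illustrates that arithmetic only; the function names and the example bullet mass and velocities are illustrative, not taken from the BRL protocol.

```python
def kinetic_energy_joules(mass_grams: float, velocity_mps: float) -> float:
    """KE = 1/2 * m * v^2, with mass converted from grams to kilograms."""
    return 0.5 * (mass_grams / 1000.0) * velocity_mps ** 2

def deposited_ke(mass_grams: float, v_entry: float, v_at_15cm: float) -> float:
    """Energy 'deposited' in the first 15 cm of gelatin:
    KE on entry minus KE remaining at the 15 cm mark."""
    return (kinetic_energy_joules(mass_grams, v_entry)
            - kinetic_energy_joules(mass_grams, v_at_15cm))

# Hypothetical example: a 10.2 g bullet entering at 440 m/s and
# passing the 15 cm mark at 300 m/s
print(round(deposited_ke(10.2, 440.0, 300.0), 1))  # -> 528.4
```

As the article notes, this number says nothing about what tissue was actually crushed or torn along the path, which is precisely why the methodology was flawed.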
RII/COMPUTERMAN:
The Relative Incapacitation Index (RII), developed by the National Institute of Justice Law Enforcement Assistance Administration in 1973, was an attempt to determine which handgun bullets would have the greatest wounding effect and would incapacitate a human most reliably. Using an overly simplistic “computer man” model of human anatomy, the RII erroneously assumed that the size of the temporary cavity produced by a given handgun bullet in ordnance gelatin is directly proportional to the wounding effect and incapacitation produced by that bullet in a human. The study recommended lightweight, high velocity bullets with rapid expansion in tissue and frangible, pre-fragmented bullets, such as the Glaser Safety Slug, as producing the greatest wounding effect and most reliable incapacitation in humans. The RII completely ignored the size and depth of the permanent cavity, the tissue which is actually destroyed by the bullet. Since many tissues in the human body are elastic, they absorb the stretch and tissue displacement produced by temporary cavitation with minimal damage.
Lightweight, high velocity handgun bullets which rapidly expand in tissue have decreased penetration depth compared to heavier, slower, less deformed bullets and cannot consistently reach the major organs and blood vessels in the torso, especially from transverse and oblique angles. Frangible handgun bullets designed to fragment on impact, like the Glaser and MagSafe, produce large shallow wounds, have extremely limited tissue penetration depth, and cannot consistently reach the major organs and blood vessels in the torso, especially from transverse and oblique angles. In addition, they cannot defeat commonly encountered intermediate obstacles.
Shallow penetrating, lightweight, high velocity, rapidly expanding bullets and frangible, pre-fragmented bullets were recommended because of the widespread fear of handgun bullet over-penetration, in other words, a bullet which completely passes through the body, exits the other side, and continues on to potentially endanger innocent bystanders. This feared hazard has been greatly exaggerated. The skin on the exit side of the body is tough, resilient, and flexible, and can have the same resistance to bullet passage as four inches (10 cm) of muscle. This often results in bullets ending their path just under the skin at the anticipated exit point rather than over-penetrating as might be expected. In addition, those few bullets which over-penetrate after hitting the target are not any more dangerous to innocent bystanders than the overwhelming majority of bullets fired by law enforcement personnel which miss the intended target altogether. According to Special Agent Urey Patrick, formerly Assistant Chief of the FBI Firearms Training Unit:
"Choosing a bullet because of relatively shallow penetration will seriously compromise weapon effectiveness and needlessly endanger the lives of law enforcement officers using it. No law enforcement officer has lost his life because a bullet over-penetrated his adversary, and virtually none has ever been sued for hitting an innocent bystander through an adversary. On the other hand, tragically large numbers of officers have been killed because their bullets did not penetrate deeply enough."
The RII was seriously flawed and its recommendations erroneous. Deeper penetrating bullets have proven to be far superior to shallow penetrating bullets in LE OIS incidents since they have sufficient penetration to consistently reach the major organs and blood vessels in the torso, even from transverse and oblique angles and through intermediate obstacles.
EKE/AKE:
An outgrowth of the earlier KE theory, the Expected Kinetic Energy (EKE) model was developed by the U.S. military in 1975 to assess bullet lethality; in 1977 this new EKE model became the U.S. recommended method for the NATO small arms trials and was established as the official Army model. The EKE model estimated P(I/H) by correlating the weighted sum of experimentally determined, incremental kinetic energy deposits in 20% gelatin with existing estimates of P(I/H) from animal experiments. Note--although the notation “P(I/H)” was and is used in the literature, the meaning assigned was the expected value of incapacitation given a hit; the preferred modern notation is E(I/H), the expected level of incapacitation, which avoids the widespread misunderstanding that “P(I/H)” is a probability of incapacitation. EKE was later renamed AKE (ARRADCOM Kinetic Energy) and remains a current Army and NATO standard.
To compute the AKE of a particular projectile, ARL obtained the velocity decay curve by shooting into a 38 cm long block of 20% gelatin. The event is recorded with high-speed cameras, and the velocity-versus-distance kinetic energy decay curve is extracted by analyzing the camera footage frame by frame--this is called “dynamic” gel testing. From this decay curve, ARL can derive the energy deposit function within the gelatin medium. This function is then fed into a complex algorithm to calculate the expected level of incapacitation given a hit, or E(I/H). The AKE method for bullets is based upon summing the incremental kinetic energy lost in the gel block, with each increment multiplied by the probability that the projectile is still in the body at that depth of penetration in the body component (thorax, abdomen, etc.) being evaluated. These probabilities have been generated for the whole body and for a number of specified major body components; they were estimated from horizontal shots on a number of shot-lines at different angles around a standing male body. This weighted value, AKE, is then inserted into an empirical correlation to predict a level of incapacitation given a hit. It is important to note that current dynamic testing (AKE and E(I/H)) actually measures the energy lost by the projectile, NOT the damage done by that energy.
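The weighted-sum step described above can be sketched in a few lines. This is only an illustration of the arithmetic, assuming evenly spaced depth increments; the increment energies, the residence probabilities, and the final empirical correlation to E(I/H) are placeholders, not ARL’s actual tables.

```python
def ake(ke_lost_per_increment: list[float], p_still_in_body: list[float]) -> float:
    """AKE-style weighted sum: incremental KE lost in the gel block,
    each increment weighted by the probability that the projectile is
    still inside the body component at that penetration depth."""
    if len(ke_lost_per_increment) != len(p_still_in_body):
        raise ValueError("increment and probability lists must align")
    return sum(dke * p for dke, p in zip(ke_lost_per_increment, p_still_in_body))

# Illustrative depth increments: KE lost (J) and residence probabilities
ke_lost = [120.0, 95.0, 70.0, 40.0, 20.0]
p_in    = [1.00, 0.95, 0.80, 0.50, 0.20]
print(ake(ke_lost, p_in))
```

Note that the result is still only an energy figure weighted by geometry; as the article stresses, it measures energy lost, not tissue damage done.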
Unfortunately, like its KE predecessor and the RII/COMPUTERMAN, the EKE/AKE methodology has numerous flaws: a continued reliance on kinetic energy deposit as a measure of wounding rather than an assessment of potential physiologic and anatomic damage; an overly simplistic and inaccurate COMPUTERMAN anatomic and physiological model that does not account for different tissue types along a shot-line through the body; the inability of the COMPUTERMAN model to assess shot-lines other than against a standing target or to account for intervening body sections; a blurring effect whereby projectiles with quite distinct real-world terminal performance end up reported as “just about the same” once expected levels of incapacitation are computed using the erroneous COMPUTERMAN model; an overemphasis on temporary stretch effects over permanent crush injuries; an inability to assess the synergistic effects of fragmenting projectiles; and a disregard for the requirement that projectiles have adequate penetration to reach critical anatomic structures deep within the body from any angle and despite intervening objects. AKE also fails to account for total projectile penetration, yaw effects, and bullet fragmentation. Finally, the dynamic AKE method requires expensive test measurement equipment and extensive data reduction and analysis.
LAIR, FACKLER, & IWBA:
A renaissance in wound ballistics began in the 1980s at the Letterman Army Institute of Research (LAIR) Wound Ballistics Laboratory under the direction of COL Martin Fackler. The researchers at LAIR shot multiple projectile types at varying velocities into 50-100 kg hogs as well as various tissue simulants in order to discover which tissue simulant most closely correlated with living muscle tissue. The final determination was that 10% Type A, 250 bloom Pharmagel (250A ordnance gelatin) at 4 degrees Centigrade was the tissue simulant that most closely correlated with living muscle tissue. Gelatin must have approximately the same density as the tissue it is simulating; both 10% and 20% gelatin can fulfill this requirement, but they do so at different temperatures. However, the traditionally used warmer 20% gelatin was determined to result in overexpansion of projectiles and excessive velocity retardation compared to the cooler 10% gelatin, which more accurately replicated the damage pattern seen in living tissue. Other advantages of 10% compared to 20% ordnance gelatin are decreased cost, simpler fabrication, and easier storage. Rather than relying on high speed motion picture analysis of gel block impacts and calculated KE loss, Dr. Fackler’s research at LAIR measured the actual damage the projectile did to the gelatin block by assessing the radial cracks and fissures in the gelatin--this is referred to as “static” gel testing. Compared to the dynamic method, static testing is extremely cost effective and does not require as much time, equipment, or infrastructure to conduct.
Dr. Fackler’s seminal work emphasized the anatomical and physiological effects of penetrating projectiles and clearly described the primary wounding mechanisms of tissue crush and stretch. His efforts also illuminated the effects of bullet upset--including yaw, fragmentation, and expansion--in modifying wounding effects. Dr. Fackler also emphasized the critical importance of adequate projectile penetration depth to ensure disruption of the major organs and blood vessels in the torso from any angle and through excessive adipose tissue, hypertrophied muscle, or intervening anatomic structures, such as a raised arm. The medical research at LAIR also debunked and decried the frequent overemphasis on kinetic energy, high velocity, occult pressure waves, and faulty computer modeling when attempting to analyze projectile terminal effects in the human body. Unsurprisingly, these research results led to a significant degree of conflict and animosity between the medical researchers at LAIR and the ordnance engineers at Aberdeen, Edgewood, and Picatinny.
Following his retirement from the military in 1991 and the closing of LAIR, Dr. Fackler founded the International Wound Ballistics Association (IWBA) to continue his research and data dissemination; the IWBA published a quarterly journal of research papers for the next decade. Some of the IWBA’s greatest contributions were correlating lab testing with LE OIS incident forensic data to validate the accuracy of 10% gelatin as a tissue simulant for shots to living human torsos, developing the 4-layer denim test to assess the ability of handgun JHP projectiles to resist plugging with clothing materials and robustly expand, describing the terminal performance variability of the SMK OTM commonly used by LE and military snipers, recommending heavier 5.56 mm projectiles, exposing exotic ammunition vendors making exaggerated, fraudulent claims, and arguing for better body armor testing standards than the flawed NIJ methodology.
FBI BRF:
In the wake of the FBI Miami shooting in 1986, the FBI launched an ambitious program to improve the state of LE wound ballistics. The FBI solicited input from individuals in the military, law enforcement, medical, engineering, and forensic communities who were widely respected for their wound ballistic expertise; with this guidance, the FBI Ballistic Research Facility (BRF) at the FBI Academy in Quantico, VA was established. Round table wound ballistic seminars were held by the FBI in 1987 and 1993. Like the researchers at LAIR, the FBI rejected the flawed “computer man” modeling, calculations based on kinetic energy, and exaggerated temporary stretch effects in favor of an anatomic and physiologic damage-based “static” analysis using 10% ordnance gelatin testing. Most importantly, the FBI BRF quantified adequate penetration depth for duty projectiles as being between 12 and 18 inches and established standardized intermediate barrier testing. In addition, the FBI BRF has documented the advantages gained in transitioning from handgun caliber sub-machine guns to rifle caliber carbines, such as the 5.56 mm M4, for LE entry and patrol use, emphasized the procurement of ammunition capable of defeating intermediate barriers with minimal reduction in terminal effectiveness (i.e. “barrier blind”), and designed and implemented the most comprehensive and innovative body armor assessment protocol in existence. The FBI BRF shares its expertise and testing acumen with U.S. military SOF organizations needing mission essential, time-sensitive, accurate, precise, real-world relevant wound ballistic data that is unavailable via the expensive, time consuming, bureaucracy-laden conventional military testing establishment. The FBI BRF provides crucial input regarding innovative new munitions developed to meet SOF warfighting needs, especially since the onset of anti-terrorist combat operations in the wake of 9/11/01.
COMPUTERMAN/ORCA:
U.S. Army Research Laboratory’s Survivability Lethality Analysis Directorate’s (ARL/SLAD) Operational Requirement-based Casualty Assessment (ORCA) computer modeling system was initiated in 1992 and has continued to the present. The COMPUTERMAN model of the human body is composed of a large number of horizontal cross-sections in which all tissues (muscles, organs, bones, blood vessels, and nerves) are dimensioned in detail. The limbs can be articulated to some degree (positions that cannot be created include arms or legs crossing in front of the body). Shot-lines through COMPUTERMAN are constrained to be straight lines between entry and exit points. A particular trajectory through the body is computed from the parameters of the fragment, and the resulting incapacitation is determined from the size of the hole made in the various tissues encountered. COMPUTERMAN estimates the level of incapacitation based on the levels of functioning present in the four limbs at specified time intervals after wounding and on their importance to specific missions.
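The straight-line shot-line constraint described above can be illustrated with a toy traversal: march along the segment from entry to exit and record which tissue regions it crosses. Everything here is invented for illustration (the sampling scheme, the lookup function, and the tissue geometry); the actual COMPUTERMAN model does this per dimensioned cross-section.

```python
def tissues_along_shotline(entry, exit_pt, tissue_at, samples=100):
    """March a straight shot-line from entry to exit (COMPUTERMAN-style
    straight trajectories), sampling the tissue type at evenly spaced
    points and recording each region in traversal order."""
    crossed = []
    for i in range(samples + 1):
        t = i / samples
        point = tuple(a + t * (b - a) for a, b in zip(entry, exit_pt))
        tissue = tissue_at(point)
        if tissue and (not crossed or crossed[-1] != tissue):
            crossed.append(tissue)
    return crossed

# Toy lookup: tissue type depends only on depth (y) through the torso
def toy_lookup(p):
    x, y, z = p
    if y < 3.0:
        return "muscle"
    if y < 12.0:
        return "lung"
    return "muscle"

print(tissues_along_shotline((0, 0, 0), (0, 15, 0), toy_lookup))
```

The limitation the article criticizes is visible even in this sketch: a straight segment through a fixed standing geometry cannot represent deflected trajectories, articulated postures, or intervening body sections.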
ORCA attempts to be a more comprehensive model for estimating incapacitation from a number of classes of body injury. ORCA does include a far wider range of injury mechanisms, extends the measure of incapacitation beyond the four limbs, and uses a more detailed model of the human body. Unfortunately, ORCA still contains, as its ballistic insult subroutine, a refined version of the flawed COMPUTERMAN; because of this, the current ballistic wounding model is the same as COMPUTERMAN’s. The ORCA model proposes several metrics that attempt to evaluate the impairment caused by injuries to the body; for example, the Weighted Task Average Impairment (WTAI) metric provides the supposed percent reduction of impaired tasks relevant to a specific activity or job. Another metric, Job Impairment (JI), is used to determine if an average human can successfully perform the totality of tasks that in aggregate constitute a specific job, for example infantry rifleman, vehicle crewman, helicopter pilot, etc. ORCA is compromised by a strong reliance on adaptations of the previously flawed COMPUTERMAN models and EKE/AKE methodology, a failure to fully appreciate the enormous variety of stochastic variables inherent in trying to predict the potential incapacitation of a human, and an excessive averaging of measurements leading to loss of data fidelity (with too many fuzzy data points and gross averaging of physiological responses, a hit from a .22 LR begins to look similar to a hit from a .338 Lapua Magnum).
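A WTAI-style metric is, at its core, a weighted average of per-task impairment values. The sketch below shows only that averaging structure; the task names, impairment fractions, and relevance weights are hypothetical, not ORCA’s actual task lists or weightings.

```python
def weighted_task_average_impairment(impairments: dict, weights: dict) -> float:
    """WTAI-style metric: weighted average of per-task impairment
    fractions (0.0 = unimpaired, 1.0 = fully impaired), with each
    task weighted by its relevance to a specific activity or job."""
    total_weight = sum(weights.values())
    return sum(impairments[task] * w for task, w in weights.items()) / total_weight

# Hypothetical rifleman tasks with invented impairments and weights
impair  = {"aim": 0.40, "move": 0.10, "communicate": 0.05}
weights = {"aim": 3.0,  "move": 2.0,  "communicate": 1.0}
print(round(weighted_task_average_impairment(impair, weights), 3))  # -> 0.242
```

The article’s criticism of excessive averaging is easy to see in this form: very different injury profiles can collapse to similar single-number scores once everything is averaged together.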
JSWB-IPT:
The U.S. Joint Service Wound Ballistic Integrated Product Team (JSWB-IPT) was founded in 2002 following increasing complaints about the poor performance of issue 5.56 mm ammunition in CQB from U.S. SOF units engaged in OEF (Operation Enduring Freedom) combat operations. Feedback from the field suggested there was a larger performance differential between small-arms systems than predicted by the Army’s standard AKE and ORCA models of terminal performance evaluation; specifically, many combat AARs suggested that AKE/ORCA predicted better terminal performance from issued 5.56 mm ammunition than actually occurred in real-world combat engagements. To attempt to sort through these issues, the JSWB-IPT brought together experts from numerous communities, including "military users, law enforcement, trauma surgeons, aero ballisticians, weapon and munitions engineers, and other scientific specialists". Over the next 4 years, researchers with the JSWB-IPT made more than 10,000 gelatin test shots at 4-6, 100, and 300 meters using eight calibers in 53 different combinations of cartridges and weapons, at a cost of $6 million.
Disagreements between organizations using static versus dynamic methodologies soon became fractious. USSOCOM, NSWC Crane, the USMC, the Department of Homeland Security (DHS), the FBI BRF, and almost all other U.S. LE agencies utilized some form of static testing in 10% gelatin. Alternatively, historical Army testing, current ARL testing, the U.S. Secret Service, and much of NATO utilized dynamic test methods in 20% gelatin. Beyond the gelatin mix ratio controversy, ARL took issue with the use of static damage-based metrics to evaluate projectile performance and insisted the dynamic method was the only official Army “lethality” model, despite its failure to fully reflect actual combat derived wound ballistic findings. In contrast, military organizations and LE agencies with strong, scientifically based ammunition terminal performance testing programs have conducted reviews of their shooting incidents with much the same results as those originally reported by Gene Wolberg of San Diego PD in the IWBA proceedings--that there is an extremely strong correlation between properly conducted and interpreted 10% ordnance gelatin static laboratory studies and the anatomic and physiological effects of projectiles in actual human shooting incidents. Likewise, the last several years of OCONUS military operations have provided a tremendous amount of combat derived terminal performance information. When the JSWB-IPT analyzed this information in aggregate, the test protocol found to most closely correlate with actual shooting results--and which became the agreed upon JSWB-IPT “standard”--evolved from the one first developed by Dr. Fackler at LAIR in the 1980s, promoted by the IWBA in the 1990s, and used by most reputable wound ballistic researchers, as noted above: static 10% gel testing.
Somehow, in the 6 weeks between 12 April 2006, when the 331-page JSWB-IPT final report draft was submitted to U.S. Army higher command levels for review, and 23 May 2006, when the JSWB-IPT results were publicly unveiled by the Army, the paper had shrunk to a mere 19 pages, the JSWB-IPT major findings had been erased from the document, and the tenor of the report had been utterly altered…
Partly in response to the truncated JSWB-IPT results and dissatisfaction with the U.S. Army’s continued use of the flawed ORCA computer modeling, the PM of USMC Infantry Weapons instituted a Phase I ammunition study using FBI BRF testing methodology. The use of the FBI protocols allowed the testing to proceed rapidly and cost efficiently. The Marine Phase I ammunition study, dated 11 August 2006, contrasted U.S. military issue 5.56 mm ammunition with FBI 5.56 mm barrier blind ammunition, as well as the 6.5 Grendel and 6.8 SPC intermediate calibers. In this testing, the 6.8 mm 115 gr OTM performed best, followed by the FBI 5.56 mm 62 gr bonded JSP and the 6.5 Grendel 120 gr OTM; none of the military issue 5.56 mm ammunition performed as well, especially when assessing intermediate barrier capability and initial upset depths. This further confirmed the results discovered by the JSWB-IPT.