Early in World War II, the U.S. introduced a new helmet design that was supposed to greatly reduce head trauma in battle. Almost immediately, field hospitals were inundated by a surge of new patients with head wounds. Alarmed by the unanticipated increase, officials decided to recall the new helmets.
Meanwhile, following a large number of bomber losses to enemy anti-aircraft fire and fighter activity, the Navy’s Center for Naval Analyses initiated an urgent study of its surviving bombers, examining battle damage to determine where bombers were being hit most often in combat. Like epidemiologists scouring the DNA of patients who’d miraculously survived a near-always fatal disease for clues to a cure, the Navy endeavored to use the intelligence gained by studying the surviving planes, recording the location of every bullet hole and shrapnel scar, to guide it in building more survivable aircraft.
They began by creating a map of the places where the data showed the planes were most vulnerable. They noticed that the surviving planes took the most damage in the wingtips, the horizontal stabilizers, and a narrow section of the central fuselage, and theorized that by making structural improvements and bolstering the armor in those areas, they could improve the survivability of future aircraft.
Ironically, both the decision to recall the new helmets and the decision to armor the known locations of battle damage on surviving bombers were wrong.
So wrong, in fact, that the decisions became textbook examples of a common phenomenon that came to be known as “survivorship bias”: drawing erroneous conclusions from an incomplete data set.
In the case of the stricken bombers, statistician Abraham Wald convinced the Navy to view its results from a different perspective. He successfully argued that, contrary to initial assumptions, the maps charting battle damage were actually pointing to areas of strength, not weakness. The fact that all those bombers returned after receiving damage to the areas highlighted in the maps meant those areas were already sufficiently reinforced. Instead, he demonstrated, the areas where no damage was apparent could be inferred to be the points of weakness requiring additional armor. After all, the fact that not a single plane had returned with damage to those areas suggested that damage there was not survivable.
Similarly, when the new helmets were introduced to the battlefield, the sudden increase in head injuries, it was later concluded, was not a symptom of faulty helmets, but evidence of the helmets working just as advertised. For the first time, patients who would never have survived their wounds before were showing up in the field hospitals with a second chance at life. The increase was evidence that lives were being saved.
Survivorship bias isn’t limited to battlefield conditions. In fact, we encounter it nearly every day. When choosing stocks, for instance, it’s not unusual for a broker to look at all the top performers in a sector and make generalizations about that sector’s performance. However, he or she may have overlooked all the stocks within the sector that had gone belly up. As a result, it would be easy to conclude that a sector in steep decline was actually healthy.
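To make the stock example concrete, here is a minimal sketch with invented numbers (all figures are hypothetical, chosen only for illustration): averaging returns over only the surviving stocks paints a healthy picture, while including the failed stocks reveals the decline.

```python
# Hypothetical annual returns for ten stocks in one sector.
# Stocks that went belly up are recorded as -1.0 (a total loss).
all_returns = [0.12, 0.08, 0.15, 0.10, -1.0, -1.0, -1.0, 0.09, -1.0, 0.11]

# The survivorship-biased view: only stocks still trading are counted.
survivors = [r for r in all_returns if r > -1.0]

biased_avg = sum(survivors) / len(survivors)    # looks healthy
true_avg = sum(all_returns) / len(all_returns)  # reveals the decline

print(f"Average return, survivors only: {biased_avg:+.1%}")  # about +10.8%
print(f"Average return, full sector:    {true_avg:+.1%}")    # about -33.5%
```

The only difference between the two averages is the denominator: the biased figure silently drops the failures from the sample, which is exactly the mistake Wald caught in the bomber data.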
In associations, survivorship bias can show up when determining what’s working and what’s not with your event. If you only ask people who attend your events, chances are you’ll get a pretty good glimpse of their motivations, and be able to tailor next year’s event to keep them coming back. But what about all those who aren’t attending your events? What’s holding them back? A deep dive into their motivations could be the first step in creating an event with more universal appeal.
Want to understand the total addressable audience for your association’s event? We can help. Email Jack Macleod, Chief Growth Officer at 360 Live Media, today at jack@360livemedia.com.