In the late 1800s and early 1900s
It was common knowledge that rain could be produced by expert rainmakers. The practice of rainmaking typically involved the rainmaker, often in a tall tower, mixing together a secret concoction of chemicals in a caldron or evaporating tank. The mixing of chemicals produced vapor – and sometimes explosions – that were said to attract rain.
It was so widely accepted that rainmaking significantly affected the weather that farmers and even city governments hired rainmakers to produce rain in times of drought. Indeed, the practice was quite effective in the eyes of the public. A hired rainmaker, sometimes called a “moisture accelerator,” would come to town and build a tower, the construction of which would often draw a crowd. Once work began, privacy was required, but explosions and mutterings could be heard coming from the direction of the tower. And then…rain would pour from the sky! Eureka!
In truth, there was a correlation between rainmakers coming into town and rainfall. However, it was not the mystical powers of the probably poisonous vapors the rainmaker produced that caused the rain. Most rainmakers simply timed their contracts to begin work right around the start of the region's natural rainy season. With their own eyes, people saw what appeared to be a causal link between the rainmaker's tactics and the rainfall, and the rainmaker's reputation was bolstered. Even when attempts failed to produce rain, reputations were rarely tarnished. We strongly believed in the legitimacy of the profession, and we saw what we wanted to see.
Myth: An experienced expert can cause rainfall through clandestine methods.
Scientific Finding: Confirmation bias is a psychological phenomenon that leads people to attend strongly to evidence that confirms their existing beliefs and to disregard evidence that disconfirms them.
Scientific Advancement Requires
A healthy level of skepticism and scrutiny toward what is presented to us as fact. The practice of rainmaking likely hindered our scientific understanding of atmospheric and meteorological phenomena because of our natural tendency to be drawn to details that confirm our existing beliefs and to disregard disconfirming evidence.
Rainmakers were successful because they were effective salespeople and promoters. There was never empirical evidence to support the methodologies of rainmaking; the methods did not produce the results that were claimed, which is why the profession is no longer practiced. Today, while the weather app on your smartphone does not come with a 100% accuracy guarantee, our scientific understanding of what actually influences rainfall has allowed us to develop instruments that assess atmospheric conditions with increasing accuracy.
It is easy to look back and chuckle at the thought of a wizard in a tower claiming to cause rain and think “how could we possibly believe that nonsense?” We have learned so much and come so far since then, right? In many ways, the answer to this question is yes. In other ways, we give substantial credence to the “moisture accelerators” in the world today.
In the research and application of psychological principles, the variables of interest are most often characteristics, attitudes, thoughts, feelings, and behaviors. In other sciences, measurement is often achieved by using physical devices such as a yardstick or thermometer. As physical devices are of no use in measuring a person’s thoughts or attitudes, the most critical tool for behavioral science is psychological assessment.
Think About It.
When you buy a thermometer, you probably tacitly assume that the device has gone through some sort of testing and calibration process to ensure accuracy. After all, what good is a thermometer that does not report the temperature accurately? For the instrument to produce the results it claims to produce, though, specific methodologies must be followed in its design and testing. It would be irresponsible for a company to sell a thermometer that was not manufactured according to quality-assured, evidence-based methodological guidelines.
This is true for instruments of all kinds, including psychological assessments intended for applications in the workplace. Such instruments commonly assess cognitive ability, personality, values, and leadership characteristics. These assessments are highly valuable to organizations because individual differences like these can have a substantial impact on the organization’s performance, culture, image, morale, and so forth. For example, research has demonstrated that emotional stability and conscientiousness positively predict performance across a wide range of jobs, roles, and contexts. Many other individual differences have significant implications for specific contexts, such as leadership effectiveness and professional development.
For the benefit of the employee, the employer, the organization, and beyond, it is critical to be able to accurately assess these kinds of important individual differences. Like the thermometer, though, we cannot have confidence in the results of an assessment that has not been subjected to a rigorous methodological process of development and validation. As some of you may have experienced, you can go online right now and discover information about your personality based on which Disney princesses you like. Unfortunately, the results of that assessment tell you far less about your personality traits and their implications than your Netflix account tells you about what you might want to watch next.
I use the history of rainmaking and the Disney princess assessment as examples of methods whose lack of scientific credibility is obvious. Unfortunately, that lack of credibility is not always so easy to detect, especially if we only attend to confirming evidence.
In academic research, adherence to stringent methodological standards of assessment development and validation is enforced by the peer-review process, making the publication of invalid or unsubstantiated assessments highly unlikely. No such process exists in typical real-world practice. When I first began my career as a consultant in organizational and leadership development, I was shocked at how often I was introduced to a new assessment and (with a healthy level of skepticism) looked for the technical manual, only to find that there was no technical manual, no research-derived statistics, and often not even a veiled attempt to demonstrate any semblance of reliability and validity.
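To make "research-derived statistics" a little more concrete, here is a minimal sketch (not drawn from any particular instrument or from our own tools) of one number a technical manual would typically report: Cronbach's alpha, a common estimate of an assessment's internal-consistency reliability. The item responses below are made up purely for illustration.

```python
# Illustrative only: hypothetical responses, not data from any real assessment.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Estimate internal-consistency reliability for a respondents x items matrix."""
    k = item_scores.shape[1]                          # number of items on the scale
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of total scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: six respondents answering a four-item scale (1-5 ratings).
responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 5],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

A statistic like this is only one small piece of what a credible technical manual should contain, but if even this much is missing, skepticism is warranted.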
Strap on Your Skeptical Goggles.
We do not hire rainmakers anymore; we should not be using the equivalent of palm reading as psychological assessment in the workplace. If the history and development of the assessment you are considering read like a tale of traveling through foreign lands and seeing visions in the stars, start looking for disconfirming evidence. Not only is it irresponsible to sell an instrument that has not been legitimately tested and calibrated; it is disgraceful to the entire profession for a consultant who advises people on their business and livelihood to push such an instrument.
At Xecutive Metrix, our consultants are behavioral scientists, and any tool we consider is highly scrutinized (and intensely debated) before it is applied to any of our client solutions. For more information, contact Dr. John-Luke McCord.