*Originally published to manufacturingreality.org
“Morality is a test of our conformity rather than our integrity.”
-Jane Rule
If you are anything like me, and god help you if you are, then you have spent a considerable amount of time observing the people around you and trying to figure out why they do the things they do. Why does one person develop idiosyncratic personal rituals while another doesn’t? What causes one person to behave in a specific way in response to certain stimuli, while another reacts completely differently to the exact same situation? Questions like these, I have found, can be exceedingly difficult to quantify for the purposes of explanation and understanding. But when has that ever stopped humans from trying?
In the current behavioral experiment that we have all been subjected to (whether we wanted to participate or not), I thought it might be a good idea to revisit some experiments conducted during the prior century to see if they might provide us with some context or insight that could help us understand the world we currently find ourselves in. Past is prologue, or so they say…
The Asch Conformity Study
In 1951, Solomon Asch was a professor of psychology at Swarthmore College in Pennsylvania. Asch spent the bulk of his career researching the influence that groups of people exert on the beliefs and opinions of individuals. In that particular year, Asch conducted an experiment that would serve as the foundation for his own work for many years to come. It has become known as the Asch Conformity Study, and it is one of the better known behavioral experiments of the 20th century; in the circles that pay attention to those sorts of things, at least. It may be less well known to the general public.
Asch was keen to uncover the extent to which a group is able to influence an individual, even when the position the group presents as agreeable is one the individual would view as disagreeable. So he created a research project to study this phenomenon, the premise of which was fairly simple. Each test group would consist of 8 people: 7 would be “actors,” and one would be the actual test subject of the study. The actual test subject was led to believe that the “actors” were genuine participants as well. Test groups would be presented with two images. The first image contained a single straight line. The second image contained three straight parallel lines of differing lengths, one of which matched the exact length of the line from the first image. Each image pair was considered a trial, and each test group would complete 18 trials.
During each trial, the group was asked to identify which of the three lines in the second image matched the line in the first image. Participants answered verbally, one at a time, with the real test subject always answering last. For the first 2 trials in each group, the “actors” were instructed to give the correct answer. On the third trial, they were instructed to give an incorrect answer. The “actors” would then give incorrect answers on 11 of the remaining 15 trials.
These 12 trials where the “actors” were instructed to supply incorrect answers were the critical focus of the study: to observe how individuals reacted and performed with or against the influence of the group. There was also a control group that completed the same task, but its members were tested individually with only a researcher present and no group of “actors” to exert influence. More on this group a little later. Volunteers were recruited from the area around Swarthmore, 50 participants were selected from the applicants, and the study was underway.
If you are new to the discipline of psychology, it is important to understand that there are many interpretations of, and opinions about, the data produced by this study, and to a larger extent the data of all such studies. I choose not to bore you with any of that mental masturbation, so let’s look at some of the findings.
Of all the answers given by the participants when the “actors” presented an incorrect answer, 36.8% conformed to the incorrect answer. 5% of the participants always conformed to the incorrect answer, whereas 25% of participants always defied the group. That, of course, means 75% of the participants conformed at least once, recording at least one incorrect answer. Compared with the control group mentioned earlier, which returned an error rate of less than 1%, the raw data would seem to suggest that the group’s influence was considerable.
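For anyone who wants to see the arithmetic laid out, here is a minimal sketch in Python of how figures like these are derived from raw trial records. The per-participant data below is entirely invented for illustration; it is not Asch’s actual data, and the participant labels are placeholders.

```python
# Illustrative only: hypothetical records of the 12 critical trials for a
# handful of imaginary participants (1 = conformed to the group's incorrect
# answer, 0 = answered correctly). These are NOT Asch's actual data; they
# simply show how the percentages quoted above are computed from raw responses.
participants = {
    "P01": [0] * 12,                              # never conformed
    "P02": [1] * 12,                              # always conformed
    "P03": [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0],  # conformed sometimes
    "P04": [0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0],  # conformed sometimes
}

total_responses = sum(len(r) for r in participants.values())
conforming_responses = sum(sum(r) for r in participants.values())

n = len(participants)
always_conformed = sum(1 for r in participants.values() if all(r))
never_conformed = sum(1 for r in participants.values() if not any(r))
conformed_at_least_once = sum(1 for r in participants.values() if any(r))

print(f"Conforming responses:    {conforming_responses / total_responses:.1%}")
print(f"Always conformed:        {always_conformed / n:.1%}")
print(f"Never conformed:         {never_conformed / n:.1%}")
print(f"Conformed at least once: {conformed_at_least_once / n:.1%}")
```

Run against Asch’s actual trial records, tallies of exactly this kind are what yield the 36.8%, 5%, 25%, and 75% figures quoted above.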
The actual test subjects were also interviewed at the study’s conclusion to gain more insight into why they may have chosen to give an incorrect answer during one of the critical trials, or whether they were even aware that the answer they were giving was incorrect. Asch found, during these interviews, that in 50% of the instances where a participant conformed to the group’s incorrect answer, the participant was in fact unaware that the answer was incorrect. Some of the participants reported that the pressure of the group’s incorrect answer caused them to question their own interpretation of what the correct answer was. Some participants also stated plainly that they knew the group’s answer was incorrect, but that they did not want to stand out from the group.
The Asch Conformity Study did manage to accomplish one thing quite well: it effectively proved that the phenomenon of “peer pressure” is not only real, but that its consequences can be measured. Asch would continue this work in later studies, some of which set out to determine what the measurable thresholds of this phenomenon might actually be. Asch’s own opinion of the findings of this first study still echoes across the decades: “that intelligent, well-meaning young people are willing to call white black is a matter of concern.”
The Stanley Milgram Experiment
1961 was the beginning of a tumultuous decade, both in America specifically and around the world in general. In April of that year, the recently apprehended Nazi war criminal Adolf Eichmann was put on trial in Jerusalem for crimes relating to his position in the government of Nazi Germany and his complicity in the organization and execution of the Holocaust. Back in the United States, professor Stanley Milgram was teaching psychology at Yale University. Milgram, like many people around the world at that time, found himself mentally grappling with one of the most perplexing existential questions humans have encountered: what allows seemingly good people to follow the direction of demonstrably evil people?
There was a prevailing hypothesis in some academic circles that there could potentially be some sort of genetic component of the German people that made them more susceptible to the influence of the Eichmanns and Hitlers and Mengeles of the world. Apparently, nobody at the time saw the implicit irony in this notion; or if they did, they weren’t very vocal about it. Milgram himself was convinced enough of this idea that he developed an experimental model for the purpose of testing the hypothesis. His intent was to use a small, randomly selected group of Americans to prove his concept, so that it could then be tested on larger groups of people of German heritage in an attempt to understand the psychology of genocide.
The study involved 3 roles: an “experimenter” who would conduct each test session; a “teacher,” a volunteer who would be the actual subject of the study; and a “learner,” who was presented to the “teacher” as another volunteer but was actually instructed to perform a specific role during the study. The “teachers” were informed that the purpose of the experiment was to test the effects of punishment on the human ability to memorize content. I find myself wondering if that sounded as crackpot in 1961 as it does today.
The “teacher” and “learner” were separated so they could not see each other, but could still communicate with one another. The “learner” was strapped into a contraption not dissimilar to an electric chair, and the “teacher” was given a test shock prior to the trial commencing to demonstrate what the “learner” would be experiencing. The content in the study consisted of paired words. For each pair, the “teacher” would speak the first word aloud and the “learner” was responsible for selecting the correct matching word from a choice of 4 possible answers. If the “learner” gave an incorrect answer, the “teacher” was instructed to press a button that would deliver an electric shock to the “learner” as punishment.
The first shock would be administered at 15 volts, and each successive incorrect answer by the “learner” would increase the voltage of the shock by 15, up to a maximum of 450 volts. The generator providing the electricity was visible to the “teacher,” and clearly marked with designations like “slight shock,” “severe,” and “danger.” At least, that is how the whole thing was supposed to work, as far as the “teacher” understood it.
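The escalation is easy to picture as a simple table. The sketch below just enumerates the thirty 15-volt steps up to the 450-volt maximum; the verbal labels attached to each range are my own rough simplification for illustration, not an exact reproduction of the generator’s markings.

```python
# Sketch of the shock schedule described above: 15-volt increments up to a
# 450-volt maximum, i.e. 30 switches in total. The verbal labels below are a
# rough simplification of the generator's markings, not an exact reproduction.
def shock_label(volts: int) -> str:
    if volts <= 120:
        return "slight shock"   # assumed threshold, for illustration only
    if volts <= 300:
        return "severe"         # assumed threshold, for illustration only
    return "danger"

schedule = [(level, level * 15) for level in range(1, 31)]  # (switch, volts)

for level, volts in schedule:
    print(f"Switch {level:2d}: {volts:3d} V ({shock_label(volts)})")
```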
In reality, nobody received any shocks, excepting the demonstrations for the “teachers” before the trials began. “Learners” were instructed to give incorrect answers so that the “teachers” would be forced to deliver the electric-shock punishment, up to and including the maximum shock at 450 volts. “Learners” were also instructed to cry out in agony as the voltage increased, to beg the “teachers” to stop the trial, or to put on any other manner of performance they thought might provoke an emotional reaction from the “teacher,” including not responding at all, as if they had lost consciousness or worse. If the “teacher” hesitated at any point, the “experimenter” was instructed to step in and “prod” them. 4 “prods” were devised; each could be used only once, and they had to be used in succession. The “prods” were:
- Please continue.
- The experiment requires you to continue.
- It is absolutely essential that you continue.
- You have no other choice; you must go on.
If the “teacher” continued to protest after the 4th prod, the trial was immediately stopped. Additional “prods” were developed and used if the “teacher” made specific comments in response to the protestations of the “learner.”
The Milgram experiment is another famous study in human psychology. It has received its fair share of criticism over the years, both for the methodology employed and for the questionable moral implications that accompany conducting this type of research on unsuspecting people. But it may be better known for the actual results it produced. If you have never stumbled across this study before, ask yourself (if you were in the role of “teacher”): when would you have called off the experiment? Consider in your answer that you would have tested the shock punishment beforehand, so you know that it is real.
The results were absolutely stunning. 65% of participants delivered the maximum 450-volt shock at least once (emphasis mine). 100% of participants reached the level of 300-volt shocks. All of the participants displayed physical signs of discomfort at some point during the trial, but every single one also continued when prodded. 100% of the participants paused the experiment at least once to question it.
Milgram’s expectation prior to the study was that the American group would serve as a proving ground to facilitate greater research into what predisposed the German people to support (or at the very least not question) the atrocities their government had perpetrated. What he found, and exposed for the entire world to see, is that it was not a problem unique to the German condition. It was instead a problem with the human condition. The results were so profoundly disturbing to the academic world that the German-focused study was shelved and never conducted.
Milgram himself concluded as much in his 1974 article The Perils of Obedience:
“The legal and philosophic aspects of obedience are of enormous importance, but they say very little about how most people behave in concrete situations. I set up a simple experiment at Yale University to test how much pain an ordinary citizen would inflict on another person simply because he was ordered to by an experimental scientist. Stark authority was pitted against the subjects’ [participants’] strongest moral imperatives against hurting others, and, with the subjects’ [participants’] ears ringing with the screams of the victims, authority won more often than not. The extreme willingness of adults to go to almost any lengths on the command of an authority constitutes the chief finding of the study and the fact most urgently demanding explanation. Ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process. Moreover, even when the destructive effects of their work become patently clear, and they are asked to carry out actions incompatible with fundamental standards of morality, relatively few people have the resources needed to resist authority.”
The essay is an excellent read, if you can make the time for it; I highly recommend it. I don’t know about you, but I see a lot in this one. I see a lot of my own life reflected in it. It is reminiscent of situations I have encountered where the prevailing mentality ran against my instincts or even just plain common sense. What do you think? Does it sound like anything you’ve ever experienced?
The Stanford Prison Experiment
This may be the most famous of the three studies that I chose to focus on. It has certainly been the subject of much recent debate and controversy, as various players in the entertainment industry have unsuccessfully attempted to tell the story more than once over the last decade.
The Stanford Prison Experiment, as it has become known, was conducted in 1971 by Philip Zimbardo, then a professor of psychology at Stanford University in California. Funding for the study was provided by the United States Office of Naval Research (ONR). The intent of the research team was to investigate the psychological effects of perceived authority and power, primarily as they related to the dynamics between dominant and subordinate groups; specifically, the relationship between inmates and their guards. ONR was hopeful that the data the study generated could aid in understanding those dynamics to the benefit of the armed forces.
This may also be one of the most widely criticized behavioral studies ever conducted, given that the parameters set up to govern the study didn’t strictly adhere to the scientific method, no control group was devised or utilized, and some of the participants have since openly admitted to attempting to “game” the results. Not to mention the fact that the individual overseeing the entire project was also an active participant. These are all extremely valid criticisms; however, I do not intend to argue for or against any of them. There are more important lessons to learn here.
The study was designed to run as a two-week live simulation. Volunteers were solicited from the local population and then further vetted in order to weed out any individuals with criminal backgrounds, evident psychological pathologies, or medical conditions that could pose potential problems. From the pared-down pool of applicants, a group of 24 was selected, all of whom shared what was, for the time, a typical middle-class upbringing. Participants were then randomly divided into two teams of 12 via coin flip, with one team to fulfill the role of “prisoners” and the other the role of “guards.” 9 participants on each team were active in the study at any given time, with the remaining 3 on each side serving as alternates in case anyone opted out of the study, since participation was voluntary.
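To make the group assignment concrete, here is a minimal illustrative sketch of the procedure as described above: 24 screened volunteers split into two teams of 12 at random, each team carrying 9 active participants and 3 alternates. The participant IDs are invented placeholders, not anything from the actual study.

```python
import random

# Illustrative sketch of the assignment described above: 24 screened
# volunteers split into two teams of 12 at random (the coin flip), with 9
# active participants and 3 alternates per team. Participant IDs are
# invented placeholders, not the study's real identifiers.
volunteers = [f"volunteer_{i:02d}" for i in range(1, 25)]
random.shuffle(volunteers)

guards, prisoners = volunteers[:12], volunteers[12:]

roster = {
    "guards":    {"active": guards[:9],    "alternates": guards[9:]},
    "prisoners": {"active": prisoners[:9], "alternates": prisoners[9:]},
}

for role, groups in roster.items():
    print(f"{role}: {len(groups['active'])} active, "
          f"{len(groups['alternates'])} alternates")
```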
In the basement of Jordan Hall, a makeshift penal facility was constructed. It consisted of three “cells” capable of holding three “prisoners” each, a “yard” area for exercise (without equipment, mind you), and various facilities set up for the “guards” for when they were not actively guarding. There was even a “solitary confinement” area crafted from a spare closet. “Prisoners” were to be confined to the facility for the duration of the study. The “guards” would rotate in and out on three 8-hour shifts. “Guards,” however, were permitted to leave the facility at the end of their shift. Zimbardo himself would serve as “superintendent,” while one of his assistants fulfilled the role of “warden.”
One day prior to the study commencing, the “guards” were gathered together for instruction. They were informed that they were expected to exert psychological control over the “prisoners,” but they were expressly forbidden from inflicting any physical harm or withholding food or drink from the “prisoners.” They were encouraged to instill a sense of powerlessness in the “prisoners.” Other protocols were also communicated at that time, such as the “prisoners” only being referred to by their institutional number designation. The “guards” were all issued matching clothing to serve as uniforms, wooden batons, and mirrored sunglasses to prevent eye contact. The goal of the guard role, whether expressly stated during this meeting or not, was to be the instrument of dehumanization and deindividuation that Zimbardo could use to study the resulting behavior of the “prisoner” group. Let that last sentence sink in before you continue.
The “prisoners” had agreed ahead of time to go the whole nine yards, and the study began when each of them was “arrested” at home and transported to the local police station in Palo Alto for processing. There they were fingerprinted. Their mug-shots were taken. They were strip-searched, deloused, issued their “prisoner” uniforms, and then transported to Jordan Hall to begin serving their sentence. Day 1 of the experiment could be termed relatively uneventful. Day 2 is when things started to go off the rails.
While I could begin reporting the events that transpired over the course of this experiment, I choose not to. Some of you are probably already familiar with this story, as am I. For those of you who aren’t, and who for whatever reason actually want to learn more about the brutality it generated, you can start here. In my mind, words do an incredibly insufficient job of explaining what actually happened. Zimbardo shut the study down on Day 6. One participant playing the role of “prisoner” had to be removed from the study only 35 hours in, after he allegedly began “acting strangely.” Events escalated quickly after that. After the experiment had been halted, and evaluations were performed on the participants, three of the “guards” were purportedly determined to have displayed “genuinely sadistic tendencies” in the course of fulfilling their roles in the study. Some of the participants in the “prisoner” role reported afterward feeling helpless and dehumanized, even though they knew the whole time that their participation was voluntary.
Their participation was voluntary… both “guards” and “prisoners.”
The entire time.
So What Did We Learn Today?
“No man has any natural authority over his fellow man.”
-Jean-Jacques Rousseau
It would appear that, as a general observation, the crowd does indeed hold some sway over the individual; but not over everyone, and not all of the time. There does appear to be a significant enough portion of the population that will always trust themselves to interpret things accurately, regardless of what the mob may be screaming at them. And thank god for that, right? I’d rather not think about what manner of untenable situations we might find ourselves in without them. So maybe that is as it should be.
There also appears to be a very deeply ingrained obedience to authority evident in the general population, and particularly in the so-called Western world. Granted, all three of these studies were conducted within the borders of the United States, so perhaps the argument can be made that this is more of an American phenomenon than a global one. We can have that discussion. But when a majority of ordinary people, subjected to a perceived authority that asks them to violate their own morality to complete a task, chooses to defer to that authority instead of following their own moral compass; and when the behavior of the majority can be demonstrated to be largely malleable (whether by authority or any other force); well, that has the potential to create a very dangerous environment for the individual. One where the very existence of the individual could even be perceived as a threat. I, for one, would not welcome that kind of a world. I like being me. I don’t want to be like you.
And I think it is still ok to say that.
But maybe I would have been in the 25% that Asch found always went against the crowd. That’s always a possibility.
More important than what I think, though, is what you think. What did I get right? What did I get wrong? What didn’t receive enough attention, and what got too much? Let me know in the comments down below.
[Editor’s Note: For those who may be curious, this article was originally researched, written and published many months before the current public debate surrounding the concept of Mass Formation Psychosis.]