table of contents:
just inefficient, or actually counterproductive?
In the previous chapters
Why the reticence?
Nothing to brag about
We have addressed the lack of efficiency of sustainability communications. That is already somewhat calamitous, considering the importance and urgency of promoting new attitudes towards the issue. A lack of efficiency implies insufficient progress towards a goal. In this chapter we will show how ineffective communication can actually be counterproductive, moving us away from the objective.
Before we start exploring the solutions, this chapter will first look at a few examples of communicators putting their foot in it. We will be referring to recent research that exposes behaviours that you may find counter-intuitive, even shocking.
We have seen in “Chapter 1: What’s Amiss?” how explicit communication is only efficient when the conditions are present to cause the recipient to engage System 2 thinking. This could be because the subject is of high interest to the recipient, or because he is particularly attentive for other reasons. But what happens when such conditions are not present? Instinctively, one would assume that the communication would merely be wasteful, but nonetheless innocuous.
Don’t bet on it. Contrary to the popular saying ‘all publicity is good publicity’, explicit communication can easily become counterproductive if the recipient does not receive it in the way that is intended. In his book ‘Language Intelligence’, Joseph Romm explains how pernicious false myths arise and propagate:
‘When the Centers for Disease Control and Prevention put out a flier to debunk myths about the flu vaccine, it repeated several myths, such as ‘The side effects are worse than the flu’, and labeled them false. A study of people given the flier found that ‘within 30 minutes, older people misremembered 28 percent of the false statements as true’. Worse, ‘three days later, they remembered 40 percent of the myths as factual’. And they identified the source of their erroneous beliefs as the CDC itself.’
‘...one of the brain's subconscious rules of thumb is that easily recalled things are true.’
An important challenge with verbal communication is that, beyond a certain threshold (and that threshold is low), it requires the recipient to engage in effortful thinking. Part of your audience will engage System 2 thinking and may take your message on board intact; part of your audience will let System 1 truncate and even distort the meaning of the message. The challenge is to balance the benefit of A) intact messages against the damage caused by B) truncated messages. If the message is not compelling enough to engage System 2 for a sufficiently large portion of your audience, you may find that the net effect of your communication is counterproductive. Although many of the simplified messages may not survive for long, those that do survive are stored in subconscious memory, from which they are difficult to ‘edit’ or dislodge.
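The trade-off described above can be sketched as a toy expected-value calculation. The probabilities and weights below are entirely hypothetical, chosen purely to illustrate how a message can turn net-negative once too few recipients engage System 2:

```python
def net_effect(p_system2, benefit_intact=1.0, damage_distorted=0.5):
    """Toy model of a message's expected net effect.

    p_system2        : fraction of the audience engaging System 2,
                       i.e. receiving the message intact (hypothetical).
    benefit_intact   : value of one intact message (arbitrary unit).
    damage_distorted : harm of one truncated/distorted message.
    """
    return p_system2 * benefit_intact - (1 - p_system2) * damage_distorted

# With half the audience engaged, the message pays off...
print(net_effect(0.5))   # 0.25 -> net positive
# ...but if only 20% engage System 2, the net effect flips sign.
print(net_effect(0.2))   # -0.2 -> net negative: counterproductive
```

The point of the sketch is only that the break-even fraction depends on the ratio of benefit to damage, which the communicator rarely controls.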
A complex message may not necessarily appear as such. For instance, it does not need to be particularly long. Just the negation of a concept adds one layer of complexity, even if the cognitive effort required may seem insignificant. A brief study, ‘No sign of quitting’, by researchers from Oxford, MIT and Yale, explains that we have a tendency to filter out the negation in a sentence. From the message ‘Don’t drink and drive’ you retain ‘drink’ and ‘drive’ but not ‘don’t’; from ‘No smoking’ you retain ‘smoking’; from ‘Just say no to drugs’ you retain ‘drugs’.
This is closely related to the ‘mere exposure’ effect, and to what psychologists call ironic process theory, popularly known as the ‘pink elephant’ effect. If you order someone ‘Do not, whatever you do, think of a pink elephant’, the person is powerless to prevent a pink elephant from popping into his mind. That is System 1 at work. Anybody who imagines he can consciously control this effect should try the pink elephant test.
The effectiveness of complex messages is unpredictable in the best of cases. In today’s world of media clutter and frenetic lifestyles, the challenge has ballooned and one must be brutally realistic about the limitations of verbal messages when dealing with mass audiences.
reverse priming effects
In Chapter 1 we talked about Dr Robert Heath's theories about our resistance to advertising messages. Research by Laran et al "The Curious Case of Behavioral Backlash" brings a fascinating insight into the consumers’ subconscious reactions to advertising messages, underscoring Dr Heath’s theories.
‘Five experiments demonstrate that brands cause priming effects (i.e., behavioral effects consistent with those implied by the brand), whereas slogans cause reverse priming effects (i.e., behavioral effects opposite to those implied by the slogan).’
Among their conclusions, they state that:
[…] ‘Slogans cause reverse priming effects and brands cause priming effects because people perceive slogans, but not brands, as persuasion tactics.’ […] ‘These findings provide evidence that consumer resistance to persuasion can be driven by processes that operate entirely outside conscious awareness.’ […] We suggest that priming effects are reversed when consumers perceive a marketing tactic as a source of persuasion. […] While virtually all marketing stimuli are persuasion tactics, consumers might perceive certain marketing stimuli, but not others, as persuasion tactics.
A particularly intriguing finding was that brands and slogans actually had opposite priming effects, rather than just different levels of priming effects. We will see in the next chapter why this finding is particularly relevant to this white paper. The researchers felt compelled to seek an explanation as to why brands are not perceived as a persuasion tactic:
[…] Brands are not treated as simply another tool in a marketer’s tool kit. For instance, brands are attributed humanlike personality traits and are treated like relationship partners with whom consumers develop emotional attachments and share commitments (Aaker 1997; Aaker, Fournier, and Brasel 2004). […] Few if any products are sold that do not have a brand name associated with them. A brand name may be a cue to prestige or quality, but because it is a generic feature, like price, that all products necessarily need to have, consumers may not perceive a brand as a persuasion tactic. […] Central to our hypothesis, the findings suggest that brands are perceived to be innocuous (no different from common sentences) but that slogans are perceived to be persuasion tactics.
Another significant finding was ‘unequivocal evidence that slogans elicit correction without any conscious intervention.’ Despite being short and simple, aiming squarely for our System 1, despite an emphasis on being catchy and memorable rather than explanatory and persuasive, even slogans raise barriers in our subconscious. How much room does that leave for more complex verbal communication?
Not everybody might have been aware of just how deep-rooted this defense mechanism actually is. Most of us are, however, deeply aware of the barriers going up when confronted with the typical smooth-talking second-hand car salesman:
Flattery by a salesperson can even produce automatic negative judgments (Main et al. 2007). It is plausible that these automatic negative effects occur because, like slogans, insincere flattery activates an automatic goal to resist persuasion. This view would predict that persuasion tactics activate a nonconscious goal to correct for bias that not only leads to a reverse priming effect on behavior but also underlies automatic negative effects on attitudes.
polarisation despite education
There is a common denominator to all the examples above: if your communication requires System 2 processing, you’d better make sure that it will trigger System 2 processing. When your audience diverts a System 2 message to System 1 thinking, you have lost control of your message. The consequences can be unfortunate, including turning the meaning of the message on its head.
This act of downgrading the message can be a reflex or it can be deliberate. By studying the deliberate action, a team from Yale (Motivated Numeracy and Enlightened Self-Government, Kahan et al, Yale Law School, 2013) managed to document one of the most perplexing social phenomena. It goes a long way to explaining the polarization of opinions on big issues. Since sustainability is certainly an issue that polarizes opinions, this research is particularly relevant.
Many of us have been scratching our heads about the question that lies at the centre of their research: ‘Why does public conflict over societal risks persist in the face of compelling and widely accessible scientific evidence?’
Spontaneously, one would be inclined to think that opinions converge towards consensus as the audience’s knowledge and intelligence increase, or its ‘Numeracy’ as it was defined in this research: a measure of the ability and disposition to make use of quantitative information.
The findings confirmed the assumption: the higher the Numeracy, the closer one got to a consensus. But only on some types of issues. On other types of issues, exactly the opposite happened: the higher the Numeracy, the higher the polarization. The contrast between the two groups was almost comically obvious. What distinguished them? The first group of issues were strictly scientific, devoid of implications beyond the world of medicine. The second group of issues were known to be politically polarized, such as climate change or gun control.
This is what was happening: ‘when a policy-relevant fact becomes suffused with culturally divisive meanings, the pressure to form group-congruent beliefs will often dominate whatever incentives individuals have to ‘get the right answer’ from an empirical standpoint. […] If he gets the ‘wrong answer’ in relation to the one that is expected of members of his affinity group, the impact could be devastating: the loss of trust among peers, stigmatization within his community, and even the loss of economic opportunities (Kahan 2012).’
conforming to the peer-group
Subjects capable of correctly interpreting the complexity of the data (System 2) would do so only when the less effortful, heuristic assessment (System 1) contradicted their ideological identities. It becomes rational for otherwise intelligent people to use their critical faculties when they find themselves in the unenviable situation of having to choose between crediting the best available evidence and protecting their belonging to a socio-political group.
It would be tempting to conclude that people are prepared to distort facts in order to defend the beliefs of their peer-group. That is not what the research says, however, nor would it be fair or reasonable to expect a citizen, however well-educated, to apply the rigor of a trained scientist to questions of this nature. While it might be desirable for people to promptly update their beliefs in the light of new evidence, it is a big ask when this may entail a direct impact on their social life and status. At the very least, one must allow for a degree of soul-searching, which can sometimes tilt into denial. Whatever the case, it represents a crippling challenge for rational, verbal communication: the more educated your audience, the more difficult it is to reach consensus on controversial issues.
‘It would also be consistent with, and help to explain, results from observational studies showing that the most science-comprehending citizens are the most polarized on issues like climate change and nuclear power’ (Kahan, Peters, Wittlin, Slovic, Ouellette, Braman & Mandel 2012; Hamilton 2012).
Kahan et al. could only conclude that “Visual inspection suggests that polarization—as measured by the gap between subjects of opposing political outlooks assigned to the same experimental condition—was greatest among subjects highest in Numeracy. Such a result would fit the […] hypothesis which predicted that subjects capable of correctly interpreting the data would resort to the form of effortful, System 2 processing necessary to do so only when the less effortful, heuristic or System 1 assessment of the data threatened their ideological identities.” (Kahan et al., 2013)
Simply put, if an idea effortlessly matches the beliefs of one’s peer-group, we easily adopt that idea without further investigation. If, on the contrary, the idea appears to contradict the beliefs of one’s peer group, we are likely to engage more effortful thinking in the hope that this will reveal an explanation that conforms to the beliefs of the peer group. To a communicator, this must sound like grappling with a wet bar of soap.
It is dangerously naive to believe that we can convince people to change acquired behaviours through rational arguments.
"If facts were sufficient to persuade people, then experts in science would rule the world. But facts are not, and scientists do not."
"We filter out all the facts that do not match our views. We all have filtering worldviews (extended metaphors or frames) through which we view the world."
"Facts cannot fight false frames. You must fight metaphorical fire with metaphorical fire."
"If you cannot change the public's worldview, microcosm, paradigm, extended metaphor, or frame, then you cannot change how they perceive the facts."
(Joseph J. Romm, Language Intelligence, 2012)
As an example, Romm points to how Democrats and Republicans continued to give almost diametrically opposite accounts of the Iraq WMD controversy, well after the 9/11 Commission’s conclusions and the Government’s retraction. We are programmed to instinctively defend the status quo within the peer group, or the frame, that we fit into:
“When the facts don’t fit the frame, the facts get rejected, not the frame.”
(Susan Nall Bales, FrameWorks Institute)
This has brought us to another important aspect of mass communication, which we are going to tackle in the next chapter: the concept of ‘tribalism’.
How and why does it happen that people turn the message on its head?
Is there any part of an advertising message that we do not instinctively filter?
Does higher knowledge and intelligence lead to consensus?