Case Study: Testing in Practice in Slovenia
Here’s how Simon Maljevac, the director of Legebitra, describes the steps they took to test different messages:
Legebitra ran three focus groups with a professional research agency and then three more with the help of university professors and students. Despite significant constraints (they couldn’t hire venues, pay participants, fully meet the sampling criteria or transcribe the conversations), the student-run groups produced high-quality results that were very similar to the professional ones.
Who did they test?
Legebitra chose to test their messages with a targeted sample of the ‘moveable middle’ in Slovenia, which meant designing a screening questionnaire that excluded committed supporters and opponents. They knew some of the relevant characteristics from their own audience research (for example, they wanted to talk to people between 30 and 45 years old). Working with an agency, they created further criteria, drawing on existing survey data.
For example, one screening question was developed from the European Values Survey, which asks: ‘Who would you not like to have as neighbours?’ Respondents ranked ‘LGBT people’ within a longer list (which also included people of another race, drug addicts and so on), and Legebitra then excluded anyone who placed LGBT people at either extreme of the ranking. Because of stark gender differences in attitudes towards LGBT people in Slovenia, they decided to run one focus group with only women, one with only men, and a third mixed group.
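If you wanted to operationalise this kind of exclusion rule yourself, here is a minimal Python sketch of the screening logic. Everything in it is an illustrative assumption rather than Legebitra’s actual instrument: the function names, the sample data and the `extreme_width` cut-off are all hypothetical.

```python
# Hypothetical sketch of a "moveable middle" screening rule: respondents
# rank a list of groups (as in the European Values Survey question
# "Who would you not like to have as neighbours?"), and anyone who places
# the target group at either extreme of the ranking is screened out as a
# likely committed opponent or supporter.

def in_moveable_middle(ranking: list[str], target: str = "LGBT people",
                       extreme_width: int = 1) -> bool:
    """Return True if `target` sits away from both extremes of the ranking."""
    if target not in ranking:
        return False  # cannot assess this respondent, so exclude
    position = ranking.index(target)
    return extreme_width <= position < len(ranking) - extreme_width

def screen(respondents: dict[str, list[str]]) -> list[str]:
    """Keep only respondents whose ranking puts the target mid-list."""
    return [name for name, ranking in respondents.items()
            if in_moveable_middle(ranking)]

if __name__ == "__main__":
    # Illustrative data only; rankings run from "most unwanted" downwards.
    sample = {
        "R1": ["LGBT people", "drug addicts", "people of another race"],  # excluded
        "R2": ["drug addicts", "LGBT people", "people of another race"],  # kept
        "R3": ["drug addicts", "people of another race", "LGBT people"],  # excluded
    }
    print(screen(sample))  # -> ['R2']
```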
What did they test?
Legebitra created three messages based on their framing tasks. They turned these messages into three short videos, with friends as the actors, and tried out different slogans, which they got feedback on in the focus groups.
Each video had a clear hypothesis for why it would be successful: one used humour and appealed to benevolence values, for instance, while another appealed to common ground by showing that the everyday life of a gay man is no different from anyone else’s.
As a point of comparison with the videos, they also discussed a series of old campaign messages.
What outcome did they measure?
During the focus groups, the moderators asked numerous questions to see whether the message of each video was understood, what kind of emotional reactions people had, and whether people identified with the actors. They asked questions such as: ‘What did you remember most?’, ‘What is this video trying to tell you?’ and ‘How strongly did you identify with this video?’ The discussions provided rich insight into people’s beliefs.
One particularly useful outcome was getting a sense of people’s boundaries: the point at which they stopped going along with a message and reacted strongly against it. In the all-male focus group, for example, a scene of two men kissing provoked such a negative reaction that participants could no longer concentrate on the content or intention of the message.
“One thing that became clear in the focus groups was that people in Slovenia really understood the issue of rights and discrimination for lesbian and gay people, on a rational level. People could clearly articulate the need for protection under law, and rarely provided arguments against that. However, on an emotional level there was still resistance, and the emotional reaction tended to trump the rational arguments. Our big challenge now is to connect with people emotionally rather than rationally.”
Another well-known factor that biases test results is the sleeper effect: a message can trigger an immediate response, but this response can change over time as the message “seeps in”.
Interaction!
If you don’t have the budget to run a professional testing process, what “home-made” testing procedures could you nevertheless set up? Share your experience and/or your thoughts in the comments section here on the page!