3 out of 4 people make 75%: a quick guide to surveys

There’s an old joke about three people who are interviewing for a job. They are given a pre-interview analysis task, let’s say ‘Is the funding this organisation gives to PhD students fairly distributed across demographic groups?’. The mathematician is the first to interview. She gives the interviewer a detailed breakdown of the weighted amounts granted to each category of student, but can’t really speak to whether this is fair or not. Next comes the departmental administrator (No, I don’t know what job both of these people would be going for; shhh, it’s a joke format). He has the figures in front of him, and it doesn’t matter which way you break them down, they still aren’t enough for the department’s overheads. Maybe if you squeeze the funds in this way. But for an additional X you could have this and this. Great, still no answer to the interviewer’s problem. Finally comes the statistician. He clearly has a sheet of figures and numbers in front of him. The interviewer looks at this, looks at the statistician and says “Ah, good. You’ve run the numbers. Finally, someone must have an actual answer for me? What do the stats say?”. The statistician smiles.

“What did you have in mind?”.

Now, this speaks to both the smarminess of statisticians, and the pliable way in which stats can be used. But, if the joke needed a fourth character, it could very easily include a surveyor. Because whatever way you cut the data, it derives from questions that have been asked, somewhere along the line. And what questions you ask – and how you ask them – matter very much indeed.

As a Biologist who has accidentally ended up doing a social-sciences PhD, I’m very aware of the requirements of a good survey. Indeed, as my survey is the key tool I’ll be using for 99% of my PhD, I’ve been pretty fixated on getting it right.  Following a number of months of thinking in Oxford, and then a test in Botswana (which was definitely not an excuse to escape January in the UK), I developed a 170-question, structured survey, which will be administered to livestock owners and managers across my study sites. But where to go from there?

First of all, I had to actually get to my initial site. Since February I’ve been the proud owner of a 1991 Land Cruiser VX. In other words, the only kind of car that’s acceptable for doing a job like mine. It also happens to be a few years older than me, and much more broken than I am. But more on that another time. In late February, I arrived in Tanzania, picked up my car, and drove to Nairobi. Straightforward as this sounds, it wasn’t: I arrived just as the floods started to hit Southern Kenya. Which meant that, on the drive from Amboseli across to Namanga, I had to wait half a day for the water to recede from a bridge. Luckily, the water did go down, and I was able to cross. But the following day some more rain turned up and the bridge was washed away.

Bridge complications notwithstanding, within a few days I was able to get down to Shompole. This is a small, relatively unknown part of Kenya, just across the border from Tanzania. It’s an interesting place, where the local Maasai people have banded together to designate separate wildlife and cattle areas in the land which they collectively own. This year has seen the end of a long drought, in which families lost huge amounts of livestock. As the landscape dries out, the community is forced to move more and more into the wildlife conservancy, but when the weather is clement the segregation works well. There are cheetah, lion, leopard, elephants and (critically) livestock and people, all sharing the same landscape; in my opinion, it’s a model for how conservation should work.

Shompole is relatively traditional – while there aren’t warriors per se, and the young men don’t ochre and braid their hair, they do still wear traditional ‘shukas’. And, crucially, speak Maa. So my survey had to be translated not into the relatively-easy-kiSwahili but the impossibly-hard-Maa. Working with a number of Maasai, we translated it. Now, I don’t know if you’ve ever worked with a team to translate something into a language you don’t understand? If not, I recommend that you don’t. Ever. It was abject torture. What’s the difference between ‘should’ and ‘would’? How does ‘expect’ differ from ‘will’? And what on earth do you do when happy, excited, and enthusiastic are all the same word? (I still don’t know the right answer to this, so if you’ve got ideas…). We patched and compromised and translated and back-translated and all in all it took about a week to translate 170 lines of text. Thank God I never have to do that again†.

Having finally finished the translation, the next job was the trial-run. In essence, I needed to make sure the questions made sense, to make sure the responses looked relevant, to make sure my enumerators actually knew how to administer the survey. There were a few hiccoughs, but nothing major. People’s main concern was that they were not being paid to take part. This is a thorny issue in surveying for numerous reasons, which (for me) boil down to:

1. Money. I’m a PhD student. The funding for this research is cobbled together from small grants here and there and whatever I can find under the sofa. I can barely afford research assistants, let alone pay participants.

2. I’m not asking much of them. The survey takes about 45 minutes, with a nice young chap asking you the questions and noting down your responses. You don’t even have to read anything. And anyway, I’m asking questions about cows for goodness sake (this being the No. 1 favourite topic for any Maasai). People talk about their cows just for fun.

3. It sets a precedent. Wherever I go, I’m working with partner organisations. And if I start paying people to take my survey, they will expect that from my partners in the future. Which they probably can’t afford. And then people get cross, and things get complicated. Avoid.

4. This is meant to be helpful: I’m trying to figure out how to help people protect their animals. They should genuinely benefit from answering my questions.

Of course, I know things aren’t as simple as that. I’m a relatively rich foreign researcher, asking for relatively poor people to take time out of their day to talk to me. There’s a strong argument they should be compensated, just on principle. But unfortunately this is a PhD; I can’t afford to stand on principle. So I’m not paying for surveys, much to everyone’s disappointment.

What I am doing, however, is using local research assistants. At each site, I’m hiring a number of people, training them, and paying them to collect my data. Because a blonde-haired, blue-eyed woman turning up in people’s bomas and asking ‘when did you last kill a predator?’ might not – astonishingly – get the same results as a member of the community asking the same thing. Anyway, I can’t be everywhere at once, so enumerators it is. For many kinds of survey, this is a big ask. Training people to do detailed, unstructured interviews is beyond my pay grade, and would probably take weeks. But I’m not doing unstructured interviews. Mine aren’t even semi-structured. They are fully, comprehensively structured; more structured than a whalebone corset. Every question is closed-ended; there is no room for interpretation. There is a full script for my enumerators. I’m pretty sure I could train a smart dog to administer it (although he would need thumbs to operate the data recording device, hence I discounted that possibility). It took but a few days to train a few enumerators—and we were off!

The other happy side-effect of using local research assistants is that they know where to go. More specifically, they can turn up at people’s houses and expect to be let in; they know everyone in the area, so if Auntie Precious is likely to be busy on Tuesday, well, they can collar her on Monday at 4pm when she’ll be getting back from the market. Incentivised by a pay-per-survey system and aided by their intimate knowledge of their own community, my RAs have been pretty effective. Within a few days of data collection starting, I threw my original timelines out of the window: they were getting twice the data, in half the time. And boy—it has been pouring in, both from Shompole and from Amboseli, a second site, which I set up shortly afterwards.

So. I have a perfectly imperfect survey, but hopefully one that will yield useful results. It’s in the field, getting responses. Lots of them. I just have to sit down and figure out how to turn responses into some kind of answer to my PhD questions. And since I’m not a statistician, I haven’t decided yet what I want the answers to be…

†Hahahah, I joke: I have to do it at least four more times, into other equally baffling languages.

