Mobile Data Collection toolbox

5.2 General form design principles


Now that you have a clearer picture of your basic data needs (and only now, however tempting it was to start earlier!), you can take a sheet of paper and start listing the questions needed to fill in your desired indicators. Think them through carefully, and make sure that your survey flows logically from one question to another, and from one group of questions to the next.

To check

  • Don’t hesitate to try exercises 4-2 Design a Bad Survey and 4-3 Best Practices for Designing Surveys from the Getting The Data We Need module of the IFRC Data Playbook. Exercise 4-7 Household survey scenario is the next step, letting you experiment with survey design through a more concrete and realistic use case.
  • The KAP Survey Model - Knowledge, Attitude and Practices can also guide you in this task of survey construction, in particular point 1C, Steps in preparing the questionnaire. On pages 26-27 it lists the advantages and disadvantages of using certain types of questions.
  • You should also review ACAPS’s summary on Questionnaire Design for Needs Assessments in Humanitarian Emergencies, or the more detailed ACAPS technical brief on questionnaire design for needs assessment.



Checklist for form design

Some of the following might seem obvious, but it can be hard to keep the necessary distance when you are designing a survey yourself. Here is a checklist to keep in mind when thinking through a survey’s questions and answers:

What: Keep questions understandable, with as little technical jargon as possible.
Example: “NFI” can mean “No Freaking Idea” to a lot of people, not “Non Food Items”!
Why:
  • This avoids misconceptions and helps enumerators who join the project later on, for example by reducing training time through a survey that is self-explanatory to a certain extent.
  • In multilingual contexts it also facilitates translation. Don’t hesitate to add hints explaining necessary definitions, or to replace text values with pictures when useful. During training, use concrete examples, role playing or standardization-of-answer tests to make sure that everyone has the same understanding of strategic questions.

What: Refrain from merging questions to reduce the overall number of questions; it complicates things for the enumerators.
Example: “Are there any persons with disabilities and / or elders and if yes how many?”
Why: This double-barreled question is ambiguous. Stay simple: asking two questions, “How many people with disabilities are there?” and “How many elders are there?”, will not unduly lengthen your survey.

What: Avoid free text like the devil, unless you are certain the data will be useful and analyzed individually, or you have no other choice (e.g. some key informant surveys).
Example: “Please describe as lengthily as you want the reasons why you are unsatisfied with the water supply in the camp.”
Why:
  • Little global analysis is possible, there is a risk of error, etc.
  • As Marc Bekoff said: “The plural of anecdote is not data.”
  • The limits of Mobile Data Collection are quickly reached if results cannot be analysed rapidly and efficiently. You can avoid this by standardizing the answers and adding an “if other, please specify” option for exceptions (see the sketch after this checklist). Always ask yourself whether mobile data collection is the best solution if most of your questions are unstructured, open-ended questions.

What: Limit questions with multiple answers where possible (at least if you need to use the results for analysis).
Example: “What seeds have you received?” Rice, Maize, Sorghum, Sesame, Cowpea, Mung bean.
Why: This facilitates analysis, as multiple-answer questions often require a significant number of data operations before they are exploitable. Favor ranking questions, or a series of single-answer questions where you can build a table with a Yes/No answer for each item.

What: Keep the question neutral: don’t try to influence the person surveyed.
Example: “Do you ever dispose of your child’s stools in the open air, which is a very very very dangerous practice for the community’s health?”
Why: This will obviously bias results by encouraging an answer that might not be true. Why not ask “How do you dispose of your child’s stools?”, and make sure you have a list of possible answers that the enumerator does not prompt.

What: Contextualize questions that can affect the beneficiaries directly.
Example: “Are you satisfied with the measures that have been taken to support your family?”
Why: If you don’t explain that the answer to this question will have no impact on potential extra help you might give, you might get a seriously underestimated result.

What: Beware of cultural sensitivity and bias.
Example: “Why on earth do you keep up physical contact when you mourn your dead in an Ebola context?”
Why: This can annoy and demobilize the person answering the questions, and can therefore be very counterproductive for the quality of your data!

What: Keep your answers consistent.
Example: Having “road in good condition” and “paved road” in the same list of possible answers.
Why: This can reduce data quality and the relevance of your analysis.

What: Forget about questions that might only become useful in future years (at least when you are not participating in a frequently used and carefully developed survey).
Example: Check out the last survey you set up; you will probably find at least one question like this.
Why:
  • Apply the principles of data minimization and data proportionality.
  • The exception is a structured survey that is designed to be repeated frequently.
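To illustrate the “standardize answers and add an ‘if other, please specify’ option” advice above, here is a minimal sketch of how it could look in an XLSForm-based tool such as KoboToolbox or ODK. The question and choice names are purely illustrative assumptions, not part of any existing form.

```
survey sheet
| type                   | name              | label                                         | relevant                          |
| select_one water_issue | water_issue       | What is the main issue with the water supply? |                                   |
| text                   | water_issue_other | If other, please specify                      | selected(${water_issue}, 'other') |

choices sheet
| list_name   | name     | label                    |
| water_issue | distance | Water point too far away |
| water_issue | quantity | Not enough water         |
| water_issue | quality  | Poor water quality       |
| water_issue | other    | Other                    |
```

The same logic, one select_one yes/no question per item, can replace a long multiple-answer question when you want the results to be easy to analyse.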

Avoid making mandatory questions a bad habit…

Making questions mandatory is always a big debate, and the only honest answer is that it depends on the context. Some surveys can have most or all questions mandatory without any problem (for example a Knowledge, Attitude and Practices (KAP) survey with a scoring system to evaluate a health center’s progress over time). However, always think about it first so that it is an informed decision, as there are several reasons why you might deliberately choose not to make a question mandatory:

  • If, for technical reasons, there may be situations where the data cannot be captured (e.g. GPS points can sometimes be problematic independently of your enumerator…);
  • If you are not sure that the possible answers to a question are comprehensive (although this can be worked around by adding options such as “don’t know”, “none of the above”, “other” or “N/A” as appropriate). Don’t forget to plan for the future if this is a long-term survey by leaving ways of opting out: for example, a list of enumerators or beneficiaries can change, so if you have not worked out an adequate coding system, make sure there is an “other” option that a new staff member can tick to be identified easily;
  • If filling in your survey depends on several people who cannot always all be available at the right moment, which is frequent during household surveys (another solution is to make sure that your tool settings allow skipping a mandatory question until the form is validated, so that it can be saved and completed later);
  • Because people who run into issues with particular questions, especially questions required to complete the survey, may provide false information just to get past the error.

If, for one of the reasons above, you cannot make a given question mandatory but still want to encourage the enumerator to get an answer, you can add a mandatory question such as “Does the household have a hand-washing station?” before a mandatory “Please take a picture” question with an adequate skip pattern. An enumerator who can’t be bothered to answer will find it harder to skip the question if they have to deliberately give a false answer (see the sketch below).
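Here is a hedged sketch of that pattern in XLSForm; the question names are illustrative. The yes/no question is required, and the picture is only shown, and then also required, when the answer is “yes”:

```
survey sheet
| type              | name              | label                                             | required | relevant               |
| select_one yes_no | handwashing       | Does the household have a hand-washing station?   | yes      |                        |
| image             | handwashing_photo | Please take a picture of the hand-washing station | yes      | ${handwashing} = 'yes' |

choices sheet
| list_name | name | label |
| yes_no    | yes  | Yes   |
| yes_no    | no   | No    |
```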

Do what you can to make the unique identifier intelligible

A lot of MDC tools create a unique identifier automatically. This can be sufficient for some types of surveys, when you have no need to link back to secondary data or an external list of elements. When you do, a humanly intelligible identification system is very valuable: it helps you avoid duplicate records that would completely bias your analysis, and it allows double-checking through triangulation. Such a system might already exist in your secondary data (for example P-codes to identify villages, or an existing list of beneficiaries that you or one of your partners holds). If it does not (which can be the case for beneficiaries, households, key informants or points of interest that are sometimes only identified by free-text information), take the necessary time to set one up before starting the survey, making sure you have done your utmost to reduce the risk of errors in data capture.

Here are a few elements to keep in mind:

What not to do: Use a code that has no meaning for the enumerators.
What to do: Use a meaningful, short code that the enumerator can understand, to facilitate data capture.

What not to do: Use free text to capture a unique identifier (be it a name or a code).
What to do: Use drop-down lists or, even better, set up a barcode system where the code is scanned from a card or a printed Excel table. If you don’t use barcodes, don’t hesitate to have the code created automatically from previous answers, with a prompt informing the enumerator of the calculated code for confirmation (see the sketch below).

What not to do: Put hundreds of codes into a single drop-down list.
What to do: Set up filters on the drop-down lists (e.g. filter by geographic area, by point of interest, etc.).
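To make this more concrete, here is a minimal XLSForm sketch of two of these ideas: a household code calculated from previous answers and shown back to the enumerator for confirmation, and a village list filtered by the region selected just before (choice_filter). The coding scheme and all names are illustrative assumptions; adapt them to your own lists.

```
survey sheet
| type               | name         | label                                                                     | calculation                           | choice_filter      |
| select_one region  | region       | Region                                                                    |                                       |                    |
| select_one village | village      | Village                                                                   |                                       | region = ${region} |
| integer            | hh_number    | Household number in the village                                           |                                       |                    |
| calculate          | hh_code      |                                                                           | concat(${village}, '-', ${hh_number}) |                    |
| note               | hh_code_note | Household code: ${hh_code}. Please confirm it matches the household card. |                                       |                    |

choices sheet
| list_name | name   | label     | region |
| region    | north  | North     |        |
| region    | south  | South     |        |
| village   | VIL001 | Village A | north  |
| village   | VIL002 | Village B | south  |
```

If beneficiary cards with printed barcodes are available, a question of type barcode can replace the manual selection entirely.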

Give your enumerators an incentive to collect good-quality data!

It makes perfect sense to do everything you can for the survey results to be as accurate as possible. But rather than making your survey completely independent of your enumerators by locking down its content as much as possible, learn to adapt the survey to the context they will be working in. As they will be key actors in the success of your survey, involve them in the content too: ask them to help clarify definitions and questions, and take their comments and suggestions into consideration before finalizing the forms.

You must therefore work on data quality and enumerator satisfaction at the same time, for example by informing enumerators when they have made an error while entering a number or an email address (in XLSForm this can be done with constraints such as regex(., '[0-9]{0,15}') or regex(., '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+.[A-Za-z]{2,4}'); see the sketch below and section 5.6.7 The XLSForm Cheat Sheet), or by using pictures rather than text in some questions to make data capture easier. However, it is important that enumerators continue to view the survey and its data-quality checks as a help and not a hindrance. This is particularly true with enumerators who have specialized in a given field for many years and worked perfectly well without MDC: they must not be made to feel that the mobile tool replaces their skill rather than adding to it. For example, a checklist of things to verify before a cash transfer can feel almost humiliating to some highly qualified social workers.
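For reference, here is a minimal sketch of how the two regex checks quoted above can sit in the constraint column of an XLSForm, together with a constraint_message so the enumerator understands what went wrong. The patterns are the ones given in this section; the question names are illustrative.

```
survey sheet
| type | name  | label         | constraint                                                 | constraint_message                     |
| text | phone | Phone number  | regex(., '[0-9]{0,15}')                                    | Please enter digits only (maximum 15). |
| text | email | Email address | regex(., '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+.[A-Za-z]{2,4}') | Please enter a valid email address.    |
```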

One way to keep this balance is to favor interaction rather than letting technology create a barrier in the communication with the person surveyed. Beyond explaining why mobile devices are used rather than paper forms (which is essential to create trust in most contexts), some NGOs that are advanced in the use of MDC, in particular with children, use the phone or tablet as a way for a child to draw or show what they feel, or to describe their environment, rather than talking about it. Although this cannot be generalized to all survey types, finding the balance between open and closed survey processes is essential when both human and technological intelligence are needed. Any social worker will tell you that discussing vulnerabilities while tapping answers into a mobile phone can be completely out of place: technology needs to be put to good use, and used only when it really adds to the data collection process.

To conclude, it is crucial to show experienced data collectors the added value of MDC (how easily results can be visualized, how standardized responses facilitate analysis, how their photos can help people understand a situation better than words, how GPS points can produce beautiful maps…) and that it will not hinder their work in the field. Proving to them that their role and the quality of their data collection matter even more now that the data is used more thoroughly empowers the staff using MDC and makes them deliberate actors in the survey’s success.