5.3 How far can the analysis be taken?
Data collection in the aid sector almost always aims to inform the design and implementation of programs, even when we are talking about monitoring and evaluation.
Most of the time, analysis should therefore aim to support the development of recommendations regarding program implementation.
However, that is not always easy, for the reasons stated above: the data needs to be reliable, and the limitations of your data and your biases must be acknowledged. Please refer to section 5.2 Understanding potential biases.
5.3.1 Different levels of analysis
You should also be clear about the level of analysis you are aiming for, as this will help you determine how ‘far’ you need to go.
For further descriptive elements on the levels of analysis, you can refer to the ACAPS documentation on the matter.
Bear in mind that the quality of the data you get necessarily influences both the type of analysis that can be done and the reliability of that analysis. Focusing on data quality, through careful tool design and data cleaning, is therefore essential if you want to draw conclusions from your data.
5.3.2 How to get insight? Know your context
Knowing the different levels of analysis does not make it easy to move from description to explanation to interpretation (we will leave anticipation aside for now). To state the obvious: the better you understand your context, the easier it is to move from descriptive analysis to interpretation.
You may want to compare your results to what you think the “norm” should be. The ‘norm’ could be based on standards, such as the Core Humanitarian Standard or any standard applying in the country.
In the case study, for example, we could compare the percentage of people with an improved water source to the national average, which is a common and available national statistic (‘Population using improved drinking water sources (%)’). In our sample, roughly two-thirds of the respondents have access to an improved drinking water source. Does this differ significantly from the national or regional average in the country?
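One way to answer that question is a one-sample test of a proportion. The numbers below are purely illustrative: we assume 62 of 92 sampled households (roughly two-thirds) report an improved source, and use 0.70 as a placeholder for the published national average; substitute your actual sample counts and the real national statistic.

```python
from math import erfc, sqrt

# Illustrative figures only: 62 of 92 sampled households report an improved
# drinking water source; 0.70 stands in for the national average (assumption).
successes, n = 62, 92
p_hat = successes / n          # sample proportion (~0.674)
p0 = 0.70                      # assumed national statistic (placeholder)

# One-sample z-test for a proportion under H0: p = p0.
se = sqrt(p0 * (1 - p0) / n)   # standard error under the null hypothesis
z = (p_hat - p0) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal tail

print(f"sample = {p_hat:.1%}, z = {z:.2f}, p = {p_value:.3f}")
```

With these placeholder figures the difference would not be statistically significant (p well above 0.05). Note that the test assumes a simple random sample; cluster-based household surveys often need design-adjusted standard errors.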
You may also want to question or try to explain any results that would confirm or contradict your assumptions. For example, in our case study we wanted to analyze whether the different FCS categories (‘Poor’, ‘Borderline’ and ‘Acceptable’) were tied to whether or not households received food security assistance. Our assumption was that households who received food security assistance would have higher FCS scores (as seen below).
| Assistance status | ACCEPTABLE | BORDERLINE | POOR |
|---|---|---|---|
| Household receives assistance | 0.00% | 48.65% | 51.35% |
| Household does not receive assistance | 85.45% | 5.45% | 9.09% |
| Grand Total | 51.09% | 22.83% | 26.09% |
However, we can see from our results that our assumption was in fact incorrect, and this is where ‘knowing your context’ comes in. FCS scores are significantly worse for households who received food security assistance, as shown by the larger percentage of those households in the ‘Poor’ and ‘Borderline’ categories.
Although it may initially seem illogical, the finding could indicate effective targeting of food security assistance towards the households who most need humanitarian support. Alternatively, it could indicate that food security assistance was last provided a long time ago (longer than the assistance was intended to last).
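Before interpreting such a pattern, it is worth checking that it is more than noise, for instance with a chi-square test of independence on the underlying counts. The counts below are reconstructed to be consistent with the percentage table above (37 assisted and 55 non-assisted households, 92 in total); treat them as an illustration, not as the case-study data itself.

```python
from math import exp

# Counts reconstructed from the percentage table (assumption):
# rows = assistance status; columns = ACCEPTABLE, BORDERLINE, POOR.
observed = [
    [0, 18, 19],   # household receives assistance (n=37)
    [47, 3, 5],    # household does not receive assistance (n=55)
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
total = sum(row_totals)

# Chi-square statistic: sum of (observed - expected)^2 / expected,
# where expected = row_total * col_total / grand_total.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / total
        chi2 += (obs - expected) ** 2 / expected

# For a 2x3 table, df = (2-1)*(3-1) = 2, and the chi-square survival
# function with 2 degrees of freedom simplifies to exp(-x/2).
p_value = exp(-chi2 / 2)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")
```

A tiny p-value tells you the association between assistance and FCS category is unlikely to be chance, but not why it exists; the interpretation (targeting, timing of assistance) still depends on context.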
However, be careful with this approach, as standards themselves can be questioned: they could result from biased pre-analysis, or there could be a context-specific situation in the topic or area in which the data has been collected. To account for this, there are a few things you can do to gain a better understanding of the context:
- Secondary data review is a mandatory prerequisite of analysis; it is even necessary to help you draft the questions of your survey. Useful sources include:
- Baseline, background or comparative data (for example a population census/survey), collected by government(s) or research bodies.
- Contextual information (for example an updated livelihood study), conducted by external actors. In the aid sector, you could rely on data provided through the clusters, or key actors in each region.
- Lessons learnt from previous data-collection exercises, previous analysis done by your organization.
- Relevant public data shared across the community (for example, data available on HDX).
- Show your results to the target population to understand what they have to say about them; their feedback can really enrich your analysis and help you make sense of findings that are unclear to you.
- For instance, you can organize focus group discussions around your results; these also provide qualitative data that raises questions about areas the quantitative analysis left unclear.
- If you are not an expert yourself in the field being studied, make sure to involve colleagues who are. For instance, if you are an M&E person conducting the analysis, the sectoral managers or referents should be involved in interpreting the data, as they have the knowledge and experience to draft recommendations through discussion of the findings.
- Always review both the questionnaires (piloting and translations!) and the findings with the enumerators. Were any questions unclear to them or to the respondents, potentially introducing bias? Once these questions are understood by all parties, are there more culturally appropriate ways to ask them so as to really understand the relationships present in our research questions?