Whatever the setting in which it takes place, the training of employees must take a number of contingencies into account. Organizations see training as an investment, and after making such an investment, they hope to retain their employees; they therefore usually train permanent employees with the stated goal of improving productivity.
The training of interviewers working in survey research often takes place in an employment context with few training incentives. Generally, there is a high level of turnover, and many employees work part-time or on call. Interviewers may work for more than one employer and go from project to project. To a certain extent, they may be able to negotiate how many hours they work and when. In a sense, their situation is similar to that of contractual employees.
In the context of social research, interviewers are asked to carry out an activity that is considered to be scientific in nature. However, in private firms, there is a trade-off to be made between quality and cost-effective production; employees must be competent and flexible but not overly expensive, keeping in mind the costs associated with recruitment and training. Against this background, the research team devised a short training session that focused on improving cooperation rates and that complemented the initial training provided by a private survey firm specializing in social research.
A review of the literature specific to interviewer training turns up a surprisingly small number of publications dealing with the question. A number of authors (Mayer and O’Brien, 2001; O’Brien et al., 2002; McConaghy and Carey, 2004; Cantor et al., 2004) have experimented with training along the lines presented by Groves and McGonagle (2001). This type of training, called ART (avoiding refusal training), focuses on tailoring and maintaining interaction. Briefly, the various phases consist of gathering householders’ concerns from expert interviewers, classifying these concerns by theme, examining with experienced interviewers how to respond to the different types of concerns, and writing a “manual” that serves as the basis for training. The training itself lasts about eight hours. Interviewers are taught to identify concerns, classify them appropriately, learn the best reaction to each type of concern and, finally, actively practice how to react. There are variations among researchers, but the basic pattern remains the same. Some researchers incorporate measurement instruments (pre- and post-training evaluations, personality tests, observation scores, etc.) to better evaluate what was learned during training and relate it to subsequent performance.
The two experiments presented by Groves and McGonagle (2001), conducted by telephone, were follow-ups, one among contact persons in businesses, the other among farm operators. Interviewers were experienced, permanent interviewers working for governmental organizations (the U.S. Bureau of Labor Statistics and the National Agricultural Statistics Service). Groves and McGonagle reported a 14-percent improvement in cooperation following training; however, this increase was not clearly related to performance in training. They also concluded that they were “skeptical of a naive application of these findings to RDD household telephone surveys because of the tendency for radically reduced length of interactions prior to a householder’s decision.”
However, Mayer and O’Brien (2001) did exactly that; i.e., they followed up on Groves and McGonagle (2001) in the context of an omnibus RDD telephone survey of the general population. They had 24 interviewers, with between five months and five years of call-centre experience, distributed among three groups: a control group, which received no training; a “before” group, which received the eight-hour training session before the beginning of the project (two two-week surveys); and a “between” group, which received the training session between the two surveys. They reported an improvement of between 3 and 7 percent in first-contact cooperation rate for interviewers who participated in the training, and up to 14 percent over time compared with those who did not receive the training. They recommended that follow-up research use a) a larger sample of interviewers, b) a longer data collection period, c) baseline data for participant groups, and d) an interviewer evaluation designed as a management tool. Their subsequent experiment (O’Brien et al., 2002) was carried out in a face-to-face context (National Health Interview Survey). They concluded that the training did have an effect, but they did not find significant relationships between post-training evaluation scores and first-contact cooperation rates.
McConaghy and Carey (2004, 2005) also followed up on Groves and McGonagle (2001) in an experiment with face-to-face interviews conducted for the General Household Survey at the UK’s Office for National Statistics. Low performers were targeted for either ART training or a placebo-like “Day Away” intervention. They obtained results similar to those of Groves and McGonagle (2001): some improvement in response rates (around 9 percent). However, the ART trainees could be divided into improvers (n=9) and “no change” (n=6). The improvers experienced a 15-percent improvement in response rate following training; their response rate tended to decrease slightly over the long term but remained higher than before training. Those who improved most during training tended to achieve a better response rate after training. Comparing the ART trainees with the placebo “Day Away” group confirmed that ART training was more effective than a mere placebo intervention. Finally, Cantor et al. (2004) converted this training program into a computerized format. They conducted their experiment in the context of a follow-up of non-respondents to a mail survey. Although they found differences between the control group and the two experimental groups, none of these differences were significant.
In summary, authors using the ART program usually found some improvement in performance among interviewers following the training, but these differences were not always statistically significant and could rarely be attributed to knowledge acquired during training. There is also the possibility of a Hawthorne effect; i.e., that the mere attention paid to interviewers explains the subsequent improvement in performance. However, McConaghy and Carey’s (2005) design makes it possible to rule out this possibility in their case. Only one of the experiments was conducted in the context of an RDD telephone survey (Mayer and O’Brien, 2001), and in most experiments, the pool of interviewers was rather homogeneous, experienced, and small. Finally, in one case (Groves and McGonagle, 2001), interviewers did not seem to enjoy the training, while in another (McConaghy and Carey, 2005), they did.
Other attempts at improving training have been carried out, mostly in Belgium and the Netherlands, in varying contexts. Carton (2000) studied training in the context of five different projects carried out by ISPO (KU Leuven). She concluded that a) there was in fact very little difference in performance between interviewers working on the same project, b) interviewer performance may vary between projects, and c) training should be done differently and separately for experienced and newly hired interviewers, building on acquired experience with the former, while starting with theoretical explanations before going on to practical exercises with the latter. She also advocated that, in the selection process, interviewers be better informed of what is expected of them from a “cognitive, communicational and social point of view.”
Dijkstra and Smit (2002) recorded interactions in an attempt to understand what works and what does not. The interviewers were all female students in a department of communication with no interviewing experience, which makes the generalizability of the data doubtful. However, it is interesting to note that a) they saw no difference in efficiency between tailoring and persuasion attempts, and b) maintaining interaction was considered effective only when sustained by the respondent, rather than the interviewer. Finally, Van der Zouwen, Dijkstra and Smit (1991) reported that interviewers trained to adopt a more formal style tend, with time, to return to a more personal style.
Summarizing these various experimental results, one could conclude that the impact of training appears to be generally low. This could be due to the content of the training and the methods used. Or it could be due to an inappropriate setting: insufficient variance in the performance measured, owing to the type of interviewers (experienced and generally high performers), the type of project (easy follow-ups as opposed to RDD surveys of the general population) or the project length (too short to enable selection of interviewers, training and assessment of impact). In addition, the measure of performance generally used -- cooperation rate at first contact -- may be unreliable, thus preventing an effect from being identified (Durand, 2005). Finally, it is rather surprising that no longitudinal analysis of the evolution of performance has been performed in such contexts. Before concluding that performance in this field is a question of predispositions that interviewers either have or lack and that cannot be improved by training, it appeared necessary to tackle these four possible reasons for the low impact of training; i.e., setting, content and methods used, measurement of performance, and analysis of performance.
The goal of this study was to devise a short complementary training session for interviewers in the context of social research and of an RDD-based telephone survey. At the same time, we wanted to stay as close as possible to the situation faced by private firms; i.e., a substantial proportion of new interviewers, high turnover, and a financial context that prevents devoting much money to training. Inspired by the literature on empowerment and self-efficacy, and taking into account the means at our disposal, we decided to devise a cognitive training focused on knowledge acquisition. This training should be viewed as a first experiment that ought to be replicated and improved.
[1] Montreal has a modest English-speaking population but a fair proportion of bilingual people with F (...)
The experiment took place in the context of the Canadian Addiction Survey of 2003–2004. The survey used a stratified sample of the provinces and territories of Canada. It was conducted from a single site located in Montreal, Canada [1], by a private survey company. The fieldwork began on December 16, 2003, and ran for one week; after the Christmas break, it started up again on January 9 and finished on April 19, 2004. Close to 90 percent of the interviews were conducted in English, the rest in French. The response rate (AAPOR RR3) was 47 percent, for a total of 13,999 completed interviews. The average interview length was 24.6 minutes.
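For readers unfamiliar with the measure, AAPOR's Response Rate 3 counts complete interviews over all known-eligible cases plus an estimated share of the cases of unknown eligibility:

$$\mathrm{RR3} = \frac{I}{(I + P) + (R + NC + O) + e\,(UH + UO)}$$

where $I$ stands for complete interviews, $P$ for partial interviews, $R$ for refusals and break-offs, $NC$ for non-contacts, $O$ for other eligible non-interviews, $UH$ and $UO$ for cases of unknown eligibility, and $e$ for the estimated proportion of unknown-eligibility cases that are in fact eligible.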
Overall, 79 interviewers worked on the project for a total of 2,436 interviewer-days; no interviewer worked more than 65 days, and the survey was in the field for 109 days. Since the firm usually conducts most of its interviews in French, it had to recruit a substantial number of new, bilingual or English-speaking interviewers for the project. A questionnaire pertaining to interviewers’ attitudes, behaviours and characteristics (Lemay and Durand, 2002) was distributed to interviewers as soon as they had worked at least 20 hours on the project. In all, 72 interviewers worked at least 7 days on the project; 57 of them completed the questionnaire, and 54 gave their ID codes, allowing us to match their questionnaires with performance data.
Table 4 paints a portrait of the interviewers who answered the questionnaire. One-third of the interviewers were younger than 30, while 39 percent were 40 or older; the mean age was 37. There was an almost even split between men and women, and 75 percent had French as their only mother tongue. Interviewers had, on average, nearly three years of interviewing experience. However, 72 percent (n=41) were newly hired by the firm, and 14 of them were new to the job of interviewing. Most interviewers (84 percent) considered themselves regular employees, split almost equally between full-time and part-time. More than one-third (36 percent) worked 25 hours or less per week, while nearly one in four (23 percent) worked more than 35 hours a week.
The literature on training in organizations and in the survey field, as well as observations within the firm, directed us toward a cognitive type of training that would complement the firm’s basic training, which was more practical and concrete. The assumption was that knowledge would translate into the development of abilities, because interviewers would better understand why they were asked to perform within-household selection, convince selected respondents and carry out refusal conversion, and what happened when they tried to do so. The goal was to increase interviewers’ self-confidence and intrinsic motivation in order to help them find their own solutions to the problems they faced; i.e., to tailor their methods to their own personal style of interaction. We postulated that tailoring relates not only to respondents but also to interviewers’ own habits in dealing with people: if interviewers’ reactions to respondents seem artificial or pre-programmed, tailoring may not work.
Two types of information were identified as lacking and potentially helpful to interviewers. First, concrete information on sampling in general, and on the particular sampling frame used for surveys of the general population, was meant to improve interviewers’ understanding of why they have to select respondents and convince those selected to answer the survey. The second type of information, related to why people refuse to participate in surveys and how experienced interviewers deal with refusals, was meant to foster a better understanding of the social interaction and of what might be happening in the household being called. The stated goal was to reduce the stress associated with “taking it personally,” i.e., taking too much responsibility for refusals and therefore becoming paralyzed by stress. We rejected practice-based training because of the time needed to use that type of training efficiently and because of the artificial nature of practice in the context of telephone surveys, where interviewers have to improvise and react very quickly.
One consideration in setting up the training session was deciding whom to train. Since training is likely to be more effective and relevant with low performers (Groves and McGonagle, 2001), and since the focus of training should differ according to performance and experience (Carton, 2000), it was decided to focus on low performers and newly hired interviewers.
The second decision concerned a feasible setup for the training. We decided to offer short training sessions because, in the context of a private firm’s activities, this seemed more realistic to implement in various situations. In addition, a short session has the advantage of making it easier to keep interviewers concentrated and interested. It was thus decided to hold three one-hour sessions during the same afternoon and to target the low performers and newly hired interviewers working on that particular day. The other low performers and newly hired interviewers would act as a control group. This appeared to be the most feasible way to proceed, since the characteristics of interviewers are not normally related to which day they work. When the training sessions were scheduled, one-third of the fieldwork remained to be done, and the type of work left was becoming more specific (mostly follow-ups on appointments and refusal conversion).
[2] That is, the file produced by the CATI software that gives the basic information on all the calls (...)
[3] A value of one on the NCPI is equivalent to having completed interviews of average length (24.6 minutes) (...)
The first step in the process was to identify the low performers. During the first weeks of the fieldwork, the administrative database [2] was retrieved every two weeks to monitor the evolution of performance. Performance itself was measured using an index of the net contribution to performance (NCPI [3]; Durand, 2005). This index takes into account all the tasks performed by interviewers, including follow-ups on appointments and refusal conversion; it can be computed whatever the task performed, contrary to cooperation rate at first contact (Durand, 2005). Semi-parametric group-based trajectory analysis (Nagin, 1999) was used; this method identifies distinct trajectories in the evolution of performance. Two groups, which may be termed high and low performers, were identified. Figure 1 illustrates the evolution of performance from day 7 in the field (after the Christmas break) up until two weeks before training, when interviewers were selected. The analysis identified 25 interviewers in the low-performance trajectory and 42 in the high-performance trajectory. The trajectories of low and high performers differed in their average starting level (-.87 compared with -.40). The average rate of improvement was similar (.0159 per day in the field), but the average daily performance of the high performers appeared to become more stable over time.
Figure 1: Trajectories of low and high performers two weeks before training
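Nagin’s semi-parametric method fits a finite mixture of polynomial trajectories and is usually estimated with specialised software. As a rough sketch of the grouping idea only -- not the authors’ actual estimation -- one can fit a line to each interviewer’s daily NCPI series and cluster the fitted coefficients; all data and names below are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def trajectory_groups(days, ncpi, ids, n_groups=2, seed=0):
    """Crude stand-in for group-based trajectory analysis:
    fit an OLS line (intercept, slope) per interviewer, then
    cluster the coefficient pairs with a Gaussian mixture."""
    coefs = []
    for i in np.unique(ids):
        mask = ids == i
        slope, intercept = np.polyfit(days[mask], ncpi[mask], 1)
        coefs.append([intercept, slope])
    gm = GaussianMixture(n_components=n_groups, random_state=seed)
    return np.unique(ids), gm.fit_predict(np.asarray(coefs))

# Fabricated example: 5 interviewers observed over 30 field days
rng = np.random.default_rng(1)
ids = np.repeat(np.arange(5), 30)
days = np.tile(np.arange(1, 31), 5)
start = np.repeat(rng.choice([-0.87, -0.40], size=5), 30)  # two starting levels
ncpi = start + 0.016 * days + rng.normal(0, 0.1, ids.size)
print(trajectory_groups(days, ncpi, ids))
```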
[4] The analysis according to day in the field rather than interviewer experience on the project was used (...)
The field director was asked to classify the interviewers into the same two groups; inter-rater agreement between the two classifications was .58. The classification based on performance trajectories was used to decide whom to train [4]. In addition, 10 interviewers, hired between the identification phase and the training session, were added to the group to be trained. Table 4 shows that there is little difference in the characteristics of the interviewers who answered the questionnaire, whether comparing the newly hired, the low performers and the high performers, or the trained and untrained interviewers. The newly hired tended to be younger and were more likely to be women.
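The text does not name the agreement statistic; assuming it is Cohen’s kappa, the figure can be computed from the two classifications along these lines (labels invented):

```python
from sklearn.metrics import cohen_kappa_score

# 0 = low performer, 1 = high performer, one entry per interviewer
trajectory_class = [0, 0, 1, 1, 1, 0, 1, 1, 0, 1]  # trajectory-model grouping
director_class   = [0, 1, 1, 1, 0, 0, 1, 1, 0, 1]  # field director's grouping

print(cohen_kappa_score(trajectory_class, director_class))
```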
All the identified low performers and newly recruited interviewers (n=19) working on the day chosen for training were invited to participate in one of the training sessions. These sessions were scheduled to accommodate the interviewers’ different work schedules. All the selected interviewers accepted the invitation and were paid during the training. One interviewer was absent, and one was late and transferred to the next session. Therefore, the three sessions were given to a group of six (five low performers and one newly hired), a group of five (two low and three new) and a group of seven (six low and one new). The first one-hour session was followed by a one-hour break, during which the two assistants and the principal researcher/trainer held a debriefing. This discussion allowed for an adjustment of the two following training sessions, which were held back-to-back.
The first part of the training pertained to sampling. To liven up the training, bags of M&Ms were used (Auster, 2000) to explain different notions in a concrete manner: a) What is a sample? (each M&M bag as a sample); b) What is the effect of non-random non-response on the sample? (examples with refusals and with within-household selection).
The second part of the training was less lively. No screen or LCD projector was available; therefore, paper reproductions of transparencies were distributed. These transparencies presented results from research on refusers (mostly from Goyder (1988), The Silent Minority) and information gleaned from interviews with high performers on how they deal with refusals. This part essentially comprised the presentation of information, followed by discussion.
The trainees seemed to appreciate the first part and asked questions such as: What is a sample, really? How is the sample produced? How are the telephone numbers selected? Why can’t I interview any person who is willing to answer? Why do I have to interview a person who is not interested? Why ask an elderly woman if she has ever smoked marijuana? We answered these questions, sticking to the M&M analogy and the percentage of yellow candies in the bags. Since the interviewers were working on the Canadian Addiction Survey, the yellows were likened to marijuana smokers in order to explain that the estimated proportion of marijuana smokers in the population would not be correct if the non-smokers were not interviewed. As for the second part of the presentation, some interviewers seemed very surprised by the information, which ran counter to what they believed. Finally, the next day, the field director told us that everybody was talking about the M&M experiment and that it was a “critical success,” which is, pedagogically at least, a good sign.
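The bias the M&M exercise dramatizes is easy to simulate: if the “yellows” (standing in for marijuana smokers) are less likely to respond, the sample estimate is biased downward. A minimal sketch, with all proportions invented:

```python
import numpy as np

rng = np.random.default_rng(42)
POP, P_YELLOW = 100_000, 0.20                     # population size, true proportion

yellow = rng.random(POP) < P_YELLOW
bag = rng.choice(POP, size=1_000, replace=False)  # one random "bag"

# With full response, the bag estimates the true proportion well.
print("full sample:     ", yellow[bag].mean())

# Non-random non-response: yellows answer 40% of the time, others 80%.
respond = np.where(yellow[bag], rng.random(bag.size) < 0.4,
                                rng.random(bag.size) < 0.8)
print("respondents only:", yellow[bag][respond].mean())  # biased downward
```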
Training can improve performance, which is the first goal usually pursued by organizations that use it. It may also have other, less spectacular but nonetheless substantial positive effects. For instance, it may help retain personnel who would otherwise have left. It may also improve self-confidence and reduce job-related stress, thereby helping to maintain a good work climate and eventually improving the performance of interviewers as a whole. These potential impacts are assessed below.
In order to assess the impact of the training, a questionnaire was distributed to all the interviewers still with the firm two weeks after the training. Version A, for interviewers who had not been invited to a training session, comprised eight questions pertaining to beliefs regarding reasons for refusals, perception of control, self-attribution of performance, and perceived understanding of sampling and within-household selection. Version B, distributed only to the trained interviewers, had an additional section evaluating the training per se; i.e., whether the training helped them understand sampling and selection and find ways to convince respondents. In all, there were 18 interviewers in the trained group, but one did not return after the training (13 respondents); 21 in the control group, but only 7 still working on the project on training day (4 respondents); and 42 in the high performers group, with 30 still working on the project on training day (26 respondents).
Table 1 presents the evaluation of the training. Items can be divided into three blocks.
The highest-agreement block: respectively, 12 and 10 interviewers (of 13) fully or somewhat agreed that the training helped them understand sampling and within-household selection. This refers to the part of the training that used the M&M demonstration.
The second-highest-agreement block: respectively, nine of 13 and eight of 12 respondents fully or somewhat agreed that the training helped them understand the reasons for refusals and why it is important to convince potential respondents to cooperate.
The low-agreement block: respectively, seven and six of 12 respondents fully or somewhat agreed that the training helped them find arguments to convince respondents or feel more comfortable with convincing. This block concerns the transfer of acquired knowledge into practice, an aspect that was not part of the training per se.
Table 1: Evaluation of the Training Session
[5] Measured by an additive scale of five related items (Cronbach alpha = .81).
Table 2 shows the distribution of answers to questions pertaining to the interviewer’s role and to knowledge of sampling and within-household selection. The distribution of answers is highly skewed in favor of the two most favorable categories. Table 2 shows that the levels of high agreement for the first three interviewer-related items varied between 48 percent (20 of 42 felt they controlled the situation during interviews) and 60 percent (26 of 43 fully agreed that people accept because the interviewer is self-confident). In addition, 33 percent (14 of 43) said they were capable of finding arguments most of the time, and 60 percent said that the interviewer’s role in the decision to participate is very important. A scale of these items may be computed [5]. When the trained interviewers were compared with the high performers on this scale -- interviewers from the control group being too few for a meaningful comparison -- the high performers appeared more likely to be self-confident and to attribute cooperation to their own work (F=6.23, p=.017).
Table 2: Interviewers’ Self-perception of Their Role, Behaviour and Knowledge
[6] Measured by an additive scale of the two items (Cronbach alpha = .58).
Forty-nine percent (21 of 43) said they knew very well how telephone numbers are selected, and 81 percent (35 of 43) that they knew very well why within-household selection is done. Analysis showed that high performers and trained interviewers did not differ in their self-reported knowledge [6] of sampling and selection (F=.446, p=.51).
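The two additive scales and the group comparisons reported here rest on standard computations -- Cronbach’s alpha for scale reliability and a one-way ANOVA for the F-tests. A generic sketch with invented item scores:

```python
import numpy as np
from scipy.stats import f_oneway

def cronbach_alpha(items):
    """items: respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale score
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
items = rng.integers(1, 5, size=(39, 5))         # 39 respondents, 5 items (invented)
scale = items.sum(axis=1)

print("alpha =", cronbach_alpha(items))
# Trained (first 13) vs. high performers (remaining 26) on the scale score
print(f_oneway(scale[:13], scale[13:]))
```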
Finally, trained interviewers were more likely than high performers to state that their self-confidence had improved since the beginning of their work on the project (LR test=6.47, p=.039). All the trained interviewers said their capacity to convince had improved much (7 of 13) or somewhat (6 of 13), while 19 of 26 high performers said their performance had improved; the remaining 7 stated that it had remained stable.
Training may also have an impact on employee retention. Low performers and newly hired interviewers usually find the work they do rather difficult and stressful at first. Training may encourage them to persevere: it sends the message that the employer cares about them and wants to retain them, and it stresses -- in the case of our experiment at any rate -- the importance of interviewers in the survey process, while at the same time trying to convey that interviewers should not take all the blame for refusals.
Figure 2 shows the distribution of the number of days that interviewers still with the firm on training day stayed on afterwards. Only one of the 18 trained interviewers left the firm immediately after training. The 17 others worked an average of 18 days after the training (17 days if the interviewer who left is included), with a minimum of four days and a maximum of 34. In the control group, only seven of 17 were still working on the training date: ten had left between the identification phase and the training day. Among those who could have been trained but were not, the average number of days worked after the training was 11.4, with a minimum of two and a maximum of 20. In comparison, the 30 high performers still working on the project worked an average of 17 days, with a minimum of one and a maximum of 29. However, these differences are not statistically significant (F=1.35, p=.27).
Figure 2: Employee retention according to training group
One key element of interest is whether the training had any impact on interviewer performance, restricted in this case to the ability to convince. Assessing this impact requires a reliable measure of performance and an examination of the evolution of performance throughout the project, in order to estimate how performance evolved after the training session. Daily performance measures are nested within interviewers, so multilevel growth models may be used.
Table 3: Results of Longitudinal Multilevel Analysis of Evolution in Performance
[7] A variable that has significant random effects does not act in the same way for different interviewers (...)
A classical approach to analyzing this type of data is multilevel longitudinal analysis (Singer and Willett, 2003; Hox, 2002; Raudenbush and Bryk, 2002; Snijders and Bosker, 2000), in which the two levels are the interviewer and the daily performance. The analysis did not use the data from each interviewer’s first two days of work, since performance is not reliable enough during the initial learning period. As with the trajectory analysis, time itself (day in the field) was used as the time variable instead of interviewer experience on the project. Table 3 shows the results of the analyses used to identify a parsimonious and theoretically sound model. The effects are tested using nested models. First, an unconditional growth model is fit to the data in order to estimate the overall effect of time: a linear effect of time is entered and, since performance can only improve up to some optimal level, a quadratic component is added. Then, a conditional growth model is tested in order to a) control for membership in the pre-training performance trajectories and b) estimate whether training modified the evolution of performance in the trained group. The final model is parsimonious and keeps only the significant random effects [7].
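In current software, nested models of this kind can be fit roughly as follows. The formula mirrors the fixed effects named in the text (linear and quadratic time, trajectory-group membership, and post-training time), with a random intercept and a random linear slope per interviewer; the data frame and column names are hypothetical stand-ins for the authors’ file, and the synthetic data exist only to make the sketch runnable.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the interviewer-day file (all numbers invented).
rng = np.random.default_rng(0)
rows = []
for interviewer in range(40):
    high = interviewer < 25                        # high-performance trajectory
    trained = not high and interviewer % 2 == 0    # some low performers trained
    for day in range(3, 100):                      # first two days dropped
        post = max(0, day - 70) if trained else 0  # days since training
        ncpi = ((-0.40 if high else -0.87) + 0.016 * day - 0.00008 * day**2
                + 0.026 * post - 0.0005 * post**2 + rng.normal(0, 0.3))
        rows.append((interviewer, day, int(high), post, ncpi))
df = pd.DataFrame(rows, columns=["interviewer", "day", "high_group",
                                 "post_day", "ncpi"])

model = smf.mixedlm("ncpi ~ day + I(day**2) + high_group + post_day + I(post_day**2)",
                    data=df, groups=df["interviewer"],
                    re_formula="~day")             # random intercept and slope
print(model.fit(reml=False).summary())             # ML so nested models compare
```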
Model 0 includes only the linear effect of time. It shows that there was roughly as much variance in performance between interviewers (random variance of the intercept = .09368) as within interviewers between days (.10287). It also shows that the base level of performance improved on average by .0099 per day and that this effect varied between interviewers (the random effect of time, .00001, being significant). Model 1 adds the quadratic effect of time. This effect is also significant but does not vary between interviewers. Since the between-interviewer variance increases to .09818 with this model, it is used as the base model against which the following models are compared. The unconditional growth model therefore includes both a linear and a quadratic effect of time.
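In conventional multilevel notation (our rendering, not necessarily the authors’), Model 1 for day $t$ within interviewer $i$ can be written:

$$\mathrm{NCPI}_{ti} = \pi_{0i} + \pi_{1i}\,t + \gamma_{20}\,t^{2} + \varepsilon_{ti}, \qquad \pi_{0i} = \gamma_{00} + u_{0i}, \quad \pi_{1i} = \gamma_{10} + u_{1i},$$

with the quadratic term entered as fixed only, consistent with the finding that it does not vary between interviewers.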
[8] Interaction effects of group membership on both the linear and quadratic effects of time were tested (...)
In Model 2, membership in the trajectories used for selection is entered in order to control for pre-training initial status. The between-interviewer variance decreases from .09818 to .01996, meaning that the categorization accounts for 79.7 percent of the variance between interviewers. The intercept (i.e., the starting performance) of the high performers is .446 higher, on average, than that of the low performers [8].
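The 79.7 percent figure is the usual proportional reduction in level-2 variance:

$$\frac{0.09818 - 0.01996}{0.09818} \approx .797$$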
[9] This variable has a value of zero for all interviewers before training and increments from 1 to 45 (...)
[10] Therefore, the equations for the final model are: (...)
The following models test the effect of training. This effect can be modeled in different ways (see the recommendations of Singer and Willett, 2003, chap. 6). Training may have an effect on the intercept -- a jump in performance following training -- or on the post-training evolution of performance. Multiple tests led us to the final model presented in this paper. In Model 3, the linear effect of time after training is entered [9]. Model 3 shows that the net improvement in performance after training is .00789 per day (p=.011). However, Model 4 tells a slightly different story, since the quadratic effect of time after training is significant and the linear effect of time is adjusted accordingly. The new model shows a linear effect of .02607 per day after training, to which is added a negative quadratic effect of -.0005, both highly significant (p<.001). This means that performance improves linearly after training but eventually reaches a plateau and decreases at the end of the fieldwork. This plateau may be due to the type of work remaining. Final tests of the significance of the random effects led to the final model, in which only the intercept and the overall linear effect of time have significant random effects [10]; that is, some unexplained variation remains in the average performance of interviewers and in the average effect of time.
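The equations referred to in note 10 can be reconstructed from the description above (our notation, not necessarily the authors’, with $\mathrm{post}_{ti}$ the days-since-training variable of note 9):

$$\mathrm{NCPI}_{ti} = \gamma_{00} + \gamma_{10}\,t + \gamma_{20}\,t^{2} + \gamma_{01}\,\mathrm{high}_{i} + \gamma_{30}\,\mathrm{post}_{ti} + \gamma_{40}\,\mathrm{post}_{ti}^{2} + u_{0i} + u_{1i}\,t + \varepsilon_{ti},$$

where only the intercept ($u_{0i}$) and the linear effect of time ($u_{1i}$) retain random components.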
Figure 3 illustrates the change in the estimated average performance of the three groups -- the high performers, the untrained low performers and newly hired interviewers (control group), and the trained interviewers -- over the field period, excluding the first and last weeks, which were not typical. It illustrates the improvement in performance that occurred after training.
Figure 3: Estimated change in performance from day 8 to day 102 for the three groups
In summary, answers to the post-experiment questionnaire reliably reflected the content of the training. Trained interviewers acknowledged that the training improved their knowledge of the sampling and selection processes and helped them understand the situation they face. They did not feel that the training itself helped them transfer the acquired knowledge into practice, but they were more likely than the high performers to feel that their performance had improved. High performers were more likely to feel self-confident but did not feel more knowledgeable than the trained interviewers.
Since the post-training questionnaire was distributed two weeks after the training, two biases may have occurred. First, the trained interviewers may have discussed the information they received with the other interviewers, so that the training also influenced the untrained. Second, the delay between the training and the distribution of the questionnaire could have introduced non-response bias, since some interviewers, with specific characteristics, left in the interval. However, the delay was deliberate: because trainees tend to rate training very highly immediately afterwards for a number of reasons (they are taken away from work, cared for, stimulated, etc.), it was important to determine what they retained two weeks later.
Employee retention is an issue that should be followed up on in subsequent research. For private pollsters, it may constitute a by-product of training that is almost as important as improved performance, since high turnover is very costly, particularly when time and money have been put into training and when that training is effective.
Finally, the results showed a substantial effect of training on performance. However, the effect was not sufficient to raise trained low performers to the level of the high performers, and it tended to plateau with time. This was a first experiment; improvements to the training could bring about a more substantial impact. It should be stressed, however, that it was possible to measure this effect using the NCPI as an index of performance, but not using cooperation rate at first contact, because the latter measure cannot be calculated when interviewers are working on appointments or refusal conversion.
The experimental one-hour training sessions appear to have had a significant effect on two of the three factors we were examining; i.e., attitudes and performance. However, as with any experiment, one must ask whether a Hawthorne effect was at work, whereby merely giving interviewers attention produces a subsequent improvement in performance. While this could be the case, all the results point in the same direction and, if the experiment did induce a Hawthorne effect, it had the advantage of placing relatively few demands on the organization and of being inexpensive.
It is also obvious that the experiment should be repeated and improved in order to establish its impact more rigorously. Improvements should, in our view, go in the following directions:
- The one-hour format should be retained as much as possible, but the content should be split into two one-hour sessions. This would allow for more interviewer input and questions during training.
- The use of M&Ms has many advantages: it allows for a lively and flexible presentation. However, it could be improved so as to go systematically through all the relevant information related to sampling and within-household selection. Training should also include a clear explanation of how telephone numbers are generated, a question trainees asked frequently.
- The training related to the prevention of refusals should be made more lively and should better encourage the transfer of knowledge into practice, without resorting to a “drill and practice” approach (too time-consuming and not always appreciated).
- Training should include an explanation of where the data go; i.e., into percentages of people engaging in such and such an activity. This also implies that interviewers’ break areas should contain books, research reports, etc., which would give them a better idea of their work’s end product. These types of interventions convey to interviewers the message that they are important, knowledgeable employees.