Methodological Innovations Online > Vol. 2, No. 3 (2008)

Researching Cultural Capital: Complexities in mixing methods

 

Elizabeth B Silva, Open University, UK

David Wright, Open University, UK

 

Abstract

 

This paper reflects on the relationships between methods and meaning-making in social research. It focuses on two core issues: (1) the problems with the generation of data inherent in quantitative and qualitative methods themselves, accentuated and revealed by processes of mixing them, and (2) the implications of the asymmetrical relationship between the research categories used and the lived experience of the investigation. These foci inform an exploration of the processes of the construction of the research object, as implied in the work of Pierre Bourdieu, and of more recent concerns about how research makes sense of the complexity of social worlds within the social sciences. The paper engages with empirical and theoretical aspects of researching cultural capital in contemporary Britain, as part of the Cultural Capital and Social Exclusion project, a large-scale, mixed-method empirical inquiry into the nature of cultural capital in the UK.

 

Keywords: mixed methods, quantitative survey, qualitative interview, cultural capital, complexities.

 

Introduction

 

In fieldwork research it is crucial to pay attention to the connections between researcher and research participants and to recognise their respective social locations, both to ensure the quality of the material generated and to underpin the interpretations made by the investigation. Projects involving qualitative methods in particular have been increasingly required to reveal and reflect on relations between interviewers and interviewed, as well as to account for the situatedness of talk, as part of the assessment of reliability and validity. Quantitative research, based on survey questionnaires, does not raise similar preoccupations: there is less concern with the moment of research, with the revelation of differences in social position between interviewer and respondent, or with the implications of their personal connections in the process of data collection. In quantitative methods social baggage is generally not perceived to stick to the information gathered, and this is seldom considered in the research analysis, in contrast to apparently more subjective accounts like those emerging from qualitative interviews and focus groups. Despite some radical critiques that have drawn on the social constructivist approach to knowledge creation and pointed to strong similarities between qualitative and quantitative methods in these respects (Irvine et al., 1979; Oakley, 2002; Parr and Silva, 2005; Savage and Burrows, 2007), the similarities between them as different technologies for the production of knowledge feature less prominently in methodological discussions within the social sciences than their presumed differences. Recent work, notably by John Law (2004), drawing on theoretical narratives which emerge from science and technology studies (Latour and Woolgar, 1986, among others), emphasises the extent to which the methods of social science act in and on the social world. On the one hand, they capture and render knowable aspects of a complex world.
On the other hand, the relatively limited battery of methods available to the social researcher, and the respective histories of the quantitative and qualitative camps, which still dominate the ways social scientists are trained and the ways in which research projects are imagined, act as 'blinkers' (Law, 2004: 143) in shaping our approach to the problems of capturing the ephemeral and messy nature of contemporary reality.

 

If one shares the view that social science methods help to make the social world (Osborne and Rose, 1999; Law and Urry, 2004), it is relevant to explore how far different methods, and the different practical and operational complexities inherent in different methodological approaches, may shape meaning-making in the processes of fieldwork and interpretation. Research on cultural capital, which has at its core an engagement with issues of the social classification of things and people in relation to processes of the creation or maintenance of social hierarchies, appears particularly fruitful for a reflection on these issues, and more broadly on how research methods enact and enable social realities: how they 'tidy up' the mess of the social world.

 

This paper aims to contribute to a reflection on the increased use of multiple, mixed methods and discusses the processes of researching cultural capital in Britain in a major empirical project, Cultural Capital and Social Exclusion (CCSE).[1] In the first section we briefly outline the data generated, the rationales for our methodological approaches in combining quantitative and qualitative data, and how the relations between them are envisioned, both in our project and in the very notion of mixing methods. In the second section we explore and outline errors and mistakes made in the processes of data collection, coding and analysis in our project and examine their possible effects. Practical errors often put question marks over the value of data. In survey analysis these mistakes can be statistically accounted for through inferential statistics and various methods designed to deal with errors, whether random or systematic, such as structural equation modelling. But, in generating complementary qualitative samples, we argue, these errors can have important effects. The errors are thus significant in the context of the multiple-method approach. In the third section our concern is with research categories and lived experience, in both quantitative and qualitative approaches. Here we reflect on how categories shape, as well as capture, the actual practices of individuals. Measuring preferences, and degrees of preference, for various types of activity is a necessary simplification in an attempt to capture the world of cultural participation. No list of potential activities is long enough to encompass the full breadth of activities and preferences. The boundaries of fields and assumed relations between activities, alongside the practical need to build research instruments within the constraints (financial, temporal and geographical) of a large-scale empirical project, necessarily take precedence over the experience of life in processes of simplification and classification.
These are present in both quantitative and qualitative methods. We conclude the paper with a reflection on our research concerns and their bearing upon the choices we made in the research process, and on the roles of social inquiry in making, and in describing, social realities.

 

Using quantitative and qualitative data together

 

Our concern with methods in this paper derives from our engagement with Pierre Bourdieu's work, and in particular from Distinction, his major empirical investigation carried out in the late 1960s in France (Bourdieu, 1984). This study, which might be characterised as a mixed methods enquiry avant la lettre, used a combination of a major survey alongside qualitative interviews and various sorts of textual analyses to identify and interrogate patterns of distribution of taste in terms of economic, social and cultural capital. The CCSE study critically engages with Bourdieu's work, interrogating the utility of cultural capital as a relevant concept for understanding patterns of social positions in the context of contemporary Britain. Part of this critical engagement is in the area of method, and in particular the relations between methods. Our critical appraisal of Bourdieu's work implied a change in the use of the original concept of capitals. We differentiated between taste, knowledge and participation as components of cultural capital (cf. Bennett et al., 1999) and we tailored classifications of genres and fields to the current British context, noting in particular, for example, the complex genre make-ups of the contemporary musical and literary fields, as well as giving more prominence to television, an activity almost wholly absent from Bourdieu's account of the France of the 1960s.[2] Most importantly, we gave particular prominence to gender and ethnicity, which were similarly absent from Bourdieu's original preoccupations.
Chiefly this resulted in the design of a multiple-methods project attributing a pivotal initial role to focus group discussions in the clarification of less visible aspects of British cultural life, followed by a wide-ranging survey asking about activities across seven cultural subfields (music, reading, visual art, television and film, eating out, sport and leisure), as well as household interviews with participants in the survey phase and, where appropriate, their partners.

Whilst the CCSE study makes use of qualitative interview techniques alongside a large-scale national random sample survey, it has not been the project's aim to privilege qualitative approaches, nor is this our concern here. We do not intend to consider the interview as the authentic voice of research participants. Paul Atkinson and David Silverman (1997) make relevant points about the mistaken authenticity of the qualitative interview within a wider cultural preoccupation with the interview and personal revelation as a technology of biographical construction (p. 306). As they argue, personal narratives expressed in interviews are not any more authentic than any other socially organised set of practices. The interview has a particular history as a technology of data collection and one which, as Ann Oakley (2002: 24) remarks, seeks the advantages of 'connected' as distinct from 'separated' knowing, grounding knowledge in concrete social contexts and experiences. With different concerns, Mike Savage and Roger Burrows (2007) argue that the rise of the interview is coterminous with prevailing narratives of individualisation in Western societies. Regardless of attachments to the superiority of quantitative or qualitative approaches, both are, in Law's (2004) terms, 'inscription devices' and, as such, are involved in making up, as well as reflecting, social realities. Law draws on Bruno Latour and Steve Woolgar's (1986) assertion that the scientific phenomena constructed in the processes of laboratory experimentation do not exist without the processes themselves. The qualitative interview itself is a particular set of practices and, in the case of the CCSE investigation, it depended on the a priori construction, isolation and categorisation of participants for the qualitative phase of the study through their responses to a mechanically gathered quantitative survey.
This involved processes of isolation, detection and categorisation, the implications of which will be reflected upon below.

 

Similarly, what we asked and how we asked it involved a particular orientation to our participants and assumed some shared languages and orientations (Silva and Wright, 2005). Whilst the careful, and sensitive, phrasing of questions is central to the design of surveys, accounts of what questions are asked are oddly absent from methodological narratives of qualitative research processes, being replaced by accounts of power relationships and processes of access negotiation, for example. The quote from an interview, rather than patterns emerging from relationships between variables, is conceptualised as a 'natural' or 'true' account by the participant 'telling it like it is', without adequate accounts of the construction of what participants are being asked to respond to in making their qualitative narratives. Wendy Hollway and Tony Jefferson (2000), in their analysis of the implications of the involvement of different subjectivities in processes of knowing, propose the notion of the 'defended subject', the crucial elements of which are that participants

 

may not hear the question through the same meaning frame as that of the interviewer; are invested in particular positions in discourses to protect vulnerable aspects of the self; may not know why they experience or feel things in the way that they do; are motivated to disguise the meaning of at least some of their feelings or actions. (Hollway and Jefferson, 2000: 26)

 

These defences equally apply to the interviewer, who may not hear the answer because of particular feelings about the subject matter, for example. Such issues might apply most strongly when interviewing across genders, classes and ethnicities, which requires explicit recognition of one's own class, race and gender positions in making the interaction of the interview (Seiter, 1990) and, in reflecting on a mixed-methods enquiry such as CCSE, should apply to both quantitative and qualitative research moments.

 

Mistakes in the data: classifying, understanding, hearing, reading

 

Data entry or coding errors may occur at any stage of data gathering. These can be trivial: a mistaken selection by the researcher from the menu of available answers provided by survey forms, or a mis-click on a laptop, for example. But they can have serious consequences, as errors in coding, or in the interpretation of coding, can lead to the mis-direction of the resources of the investigation as well as the misrecognition or mis-categorisation of participants. This section begins with examples of mistakes of classification, which became apparent through the interaction between the survey and qualitative interviews, before exploring examples of the different lenses through which researchers and those researched experienced the categories of our study, firstly in terms of misunderstanding of categories, and secondly in terms of mis-hearing of categories.

 

Focusing a methodological discussion around the question of mistakes is something of a professional risk. The revelation of the processes by which knowledge is created is an area open to judgement and academics are deeply aware of the possibility and implications of being wrongly understood (Bourdieu, 1988). It also implies a recognition that things can be done better. Our intention is to reflect upon how errors affect meanings in the process of fieldwork and analysis, and we use our experience to reveal interesting tensions to be negotiated in research processes. Whilst tensions between quantitative and qualitative data are a consistent and enduring feature of methodological discussions, our aim is to reveal tensions between the aims of research and what Law (2004) terms the 'hinterland' of the research process. This refers to the practical and pragmatic decisions which transform research questions into on-the-ground interviews and lead, alongside or ahead of theoretical concerns, to the generation of forms of knowledge about the social world.

 

Classification problems between quantitative and qualitative approaches

 

The relations between quantitative and qualitative methods in our investigation comprised three distinct phases of data generation. Firstly, 25 focus groups, formed around various socio-economic variables and ethnic and sexual identities, were organised to gather information about items of cultural taste and participation for inclusion in the survey, as well as to explore engagements with a series of issues of cultural life experiences. Participants for these groups were selected drawing on a variety of approaches, including snowballing from personal or organisational contacts and public places. Data analysis was assisted by NVivo (Silva and Wright, 2005). Secondly, an extensive survey questionnaire was applied to a nationwide (England, Wales, Scotland and Northern Ireland) representative sample of adults (aged 18+) resident in Britain. A total of 1781 respondents were recruited, made up of a main UK sample of 1564 supplemented by an ethnic 'boost' sample of 227 from the three main ethnic minority groups (Indian, Pakistani and Afro-Caribbean). In total 191 interviewers were employed, all briefed by members of the research team in conjunction with staff from the agency leading the survey fieldwork. The questions were grouped under 29 different headings of a 72-page document. Responses were coded directly into the Computer Assisted Personal Interviewing (CAPI) system employed, with a laptop, as part of the interview process (Thomson, 2005). SPSS was the first and basic instrument for analyses of the multiple-choice questions, and various other more sophisticated statistical analyses followed (Bennett and Silva, 2006). The third phase consisted of qualitative interviews and participant observation in households.
The sample was determined on the basis of a theoretical frame which identified individuals according to household type (including sole person households, heterosexual couples, households with children and extended families), ethnicity (white and minority ethnic backgrounds), and cultural capital, determined at this stage by a relatively crude classification of people into high (educational qualifications of degree level or above), medium (A-levels/GNVQ or equivalent) and low (GCSE level or no qualifications). The combination of these amounted to some forty-six potential categories, and informed the selection of participants from the survey sample for the home-based qualitative interviews. Survey respondents had been asked for permission to be re-contacted and this provided a first screening. Some participants who had granted permission to be re-contacted did not provide a telephone number, making their involvement impractical. Our choice of sample had additionally to take account of the geographical location of the team of nine researchers interviewing for this phase of work, who were spread around the country. We interviewed for this phase 28 respondents from the survey, two from the focus groups and, where relevant, their partners, making a total of 44 interviews. The interviews were semi-structured and explored seven main themes related to the quantitative survey. They were tape recorded, debriefed, anonymized, transcribed and coded for use with NVivo (Silva, 2005).

 

The relationship between these three phases of the investigation was particularly close, since we intended to explore in depth the key issues regarding cultural participation, knowledge and taste that emerged in the quantitative survey, penetrating further into areas that were difficult to gather information about. Of particular relevance were issues concerning same-sex households. The identification of these was more complex than we expected. In the design of the survey we had chosen not to include a question which allowed survey participants to identify a sexual identity. From a brief analysis of the frequencies of the responses to the survey we expected that the useful questions for identifying these household types would be the opening 'sex of participant' (with the options of male and female) and 'sex of first/second/third/fourth person in house', together with 'relationship to first/second/third/fourth person', with eight potential relationships identified and ten potential answers, including 'refusal' and 'don't know'. 'Partner' was a response only up to the fourth person identified in the house. This screening was not as fruitful as we anticipated. Initially we identified two same-sex partner households in the main survey sample of 1564 respondents. One proved un-contactable because it had no telephone number available. The other turned out to be a married heterosexual couple in Scotland. A simple error of data entry at the survey stage (the clicking of 'male' instead of 'female') by the survey interviewer produced a quite different picture of the respondent. During the process of re-contact for the qualitative phase and access negotiation this discrepancy was not revealed. Only when the researcher arrived at the house of the participant to carry out the qualitative interview was the mistake discovered. This clearly necessitated a re-ordering of the quantitative sample.
This interview was used to check and correct the quantitative data, in the same way that other errors identified qualitatively were fed into corrections, making the survey results more robust. More important for our concerns in this paper, however, is that this example demonstrates the extent to which neither our survey respondents nor our interview participants are, nor can they be, pictured as necessarily identical individuals, as the answers to the questions that they give us differ, at times, significantly. These answers are always a process of reduction and representation, a means of tidying up the complexity of individual lives and relationships.
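The screening step described above can be sketched as a simple filter over the survey variables. This is an illustrative reconstruction only: the column names, codes and toy data below are hypothetical, not the actual CAPI variable names used in the CCSE survey.

```python
import pandas as pd

# Hypothetical survey extract; in case 101 a single mis-click on
# 'sex_person1' would create or remove the same-sex flag, which is
# exactly the data-entry error described in the text.
survey = pd.DataFrame({
    "case_id": [101, 102, 103],
    "sex_respondent": ["F", "M", "F"],
    "sex_person1": ["F", "F", "M"],
    "rel_person1": ["partner", "partner", "partner"],
})

def same_sex_partner_households(df, n_persons=1):
    """Flag cases where any co-resident coded 'partner' shares the respondent's sex."""
    flags = pd.Series(False, index=df.index)
    for i in range(1, n_persons + 1):
        is_partner = df[f"rel_person{i}"] == "partner"
        same_sex = df[f"sex_person{i}"] == df["sex_respondent"]
        flags |= is_partner & same_sex
    return df.loc[flags, "case_id"]

print(same_sex_partner_households(survey).tolist())  # -> [101]
```

The point the sketch makes concrete is how thin the evidential base of such a screen is: the whole classification of a household hangs on two single-character codes entered in the survey moment.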

 

As well as providing a check on the quantitative data, the qualitative interviews were expected to effectively re-elaborate upon lives necessarily simplified by the production of survey data. Processes of re-coding are essential to quantitative work, allowing, for example, in the tradition of survey work, meaningful comparisons between data collected in different contexts and with established theoretical categories. For work in the Bourdieusian tradition, forms of class-based classification are clearly central. Survey participants were asked to classify themselves in class terms from a menu of choices. They were also asked to identify their occupation according to a list presented on a card, which was subsequently re-coded into the more standard NS-SEC classifications. Each step of this process, all of which are necessary, further removes the data from the immediacy of the lived experience of the participant. Whilst the probability of errors can be accounted for in the management and analysis of statistical data, returning to the 'raw' data embodied in the individual interview is more problematic. In this case the qualitative interview, whilst meant to elaborate or re-elaborate, tends instead towards emphasising and marking the distance between a categorically created world and an individual lived experience. For instance, among those categorized in the survey as being in managerial/professional occupations, we found a hospital consultant, a part-time manager of a council-run meals-on-wheels scheme, a policeman, a reconciliations clerk, an assistant manager in a carpet-warehouse and a former part-time post-office counter assistant. This range of diverse jobs identified by the qualitative interviews raises significant potential difficulties in analysing research subjects envisioned and assembled quantitatively, then further explored qualitatively. In quantitative work alone this diversity can be rigorously tested if it emerges as an analytically relevant concern.
In combination with qualitative work, though, in the context of a large-scale research project such as CCSE, the operational necessities of constructing broad analytical categories are combined with some ephemeral errors that occur in the moment of the qualitative interview. It is to these we now turn.
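Before moving on, the occupational re-coding described above can be sketched as a simple lookup that collapses very different jobs into one broad class. The mapping below is purely illustrative: real NS-SEC derivation works from SOC occupation codes plus employment status, not from free-text job titles, and the class assignments here are our hypothetical examples.

```python
# Illustrative flattening only; not the actual NS-SEC derivation rules.
ANALYTIC_CLASS = {
    "hospital consultant": "managerial/professional",
    "meals-on-wheels scheme manager": "managerial/professional",
    "reconciliations clerk": "managerial/professional",
    "carpet-warehouse assistant manager": "managerial/professional",
}

def recode_occupation(job_title):
    """Collapse a reported job title into a broad analytic class."""
    return ANALYTIC_CLASS.get(job_title.lower(), "unclassified")

print(recode_occupation("Hospital Consultant"))  # -> managerial/professional
```

The sketch shows the one-way nature of the step: once a consultant and a warehouse assistant manager share one code, the diversity of their lived occupations cannot be recovered from the quantitative data alone.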

 

Misunderstandings of categories

 

In the CCSE survey questionnaire, based in part on consideration of preferred forms of cultural participation, respondents were presented with lists of genres of music, visual art, film, television channels and programmes, and books, and asked to indicate their favourites in various ways (e.g. selecting the most favoured and least favoured from a menu list, or expressing their degree of preference on a scale of 1 to 7). Whilst respondents were offered a 'don't know' category, there is no scope within the electronic recording of the survey for any lack of certainty about the meaning of particular categories or genres. While the menu in this context acts as a mediating structure that protects against anxieties that might arise from the burdens of wider choices (Korczynski and Ott, 2006), it also limits the field of engagement of research participants and the depth of information available to researchers. For instance, the notion of genre is clearly contested within academic research on, for example, the literary, film or musical fields, but within empirical quantitative inquiries such as ours there is a necessary requirement to assume a coherence between the meanings and interpretations of genre as classified in the survey and as understood by the respondent to the survey. This necessity is informed by practical constraints: the contested complexity of genres does not fit well with the need to produce surveys which last a specific amount of time and impinge upon limited budgets.

 

The qualitative interviews demonstrated, however, that this assumed coherence was not absolute. This can be illustrated by reactions to the genre 'film noir' on the list of films. A complex category, encompassing a narrative, political and visual sensibility, as well as being implicated in altered conceptions of high art and popular culture (see Cook, 1998: 93 for a definition), film noir was ostensibly chosen as a category to tap into forms of specialist knowledge about cinema. Recognition of the genre itself was limited and, as the exchange below suggests, problematic.

 

Interviewer: Again in the survey you said that the type of film you liked the least was Film Noir, I don't know if you remember saying that?

Surbhitra: Which film?

Interviewer: It's Film Noir, it's kind of old black and white movies…

Surbhitra: It must be my husband saying that. But sometimes I do watch, I mean last Sunday, two weeks ago, there was a nice black and white coming. My husband fell asleep and I watched it, I enjoyed it…

Interviewer: So you wouldn't say that was a type of film that you particularly disliked?

Surbhitra: Oh no, no, no, no, if it's a good story line then I will watch it, yes… But I'm not really keen on this cowboys things, John Wayne, I'm not, no, no. And my husband is. Because he enjoys the wild life and the sceneries, he enjoys that, so he watches that one.

 

Here the interviewee, who had chosen film noir as her least preferred genre on the survey, is unclear precisely what it is she dislikes. A simplified attempt by the interviewer to explicate the genre moves this exchange on. The interviewer's 'it's kind of old black and white movies' replaces any meaningful attempt to define the genre in the kinds of specialist language that would be recognised by film scholars or fans. It is an attempt to bring the interview back to the terms of the interviewee's discourse, though it allows the participant to shift her position and also to reveal the possibility of her survey answer representing the tastes of her husband as much as her own. The category is then mixed in with other old films, such as cowboy/John Wayne films.

 

Other misunderstandings of film noir as a category occur across the qualitative interviews. A warehouse manager in South London defines the category as 'foreign', while a retail worker from Leicester suggests, 'that's a bit posh, why don't you just say black, or can't you say that?', implying and rejecting a degree of political correctness in the choice of the label. These kinds of misinterpretation appear at first frustrating to the interviewer and the research analyst, an indication of a lack of communication, but they are also themselves indicative of differing levels of cultural capital which would not be grasped from the survey analysis alone. Whilst a low frequency of liking or disliking can be interpreted in particular ways on the basis of survey data, the quality or make-up of this liking/disliking is revealed more readily in the qualitative analysis, and this kind of misinterpretation is an important element of this. These kinds of error, or differences in understanding, are less easily controlled for in quantitative analysis, as are those made through the physical interactions between individuals in the interview moment and between researchers and their mediators, or research instruments.

 

Incorrect hearing or reading of choices

 

The interactions between the survey respondent and the survey questionnaire, the survey interviewer and the responses as coded data, and the person of the qualitative interviewer and that of the qualitative interview participant are each liable to forms of error. Such mistakes are an inevitable part of the research process, but they provide interesting material for reflection on the different pictures of subjects that emerge from interviews and from the survey analysis. Consider another apparently trivial exchange, from the same interview referred to above.

 

Interviewer: OK, if we move on to, we have some questions about places to eat and we asked where you particularly liked to eat out and I don't know if you remember but you said you particularly liked going to Italian restaurants.

Surbhitra: Did I say that! Or Indian…

Interviewer: It was Italian, is that not true? I mean this is one of the things that we are here to find out, where would you say?

Surbhitra: Italian, no.

Interviewer: You don't like Italian food?

Surbhitra: I haven't been to Italian restaurant, no. We don't eat out a lot, we don't. But if we do eat out, we do go out it will be only Indian restaurant. I mean the last time we ate out was I think two years ago, so we don't… I think my husband is not very keen on eating out because he is a supervisor at the neighbourhood, that's where he works, that's where meals are made, he supervises it.

 

Such an error can be put down to mis-pronunciation, mis-interpretation or mis-entry in the physical interaction between the survey interviewer and the respondent, and between the interviewer and the laptop used for recording the answers to the survey questions. What such errors reveal, though, is the extent to which the reality of a respondent's life is not determined by her choices among the survey alternatives, and that the accumulation of these answers is as much a recording of the moment of the research, the fragile moment of the delivery of the survey, as an insight into particular categories of experience.

 

Similar errors occur in the reading of the data on the part of interviewers. The qualitative interview schedule was based upon questions designed to allow participants to expand upon their answers, in terms both of broader social position (their feelings about specific issues concerning their work and home situation, for example) and of specific questions about their tastes. Given the symbolic violence that surrounds and constrains expressions of taste, as conceptualised by Bourdieu, these have the potential to be sensitive issues. However, there were occasions where the misreading, by the qualitative interviewer, of the SPSS file on which the survey respondent's data were held resulted in a person's answers being presented incorrectly. Again, this might be easily rectified, though it becomes an element of the interview process. In an interview with a young father, an electrician from Oxfordshire, the interviewer identifies heavy metal, electronic and urban as the kinds of music the participant prefers. In fact his answer to the survey had given rock music as the preferred category, and the interviewee corrects the interviewer.

 

Interviewer: You mentioned that you like electronic music, I think was your favourite kind. Electronic, heavy metal, urban, these are the kind of things, does that seem right?

Joe: No, definitely not heavy metal, I don't like heavy metal at all.

Interviewer: I probably just read the form wrong, but electronic?

Joe: Not really, sort of rocky music…

Interviewer: Rock music?

Joe: Yeah, I quite like a bit of that, and a bit of pop music, chart music is OK, it's mainly like 80s music now really I suppose is my sort of age era.

 

Similarly, in an interview with a professional from the heritage industry the interviewer cites who-dunnits and religious books as the participant's two favourites, though the survey has who-dunnits and self-help books scoring 1 and religious books scoring 6 on a scale of 1 to 7, where 1 means 'like very much' and 7 'not at all'.

 

Interviewer: And sort of books that you said you liked reading were whodunit and religious books.

Cherie: No, not religious books.

Interviewer: I don't know where that came from, that's what it said on the [sheet]. Let's go back.

Cherie: I think that's a mistake, I don't think I said that.

 

Confused as to the direction of the scale in reading the survey data, the interviewer seems to hedge her bets in beginning the exchange, picking two categories from opposite ends of the scale as favourites. Here Cherie, an educated professional woman, has, like Joe above, the confidence in her own experiences and her memory of the survey process to clarify her position, re-asserting a kind of ownership over her survey answers and over the way she is represented in the data that reflects them.

 

A contrasting exchange emerges from an interview with Rita, a secondary-school teacher from Scotland. The interviewer, reading her preferences from the survey data, prompts her to explore her tastes in reading.

 

Interviewer: You also went on to specify self-help books as being not hugely stimulating.

Rita: No, I suppose it's not, I don't suppose I've ever really sort of felt the need for anything like that so, you know, I've never really kind of, you know, I know a lot of people do read them and do find them helpful and whatever but I've never really I suppose felt, I think you know, you probably need to be in a particular set of circumstances maybe to be interested in reading something like that.

 

Of interest here is that either the interviewer mis-read the survey (which actually scores self-help books as Rita's favourite genre, with a score of 1, 'I like it a lot') or the initial survey input was incorrect and has been fortuitously corrected by the interviewer's mistake. A further interpretation might be that, in the moment of the interview, the participant, for whom the experience of the survey and its topics were perhaps less pressing, is prepared to accept that she expressed the opinion that she did not like self-help books, for the reasons she gives, given that this is what she is told by the keeper of her data. Given Rita's response, though, in a mixed-methods inquiry her qualitative interview data is much less useful for analysts attempting to know why self-help books are popular or not, or to establish the social characteristics of their readers.
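The direction of such a scale is easily inverted when survey records are carried over into an interview schedule. Purely as an illustration (the genre names and scores below are hypothetical and do not come from the CCSE dataset, and this is not the project's actual procedure), a small script shows how making the scale's direction explicit in code, rather than relying on the reader's memory of which end means "like", can guard against exactly this kind of mis-reading:

```python
# Illustrative sketch only: a hypothetical survey record mapping genres to
# scores on a 1-7 scale where 1 = "like very much" and 7 = "not at all".
survey_record = {
    "whodunnits": 1,
    "self-help books": 1,
    "religious books": 6,
}

def favourites(record, threshold=2):
    """Genres the respondent liked: on this scale, LOW scores mean liking."""
    return [genre for genre, score in record.items() if score <= threshold]

def dislikes(record, threshold=6):
    """Genres the respondent disliked: HIGH scores mean dislike."""
    return [genre for genre, score in record.items() if score >= threshold]

print(favourites(survey_record))  # ['whodunnits', 'self-help books']
print(dislikes(survey_record))    # ['religious books']
```

Encoding the scale's direction once, in a named function, means an interviewer preparing prompts reads "favourites" and "dislikes" rather than raw numbers whose meaning can flip between datasets.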

 

Regardless of the particular design of the survey, the likelihood of this kind of practical error, even if only occasional, puts important question marks over the coherence of data generated by this process. Whilst such errors can be tested and compensated for in analyses of the survey itself, and statistical measures exist to correct them (and in our case have been applied to the quantitative sample), in the consideration of the qualitative data they make clear that survey participants are not precisely what they appear to be from the operation of the survey alone. Such exchanges might result from mis-reading, mis-interpretation or mis-speaking in the moment of the survey interview, and these again remind us of the importance of communication and the fragility of the processes of data gathering, recording and analysis (Lee, 2004). More importantly, they emphasise the extent to which the ownership of the answers is a contested terrain. The survey data, embodied in the qualitative interviewer's interpretation of them, are not the authoritative, definitive version of these research subjects. A more correct, or complete, version of the data emerges in the interaction between the respondent and the survey, the qualitative interviewer and the survey data, and the qualitative interviewer and the participant, and can only, therefore, ever be partial.

 

Research categories and lived experience

 

The final aspect we propose to discuss in this paper concerns the fit between the reduction of the survey into categories of participation in culture and the lived experience of cultural participation in the everyday lives of the survey respondents. Our survey recorded and measured preferences and degrees of preference for various types of activity. In practice, categories were imposed because answering the questions forced participants into preferences, or degrees of preference, they might not necessarily have held. This is illustrated in an interview with a full-time bar manager and part-time sports coach from the East Midlands. In this case the participant repeatedly suggests that he neither likes nor dislikes the things the questionnaire had given him to choose between, but he resigns himself to a choice because 'you have to have an answer'.

 

Elleray: Soap operas yeah, I don't dislike them, they're just there and I don't agree to me.

Interviewer: Again on the survey we asked you about types of film you didn't like and you said you didn't not like any of them!

Elleray: Yeah, see that's the same thing. I said to the chap that did it, when you don't watch TV, not TV-controlled, it's hard to say you don't dislike something because it's just not in you to - it's like with the soap operas, you have to answer, you have to have an answer and that's why the next question's the same. It's because you're not geared around TV. It's hard to say you dislike something, you don't sit and watch it and say oh that's rubbish or oh that's good or that's rubbish. It's off!

 

This kind of exchange represents a reflexive critique of the survey process itself. The interviewee recognises the extent to which he is forced to take a position by the structure and design of the question. Whilst the survey question contained an option for 'none of these' as a preferred choice of film genre, this does not necessarily account for the kind of indifference being exhibited here in the qualitative interview setting. 'By asking me these questions', the participant seems to be suggesting, 'you are generating a picture of me as either an enthusiast or a refuser of various cultural forms. In fact these things are not part of my life in a way that corresponds to the importance you appear to attach to them.'

 

Similarly, in an interview with a creative writing tutor from Scotland, the participant's survey choice of wrestling as her least favourite sport is interpreted by the interviewer as her having a particular dislike of wrestling. In fact she has no such dislike but interprets her response as a result of the suggestion of the list of categories offered: she 'would never have picked it out of [her] own head'.

 

Interviewer: Still on the theme of sport, you mentioned that you had a particular dislike of wrestling.

Jenny: Did I?

Interviewer: I'm not sure how it would have come up.

Jenny: I wouldn't have singled it out.

Interviewer: That must just be something you've seen on the telly or something you would turn off rather than turn on.

Jenny: Yeah.

Interviewer: But it's not significant.

Jenny: I don't have a phobia, a wrestling phobia, but it is pretty brutal... It must have been the way the question was worded.

Interviewer: Maybe it was but...

Jenny: Because I would never have picked it out I don't think, out of my own head, it must have been on the list, it must have been suggested in a way.

 

These exchanges are the result of a simplification, necessary for research purposes, of the world of cultural participation into lists of genres, activities, and so on. No such list could ever really be long enough to capture the breadth of activities and preferences. But this simplification, whilst necessary, has important effects.

 

A further illustration of the limits of this is in an interview with the manager of a steel moulding business in the West Midlands. Reading the survey data, guided by the demands of the interview schedule, the impression one would get of this participant is markedly different from the impression gained from the qualitative interview. In the survey the respondent appears to dislike all types of books apart from religious books, dislikes all television programmes except news and current affairs, and dislikes all musical forms and all genres of film. Combined with the difficulty of arranging this interview (recorded in notes of participant observation) and the refusal of the respondent to be interviewed, as requested for the study, in his home, this generates a picture of an individual with no real interest in engaging with the investigation, grudgingly accepting involvement on his own terms to, perhaps, get rid of a persistent researcher. One can well imagine approaching this contact with some trepidation. In fact the qualitative interview reveals an engaged and engaging participant, generous with his opinions. Moreover, despite the lack of positive preferences expressed in his survey data, he is extensively engaged in cultural life, particularly in the literary field, as a published, though amateur, writer in his spare time and a former member of a semi-professional bhangra band. Vasudhev describes his writing activities, which have taken him to conferences around the world, and their relation to his professional work:

 

Vasudhev: No, no, no, business is nothing. I have no satisfaction in business, it's the writing, just one good reader or one good appreciation that gives you a lot of pleasure than the whole of this life that we have spent.

 

This lack of fit, between the vision of the individual profile that emerges from the answers inscribed in the survey and the one that emerges from the qualitative interview, might be explained by the fact that this participant reads (one might even say studies) and clearly likes books but simply does not like any of the genres we asked about, or at least is not prepared to fit his experiences of reading into the boxes which we have given him. He describes, for example, reading biographies of Nehru and histories of British imperial policies. Without the qualitative interview, if the survey answers alone were used as a guide, the picture of this subject would clearly be very different, and his contribution might be lost. In fact this might have quite serious effects in terms of reductive conclusions in relation to ethnicity and cultural disengagement (see Hage, 2000).

 

Conclusion

 

Our aim has been to reflect on the tensions negotiated in the research process between the inevitable practical errors that might appear in any investigation and the effects of what may be missed out in the application of a particular method. We have also been concerned with the general issues of assessing research validity and reliability, which we have addressed via our description of the practical and operational complexities of the empirical investigation, situating our choices and the specific related data.

 

The research processes for both the quantitative and qualitative approaches involved the performance of processes of simplification, or complication. In the initial processes of communication and gathering of information we reduced the social world into measurable categorical, operational variables compatible with the formulation of the survey questionnaire. We had to limit the choices and force respondents to choose. We were concerned with placing individuals in subfields of culture, in genres, and the practical determinations of these were based upon registers of already classified hierarchies of cultural legitimacy. Adopting a logic of symbolic classification, we deployed categories that unavoidably may have erased some possible variations between cultural subfields. In completing the questionnaires, we reduced survey respondents to a series of clicks on a laptop mouse. Following the coding and processing of the information by means of the SPSS package, we read them as particular kinds of subjects according to the conglomeration of the original clicks on the computer. Some of these were, unavoidably, misplaced, whether because of error or misinterpretation by the survey interviewer or the respondent. Our awareness of these limits of the survey method, and our desire to deepen our understanding of the workings of cultural capital in Britain, made it imperative to combine our quantitative approach with a purposeful, rigorous application of focus groups, semi-structured interviews and participant observation methods. When doing a household interview, we had prior knowledge of the participants from the original survey data. We coloured in the survey picture through a more extended physical interaction and a schedule of participant observation. We took this talk, semi-structured, audio-recorded, annotated and transcribed, and again reduced it through similar categorical analyses of the transcriptions. Such are the practical requirements of a large-scale empirical project. But, as the examples of mis-reading and misinterpretation by interviewers outlined above demonstrate, none of these methods is devoid of limitations. We argue that the kinds of errors we reflect on in this paper, whilst inevitable, are also instructive and constitutive of our objects of research, hovering between the reality of the lives of research subjects and their representation as data.

 

In a recent article, John Law and John Urry (2004) discuss the thesis that social inquiry is creative, helping to make social realities. In this context, they argue that the differences between research findings produced by different methods or in different research traditions have an alternative significance: 'No longer different perspectives on a single reality, they become, instead, the enactment of different realities' (p. 397). This statement is important in reflecting on the different pictures of the subject achieved through our different methods. Thomas Osborne and Nikolas Rose (1999) stress the role social sciences play in making the world up, using routinised technologies. They point out that when asked questions by pollsters and others, people have to know what to do; they need 'a sort of political education in the expression of opinions; people need to know how to create the phenomenon called opinion' (p. 387). Without the knowledge of how to engage, which we have elsewhere called knowing the rules of the method (Silva and Wright, 2005), the many hours spent crafting and designing surveys and questions in interview schedules are routinely lost in the moments of research because of the different frames of knowledge through which researchers and researched experience the research moment or interaction, be it quantitative or qualitative. Mistakes in these processes accentuate these different frames, but they also serve to reveal them. By asking about knowledge, taste and forms of participation we, as researchers, need at the very least to be conscious that the ways in which these questions and inquiries are designed and carried out serve to reproduce, and even produce, knowledge, preferences and forms of participation which could either reveal or obfuscate the complexity of the social world in very particular ways.

 

Notes

 



[1] This paper draws on data produced by the research team for the ESRC project Cultural Capital and Social Exclusion: A Critical Investigation (Award no. R000239801). The team comprised Tony Bennett (Principal Applicant), Mike Savage, Elizabeth Silva, Alan Warde (Co-Applicants), David Wright and Modesto Gayo-Cal (Research Fellows). The applicants were jointly responsible for the design of the national survey and the focus groups and household interviews that generated the quantitative and qualitative data for the project. Elizabeth Silva, assisted by David Wright, co-ordinated the analyses of the qualitative data from the focus groups and household interviews. Mike Savage and Alan Warde, assisted by Modesto Gayo-Cal, co-ordinated the analyses of the quantitative data produced by the survey. Tony Bennett was responsible for the overall direction and co-ordination of the project.

 

[2] See Bennett and Silva (2006) for emerging findings from this study.

 

 

 

 

References

Atkinson, P. and Silverman, D. (1997), Kundera's Immortality: The interview society and the invention of the self, Qualitative Inquiry, 3 (3), pp. 304-325.

 

Bennett, T., Emmison, M. and Frow, J. (1999), Accounting for Taste. Oxford: Oxford University Press.

 

Bennett, T. and Silva, E.B. (eds) (2006), Cultures, Tastes and Social Divisions in Contemporary Britain, Special double issue of Cultural Trends, 15 (2/3), pp. 58-59

 

Bourdieu, P. (1984), Distinction: A Social Critique of the Judgement of Taste. London: Routledge.

 

Bourdieu, P. (1988), Homo Academicus. Cambridge: Polity Press.

 

Cook, P. (ed.) (1998), The Cinema Book. London: BFI.

 

Hage, Ghassan, (2000), White Nation. Fantasies of White supremacy in a multicultural society. New York and London: Routledge.

 

Hollway, W. and Jefferson, T. (2000), Doing Qualitative Research Differently. London: Sage.

 

Irvine, J., Miles, I. and Evans, J. (eds) (1979), Demystifying Social Statistics. London: Pluto Press.

 

Korczynski, M and Ott, U. (2006), The Menu in Society: Mediating Structures of Power and Enchanting Myths of Individual Sovereignty, Sociology, 40 (5), pp. 911-928.

 

Latour, B. and Woolgar, S. (1986), Laboratory Life: The Construction of Scientific Facts. (2nd ed.), Princeton: Princeton University Press.

 

Law, J. (2004), After method: Mess in Social Science Research. London: Routledge.

 

Law, J. and Urry, J. (2004), Enacting the social, Economy and Society 33 (3), pp. 390-410

 

Lee, R.M. (2004), Recording technologies and the interview 1920-2000, Sociology, 38(5), pp. 869-889.

 

Lee, R.M. and Fielding, N. (1996), Qualitative Data Analysis: Representations of a Technology: A Comment on Coffey, Sociological Research On-line, 1 (1).

 

Oakley, A. (2002), Experiments in Knowing. Gender and Method in the Social Sciences. Cambridge: Polity Press.

 

Osborne, T. and Rose, N. (1999), Do the social sciences create phenomena?: the example of public opinion research, British Journal of Sociology, 50 (3), pp. 367-396.

 

Parr, J. and Silva, E. B. (2005), Quantitative sociological research in Redman, P, Silva, E.B. and Watson, S. (eds) The Uses of Sociology. Milton Keynes: The Open University.

 

Payne, G. and Williams, M. (2005), Generalization in qualitative research, Sociology, 39 (2), pp. 295-314.

 

Peterson, R.A. (1992), Understanding Audience Segmentation: From Elite and Mass to Omnivore and Univore, Poetics, 21, pp. 243-258.

 

Savage, M. and Burrows, R. (forthcoming 2007), The coming crisis of empirical sociology, Sociology.

 

Schwarzer, R.A. and Holt, D.B. (1997), Distinction in America: Recovering Bourdieu's theory of tastes from its critics, Poetics, 25 (2), pp. 93-120.

 

Silva, E.B. (2005), Household Study: Technical Report. CCSE document, available at http://www.open.ac.uk/socialsciences/cultural-capital-and-social-exclusion

 

Silva E.B and Wright D. (2005), The judgement of taste and social position in focus group research, Special issue on Focus Group Methodology, Sociologia e Ricerca Sociale, 76-77, pp. 241-253.

 

Seiter, E. (1990), Making distinctions in TV audience research: case study of a troubling interview, Cultural Studies, 4 (1), pp. 61-84.

 

Williams, M. (2003), The problem of representation: realism and operationalism in survey research, Sociological Research Online, 8 (1)

 

 

 






Methodological Innovations Online. ISSN: 1748-0612