
The Analytical Journey: A research perspective on managing data.

Explore the importance of data management in research and how it facilitates the analytical process, enabling long-term use and re-use. Learn about the four stages of the analytical process and discover strategies for planning, generating, organizing, and representing data.



Presentation Transcript


  1. The Analytical Journey: A research perspective on managing data. Professor Bren Neale University of Leeds

  2. Overview • Data management serves two purposes at once: • It facilitates longer-term use/re-use by a wider pool of researchers – not an altruistic process, as this brings benefits to the primary researcher • It facilitates – indeed is an inherent part of – the researcher’s analytical journey, and is vital for good research practice. Analysis is not a discrete stage of a research study but builds and accumulates through the research process. • Four key stages in the analytical process: • Planning for DM • Generating data • Organising data • Representing data.

  3. DM: Facilitating use and re-use • Vital that researchers engage with this process, particularly where research is council funded. • Archiving data for sharing and re-use is not altruistic; it brings benefits to the primary researcher: • A dataset is an important output from a study. Archiving creates a published dataset, which is kite-marked for citation by re-users and eligible for entry into the REF, by which research is evaluated. • Archiving makes a dataset more visible and allows the quality of its evidence to be verified – a fundamental part of the transparency of our research endeavours – and enables a study to be more widely promoted, with increased impact.

  4. DM: facilitating use and re-use • A re-used dataset is evidence of its longer-term utility and value • Re-use may occur down the line, by future generations of researchers, but it may also occur concurrently, in a collaborative process of data sharing • Following Young Fathers study (ESRC): two affiliated researchers: Linzi Ladlow, a doctoral student, is using the dataset to boost her sample of young parents and their housing needs. Dr Anna Tarrant, a Leverhulme fellow, is re-analysing data from the project on fathers and caring practices and linking it to data in a related dataset. • We are not giving data away but enhancing the analytical possibilities of our data – generating fresh insights through new theoretical and substantive lenses. Qualitative researchers rarely manage to fulfil the analytical potential of their data • We can increase the utility of our research by sharing and extending the scope for analysis.

  5. DM: facilitating our own analysis • Data management is central to the analytical process, which has two key dimensions • Intellectual task: an intuitive process of interrogation, dissection and synthesis. An interpretive process, not an objective or perfect science – no tool of the trade will do this for you • Technical task: managing and organising data so that we can systematically search, compare, sift, label and order data for analysis – a crucial process of condensing and re-ordering data to make it manageable. • The vital infrastructure/foundation upon which analysis is built

  6. The technical infrastructure • Drowning in data? • Despite small sample sizes, we create huge datasets when we gather rich, situated data • Each new generation of researchers discovers that • ‘Data mount astronomically and overload the information processing capacity of even a trained mind’ (Van de Ven and Huber 1995: xiii). There is the ever-present danger of • ‘Death by Data Asphyxiation: the slow and inexorable sinking into the swimming pool, that started so clear and inviting, and now has become a clinging mass of maple syrup’ (Pettigrew 1995: 111). • Smith (2003) reports that the project on young people’s citizenship generated over 4,000 pages of interview transcripts • Good data management and organisation is a vital precursor to analysis – there is no one right approach, but it should fit the logic of enquiry and the nature of the data, and create a dataset fit for analytical purposes.

  7. How do we do it? Four dimensions • Planning for Data management and use • Generating and Transcribing Data • Organising Data (file structures) • Representing Data

  8. 1. Planning • Attend to data management as an integral part of a study and plan for it from the outset: • Devise a data management plan that will address our two broad aims: • support our own analysis over time • allow for archiving and preserving a dataset for wider and historical use. • For example, we need to seek permission for archiving at the outset, as part of informed consent. • We need to identify where and how data will be stored to meet both methodological and ethical requirements. • Guidance is available from the UKDA and also from specialist archives such as the Timescapes Archive.

  9. 2. Generating/Transcribing data • Ensure high quality recordings (technical and content) – may involve a review of equipment and training in how to use it. • Format digital recordings to ensure they are future-proofed • Recommended file formats are available on the UK Data Service website: www.data-service.ac.uk/create-manage/format/formats-table. • Transcripts: Devise a template to ensure transcription is carried out consistently and to create a free-flowing text, while preserving the integrity of the original speech.

  10. 3. Organising data: Filing system • Set up a secure, online filing system on a password-protected drive, to store and organise your digital files. • Tag your different files to enable easy identification and retrieval. • Create electronic file names that indicate what kind of file it is (FYFAdamwave2Int.doc or FYFAdamwave4timemap.jpg). • Label and document each file: good practice to attach a front sheet to each data file (and audio tags to recordings) that identifies and contextualises the file and summarises the thematic content (e.g. through key words). • File structure: a crucial step. There are many ways to organise data, but the structure needs to reflect the logic of your enquiry. • For example, in QL research the logic involves working across case, theme and wave of data, and the filing system needs to reflect that, to ensure data can be retrieved and searched by case, theme and wave. • Chronology – case – file description (Saldana 2003) • Case – chronology – file description: better for building a holistic understanding of individual cases over time (Neale 2017).
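A naming convention like FYFAdamwave2Int.doc can be checked and mapped onto a case-first folder structure programmatically. The sketch below is a hypothetical illustration, not part of the original talk: the project code, pseudonym and wave pattern are assumptions about what the convention encodes. It parses a tagged file name and returns a case – chronology storage path (per the Neale 2017 ordering):

```python
import re
from pathlib import PurePosixPath

# Assumed convention: <PROJECT><Case>wave<N><Kind>.<ext>
# e.g. FYFAdamwave2Int.doc -> project FYF, case Adam, wave 2, kind Int.
NAME_PATTERN = re.compile(
    r"(?P<project>[A-Z]+?)"   # project code, e.g. FYF (lazy: stops before case)
    r"(?P<case>[A-Z][a-z]+)"  # case pseudonym, e.g. Adam
    r"wave(?P<wave>\d+)"      # wave number
    r"(?P<kind>[A-Za-z]+)"    # file description, e.g. Int, timemap
    r"\.(?P<ext>\w+)$"        # file extension
)

def file_path(filename: str) -> str:
    """Return a case-first (case/wave/file) storage path for a tagged file."""
    m = NAME_PATTERN.match(filename)
    if m is None:
        raise ValueError(f"File name does not follow the convention: {filename}")
    return str(PurePosixPath(m["case"], f"wave{m['wave']}", filename))

print(file_path("FYFAdamwave2Int.doc"))  # Adam/wave2/FYFAdamwave2Int.doc
```

A Saldana-style chronology-first layout would simply swap the first two path components.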

  11. 4. Representing Data • Fashioning a digital dataset is not simply a technical task. It is a major organisational and interpretive task (McLeod and Thomson 2009: 133) – it is best seen as part of the first stage of the production of your data: you are the author of your dataset, creatively fashioning and crafting it. • Consider producing two digital versions of your dataset: an unabridged version (for historical use and for your own reference) and an anonymised version to meet ethical requirements, which can form the basis for dissemination and sharing in the here and now. Ensure these versions are clearly distinguished and filed separately. • What files will you include in your full dataset? Transcripts, other data files (diaries, time maps), audio files, field notes, contextual files, interview schedules, consent forms, contact details and so on? What will you include in each version of your dataset?

  12. Representing Data: confidentiality • Seeking to maintain confidentiality is a cardinal research principle • Confidentiality can never be fully guaranteed, so we should not promise it when we seek consent for archiving data (see Corti et al 2014) • Anonymising is not the same thing as confidentiality: it is a tool, and a somewhat imperfect tool, to address the principle of confidentiality. • One option is light-touch anonymising, to ensure we don’t strip data of its integrity and meaning, and that we don’t simply airbrush people out of the historical record (Moore 2012, Sociological Review). • Another way is to build in controls on access and re-use via the archive. This keeps data safe – well preserved, protected from loss and misuse – and allows us to set up protocols and mandates for the way data are accessed and re-used, and by whom.

  13. To sum up… • Data management is not simply an administrative process that we tack on to our research • Good data management is vital for the process of conducting high quality research, an integral part of the analytical journey, as well as a process to maximise the historical value and potential of our research data, which ultimately benefits us all.

  14. Sources/references • Bishop, L. and Neale, B. (2012) Data Management for Qualitative Longitudinal Researchers, Timescapes Methods Guide Series no. 17. www.timescapes.leeds.ac.uk/resources • Corti, L. et al. (2014) Managing and Sharing Research Data: A guide to good practice, London, Sage. • Neale, B., Proudfoot, R. et al. (2016) Managing Qualitative Longitudinal Data: A practical guide for longer term data use and deposit in the Timescapes Archive, Leeds, University of Leeds Institutional Repository, May. Available at www.Timescapes_Archive.leeds.ac.uk • Pettigrew, A. (1995) in Van de Ven and Huber (1995). • Van de Ven, A. and Huber, G. (1995) ‘Introduction’, in Huber and Van de Ven (eds) Longitudinal Field Research Methods: Studying processes of organisational change, London, Sage.
