
General good advice on data handling



  1. General good advice on data handling Peter Shaw

  2. Introduction • We have spent the last 11 weeks engaged in picking up some technical details about various aspects of data handling and analysis. • This week I do not intend to introduce any new names or techniques (unless you specifically ask..), just to round off with a few unifying thoughts and snippets of good advice.

  3. Project design • Get it right before you start!! • It is not hard to get a balanced design, though you may well have to make some sacrifices about the number of treatments / sites / replicates etc. • Check your design with staff - that’s what they’re paid for! • It can be impossible to fix a bad design: Rothamsted once had to throw away 50 years of meticulously collected data because the faulty experimental design made the data useless.

  4. Data Collection • Keep a notebook, and write things down as you go along (dating each entry). • This is best done on the spot - by the time you get home you will have forgotten some important details. • Often you have to fall back on Operational Taxonomic Units (OTUs: Sp1, Sp2, small pink thin species, etc). Fine - this is more honest than trying to shoehorn an unfamiliar specimen into a known species. • Make sure that you keep such specimens carefully for ID, and that these IDs are recorded in the relevant lab/field notebook. Take it from me - trying to fathom out how to decode entries like “?blue-brown oddity: 2 specimens” after a year’s absence is playing Russian roulette with your datamatrix!

  5. Once data are written down.. • You need to transcribe them into a computer file. • This procedure is easy to skimp on, as you look forward to the analyses ahead! • GIGO - Garbage In, Garbage Out! If you allow errors to creep in at this stage, all subsequent analyses will be suspect if not downright invalidated. • Entering species data into spreadsheets is particularly tedious due to the predominance of zeroes; a typical row looks like: 2 0 1 0 0 0 0 1 4 0 1 2 0 0 0 5 6 0 1 0 1 0 0 1 0 0 2 0 0 2 0 1 0 0 0 0 2 1 0 0 2 0
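One way to sidestep typing all those zeroes (my sketch, not from the slides; the column names are hypothetical) is to enter only the non-zero counts as (sample, species, count) triplets and let the software fill in the zeroes when building the full matrix:

```python
import pandas as pd

# Enter only the non-zero observations: one row per (sample, species, count).
records = pd.DataFrame(
    [("S1", "sp1", 2), ("S1", "sp3", 1), ("S2", "sp2", 4), ("S3", "sp1", 5)],
    columns=["sample", "species", "count"],
)

# Pivot to the full samples-by-species matrix; absent pairs become 0.
matrix = records.pivot_table(
    index="sample", columns="species", values="count", fill_value=0
)
print(matrix)
```

Besides saving keystrokes, this makes the entry file itself easier to check against the notebook, because every line corresponds to something you actually recorded.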

  6. Metadata • These are data about data: information that sets the actual measurements in context. • Some forms of metadata are essential for analysis and must be held within the datamatrix: date, depth, sample number, time, observer, plot number etc. • Others are immaterial for the analyses but crucial for write-ups and replicability: details of methods used, site location etc. The notebooks that hold these data are essential documents in archives. [Slide diagram: the datamatrix partitioned into blocks - metadata (site, date, plot etc.), raw species data, raw site data (pH, elevation etc.), and log-transformed data etc. - with approximate column counts of 4-10ish, 10-100, 10-100 and 6ish.]

  7. Debugging and verifying • Once data are in, go back and check every entry against the notebook. • I find it helpful to photocopy notebook pages, so I can cross out or highlight entries once validated against the data file. • Even then, don’t believe the data! Use boxplots to check for outliers. What are your units? Often you need to convert raw data into a derived format (densities per unit area, mg per 100 g etc). Don’t change source data but create new variables, and ensure that each variable is unambiguously labelled.
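The "derive new variables, never overwrite the source" rule can be sketched in pandas (a minimal illustration with hypothetical column names and quadrat size, not from the slides):

```python
import pandas as pd

# Raw field data: counts per 0.25 m^2 quadrat (hypothetical example).
raw = pd.DataFrame({
    "plot": ["A", "B", "C"],
    "worms_per_quadrat": [12, 7, 30],
})

# Derive a new, unambiguously named variable; the raw column stays untouched,
# so the conversion can always be re-checked or redone.
QUADRAT_AREA_M2 = 0.25
raw["worm_density_per_m2"] = raw["worms_per_quadrat"] / QUADRAT_AREA_M2

print(raw)
```

Putting the unit in the variable name ("per_m2") is what makes the column unambiguous a year later.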

  8. Outliers 1 • These are datapoints which “clearly” lie outside the range of the rest of the dataset, and show up on boxplots or scattergraphs. • Always eyeball the data, and check outliers. Usually they result from a typing mistake and are easily remedied. • Sometimes they are clearly an error in the notebook - how you sort this out depends on your judgement, experience and intuition. If in doubt ask!
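The "eyeball the boxplot" check can be automated with the same 1.5 × IQR rule that boxplots use to flag points. This is my sketch, not code from the lecture, using a simple linear-interpolation quartile estimate:

```python
# Flag values outside Tukey's 1.5 * IQR whiskers, as a boxplot would.
def boxplot_outliers(values):
    xs = sorted(values)
    n = len(xs)

    def quantile(q):
        # Linear interpolation between the two nearest sorted values.
        pos = q * (n - 1)
        lo = int(pos)
        frac = pos - lo
        return xs[lo] + frac * (xs[min(lo + 1, n - 1)] - xs[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < low or v > high]

data = [4.1, 3.9, 4.5, 4.2, 42.0, 3.8]  # 42.0 looks like a typing slip
print(boxplot_outliers(data))  # flags the suspect value
```

The flagged values still need the human judgement the slides describe; the rule only tells you where to look.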

  9. Outliers 2 • Then you get the awkward sort! The notebook is adamant and the entry looks plausible, but the datapoint looks odd. Now what? • It is legitimate to exclude such points from further analysis, although you should record this fact in your methods section. Be careful, as you may be removing the most interesting observation!

  10. Multivariate techniques.. • Are especially sensitive to outliers: watch what happens when one data point has its decimal point entered one place out.
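A toy illustration of this sensitivity (mine, not the slide's animated example): a single decimal-point slip in one value is enough to wreck a near-perfect correlation, and the same lone point would dominate an ordination axis.

```python
# Pearson correlation, computed by hand to keep the example self-contained.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y_good = [1.1, 2.0, 2.9, 4.2, 5.0]   # strong positive trend
y_bad = [1.1, 2.0, 2.9, 4.2, 50.0]   # last value entered one decimal place out

print(round(pearson(x, y_good), 2))  # close to 1
print(round(pearson(x, y_bad), 2))   # noticeably degraded by one bad entry
```

Because multivariate techniques work from exactly these variance and covariance quantities, one such point can pull a whole axis towards itself.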

  11. Missing data • These are sadly common. You knocked the tube on the floor, you lost the sample… • Don’t put a zero (-1, etc) there! This is tantamount to saying that you actually measured this value. • SPSS has a specific solution to missing data - enter a “.” (a full stop / period). That data point will be excluded from analyses. • Check the options in each technique used to see how missing values are handled. They cause insurmountable difficulties for many analyses, and either the variable or the observation will have to be excluded.
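The pandas equivalent of SPSS's "." convention (my sketch, with made-up pH readings) is to record the lost sample as NaN, which summary routines then exclude; entering a fake zero would silently bias every statistic:

```python
import pandas as pd

# One reading was lost, so it is recorded as missing (NaN), never as 0,
# which would claim we actually measured a real zero.
ph = pd.Series([6.2, float("nan"), 5.8, 6.0])

print(ph.mean())            # NaN is skipped: mean of the three real readings
print(ph.fillna(0).mean())  # what wrongly entering 0 would have produced
```

As the slide says, still check each technique's options: some analyses drop the whole observation rather than just the one missing value.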
