MEarth data processing

This document covers data taken in the 2008-2010 and 2010-2011 seasons with the original D02-housed detectors, and all seasons from 2011 September onwards with the D09-housed detectors. Differences are highlighted as appropriate.

Overview

Data reduction for MEarth follows the general philosophy described in Irwin & Lewis (2001) and Irwin et al. (2007) (see also Irwin 1996 for a more general overview of CCD reductions for wide-field imaging). However, there are a number of instrument-specific problems detailed below that require additional processing steps or modifications to the standard procedure. These are listed in roughly the order they are done by the software.

Other issues

Example calibration curves/frames

All examples are taken from tel05 on the nights of 2011-02-21 and 2011-10-22, where the former date was before the housing change and the latter after it. In cases where the calibration changed significantly after the housing upgrade, both versions are shown (before the change, then after the change; most browsers should display them side by side with before on the left, if the window is wide enough); otherwise only the one from before the housing change is shown. Only the bias, dark, and high-frequency component of the flat are derived per night; all other calibrations are those considered current when that night was reduced.

Non-linearity

The quantity on the vertical axis in this plot is the ratio of the measured sky level to the reference sky level, scaled by the ratio of exposure times. The reference frames were exposed to around 13,000 ADU and interleaved between the target frames to track variations in the illumination of the roof. Counts in this diagram are raw, not de-biased, but were corrected for shutter shading. The curve is a simple polynomial:

actual = raw * (1 + c_1 * raw + c_2 * raw^2 + c_3 * raw^3)

This functional form forces the measured and actual counts to agree at zero. It does not matter exactly where things are normalized, as this is removed by the photometric calibration, and it is factored into the gain measurement by computing the gain from dome flats that have already had the non-linearity correction applied.
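As a concrete illustration, a minimal Python sketch of the correction follows; the coefficient values are placeholders, since the real c_1, c_2, c_3 are fitted per detector from the measured/reference ratios.

    import numpy as np

    # Placeholder coefficients for illustration only; the real c_1..c_3
    # are fitted per detector from the sky-level ratio measurements.
    C1, C2, C3 = 1.0e-7, -2.0e-12, 3.0e-17

    def correct_nonlinearity(raw):
        """Apply actual = raw * (1 + c_1*raw + c_2*raw^2 + c_3*raw^3).
        'raw' is in ADU, shutter-shading corrected but not de-biased."""
        raw = np.asarray(raw, dtype=np.float64)
        return raw * (1.0 + C1 * raw + C2 * raw**2 + C3 * raw**3)

    # By construction the correction vanishes at zero counts:
    assert correct_nonlinearity(0.0) == 0.0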

Please note that saturation on the CCD (the full-well capacity) occurred below 65535 counts on many of the detectors before the housing upgrade, and on all of them afterwards.

Bias

Dark

Shutter shading

(derived from twilight flat sequences; low-pass filtered to remove noise; note the change in the number of shutter blades post-upgrade)
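The form of the correction applied to the data is not spelled out above; the sketch below assumes one common convention, in which the calibration frame stores a per-pixel effective exposure-time offset in seconds.

    def correct_shutter_shading(frame, texp, delta_map):
        """Rescale each pixel of a numpy array 'frame' from its
        effective exposure time (texp + delta_map, where delta_map
        holds the per-pixel shutter-timing offsets in seconds,
        derived from the twilight flat sequences) to the nominal
        exposure time texp. The delta_map convention here is an
        assumption, not a statement about the pipeline."""
        return frame * texp / (texp + delta_map)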

Flat

Raw

After glow removal

(Elliptical Gaussian fit and subtracted, 2008-2010 and 2010-2011 seasons only)

Low frequency structure removed

(2-D median filter with median box=171, linear box=131)

The first of these illustrates a known issue with the glow removal, due to the unfortunately placed dust doughnut near the center of the field.
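A sketch of the filtering step with the parameters quoted above; it assumes the "median box" pass is followed by a "linear" (boxcar) smoothing pass, and that the high-frequency component used per night is the ratio of the flat to its smoothed version.

    from scipy.ndimage import median_filter, uniform_filter

    def split_flat(flat, med_box=171, lin_box=131):
        """Separate a flat (numpy array) into low- and high-frequency
        components using a 2-D median filter followed by a boxcar
        (linear) filter. A direct median filter at this box size is
        slow; production code would use a faster sliding median."""
        low = uniform_filter(median_filter(flat, size=med_box),
                             size=lin_box)
        return low, flat / low  # (low-frequency, high-frequency)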

Illumination map (raw)

Or, "photometric flat"; determined by dithering a star-field around the detector, by random offsets, many times, and doing photometry of all the stars. This image contains a Gaussian scaled proportional to the measured flux ratio wherever there was a measurement.

Illumination map (smoothed)

(2-D median filter with median box=301, linear box=211)

Final

Fringe (North only)

This is also filtered: low-pass to remove noise, and high-pass to remove scattered light. Most of the change below results from adjustments to the filtering parameters, not from changes in the device itself.

Bad pixel masks

Masks were derived by hand. They are not shown here because the majority of the features are too small to show up in a binned map. The bad pixel masks are used by the source detection and all later stages in the processing.

Image example

(t05.obj.20110221.00022.fit, before on the left, after on the right)

Notice particularly two clearly visible defects that are not fixed by the standard processing: (1) the "glow" in the centre of the field, caused by scattered light; and (2) the "pulldown" in rows containing bright stars.

If we had not removed the "glow" from the flat, we would have divided most of it out of the image, so it would look flat, but would not actually be flat (in a photometric sense). The "glow" (scattered light) is thought to be an additive effect, not a multiplicative one (essentially, a fraction of sky is scattered into a Gaussian-like image on-axis).

Below is the same image after sky background removal, as used for source detection and photometry (plotted with detected sources overlaid on the right):

(see Irwin 1985 for details of how this works; the box size for the sky-background following filter was 64 pixels)
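A much-simplified sketch of a grid-based sky estimator of this kind; the real Irwin (1985) algorithm uses an iteratively clipped background estimator and a following filter, for which a plain median per cell stands in here.

    import numpy as np
    from scipy.ndimage import median_filter, zoom

    def sky_background(img, box=64):
        """Estimate the sky in box x box cells, smooth the coarse
        grid to reject cells contaminated by objects, then
        interpolate back to full resolution. Assumes the image
        dimensions are multiples of 'box'."""
        ny, nx = img.shape
        gy, gx = ny // box, nx // box
        grid = np.median(img.reshape(gy, box, gx, box), axis=(1, 3))
        grid = median_filter(grid, size=3)   # reject outlier cells
        return zoom(grid, (ny / gy, nx / gx), order=1)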

The source classification has been used to colour the symbols overlaid on the image, and is used to select potential comparison stars. It essentially works by determining the locus of stellar sources in flux versus the flux ratios between different-sized apertures, and then folding in ellipticities. In this image, red = stellar (PSF-like); blue = non-stellar (i.e. galaxies); yellow = blended (or, more accurately, sources with overlapping isophotes); and green = junk-like (usually "cosmics" or hot pixels; essentially, sources that are too sharp to have been through convolution with the PSF as determined from the stellar images).
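A toy version of this logic is sketched below; the thresholds, the MAD-based scatter estimate, and the use of a single aperture ratio are illustrative, and blend detection via overlapping isophotes is omitted entirely.

    import numpy as np

    def classify(flux_core, flux_total, ellipticity, n_sigma=3.0):
        """Classify sources from the ratio of flux in a small (core)
        aperture to that in a larger one. PSF-like sources cluster
        on a common locus in this ratio; sharper sources are
        junk-like, and more diffuse or elongated ones non-stellar."""
        ratio = np.asarray(flux_core) / np.asarray(flux_total)
        locus = np.median(ratio)                         # stellar locus
        sigma = 1.48 * np.median(np.abs(ratio - locus))  # robust (MAD)
        cls = np.full(ratio.shape, "stellar", dtype=object)
        cls[ratio > locus + n_sigma * sigma] = "junk"    # too sharp
        cls[(ratio < locus - n_sigma * sigma) |
            (np.asarray(ellipticity) > 0.5)] = "nonstellar"
        return cls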

No attempt has been made to fix the "pulldown", due to concerns about making the correction robust against real sources in the same lines, given that there is no overscan region to use for a line-by-line overscan subtraction (or similar). The Southern detectors do not seem to show the "pulldown", so this particular issue only affects Northern data.

Persistence (2008-2011 Northern seasons only)

No corrections are attempted for this effect (it depends on the full illumination history of the pixel, including the field acquisition exposures, which are not saved).

The quantity on the vertical axis of this plot is the fraction of the initial counts (shown at the top) that accumulates in the dark current per second of integration time. This is the integral over a photometric aperture (the measurement was done on stars), so in practice it includes a "flux-weighted" range of illumination levels. The model fit assumes the flux decays by a simple exponential law, exp(-t/τ). This τ value (20 minutes) is fairly typical of our detectors. The level might seem small, but 43 ADU x 60 seconds (a typical exposure) is a quite detectable source above typical sky.
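Under this model, the counts that a persistent image deposits into a later exposure follow from integrating the decaying rate over the exposure window; a small sketch, where the normalization f0 (read off the plot) is treated as an input:

    import numpy as np

    TAU = 20.0 * 60.0  # decay timescale from the fit, in seconds

    def persistence_counts(n0, f0, t_start, t_exp, tau=TAU):
        """ADU accumulated during an exposure of length t_exp
        starting t_start seconds after a bright image deposited n0
        ADU, for a rate n0 * f0 * exp(-t / tau), where f0 is the
        fraction of the initial counts per second. This is the
        analytic integral of the rate over the exposure window."""
        return n0 * f0 * tau * (np.exp(-t_start / tau)
                                - np.exp(-(t_start + t_exp) / tau))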

Note that both the normalization, and timescale, in this plot are thought to be temperature dependent, in the sense that with decreasing device temperature, the normalization decreases, and the timescale increases.

During the 2011 summer monsoon, the detector housings were upgraded to Apogee's "high cooling" D09 housing, which allowed a lower operating temperature (-30C) and also added a preflash feature using IR LEDs, which is used on all data taken since the upgrade. The combination of these has rendered persistence practically no longer a concern for all data taken from 2011 October onwards. It would not be true to say the upgrade removed persistence, of course, because we have really done the opposite: the preflash floods the detector with a big persistent image, but in doing so makes it stable. The persistent image then behaves simply as an elevated dark current and is removed by dark frame subtraction.

Astrometric calibration

Our standard astrometric analysis now uses the UCAC4 catalogue, and thus is tied to ICRS. Previous releases used 2MASS.

There is relatively little of note for the calibration itself. There appears to be almost no radial distortion in the data, as expected given the optical system. The derived values vary from telescope to telescope, but they are all at negligible levels and do not need to be corrected. A standard Gnomonic (RA---TAN, DEC--TAN) projection is used.
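For reference, a minimal sketch of such a solution using astropy follows; the reference pixel, tangent point, and CD-matrix values are placeholders, not a fitted MEarth solution.

    from astropy.wcs import WCS

    # Gnomonic (TAN) projection; all numeric values are placeholders.
    w = WCS(naxis=2)
    w.wcs.ctype = ["RA---TAN", "DEC--TAN"]
    w.wcs.crpix = [1024.5, 1024.5]              # reference pixel (x, y)
    w.wcs.crval = [180.0, 30.0]                 # tangent point (deg)
    w.wcs.cd = [[-2.1e-4, 0.0], [0.0, 2.1e-4]]  # deg/pix (~0.76"/pix)

    # Pixel -> sky (ICRS, via the UCAC4 tie), FITS 1-based convention:
    ra, dec = w.wcs_pix2world([[512.0, 512.0]], 1)[0]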


Document prepared by Jonathan Irwin (jirwin at cfa.harvard.edu).

Substantial contributions and assistance from Mike Irwin, Christopher J. Burke, Philip Nutzman, and the entire MEarth team are gratefully acknowledged.