
Lecture 17




  1. Lecture 17 • Reprise: dirty beam, dirty image. • Sensitivity • Wide-band imaging • Weighting • Uniform vs Natural • Tapering • De Villiers weighting • Briggs-like schemes

  2. Reprise: dirty beam, dirty image. • Fourier inversion of V times the sampling function S gives the dirty image ID: • This is related to the ‘true’ sky image I′ by: • The dirty beam B is the FT of the sampling function: • (One can obtain B by setting all the visibilities to 1 and then Fourier transforming.)
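The equations on this slide take their standard synthesis-imaging form (same notation: S is the sampling function, V the visibility function, I′ the true sky, B the dirty beam, ∗ denotes convolution):

```latex
I_D(l,m) = \iint S(u,v)\, V(u,v)\, e^{2\pi i (ul + vm)}\, \mathrm{d}u\, \mathrm{d}v
I_D      = B \ast I'
B(l,m)   = \iint S(u,v)\, e^{2\pi i (ul + vm)}\, \mathrm{d}u\, \mathrm{d}v
```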

  3. Reprise: l and m • Remember that l = sin θ, where θ is the angle from the phase centre. • For small θ, l ≈ θ (in radians, of course). • m is similar but for the orthogonal direction. • [Diagram: the direction of the source makes an angle θ with the direction of the phase centre; l is the corresponding direction cosine.]

  4. Sensitivity • The image noise standard deviation (for the weak-source case, with natural weighting) takes the standard form given below. • N here is the number of antennas. • Note that Ae is further decreased by correlator effects – for example by 2/π if 1-bit digitization is used. • Actual sensitivity (minimum detectable source flux) is different for different sizes of source. • Because of the absence of baselines shorter than the minimum antenna separation, an interferometer is generally poor at imaging large-scale structure.
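A common way of writing that naturally weighted noise for a homogeneous array is sketched below (prefactor conventions vary between texts; symbols assumed: k_B Boltzmann's constant, T_sys system temperature, A_e effective antenna area, η overall efficiency, Δν bandwidth, τ integration time):

```latex
\sigma_{I_D} \approx \frac{2 k_B T_{\rm sys}}{\eta\, A_e\, \sqrt{N(N-1)\,\Delta\nu\,\tau}}
```

The 2/π one-bit digitization loss mentioned above can be regarded as part of η.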

  5. Wide-band imaging. How can we increase UV coverage? …We could get more baselines if we moved the antennas!

  6. …but it is simpler to change the observing wavelength (e.g. from λ to λ/2).

  7. With many wavelengths… we have many baselines and, effectively, many antennas.

  8. A simulated example. The full visibility function V(u,v) (real part only shown) for a familiar pattern of ‘sources’; red positive, blue negative. (I’ve taken some liberties here: obviously the stars of the Southern Cross are not strong radio sources, and I’ve also rescaled their angular separations.)

  9. ‘Snapshot’ sampling of V is poor. Antenna spacings from KAT-7.

  10. Aperture synthesis via the Earth’s rotation. For this technique to work perfectly, all sources must be constant over time. Antenna spacings from KAT-7. The dirty image ID is the true sky brightness map I convolved with the dirty beam B.
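To make the Earth-rotation idea concrete, here is a minimal Python sketch (my own illustration; the function name and example numbers are invented) of the standard (u, v) track traced by a single baseline as the hour angle changes:

```python
import numpy as np

def uv_track(xyz_lambda, dec_rad, hour_angles_rad):
    """(u, v) locus of one baseline as the Earth rotates.

    xyz_lambda      : baseline vector (X, Y, Z) in the equatorial frame,
                      already expressed in wavelengths.
    dec_rad         : declination of the phase centre, radians.
    hour_angles_rad : array of hour angles to sample, radians.
    Uses the standard synthesis-imaging rotation of the baseline vector.
    """
    X, Y, Z = xyz_lambda
    H = np.asarray(hour_angles_rad)
    u = np.sin(H) * X + np.cos(H) * Y
    v = (-np.sin(dec_rad) * np.cos(H) * X
         + np.sin(dec_rad) * np.sin(H) * Y
         + np.cos(dec_rad) * Z)
    return u, v

# Example: one ~100 m baseline observed at 1.4 GHz over an 8-hour track.
lam = 3.0e8 / 1.4e9                                          # wavelength in metres
baseline_m = np.array([80.0, 60.0, 0.0])                     # (X, Y, Z) in metres (made up)
hour_angles = np.linspace(-4.0, 4.0, 200) * (np.pi / 12.0)   # hours -> radians
u, v = uv_track(baseline_m / lam, np.deg2rad(-30.0), hour_angles)
```

Each baseline traces an arc of an ellipse in the UV plane; many baselines and a long track fill the plane far better than a snapshot.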

  11. Frequency synthesis. For this technique to work perfectly, all sources must not only be constant over time, but must also have the same spectra. Antenna spacings from KAT-7. Bandwidth 5 to 6 GHz. The final image is still not as ‘clean’ as we would like…
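The reason the band helps is simply that u and v are measured in wavelengths, so each channel places the same physical baseline at a radially scaled point in the UV plane. A toy Python sketch (projection geometry ignored; the numbers are illustrative only):

```python
import numpy as np

c = 3.0e8                              # speed of light, m/s
freqs = np.linspace(5e9, 6e9, 200)     # 200 channels across the 5-6 GHz band
b_east, b_north = 1200.0, -800.0       # one projected baseline, metres (made up)

# Every channel samples the same baseline at a different radius in the UV plane,
# spreading a single UV point into a radial track:
u = b_east * freqs / c                 # u in wavelengths
v = b_north * freqs / c                # v in wavelengths
```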

  12. Narrow vs broad-band: UV coverage. Panels: 16 × 1 MHz (Merlin, δ = +35°) and 2000 × 1 MHz (eMerlin, δ = +35°).

  13. Narrow vs broad-band, without noise. Panels: 16 × 1 MHz and 2000 × 1 MHz.

  14. Narrow vs broad-band, with noise. Panels: 16 × 1 MHz and 2000 × 1 MHz. SNR of each visibility = 15%.

  15. Weighting: or how to shape the dirty beam. • Why should we weight the visibilities before transforming to the sky plane? • Because the uneven distribution of samples of V means that the dirty beam has lots of ripples or sidelobes, which can extend a long way out. • These can hide fainter sources. • Even if we can subtract the brighter sources, there are always errors in our knowledge of the dirty beam shape. • If there must be some residual, the smoother and lower it is, the better.

  16. Weighting • There are usually far more short than long baselines. The distribution of baselines also nearly always has a ‘hole’ in the middle. [Histogram: number of samples vs baseline length.]

  17. Weighting • A crude example: one UV grid bin contains a single sample, while another contains 84.

  18. Weighting • What do we get if we leave the visibilities alone? • The resulting dirty beam will be broad (low resolution), because there are so many more visibility samples at small (u,v) than at large (u,v). • BUT, if the uncertainties are the same for every visibility, leaving them unweighted (i.e., all weights Wj,k = 1) gives the lowest noise in the image. • This is called natural weighting. • The easiest other thing to do is to set Wj,k = 1/(the number of visibilities in the (j,k)th grid cell). • This is called uniform weighting. • Then optionally multiply everything by a Gaussian: this is called tapering. • (A sketch of these three options follows below.)
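A minimal Python sketch of natural weighting, uniform weighting and Gaussian tapering (the function, its default cell size and the taper parameter are my own inventions, purely for illustration):

```python
from collections import Counter
import numpy as np

def imaging_weights(u, v, scheme="natural", cell=50.0, taper_sigma=None):
    """Toy per-visibility imaging weights (a sketch, not a production gridder).

    u, v        : arrays of visibility coordinates, in wavelengths.
    scheme      : "natural" -> all weights equal 1 (lowest image noise);
                  "uniform" -> 1 / (number of samples in the same UV grid cell)
                               (narrower beam, higher noise).
    cell        : UV grid cell size in wavelengths (illustrative value).
    taper_sigma : if given, also multiply by a Gaussian taper
                  exp(-(u^2 + v^2) / (2 * taper_sigma^2)).
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    w = np.ones_like(u)

    if scheme == "uniform":
        # Count how many samples fall in each UV grid cell, then down-weight by that count.
        cells = list(zip(np.floor(u / cell).astype(int), np.floor(v / cell).astype(int)))
        counts = Counter(cells)
        w = np.array([1.0 / counts[c] for c in cells])

    if taper_sigma is not None:
        w = w * np.exp(-(u**2 + v**2) / (2.0 * taper_sigma**2))
    return w
```

Applying a taper down-weights the long, sparsely sampled baselines, which lowers the resolution but suppresses the sidelobes they contribute.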

  19. Natural vs uniform: Natural weighting Uniform weighting

  20. The resulting dirty images: Natural weighting Uniform weighting

  21. But if we add in some noise... Natural weighting Uniform weighting SNR of each visibility = 0.7%.

  22. Tradeoff • This sort of tradeoff, between increasing resolution on the one hand and sensitivity on the other, is unfortunately typical in interferometry.

  23. Some other recent ideas: • Scheme by Mattieu de Villiers (new, not yet published SA work): weight by the inverse of the ‘density’ of samples. • My own contribution: iterative optimization, which has the effect of rounding the weight distribution so as to ‘feather out’ sharp edges in the field of weights. Haven’t got the bugs out of it yet. • [Figure: the ideal smooth weight function (the Fourier inverse of the desired PSF); densely packed samples are down-weighted, while isolated samples are weighted higher so that the local average approaches the ideal.]
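To illustrate the inverse-density idea only (this sketch is mine; it is not de Villiers' actual algorithm, nor the iterative scheme just described):

```python
import numpy as np

def inverse_density_weights(u, v, kernel_sigma=50.0):
    """Weight each visibility by the inverse of its local sample density.

    The density at each sample is estimated with a Gaussian kernel over all
    samples (kernel_sigma in wavelengths, an illustrative value).  Crowded
    regions of the UV plane are down-weighted; isolated samples are boosted.
    O(N^2), so only suitable for small demonstrations.
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    du = u[:, None] - u[None, :]
    dv = v[:, None] - v[None, :]
    density = np.exp(-(du**2 + dv**2) / (2.0 * kernel_sigma**2)).sum(axis=1)
    return 1.0 / density
```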

  24. Weighting schemes: simulated e-Merlin data. 400 × 5 MHz channels; νav = 6 GHz; tint = 10 s; δ = +30°. Panels: uniform, tapered uniform, and iterative best fit outside a 20-pixel radius.

  25. ‘Dirty beam’ images (absolute values). Panels: uniform, tapered uniform, and iterative best fit outside a 20-pixel radius.

  26. Comparison – slices through the dirty images. Curves: natural (narrow-band), natural, uniform, and optimized for r > 10.

  27. More on iterated weights: r = 10

  28. But real data is noisy… SNR of each visibility = 5.

  29. One could think of other ‘feathering’ schemes. 1. Multiply the visibilities by a vignetting function of time and frequency. 2. The AIPS task IMAGR parameter UVBOX: effectively smooths the weight function. See also D Briggs’ PhD thesis (a common form of his ‘robust’ weighting is sketched below).
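For reference, one common parameterization of Briggs ‘robust’ weighting (as described in Briggs’ thesis and implemented in packages such as AIPS and CASA; here w_k is the natural weight of visibility k, W_k the summed natural weight in its UV grid cell, and R the robustness parameter):

```latex
w_k' = \frac{w_k}{1 + W_k\, f^2},
\qquad
f^2 = \frac{\left(5 \times 10^{-R}\right)^2}{\sum_c W_c^2 \,/\, \sum_k w_k}
```

Large positive R approaches natural weighting; large negative R approaches uniform, giving a continuous compromise between the two.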

  30. MeerKAT tapering schemes
