Finite precision arithmetic in Polyphase Filterbank implementations
- Authors: Myburgh, Talon
- Date: 2020
- Subjects: Radio interferometers , Interferometry , Radio telescopes , Gate array circuits , Floating-point arithmetic , Python (Computer program language) , Polyphase Filterbank , Finite precision arithmetic , MeerKAT
- Language: English
- Type: text , Thesis , Masters , MSc
- Identifier: http://hdl.handle.net/10962/146187 , vital:38503
- Description: The MeerKAT is the most sensitive radio telescope in its class, and it is important that systematic effects do not limit the dynamic range of the instrument, preventing this sensitivity from being harnessed for deep integrations. During commissioning, spurious artefacts were noted in the MeerKAT passband and the root cause was attributed to systematic errors in the digital signal path. Finite precision arithmetic used by the Polyphase Filterbank (PFB) was one of the main factors contributing to the spurious responses, together with bugs in the firmware. This thesis describes a software PFB simulator that was built to mimic the MeerKAT PFB and allow investigation into the origin and mitigation of the effects seen on the telescope. The simulator was used to investigate the effects on signal integrity of various rounding techniques, overflow strategies and dual-polarisation processing in the PFB. Applying the simulator to a range of signal levels, bit-widths and algorithmic scenarios gave insight into how the periodic dips occurring in the MeerKAT passband resulted from the implementation using an inappropriate rounding strategy. It further indicated how to select the best strategy for preventing overflow while maintaining high quantization efficiency in the FFT. This practice of simulating the design behaviour of the PFB independently of the tools used to design the DSP firmware is a step towards an end-to-end simulation of the MeerKAT system (or of any radio telescope using finite precision digital signal processing systems). This would be useful for design, diagnostics, signal analysis and prototyping of the overall instrument. (An illustrative rounding-strategy sketch follows this record.)
- Full Text:
- Date Issued: 2020
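The abstract above attributes the MeerKAT passband dips to an inappropriate rounding strategy in the finite precision PFB. As a minimal, hedged illustration (this is not the thesis's simulator; the bit-widths, test tone and noise level are invented for the example), the Python sketch below requantizes an already-quantized signal to fewer bits, as happens when growth bits are discarded after an FFT stage, and prints the error bias of three rounding strategies: truncation and round-half-up introduce a systematic bias that round-half-to-even avoids.

```python
# Minimal sketch (not the thesis's PFB simulator): requantize a fixed-point
# signal to fewer bits under three rounding strategies and compare the error
# bias.  The bit-widths, test tone and noise level are invented for the example.
import numpy as np

def quantize(x, frac_bits, mode):
    """Quantize to a fixed-point grid with `frac_bits` fractional bits."""
    y = x * 2.0 ** frac_bits
    if mode == "truncate":            # drop LSBs: round toward minus infinity
        q = np.floor(y)
    elif mode == "round-half-up":     # add half an LSB, then truncate
        q = np.floor(y + 0.5)
    else:                             # round half to even (numpy's default)
        q = np.round(y)
    return q / 2.0 ** frac_bits

rng = np.random.default_rng(0)
x = 0.3 * np.sin(2 * np.pi * 0.013 * np.arange(1 << 16)) \
    + 0.01 * rng.standard_normal(1 << 16)

# Simulate an earlier pipeline stage at 9 fractional bits, then requantize
# to 8 bits, so half-LSB ties actually occur (as they do in hardware).
x9 = quantize(x, 9, "round-half-even")
for mode in ("truncate", "round-half-up", "round-half-even"):
    err = quantize(x9, 8, mode) - x9
    print(f"{mode:16s} mean error = {err.mean():+.2e}  rms = {err.std():.2e}")
```

Run as-is, the truncation and round-half-up modes show mean errors of roughly minus and plus a quarter of an LSB respectively, while round-half-to-even is essentially unbiased; a bias of this kind, applied at every requantization stage, is the sort of systematic effect the simulator was built to expose.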
CubiCal: a fast radio interferometric calibration suite exploiting complex optimisation
- Authors: Kenyon, Jonathan
- Date: 2019
- Subjects: Interferometry , Radio astronomy , Python (Computer program language) , Square Kilometre Array (Project)
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/92341 , vital:30711
- Description: The advent of the Square Kilometre Array and its precursors marks the start of an exciting era for radio interferometry. However, with new instruments producing unprecedented quantities of data, many existing calibration algorithms and implementations will be hard-pressed to keep up. Fortunately, it has recently been shown that the radio interferometric calibration problem can be expressed concisely using the ideas of complex optimisation. The resulting framework exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares algorithms. We extend the existing work on the topic by considering the more general problem of calibrating a Jones chain: the product of several unknown gain terms. We also derive specialised solvers for performing phase-only, delay and pointing-error calibration. In doing so, we devise a method for determining update rules for arbitrary, real-valued parametrisations of a complex gain. The solvers are implemented in an optimised Python package called CubiCal. CubiCal makes use of Cython to generate fast C and C++ routines for performing computationally demanding tasks, whilst leveraging multiprocessing and shared memory to take advantage of modern, parallel hardware. The package is fully compatible with the Measurement Set, the most common format for interferometer data, and is well integrated with Montblanc, a third-party package which implements optimised model visibility prediction. CubiCal's calibration routines are applied successfully to both simulated and real data for the field surrounding the source 3C147. These tests include direction-independent and direction-dependent calibration, as well as tests of the specialised solvers. Finally, we conduct extensive performance benchmarks and verify that CubiCal convincingly outperforms its most comparable competitor. (A generic phase-only solve sketch follows this record.)
- Full Text:
- Date Issued: 2019
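The abstract mentions specialised solvers built on real-valued parametrisations of a complex gain, phase-only calibration being the simplest case. The sketch below is a generic illustration of that idea, not CubiCal's solver or API; the array size, phases and noise level are invented. It alternates over antennas, using a closed-form per-antenna phase update with the other phases held fixed.

```python
# Illustrative phase-only gain solve: one simple real-parametrisation example
# of the kind of specialised solver described above.  This is NOT CubiCal's
# implementation or API; the array size, phases and noise level are invented.
import numpy as np

def solve_phases(vis, model, n_iter=100):
    """Solve vis[p, q] ~= exp(i*phi[p]) * model[p, q] * exp(-i*phi[q]) for phi."""
    n_ant = vis.shape[0]
    phi = np.zeros(n_ant)
    off_diag = ~np.eye(n_ant, dtype=bool)
    for _ in range(n_iter):
        phi_old = phi.copy()
        for p in range(n_ant):
            sel = off_diag[p]                       # exclude the autocorrelation
            # Closed-form update for antenna p with the other phases held fixed.
            s = np.sum(vis[p, sel] * np.conj(model[p, sel]) * np.exp(1j * phi_old[sel]))
            phi[p] = np.angle(s)
        # Damped step along the shortest angular path, for stable convergence.
        phi = phi_old + 0.5 * np.angle(np.exp(1j * (phi - phi_old)))
    return phi

rng = np.random.default_rng(3)
n_ant = 7
phi_true = 0.5 * rng.standard_normal(n_ant)
model = np.ones((n_ant, n_ant), dtype=complex)      # single point source at centre
vis = np.exp(1j * phi_true)[:, None] * model * np.exp(-1j * phi_true)[None, :]
vis += 0.01 * (rng.standard_normal((n_ant, n_ant)) + 1j * rng.standard_normal((n_ant, n_ant)))

phi_est = solve_phases(vis, model)
phi_est -= phi_est[0] - phi_true[0]                 # remove the global phase degeneracy
print("max phase error [rad]:", np.max(np.abs(np.angle(np.exp(1j * (phi_est - phi_true))))))
```

The same alternating structure generalises to other parametrisations (delays, pointing errors) by replacing the closed-form phase update with an update rule derived for that parameter, which is the kind of machinery the thesis develops.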
Advanced radio interferometric simulation and data reduction techniques
- Authors: Makhathini, Sphesihle
- Date: 2018
- Subjects: Interferometry , Radio interferometers , Algorithms , Radio telescopes , Square Kilometre Array (Project) , Very Large Array (Observatory : N.M.) , Radio astronomy
- Language: English
- Type: text , Thesis , Doctoral , PhD
- Identifier: http://hdl.handle.net/10962/57348 , vital:26875
- Description: This work shows how legacy and novel radio interferometry software packages and algorithms can be combined to produce high-quality reductions from modern telescopes, as well as end-to-end simulations for upcoming instruments such as the Square Kilometre Array (SKA) and its pathfinders. We first use a MeqTrees-based simulations framework to quantify how artefacts due to direction-dependent effects accumulate with time, and the consequences of this accumulation when observing the same field multiple times in order to reach the survey depth. Our simulations suggest that a survey like LADUMA (Looking at the Distant Universe with the MeerKAT Array), which aims to achieve its survey depth of 16 µJy/beam in a 72 kHz channel at 1.42 GHz by observing the same field for 1000 hours, will be able to reach its target depth in the presence of these artefacts. We also present stimela, a system-agnostic scripting framework for simulating, processing and imaging radio interferometric data. This framework is then used to write an end-to-end simulation pipeline in order to quantify the resolution and sensitivity of the SKA1-MID telescope (the first phase of the SKA mid-frequency telescope) as a function of frequency, as well as the scale-dependent sensitivity of the telescope. Finally, a stimela-based reduction pipeline is used to process data of the field around the source 3C147, taken by the Karl G. Jansky Very Large Array (VLA). The reconstructed image from this reduction has a typical 1σ noise level of 2.87 µJy/beam, and consequently a dynamic range of 8×10⁶:1, given the 22.58 Jy/beam flux density of the source 3C147. (A quick check of this figure follows this record.)
- Full Text:
- Date Issued: 2018
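As a quick sanity check of the quoted dynamic range, using only the numbers given in the abstract above, the figure follows directly from the ratio of the peak flux density to the 1σ noise level:

```python
# Dynamic range implied by the numbers quoted in the abstract above.
peak = 22.58          # Jy/beam, flux density of 3C147
noise = 2.87e-6       # Jy/beam, 1-sigma image noise
print(f"dynamic range ~ {peak / noise:.2e} : 1")   # ~7.9e+06 : 1, i.e. ~8x10^6 : 1
```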
Link between ghost artefacts, source suppression and incomplete calibration sky models
- Authors: Nunhokee, Chuneeta Devi
- Date: 2015
- Subjects: Interferometry , Calibration
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:5556 , http://hdl.handle.net/10962/d1017900
- Description: Calibration is a fundamental step towards producing radio interferometric images. However, naive calibration produces calibration artefacts, in the guise of spurious emission, buried in the thermal noise. This work investigates these calibration artefacts, henceforth referred to as “ghosts”. A 21 cm observation with the Westerbork Synthesis Radio Telescope yielded similar ghost sources, and it was anticipated that they were due to calibrating with incomplete sky models. An analytical ghost distribution for a two-source scenario is derived to substantiate this theory and to seek answers to the related bewildering features (a regular ghost pattern, point spread function-like sidelobes, independence from the model flux). The theoretically predicted ghost distribution qualitatively matches the observational ones and shows a strong dependence on the array geometry. The theory draws the conclusion that both the ghost phenomenon and the suppression of unmodelled flux have the same root cause. In addition, the suppression of the unmodelled flux is studied as a function of the unmodelled flux, the differential gain solution interval and the number of sources subjected to direction-dependent gains. These studies show that the suppression rate is constant irrespective of the degree of incompleteness of the calibration sky model. In the presence of a direction-dependent effect, the suppression drastically increases; however, this increase can be compensated for by using longer solution intervals. (A toy suppression sketch follows this record.)
- Full Text:
- Date Issued: 2015
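To make the suppression effect concrete, here is a hedged toy sketch, not the thesis's analytical derivation; the array layout, fluxes and source position are invented. Per-antenna complex gains are solved against a one-source model while the simulated sky contains a second, unmodelled source; after correcting the data with those gains and subtracting the model, the flux recovered at the position of the unmodelled source comes out lower than its true value.

```python
# Toy demonstration of unmodelled-flux suppression (not the thesis's analytical
# ghost derivation): calibrate a two-source sky against a one-source model and
# measure how much of the unmodelled source survives.  The array layout, fluxes
# and source position are invented for the example.
import numpy as np

n_ant = 14
u_ant = np.arange(n_ant) * 144.0 / 0.21      # regular east-west array, in wavelengths
l0 = 0.01                                    # direction cosine of the unmodelled source
f_true = 0.2                                 # its flux, relative to the modelled source

# True sky: 1 Jy at phase centre (in the model) + f_true at l0 (not in the model).
uv = u_ant[:, None] - u_ant[None, :]         # per-baseline u coordinate
vis = 1.0 + f_true * np.exp(-2j * np.pi * uv * l0)
model = np.ones_like(vis)                    # calibration model: centre source only

# Per-antenna complex gain solve against the incomplete model
# (damped alternating least squares).
g = np.ones(n_ant, dtype=complex)
off_diag = ~np.eye(n_ant, dtype=bool)
for _ in range(200):
    g_old = g.copy()
    for p in range(n_ant):
        sel = off_diag[p]                    # exclude the autocorrelation
        z = model[p, sel] * np.conj(g_old[sel])
        g[p] = np.vdot(z, vis[p, sel]) / np.vdot(z, z).real
    g = 0.5 * (g + g_old)

# Correct the data, subtract the model, and measure the flux recovered at l0.
vis_corr = vis / (g[:, None] * np.conj(g[None, :]))
residual = (vis_corr - model)[off_diag]
phase = np.exp(2j * np.pi * uv * l0)[off_diag]
f_recovered = np.mean(residual * phase).real
print(f"true unmodelled flux = {f_true:.3f}, recovered after calibration = {f_recovered:.3f}")
```

Skipping the gain solve (leaving g at unity) recovers the full 0.2 flux, so the shortfall printed above is entirely an artefact of calibrating with the incomplete sky model, which is the phenomenon the thesis analyses.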