Research Summary

Galaxy Formation and Evolution

My main scientific goal is to understand the physical processes that power galaxies, and how galaxies evolve over cosmic time. Galaxies form through the gravitational collapse of gas, and grow through merging and through the accretion of material from their halos. Formation releases vast amounts of gravitational potential energy, which can manifest as a starburst and/or power an active galactic nucleus (AGN). The most active starburst galaxies (and AGNs) are heavily obscured by dust at optical and UV wavelengths, making them difficult to observe with conventional telescopes. Most of their light is re-emitted as cold (~40 K) thermal radiation in the far-infrared and submillimeter bands. As a result, the cold dust continuum (and the associated cold molecular gas emission) offers the best way to study these luminous objects.

I. Submillimeter Galaxy (SMG) Populations

Figure 1. Redshift selection curves at various (sub)millimeter wavelengths for typical starburst galaxies.
Interestingly, the (sub)millimeter brightness of a dusty galaxy does not diminish with distance (a consequence of the strongly negative K-correction), making unbiased studies possible across much of the volume of the observable universe. Since the discovery of the submillimeter galaxy (SMG) population in 1998, we have catalogued tens of thousands of such galaxies through a combination of ground-based surveys and the Herschel Space Observatory.
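This remarkable property follows from the steep rise of the dust spectrum: as redshift shifts intrinsically brighter rest-frame emission into the observing band, the gain nearly cancels the dimming from distance. A minimal numerical sketch at 850 µm, assuming a 40 K greybody with an emissivity index β = 1.5 and a flat ΛCDM cosmology (illustrative choices, not fitted values):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative check of the negative K-correction: the observed 850 um
# flux of a dusty galaxy stays nearly flat over a wide redshift range.
# All parameter values below are assumptions for illustration.
H0 = 70.0          # km/s/Mpc
Om, OL = 0.3, 0.7  # flat LCDM density parameters
c = 2.998e5        # speed of light, km/s
T_dust = 40.0      # K (typical SMG dust temperature, from the text)
beta = 1.5         # assumed dust emissivity index
nu_obs = 353e9     # Hz, observing frequency (~850 um)

def D_L(z):
    """Luminosity distance in Mpc for a flat LCDM cosmology."""
    E = lambda zz: np.sqrt(Om * (1 + zz)**3 + OL)
    Dc, _ = quad(lambda zz: 1.0 / E(zz), 0.0, z)  # comoving distance integral
    return (1 + z) * (c / H0) * Dc

def flux(z):
    """Relative observed flux of a greybody source at nu_obs (arbitrary units)."""
    nu_rest = nu_obs * (1 + z)                    # rest-frame emitted frequency
    x = 6.626e-34 * nu_rest / (1.381e-23 * T_dust)
    L_nu = nu_rest**(3 + beta) / np.expm1(x)      # modified blackbody spectrum
    return (1 + z) * L_nu / D_L(z)**2

print(flux(5.0) / flux(1.0))  # close to 1: flux barely changes from z=1 to z=5
```

With these assumptions the flux ratio between z = 5 and z = 1 comes out near unity, which is the selection-flattening effect the text describes.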

However, the interpretation of these surveys requires identifying counterparts to SMGs at other wavelengths and/or determining their distances (redshifts) – both extremely challenging at present. Furthermore, because of the limited spatial resolution of most (sub)millimeter telescopes, we do not know their clustering properties on small scales, which are necessary to determine number counts (as a function of brightness) with confidence. Such counts could provide direct constraints for cosmological models.

To overcome these challenges, my main goals are:

  • Pioneer new ways to obtain redshifts for a representative sample of SMGs. (Currently, we can do this only for a very small, biased subsample.)
  • Discover the small-angle (<20 arcsec) clustering properties of SMGs, and with these determine their true number counts (as a function of brightness).
  • Improve our theoretical/empirical understanding of dust spectral energy distributions (SEDs) such that we can extract more physical information about the processes that heat the ISM.
  • Better understand the cross-band properties of SMGs to more confidently identify counterparts at other wavelengths. Some of the relevant knowledge may come from the detailed studies of local starburst (and other) galaxies.

II. Submillimeter Technologies

Figure 2. A SuperSpec test device under the microscope. Mosaic image courtesy of E. Shirokoff.
Arguably, the most effective route to learning about galaxy formation and evolution is through new instrumentation for the submillimeter wavelengths.

My concept for a lithographic mm-wave spectrometer (Kovács & Zmuidzinas, 2010) resulted in a 100-fold shrinking, in all 3 dimensions(!), of a medium-resolution (R~1000) spectrometer for the millimeter band. The capabilities of Z-Spec (or better) can now fit within a single focal-plane pixel on a thin silicon wafer. This technology opens the possibility of true multi-object spectroscopy, and with it, large-scale redshift surveys conducted directly in the (sub)millimeter bands.

We are close to demonstrating a prototype device (SuperSpec), under the lead of Matt Bradford at Caltech. We hope that, by the time CCAT is operational, we can supply it with a spectrometer camera consisting of 100 to 1000 spatial pixels, each with 1000 spectral channels covering an octave of bandwidth or more (the X-Spec concept), to conduct large-scale redshift surveys of the SMG populations. The goal is to measure tens of thousands of CO/C+ redshifts, and to trace the chemical evolution of the diffuse ISM of galaxies (collectively) as a function of cosmological distance.

Both the imaging and spectroscopic cameras are being planned with a hundred thousand or more detectors, posing a serious signal-processing challenge. To be practical, we need to keep the associated cost of readout electronics below $1M, and the total power consumption below a few kW. As such, we need to provide a readout solution that can process a few thousand detectors multiplexed within a few hundred MHz of bandwidth, at a cost of $1/detector (or less) and a power consumption of 10 mW/detector (or less).
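As a sanity check, these per-detector targets are consistent with the system-level caps. A back-of-the-envelope sketch (the detector count and per-line figures are assumptions for illustration):

```python
# Back-of-the-envelope check of the readout budget quoted above.
# Detector count and multiplexing figures are illustrative assumptions.
n_detectors = 100_000   # "a hundred thousand or more detectors"
cost_per_det = 1.0      # $ per detector (target from the text)
power_per_det = 0.010   # W per detector, i.e. 10 mW (target from the text)
bandwidth = 400e6       # Hz, "a few hundred MHz" per readout line (assumed)
dets_per_line = 2000    # "a few thousand" detectors multiplexed per line (assumed)

total_cost = n_detectors * cost_per_det       # $100k, well under the $1M cap
total_power = n_detectors * power_per_det     # 1 kW, under "a few kW"
tone_spacing = bandwidth / dets_per_line      # 200 kHz between resonator tones
n_lines = n_detectors / dets_per_line         # 50 readout lines

print(total_cost, total_power, tone_spacing, n_lines)
```

The ~200 kHz tone spacing is what sets the resonator quality factors and FFT resolution a practical readout must achieve.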

My approach to reaching this goal is to use commercial hardware (which keeps getting cheaper and less power-hungry -- see Moore's law). Graphics processing units (GPUs) can perform FFTs on millions of points, several thousand times per second. While more expensive and power-hungry than the alternative (FPGAs), a GPU-based readout allows far more processing power and flexibility. One well-suited application is chirp-pulse readout with real-time resonance fitting, which would be impossible (or at least extremely difficult) to implement on FPGAs.
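To illustrate the channelization step such a readout performs, here is a CPU-side sketch in numpy. The tone placement and amplitudes are hypothetical; a real system would run the transform on the GPU (e.g., with cuFFT) on streaming ADC data:

```python
import numpy as np

fs = 400e6             # assumed ADC sample rate (Hz), covering the readout band
n = 2**16              # FFT length -> fs/n ~ 6.1 kHz channel spacing
t = np.arange(n) / fs

# Hypothetical resonator probe tones, placed on FFT bin centers.
bins = np.array([500, 1500, 7000, 20000])
freqs = bins * fs / n
amps = np.array([1.0, 0.5, 0.8, 0.3])    # stand-ins for detector responses

# The multiplexed time stream: one tone per detector on a shared line.
x = (amps[:, None] * np.cos(2 * np.pi * freqs[:, None] * t)).sum(axis=0)

# Channelize with a single FFT; a GPU performs this thousands of times per
# second, recovering every detector's tone amplitude simultaneously.
spec = np.fft.rfft(x) / (n / 2)          # normalize so bin value = amplitude
recovered = np.abs(spec[bins])           # per-detector tone amplitudes
```

Because the tones sit exactly on bin centers, `recovered` matches `amps` to numerical precision; off-bin tones would need the resonance-fitting step mentioned above.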

III. Data Reduction Development for Imaging Arrays

Figure 3. NGC 253 at 850 µm, reduced by CRUSH.
The third pillar of my research is pioneering new ways to collect and reduce data from ground-based imaging arrays (and background-limited instruments in general). The challenge is that strongly varying atmospheric and instrumental signals can be orders of magnitude brighter than the faint objects we seek to observe. Detecting a submillimeter galaxy at 350 µm is analogous to trying to see a 17th-magnitude star (or spot a campfire on the Moon) in broad daylight.

Making this possible requires a combination of innovative observing strategies and a novel data reduction approach. Starting with my PhD thesis at Caltech, I have become a leader in both of these areas. The Lissajous observing pattern, which I originally introduced at the CSO, is now used at most major submillimeter telescopes (e.g., APEX, IRAM, ASTE).

My data reduction software CRUSH is the de facto standard for the latest generation of large bolometer and KID arrays. It is currently used for 9 different instruments at 4 telescopes, and will provide the imaging capabilities for SOFIA/HAWC+ and GISMO-2. Others have drawn on its ideas to build their own custom pipelines, such as MPIfR's BoA package for APEX and the official SCUBA-2 data reduction software (SMURF).

A new challenge ahead is adapting CRUSH to serve the enormous data rates (100 GB to 10 TB per hour!) of future 100-kilopixel to megapixel arrays. Answering it requires both more parallel deployments (e.g., on computing clusters or GPUs) and faster, better algorithms.

If you are interested in learning more about my work, you can follow me on ResearchGate or follow CRUSH on Facebook.