Current Projects

I am equally interested in developing new technologies (which invariably end up producing exciting new science) and in pursuing an observational program in extragalactic astronomy. Nowadays, the emphasis is more on the technical side, so that's where I shall start...

I. Submillimeter Technologies

GPU-based Detector Readout

What prevents us from building megapixel submillimeter cameras today is that we cannot currently handle the multitude of signals they would produce. The readout challenge has to be solved before we can realize large-format submillimeter arrays.

To realize submillimeter cameras with 100 thousand or a million pixels, we need increased multiplexing of the detectors. At minimum, a few thousand detectors need to share a single readout line, for a total cost (production + readout) of $1/pixel or less.

Picture of a GPU chip.
Figure 1. An example GPU chip. Not the actual chip we use...
Fortunately, Kinetic Inductance Detectors (KIDs) are relatively cheap to fabricate, and they are naturally suited for dense multiplexing in the frequency domain, since they operate in narrow bands. We can spread the resonances of ~2000 KIDs over an octave before collisions between them become problematic. With the fabrication process and costs more or less under control, the true challenge of making large detector arrays lies squarely in the readout electronics. We need to process an octave of bandwidth at a few hundred MHz (where KIDs typically operate) with high resolution (R ~ 10^5 – 10^6) to faithfully extract the information encoded therein.
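
To put rough numbers on this requirement, here is a quick back-of-the-envelope sketch in Python; the band edge and the resolution margin are my illustrative assumptions, not a design spec:

    import math

    f_low = 125e6          # assumed lower edge of an octave readout band [Hz]
    f_high = 2 * f_low     # an octave: the upper edge is twice the lower edge
    n_kids = 2000          # resonators sharing a single readout line

    # Mean fractional spacing between neighboring resonances for logarithmic placement:
    spacing = math.log(f_high / f_low) / n_kids       # ~3.5e-4
    print(f"mean fractional spacing: {spacing:.1e}")

    # Resolving each resonance well calls for a readout resolution one to two orders
    # of magnitude finer than the spacing, i.e. R in the 10^5 - 10^6 range:
    print(f"required resolution: R ~ {100 / spacing:.0e}")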

My approach is unique in its use of commercial electronics and in exploiting the processing power of graphical processors (GPUs). The advantage of commercial components, as opposed to custom-made electronics, is that they get faster, better, and cheaper (see Moore's law) at no cost to us. At the same time, GPUs are already powerful enough to allow complete spectral processing (FFTs) of millions of points, several thousand times a second.

With our current system (PC + digitizer & signal generator + GPU, at around $20K), we can fully process an octave of bandwidth up to 250 MHz today, and operate it either with a discrete set of tones or via chirp-pulse excitation with full-spectrum readout. Unlike the FPGA alternatives, the GPU-based system makes no compromises. The chirp-mode readout is potentially a unique advantage of the GPU system over FPGAs, and allows far more sophisticated real-time signal processing, for example on-the-fly resonance fitting several thousand times a second.
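
To illustrate what the full-spectrum (chirp-mode) processing amounts to, here is a minimal sketch. It is not our readout code; the sample rate, block size, and the use of the CuPy library for the GPU FFT are assumptions made for illustration.

    import numpy as np
    try:
        import cupy as xp      # run the transform on the GPU if CuPy is available...
    except ImportError:
        xp = np                # ...otherwise fall back to the CPU for the same code

    fs = 500e6                 # assumed sample rate [Hz], covering a 250 MHz band
    n = 1 << 20                # ~1M-point transform, i.e. ~2 ms of data per block

    block = xp.asarray(np.random.randn(n).astype(np.float32))  # stand-in for digitizer samples
    spectrum = xp.fft.rfft(block)              # full-spectrum readout of the block
    freqs = xp.fft.rfftfreq(n, d=1.0 / fs)     # frequency of each spectral channel
    power = xp.abs(spectrum) ** 2              # per-channel power, ready for resonance fitting

    # At ~2 ms per block, real-time operation means ~500 million-point transforms per
    # second; shorter blocks trade spectral resolution for a higher readout rate.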

If we can reduce the cost of the system substantially over the coming years, then our GPU-based approach might be the one that enables the megapixel submillimeter cameras of the future.

CRUSH

Solving the readout challenge (above) will enable a new generation of large-format submillimeter cameras. The next challenge is: how do we analyze/reduce the resulting volume of data (100 GB to 10 TB per hour!) in real-time, or preferably an order of magnitude faster?

CRUSH logo
Figure 2. CRUSH is the most widely used data reduction package for ground-based submillimeter imaging arrays.
CRUSH is both a pioneer and a leader among submillimeter data reduction packages. Not only does it provide the most complete statistical analysis of the data, it does so faster than any of the comparable packages (e.g. sharcsolve, BoA, SMURF). Currently (as of 2013), it can fully reduce a few GB of raw data per minute on a modern multicore PC, with a full reduction consisting of around 100 individual operations. As such, it is ready to handle the low end of the expected data rate from future large arrays in real time. However, it will require a 1 – 3 orders of magnitude boost in speed before astronomers can re-reduce datasets many times over (e.g. to optimize the reduction) at the highest data rates.
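
As a quick sanity check of these figures (taking "a few GB per minute" as 3 GB/minute, purely for illustration):

    crush_rate = 3.0             # GB of raw data reduced per minute (an illustrative "few")
    per_hour = crush_rate * 60   # ~180 GB/hour: above the 100 GB/hour low end

    high_end = 10e3              # 10 TB/hour, expressed in GB/hour
    print(f"CRUSH today: ~{per_hour:.0f} GB/hour")
    print(f"the highest rates need another ~{high_end / per_hour:.0f}x, and more still"
          " if datasets are to be re-reduced many times over")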

Achieving such a dramatic enhancement in speed will likely require a three-pronged approach: (1) computing hardware gets faster, with speeds doubling roughly every 18 months, so we can expect an order-of-magnitude increase in speed by 2020; (2) more parallel deployment, e.g. on the pooled computing resources of a lab or on graphical processors (GPUs), can provide another order of magnitude or more; and (3) cleverer, faster algorithms can provide further improvements, by up to a factor of a few.
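
To illustrate prong (2): a reduction that treats scans independently parallelizes naturally over the cores of a single machine, or the machines of a lab. The sketch below is not CRUSH code; reduce_scan() and the file names are hypothetical stand-ins.

    from concurrent.futures import ProcessPoolExecutor

    def reduce_scan(scan_file):
        """Placeholder for the per-scan part of a reduction: reading, flagging,
        removing correlated signals, and building a partial map."""
        return f"reduced {scan_file}"

    scan_files = [f"scan_{i:04d}.fits" for i in range(64)]    # hypothetical input files

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:                   # one worker per CPU core by default
            partial_maps = list(pool.map(reduce_scan, scan_files))
        # The partial maps are then combined (co-added) in a comparatively cheap serial step.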

Therefore, my goal over the next few years is to adapt CRUSH for massively parallel platforms (both GPUs and clusters!), and continue the search for powerful new algorithms at the cutting edge of data reduction science. For further information on CRUSH, please check out the CRUSH website.

SOFIA/HAWC+

SOFIA in flight
Figure 5. The SOFIA airborne observatory.
The collaboration led by C. D. Dowell was granted the only 2nd-generation instrumentation upgrade for the SOFIA airborne observatory. The HAWC+ camera will bring much-improved far-infrared imaging with a larger, more sensitive array than the first-generation HAWC (built at NASA/Goddard under J. Staguhn), and will provide additional polarimetric capabilities (developed at NASA/JPL under C. D. Dowell). The polarimetry, especially, is a unique feature that will make SOFIA stand out in its incessant comparison to the Herschel space observatory.

My role in this project is to provide the imaging data reduction facility (CRUSH, see above), the in-flight real-time instrument diagnostics, and the data analysis for detector testing and development. It is also possible that CRUSH will eventually enable rotating-waveplate, scan-mode polarimetry – as G. Siringo and I have demonstrated for PolKa on APEX – although probably not before the instrument is commissioned.
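
To give a flavor of what scan-mode waveplate polarimetry involves, here is an idealized demodulation sketch. It is not the PolKa or CRUSH pipeline; the simple modulation model (an ideal half-wave plate, with gains, offsets, and polarization efficiency ignored) is an assumption for illustration only.

    import numpy as np

    def demodulate(d, phi):
        """Least-squares estimate of (I, Q, U) from detector samples d taken at waveplate
        angles phi, assuming the idealized model d = I + Q cos(4 phi) + U sin(4 phi)."""
        A = np.column_stack([np.ones_like(phi), np.cos(4 * phi), np.sin(4 * phi)])
        coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
        return coeffs

    # Synthetic example: a ~5% polarized signal plus white noise.
    phi = np.linspace(0, 2 * np.pi, 400)
    d = 1.0 + 0.05 * np.cos(4 * phi) + 0.02 * np.sin(4 * phi) + np.random.normal(0, 0.01, phi.size)
    I, Q, U = demodulate(d, phi)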

SuperSpec / X-Spec

SuperSpec schematic
Figure 4. Transmission-line filterbank schematic of SuperSpec.
My concept for a lithographic spectrometer allows the 100-fold shrinking (in all dimensions!) of medium- or low-resolution spectrographs (R ≤ 1000). This is because it uses transmission-line (electronic) filters, rather than relying on optical dispersion, to separate incident light into discrete channels. Thus, the Rλ size limit of dispersive spectrometers (1 m in free space for an R = 1000 spectrograph at 1 mm wavelength) does not apply along 2 or 3 dimensions (depending on whether the design is 1D, such as Z-Spec, or uses the more traditional 2D layout). The capabilities of Z-Spec (R ~ 250, in a 60 cm × 60 cm brass waveguide enclosure) can be fitted onto 1 mm² of thin-film (~400 nm) layers on a Si wafer(!). This allows for a fully featured R ~ 1000 spectrograph on a focal-plane pixel, and thus creates the possibility of fully sampled spectrometer cameras consisting of 100 – 1000 spectroscopic pixels. In fact, we are no longer limited by the size of the spectrometer itself, but by the real estate occupied by the detectors required to collect radiation from ~1000 channels.
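
To make the scaling concrete, a small sketch with assumed numbers (an illustrative octave band; the real device parameters differ):

    import math

    R = 1000                     # resolving power per channel (assumed)
    f_lo, f_hi = 150e9, 300e9    # an illustrative octave band at ~1-2 mm wavelengths [Hz]

    # Channels of width f/R, logarithmically spaced so that neighbors just touch:
    n_channels = math.ceil(R * math.log(f_hi / f_lo))
    print(f"~{n_channels} channels across the octave at R = {R}")   # ~700, i.e. of order 1000

    # For comparison, a dispersive spectrometer needs a physical scale of order R*lambda:
    wavelength = 1e-3            # 1 mm
    print(f"dispersive size scale: ~{R * wavelength:.0f} m, versus ~1 mm^2 of"
          " lithographed transmission-line filters")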

Matt Bradford is leading the collaboration to realize such devices at Caltech/JPL (SuperSpec). We are close to demonstrating a prototype device, and we hope to deliver a single-pixel spectrometer for the LMT within a year, matching or exceeding the capabilities of Z-Spec. In the longer term, we hope to build a 10 to 100 pixel spectrometer camera (the X-Spec concept) to perform redshift surveys of the submillimeter population on a large mm-wave telescope, or to provide spatially unresolved C+/CO intensity mapping, tracing the chemical evolution of the universe, on a smaller dedicated survey telescope.

GISMO-2

GISMO-2 schematic
Figure 6. Schematic of GISMO-2 (Staguhn et al. 2012).
Building on the highly successful development of GISMO, the first 2-mm camera for the IRAM 30-m telescope at Pico Veleta, by Johannes Staguhn and his outstanding team, we are continuing with GISMO-2. We are using everything we learned from GISMO to build the best 2-mm camera possible, while also adding a 1.2 mm array for dual-band imaging. GISMO-2 will fill the focal plane with more sensitive detectors, and provide superior optical performance and stray-light rejection. Its goal is to become the first truly background-limited 2-mm camera, with an NEFD ≤ 2 mJy √s per beam (on the 30-m telescope).
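
For a sense of what that sensitivity goal means in practice, a tiny illustration (the target depths below are arbitrary examples):

    NEFD = 2.0    # mJy sqrt(s) per beam: the GISMO-2 goal quoted above

    def integration_time(target_rms):
        """On-source time per beam [s] to reach a given map rms [mJy/beam],
        assuming the noise integrates down as 1/sqrt(t)."""
        return (NEFD / target_rms) ** 2

    for rms in (1.0, 0.3, 0.1):   # example target depths [mJy/beam]
        print(f"rms = {rms:.1f} mJy/beam  ->  t = {integration_time(rms):.0f} s per beam")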

My role in GISMO-2 began with suggesting improvements (e.g. to stray-light rejection and detector wiring), and I will provide its real-time diagnostics and data reduction facility with CRUSH (see above). GISMO-2 may make its debut on the JCMT in 2016 – 2017. Stay tuned!

II. Observational Astrophysics

Deep Fields

P(D) of the LABOCA/CDFS field
Figure 7. P(D) analysis of the LABOCA deep field in the CDFS (Weiss et al. 2009). All models converge to an unbroken powerlaw for the observable range of fluxes.
My main astronomical interest has been, and continues to be, the submillimeter galaxy population. I provided the first far-infrared characterization (luminosities, temperatures, radio-FIR correlation) of this population (Kovács et al. 2006). A. Weiss and I led the LABOCA 850 μm survey of the CDFS (Weiss et al. 2009), for which I developed a powerful new P(D)-type number-counts analysis approach, as well as a statistically robust source extraction algorithm.
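
For readers unfamiliar with the technique: the idea behind a P(D) analysis is that the histogram of pixel fluxes in a confusion-limited map encodes the underlying source counts, even below the nominal detection threshold. The toy simulation below illustrates just that idea; the count model, source density, and noise level are illustrative assumptions, not the published analysis.

    import numpy as np

    rng = np.random.default_rng(1)
    n_beams = 50_000                        # independent beams in a toy map
    alpha, s_min, s_max = 2.5, 0.5, 20.0    # assumed dN/dS ~ S^-alpha counts, fluxes in mJy
    mean_density = 0.2                      # assumed mean number of sources per beam

    def draw_fluxes(n):
        """Draw n fluxes from the assumed powerlaw counts via inverse-CDF sampling."""
        u = rng.random(n)
        return (s_min**(1 - alpha) + u * (s_max**(1 - alpha) - s_min**(1 - alpha)))**(1 / (1 - alpha))

    n_src = rng.poisson(mean_density, n_beams)            # sources landing in each beam
    sky = np.array([draw_fluxes(k).sum() for k in n_src])
    noise = rng.normal(0.0, 1.2, n_beams)                 # assumed instrument noise [mJy/beam]

    p_of_d, edges = np.histogram(sky + noise, bins=100, density=True)
    # Comparing p_of_d against predictions for different (alpha, mean_density) constrains
    # the counts well below the point-source detection limit of the map.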

Nowadays, I am mainly involved in deep-field science through the GISMO 2-mm camera. We are publishing the first-ever 2-mm deep field (J. Staguhn et al. 2013, submitted), for which I provided the optimal data reduction, source extraction, flux deboosting, statistical analysis, and SED (spectral energy distribution) fitting. As we continue with this work, I will focus on producing the first number-count distribution of 2-mm sources(!), while we pursue a number of follow-up studies of our new detections. We are also continuing with the degree-scale mapping of the COSMOS field.

Dust SED Models

Our understanding of the physical characteristics of star-formation and dust heating is only as good as the models we use to interpret the observations.

SED plots
Figure 8. My multi-temperature SED model describes a range of extragalactic objects: local starburst galaxies (top row), z~2 bumpies (bottom-left), and quasars (bottom-right).
The most important information we can extract from submillimeter and far-infrared observations of galaxies is the physical characterization of these objects – especially their dust temperatures, masses, and luminosities. While the thermal radiation emanating from galaxies is essentially greybody-like, it cannot be adequately modeled as radiation at a single temperature.
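
For reference, the single-temperature greybody that all of these models build on can be written (in one standard form; normalization conventions vary between authors) as

    S_\nu \propto \left(1 - e^{-\tau_\nu}\right) B_\nu(T_d), \qquad \tau_\nu = (\nu / \nu_0)^{\beta},

which in the optically thin limit reduces to

    S_\nu \approx \frac{M_d \, \kappa_\nu \, B_\nu(T_d)}{d^2}, \qquad \kappa_\nu \propto \nu^{\beta},

with dust temperature T_d, dust mass M_d, emissivity index β, and source distance d.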

The most popular approach today for capturing this complexity is to derive spectral energy distribution (SED) templates by radiative transfer modeling. Such template fitting has become the bread-and-butter of far-infrared characterization of extragalactic sources. However, radiative transfer models have many shortcomings and pitfalls.

First, they assume over-simplistic geometries (a spherical, centrally illuminated cloud, a slab of gas uniformly illuminated from outside, etc.). Second, they have many hidden parameters, such as density profiles or composition. Third, they account only for radiative heating, not for heating of the gas by shocks or collapse (the latter is especially important, since dust plays a crucial role in star formation by radiating away heat during the collapse, before stars can ignite!).

Yet the greatest problem of all with radiative transfer models is that they cannot, by design, describe an entire galaxy comprehensively. Galaxies comprise a multitude of gas clouds, each with a different geometry, turbulence, and sources of heating (embedded stars, YSOs, collapsing cores, turbulent shocks, cosmic rays, and/or external illumination). No single radiative transfer model, assuming just one particular geometry and heating configuration, will ever successfully capture this plurality.

My goal, therefore, is to provide an alternative framework for modeling thermal dust SEDs, one which provides just as accurate a characterization (or better!), but succeeds where radiative transfer models fail. My emphasis is on empirical and descriptive, rather than purely theoretical, models. I model the thermal emission of galaxies as a powerlaw distribution of discrete temperature components (dM/dT ∝ T^(-γ)). Powerlaws are common (e.g. the initial mass function, the brightness distribution of sources, etc.), and can arise on both macro and micro scales. We do not have to know exactly where the observed powerlaw comes from in order to characterize it – the powerlaw could be a property of individual clouds, or could arise from the distribution of a large number of clouds with different underlying properties, e.g. each with its own geometry and heating. The only thing that matters is whether it accurately describes what we see or not. And it does the job quite well.

The powerlaw temperature distribution is also attractive in practice. It has few parameters (just the powerlaw index γ beyond those of an equivalent single-temperature model). It allows easy fitting of all properties (M_d, T, β, γ, emission size), and luminosities can be calculated analytically. And, perhaps most importantly, it does away with all the questionable assumptions that plague radiative transfer models.
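
A minimal numerical sketch of such a powerlaw-temperature greybody mix is given below. It is not the code being released; the cutoff temperature, index values, grid, and normalization are illustrative assumptions (the mass distribution is written here as dM/dT ∝ T^(-γ) above a cutoff T0).

    import numpy as np

    h, k, c = 6.626e-34, 1.381e-23, 2.998e8      # SI constants

    def greybody(nu, T, beta=1.8):
        """Optically thin greybody, nu^beta * B_nu(T), in arbitrary units."""
        return nu**beta * (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

    def powerlaw_sed(nu, T0=20.0, gamma=7.0, beta=1.8, T_max=3000.0, n=500):
        """Sum of greybodies weighted by dM/dT ~ T^-gamma for T0 <= T <= T_max."""
        T = np.logspace(np.log10(T0), np.log10(T_max), n)
        dM = T**(-gamma) * np.gradient(T)        # mass in each temperature bin (unnormalized)
        return (dM[:, None] * greybody(nu[None, :], T[:, None], beta)).sum(axis=0)

    nu = np.logspace(11, 13.5, 200)              # ~0.1 - 30 THz: far-infrared through mid-infrared
    sed = powerlaw_sed(nu)                       # broader on the Wien side than any single-T greybody

Fitting the handful of parameters (normalization, T0, β, γ) to photometry then proceeds as with any other parametric SED model.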

Currently, I'm working on releasing my code into the public domain, for use by astronomers. A future refinement might be to extend the modeling from purely thermal emission to include spectral features, such as PAH emission, silicate absorption, or even bright line emission.