Search of Nautilus Data


Presented here is an all-sky search for gravitational waves from spinning neutron stars in the data from the NAUTILUS resonant bar detector.

The NAUTILUS detector is operated by the Italian ROG collaboration, currently led by Eugenio Coccia. The detector is located in Frascati, near Rome.

The analysis is performed by a team consisting of Pia Astone, Kaz Borkowski, Piotr Jaranowski, Andrzej Królak and Maciej Pietka. The search is carried out on the basis of a Memorandum of Understanding between ROG (contact person: Pia Astone), the Albert Einstein Institute of the Max Planck Gesellschaft (contact person: Maria Alessandra Papa) and the Institute of Mathematics of the Polish Academy of Sciences (contact person: Andrzej Królak). We are analysing one year of data collected in 2001.


Details of the search

We divide one year of data from the NAUTILUS detector into segments, each two sidereal days long. Each segment is analysed coherently.

1. Parameters of the search

Time:           2 sidereal days
Bandwidth:      1.22 Hz, starting from 922.10 Hz
Spindowns:      1 spindown parameter, with minimum spindown age tau_min = 1000 yr
Spindown range: -9.19e-8 [s^-1 Hz] to 0 [s^-1 Hz] (see the worked example below)
Sky position:   all-sky search
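
As a worked check of the numbers above, here is a minimal Python sketch. It assumes that the quoted spindown bound is the omega_1 coefficient of a phase model phi(t) = omega_0*t + omega_1*t^2 (so omega_1 = pi*f_dot), with |f_dot| limited by f_max/tau_min; under these assumptions it reproduces the quoted value of about -9.19e-8. The variable names are illustrative.

    import math

    # Hypothetical consistency check for the spindown range quoted above.
    # Assumption: the bound is the omega_1 coefficient of the phase model
    # phi(t) = omega_0*t + omega_1*t**2 (so omega_1 = pi*fdot), with the
    # frequency derivative limited by |fdot| <= f_max / tau_min.
    f_min = 922.10                        # Hz, lower edge of the searched band
    band = 1.22                           # Hz, bandwidth
    f_max = f_min + band                  # Hz, upper edge of the searched band
    tau_min = 1000.0 * 365.25 * 86400.0   # minimum spindown age in seconds

    omega1_max = math.pi * f_max / tau_min
    print(f"maximum |omega_1| ~ {omega1_max:.3e}")   # ~9.19e-08
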
2. Search procedure

Grid:
Constrained hypercubic 4-dimensional grid over (frequency, spindown, declination, right ascension) for minimal match MM = sqrt(3/4)
Thickness of the grid = 5.60
Number of filters Nt = 105,748,866
The output of each filter is the F-statistic over the whole 1.22 Hz bandwidth, consisting of 2^19 bins.

Computation: The F-statistic is calculated using a transformation to barycentric time by resampling, followed by an FFT and FFT interpolation, resulting in an array of 2^19 numbers (a toy sketch of this step is given below).
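
The toy sketch below illustrates the resample-demodulate-FFT idea on simulated data. It is not our search code: the sampling rate, the linear interpolation, the fake Doppler delay and the plain |FFT|^2 statistic are all illustrative stand-ins, and the real F-statistic additionally involves the detector's amplitude-modulation functions a(t) and b(t).

    import numpy as np

    # Toy sketch of the resample-and-FFT step (illustrative only).
    fs = 2.44                       # Hz, assumed sampling rate covering a ~1.22 Hz band
    T = 2 * 86164.0                 # s, two sidereal days
    t_det = np.arange(0.0, T, 1.0 / fs)          # detector time samples
    x = np.random.randn(t_det.size)              # placeholder for the narrow-band data

    # Hypothetical barycentric correction: t_bary(t_det) would come from an ephemeris;
    # here a small, slowly varying delay is faked just to show the resampling.
    delay = 1.0e-3 * np.sin(2 * np.pi * t_det / 86164.0)
    t_bary = t_det + delay

    # Resample the data onto a uniform grid in barycentric time (linear interpolation).
    t_uniform = np.linspace(t_bary[0], t_bary[-1], t_det.size)
    x_resampled = np.interp(t_uniform, t_bary, x)

    # Remove an assumed spindown phase, then zero-pad to 2**19 points and FFT.
    omega1 = -5.0e-8                               # example spindown parameter
    x_demod = x_resampled * np.exp(-1j * omega1 * t_uniform**2)
    spectrum = np.fft.fft(x_demod, n=2**19)        # padding interpolates the spectrum
    stat = np.abs(spectrum)**2                     # crude stand-in for the F-statistic
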

Threshold: We set the threshold for the F-statistic to 20. The parameters of all triggers crossing the threshold are registered (a small sketch follows).
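
A minimal sketch of the trigger-registration step, assuming the statistic for one grid point is held in an array indexed by frequency bin; the function name and record layout are illustrative, not those of our code.

    import numpy as np

    def register_triggers(stat, f0, df, spindown, delta, alpha, threshold=20.0):
        """Record (frequency, spindown, declination, right ascension, F) for every
        frequency bin whose statistic crosses the threshold (illustrative layout)."""
        bins = np.flatnonzero(stat > threshold)
        return [(f0 + k * df, spindown, delta, alpha, float(stat[k])) for k in bins]

    # Example use with the toy statistic from the previous sketch:
    # triggers = register_triggers(stat, f0=922.10, df=1.22 / 2**19,
    #                              spindown=-5.0e-8, delta=0.3, alpha=1.2)
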

Verification: On-line verification is performed only for triggers with an F-statistic value > 25, so that it occupies no more than a few percent of the total computation time.
There are two verification steps (a sketch of the Nelder-Mead refinement follows the list):
1. A fine search for the maximum of the F-statistic of the trigger, using the Nelder-Mead (amoeba) algorithm with initial values taken from the coarse search.
2. A search for the signal in the 4-day stretch of data containing the initial 2-day stretch. The search again uses the Nelder-Mead algorithm, with initial values taken from step 1.
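
A minimal sketch of the Nelder-Mead refinement, assuming a callable fstat(params) that returns the F-statistic at a point (frequency, spindown, declination, right ascension). SciPy's simplex implementation stands in here for the amoeba routine; the function and variable names are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def refine_trigger(fstat, coarse_params):
        """Refine a coarse-grid trigger by maximizing the F-statistic with the
        Nelder-Mead simplex method. `fstat` is assumed to map
        (frequency, spindown, declination, right ascension) -> F value."""
        result = minimize(lambda p: -fstat(p),          # maximize F = minimize -F
                          x0=np.asarray(coarse_params, dtype=float),
                          method="Nelder-Mead")
        return result.x, -result.fun                    # refined parameters, refined F

    # Example with a toy, single-peaked stand-in for the F-statistic:
    # peak = np.array([922.7, -5.0e-8, 0.3, 1.2])
    # toy_fstat = lambda p: 30.0 * np.exp(-np.sum(((p - peak) / [1e-3, 1e-9, 0.1, 0.1])**2))
    # params, F = refine_trigger(toy_fstat, coarse_params=[922.701, -5.1e-8, 0.35, 1.15])
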

Currently computations are performed on a minicluster in Bialystok and on a cluster in the computing center in Warsaw.

The search started on the 12th of December 2005, when our cluster in Bialystok began to crunch NAUTILUS data.

For obvious reasons we call our cluster MIKRUS (pronounced "meeckroos"). Here is a short description of the cluster, with two pictures from the start of the search.

Unfortunately the start was a bit of a false start, as the search code still needed some improvements. Computations are now running on the minicluster in Bialystok and on the cluster in the Warsaw computing center, with each cluster analysing and verifying a two-day stretch of data.

Here is a diagram, progress.gif, loaded directly from the Bialystok site alpha.uwb.edu.pl/map. It shows the current state of computations in our search with the two clusters and is updated twice daily.


Progress report. In this graph the abscissa is the consecutive number of a 2-day sidereal time slot and the ordinate is the percentage of computations completed (by the date and time indicated in the lower left corner) for the bins represented by green bars. The red bars stand for corrupt data and white space signifies data still awaiting analysis.
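
The chart itself is an image generated on the site; the sketch below shows one hypothetical way such a chart could be drawn. The slot numbers, percentages and corrupt-slot list are made-up placeholders.

    import matplotlib.pyplot as plt
    import numpy as np

    # Hypothetical progress data: percentage completed per 2-day slot,
    # with some slots marked as corrupt (placeholders, not the real status).
    n_slots = 93
    progress = np.zeros(n_slots)
    progress[:20] = 100.0                       # e.g. the first 20 slots fully analysed
    progress[20:23] = [60.0, 35.0, 10.0]
    corrupt = [5, 41, 42]                       # slots with corrupt data

    slots = np.arange(1, n_slots + 1)
    colors = ["red" if s - 1 in corrupt else "green" for s in slots]
    heights = [100.0 if s - 1 in corrupt else progress[s - 1] for s in slots]

    plt.bar(slots, heights, color=colors, width=1.0, edgecolor="none")
    plt.xlabel("2-day sidereal time slot")
    plt.ylabel("computations completed [%]")
    plt.ylim(0, 100)
    plt.savefig("progress.png")
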

Recently we have rewritten our codes so that they can be run on other clusters. Subsequently we have started to use a cluster of INFN-CNAF in Bologna (Italy) and the Merlin cluster at the Albert Einstein Institute in Golm (Germany). Presently, using all the Merlin processors, we achieve real-time performance, i.e. we analyse a two-day data stretch in two days.

Here is a summary of our available computing power.

      1. Golm (Merlin)          11   Mfilters/day   (using 100 CPUs)
      2. Warsaw (Halo)           4   Mfilters/day
      3. Bialystok (Mikrus)      3.5 Mfilters/day
      4. Bologna                 5   Mfilters/day
Now the main parameters of our computations are: a bandwidth of 1 Hz, 1 spindown parameter (minimal spin-down time equal to 1000 yr), only negative f_dot searched, and a grid minimal match equal to sqrt(3/4). This means there are 59 million filters for each 2-day search, and each filter gives the F-statistic for an array of 2^19 frequencies.

By December 14, 2006 we have analysed 20 2-day data stretches. There are 70 (good) stretches yet to go (see the chart above). If things go smoothly we shall complete this undertaking within the coming year (presumably by December 15, 2007).
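
As a rough back-of-envelope check of that schedule, here is a sketch under the idealized assumption that all four clusters run continuously at the rates listed above, with no verification overhead or downtime:

    # Idealized completion-time estimate from the quoted figures (assumptions only).
    filters_per_stretch = 59e6          # filters per 2-day search (1 Hz band)
    stretches_left = 70                 # good stretches still to analyse
    throughput = 11 + 4 + 3.5 + 5       # Mfilters/day, all four clusters combined

    days_needed = filters_per_stretch * stretches_left / (throughput * 1e6)
    print(f"~{days_needed:.0f} days of uninterrupted computing")   # ~176 days
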

Dec. 15, 2006




The search ended on August 8th, 2007. We have searched 93 two-day data sets in total. The CPU time used amounted to about 200 years, with the following contribution from the clusters employed:
59 % — Merlin (AEI, Golm)
15 % — Halo (ICM, Warsaw)
15 % — CNAF (INFN, Bologna)
11 % — Mikrus (UwB, Bialystok)
Now we shall get to checking the candidates.

Last updated: Sept. 20, 2007