To: Ken Young 
Cc: eketo@cfa.harvard.edu, ckatz@cfa.harvard.edu, cqi@cfa.harvard.edu, mgurwell@cfa.harvard.edu, npatel@cfa.harvard.edu,
     jweintroub@cfa.harvard.edu, aargon@cfa.harvard.edu, jzhao@cfa.harvard.edu, rrao@cfa.harvard.edu, qzhang@cfa.harvard.edu,
     tksridha@cfa.harvard.edu, gpetitpa@sma.hawaii.edu, kyoung@cfa.harvard.edu
Subject: Re: Interim Correlator size data file
----------------------------------------



On Thu, Aug 16, 2012 at 2:58 AM, Chunhua Qi  wrote:


      On Wed, Aug 15, 2012 at 3:32 AM, Ken Young  wrote:
            On Mon, 13 Aug 2012, Chunhua Qi wrote:

            > Hi Taco,
            >
            > For MIR, we need 3x the RAM to process the data. For this data set, we
            > could only hope to process one sideband at a time. CF has servers MARS (96 GB
            > RAM) and NEPTUNE (128 GB RAM). Both servers should be able to load one
            > sideband of data in IDL unless there is any restriction on RAM usage by a
            > single user on the CF servers. To load in one sideband, e.g. lsb:
            >
            > IDL> readdata,sideband='l'
            >
            > Let me know if it works.

I was able to read the upper sideband into MIR/IDL on Neptune.  The
readdata command took 4 hours to complete and grabbed over 100 GB of
RAM.  Unfortunately, I could not do much with the data once it had been
read in.  Although the select command worked, as did plot_var, the
apply_tsys, pass_cal and uti_ave_int commands all bombed.
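That is roughly what Chunhua's 3x rule of thumb above would predict.  A
minimal sketch of the budget (the 40 GB single-sideband size below is a
made-up example, not this dataset's actual size):

```python
def ram_needed_gb(sideband_gb, factor=3):
    """MIR rule of thumb quoted above: processing needs ~3x the data size in RAM."""
    return sideband_gb * factor

# A hypothetical 40 GB single-sideband load would want ~120 GB of RAM,
# which exceeds MARS (96 GB) but still fits on NEPTUNE (128 GB).
print(ram_needed_gb(40))  # 120
```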

I tried reading a single sideband into Miriad.  Instead of seg faulting,
as it did on Jupiter, it terminated with the message:
ERROR: Number of integration scans exceeded the limit 10000.

It's certainly possible that both Miriad and MIR fail on this data set
because of the large number of scans, rather than its overall size, so
this is a very imperfect test of what we'll get with the Interim
Correlator.  Nonetheless, it is not clear at this point that we have
any data reduction package that will be able to handle the largest
data sets we will be generating when the Interim Correlator comes online.

Taco