Monday 28 August 2023

HTML form won't submit (Angular)

It turns out that if you mix normal HTML forms with Angular ones (i.e. when FormsModule is imported), Angular disables the default form submission behaviour, so no POST request is made. To restore the normal behaviour, all you need to do is add the ngNoForm attribute to the form you want to behave normally.

Saturday 6 May 2023

Why won't my Angular app include cookies in a POST request?

Recently, while working on an Angular web app using CORS on localhost against a Spring Boot server, I had an issue where my GET requests were fine, but the POST requests (in this case for logout) did not include a cookie. At first I thought I had some kind of problem with my CORS config - but this was fine:


        var corsConfig = new CorsConfiguration();
        corsConfig.setAllowedOriginPatterns(List.of(
                "http://localhost:4200",
                "http://localhost:8080"));
        corsConfig.applyPermitDefaultValues();
        corsConfig.setAllowCredentials(true);


It turns out the signature for the POST request used by Angular is slightly different! The second argument is actually the request body, not the options object - so withCredentials needs to go in the third argument. In my case:

    return this.http.post<void>(this.logoutUrl, null, {withCredentials: true});


This might help someone who is starting to question their understanding of CORS (again).

Sunday 24 May 2020

AWS Keyspaces - Managed Cassandra review

AWS recently went live with Keyspaces, their managed version of Cassandra (https://aws.amazon.com/keyspaces/). This service is primarily aimed at users who have been managing their own Cassandra clusters and are looking to move to a managed solution. Billing is available on an ad-hoc or reserved capacity basis and it's simple to connect using existing Cassandra applications or drivers. However, there are a few issues that I've noticed that make Keyspaces currently a poor replacement for your Cassandra cluster:

  • TTL (automatic time-based record expiry) is currently not supported by AWS Keyspaces. This alone makes it difficult to port standard Cassandra data models over.
  • No cross-region support (yet)
  • 1 MB row size limit (similar to DynamoDB's 400 KB item limit). This may be related to the fact that Keyspaces is more closely related to DynamoDB than to true Cassandra (as noted by ScyllaDB at https://www.scylladb.com/2019/12/04/managed-cassandra-on-aws-our-take/)

In short, AWS Keyspaces seems to be more of a beta than a real GA release. Once they add support for TTL it looks like a promising service, but it will have to compete with DataStax's own managed Cassandra offering, DataStax Astra (https://www.datastax.com/products/datastax-astra). Astra launched only about 3 weeks after Keyspaces, which may explain Keyspaces' short preview stage and its rush to release while still missing support for fundamental pieces of Cassandra.

Sunday 13 January 2019

Sending an SMS via Intent in Android

Alchemy allows users to donate to their chosen charities via text message, which it used to do automatically using Android's SmsManager functionality. Unfortunately, Google have recently updated the Play Store's terms and conditions to restrict the READ_SMS and SEND_SMS permissions to applications whose primary purpose is to act as an alternative SMS application on Android (replacing the usual Messages app, for example).

I applied for an exemption for Alchemy, but this wasn't granted. This is understandable, I think - Android security has a lot of issues, and excess permissions are definitely one of them. Alchemy is a good citizen and only interacts with a single SMS number, but a malicious application could easily use the same permissions for nefarious purposes.

Without an exemption, there was no choice but to update Alchemy to stop using SmsManager. This meant asking an existing SMS app to send the donation SMS instead - via an intent. It took me some time to find out exactly how to do this, but the code required to pre-populate an SMS for a user to send is:


Uri uri = Uri.parse("smsto:" + charity.getNumber()); // smsto: URIs are handled by SMS apps
Intent intent = new Intent(Intent.ACTION_SENDTO, uri);
intent.setData(uri);
intent.putExtra("address", charity.getNumber()); // recipient number
intent.putExtra("sms_body", keyword); // pre-populated message text
intent.putExtra("exit_on_sent", true); // often ignored by SMS apps
if (intent.resolveActivity(getPackageManager()) != null) {
    startActivityForResult(intent, 1);
    donationViewModel.recordDonation(this.donations, charity.getName(), smsKeywordToDonation(charity.getCost(keyword)));
} else {
    Toast.makeText(this, "No SMS provider found", Toast.LENGTH_SHORT).show();
}

The full code can be found on GitHub. Note that, unfortunately, SMS applications generally do not seem to respect the "exit on sent" request, so the user must navigate back manually. A bonus of this change is that Alchemy no longer requires any additional permissions from the user. Previous donations must now be stored in Alchemy itself, instead of being read from the SMS history. This may cause some loss of data in the migration, but should result in more robust behaviour from now on.

Sunday 4 March 2018

Faster python for data science and scientific computing



Scientific computing and HPC developers will probably be familiar with Intel's compiler suite, which can be used to compile your C, C++ and Fortran code instead of the free GCC compilers and can often result in significant performance improvements without changing a single line. Further improvements can be made by swapping out (generally fantastic) open source maths libraries such as ATLAS or BLAS for the equivalent functionality in Intel's MKL (Math Kernel Library). Again, this is usually simply a matter of compiling your existing code against Intel's library, and can result in very impressive speed gains for very little work.

What has this to do with Python? Most of Python's best-known data science and scientific computing libraries are written in C/C++, with a thin wrapper allowing them to be called easily from Python. If you've ever wondered why numpy, scipy, scikit-learn and pandas are so much faster than writing the same code yourself in native Python, it's because all of the work in a function like np.multiply() is actually carried out in C "under the hood".
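The gap is easy to see with a small sketch (purely illustrative, assuming numpy is installed): np.multiply does all its work in one call into compiled C, while the native-Python equivalent pays interpreter overhead on every element.

```python
import timeit

import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# One call into numpy's compiled C loop
fast = np.multiply(a, b)

# The same arithmetic in native Python: one interpreted iteration per element
slow = np.array([x * y for x, y in zip(a, b)])

# Identical results - the only difference is where the loop runs
assert np.array_equal(fast, slow)

# Timing both shows the C loop winning by orders of magnitude
print(timeit.timeit(lambda: np.multiply(a, b), number=10))
print(timeit.timeit(lambda: [x * y for x, y in zip(a, b)], number=10))
```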

Previously, if you had a licence for Intel's compiler suite you could compile these Python libraries yourself and take advantage of Intel's speed boost in your Python applications, but this required both familiarity with C code compilation and an expensive licence. However, Intel have now made available a free pre-compiled Python distribution with all the major packages (numpy, scipy, pandas etc.), based on the popular Anaconda distribution. According to KDnuggets, Intel have also re-written some common functions entirely for further optimization - in particular it looks like numpy and scipy's FFT (Fast Fourier Transform) functions have been enhanced significantly. Depending on your workload, using this distribution could boost the execution speed of these libraries by 10-50% without the need for any code changes.

If you're interested in optimizing Python code that you wrote yourself and that isn't available in any existing (C-implemented) library, check out Cython as a way of implementing the most performance-sensitive parts of your code in C. Unlike using the Intel distribution linked above, converting part of your code to Cython takes some development work, but even with the free GCC compilers you'll see a significant speed increase over native Python code.

Monday 5 February 2018

Pip unable to download packages running in an Ubuntu docker image on kubernetes

I recently ran into a problem where pip was unable to download packages while running in a docker image on a kubernetes pod. The issue seemed to be that it could not resolve the address of the package repository - likely due to some kind of DNS issue within either docker or kubernetes. The solution turned out to be to create a file at /etc/docker/daemon.json and enter Google's DNS servers as follows:

{ "dns": ["8.8.8.8", "8.8.4.4"] }

I was working from an Ubuntu base image, so I created the file as above before installing and starting docker. Keep in mind that as docker images usually don't contain systemd, it's not all that easy to restart docker once you have installed it, so creating the configuration first is pretty useful. You can find more information on this at https://development.robinwinslow.uk/2016/06/23/fix-docker-networking-dns/

Wednesday 21 June 2017

Alchemy app released (in beta)

I've finally published the beta version of Alchemy on the play store. It's still in beta, but it seems to be working well. Give it a shot if you're looking for a different way to donate to charity!

Tuesday 20 June 2017

Memory leak with Android LinearLayout and Picasso

I recently ran into an issue using Picasso with Android's LinearLayout. I was using Picasso to load images from a URL into a LinearLayout, handling errors with Picasso's builder object as follows:

Picasso.Builder builder = new Picasso.Builder(this.mContext);
builder.listener(new Picasso.Listener()
{
    @Override
    public void onImageLoadFailed(Picasso picasso, Uri uri, Exception exception)
    {
      picasso.load(mThumbIds[0]).into(imageView); // on failure just load the default image
    }
});
Picasso picasso = builder.build();


I was recycling the images correctly with convertView (using a viewholder class), but I couldn't track down the source of a memory leak which occurred every time a new image was loaded - eventually causing the app to crash. The leak, however, went away when I stopped using Picasso's builder and instead used a simple try/catch setup:

try{
    Picasso.with(mContext).load(logo_url).placeholder(R.drawable.alchemy).into(holder.imageView);
}
catch (Throwable e){
     Picasso.with(mContext).load(R.drawable.alchemy).into(holder.imageView);
}

Sunday 28 May 2017

Installing OpenCV3 for Python 3 on Windows

I need to use Python for face recognition as part of the server for a charity app I'm working on. The app is supposed to show charity logos, but in the case of smaller charities, sometimes the Google Custom Search I use to find the logos instead returns a picture with a person in it. I need to identify cases where this happens and use a placeholder image instead.

My first attempt to install opencv was through pip from anaconda:

pip install opencv-python

However this gave me the following error when I tried to import cv2:

ImportError: DLL load failed: The specified module could not be found.

I uninstalled the Anaconda version and found a 64-bit wheel of opencv for Python 3.6 at http://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv. Downloading this and installing it with pip worked perfectly. I don't know why the original error occurred, but it looks like Python 3 support might not be great in opencv at the moment.

Thursday 18 May 2017

Using Python 3 to extract files from an encrypted archive with a password

Recently I needed to use python to extract the contents of a password-protected zip archive and I came across a few issues I thought would be good to document.

Python has a built-in zipfile library that is really good at handling zip files, but unfortunately it has a few limitations when it comes to encrypted zip files.
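As a minimal sketch of extracting a member with a password (the file name and password here are placeholders; the demo builds a throwaway in-memory archive, since zipfile can read ZipCrypto-encrypted archives but cannot create them):

```python
import io
import zipfile

# Build a small in-memory archive to read back - a stand-in for a real
# password-protected zip file (zipfile cannot write encrypted archives)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.txt", "hello")

# The password must be passed as bytes in Python 3, not as a str
password = "secret".encode("utf-8")

with zipfile.ZipFile(buf) as zf:
    # open() streams a single member; pwd is ignored for unencrypted
    # members, but this is the call shape needed for an encrypted archive
    with zf.open("data.txt", pwd=password) as f:
        contents = f.read()
```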


The first issue I came across was some unclear documentation for zipfile's "open" method in Python 3. The method takes a "pwd" argument for the archive's password, but in Python 3 you need to convert the password to bytes before calling open. Unfortunately, the library only seems to support the legacy CRC-32 based encryption scheme (ZipCrypto) - meaning that the default Linux zip encryption will work, but AES will not. I was also unable to get this to work with archives created by 7-Zip or WinZip.

Saturday 22 April 2017

New website design

It had been a while since I updated my website so I found a new template from http://themes.3rdwavemedia.com and redesigned the whole site. I particularly like the github and blog integration with the new theme - hopefully that will encourage me to write more here too, especially as I'm working on the new Alchemy app.

Tuesday 19 July 2016

New LOFAR baseline comparison tool

I've written a new tool for comparing the lengths of different baselines to a single LOFAR station. It should be useful when looking for delay and rate solutions with FRING or a similar task. The tool is located at http://www.colmcoughlan.com/lofar.html.

Tuesday 24 May 2016

Installing AIPS on Ubuntu 16.04

I've recently installed AIPS (version 31DEC16) on Ubuntu 16.04. Here are my notes on the installation experience. Thanks to recent compiler work on AIPS, this was a much better experience than on previous recent versions of Ubuntu. Note that this was a "text" (compile-from-source) install - if you just need to use AIPS you will likely have a faster experience with the pre-compiled binary installation (see http://www.aips.nrao.edu/dec16.shtml for full instructions).


As before, the Ubuntu dependencies are:


        libx11-dev and its dependencies
        x11proto-xext-dev
        libxext-dev
        libxpm-dev
        libncurses5-dev
        libncursesw5-dev
        libbsd-dev
        libedit-dev


I use gcc and gfortran to compile AIPS - you need to specify their location in the AIPS installer (/usr/bin/gfortran is both the Fortran compiler and the linker for me, while /usr/bin/gcc compiles the C code). I also had trouble with the -axWPT and -ip compiler flags, so I turned them off for both gfortran and gcc. XAS isn't built using these directives, however, so you also need to go to Y/SERVERS and edit XAS.SHR to remove the -axWPT and -ip flags when compiling the TV server.


A new problem with this combination of Ubuntu and AIPS is an errant "-c" ending up in the link command when making XAS after XAS.SHR is run. This means you will get an error that XAS failed to link, but if you open the makefile at Y/SERVERS/XAS/makefile and remove the "-c" option from the LOCALOPTS_LNX64 variable, you can then just type "make" to correctly build and set up XAS (I did this in a separate terminal while the rest of the installation was proceeding). Make sure the resulting xas executable has been successfully moved to LNX64/LOAD.


Make sure to add


        sssin        5000/tcp    SSSIN        # AIPS TV
        ssslock        5002/tcp    SSSLOCK        # AIPS TV Lock
        msgserv        5008/tcp    MSGSERV        # AIPS Message Server
        tekserv        5009/tcp    TEKSERV        # AIPS TekServer
        aipsmt0        5010/tcp    AIPSMT0
        aipsmt1        5011/tcp    AIPSMT1
        aipsmt2        5012/tcp    AIPSMT2
        aipsmt3        5013/tcp    AIPSMT3
        aipsmt4        5014/tcp    AIPSMT4
        aipsmt5        5015/tcp    AIPSMT5
        aipsmt6        5016/tcp    AIPSMT6
        aipsmt7        5017/tcp    AIPSMT7


to /etc/services to get AIPS to run properly. To do this you may need to open the file as root.


One last thing - add the AIPS path to your profile so you can run it straight from the terminal. For me the line was ". /home/colm/aips/LOGIN.SH; $CDTST". The $CDTST bit at the end is needed if you intend to compile custom AIPS tasks.

Monday 9 May 2016

New website

I've made a website, where I hope to include information about some of the work that I'm doing. You can see it at www.colmcoughlan.com

Thursday 19 November 2015

Negative diagonal elements in the covariance matrix returned by numpy.polyfit

I ran into a strange issue fitting a line to a small number of data points using numpy.polyfit that I thought was worth documenting.

I ran a command of the form:

p, cov = np.polyfit(x, y, 1, w=w, cov=True)

where x, y and w were arrays of length 3.

The command returned the correct slope and y-intercept values, however the covariance matrix, cov, had strictly negative diagonal terms. This is apparently because numpy scales the covariance matrix as described here.

The scaling applied is a factor such that

factor = resids / (len(x) - order - 2.0)

If, like me, you are making a first-order polynomial fit to a dataset of 3 values, the denominator works out to -1 (numpy's "order" in this formula is the polynomial degree plus one, so len(x) - order - 2.0 = 3 - 2 - 2 = -1), which has the effect of multiplying the expected matrix by -1. If I had been unlucky enough to have 4 points, the denominator would have been zero and thrown bigger errors.

In my case, looking at the results here, I could recover the correct values just by multiplying the matrix by minus one. This is a strange weighting to apply to a small dataset - I assume it makes sense if you have many points and the developers wanted to keep numpy.polyfit consistent.
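The arithmetic behind that sign flip can be checked in a few lines (a sketch of the scaling denominator only, not numpy's actual code; the assumption that "order" means degree plus one is based on the numpy source of the time):

```python
def cov_scaling_denominator(n_points: int, degree: int) -> float:
    """Denominator of numpy.polyfit's covariance scaling factor
    (old behaviour): len(x) - order - 2.0, where order = degree + 1."""
    order = degree + 1
    return n_points - order - 2.0

# 3 data points, first-order (straight line) fit: denominator is -1,
# so every element of the covariance matrix has its sign flipped
print(cov_scaling_denominator(3, 1))  # -1.0

# 4 points would give a zero denominator - division by zero
print(cov_scaling_denominator(4, 1))  # 0.0
```

This matches the behaviour described above: with exactly 3 points the correct matrix can be recovered by multiplying by minus one.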

Monday 5 October 2015

Fixing AIPS after upgrading to El Capitan on OSX

I recently ran into some AIPS trouble after upgrading from Mavericks to El Capitan. My installation of AIPS complained about X11 and about being unable to find some libraries that normally come with the binary distribution I had actually installed.

The library fix should have been the easiest - AIPS was looking for:

  • libsvml.dylib
  • libirc.dylib
  • libimf.dylib

All of these come with the binary and are located in 31DEC14/MACINT/LIBR/INTELCMP of the AIPS directory (note I'm still using the frozen AIPS from December 2014).

AIPS automatically adds this location to DYLD_LIBRARY_PATH when you run LOGIN.SH, but even though the libraries are on the path, AIPS does not seem to be able to find them when it needs them. I fixed this temporarily by making a symlink for each of the three libraries in 31DEC14/MACINT/LOAD, where all the AIPS executables are stored. Now when the tasks look for the libraries it appears as though they are in the same directory. Unfortunately this means having to launch AIPS from 31DEC14/MACINT/LOAD, but it works for now. I'll update this if I get around to finding out why the libraries on DYLD_LIBRARY_PATH aren't being found.

The second problem was that the El Capitan upgrade broke the X11 installation I had. I reinstalled X11 from http://xquartz.macosforge.org/landing/, but AIPS still complained. Reading the error message, it looked like AIPS was looking for X11 in /usr/X11R6, whereas I had it installed in /usr/X11. Again I fixed this with a symlink (ln -s /usr/X11 /usr/X11R6), but because of some changes in El Capitan I could not edit /usr even with superuser privileges. I restarted the computer in recovery mode (holding cmd+r), used the terminal to allow changes to /usr with the command "csrutil disable" and rebooted. I then made the symlink, rebooted into recovery mode and re-enabled protection with "csrutil enable" (the steps are detailed at http://stackoverflow.com/questions/32590053/copying-file-under-root-got-failed-in-os-x-el-capitan-10-11/32590885#32590885).

AIPS is now working fine on the upgraded OS. It's likely that these issues will be taken care of automatically in a fresh install of AIPS when Eric Greisen updates the Mac binaries for El Capitan, so hopefully these steps will only be necessary if you're keeping an old version.

Tuesday 14 October 2014

Installing LOFAR software on a mac (Mavericks)

I've installed some of the LOFAR data reduction software on a macbook pro running OSX 10.9 (Mavericks). This is probably not the best method in the world to install the LOFAR software, but it works reasonably well and I haven't seen any other documentation for a mac installation so it might be useful to someone.

I roughly followed the same procedure as for Ubuntu, but with a few differences.

First I used macports to download as many of the dependencies I listed here as I could find. The full list of macports I installed is

  binutils @2.24_0 (active)

  boost @1.56.0_1+no_single+no_static+python27 (active)
  bzip2 @1.0.6_0 (active)
  cctools @855_1+llvm35 (active)
  cctools-headers @855_0 (active)
  cfitsio @3.340_0 (active)
  cloog @0.18.2_0 (active)
  cmake @3.0.2_0 (active)
  curl @7.38.0_0+ssl (active)
  curl-ca-bundle @7.38.0_0 (active)
  cython_select @0.1_0 (active)
  db48 @4.8.30_3 (active)
  db_select @0.1_2 (active)
  expat @2.1.0_0 (active)
  fftw-3 @3.3.4_0 (active)
  gcc49 @4.9.1_0 (active)
  gcc_select @0.1_8 (active)
  gettext @0.19.2_0 (active)
  gmp @6.0.0_1 (active)
  hdf5 @1.8.13_0+cxx (active)
  icu @53.1_1 (active)
  isl @0.13_0 (active)
  ld64 @236.3_1+llvm35 (active)
  libarchive @3.1.2_0 (active)
  libcxx @183506_1 (active)
  libedit @20121213-3.0_0 (active)
  libffi @3.1_4 (active)
  libgcc @4.9.1_0 (active)
  libiconv @1.14_0 (active)
  libidn @1.29_0 (active)
  libmpc @1.0.2_0 (active)
  libxml2 @2.9.1_0 (active)
  llvm-3.5 @3.5-r216817_0+assertions (active)
  llvm_select @1.0_0 (active)
  lzo2 @2.06_0 (active)
  mpfr @3.1.2-p10_0 (active)
  ncurses @5.9_2 (active)
  nosetests_select @0.1_0 (active)
  openssl @1.0.1i_0 (active)
  pcre @8.35_0 (active)
  py27-cython @0.21_0 (active)
  py27-nose @1.3.1_0 (active)
  py27-numpy @1.9.0_0 (active)
  py27-pyfits @3.3_0 (active)
  py27-pywcs @1.11-4.8.2_1 (active)
  py27-scipy @0.14.0_0+gcc48 (active)
  py27-setuptools @6.0.2_0 (active)
  python27 @2.7.8_2 (active)
  python_select @0.3_3 (active)
  sqlite3 @3.8.6_0 (active)
  SuiteSparse @4.2.1_3 (active)
  swig @3.0.2_0 (active)
  swig-python @3.0.2_0 (active)
  wcslib @4.23_1 (active)
  xz @5.0.7_0 (active)
  zlib @1.2.8_0 (active)

I then downloaded and compiled FFTW and boost manually. This was required for boost because macports has trouble building boost with the GNU compiler suite, and the version of boost built with clang doesn't seem to work with the LOFAR software. I thought something similar was happening with FFTW, but it may have just been a linking error on my part - so you can try linking against the macports-provided installation of FFTW instead of building your own if you prefer.

The commands I used were:

Boost:

./bootstrap.sh --prefix=/Users/admin/boost --with-python=/opt/local/bin/python2.7 --with-toolset=gcc
./b2

./b2 install

FFTW

./configure --prefix=/Users/admin/fftw --enable-threads
make -j 8
make install
./configure --prefix=/Users/admin/fftwf --enable-float --enable-threads
make -j 8
make install

Issues

There were lots of failures the first time I tried to compile the LOFAR software. Refer to the Ubuntu instructions for the gist of what I was doing. I installed everything LOFAR-related into the /opt/lofar-software directory.

cmake -DBUILD_TESTING=NO -DCMAKE_INSTALL_PREFIX=/opt/lofar-software -DUSE_FFTW3=Yes -DUSE_THREADS=YES -DUSE_HDF5=YES -DUSE_OPENMP=YES -DDATA_DIR=/opt/lofar-software/data -DFFTW3_ROOT_DIR=/usr/local/lib ../..

Installed as usual

export LOFAR_ROOT="/opt/lofar-software"
export PATH="${PATH}:${LOFAR_ROOT}/bin/"
export LD_LIBRARY_PATH="${LOFAR_ROOT}/lib/"
export PYTHONPATH="${LOFAR_ROOT}/lib/python/"

Next is the installation of PYRAP. PYRAP has not been updated in a while and you need to apply the patch indicated here to make sure it installs correctly on a mac (the bug reporter links the patch; the plus lines in the diff file indicate lines to be added, the minus lines indicate lines to be removed). Basically you need to ensure it only tries to build for one architecture. I did this manually by going to the affected setup.py files (you'll spot them if you try to build), but you might be able to do this a lot faster if you know more about patching than I do. Then build with:

./batchbuild-trunk.py --casacore-root=/opt/lofar-software/ --prefix=/opt/lofar-software/ --python-prefix=/opt/lofar-software/lib/python --extra-root=/opt/local/ --boost-lib=boost_python-mt --universal=x86_64

(You might notice the boost_python-mt library here. That is from the macports boost install (clang) which caused no problems in this step.)

Compiled and installed CASAREST as usual

Added data folder as usual.

I needed to patch some LOFAR files to get them to compile successfully. I now suspect some of these patches might have been unnecessary as I was using incorrect settings.

Necessary patches:

1. LCS/pytools: tconvert.cc needs to have the Python header included at the start (#include <Python.h>).

2. Baselinefitting.cc needs to have sincosf replaced with separate calls to sin and cos (apparently sincosf is not available in OSX's version of the standard maths library). I commented out the original call and replaced it as follows:


//sincosf(phase_ij_obs - phase_ij_model, &sin_dphase, &cos_dphase);
sin_dphase = sin(phase_ij_obs - phase_ij_model);
cos_dphase = cos(phase_ij_obs - phase_ij_model);

Probably unnecessary patches:

The first three patches are probably redundant if you remember to include the -DUSE_SHMEM=OFF compiler option.

1. LCS/common: dlmalloc.c. Comment out the reference to HAVE_USR_INCLUDE_MALLOC_H, forcing the code to define its own structures.
2. shmem_alloc.cc: comment out the definition of "union semun".
3. shmem_alloc.h: comment out the inclusion of malloc.h.

I don't know why these didn't recur:

4. image2d.cpp needs to have #include <cstring> added.
5. tickset.h needs to have exp10 functions replaced with pow(10,x) functions.
6. imagewidget.cpp same exp10 patch as above.
7. processcommander.cpp. Add definition for HOST_NAME_MAX (a large long).

Note that the CMAKE/variants/GNU.cmake file needs to be edited to point to the macports compilers as follows:

set(GNU_C         /opt/local/bin/gcc)      # GNU C compiler
set(GNU_CXX       /opt/local/bin/g++)      # GNU C++ compiler
set(GNU_Fortran   /opt/local/bin/gfortran) # GNU Fortran compiler
set(GNU_ASM       /opt/local/bin/gcc)      # GNU assembler

After applying the necessary patches above, the following compilation command should work when called from /build/gnu_opt.


cmake -DCASACORE_ROOT_DIR=/opt/lofar-software/ -DBUILD_SHARED_LIBS=ON -DUSE_OPENMP=ON -DBUILD_PACKAGES="ParmDB Calibration DP3 Pipeline MSLofar LofarFT GSM" -DCMAKE_INSTALL_PREFIX:PATH=/opt/lofar-software/ -DCMAKE_C_COMPILER=/opt/local/bin/gcc -DCMAKE_CXX_COMPILER=/opt/local/bin/g++ -DFFTW3_LIBRARY=/Users/admin/fftw/lib/libfftw3.a -DFFTW3F_LIBRARY=/Users/admin/fftwf/lib/libfftw3f.a -DFFTW3F_THREADS_LIBRARY=/Users/admin/fftwf/lib/libfftw3f_threads.a -DPYTHON_EXECUTABLE:FILEPATH=/opt/local/bin/python2.7 -DF2PY_EXECUTABLE=/opt/local/bin/f2py-2.7 -DUSE_LOG4CPLUS=OFF -DUSE_SHMEM=OFF -DBOOST_ROOT=/Users/admin/boost -Wno-dev ../..

Then run make and make install to finish up.

One final thing to do is to add the following lines to the .profile file in your home directory.

source /opt/lofar-software/lofarinit.sh

export DYLD_LIBRARY_PATH="$LD_LIBRARY_PATH"

This should allow LOFAR commands to be run from any terminal.

Tuesday 7 October 2014

Installing LOFAR software on Ubuntu 12.04


These are some notes on how I installed some LOFAR software on Ubuntu 12.04. They are based on some notes I found on the LOFAR wiki at http://www.lofar.org/operations/doku.php?id=engineering:user_software:ubuntu_12_4.

Install main LOFAR suite
===============================================
N.B. Notes below install to strange directory. Build in one directory, and install everything else into the CASACORE directory (binary)


Dependencies:
libgtkmm-2.4-dev python-matplotlib python-pyfits libatlas-base-dev
mpi-default-bin mpi-default-dev libfreetype6-dev python-setuptools
libxml2-dev libpng12-dev libcfitsio3 libcfitsio3-dev libboost-all-dev
autoconf autoconf-archive autogen automake binutils-dev cmake cmake-curses-gui
cvs doxygen flex gfortran git guile-1.8-dev ipython libblas-dev libblitz0-dev
libboost-all-dev libboost-dev libfftw3-dev libfftw3-doc libgfortran3
libglib2.0-dev libgsl0-dev liblapack-dev liblog4cxx10 liblog4cxx10-dev
libopenmpi-dev libpqxx3-dev libx11-dev mgdiff mpi-default-dev patch pgplot5
python-dev python-numeric python-numpy python-scipy scons subversion-tools
swig bison libbison-dev
tcl tcl-dev tk tk-dev tk8.5-dev tcl8.5-dev
libhdf5-dev (or: libhdf5-serial-1.8.4 libhdf5-serial-dev)
### "libhdf5-serial" is needed for DAL, it doesn't work with "libhdf5-openmpi"
wcslib-dev liblog4cplus-dev liblog4cplus-1.0-4 cython
###for parmdbplot:
python-sip python-qt4




########## LOFAR software:

Download packages:
##################
  mkdir Downloads
  cd Downloads
  wget ftp://ftp.atnf.csiro.au/pub/software/asap/data/asap_data.tar.bz2

Download and Build Casacore:
############################
  tar -xjvf ../Downloads/asap_data.tar.bz2
(This creates the "data" subdirectory)
  mkdir BuildDir/casacore
  cd BuildDir/casacore
  svn co http://casacore.googlecode.com/svn/trunk source
  mkdir -p build/opt
  cd build/opt
  cmake -DBUILD_TESTING=NO -DCMAKE_INSTALL_PREFIX=/opt/lofar-stuff -DUSE_FFTW3=Yes -DUSE_THREADS=YES  -DUSE_HDF5=YES -DUSE_OPENMP=YES -DDATA_DIR=/opt/lofar-stuff/data  ../../source
make -j12
make install

Install Casacore data:
############################
  cd BuildDir/..
  tar -xjvf Download/asap_data.tar.bz2

Download and Build pyrap
############################
  mkdir /cluster/lofar/BuildDir/pyrap
  cd /cluster/lofar/BuildDir/pyrap
  svn co http://pyrap.googlecode.com/svn/trunk dev-source
  export LOFAR_STUFF_ROOT="/opt/lofar-stuff"
  export PATH="${PATH}:${LOFAR_STUFF_ROOT}/bin/"
  export LD_LIBRARY_PATH="${LOFAR_STUFF_ROOT}/lib/"
  export PYTHONPATH="${LOFAR_STUFF_ROOT}/lib/python/"
  ln -s /opt/soft/lofar-stuff/lib/ /opt/soft/lofar-stuff/lib64
  cd dev-source/
  ./batchbuild-trunk.py --casacore-root=/opt/lofar-stuff --prefix=/opt/lofar-stuff --python-prefix=/opt/lofar-stuff/lib/python

Download and Build casarest
###########################
  mkdir /cluster/lofar/BuildDir/casarest
  cd /cluster/lofar/BuildDir/casarest
  svn co https://svn.astron.nl/casarest/trunk/casarest source
  mkdir build
  cd build
  cmake -DCASACORE_ROOT_DIR=/opt/lofar-stuff -DBUILD_ALL=1 -DCMAKE_INSTALL_PREFIX:PATH=/opt/lofar-stuff ../source
  make -j12
  make install


Download and Build the LOFAR Software
#####################################
Download latest copy of LOFAR software
 mkdir -p build/gnu_opt
 cd build/gnu_opt
 cmake -DCASACORE_ROOT_DIR=/opt/lofar-stuff/ -DBUILD_SHARED_LIBS=ON -DUSE_OPENMP=ON -DBUILD_PACKAGES="ParmDB Calibration DP3 Pipeline MSLofar LofarFT GSM" -DCMAKE_INSTALL_PREFIX:PATH=/opt/lofar-stuff ../../LOFAR/
  make -j12
  make install



Note: I found that the above instructions installed the pyrap python libraries in a different folder to the one where LOFAR was looking, so I found them in lofar-software/lib/python and manually copied them into the right directory (the one LOFAR regards as PYTHONPATH).

Tuesday 29 July 2014

Harvard-style referencing using Latex and Mendeley

I needed to use Harvard-style referencing for my thesis and ran into some trouble as I was using Mendeley to generate my bibliography. The approach that worked for me in the end was to use the natbib latex package with the \bibliographystyle{agsm} command. I copied the local version of the style file to my working directory and edited the write.url function to

FUNCTION {write.url}
{ skip$ }

to get rid of the links to webpages that were appearing in my output. I had a lot of unusual (Russian) names in my references which were giving errors, so I edited the final .bbl file (not the .bib created by Mendeley) manually to replace the troublesome characters.

Installing AIPS on Ubuntu 14.04

The instructions for installing AIPS on Ubuntu 13.04 (http://astronomicalproblems.blogspot.co.uk/2013/05/installing-aips-on-ubuntu-1304.html) work well for 14.04 too. Warning! see the full edit below.

[EDIT]

Some Intel processors were giving a little trouble even when following the method linked above. The single offending file was AU7B.FOR, located in 31DEC14/AIPS/SUB (or the equivalent), and the issue was the MONTH parameter declared at the top of the file.

By deleting the MONTH declaration in the FORTRAN code and the initialization of MONTH to JAN, FEB etc. a little further down, and replacing it with

character (len=3), dimension(12) :: MONTH
MONTH(1) = 'JAN'
MONTH(2) = 'FEB'
MONTH(3) = 'MAR'
MONTH(4) = 'APR'
MONTH(5) = 'MAY'
MONTH(6) = 'JUN'
MONTH(7) = 'JUL'
MONTH(8) = 'AUG'
MONTH(9) = 'SEP'
MONTH(10) = 'OCT'
MONTH(11) = 'NOV'
MONTH(12) = 'DEC'

you can get AIPS to compile with gfortran as usual. I think the problem relates to old-fashioned Fortran code, modern gfortran and Intel processors (I don't seem to have the same trouble with AMD processors, though they are of different vintages).

The resulting aips installation does compile - but there is a serious problem with IMAGR, and likely other tasks too. The best bet is to download an older version of gfortran compatible with AIPS and use that to compile for a fully functional AIPS installation.
