NCL Script to Get MODIS Latitude/Longitude

The NCAR Command Language (NCL) is excellent software for processing and visualizing geoscience data. If you haven’t installed it yet, go here.

NCL handles many types of data, such as NetCDF and HDF. More importantly, the interface for reading these files is the same. MATLAB also handles both formats; however, it uses a different set of interfaces to query each file format. That is what I like most about NCL.

MODIS data come in HDF format; however, the latitude and longitude information is not always stored in the data set. Some products do include lat/lon information, but others, such as MOD11A1 or MOD13A2, do not. Fortunately, it is easy and fast to compute the latitude and longitude. You just need to know (1) the vertical tile number, (2) the horizontal tile number, (3) the line number, and (4) the sample number. The line number is the row number of the data (latitude) and the sample number is the column number (longitude). Both of them can be floating point values.

I needed to generate this lat/lon information to plot some MODIS data and also perform some interpolation. Thus, I ended up writing an NCL script to do that. Here is the interface to the script.

procedure MOD2LL( nv,
                  nh,
                  LineN,
                  SampleN,
                  np,
                  Lat,
                  Lon)

where:

  • nv is the vertical tile number,
  • nh is the horizontal tile number,
  • LineN is the line number,
  • SampleN is the sample number,
  • np is the number of points in the tile, for example 1200 for 1km resolution products,
  • Lat would store the latitude,
  • and Lon would store the longitude.

Both Lat and Lon have the same dimensions as LineN and SampleN. All input variables must be provided as floating point, even the tile numbers.
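For reference, the underlying computation on the MODIS sinusoidal grid can be sketched in Python. This is not the NCL script itself, just an illustration of the math it performs; the constants below are the standard MODIS sphere radius and grid extents.

```python
import math

# Standard MODIS sinusoidal-grid constants (assumed to match what
# the NCL script uses internally).
R_EARTH = 6371007.181           # radius of the MODIS sphere, in meters
TILE_SIZE = 1111950.5196666666  # width/height of one tile, in meters
X_MIN = -20015109.354           # west edge of the grid, in meters
Y_MAX = 10007554.677            # north edge of the grid, in meters

def mod2ll(nv, nh, line_n, sample_n, np_):
    """Latitude/longitude of pixel (line_n, sample_n) in tile (nh, nv).

    line_n and sample_n may be floating point; np_ is the number of
    pixels along one side of the tile (e.g. 1200 for 1 km products).
    """
    pixel = TILE_SIZE / np_
    # Sinusoidal x/y coordinates of the pixel center.
    x = X_MIN + (nh * np_ + sample_n + 0.5) * pixel
    y = Y_MAX - (nv * np_ + line_n + 0.5) * pixel
    lat = math.degrees(y / R_EARTH)
    lon = math.degrees(x / (R_EARTH * math.cos(math.radians(lat))))
    return lat, lon

# Upper-left pixel of tile h08v05 at 1 km resolution:
lat, lon = mod2ll(5, 8, 0, 0, 1200)
```

For tile h08v05 this returns a latitude just under 40°N and a longitude near 130.5°W, which matches the documented upper-left corner of that tile.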

Here is a sample output using this procedure, with NCL generating the plot of surface temperature.
MODIS - H08V05 - DoY:162

If you are interested in the script, you can download it for free by clicking here.

Posted in Script, Tech Tips | Leave a comment

NCL Scripts to Download MODIS Data

MODIS products can be downloaded free of charge from the “Land Processes Distributed Active Archive Center (LP DAAC)” (click here).

There are different products available, with various spatial and temporal resolutions. Some are available daily, while others are available on an 8-day or 16-day basis. The pixel size also varies.

There are various ways to download these products. However, if you are performing a time-series analysis and want to download, let’s say, 10 years of data, you will need a more automated method to download these data sets.

If you know your way around Linux (or Mac), you can simply pull a whole lot of data from their FTP server using the wget command. There are also scripts in R that let you automate the download and create mosaics using MRT (click here or here).

A while back I needed to perform a similar task, so I developed some NCL scripts to automate the download procedure. While writing those scripts, I tried to keep them as general as possible. The base download routine in NCL is

DownloadMODBase( nv[1]:integer,
                 nh[1]:integer,
                 syyyy[1]:integer,
                 smm[1]:integer,
                 sdd[1]:integer,
                 eyyyy[1]:integer,
                 emm[1]:integer,
                 edd[1]:integer,
                 StoreDir[1]:string,
                 ProductName[1]:string,
                 ftpDir[1]:string)

where:

  • nv is the vertical tile number,
  • nh is the horizontal tile number,
  • syyyy, smm, and sdd define the starting date of the interval you want to download,
  • eyyyy, emm, and edd define the ending date of the interval,
  • StoreDir defines the location where you want the files to be stored,
  • ProductName is the name of the product you want to download (a folder with this name is created in the location defined by StoreDir),
  • and finally, ftpDir is the FTP address of the server you want to download the data from.
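Under the hood, a routine like this just walks the date range and invokes wget once per day. Here is a rough Python equivalent; the YYYY.MM.DD directory layout, the file-name pattern, the server address, and the wget flags are my assumptions, and the actual NCL script may differ in detail.

```python
import datetime
import subprocess

def download_mod_base(nv, nh, syyyy, smm, sdd, eyyyy, emm, edd,
                      store_dir, product_name, ftp_dir, run=True):
    """Rough Python equivalent of DownloadMODBase (illustration only).

    Assumptions: the server stores each day's granules under a
    YYYY.MM.DD directory, and the tile appears in the file name as
    e.g. "h08v05".
    """
    tile = "h%02dv%02d" % (nh, nv)
    day = datetime.date(syyyy, smm, sdd)
    end = datetime.date(eyyyy, emm, edd)
    cmds = []
    while day <= end:
        url = "%s/%s/%s/" % (ftp_dir.rstrip("/"), product_name,
                             day.strftime("%Y.%m.%d"))
        # Recursive fetch, no subdirectories, accepting only the
        # granules for the requested tile.
        cmd = ["wget", "-r", "-nd", "-A", "*%s*.hdf" % tile,
               "-P", "%s/%s" % (store_dir, product_name), url]
        cmds.append(cmd)
        if run:
            subprocess.call(cmd)
        day += datetime.timedelta(days=1)
    return cmds

# Build (but do not run) the commands for three days of MOD11A1,
# tile h08v05; "example-server" is a placeholder address.
cmds = download_mod_base(5, 8, 2010, 1, 1, 2010, 1, 3,
                         "/tmp/modis", "MOD11A1.005",
                         "ftp://example-server/MOLT", run=False)
```

The `run=False` flag lets you inspect the generated commands before letting them loose on the server.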

You can find the FTP address on the LP DAAC website; however, I admit that things can be made even easier, particularly the part where you need to provide the FTP address. So I created some more NCL procedures for the products that I needed to download. These procedures already contain the FTP address, so you do not need to provide it. They are called “DownloadXXXX”, where XXXX defines the MODIS product. The interface is the same as DownloadMODBase, except that you do not need to provide the product name or the FTP address. The current possible values for XXXX are:

  • MCD12Q1
  • MCD15A2
  • MCD43B3
  • MOD11A1
  • MOD12Q1
  • MOD13A2

Well, these were the products I needed. Feel free to add more products; send me the code snippet and I will include it in the NCL script.

NOTE: You need to have wget installed. It is easy:

  • Debian-based Linux: sudo apt-get install wget
  • Red Hat-based Linux: sudo yum install wget (I think it should be like this; I have never tried it myself)
  • Mac: sudo port install wget
  • Windows: Sorry folks, you are generally on your own. But with Cygwin, launch the installer and search for the wget package; it will install it for you. I did it once, it was easy and, surprisingly, nothing crashed. (I think they forgot to put the crash code in there.)


Where to download the Scripts?

Now for the most important question: where to download these scripts! I have uploaded the files to http://www.4shared.com. Just click here to access them. And remember, these files are free to download.

Posted in Script, Tech Tips | 1 Comment

Installing Overture on a Freshly Installed Ubuntu

Overture is part of the Advanced CompuTational Software (ACTS) Collection, a set of tools mainly funded by the Department of Energy (DOE) to make it easier for end users to write high-performance scientific applications. If you haven’t visited their website, perhaps now is the time (click here).

http://acts.nersc.gov/

Overture focuses on providing a set of object-oriented tools that are particularly useful for Computational Fluid Dynamics (CFD) and combustion problems. It can be quite handy if you are interested in solving these problems on complex domains, perhaps with moving geometry (click here).

http://acts.nersc.gov/overture/#Introduction

There are instructions for installing Overture on Linux and Mac systems. However, depending on your system configuration, you might be missing a few prerequisite packages that are necessary to compile Overture or its prerequisites.

This post explains how to install Overture on a newly installed Ubuntu 10.04 (fresh out of the box). I personally tested the procedure twice (so it should work), and I have tried to document all the steps I took to successfully install Overture and CG. So I am hoping I have not missed any steps in this post. The starting point is that you have just finished installing Ubuntu 10.04 LTS and want to proceed with the Overture installation. Here are the steps.

Upgrade: First, upgrade everything, Ubuntu will suggest that automatically on the first boot.

Prepare the compilers: Now you need to prepare your system to compile different packages. A fresh install of Ubuntu contains gcc but not the other required compilers (well, it has some other compilers too). So the first step is to prepare your system to configure, make, and build packages. Here are the packages you need:

  • build-essential
  • manpages-dev
  • gfortran
  • autoconf
  • automake
  • mpich2 (for parallel processing, this gives you mpiXXX compilers)

You can either install them on the command line by issuing

sudo apt-get install PackageName

or use the Ubuntu GUI Synaptic Package Manager to install them. Either way is fine. If you are asked to install some extra packages as prerequisites, just allow the installer to do it; I am not listing those packages here.

CSH and/or TCSH: Some scripts in Overture require csh or tcsh. If you want to use those scripts, you need to install these two shells as well (using apt-get or the GUI).

OpenMOTIF: Overture requires MOTIF. You can either follow the instructions on the Overture website or, even simpler, just use apt-get to install:

  • libmotif3
  • libmotif3-dev

That pretty much takes care of OpenMotif.

OpenGL and other required packages: Now you have to take care of OpenGL. Ubuntu apparently comes with OpenGL already installed; however, what is available is not enough to compile packages that depend on OpenGL. Overture also makes use of some X11 packages that you need to install. For various parts of the code (Overture or its prerequisites) you need all of the following packages installed:

  • libgl1-mesa-dev
  • libglu1-xorg
  • libglu1-xorg-dev
  • libglut3
  • libglut3-dev
  • x11proto-gl-dev
  • x11proto-print-dev
  • libjpeg62-dev
  • libzip-dev
  • libperl-dev
  • libxpm-dev
  • libxp-dev
  • libxmu-dev
  • libxi-dev

That was a long list, I know, but they are all needed (not to mention the packages they depend on, which I am not listing).

Installing HDF5

Before installing HDF5, you need to make a symbolic link to “make” called “gmake”. The easiest way is to type “which make”; then, wherever make is located (most likely /usr/bin), create the symbolic link by issuing:

sudo ln -s make gmake

Now you are ready to install HDF5. The latest version of HDF5 as of this writing is 1.8.8, and you can obtain it from:

http://www.hdfgroup.org/HDF5/

The link in the Overture installation instructions didn’t work for me, so I obtained the HDF5 source code from the link above. Just follow the instructions that come with HDF5 and you should be good to go. Here are the steps:

  • unzip the downloaded HDF5 package
  • type ./configure --prefix=$INSTALLDIR
  • make
  • make check (optional, but I recommend it)
  • make install
  • make check-install (optional, but I recommend it)

INSTALLDIR tells configure where to install HDF5. I chose to install HDF5 in /usr/local/hdf5. When installing a package in a directory that needs root access, instead of using sudo I have found it easier to first make the folder by issuing “sudo mkdir dirname” and then give myself permission to write in the folder using “sudo chown username:usergroup dirname”. Once I am done with the installation, I revert the ownership by issuing “sudo chown root:root dirname”. You might find this approach easier too.

Installing A++:

Now it is time to install A++. Just follow the instructions given on the Overture website (click here); they should work. I didn’t install P++, so I can’t give you any instructions on that.

Installing LAPACK:

Now you need to install BLAS and LAPACK. The latest version of LAPACK as of this writing is 3.4.0, which can be downloaded from:

http://www.netlib.org/lapack/#_lapack_version_3_4_0_2

Unpack the LAPACK package and change into its folder. Inside the unpacked LAPACK folder, type:

cp make.inc.example make.inc

and then

make blaslib lapacklib tmglib

Then copy “liblapack.a”, “librefblas.a”, and “libtmglib.a” to the location where you want them. I chose to put them in /usr/local/lapack/lib. I also duplicated “librefblas.a” as a file called “libblas.a”; apparently that is what most packages look for.

Installing PETSC:

Now it is time to install PETSC. PETSC is optional, but I decided to install it. Download PETSC here. The version I downloaded was petsc-2.3.2-p10. Here is how to install it:

  • Unpack the package
  • cd to the petsc unpacked package
  • export PETSC_DIR=`pwd`
  • ./config/configure.py --with-debugging=0 --with-fortran=0 --with-matlab=0 --with-mpi=0 --with-shared=1 --with-dynamic=1 --prefix=$PETSCINSTALLDIR --download-c-blas-lapack=1
  • export PETSC_ARCH="the architecture suggested at the end of configure"
  • make all
  • make install
  • make test (optional but I recommend it)

Make sure you have write permission on PETSCINSTALLDIR. You can also tell the PETSC configuration where your BLAS and LAPACK libraries are, instead of letting PETSC download them again.

Installing Overture:

Now you must be ready to install Overture. Just follow the instruction on overture website (click here). The installation instruction should work.

Some notes on defenv: If you have followed the same instructions as here, you should set XLIBS, MOTIF, and OpenGL to /usr. HDF, APlusPlus, LAPACK, Overture, CG, and CGBUILDERPREFIX depend on where you have installed those packages and where you want to install Overture and CG. You don’t need to touch anything else in defenv. However, I decided to do everything in bash, so I duplicated defenv as defenv_bash and changed the file accordingly.

After compiling Overture, if you run check.p it will fail. But don’t panic. All you need to do is issue:

export PATH=./:$PATH

Your installation is fine, but check.p requires ./ to be in the path.

Now it is time to install CG. Before compiling CG you need to edit one of its files: edit CGDIR/sm/src/getRayleighSpeed.C, change the printf to cout, and make the necessary related changes; otherwise it will fail to compile. Then just follow the instructions on the Overture website to compile CG (click here).

Now you can enjoy Overture and CG.

Posted in Uncategorized | 4 Comments

Why moving towards GPUs?

Quite recently I had a discussion with experts in geosciences about using GPGPUs for computations. One question you usually get when talking about GPGPUs is: why GPUs, and not just invest in CPU-based supercomputers?

Well, I provided a couple of reasons, such as accessibility, power consumption, cost per computing core, etc. I also came across the article “Why Nvidia’s chips can power supercomputers” on CNET News (click here).

I found it quite interesting in how it explains some of the reasons. I hope you will enjoy it as much as I did.

Posted in Uncategorized | Leave a comment

MATLAB API to CUDA-C Implementation of SEBS on GPU is now ready

Quite recently I announced that the SEBS algorithm, developed by Prof. Bob Su, has been implemented on the GPU using CUDA-C. Harnessing the many computational cores available on graphics processing units (GPUs) reduces the total computational time of the SEBS algorithm. Our tests show that one can achieve a speedup of up to 200 times on an NVIDIA Tesla C1060 card. This is indeed a lot of speedup.

However, more important than the computation time is how easy a piece of software or computer code is to use. I have provided the entire C code, which reads the data, calculates everything, and stores the output in NetCDF files; however, I noticed that many researchers and scientists are not willing to get involved with C code. Their concern is quite valid and I can understand it. The C language is not that easy to use, particularly if your focus is not programming. It is much easier to use MATLAB or other software to process and prepare the input files and plot the results.

Therefore, I decided to provide a MATLAB API to the CUDA-C implementation of SEBS on GPUs. Everything related to preparing and using the GPU from MATLAB is handled by this API, and the user does not need to worry about anything regarding the GPU. The user does not even need to know how to program the GPU in MATLAB.

To use the MATLAB API to the CUDA-C implementation of SEBS on GPUs, you need (1) MATLAB (of course), (2) the MATLAB Parallel Computing Toolbox, and (3) a GPU card that MATLAB supports. For more information you can refer to:

http://www.mathworks.nl/help/toolbox/distcomp/

There are 4 functions in total:

  • SEBSGPU_WORLD=InitSEBSGPU_WORLD(): This function does not require any input. It reads the CUDA-C PTX files and prepares the GPU computing kernels, along with some other information needed to run code on the GPU from MATLAB. That’s all you need to do; everything is handled in that function.
  • SEBS_kb_1: This function receives some input and then computes d0,z0h, and z0m. You do not need to worry about the memory transfer between the host and the GPU device. Everything is handled by the function.
  • SEBS_EnergyBalance: The same as above, but calculates the instantaneous values for evapotranspiration and some other outputs.
  • SEBS_Daily_Evapotranspiration: performs daily calculation of ET.

All the inputs are automatically transferred to your graphics card and the calculation is done. Once the GPU is done with the calculation, the results are transferred back to the host as regular MATLAB variables that you already know how to deal with.

Every detail related to the GPU is hidden from the user.

Hope you will enjoy the MATLAB API to CUDA-C implementation of SEBS on GPU.

For more information, refer to here.

Posted in Uncategorized | Leave a comment

Harmonic ANalysis of Time-Series (HANTS)

Quite recently, I implemented the HANTS algorithm in MATLAB. HANTS was originally developed at NLR (http://gdsc.nlr.nl/gdsc/en/tools/hants) to remove cloud effects and temporally interpolate data. The program is available free of charge from the provided link.

HANTS can be used to remove outliers, smooth the data set, interpolate missing data, and compress the data.
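At its core, HANTS is an iterative harmonic least-squares fit with outlier rejection: fit a truncated Fourier series to the valid points, discard points that fall too far below the fitted curve (clouds bias values low), and refit. The following Python sketch is my own minimal illustration of that idea, not the NLR or MATLAB code; the function name, the single-worst-point rejection rule, and the tolerance handling are simplified assumptions.

```python
import math

def hants(y, num_harmonics=1, tol=0.1, max_iter=10, period=None):
    """Minimal HANTS-style fit: iteratively fit a Fourier series and
    reject points lying more than `tol` below the curve."""
    n = len(y)
    period = period or n
    valid = [True] * n

    def design_row(t):
        row = [1.0]
        for k in range(1, num_harmonics + 1):
            w = 2.0 * math.pi * k * t / period
            row += [math.cos(w), math.sin(w)]
        return row

    def lstsq(rows, rhs):
        # Solve the normal equations A^T A c = A^T b by Gaussian elimination.
        m = len(rows[0])
        ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
        atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(m)]
        for i in range(m):
            for j in range(i + 1, m):
                f = ata[j][i] / ata[i][i]
                for k in range(i, m):
                    ata[j][k] -= f * ata[i][k]
                atb[j] -= f * atb[i]
        c = [0.0] * m
        for i in range(m - 1, -1, -1):
            c[i] = (atb[i] - sum(ata[i][k] * c[k] for k in range(i + 1, m))) / ata[i][i]
        return c

    fit = list(y)
    for _ in range(max_iter):
        rows = [design_row(t) for t in range(n) if valid[t]]
        rhs = [y[t] for t in range(n) if valid[t]]
        c = lstsq(rows, rhs)
        fit = [sum(ci * ri for ci, ri in zip(c, design_row(t))) for t in range(n)]
        # Reject the worst point lying more than `tol` below the curve.
        worst, worst_dev = None, tol
        for t in range(n):
            if valid[t] and fit[t] - y[t] > worst_dev:
                worst, worst_dev = t, fit[t] - y[t]
        if worst is None:
            break
        valid[worst] = False
    return fit

# A clean annual cycle with one simulated cloud-contaminated sample:
y = [0.5 + 0.3 * math.sin(2 * math.pi * t / 36) for t in range(36)]
y[10] = 0.0  # cloud drop
fit = hants(y, num_harmonics=1, tol=0.05)
```

After the contaminated sample is rejected, the harmonic fit passes through the remaining points and effectively reconstructs the missing value.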

For more information on the MATLAB implementation of HANTS and some example outputs, click here.

Posted in Uncategorized | Leave a comment

SEBS is now Available on GPU

Global High Resolution Estimation of Evapotranspiration – SEBS on GPU using CUDA-C

I had the chance to come to the Netherlands and collaborate with a highly experienced team of hydrologists from the University of Twente – ITC (www.itc.nl). During my visit, I worked on various projects related to the Surface Energy Balance System (SEBS) algorithm, originally developed by Prof. Bob Su (resume). I was working under the supervision of Prof. Su and Dr. Joris Timmermans.

During my stay I implemented the SEBS algorithm on Graphics Processing Units (GPUs) using CUDA-C. SEBS is an algorithm for estimating evapotranspiration along with other turbulent fluxes and parameters. The outputs of the SEBS algorithm are:

  • Z0m: Roughness height for momentum transfer,
  • Z0h: Roughness height for heat transfer,
  • d0: displacement height,
  • Rn: Net Radiation,
  • G0: Ground heat flux,
  • H: Sensible heat flux,
  • LE: Latent heat flux,
  • Evap_Fr: Evaporative fraction,
  • Re_i: Relative evaporation,
  • Ustar: Friction velocity,
  • H_DL: Sensible heat flux of dry condition,
  • H_WL: Sensible heat flux of wet condition,
  • Edaily: Daily Evapotranspiration,
  • Rndaily: Daily Net Radiation.

The CUDA-C implementation of the SEBS algorithm achieved a speedup of about 380 times on NVIDIA Tesla M1060 cards (compiled with -arch=sm_11 -use_fast_math).

Further information can be found here.


Posted in Projects | 1 Comment