Opening Multiple Instances of Firefox

Previously, I wrote about how to open multiple instances of Safari and Firefox (here). However, that procedure no longer works with newer versions of Firefox, so we need a workaround.

Here is the solution. Save the following script to a location of your choice.


#!/bin/bash
# Note: App* expands to "Application Support"; leave the globs unquoted so they expand.
export ProfDIR="$HOME/Library/App*/Firefox"
export BINDIR="/Applications/Firefox.app/Contents/MacOS"
# Create a randomly named profile
RandomNumber=$RANDOM
$BINDIR/firefox-bin -CreateProfile $RandomNumber
# Copy the default profile over
cp -R $ProfDIR/Profiles/*.default/* $ProfDIR/Profiles/*.$RandomNumber/
# Run the new instance of Firefox (blocks until this instance exits)
$BINDIR/firefox-bin -p $RandomNumber
# Remove the created profile
rm -rf $ProfDIR/Profiles/*.$RandomNumber
# Remove the profile name from profiles.ini (assumes the default profile
# is the only other profile, occupying the first 7 lines of the file)
head -n 7 $ProfDIR/profiles.ini > /tmp/ffprofiles.$$
cat /tmp/ffprofiles.$$ > $ProfDIR/profiles.ini
rm -f /tmp/ffprofiles.$$

Make sure you give the script execute permission with the chmod command (chmod +x). Then calling the script will create a new instance of Firefox for you. Let's say we have stored the script in ~/myscripts/firefox. Then all you need to do is type:

~/myscripts/firefox &

Remember, this assumes the default profile is the only profile present and that you normally start Firefox with the default profile. So, if you have already created multiple profiles, make sure you adjust the code in the "Remove the profile name" section. As always, use-at-your-own-risk rules apply.

Hope this helps.

Posted in Uncategorized | 8 Comments

Slicing ArcGIS ASC files into tiles

There are times when you want to slice an ArcGIS RASTER file stored in ASC format into smaller tiles. This could be due to insufficient memory, or because you want to perform an operation, such as calculating area solar radiation, on a smaller region of your data set.

One solution is to mask a portion of your data. However, if you want to systematically break your data into a set of smaller non-overlapping tiles, it is better to have a program that does it for you automatically. You could still do this manually: you would probably need a spreadsheet to calculate the coordinates of the lower-left corner of every tile, and you would have to adjust for the number of rows and columns to add so that ncols and nrows are divisible by the number of tiles you want in each direction. Then you would need to go into ArcGIS and key in all those numbers one by one.

Let's say you want to break your data into 3 by 2 tiles; that is 6 tiles in total. But what if it is 30 by 20? That is 600 tiles. Well, now you need a script.

The script is called slice_asc_Platform, where Platform is mac, win, or linux. The command-line tool is easy to use:

 slice_asc_[mac,win,linux] InputFile nH nV Output 

where:

  • “InputFile” is, well, the input file, i.e. the ArcGIS ASC RASTER file,
  • nH is the number of tiles in the horizontal direction, i.e. longitudinal,
  • nV is the number of tiles in the vertical direction, i.e. latitudinal,
  • and “Output” is used in naming the output files.

The output files are named using “Output_hXXXXX_vYYYYY.asc”, where XXXXX and YYYYY are the tile numbers in the horizontal and vertical directions. So tile (3,4) would be Output_h00003_v00004.asc, and so on. The tiles are numbered from left to right and from top to bottom.
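
For example, a hypothetical run (file names are illustrative) that splits dem.asc into 3 horizontal by 2 vertical tiles:

 slice_asc_mac dem.asc 3 2 dem_tile

Assuming the numbering starts at 1, this would produce up to six files, dem_tile_h00001_v00001.asc through dem_tile_h00003_v00002.asc (fewer if some tiles are skipped for containing only NO_DATA, as noted below).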

If a tile does not contain any data, i.e. all of its values equal the NO_DATA value, that tile is skipped and not written to disk.

So, where to get the tool? For Mac click here, for Linux click here, and for Windows click here. As usual, I have not tested the Windows version, but the Linux and Mac versions are tested. There is also a Cygwin version that can be downloaded here, and it is tested.

NOTE: the NO_DATA value must be an integer.

Posted in Program, Tech Tips | Leave a comment

Mounting a Folder through a firewall using SSH tunneling

Quite often it happens that you want to access a station located behind a firewall and use the files and documents you have stored there. Particularly if you are a grad student, you may want to connect to your university computer, edit your code, or do whatever it is you do, from a remote location, most likely your home.

One method is to simply ssh to the firewall and from there ssh to your system, enabling X11 forwarding on both ssh connections using -Y or -X. Then you can launch the editor of your choice, or whatever program you need, and start working remotely. However, depending on how busy the firewall is and how good your Internet connection is, performance and quality will vary. Quite often the performance is horrible.

You can also use applications such as VNC or Remote Desktop. But these all share one major problem: most of the time, they are blocked at the firewall. One good alternative is TeamViewer. It is easy to install, easy to use, and did I mention it is also free! (click here.) But again, the performance you get depends on the network connection and your bandwidth.

A better solution is to mount the remote folder as one of your own folders and use the editor on your own computer to edit the files. The only time data needs to pass over the network is when you save or read a file.

Previously, I mentioned how to do this on Mac systems using MacFusion (click here). But if you are outside the university perimeter, you are blocked by the firewall and cannot mount the required remote folder locally. Here, I discuss how to use SSH and SSHFS to accomplish this task.

First you need to have ssh installed. Fortunately, it is already installed on Mac and most Linux systems. Then you need to install SSHFS. On Mac type "sudo port install sshfs" and on Ubuntu Linux type "sudo apt-get install sshfs". You are ready.

Now you need to make a tunnel between your machine and the end station, passing through the firewall. Just type:

ssh -f UserName@FireWall -L PortNumber:EndStation:22 -N

Choose a port number that is not already in use on your system. You can check the ports in use by typing "netstat".

You will be asked for your password on the firewall. After the tunnel has been established, type:

 sshfs -p PortNumber username@127.0.0.1:RemoteLocation LocalMountPoint 

Remember to use the same PortNumber as in the previous command. RemoteLocation is the folder on the end station that you want to mount, e.g. /home/username, and LocalMountPoint is the local folder onto which the remote folder will be mounted.
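
For example, a hypothetical session (the user name, host names, port number, and paths are placeholders):

 ssh -f jdoe@gateway.example.edu -L 2222:workstation.example.edu:22 -N
 sshfs -p 2222 jdoe@127.0.0.1:/home/jdoe ~/mnt/work

When you are done, unmount the folder with "umount ~/mnt/work" on Mac, or "fusermount -u ~/mnt/work" on Linux.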

Now the remote location is mounted on your system as if it were one of your own folders. Just run whatever program you want and point it at the files in that folder.

Enjoy.

Posted in Script, Tech Tips | 2 Comments

Download MOD13A2, Convert, and Graph NDVI Using NCL Scripts

It’s been a while since I started using the NCAR Command Language (NCL). In my opinion, it is indeed a great tool for processing geoscience data and automating batch processing.

Quite recently, a couple of my friends asked me if I could assist them with processing some MODIS data. Even on LinkedIn, someone from Brazil was interested in the same procedure. After sending the scripts to all these people separately, I told myself: why not just upload the code here and let everyone use it?

Here is what this script does:

  1. It downloads the MOD13A2 data for a given tile and year. You can read about the MODDownload script separately here.
  2. It retrieves the NDVI data from all of the downloaded MODIS files (23 per year), scales the data by the proper scale factor, and stores the results in a NetCDF file with (time, lat, lon) dimensions, so it is ready for further time-series analysis.
  3. It graphs all the time series using NCAR Graphics, generating PS or PNG files.

You can download the script from here. After downloading, edit PrepTile and set the tile number and year of your choice (currently it is set to download tile h09v05 for the year 2008). Set the locations where you want the files to be stored, and then run the script; a hypothetical session is sketched below.
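
For example (the actual entry-point name in the archive may differ from what is shown here):

 vi PrepTile     # set the tile (e.g. h09v05), the year (e.g. 2008), and the storage locations
 ncl PrepTile    # assuming PrepTile is a plain NCL script; if it is a shell wrapper, execute it directly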

Feel free to make any changes you like to the code to make it more suitable for your own needs. If you think your changes could be helpful to others, let me know and I will include them in the file for future distribution.

What do you need to run the script?

  1. I have tested the code under Linux and Mac; however, there shouldn’t be any problem under Cygwin on a Windows machine.
  2. You need to have NCL installed properly and in the PATH. You can get NCL from here.
  3. You must have wget installed. On Ubuntu, just type “sudo apt-get install wget”; on Mac, type “sudo port install wget” (that is, if you have MacPorts installed; you can get MacPorts from here).
  4. To generate PNG graphics, you need ImageMagick. Without it, this script can only generate PS graphics. (You might be able to generate other graphic types by changing the script, but currently this is how I made it.) You can get ImageMagick from here.

Ok, hope you like the script and enjoy using NCL.

Posted in Uncategorized | Leave a comment

NCL Script to Get MODIS Latitude/Longitude

The NCAR Command Language (NCL) is a great piece of software for processing and visualizing geoscience data. If you haven’t installed it yet, go here.

NCL handles many types of data, such as NetCDF and HDF. More importantly, the interface for reading these files is the same. MATLAB also handles both formats; however, it has a different interface for querying each file format. That is what I like most about NCL.

MODIS data comes in HDF format. However, the latitude and longitude information is not always stored in the data set. Apparently some products include the lat/lon information, but products like MOD11A1 or MOD13A2 do not. Fortunately, latitude and longitude are easy and fast to compute. You just need to know (1) the vertical tile number, (2) the horizontal tile number, (3) the line number, and (4) the sample number. The line number is the row number of the data (latitude) and the sample number is the column number (longitude). Both of them can be floating point.

I needed to generate this lat/lon information to plot some MODIS data and also to perform some interpolation, so I ended up writing an NCL script to do it. Here is the interface to the script:

procedure MOD2LL( nv,
                  nh,
                  LineN,
                  SampleN,
                  np,
                  Lat,
                  Lon)

where:

  • nv is the vertical tile number,
  • nh is the horizontal tile number,
  • LineN is the line number,
  • SampleN is the sample number,
  • np is the number of points along each side of the tile, for example 1200 for 1 km resolution products,
  • Lat would store the latitude,
  • and Lon would store the longitude.

Both Lat and Lon have the same dimensions as LineN and SampleN. All input variables must be provided as floating point, even the tile numbers.
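
For reference, these are the standard relations for the MODIS sinusoidal grid, which is presumably what the script implements (the grid has 36 tiles in longitude and 18 in latitude, each spanning 10 degrees):

 Lat = 90 - (nv*np + LineN + 0.5) * (10/np)
 Lon = ( (nh*np + SampleN + 0.5) * (10/np) - 180 ) / cos(Lat)

with Lat converted to radians before taking the cosine, and with line and sample assumed to start at 0 in the upper-left corner of the tile.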

Here is a sample output using this procedure and NCL to generate a plot of surface temperature.
MODIS - H08V05 - DoY:162

If you are interested in the script, download it for free by clicking here.

Posted in Script, Tech Tips | Leave a comment

NCL Scripts to Download MODIS Data

MODIS products can be downloaded free of charge from the “Land Processes Distributed Active Archive Center (LP DAAC)” (click here).

There are different products available, with various spatial and temporal resolutions. Some are daily, while others are available on an 8-day or 16-day basis. The pixel size also varies.

There are various methods for downloading these products. However, if you are performing time-series analysis and are interested in downloading, let’s say, 10 years of data, you will need a more automated method.

If you know your way around Linux (or Mac), you can simply pull a whole lot of data from their FTP server using the wget command, as sketched below. There are also scripts in R that let you automate the download and build the mosaic using MRT (click here or here).
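
For example, a hedged one-liner (the server name and directory layout are illustrative and may have changed; check the LP DAAC site for the current address):

 wget -r -np -nH -A "*h09v05*.hdf" ftp://e4ftl01.cr.usgs.gov/MOLT/MOD13A2.005/

This recursively fetches every MOD13A2 HDF file for tile h09v05 under that directory.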

A while back I needed to perform a similar task. Therefore, I developed some NCL scripts to automate the download procedure. While coding those scripts, I tried to be as general as possible. The base download routine in NCL is:

DownloadMODBase( nv[1]:integer,
                 nh[1]:integer,
                 syyyy[1]:integer,
                 smm[1]:integer,
                 sdd[1]:integer,
                 eyyyy[1]:integer,
                 emm[1]:integer,
                 edd[1]:integer,
                 StoreDir[1]:string,
                 ProductName[1]:string,
                 ftpDir[1]:string)

where:

  • nv is the vertical tile number,
  • nh is the horizontal tile number,
  • syyyy, smm, and sdd define the starting date of the interval you want to download,
  • eyyyy, emm, and edd define the ending date of the interval,
  • StoreDir defines the location where you want the files to be stored,
  • ProductName is the name of the product you want to download (a folder with this name is created in the location defined by StoreDir),
  • and finally, ftpDir is the FTP address of the server from which you want to download the data.

You can find the FTP address on the LP DAAC website; however, I do admit that things could be even easier, particularly the part where you need to provide the FTP address. So, I created some more NCL procedures for the products that I needed to download. These procedures already have the FTP address built in; therefore, you do not need to provide it. They are called “DownloadXXXX”, where XXXX identifies the MODIS product. The interface is the same as DownloadMODBase, except that you do not need to provide the product name or the FTP address. Current possible values for XXXX are:

  • MCD12Q1
  • MCD15A2
  • MCD43B3
  • MOD11A1
  • MOD12Q1
  • MOD13A2

Well, these were the products I needed. Feel free to add more product names; send me the code snippet and I will include it in the NCL script.

NOTE: You need to have wget installed. It is easy:

  • Debian-based Linux: sudo apt-get install wget
  • RedHat-based Linux: sudo yum install wget (I think it should be like this; I never tried it myself)
  • Mac: sudo port install wget
  • Windows: sorry folks, generally you are on your own. But on Cygwin, launch the installer and search for the wget package; it will install it for you. I did it once; it was easy and, surprisingly, nothing crashed. (I think they forgot to put the crash code in there.)


Where to download the scripts?

Now the most important question: where to download these scripts! I have uploaded the files to http://www.4shared.com. Just click here to access the file. And remember, these files are free to download.

Posted in Script, Tech Tips | 1 Comment

Installing Overture on a Freshly Installed Ubuntu

Overture is part of the Advanced CompuTational Software (ACTS) Collection, a set of tools mainly funded by the Department of Energy (DOE) to make it easier for end users to write high-performance scientific applications. If you haven’t visited their website, perhaps now is the time (click here).

http://acts.nersc.gov/

Overture focuses on providing a set of object-oriented tools that are particularly useful for Computational Fluid Dynamics (CFD) and combustion problems. Particularly if you are interested in solving these problems on complex domains, perhaps with moving geometry, Overture can be quite handy (click here).

http://acts.nersc.gov/overture/#Introduction

There are instructions on how to install Overture on Linux and Mac systems. However, depending on your system configuration, you might be missing a few prerequisite packages that are necessary to compile Overture or its prerequisites.

This post explains how to install Overture on a newly installed Ubuntu 10.04 (fresh out of the box). I personally tested the procedure twice (so it should work). I’ve tried to document all the steps I took to successfully install Overture and CG, so I hope I have not missed any steps in this post. The starting point is that you have just finished installing Ubuntu 10.04 LTS and want to proceed with the Overture installation. Here are the steps.

Upgrade: First, upgrade everything; Ubuntu will suggest this automatically on the first boot.

Prepare the compilers: Now you need to prepare your system to compile the various packages. A fresh install of Ubuntu contains gcc but not the other required compilers (well, it has some other compilers too). So the first step is to prepare your system to configure, make, and build packages. Here are the packages you need.

  • build-essential
  • manpages-dev
  • gfortran
  • autoconf
  • automake
  • mpich2 (for parallel processing; this gives you the mpiXXX compiler wrappers)

You can either install them from the command line by issuing

sudo apt-get install PackageName

or use the Ubuntu GUI (Synaptic Package Manager) to install them. Either way is fine; a combined command covering the whole list is given below. If you are asked to install extra packages as prerequisites, just allow the installer to do so. I am not listing those packages here.
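
For example, the whole list in one go (package names as listed above; verify them against your Ubuntu release):

 sudo apt-get install build-essential manpages-dev gfortran autoconf automake mpich2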

CSH and/or TCSH: Some scripts in Overture require csh or tcsh. If you want to use those scripts, you need to install these two shells as well (using apt-get or the GUI).

OpenMOTIF: Overture requires MOTIF. You can either follow the instructions on the Overture website or, even simpler, just use apt-get to install:

  • libmotif3
  • libmotif3-dev

That pretty much takes care of OpenMotif.

OpenGL and other required packages: Now you have to take care of OpenGL. Ubuntu apparently comes with OpenGL already installed; however, what’s available is not enough to compile packages that depend on OpenGL. Overture also makes use of some X11 packages that you need to install. For various parts of the code (Overture or its prerequisites) you need all of the following packages installed:

  • libgl1-mesa-dev
  • libglu1-xorg
  • libglu1-xorg-dev
  • libglut3
  • libglut3-dev
  • x11proto-gl-dev
  • x11proto-print-dev
  • libjpeg62-dev
  • libzip-dev
  • libperl-dev
  • libxpm-dev
  • libxp-dev
  • libxmu-dev
  • libxi-dev

That was a long list, I know, but they are all needed (not counting their own dependencies, which I am not listing). A single combined install command is shown below.
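
For convenience, here is the whole list as a single command (package names as of Ubuntu 10.04; some have since been renamed):

 sudo apt-get install libgl1-mesa-dev libglu1-xorg libglu1-xorg-dev \
   libglut3 libglut3-dev x11proto-gl-dev x11proto-print-dev libjpeg62-dev \
   libzip-dev libperl-dev libxpm-dev libxp-dev libxmu-dev libxi-dev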

Installing HDF5

Before installing HDF5 you need to make a symbolic link to “make” called “gmake”. The easiest way is to type “which make” and, wherever make is (most likely /usr/bin), create the link in that directory by issuing:

sudo ln -s make gmake

Now you are ready to install HDF5. The latest version of HDF5 as of this writing is 1.8.8, and you can obtain it from:

http://www.hdfgroup.org/HDF5/

The link in the Overture installation instructions didn’t work for me, so I obtained the HDF5 source code from the link above. Just follow the instructions that come with HDF5 and you should be good to go. Here are the steps:

  • unzip the downloaded HDF5 package
  • ./configure --prefix=$INSTALLDIR
  • make
  • make check (optional, but I recommend it)
  • make install
  • make check-install (optional, but I recommend it)

INSTALLDIR tells configure where to install HDF5; I chose /usr/local/hdf5. When installing a package into a directory that needs root access, instead of running the whole install with sudo I find it easier to first create the folder with “sudo mkdir dirname”, then give myself write permission with “sudo chown username:usergroup dirname”. Once the installation is done, I revert the ownership with “sudo chown root:root dirname”. You might find this approach easier too. The whole sequence is sketched below.
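
For example (the install path is the one chosen above; $USER stands in for your user and group names):

 sudo mkdir /usr/local/hdf5
 sudo chown $USER:$USER /usr/local/hdf5
 ./configure --prefix=/usr/local/hdf5
 make
 make check
 make install
 make check-install
 sudo chown -R root:root /usr/local/hdf5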

Installing A++:

Now it is time to install A++. Just follow the instructions given on the Overture website (click here); they should work. I didn’t install P++, so I can’t give you any instructions on that.

Installing LAPACK:

Now you need to install BLAS and LAPACK. The latest version of LAPACK as of this writing is 3.4.0, which can be downloaded from:

http://www.netlib.org/lapack/#_lapack_version_3_4_0_2

Unpack the LAPACK package and change into its folder. Inside the unpacked folder, type:

cp make.inc.example make.inc

and then

make blaslib lapacklib tmglib

Then copy “liblapack.a”, “librefblas.a”, and “libtmglib.a” to the location where you want them; I chose to put them in /usr/local/lapack/lib. I also duplicated “librefblas.a” as a file called “libblas.a”, since apparently that is what most packages look for.
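
A minimal sketch of those copy steps, assuming the /usr/local/lapack/lib location mentioned above:

 sudo mkdir -p /usr/local/lapack/lib
 sudo cp liblapack.a librefblas.a libtmglib.a /usr/local/lapack/lib/
 sudo cp /usr/local/lapack/lib/librefblas.a /usr/local/lapack/lib/libblas.a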

Installing PETSC:

Now it is time to install PETSc. PETSc is optional, but I decided to install it. Download PETSc here. The version I downloaded was petsc-2.3.2-p10. Here is how to install it:

  • Unpack the package
  • cd into the unpacked petsc folder
  • export PETSC_DIR=`pwd`
  • ./config/configure.py --with-debugging=0 --with-fortran=0 --with-matlab=0 --with-mpi=0 --with-shared=1 --with-dynamic=1 --prefix=$PETSCINSTALLDIR --download-c-blas-lapack=1
  • export PETSC_ARCH="the architecture suggested at the end of configure"
  • make all
  • make install
  • make test (optional, but I recommend it)

Make sure you have write permission on PETSCINSTALLDIR. You can also tell the PETSc configuration where your BLAS and LAPACK libraries are, instead of letting PETSc download them again. The environment variables would look something like the sketch below.
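
For example (the values here are illustrative; PETSC_ARCH must be whatever configure prints at the end):

 export PETSC_DIR=$HOME/src/petsc-2.3.2-p10
 export PETSCINSTALLDIR=/usr/local/petsc
 export PETSC_ARCH=linux-gnu-c-opt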

Installing Overture:

Now you should be ready to install Overture. Just follow the instructions on the Overture website (click here); they should work.

Some notes on defenv: if you have followed the same instructions as here, you should set XLIBS, MOTIF, and OpenGL to /usr. HDF, APlusPlus, LAPACK, Overture, CG, and CGBUILDERPREFIX depend on where you installed those packages and where you want to install Overture and CG. You don’t need to touch anything else in defenv. However, I decided to do everything in bash, so I duplicated defenv as defenv_bash and changed the file accordingly.

After compiling Overture, if you run check.p it will fail. But don’t panic. All you need to do is issue:

export PATH=./:$PATH

Your installation is fine; check.p simply requires ./ to be in the PATH.

Now it is time to install CG. Before compiling CG you need to edit one of its files: edit CGDIR/sm/src/getRayleighSpeed.C and change the printf to cout (with the accompanying changes that requires); otherwise it will fail to compile. Then just follow the instructions on the Overture website to compile CG (click here).

Now you can enjoy Overture and CG.

Posted in Uncategorized | 4 Comments

Why move towards GPUs?

Quite recently I had a discussion with experts in geosciences about using GPGPUs for computations. One question you usually get when talking about GPGPUs is: why GPUs, and not just invest in CPU-based supercomputers?

Well, I offered a couple of reasons, such as accessibility, power consumption, cost per computing core, etc. I also came upon the article “Why Nvidia’s chips can power supercomputers” on CNET News (click here).

I found it quite interesting in how it explains some of these reasons. I hope you enjoy it as much as I did.

Posted in Uncategorized | Leave a comment

MATLAB API to CUDA-C Implementation of SEBS on GPU is now ready

Quite recently I announced that the SEBS algorithm, developed by Prof. Bob Su, has been implemented on the GPU using CUDA-C. Harnessing the many computational cores available on graphics processing units (GPUs) reduces the total computational time of the SEBS algorithm. Our tests show that one can achieve a speedup of up to 200 times on an NVIDIA Tesla C1060 card. That is indeed a lot of speedup.

However, more important than computation time is how easy a piece of software or computer code is to use. I have provided the entire C code, which reads the data, calculates everything, and stores the output in NetCDF files; however, I noticed that many researchers and scientists are not willing to get involved with C code. Their concern is quite valid and I can understand it. The C language is not that easy to use, particularly if your focus is not programming. It is much easier to use MATLAB or other software to process and prepare the input files and to plot the results.

Therefore, I decided to provide a MATLAB API for the CUDA-C implementation of SEBS on GPUs. Everything related to preparing and using the GPU from MATLAB is handled by the API, and the user does not need to worry about anything regarding the GPU. The user does not even need to know how to program a GPU using MATLAB.

To use the MATLAB API to the CUDA-C implementation of SEBS on GPU you need (1) MATLAB (of course), (2) the MATLAB Parallel Computing Toolbox, and (3) a GPU card that MATLAB supports. For more information you can refer to:

http://www.mathworks.nl/help/toolbox/distcomp/

There are 4 functions in total:

  • SEBSGPU_WORLD=InitSEBSGPU_WORLD(): This function does not require any input. It reads the CUDA-C PTX files and prepares the GPU compute kernels, along with some other information needed to run code on the GPU from MATLAB. That is all you need to do; everything is handled in that function.
  • SEBS_kb_1: This function receives some inputs and computes d0, z0h, and z0m. You do not need to worry about memory transfers between the host and the GPU device; everything is handled by the function.
  • SEBS_EnergyBalance: The same as above, but calculates the instantaneous values of evapotranspiration and some other outputs.
  • SEBS_Daily_Evapotranspiration: performs the daily calculation of ET.

All the inputs are automatically transferred to your graphics card and the calculation is done there. Once the GPU finishes, the results are transferred back to the host, into regular MATLAB variables that you already know how to deal with.

Every detail related to the GPU is hidden from the user.

Hope you will enjoy the MATLAB API to CUDA-C implementation of SEBS on GPU.

For more information, refer here.

Posted in Uncategorized | Leave a comment

Harmonic ANalysis of Time-Series (HANTS)

Quite recently, I implemented the HANTS algorithm in MATLAB. HANTS was originally developed at NLR (http://gdsc.nlr.nl/gdsc/en/tools/hants) to remove cloud effects and temporally interpolate data. The program is available free of charge from the provided link.

HANTS can be used to remove outliers, smooth the data set, interpolate missing data, and compress the data.

For more information on the MATLAB implementation of HANTS and some example outputs, click here.

Posted in Uncategorized | Leave a comment