Thursday, August 2, 2012

Jacob's MATLAB functions

I've used MATLAB for most of my plotting needs for over a year, so most of my "final" plotting scripts are written in MATLAB.  During that time, I wrote a number of custom functions that are shared across all my scripts.  Unfortunately, they aren't included in the scripts themselves, so if you want to use my scripts you'll need to manually add them to your MATLAB path so MATLAB can see them.  If you don't, you'll see errors like "Undefined function or variable 'strrep_many'" when you try to run my scripts.

These functions are stored on Sox.  They can be found in 3 directories:
  • /Users/oberman/Documents/MATLAB/ColorUtils
  • /Users/oberman/Documents/MATLAB/MiscUtils
  • /Users/oberman/Documents/MATLAB/TimeUtils

If you already know how to add folders to the MATLAB path (and/or already have a folder where you keep custom functions), just copy the scripts from those 3 directories to somewhere on your path.  You'll be good to go.


If not, you'll need to either modify your startup.m file or create one if you don't have it.  startup.m is a MATLAB script that runs automatically every time MATLAB starts.  It can execute any commands you like; here, we're going to use it to add my folders to your path so MATLAB can see the functions within them.

The default location where MATLAB looks for a startup.m file is the folder ~/Documents/MATLAB/, where ~ is your home directory.  If you don't have a ~/Documents/MATLAB/ folder, try starting MATLAB and then exiting; it should exist after you've started MATLAB once.  Create a file in that folder called "startup.m" and put the following lines in it:
addpath('/Users/oberman/Documents/MATLAB/ColorUtils');
addpath('/Users/oberman/Documents/MATLAB/TimeUtils');
addpath('/Users/oberman/Documents/MATLAB/MiscUtils');
Now, the next time you start MATLAB, my functions should work just as if they were built in.  You are of course welcome to add other lines to your startup.m (it's an ordinary MATLAB script).  You can also copy my scripts to another location and add that folder instead; this might be helpful if you want to modify or expand on something I did.
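
After restarting MATLAB, a quick way to confirm the path is set up correctly is to ask MATLAB where it finds one of the functions, for example:

% Should print the full path to strrep_many.m rather than 'not found'
which strrep_many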

Friday, February 17, 2012

To ensure that wet deposition (or at least the chance for it) would be realistically modeled, we investigated different physics settings in WRF. Here are a few notes on our findings:


(1) Be careful how you plot WRF precipitation. It is accumulated precipitation, written every hour (as currently configured, "history_interval" in namelist.input = 60 minutes). Some models accumulate precipitation in "buckets," with the bucket emptied at every history write. WRF is not like that: it has an accumulated-precipitation bucket that grows with time, so the WRF accumulated precipitation at any particular time is the accumulated precipitation from the start of the run up to that time.
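
To get the precipitation that fell during a single history interval, difference consecutive accumulated values. A minimal MATLAB sketch (the filename is just an example; it assumes netCDF wrfout files readable with ncread, with accumulated precipitation split into the usual RAINC and RAINNC fields):

% Accumulated precipitation since the start of the run (mm)
rainc  = ncread('wrfout_d01_example', 'RAINC');   % convective (cumulus scheme)
rainnc = ncread('wrfout_d01_example', 'RAINNC');  % grid-scale (microphysics)
accum  = rainc + rainnc;                          % (west_east, south_north, Time)
% Precipitation that fell during each one-hour history interval
hourly = diff(accum, 1, 3);                       % difference along the Time dimension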

(2) WRF precipitation agrees better with NARR precipitation when using the Kain-Fritsch cumulus parameterization (cu_physics = 1 in namelist.input). This is not the same cumulus parameterization that Claus and Caitlin used (Grell, cu_physics = 5).


(3) The sensitivity to the land surface model is very small, so continuing to use the Pleim-Xiu LSM (sf_surface_physics = 7 in namelist.input), as Claus and Caitlin had, is fine.


(4) Changing the cumulus parameterization has very little influence on nudged temperatures (because they are nudged); the change is made purely for precipitation's sake.

Thursday, February 16, 2012

Configure MCIP 3.6 to output full-layer pressure (PRESF)

To compare CMAQ model output against OMI satellite retrievals using Jacob's processing scheme, it may be necessary to interpolate vertically between the CMAQ and satellite columns. By default, MCIP v3.6 outputs pressure at layer midpoints (one value per layer); however, pressure at the full layer interfaces may be needed for some interpolation schemes.
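
As a rough illustration of the kind of vertical interpolation involved, here is a minimal MATLAB sketch that maps a model profile onto a set of target pressure levels. The variable names (cmaq_pres, cmaq_conc, sat_pres) are placeholders, not part of Jacob's actual scheme:

% Interpolate a CMAQ profile onto satellite pressure levels (in log-pressure)
cmaq_logp = log(cmaq_pres(:));          % CMAQ layer pressures (Pa)
sat_logp  = log(sat_pres(:));           % satellite level pressures (Pa)
% interp1 needs monotonic sample points; sort to be safe
[xs, order] = sort(cmaq_logp);
sat_conc = interp1(xs, cmaq_conc(order), sat_logp, 'linear');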

Here are instructions from Tanya Otte (EPA MCIP developer) to modify MCIP to output PRESF, the full layer pressure values. This was tested in my copy of MCIP 3.6 on nitrate (/home/bickford/MCIP3.6), and worked on the first try.

*************************************************************************************

From Tanya Otte (Otte.Tanya@epamail.epa.gov):

Here are the mods that you need to make to MCIP to add PRESF to the output. As I explained in the reply via m3list, you will only be outputting the levels above the surface (such that the number of full levels output is really one fewer than exist, but matches the number of mid-layers in the model). The pressure at the lowest full-level (the surface) is in METCRO2D in variable PRSFC.

Caveat 1: I have not tested this; it's mostly just looking at what I believe the changes are that need to be made.

Caveat 2: A new version of MCIP will be released with CMAQv5.0 (any day now). You may want to wait for it. **MCIP v4.0 was released Feb 2012**

Caveat 3: If you are using the tipping bucket for precip in WRF, let me know, and I'll give you the mods. The current release of MCIP does not handle the tipping bucket; it's off by default, and it's fairly new.

Also, I think there's a bug in what I put into MCIPv4.0, so I'm working to fix that now. If you have the tipping bucket on, you'll need different code in other parts of MCIP (not related to PRESF).

** NOTE: When I tested this with MCIP 3.6, met was from WRF v3.0, so no tipping bucket**

In mcoutcom_mod.f90:

1. Increase parameter MC3INDEX by 1.

2. Define a REAL, POINTER for "pressf_c" and "pressf_b" (follow x3htf_c and x3htf_b).

3. Add "PRESF" to the array for MC3VNAME.

4. Add units (Pa) to the array for MC3UNITS.

5. Add a text description to the array MC3VDESC.

In alloc_ctm.f90:

1. Associate the pointers for PRESSF_C and PRESSF_B under the allocate statements for MC3 and MB3. Again, follow x3htf_c and x3htf_b.

In dealloc_ctm.f90:

1. Nullify the pointers for PRESSF_C and PRESSF_B just before the deallocate statements for MC3 and MB3. Follow x3htf_c and x3htf_b.

In metcro.f90:

1. Change the allocate size for DUMARAY0 so that the fourth dimension is changed from "4+" to "5+" in three places.

2. In the triple-loop that fills DUMARAY0, fill dumaray0(c,r,k,5) with xpresf(c,r,k).

3. Change the filling of XWWIND into the 6th element of DUMARAY0.

4. Change the filling of XTKE into the 6+iwout element of DUMARAY0.

5. Add a call to collapx for xpresf; follow the entry for xdensaf.

6. Add an entry to fill pressf_c from xpresf; follow x3htm_c. (Note that xpresf is indexed 0:NLAYS rather than 1:NLAYS+1.)

7. Four times: add an entry to fill pressf_b from xpresf; follow x3htm_b. (One time for each of the four boundaries.)

8. Refill xpresf with elements from DUMARAY0 using the fourth dimension of 5.

9. Change the links on the fourth dimension of DUMARAY0 for xwwind and xtke from 5 to 6 and 5+iwout, respectively (toward bottom of code).

Thursday, November 10, 2011

running WRF with NARR--an overview


updated slides from our WRF club/WRF party this week are attached as a movie (!!)
Comment here with questions or additional info you find ...
because ain't no party like a W R F party 'cause a W R F party don't stop...

(don't want to squint-read or read enlarged blurry text? want to be able to click on the links on the slides? a pdf of slides is also on nitrate: /archive/shared/WRF_club.pdf )

Friday, October 28, 2011

How to compile CMAQ in parallel

Recently, Phil and I put together a parallel compilation of CMAQ on nitrate, the recently purchased Linux box. A direct comparison with a serial compilation on the same machine, with the same input data, shows that CMAQ scales reasonably well on a small number of cores: we observed a 7x increase in run speed on all 8 cores compared to a single core (roughly 88% parallel efficiency). However, some key modifications need to be made to the installation scripts for the parallel compilation to succeed. These are listed, to the best of my knowledge, below.

Please note that this is NOT meant to be a comprehensive guide on how to install CMAQ, nor can I guarantee that I've caught all the necessary changes, since I'm writing this up about a week after the actual install. That said, it may serve as a useful resource for someone who already knows how to compile CMAQ in serial and wants to get it running successfully in parallel.

1. Before installing
Some libraries that work well for serial compilations don't play nicely with the parallel compilation. To save yourself grief later, link these libraries into the CMAQ libraries location:

libnetcdf.a - Make sure this is compiled without DAP support (a remote data-access feature). We don't use it, and it breaks parallel compilations. There is a flag that can be passed to the configure script when installing netCDF that turns it off (--disable-dap in recent netCDF versions, if I recall correctly).

libioapi.a - Surprisingly, a standard version of IOAPI will work just fine with a parallel compilation. IOAPI has a bunch of parallel IO options that you can set when compiling, but CMAQ doesn't use them. CMAQ (at least 4.7.0) is only parallelized for processing, not for file IO, so just use whatever library you used for the serial compilation. Of course, make sure you've properly included the fixed_src folder as you'll need the contents throughout.

libmpich.a - This doesn't have an explicit folder the way IOAPI and netCDF do in the CMAQ installation, but you'll need it for a parallel installation. If it isn't on your system, download and install it.

2. pario
Installing the parallel IO library (pario) is not necessary for a serial installation, but it is necessary to build CMAQ in parallel. Install it as you would any other component (bcon, icon, etc.) by modifying the library paths and the compiler path/flags.

3. stenex
The stencil exchange library has both parallel and serial components. You can get away with just installing sef90_noop for a serial build (built from bldit.se_noop), but for parallel you'll also need to run bldit.se to generate se_snl. It may be possible to skip the installation of sef90_noop if you want to run strictly in parallel, but I haven't tried. In any case, the only difference in installing these two files is that se_snl needs the mpich header file location.

4. Other components
To the best of my knowledge, the installation for m3bld, jproc, icon, and bcon is all unchanged from serial installation. Build these as you normally would.

5. cctm
This is probably where the largest number of changes needs to occur. Let's break it down into two categories: building and running.

Building cctm:

Make the following changes to the bldit.cctm script:
  • Uncomment the line reading "set ParOpt"
  • Set the location of MPICH appropriately in the MPICH variable. Note that this is the top-level directory, which should have include, bin, and lib directories underneath it
  • Change FC from whatever compiler you were using before to mpif90 (provided it is installed on your system). mpif90 is a wrapper compiler that adds the extra flags needed for compiling parallel programs. Note that it may not be available for MPI implementations other than MPICH
  • Add a flag to F_FLAGS reading -f90=oldCompiler where "oldCompiler" is the compiler you were using before. This makes sure mpif90 wraps the correct compiler.
  • Find the line where the script sets the COMP variable. Comment it out and replace it with
    set COMP = "intel"

Running cctm:

Make the following changes to the run.cctm script:
  • Change the variables NPCOL_NPROW and NPROCS to reflect the number of processors you would like to use and their organization. There should be an example commented out in the file already. Note that the two values for NPCOL_NPROW should multiply to give NPROCS (e.g. "4 2" for NPROCS = 8)
  • At the very bottom of the file, comment out the line "time $BASE/$EXEC"
  • Uncomment the four lines beginning "set MPIRUN", "set TASKMAP", "cat $TASKMAP", and "time $MPIRUN"
  • Change the location of MPIRUN to reflect the actual path to the executable on your system (at the command line, run "which mpirun" to find the executable if you don't know where it is)
Make the following changes BEFORE RUNNING:

  • There should be a file in the cctm directory labeled "machines8". Open this file, erase the contents (they are meaningless), and enter "sysname:num" on each line, where sysname is the name of the system you're working on and num is a number starting at 1 and increasing by 1 on each line. Continue adding lines until you've reached the maximum number of processors. E.g., for nitrate, the machines8 file looks something like
    nitrate:1
    nitrate:2
    nitrate:3
    nitrate:4
    nitrate:5
    nitrate:6
    nitrate:7
    nitrate:8
You should now be ready to run CMAQ in parallel!


Thursday, September 8, 2011

How to find my stuff. Chapter 1.

Please let me know if there are any permissions issues or if this info seems incomplete. I tried to make everything public but may have missed things.

The output from SMOKE/emissions input to CMAQ are all on mercury in:
/Volumes/archive/luedke/data/emis/
There is a separate subfolder for each scenario I did (baseline, no HDDV, no onroad, no PTIPM)

The output from CMAQ is on mercury in:
/Volumes/archive/luedke/data/cctm/
Again there's a subfolder for each scenario.

On SOX I have all the data and scripts I used to make plots, and the plots themselves. They are in folders generally pertaining to the type of plot:

/Users/luedke/8hr_max : getting the average monthly 8hr max ozone

/Users/luedke/extreme_plotting : plotting extreme events. Some scripts plot the frequency and location of these events, and others show the percentage contribution only during those events. What constitutes an "extreme event" can be defined by you.

/Users/luedke/making_averages : here is where I made averages of emissions from the SMOKE output to compare with NEI attributes listed on the EPA website

/Users/luedke/nitrate_plotting : nitrate pm2.5

/Users/luedke/no2_plotting : NO2 plots across the whole CONUS

/Users/luedke/nox_plotting_in_the_east : NO and NO2 across just east US

/Users/luedke/ozone_plotting : monthly average ozone. Not as useful as the 8hr max, so this didn't make it into my thesis.

/Users/luedke/pm25_plotting : total PM2.5, calculated by simply adding all the PM2.5 species together (see the previous blog post about getting PM2.5 from CMAQ; this is the result of the simplest method from the 2010 CMAS presentation)

/Users/luedke/sulfate_plotting : sulfate pm2.5

On NOX I have all the SMOKE goodies. In the future our group will hopefully be using an updated version of SMOKE and this may not be useful (I used 2.4), but here it is:

/home/luedke/assigns : where my assigns files are. Some sectors needed different scripts, which was dumb, but that's why there are some extras

/home/luedke/go : where my runscripts are for each sector and for merging them

/home/luedke/intermed/2002ac : intermediate data from each sector script. The results of merging in the right combination were moved to mercury.

thank you and good day.

Wednesday, August 31, 2011

Notes on data, scripts, and externals

I am attaching several word documents that will give you all a little more information on my scripts, data, and externals.

In general, all my externals can be found in room 287 on one of the bookshelves. This document describes what is on each external. You might have to do a little "digging" to find exactly what you want, but most of it is explained in the documentation:

http://www.scribd.com/doc/63671067/Externals?secret_password=ttur6t1wu39esfd00at

The next two documents are not as detailed. I made a quick "fact" sheet (called DATA) of where important data can be found; this will help instead of "digging" around the externals sheet. Mostly it will help for Caitlin's old runs (SMOKE, WRF, CMAQ) and my new CMAQ runs:

http://www.scribd.com/doc/63671344/Data?secret_password=2b3o23k974ub33mvi4xb

The last one just states where you can find my early NCL scripts, my newer ones for my MS thesis, my IOAPI scripts, and where I ran CMAQ and CHEMMECH:

http://www.scribd.com/doc/63671304/Script-Notes?secret_password=2e1lrlfyupye3dm1t2xk


If anyone has questions, you can contact me at jamorton74@gmail.com.
Thanks everyone and it has been my pleasure to work with you all!

J