Monday, November 11, 2013

Cutting Out a Model

Quick Description: When making a new model, starting from scratch can be a little difficult. Often, you may find yourself with an existing model that's approximately what you want. Maybe the size is wrong, or maybe you want to work in a different direction or with a different face of the crystal. Here are some general notes on how to correctly cut a model out of an existing system by obtaining the appropriate unit cell and expanding it to the size that suits you. If you are bringing two systems together (i.e. adsorbates on a surface), you will need to optimize the systems separately before bringing them together and optimizing the full system.

The Point: Manipulate an existing model into what you want.

Prerequisites: An existing model close to what you want.

Notes: Mostly in reference to this Matlab code, but even if you're not versed in this language, you might want to read the coding notes to get an idea of the substeps involved. The images aren't actually in the right order, but essentially, I was building a copper slab along the sqrt(3) directions.

Obtaining your true unit cell:

  1. Use existing atoms to mark the boundaries of your unit cell and grab their coordinates.
    • In Matlab, I use "plot_all" to view the system I'm working with; using the tag option and alt+clicking, I can get the rough coordinates of my atoms ("rough" referring to the fact that I actually grab the xyz values of where I click without autosnapping to the atom center itself).
  2. Use these coordinates to make your abc unit vectors (via subtracting).
    • In Matlab, I change the rgb values manually and use the "plot_all" function to redisplay the atoms so that I can keep track of which one is which and subtract the correct vectors.
  3. Remember, not all of these atoms will actually be included in your unit cell. Delete the ones that you don't want to keep.
  4. Tile your unit cell to create the super cell you will actually run in VASP.
    • plot_super(atomic info, [a unit vector, b unit vector, c unit vector], [# a repeats, # b repeats, # c repeats]) outputs this super cell and plots it for you.
    • Double-check that you're not overlapping copper atoms in the supercell using scatter3 to plot the xyz values of your atomic info. (It just makes it clearer if you have two atoms in roughly the same position because you didn't do step #3 properly.)
  5. Adjust your unit vectors.
    • xy coordinates should scale according to the integers you used to build your super cell from your unit cell.
    • z coordinate: The value you have is currently great for model building, but in your actual POSCAR file you will probably want to add vacuum space.
  6. Make your new POSCAR file.
    • make_pos(atomic info, adjusted unit vectors, string of elements in the order that they were used, 1) generates a POSCAR for you.
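The tiling and vacuum adjustment above happen inside plot_super and make_pos in Matlab; as a rough sketch of what those steps do, here is a numpy version (the lattice constants, repeat counts, and vacuum thickness below are illustrative placeholders, not values from this post):

```python
import numpy as np

def tile_supercell(atoms, a, b, c, na, nb, nc):
    """Tile unit-cell atom positions (N x 3) na x nb x nc times along a, b, c."""
    images = []
    for i in range(na):
        for j in range(nb):
            for k in range(nc):
                images.append(atoms + i * a + j * b + k * c)
    return np.vstack(images)

# One atom at the origin of a toy orthogonal cell (values illustrative).
a = np.array([2.55, 0.0, 0.0])
b = np.array([0.0, 2.55, 0.0])
c = np.array([0.0, 0.0, 2.09])
atoms = np.array([[0.0, 0.0, 0.0]])

supercell = tile_supercell(atoms, a, b, c, 3, 3, 2)  # 3*3*2 = 18 atoms

# Supercell lattice vectors: scale a and b by the repeat counts, and pad
# the c vector with vacuum space for the slab's POSCAR.
vacuum = 15.0
A, B = 3 * a, 3 * b
C = 2 * c + np.array([0.0, 0.0, vacuum])
```

A quick scatter plot of `supercell` (like the scatter3 check above) will show any accidentally duplicated atoms as overlapping points.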

plot_all

This is a Matlab script that I wrote to make a 3D plot of a model.

It requires a matrix containing one row per atom, with the following columns in order:
  • x coordinate
  • y coordinate
  • z coordinate
  • atomic radius
  • red fractional pixel value
  • green fractional pixel value
  • blue fractional pixel value
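
As a sketch, the same per-atom matrix could be assembled in Python with numpy (the radius and color values below are placeholders, not values from the script):

```python
import numpy as np

# One row per atom: x, y, z, radius, r, g, b (color values fractional, 0-1).
# cu_radius and cu_color are illustrative placeholders.
cu_radius = 1.28
cu_color = (0.72, 0.45, 0.20)

atoms = np.array([
    [0.00, 0.00, 0.00, cu_radius, *cu_color],
    [1.28, 1.28, 0.00, cu_radius, *cu_color],
])
```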


Tuesday, November 5, 2013

NERSC Errors

Is your job acting funny? Check the OSZICAR, OUTCAR, executionOutput, and any .o<some numbers> or .e<some numbers> files...

Error: *** glibc detected *** gvasp: double free or corruption (out): 0x0000000008b72010 ***
Info: "... indicative of an error when trying to free up memory I think (as in they want to allocate an array or something).  Usually these kinds of errors are programming errors (bugs), though they could also have to do with what compiler was used I suppose.  Here are some people discussing this kind of problem relating to vasp: http://cms.mpi.univie.ac.at/vasp-forum/forum_viewtopic.php?2.5588" - Jon Wyrick
Fix: Modified gqscript to include "module load vasp/5.3.3"

Error: Stale NFS file handle
Info: "I believe the stale NFS handle is a problem on their end (as in you didn't do anything wrong).  These happen when you try to write data to a shared folder (e.g. our project folder) and for one reason or another the connection doesn't quite go through.  NFS is the file sharing system that is used.  So probably when VASP was trying to write one of its output files (such as OUTCAR or OSZICAR, etc.) which it updates each electronic step, something must have gone haywire." - Jon Wyrick
Fix: Rerun. Potentially you can rerun your job in the scratch folder (but remember to retrieve it as it will get deleted after a period of inactivity) as this will take out the transfer step between NFS and non-NFS folders.

Error: apsched: claim exceeds reservation's node-count
Info: Error in gscript
Fix: See NERSC website on using fewer cores per node

Error: the triple product of the basis vectors is negative exchange two basis vectors
Info: Your unit cell is defined in a left-handed basis; VASP will only work with a right-handed basis.
Fix: Change your unit vectors (make sure the triple product a·(b×c) is positive, e.g. by exchanging two of the basis vectors).
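
A quick way to check the handedness of your basis before writing a POSCAR (a minimal numpy sketch):

```python
import numpy as np

def is_right_handed(a, b, c):
    """VASP requires a right-handed basis: a . (b x c) > 0."""
    return float(np.dot(a, np.cross(b, c))) > 0

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([0.0, 0.0, 1.0])

is_right_handed(a, b, c)  # True
is_right_handed(b, a, c)  # False: exchanging two vectors flips the sign
```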

Error: OOM killer terminated this process
Info: You are "out of memory".
Fix: Try a gscript that runs on nodes with more memory. (gscript_high_mem > gscript_med_mem > gscript_long)

Error: Error reading item 'NPAR' from file INCAR.
Info: "...there are most likely "hidden" characters in your INCAR file causing it to fail being read... it happens as a result of translating from windows text files to unix text files - our installation of linux on our cluster doesn't seem to care about it, but whatever unix they have at nersc does care."
Fix: "dos2unix <filename>" for all relevant files in file directory
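
If you want to spot the hidden carriage returns yourself before reaching for dos2unix, a minimal Python check works (the INCAR-like contents below are just a demo):

```python
import os
import tempfile

def has_dos_line_endings(path):
    """Rough check for the hidden CR characters that dos2unix strips."""
    with open(path, "rb") as f:
        return b"\r" in f.read()

# Demo: a throwaway INCAR-like file with Windows line endings.
with tempfile.NamedTemporaryFile("wb", suffix=".INCAR", delete=False) as f:
    f.write(b"NPAR = 4\r\n")
    path = f.name

result = has_dos_line_endings(path)  # True -> run dos2unix on it
os.remove(path)
```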

Other useful commands:
pwd (print working directory)
qstat -u bartels  (see only the jobs from bartels)
qstat -f [job number]   (see full output of a job)
qdel [job number]  (stop a job)

Submitting Jobs to NERSC

Quick Description: Occasionally, for really large jobs, you may find yourself in need of more computing power than we have available on our home computers or even through our collaborations. You may have a better chance of running these jobs on government computers. In our lab, we traditionally use our account at NERSC, although there's no reason we couldn't use something else (and we should probably look into XSEDE). Using NERSC works a bit differently from submitting a regular job, as described below, and more importantly, since these computers are shared, there's typically a 2-day wait before your queued job will actually run... And if you have multiple high-memory jobs, you will also queue behind yourself. Explore the website to learn more.

The Point: Got a big job? Use a big computer.

Prerequisites: Same as for submitting a regular job. Obviously, you need access to the NERSC account (request by email for me to share the login credentials).

To manage the account and apply for more time, see NERSC Information Management: My Stuff > My ERCAP requests > Actions > ERCAP requests > Start New Request (refer to previous pdfs as guides for the new request)

Notes:
  1. SSH into bartels@hopper.nersc.gov
  2. "cd project" (and navigate to your folder...)
  3. Submit using one of the following scripts:
    • regular job "qsub ~/gscript"
    • long job "qsub ~/gscript_long"
    • medium memory job "qsub ~/gscript_med_mem"
    • high memory job "qsub ~/gscript_high_mem"
    • other possible scripts include: gqscript_double, qscript_quick, kscript, kscript_double, kscript_quick, kscript_long - I'm not really sure about the difference between these anymore...
  4. Check your job using "qstat -u bartels"

Tips and Tricks:

Working with NERSC can be tricky, so it's useful to check out some of the known errors for our group. One common one that I'll mention here is that you may need to convert your files from DOS to UNIX format. To check a file format, run "file <filename>". To convert a file, run "dos2unix <filename>".

You can check your job in queue by navigating through the NERSC website: My NERSC > Queues and Scheduling > Queue Look

To copy to our computers: "scp <filename/folder name> bartels@pierce244-12.ucr.edu:/home/bartels/sharedData/<folder/fileName>"

For linux users (on your linux computer) to mount the nersc folder:
sshfs bartels@hopper.nersc.gov:/project/projectdirs/m1260 /home/<your nerscHome folder on your computer>

Simulated STS

Quick Description of STS: As an extension of STM, STS (scanning tunneling spectroscopy) is used to probe the electronic structure of a material by describing the density of the electrons in the material as a function of their energy. During STM, the scanning probe rasters over the sample, using a constant-current feedback loop to maintain a set distance from the sample. As the probe moves in the X and Y directions, the voltage controlling the probe's Z position adjusts to accommodate the topography of the surface (information for which is provided by the tunneling current). In contrast, during STS, the X and Y positions are held fixed while the voltage on the probe is ramped and the resulting tunneling current is measured, creating an IV curve. The slope of this IV curve - the conductance, first derivative, or dI/dV - corresponds to the local density of states of the electrons at the tip position. Creating a log-scale plot of current versus voltage reveals the edges of the band structure. Alternatively, plotting the conductance versus the voltage can also be used to determine the band gap.
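
A minimal numerical sketch of the dI/dV idea, using a toy cubic IV curve rather than real STS data:

```python
import numpy as np

# Toy IV curve: I = V**3 gives a conductance dI/dV = 3*V**2 that dips to
# zero around V = 0, mimicking the suppressed current inside a band gap.
V = np.linspace(-1.0, 1.0, 201)
I = V**3

dIdV = np.gradient(I, V)  # numerical conductance

# The conductance minimum marks the middle of the gap-like region; a log
# plot of |I| vs V makes the current onset at the band edges easier to see.
gap_center = V[np.argmin(dIdV)]
```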

The Point: As with simulated STM, we hope that by recreating experimental measurements, we can use simulated STS to validate our molecular model of the sample system and also elucidate a more detailed understanding of the electronic structure.

Prerequisites: An optimized model with WAVECAR and PARCHG files; the voltage range used during IV measurements.

Note: Some of the code has already been pre-made for you.
  1. You will need to turn the STS voltage range into a list of voltages. Each voltage will have a subfolder associated with it, and you will need to use those subfolders to run jobs creating PARCHG files at each of these voltages.
    • I've generally found that a resolution of 0.1 V is fine enough, so if my voltage range was -1.5 to 1.7 V then my list would be: -1.5, -1.4, -1.3, ... , 1.5, 1.6, 1.7.
    • In Jon's code "exampleIV", you can modify the directory structure and variables to match his ((same folder as script)/vaspData/<your folder>); set the voltage table and submit script you would like to use (default "g8vasp"); run the function "ListWriteVASPIV". (For some reason, if an error occurs at this step, it is likely in copying the POTCAR. I've just modified the bash script to take care of this for me, rather than dig into someone else's code. Alternatively, you can drag and drop it yourself.)
    • If you then copy the resulting directory structure (<your folder>) over to vasp1, where your original job (with its PARCHG and WAVECAR) exists, you can use a bash script (also generated in the Mathematica code) to run all of these.
    • Move to the parent folder and use "bash runIV.sh". This copies the WAVECAR into each folder and submits the job specified for each of the folders.
    • Wait for your jobs to complete, and consider using "bash cleanIV.sh" to clean up (getting rid of the huge amount of space taken up by copied over WAVECAR files). Also, use "rm */CHG", "rm */CHGCAR", and "rm */AEC*". (Maybe we'll collect these into one script someday.)
  2. At this point, you may check your PARCHG.
  3. Simulated STS assumes that you are changing your voltage at a given height (or distance away from the sample), so set this with variable "z".
  4. Set the variable "center". 
    • You need to figure out where on the system you are probing for STS. In typical systems in our lab, where we are analyzing an adsorbed molecule, you want to make sure your XY coordinates put you above this molecule. For the newer film analyses, choosing a center point in the coordinate system will serve fine.
  5. Run the block "iCurve".
    • This will read the local PARCHG at every voltage for the given XYZ settings, building a table of corresponding current values.
  6. View your IV curve. You now have voltage values (see step #1) and corresponding current values, so view these however you want. In Jon's code, you can use the "ListPlot" function. You may also subtract each value from its predecessor to form a mock dI/dV curve (to estimate the band gap).
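
Steps 1 and 6 can be sketched numerically - generating the voltage list and taking successive differences as a mock dI/dV (the cubic "currents" below are placeholder data, not VASP output):

```python
import numpy as np

# Rebuild the -1.5 V to +1.7 V list at 0.1 V resolution. np.arange with
# floats can drop the endpoint, so compute the count explicitly.
v_min, v_max, step = -1.5, 1.7, 0.1
n = int(round((v_max - v_min) / step)) + 1      # 33 voltages
voltages = np.round(v_min + step * np.arange(n), 1)

# Mock dI/dV: subtract each current from its predecessor, as in step 6.
currents = voltages**3                          # placeholder IV data
didv = np.diff(currents) / np.diff(voltages)
```

Each entry of `voltages` would name one subfolder holding that voltage's PARCHG run; `didv` is one point shorter than the voltage list, as expected for successive differences.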