diff -Naur lammps-9Jan17/doc/html/.buildinfo lammps-17Jan17/doc/html/.buildinfo --- lammps-9Jan17/doc/html/.buildinfo 2017-01-09 13:33:58.000000000 -0700 +++ lammps-17Jan17/doc/html/.buildinfo 2017-01-18 08:33:45.414433124 -0700 @@ -1,4 +1,4 @@ # Sphinx build info version 1 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. -config: 77e273bf55cdca4f3ac157243a46785b +config: ad8be50116609dd15b1a9b9b50fe8610 tags: 645f666f9bcd5a90fca523b33c5a78b7 diff -Naur lammps-9Jan17/doc/html/Manual.html lammps-17Jan17/doc/html/Manual.html --- lammps-9Jan17/doc/html/Manual.html 2017-01-09 13:33:58.000000000 -0700 +++ lammps-17Jan17/doc/html/Manual.html 2017-01-18 08:33:45.445434041 -0700 @@ -148,7 +148,7 @@
LAMMPS tries to flag errors and print informative error messages so -you can fix the problem. Of course, LAMMPS cannot figure out your -physics or numerical mistakes, like choosing too big a timestep, -specifying erroneous force field coefficients, or putting 2 atoms on -top of each other! If you run into errors that LAMMPS doesn’t catch -that you think it should flag, please send an email to the -developers.
+you can fix the problem. For most errors it will also print the last +input script command that it was processing. Of course, LAMMPS cannot +figure out your physics or numerical mistakes, like choosing too big a +timestep, specifying erroneous force field coefficients, or putting 2 +atoms on top of each other! If you run into errors that LAMMPS +doesn’t catch that you think it should flag, please send an email to +the developers. If you get an error message about an invalid command in your input script, you can determine what command is causing the problem by looking in the log.lammps file or using the echo command diff -Naur lammps-9Jan17/doc/html/Section_packages.html lammps-17Jan17/doc/html/Section_packages.html --- lammps-9Jan17/doc/html/Section_packages.html 2017-01-09 13:33:58.000000000 -0700 +++ lammps-17Jan17/doc/html/Section_packages.html 2017-01-18 08:33:45.445434041 -0700 @@ -391,7 +391,7 @@ -
The current list of user-contributed packages is as follows:
| Pic/movie | Library | - | |||||||
| USER-ATC | atom-to-continuum coupling | @@ -1551,7 +1549,6 @@atc | lib/atc | - | |||||
| USER-AWPMD | wave-packet MD | @@ -1564,7 +1561,6 @@lib/awpmd | - | ||||||
| USER-CG-CMM | coarse-graining model | @@ -1577,7 +1573,6 @@- | |||||||
| USER-COLVARS | collective variables | @@ -1587,7 +1582,6 @@colvars | lib/colvars | - | |||||
| USER-DIFFRACTION | virutal x-ray and electron diffraction | @@ -1603,7 +1597,6 @@- | |||||||
| USER-DPD | reactive dissipative particle dynamics (DPD) | @@ -1619,7 +1612,6 @@- | |||||||
| USER-DRUDE | Drude oscillators | @@ -1635,7 +1627,6 @@- | |||||||
| USER-EFF | electron force field | @@ -1648,7 +1639,6 @@- | |||||||
| USER-FEP | free energy perturbation | @@ -1664,7 +1654,6 @@- | |||||||
| USER-H5MD | dump output via HDF5 | @@ -1680,7 +1669,6 @@lib/h5md | - | ||||||
| USER-INTEL | Vectorized CPU and Intel(R) coprocessor styles | @@ -1699,7 +1687,6 @@- | |||||||
| USER-LB | Lattice Boltzmann fluid | @@ -1715,7 +1702,6 @@- | |||||||
| USER-MGPT | fast MGPT multi-ion potentials | @@ -1731,7 +1717,6 @@- | |||||||
| USER-MISC | single-file contributions | @@ -1750,7 +1735,6 @@- | |||||||
| USER-MANIFOLD | motion on 2d surface | @@ -1763,7 +1747,6 @@- | |||||||
| USER-MOLFILE | VMD molfile plug-ins | @@ -1779,14 +1762,12 @@VMD-MOLFILE | - | ||||||
| USER-NC-DUMP | dump output via NetCDF | Lars Pastewka (Karlsruhe Institute of Technology | KIT) | -:doc:`dump nc | -dump nc/mpiio <dump_nc>` | +dump nc / dump nc/mpiio | - | ||
| USER-PHONON | phonon dynamical matrix | @@ -1830,7 +1810,6 @@- | |||||||
| USER-QMMM | QM/MM coupling | @@ -1843,7 +1822,6 @@lib/qmmm | - | ||||||
| USER-QTB | quantum nuclear effects | @@ -1859,7 +1837,6 @@- | |||||||
| USER-QUIP | QUIP/libatoms interface | @@ -1872,7 +1849,6 @@lib/quip | - | ||||||
| USER-REAXC | C version of ReaxFF | @@ -1888,7 +1864,6 @@- | |||||||
| USER-SMD | smoothed Mach dynamics | @@ -1904,7 +1879,6 @@- | |||||||
| USER-SMTBQ | Second Moment Tight Binding - QEq potential | @@ -1920,7 +1894,6 @@- | |||||||
| USER-SPH | smoothed particle hydrodynamics | @@ -1933,7 +1906,6 @@- | |||||||
| USER-TALLY | Pairwise tallied computes | @@ -1949,7 +1921,6 @@- | |||||||
| USER-VTK | VTK-style dumps | @@ -1965,7 +1936,6 @@lib/vtk | - | ||||||
| @@ -1975,7 +1945,6 @@ | - |
NetCDF files can be directly visualized with the following tools: -Ovito (http://www.ovito.org/). Ovito supports the AMBER convention
---and all of the above extensions.
-
a NetCDF reader that is not present in the standard distribution of AtomEye
-NetCDF files can be directly visualized with the following tools:
+The person who created these files is Lars Pastewka at Karlsruhe Institute of Technology (lars.pastewka at kit.edu). diff -Naur lammps-9Jan17/doc/html/Section_start.html lammps-17Jan17/doc/html/Section_start.html --- lammps-9Jan17/doc/html/Section_start.html 2017-01-09 13:33:58.000000000 -0700 +++ lammps-17Jan17/doc/html/Section_start.html 2017-01-18 08:33:45.427433509 -0700 @@ -1813,8 +1813,9 @@ thermodynamic state and a total run time for the simulation. It then appends statistics about the CPU time and storage requirements for the simulation. An example set of statistics is shown here:
-Loop time of 2.81192 on 4 procs for 300 steps with 2004 atoms
+Loop time of 2.81192 on 4 procs for 300 steps with 2004 atoms + Performance: 18.436 ns/day 1.302 hours/ns 106.689 timesteps/s 97.0% CPU use with 4 MPI tasks x no OpenMP threads @@ -1843,14 +1844,14 @@ Neighbor list builds = 26 Dangerous builds = 0-
The first section provides a global loop timing summary. The loop time +
The first section provides a global loop timing summary. The loop time is the total wall time for the section. The Performance line is provided for convenience to help predicting the number of loop -continuations required and for comparing performance with other -similar MD codes. The CPU use line provides the CPU utilzation per +continuations required and for comparing performance with other, +similar MD codes. The CPU use line provides the CPU utilization per MPI task; it should be close to 100% times the number of OpenMP -threads (or 1). Lower numbers correspond to delays due to file I/O or -insufficient thread utilization.
+threads (or 1 if no OpenMP). Lower numbers correspond to delays due +to file I/O or insufficient thread utilization. The MPI task section gives the breakdown of the CPU run time (in seconds) into major categories:
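The Performance line added in the new output is derived directly from the Loop time line above it. A minimal Python sketch of that arithmetic, using the example values ("Loop time of 2.81192 on 4 procs for 300 steps with 2004 atoms"); the 2.0 fs timestep is an assumption, since the timestep is not printed in the summary itself, but it reproduces the numbers shown:

```python
# Reconstruct the "Performance:" line from the loop timing summary.
# ASSUMPTION: a 2.0 fs timestep (not printed in the summary itself).

def performance(loop_time_s, steps, dt_fs):
    """Return (ns_per_day, hours_per_ns, timesteps_per_s)."""
    steps_per_s = steps / loop_time_s
    fs_per_s = steps_per_s * dt_fs            # simulated fs per wall-clock second
    ns_per_day = fs_per_s * 86400.0 / 1.0e6   # 86400 s/day, 1e6 fs/ns
    hours_per_ns = 1.0e6 / fs_per_s / 3600.0
    return ns_per_day, hours_per_ns, steps_per_s

# Values from the example output shown above.
ns_day, hr_ns, ts_s = performance(2.81192, 300, 2.0)
print(f"Performance: {ns_day:.3f} ns/day  {hr_ns:.3f} hours/ns  {ts_s:.3f} timesteps/s")
```

With these inputs the printed line matches the example output (18.436 ns/day, 1.302 hours/ns, 106.689 timesteps/s), which is a quick sanity check when comparing runs.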