about

avro (adaptive Voronoi remesher) is a mesh adaptation tool with the following capabilities:

  1. dimension-independent mesh adaptation using local cavity operators (with exact geometric predicates for 2d, 3d and 4d).
  2. visualization of meshes with solution fields using either OpenGL (and GLFW) or WebGL (through a websocket connection).
  3. dimension-independent calculation of Voronoi diagrams (with exact geometric predicates up to 4d).

note: this is the avro-2.0 version, which is a rewrite of the original version written by Philip for his PhD.


quickstart

These steps will guide you through downloading a fresh copy of avro, installing dependencies, and compiling the source code.

avro depends on the following being installed on your computer (some may be omitted if you don't want certain features):

  • git
  • CMake (2.8.8+) for generating Makefiles and targets when compiling
  • C++ compiler (with C++11 capability), such as g++ 4.8.3+, clang 3.5+, icpc 14+
  • EngineeringSketchPad 1.09+ (mostly just for EGADS) for interfacing to geometry models
  • nlopt 2.4.2+ for solving optimization problems
  • OpenGL 3+ for visualization
  • LAPACK for linear algebra
  • Eigen for the Newton-based optimal transport solver

Most of these dependencies can be installed with a package manager. On Linux, you can use apt (the Advanced Package Tool). On OS X, either homebrew or MacPorts will work (I use homebrew). avro is not currently supported on Windows.

dependencies on OS X

Install the clang compiler:


          $ xcode-select --install
        

Install dependencies:


          $ brew install git cmake nlopt lapack eigen
        

dependencies on Linux

Install dependencies:


          $ sudo apt-get install git cmake cmake-curses-gui libnlopt-dev libblas-dev liblapack-dev libeigen3-dev xorg-dev
        

EngineeringSketchPad (both Linux and OS X)

Download the latest version of ESP (source code) and the pre-built OpenCASCADE libraries (Linux, OS X). Then extract the contents to some appropriate location where you keep your codes, for example ~/Codes/EngSketchPad and ~/Codes/OpenCASCADE-7.3.1. Then append the following lines to your ~/.bashrc (on Linux) or ~/.bash_profile (on OS X), adjusting the paths to match where you extracted the code:


          export CASROOT='/Users/pcaplan/Codes/OpenCASCADE-7.3.1'
          export CAS_DIR=$CASROOT
          export ESP_DIR='/Users/pcaplan/Codes/EngSketchPad'
          export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CAS_DIR/lib
        

Then navigate to the EngSketchPad/config directory and run the configuration file to create your ESP environment (this only needs to be done once when you compile ESP):


          $ cd ~/Codes/EngSketchPad/config
          $ ./makeEnv
        

This generates two files, ESPenv.sh and ESPenv.csh, in the root EngSketchPad directory. Source whichever one matches your shell (ESPenv.sh for bash, ESPenv.csh for csh/tcsh), then navigate to the src directory and compile:


          $ cd ~/Codes/EngSketchPad
          $ source ESPenv.sh
          $ cd src
          $ make
        

note: use make -j 4 (or higher) to compile with 4 (or more) threads and speed up the build.

building avro (both Linux and OS X)

Now you're ready to download and build avro! Navigate to wherever you want to store avro, such as ~/Codes, and clone the repository:


          $ cd ~/Codes
          $ git clone --recursive https://gitlab.com/philipclaude/avro.git
          $ cd avro
          $ git checkout [your branch name]
        
... and enter your GitLab username and password when prompted during the clone. Note that in the last step, you need to provide the name of your branch (such as philip, pruffolo or hbrady), which will switch you over to it - pushing directly to the main branch is not allowed. Next, create the build directories inside the repository:


          $ mkdir build
          $ mkdir build/release
        

This is where avro will be compiled (anything in the build directory is ignored by git - see the .gitignore in the root avro directory). In the last step, you could have also created a directory build/debug if you want to compile with debugging symbols (useful if you're hunting a segfault). The name of the directory (release, debug, coverage) tells CMake what kind of build we want. The release version will tell the compiler to do some optimizations so avro will run faster.

Navigate to your build directory and run CMake:


          $ cd build/release
          $ cmake ../../
        

If the CMake configuration was successful, you can now compile avro:


          $ make avro -j 4
        

This will compile the main avro executable (build/release/bin/avro) using 4 threads. Here are some extra targets that you may want to build:


          $ make avro_lib       # only compiles the library (no unit tests or main executable)
          $ make                # compiles all unit tests (without running)
          $ make unit           # compiles and runs all unit tests
        

Information about using the avro executable, as well as the avro API, is described in the usage section.


developing

conventions

  1. enclose everything within the avro namespace (sub-namespaces may include numerics, graphics, EGADS, PSC).
  2. use snake_case for variable and method names (there might be some lingering drinkingCamelCase from version 1.0 - these will be renamed in the future).
  3. use CamelCase (not drinkingCamelCase) for class names.
  4. use const-correctness whenever possible.
  5. class variables_ should have an underscore ( _ ) at the end of the variable_name_ (this helps distinguish between class_variables_ and local_variables)
  6. include headers in alphabetical order (by directory, then header file name).
  7. avoid including headers within headers - try to use forward declarations when possible.
  8. use smart pointers (i.e. std::shared_ptr and std::make_shared from the memory header in C++) - do not use new to allocate memory!
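
As an illustration, here is a minimal, hypothetical sketch that follows these conventions - the class, its members and the include path for the type definitions are made up for this example:


          // hypothetical example: the names here are illustrative only
          #include "common/types.h"   // assumed location of coord_t, index_t, real_t

          #include <memory>
          #include <vector>

          namespace avro
          {

          // forward-declare instead of including the full header
          class Entity;

          class MeshCache
          {
          public:
            MeshCache( coord_t dim ) :
              dim_(dim)
            {}

            // const-correct accessor with a snake_case name
            coord_t dim() const { return dim_; }

            void add_value( real_t value ) { values_.push_back(value); }

          private:
            coord_t dim_;                  // class variables end with an underscore
            std::vector<real_t> values_;
          };

          // allocate with smart pointers rather than raw new
          inline std::shared_ptr<MeshCache> create_cache( coord_t dim )
          {
            return std::make_shared<MeshCache>(dim);
          }

          } // avro
        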

directory structure

Here are the most important directories in avro you should be familiar with:
  • src: the main directory in which all the source code for the library, executables and third-party libraries are stored:
    • bin: contains the code used for the main avro executable.
    • lib: contains the main avro library source code:
      • adaptation: contains functions and classes for anisotropic mesh adaptation.
      • common: contains common utilities for various data structures.
      • geometry: contains definitions for entities, bodies and models, which are specialized in the egads and psc directories.
      • graphics: contains source for visualizing meshes and solutions, using both OpenGL and WebGL.
      • library: contains an internal library of meshes, metrics, geometries and plots used in unit tests and the avro interface.
      • element: contains various reference element utilities, such as the definition of the reference simplex or polytope, along with functions for doing numerical quadrature with the reference element.
      • mesh: contains the core definition of a Topology and Points along with data structures useful when manipulating meshes, such as the inverse, neighbours and facets. Also contains code for defining Fields that can be attached to topologies.
      • numerics: contains definitions of vectors, matrices and important linear algebra functions, as well as the implementation of exact geometric predicates.
      • voronoi: contains the code to calculate the restricted Voronoi diagram and Delaunay triangulations.
    • third_party: contains the code for open-source third-party libraries. One of the most important ones is tinymat which is used for all linear algebra calculations. This library originated in SANS at MIT.
  • test: the main directory in which all tests are developed.
    • bin: contains unit tests for the avro API and executable.
    • lib: contains unit tests for the avro library. This directory directly parallels the structure of the src/lib directory.
    • library: should only contain a few mesh and geometry files for simple testing. In general, meshes and geometries should be stored in our library.
    • regression: contains nightly regression tests (currently the UGAWG benchmark cases).
    • sandbox: is a place where you can play with avro! Any tests written here will not be included in the unit target (so the pipelines will not fail).
    • third_party: contains unit tests for some of the third-party libraries, notably for tinymat.
    • tmp: a dump for unit test output files - anything stored in here will be ignored by git.

types

Numbers
In src/common/types.h, you will find various type definitions which should be used throughout the code. Do not use int, unsigned long or double directly unless you are calling a function in another library (e.g. you need to pass a float to OpenGL or an int to EGADS).
  • index_t: refers to "indices," such as those in a mesh topology, which is an unsigned long.
  • coord_t: refers to "coordinate", such as the dimension of points, which is an unsigned short.
  • real_t: refers to a "real" type, such as the coordinates of a mesh vertex, which can be double or float, depending on the desired precision.
  • nb: refers to the "number" of elements in any kind of container in avro, such as a list/array/table/topology/points, etc. For example, the number of vertices in the mesh can be accessed through points.nb() whereas the number of elements (triangles, tetrahedra, pentatopes, polygons, polytopes) can be accessed through topology.nb().
  • number: refers to the topological number of some object. For example, a topology which stores triangles will have topology.number() equal to 2. Similarly, a geometric entity describing a Line will have entity.number() equal to 1.
  • dim: refers to the ambient dimension, such as the dimension of vertex coordinates. For example, points in 5d space will have points.dim() equal to 5.
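
For example (a sketch assuming a Points object called points, described in the next section), a typical loop uses these types rather than raw built-in types:


          // find the largest coordinate value over all vertices
          real_t xmax = -1e20;
          for (index_t k = 0; k < points.nb(); k++)
          for (coord_t d = 0; d < points.dim(); d++)
          {
            if (points[k][d] > xmax)
              xmax = points[k][d];
          }
        
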
Points
Points (see src/lib/mesh/points.h) are the main container for vertices, which store physical coordinates, parameter space coordinates, geometric entities, as well as some other metadata. Here are some useful functions if you have a Points object called points:
  • points.dim(): returns the ambient dimension of the points.
  • points.udim(): returns the maximum dimension of the parameter space coordinates (often dim_-1 ).
  • points[k]: retrieves the pointer to the coordinates of vertex k.
  • points[k][d]: retrieves the value of the coordinate of vertex k at dimension d.
  • points.entity(k): retrieves the pointer to the geometric entity on which vertex k lies; this pointer should be non-null for boundary vertices, and null for interior vertices.
  • points.u(k): retrieves the pointer to the parameter coordinate of vertex k. An entity ek with topological number N will only require N parameter coordinates to be stored at vertex k. All remaining parameter values (up to udim_) will be set to something very big (1e20), so that a bug will cause some calculation to blow up.
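
As a sketch (assuming a populated Points object called points; the exact const-ness of the returned pointers is an assumption), boundary and interior vertices can be distinguished as follows:


          // count the boundary vertices (those lying on a geometry entity)
          index_t nb_boundary = 0;
          for (index_t k = 0; k < points.nb(); k++)
          {
            Entity* entity = points.entity(k);
            if (entity == nullptr) continue;   // interior vertex

            nb_boundary++;

            // only the first entity->number() parameter coordinates are meaningful
            const real_t* u = points.u(k);
            for (coord_t d = 0; d < entity->number(); d++)
            {
              real_t ud = u[d];
              // ... use the parameter coordinate ud ...
            }
          }
        
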
Topology
The Topology (see src/lib/mesh/topology.h) stores the mesh topology and can be used to manipulate (add or remove) elements in the mesh. Very importantly, it is templated by the reference element type, since most computations on meshes are independent of the type of the mesh, but invoke the reference element for specialized calculations (such as edge/facet extraction, volumes, etc.). Here are some useful functions if you have a Topology<type> object called topology:
  • topology.number(): returns the topological number of the mesh.
  • topology.points(): returns a reference to the points in the mesh. Note that a topology stores a reference to mesh points (instead of its own object), since points can be shared between multiple topologies.
  • topology(k): retrieves the pointer to the first index of element k.
  • topology(k)[j]: retrieves the global index of element k in local index j.
  • topology.nv(k): returns the number of vertices in element k. For example, this should be equal to number_+1 for a straight-sided simplicial mesh.
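
For example (a sketch assuming a Topology<type> object called topology and that <iostream> has been included), the element-vertex connectivity can be printed with:


          // print the global vertex indices of every element in the mesh
          for (index_t k = 0; k < topology.nb(); k++)
          {
            // nv(k) is number()+1 for a straight-sided simplicial mesh
            for (index_t j = 0; j < topology.nv(k); j++)
              std::cout << topology(k)[j] << " ";
            std::cout << std::endl;
          }
        
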
Entity
Integration with the geometry is critical for mesh generation and adaptation algorithms. As such, avro provides a generic interface through the Entity class (see src/lib/geometry/entity.h), which provides a hierarchical representation (as a tree - see src/lib/common/tree.h) of the geometry. For example, a square Face will have 4 children (one for each Edge), and each of these child Edges will itself have 2 children (one for each Node). Note that capital letters are used to distinguish geometry terms from mesh terms (i.e. "mesh edge" versus "geometry Edge"). Here are some useful functions if you have a pointer to a geometry entity (which is often the case, e.g. when retrieving the entity from points.entity(k)):
  • entity->number(): returns the topological number of the entity.
  • entity->evaluate(u,x): evaluates the parameter coordinates in u and stores the result in the physical coordinates x.
  • entity->inverse(x,u): looks up the parameter coordinate for the closest point to x and saves the result in u (this function should be used sparingly).
  • entity->above(e): returns true or false depending on whether the entity is "above" some other entity e in the geometry hierarchy.
  • entity->intersect(e): returns a pointer to the common parent entity of both entity and e - the entity with the lowest topological number is returned.
  • entity->child(k): returns the k'th child of the entity. For example if entity is a Face, then entity->child(0) will return the entity associated with the first Edge child.
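
As a sketch, suppose entity was retrieved from points.entity(k) and happens to be a Face (topological number 2); the raw-pointer argument types for evaluate are an assumption here:


          // evaluate the physical coordinates from parameter coordinates on a Face
          real_t u[2] = { 0.5 , 0.5 };    // a Face needs two parameter coordinates
          real_t x[3] = { 0. , 0. , 0. }; // physical coordinates (3d here)
          if (entity->number() == 2)
            entity->evaluate( u , x );    // x now holds the physical coordinates

          // walk down the hierarchy: the first child of a Face is one of its Edges
          Entity* edge = entity->child(0);
          if (edge->number() == 1)
          {
            // edge is a geometry Edge, whose own children are Nodes
          }
        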

testing

running unit tests

The CMake configuration provides a target for every unit test in the test directory whose filename ends with _ut.cpp. The target name is the path of the test file relative to test, with the directory separators replaced by underscores (and the .cpp extension dropped). For example, you can build and run the unit test found in test/lib/mesh/topology_ut.cpp by executing:
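

          $ make lib_mesh_topology_ut
        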

writing unit tests

avro uses its own unit testing framework with the single header file in test/unit_tester.hpp. When writing a unit test, the most important things you need to remember are to (1) create a testing suite and (2) create the actual test cases. For example, if we want to write tests for src/lib/mesh/something_fun.cpp (defined in header src/lib/mesh/something_fun.h), you would first create a new file in test/lib/mesh/something_fun_ut.cpp (remember to end the filename with _ut.cpp so CMake knows to create a target for it). The contents of the file would then be:


          #include "unit_tester.hpp"

          #include "mesh/something_fun.h"

          UT_TEST_SUITE( something_fun_test_suite )

          UT_TEST_CASE( fun_test1 )
          {
            // write your test body here!
          }
          UT_TEST_CASE_END( fun_test1 )

          UT_TEST_CASE( fun_test2 )
          {
            // write your test body here!
          }
          UT_TEST_CASE_END( fun_test2 )

          UT_TEST_SUITE_END( something_fun_test_suite )
        

Of course, you can name your "suite" and "tests" whatever you like (just make sure the name in the opening declaration matches the one in the closing macro). Here are some useful macros which will help you assert whether your code is working correctly (use these within your test bodies):


          UT_ASSERT( something_to_assert_is_true )
          UT_ASSERT_EQUALS( value , expected_value )
          UT_ASSERT_NEAR( value , expected_value , tolerance )
          UT_CATCH_EXCEPTION( command_which_should_throw_an_exception )
        
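
For example, a test case exercising these macros might look like the following sketch (the values are made up):


          UT_TEST_CASE( fun_test3 )
          {
            real_t area = 0.5;   // pretend this was computed by the code under test

            UT_ASSERT( area > 0.0 );
            UT_ASSERT_EQUALS( 2 + 2 , 4 );
            UT_ASSERT_NEAR( area , 0.5 , 1e-12 );
          }
          UT_TEST_CASE_END( fun_test3 )
        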

Then re-run CMake (so it can find your test file) and run your test:


          $ cmake .
          $ make lib_mesh_something_fun_ut
        

which should display:


          ==== Test suite: something_fun_test_suite ====

          running 2 tests...

          --> running case fun_test1:

          --> running case fun_test2:

           ==== Total time = X.YZ seconds. ====
          Summary:
          N assertions passed out of N with 0 failures and 0 exceptions.
          [100%] Built target lib_mesh_something_fun_ut
        

This unit test will automatically be added to the complete list of tests for the unit target (run when changes are pushed).

debugging

Bugs happen and can be frustrating to hunt down. In the event you are hunting a segmentation fault (or some other unexpected signal), you should use a "debug" version (see above) so that you can run some debugging tools. Specifically, avro's CMake configuration provides two targets that can be useful when debugging: (1) with gdb (lldb on OS X) and (2) with valgrind's memcheck tool. Suppose your new unit test something_fun_ut produces a segmentation fault. You can run gdb on your test by making the target something_fun_ut_gdb:

          $ make something_fun_ut_gdb
        

You will then enter into the gdb console. Some useful commands for debugging are:

  • bt: print the "backtrace" from the point of the segfault, which shows each frame (functions called with line numbers - super helpful!)
  • f [frame number]: go to a specific frame indexed by "frame number" in the backtrace.
  • p [some variable]: print the value of some variable
  • call [some function]: call some function (can be used on objects too).
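
For example, a (hypothetical) gdb session for the crash above might proceed as follows, where k is a local loop index and topology is an object in scope in frame 2:


          (gdb) bt
          (gdb) f 2
          (gdb) p k
          (gdb) call topology.nb()
        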

gdb will most likely provide everything you need when debugging, but for really special bugs, valgrind's memcheck tool can be useful for finding bad memory accesses, such as using uninitialized data. On Linux, you can install valgrind through your package manager, but for recent versions of OS X, you need to manually install it (so I don't recommend doing this). If you do have valgrind installed on your computer, you can make the target:


            $ make something_fun_ut_memcheck
          

You will notice that your test runs significantly slower with memcheck. In the future, a pipeline will be added (see below) that will also run all unit tests under memcheck (probably once per week).

pushing changes

Any time you want to push changes to avro, this will trigger a "pipeline" on GitLab. There are three phases to this pipeline: build, test and deploy.

  • build: builds avro using a variety of compilers.
  • test: runs all unit tests and generates code coverage information.
  • deploy: makes the coverage information and this documentation page available.

After pushing your changes, you can then open a merge request on GitLab (a.k.a. a "pull request" in GitHub terminology) to merge with the main branch. Philip will then review any pipelines associated with the changes proposed in your merge request; if they pass, your changes will be merged and your code will be available to the rest of the group (upon doing a git pull). Note that there is also a nightly pipeline (run at midnight every night) that runs our regression tests (see the test/regression directory). These regression tests should also pass before your changes are merged into the main branch. Keep an eye on the "builds" channel in Slack for successful pipelines. If you see one, it's probably a good idea to do a git pull!