In linear elastic fracture mechanics, the elastic strain energy release rate can be calculated from the following expression:

G = P^2 / (2 b(a)) * dC(a)/da

where a is the crack length (or depth), P is the load, b(a) is the crack front length (or crack width, which can vary with a) and dC(a)/da is the derivative of the compliance C(a) of the sample (the constant of proportionality between the load application point displacement and the applied load P) with respect to the crack length a.
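As a small numerical illustration of this expression (all values below are hypothetical, not from a real test), G can be evaluated once b(a) and dC/da are known at the current crack length:

```python
def energy_release_rate(P, b, dCda):
    """Energy release rate G = P^2 / (2 b(a)) * dC/da.

    P    : load [N]
    b    : crack front length b(a) [m]
    dCda : derivative of compliance with respect to crack
           length [1/N] (C has units m/N, a has units m)
    Returns G in N/m = J/m^2.
    """
    return P**2 / (2.0 * b) * dCda

# Hypothetical values: 10 mN load, 2 um crack front, dC/da = 1.0 1/N
G = energy_release_rate(P=10e-3, b=2e-6, dCda=1.0)
print(G)  # 25.0 J/m^2
```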

In a chevron-notched sample, starting at a = 0 and at some fixed applied load P, G at first decreases, reaches a minimum at a given crack length ac, and then increases as a grows beyond ac. If the material has a single, well-defined toughness R (the number of joules consumed per square meter of crack created), crack growth is stable up to a = ac for both load-controlled and displacement-controlled tests. Beyond ac, crack growth becomes spontaneous and unstable for any further increase of the load (a brittle sample in a load-controlled test exhibits sudden failure at this point, while the same sample in a displacement-controlled test might still show some stable crack propagation under decreasing load before its final failure). If, on the other hand, the toughness depends on crack length (i.e. the material displays R-curve behaviour), Gc is no longer a material parameter and the interpretation of results obtained from chevron-notched samples becomes more complicated (and should be done with care). In what follows we ignore this R-curve complication. We also ignore complications that may arise from crack tip plasticity or subcritical crack growth phenomena. In microscopic samples, given the limited extent of crack growth, this is often a reasonable assumption.

If the two functions b(a) and C(a) are known, it then suffices to measure the critical load at failure, Pc, to compute the material's toughness Gc = R using the equation above and knowledge of the critical crack length ac. For simple sample geometries, analytical expressions may be available; however, this is generally not the case for chevron-notched samples, particularly microscopic samples that have been carved using methods such as focused ion beam milling, because such methods lack precision. The finite element method (FEM) can then be used to calculate the sample's compliance function C(a); to this end, we provide here Python scripts, intended for use with the FEM software Abaqus, for triangular cantilever beams containing a chevron notch.

Figure 1. Geometry of triangular cantilever beam with chevron-notch. Uncracked portion of the chevron-notch is shown as shaded region.

The sample geometry is depicted in Figure 1, where the required dimensions are defined; unless the sample was machined with precision, these dimensions must be measured directly on the sample before testing (typically using a scanning electron microscope when dealing with microscopic samples). In the compliance calibration procedure conducted with the present code, we calculate how the compliance C(a) of such a triangular chevron-notched beam evolves as a straight-fronted symmetric crack grows across its chevron notch, from the tip towards its base. To this end, we calculate the beam's linear elastic force-displacement response (the slope of which gives the compliance C) for a number of different crack lengths. The compliance at each crack length is then simply obtained by dividing the displacement by the load.
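A minimal sketch of this last step, on synthetic data (in the package, the displacement is prescribed and the reaction force is recovered from the FEM solution; the numbers below are illustrative only):

```python
import numpy as np

# Synthetic linear elastic response u = C * P with C = 2.5e-4 m/N
P = np.array([0.0, 0.01, 0.02, 0.03])   # load [N]
u = 2.5e-4 * P                           # load-point displacement [m]

# Least-squares slope through the origin gives the compliance;
# for a single load/displacement pair this reduces to C = u / P.
C = np.dot(u, P) / np.dot(P, P)
print(C)  # 2.5e-4 m/N
```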


Note: Abaqus comes with its own Python interpreter. This package is designed to use specifically the Python interpreter that is part of the Abaqus software. The main Abaqus commands mentioned here are used in exactly the same way under *nix and Windows operating systems. Some commands mentioned here are, however, specific to *nix environments.


The achvbsT script package is to be used and distributed freely under the conditions of the GNU GPL licence.

Download the latest tar-ball achvbsT_v1.tar.gz from the lmm.epfl.ch Open Science web site.

To use the achvbsT library, no particular installation is necessary. The only thing one needs to do is unpack the tar-ball package within some chosen working folder.

In the *nix environment, to create a current working folder (CWF) that will contain all calculations of some specific project, execute in bash shell something like:

mkdir ~/achvbsT_project_1

The name of CWF in this case was chosen as "~/achvbsT_project_1".

Copy the tar-ball package to the CWF. To unpack it, after stepping into the CWF, use the command:

tar -zxvf achvbsT_v1.tar.gz

After unpacking, CWF should contain the following files/folders:

In case the folder "./jobs" is missing, it can be created by running the command:

mkdir jobs

That is all!!!

Important: From this point onwards it is assumed that one works in the CWF.


  1. Prepare Abaqus input files. Use/edit pr_testCHV2.py as template.
  2. Generate Abaqus input files (.inp files)

    abaqus cae noGUI=pr_testCHV2.py

  3. Run Abaqus jobs from all input files.

    To run a single Abaqus job on desktop PC use command such as:

    abaqus job=jobBaseName-XX input=jobBaseName-XX.inp cpus=N

    To run a job series via the GNU Parallel tool on *nix machines, edit a file that contains the list of Abaqus jobs (use the file parallel_job_list_testCHV2 as an example). Then run a command of the form:

    parallel --jobs N -a JOB_LIST_FILE ./run_local_parallel_abaqus_job.sh

  4. Post-process all Abaqus output files (.odb files) with po_CHV.py script

    abaqus python po_CHV.py jobBaseName-XX

  5. Extract crack-length vs. compliance data with get_compliance.py script

    abaqus python get_compliance.py pr_testCHV2


I. Setting up the preparation script "pr_testCHV2.py".

With preparation scripts such as pr_testCHV1.py or pr_testCHV2.py, we can generate a series of Abaqus input files in one go; in particular, these scripts generate nJobs Abaqus input files (see the parameter nJobs in e.g. pr_testCHV1.py). Note that preparation scripts are regular Python scripts, so they have to follow proper Python syntax rules.

Let’s name the series of nJobs=30 jobs with the job base name testCHV2: in the file pr_testCHV2.py, set the parameter jobBaseName to "testCHV2". The jobBaseName parameter is used automatically to create a series of sub-folders, each containing one Abaqus input file.

Note: After all 30 models are generated, root folder ./jobs should contain sub-folder testCHV2-1, which contains input file testCHV2-1.inp, sub-folder testCHV2-2, which contains input file testCHV2-2.inp, etc. all up to sub-folder testCHV2-30, which contains input file testCHV2-30.inp.

Next, set the geometrical parameters in pr_testCHV2.py. These are the parameters Wc, B1, S2, a0, a1, a and Ss (see Figure 1). For a description of the geometrical parameters, consult also the code of the libabaqus_achvbsT.py library.

You also need to set the elastic constants D11, D12 and D44 and to define the local coordinate system of the material (parameters lcsX and lcsXY). At the moment, only materials described with up to three elastic constants are supported.

The geometrical parameter a represents the current crack length. We construct a range of crack length values and generate an individual Abaqus input file for each crack length by looping over the crack-length list. This for loop is the final part of the pr_testCHV2.py script.
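The end of the preparation script can be sketched as follows. The crack-length bounds and the model-generation call are placeholders: the real function name and signature must be taken from libabaqus_achvbsT.py, and the a0/a1 values below are purely illustrative.

```python
import numpy as np

nJobs = 30
a0, a1 = 0.5e-6, 4.0e-6   # first and last crack length [m] (hypothetical values)

# One crack length per Abaqus model, spanning the chevron notch
crack_lengths = np.linspace(a0, a1, nJobs)

for i, a in enumerate(crack_lengths, start=1):
    jobName = "testCHV2-%d" % i
    # In the real script, the model-generation function from
    # libabaqus_achvbsT.py is called here with the current crack
    # length a; the call below is a hypothetical stand-in:
    # makeChevronBeamModel(jobName, a=a, ...)
    print(jobName, a)
```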

To actually run the pr_testCHV2.py script use the command:

abaqus cae noGUI=pr_testCHV2.py

Generation of all Abaqus models might require some time, so be patient. Progress during model generation can be followed from the output messages.

Note: If one wants to test the preparation script, set the script to generate only one .inp file and run the script as:

abaqus cae script=pr_testCHV2.py

This will open Abaqus GUI and directly generate the beam model for the particular set of parameters.

Note: The only difference between the files pr_testCHV1.py and pr_testCHV2.py is the value of jobBaseName. Since post-processed data for pr_testCHV1 are already present in the ./data folder, the purpose of pr_testCHV2.py is to provide hands-on experience with this package and to allow checking results against the test case testCHV1.

II. Running Abaqus calculations

Abaqus calculations can be carried out on a single PC (with one or several cores and/or core threads) but also on a high-performance computing cluster. The details of how to carry out an Abaqus calculation given an Abaqus input file are assumed to be known.

Completed Abaqus calculations mean that each sub-folder that contained an input (.inp) file now also contains at least the Abaqus output (.odb) file. If the Abaqus calculations were done somewhere else, the .odb files need to be copied back to their appropriate sub-folders within the root folder ./jobs.

A. Working in *nix operating systems

To run serial and/or parallel Abaqus calculations on a single *nix PC (i.e. to run Abaqus calculations for all input files that were generated by pr_testCHV2.py), it can be convenient to use the GNU Parallel tool. To that end, a small bash shell script called run_local_parallel_abaqus_job.sh is provided (read the script; it is very simple). To use this script, type a command such as:

parallel --jobs N -a JOB_LIST_FILE ./run_local_parallel_abaqus_job.sh

where N is the number of (parallel) jobs run simultaneously on the PC (this depends strongly on the particular PC hardware). JOB_LIST_FILE is a text file that contains the list of sub-folders holding the Abaqus input files for which one wants to run calculations. An example of such a list is given in the file parallel_job_list_testCHV1. By default, each of the N Abaqus jobs uses 4 CPU threads. If this is too much, adjust the number of CPUs in the Abaqus command line in the file run_local_parallel_abaqus_job.sh to the desired value. For example, on a PC that has 4 CPUs with threading enabled (8 CPU threads in total), one can run 2 Abaqus jobs simultaneously (N=2), each job using 4 threads (cpus=4).

Note: Each individual Abaqus job is set to calculate the linear elastic response (force-deflection relation) of a beam for a particular set of beam model parameters. By default, force data for an individual Abaqus model are calculated by prescribing a small displacement at node QQ along the y-direction (parameter QQ2=0.01 in pr_testCHV2.py). By default, only two calculation frames are carried out. This default behaviour can be changed by modifying the incrementation parameters in the preparation script pr_testCHV2.py. For the list of available incrementation parameters, check the code in libabaqus_achvbsT.py.

Note: The compliance is, by default, calculated from the force and displacement data of the first frame (see parameter nFrame=1 in function get_chvb_compliance in libutils_achvbsT.py).

III. Post-processing Abaqus calculations

To post-process finished Abaqus calculations (.odb files) with po_CHV.py script, use a command such as:

abaqus python po_CHV.py jobBaseName-XX

The post-processing extracts the prescribed displacement and reaction force data for the particular job of the set jobBaseName indexed with the number XX. For example, to post-process the Abaqus job in folder testCHV2-10, simply replace jobBaseName-XX in the above command with testCHV2-10. This command creates, in the root folder ./data, a sub-folder testCHV2-10 containing a series of files with the reaction force data (files ending with ...RF1.hro, ...RF2.hro or ...RF3.hro) and files with the prescribed displacement data (files ending with ...U1.hro, ...U2.hro or ...U3.hro). The .hro files are simple text files; a short description of their data format is given in the first few commented lines.
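A minimal sketch of reading such a file is given below. The exact column layout must be taken from the commented header of each .hro file; here we only assume comment lines prefixed with '#' followed by two whitespace-separated columns (e.g. time and value):

```python
import io

def read_hro(fileobj):
    """Parse a simple .hro-style text file: skip '#'-prefixed
    comment lines, read two float columns per data line.
    (Assumed format; check the actual header of your files.)"""
    rows = []
    for line in fileobj:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        t, v = line.split()[:2]
        rows.append((float(t), float(v)))
    return rows

# Usage with an in-memory example instead of a real file:
sample = io.StringIO("# time  RF2\n0.0 0.0\n1.0 0.0125\n")
print(read_hro(sample))  # [(0.0, 0.0), (1.0, 0.0125)]
```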

To post-process all 30 Abaqus jobs of the example series testCHV2, it is convenient to embed the above single-job post-processing command in a loop. Under *nix using the bash shell, the full command that post-processes a set of 30 Abaqus calculations can be:

for job in `echo testCHV2-{1..30}`; do abaqus python po_CHV.py $job; done

Note: The post-processing script po_CHV.py is generic to all jobs created by scripts like pr_testCHV2.py and typically need not be modified.

IV. Compliance calibration data - the dependence of compliance on crack length.

To calculate the compliance from the force-displacement data of a particular Abaqus model, only the reaction force (RF2) and displacement (U2) files are actually needed, due to the way the beam model is oriented in the Abaqus coordinate system.

The script get_compliance.py can be used to extract the crack length vs. compliance data from the post-processed files created by po_CHV.py. To use it, type the command:

abaqus python get_compliance.py pr_testCHV2

The last argument of this command has to be the name of the preparation file that was used to create the Abaqus input files of the series, excluding the ".py" extension, e.g. just pr_testCHV2. The crack length vs. compliance data are printed to the terminal.

Fitting a polynomial to the FEM data results in a smooth approximation of the compliance function, which can then be differentiated with respect to the crack length to obtain the fracture toughness, as described above.
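A minimal sketch of this final step using numpy, on synthetic crack-length/compliance data (in practice the data come from get_compliance.py; the polynomial degree and all numerical values below, including Pc, ac and b(ac), are illustrative assumptions):

```python
import numpy as np

# Synthetic compliance calibration data C(a) (illustrative only)
a = np.linspace(1.0e-6, 4.0e-6, 30)     # crack length [m]
C = 1.0e-4 + 2.0e9 * a**3               # compliance [m/N]

# Smooth approximation: fit a polynomial, then differentiate it
p = np.polynomial.Polynomial.fit(a, C, deg=3)
dCda = p.deriv()

# Toughness from the critical load Pc at the critical crack length
# ac, with crack front length b(ac):  Gc = Pc^2 / (2 b(ac)) * dC/da|ac
Pc, ac, b_ac = 5.0e-3, 2.0e-6, 1.5e-6   # hypothetical values
Gc = Pc**2 / (2.0 * b_ac) * dCda(ac)
print(Gc)  # J/m^2
```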



Python library where the generation of the triangular beam model with chevron-notch geometry is implemented. Examples of how to generate a series of Abaqus models for the compliance calibration procedure are given in the Python scripts pr_testCHV1.py and pr_testCHV2.py. This library uses Abaqus Python binding functions.


Python library where various post-processing functions are implemented. The function reportHistoryRequestParameter from this library is used to extract reaction forces and displacements from the Abaqus output (.odb) file. An example of how to post-process Abaqus calculations is given in python script po_CHV.py. This library uses Abaqus python binding functions.


Python library with a few utility functions for final data processing.


Python scripts that are working examples of how to prepare Abaqus input files for compliance calibration using libabaqus_achvbsT.py. The Abaqus input files are stored in folders named ./jobs/jobBaseName-XX, where XX is an integer counter of the Abaqus job within a series. The name of each input file is of the form jobBaseName-XX.inp. Parameters that control the model geometry and the individual Abaqus calculations are set directly in these scripts (see the file libabaqus_achvbsT.py).


Python script that extracts force-displacement data from an Abaqus (.odb) file. The script is quite general and usually does not need to be edited.


Python script that extracts crack length vs. compliance data from post-processed data files created by po_CHV.py.


A utility script that might be of help in running parallel Abaqus calculations using the GNU Parallel tool.


Example of a list of Abaqus jobs to run via the GNU Parallel tool.


The default root folder for storing post-processing data. Post-processed data for chevron-notched beam geometry prepared by the script pr_testCHV1.py are already placed within the ./data folder.


The default root folder where all Abaqus input files are going to be stored. If it is present after unpacking the tar-ball, it should be empty. If it is not present, it should be created.


Documentation folder containing this README.html

Created by Goran Žagar, EPFL (2017)