doFORC Parameters

Configuration file and keywords for doFORC

Acronyms: $\mathcal{O}$ = Optional, $\mathcal{M}$ = Mandatory, N/A = Not Applicable

Table columns: Keyword | $\mathcal{O}$/$\mathcal{M}$ | Type | Default value | Accepted values | Description
Input data
input_file $\mathcal{M}$ string N/A N/A File containing the input data.
  • file name and/or its path may contain blank spaces
  • comments beginning with an exclamation point (!) are ignored
  • non-numeric lines are treated as blank lines
  • line terminator (the character or sequence of characters that marks the end of a line of text) can be CR (usually Macintosh files), LF (usually Unix files), or CRLF (usually Windows files). All lines in a given file must have the same terminator.
  • the different values (columns) in a line can be separated by spaces, tabs, commas, semicolons, or a combination of them
input_file_format $\mathcal{M}$ string N/A PMC, h_m, ha_hr_m, hr_ha_m
file format | column_1 | column_2 | column_3 | column_4
PMC | $h_{\mathrm{applied}}$ | magnetic_moment | user weight (optional) |
h_m | $h_{\mathrm{applied}}$ | magnetic_moment | user weight (optional) |
ha_hr_m | $h_{\mathrm{applied}}$ | $h_{\mathrm{reversal}}$ | magnetic_moment | user weight (optional)
hr_ha_m | $h_{\mathrm{reversal}}$ | $h_{\mathrm{applied}}$ | magnetic_moment | user weight (optional)
  • lines with fewer columns are treated as blank lines
  • any additional columns are ignored
  • $\left( h_{\mathrm{applied}},h_{\mathrm{reversal}} \right)$ play the role of the independent variables $\left( x,y \right)$, also known as explanatory variables, input, predictor, regressor, feature, etc.
  • magnetic_moment plays the role of the dependent variable $f$, also known as output, outcome, response, etc.
PMC
  • file can have any of the PMC / Lakeshore file formats
  • header lines are not mandatory
  • each FORC curve must be preceded by the drift field measurement and a blank line
  • each FORC must be followed by a blank (non-numeric) line
  • the first line from a FORC is the reversal point
  • the first FORC is allowed to consist of a single point
h_m
  • similar to the PMC format, but without the drift field measurements
  • each FORC must be followed by a blank (non-numeric) line
  • the first line from a FORC is the reversal point
ha_hr_m
  • the first three columns represent the $\left( x,y,z \right)$ Cartesian coordinates
  • blank and non-numeric lines are ignored
hr_ha_m
  • the first three columns represent the $\left( x,y,z \right)$ Cartesian coordinates
  • blank and non-numeric lines are ignored
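The parsing rules above (comments after `!`, non-numeric lines acting as blanks, mixed separators, short lines skipped, extra columns ignored) can be sketched in a few lines. This is an illustrative Python reader for the ha_hr_m layout, not doFORC's actual implementation:

```python
import re

def read_ha_hr_m(lines):
    """Sketch of the input-file rules above (not doFORC's reader):
    '!' starts a comment, non-numeric lines are treated as blanks,
    and values may be separated by spaces, tabs, commas, semicolons,
    or any combination of them."""
    points = []
    for line in lines:
        line = line.split('!', 1)[0]                       # drop comments
        tokens = [t for t in re.split(r'[,;\s]+', line.strip()) if t]
        try:
            values = [float(t) for t in tokens]
        except ValueError:
            continue                                       # non-numeric line -> blank
        if len(values) < 3:
            continue                                       # too few columns -> blank
        points.append(tuple(values[:3]))                   # extra columns ignored
    return points
```

The same skeleton covers hr_ha_m by swapping the first two columns, and h_m/PMC by tracking the reversal field across blank-line-separated curves.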
drift_correction $\mathcal{O}$ logical false true, false Only for the PMC / Lakeshore format.
uw $\mathcal{O}$ logical false true, false User weights to give to individual observations in the sum of squared residuals that forms the local fitting criterion.
false No user weight is provided.
true User weights (nonnegative values) are provided as a 3rd (PMC and h_m data) or 4th (otherwise) column in input_file.
If an observation's weight is zero or negative, the observation is ignored in analysis.
atol $\mathcal{O}$ real 0.0 atol ≥ 0 Remove points that are closer than some tolerance (duplicate or nearby points) from input_file.
Only one of the atol and rtol tolerance parameters can be used:
  • atol = absolute tolerance
  • rtol = tolerance relative to the size of the smallest $h_{\mathrm{applied}},h_{\mathrm{reversal}}$ interval
Default value removes only the duplicate points.
rtol $\mathcal{O}$ real 0.0 rtol ≥ 0
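As a sketch of what atol does, assuming a simple sequential scan and, here, the Chebyshev distance in the $(h_{\mathrm{applied}}, h_{\mathrm{reversal}})$ plane (doFORC's actual metric and algorithm may differ):

```python
def remove_close_points(points, atol=0.0):
    """Drop points whose (h_applied, h_reversal) coordinates lie within
    atol of an already-kept point; atol = 0.0 removes only exact
    duplicates.  A quadratic-time illustration of the rule above,
    not doFORC's algorithm."""
    kept = []
    for ha, hr, m in points:
        if all(max(abs(ha - ka), abs(hr - kr)) > atol for ka, kr, _ in kept):
            kept.append((ha, hr, m))
    return kept
```

rtol would behave identically after converting the relative tolerance to an absolute one using the smallest field interval.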
noncircularity $\mathcal{O}$ real 1.0 > 0 Scale factor for input data before smoothing process, during the preprocessing step:

$\;\overline{y}=\dfrac{y}{\mathrm{noncircularity}}$ .

The scale factor changes the shape of the neighborhood: points lying on an ellipse centered at a given point are considered equidistant from that point. This is useful when the variables have different scales.

After smoothing all the data are transformed back to their original state.

Default value = no scaling of the independent variables $\left( h_{\mathrm{applied}},h_{\mathrm{reversal}} \right) \equiv \left( x,y \right)$.

standardize_data $\mathcal{O}$ integer 0 0,1,2,3,4 Standardize input data before smoothing process, during the preprocessing step:
$\;\overline{x}=\dfrac{x-x_{\mathrm{mean}}}{\sigma _x}$, $\;\overline{y}=\dfrac{y-y_{\mathrm{mean}}}{\sigma _y}$, $\;\overline{z}=\dfrac{z}{\sigma_z}$,
where $x_{\mathrm{mean}}$, $y_{\mathrm{mean}}$ are the Winsorized mean values of each variable, and $\sigma _x$, $\sigma _y$, $\sigma_z$ the Winsorized standard deviations of each variable.

Winsorized mean and standard deviation are robust scale estimators in that extreme values of a variable are discarded (the smallest and largest 5% of the data) before estimating the data scaling.

Standardization changes the shape of the neighborhood and it is useful when the variables have significantly different scales.

After smoothing all the data are transformed back to their original state.
0
  • no standardization, the data remain unchanged
1
  • independent variables are divided (scaled) by the same scale factor = $\mathrm{max}\left( \sigma _x,\sigma _y\right) $
    → does NOT change the shape of the neighborhood
  • dependent variable is NOT scaled
2
  • independent variables are scaled by $\sigma _x$ and $\sigma _y$, respectively
    → DOES change the shape of the neighborhood
  • dependent variable is NOT scaled
3
  • independent variables are scaled by the same scale factor = $\mathrm{max}\left( \sigma _x,\sigma _y\right) $
    → DOES change the shape of the neighborhood
  • dependent variable is scaled by $\sigma _z$
    → can affect the statistics
4
  • independent variables are scaled by $\sigma _x$ and $\sigma _y$, respectively
    → DOES change the shape of the neighborhood
  • dependent variable is scaled by $\sigma _z$
    → can affect the statistics
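The Winsorized estimators above can be sketched as follows. Note that classical Winsorization clips the extreme 5% to the nearest retained values rather than dropping them outright; this sketch follows the classical convention and is not doFORC's exact implementation:

```python
def winsorized_stats(values, fraction=0.05):
    """Winsorized mean and standard deviation: the smallest and largest
    5% of the sorted data are clipped to the adjacent retained values
    before estimating location and scale (an illustrative sketch)."""
    s = sorted(values)
    k = int(fraction * len(s))
    if k:
        s = [s[k]] * k + s[k:len(s) - k] + [s[-k - 1]] * k
    mean = sum(s) / len(s)
    var = sum((v - mean) ** 2 for v in s) / (len(s) - 1)
    return mean, var ** 0.5
```

The second assertion below shows the robustness the text describes: replacing one observation by a huge outlier leaves the Winsorized mean unchanged.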
Output data
output_points $\mathcal{O}$ string input_points input_points, ha_hr_regular_grid, hc_hu_regular_grid, rectangular_grid, user_points
input_points Output is provided in the input $\left( h_{\mathrm{applied}},h_{\mathrm{reversal}} \right) \equiv \left( x,y \right)$ points.

This option is intended both for FORC diagram (or other derivative) calculation and for general use, as the calculation is made at the points provided in the input file.
ha_hr_regular_grid Output is provided in a regular grid with:
  • nha × nhr points
  • $\mathrm{ha\_min} \leq h_{\mathrm{applied}}^{\mathrm{grid}} \leq \mathrm{ha\_max}$
  • $\mathrm{hr\_min} \leq h_{\mathrm{reversal}}^{\mathrm{grid}} \leq \mathrm{hr\_max}$
  • points with $h_{\mathrm{applied}}^{\mathrm{grid}} < h_{\mathrm{reversal}}^{\mathrm{grid}}$ are ignored
This option is intended for FORC diagrams (or other derivatives) calculation.
hc_hu_regular_grid Output is provided in a regular grid with:
  • nhc × nhu points
  • $\mathrm{hc\_min} \leq h_{\mathrm{coercive}}^{\mathrm{grid}} \leq \mathrm{hc\_max}$
  • $\mathrm{hu\_min} \leq h_{\mathrm{interaction}}^{\mathrm{grid}} \leq \mathrm{hu\_max}$
where $\left\{ \begin{array}{l}h _{\mathrm{coercive}}=\dfrac{h_{\mathrm{applied}}-h _{\mathrm{reversal}}}{2} \\ h\,_{\mathrm{interaction}}=\dfrac{h_{\mathrm{applied}}+h\,_{\mathrm{reversal}}}{2}\end{array}\right. $, $\left\{ \begin{array}{l}h_{\mathrm{applied}}=h _{\mathrm{interaction}}+h _{\mathrm{coercive}} \\ h _{\mathrm{reversal}}=h _{\mathrm{interaction}}-h _{\mathrm{coercive}}\end{array}\right. $

This option is intended for FORC diagrams (or other derivatives) calculation.
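The $(h_{\mathrm{applied}}, h_{\mathrm{reversal}}) \leftrightarrow (h_{\mathrm{coercive}}, h_{\mathrm{interaction}})$ change of coordinates above is a simple pair of linear maps:

```python
def ha_hr_to_hc_hu(ha, hr):
    """(h_applied, h_reversal) -> (h_coercive, h_interaction),
    per the definitions above."""
    return (ha - hr) / 2.0, (ha + hr) / 2.0

def hc_hu_to_ha_hr(hc, hu):
    """Inverse transform: (h_coercive, h_interaction) -> (h_applied, h_reversal)."""
    return hu + hc, hu - hc
```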
rectangular_grid Output is provided in a regular rectangular grid with:
  • nx × ny points
  • $\mathrm{x\_min} \leq x^{\mathrm{grid}} \leq \mathrm{x\_max}$
  • $\mathrm{y\_min} \leq y^{\mathrm{grid}} \leq \mathrm{y\_max}$
This option is intended for a general use.
user_points Output is provided in the user-defined points from user_output_points_file.

This option is intended both for FORC diagram (or other derivative) calculation and for general use, as the calculation is made at the points provided by the user.
user_output_points_file $\mathcal{M}$ string N/A N/A File containing the user-defined output points. Only for output_points = user_points.
ha_in_min $\mathcal{O}$ real N/A $\geq \min \left( h_{\mathrm{applied}}\right) $ Only for output_points = input_points.

Crop input points, ignoring the points that are outside the domain:

$\; \left[ \mathrm{ha\_in\_min},\,\mathrm{ha\_in\_max}\right] \times \left[ \mathrm{hr\_in\_min},\,\mathrm{hr\_in\_max}\right] $.

In order to diminish the boundary effects (numerical artifacts), the processing is accomplished (if there are input points) on a larger domain:

$\left[ \mathrm{ha\_in\_min-}\Delta h_{a},\,\mathrm{ha\_in\_max+}\Delta h_{a}\right] \times \left[ \mathrm{hr\_in\_min-}\Delta h_{r},\,\mathrm{hr\_in\_max+}\Delta h_{r}\right], $

where $\left\{ \begin{array}{l} \Delta h_{a}=0.1\left( \,\max \left( h_{\mathrm{applied}}\right) -\min \left( h_{\mathrm{applied}}\right) \right) \\ \Delta h_{r}=0.1\left( \,\max \left( h_{\mathrm{reversal}}\right) -\min \left( h_{\mathrm{reversal}}\right) \right) \end{array}\right. $
ha_in_max $\mathcal{O}$ real N/A $\leq \max \left( h_{\mathrm{applied}}\right) $
hr_in_min $\mathcal{O}$ real N/A $\geq \min \left( h_{\mathrm{reversal}}\right) $
hr_in_max $\mathcal{O}$ real N/A $\leq \max \left( h_{\mathrm{reversal}}\right) $
nha $\mathcal{M}$ integer N/A nha > 0 Only for output_points = ha_hr_regular_grid.

In order to diminish the boundary effects (numerical artifacts), the processing is accomplished (if there are input points) on a larger domain:

$\left[ \mathrm{ha\_min}-\Delta h_{a},\,\mathrm{ha\_max}+\Delta h_{a}\right] \times \left[ \mathrm{hr\_min}-\Delta h_{r},\,\mathrm{hr\_max}+\Delta h_{r}\right] $,

where $\left\{ \begin{array}{l} \Delta h_{a}=0.1\left( \mathrm{ha\_max} - \mathrm{ha\_min} \right) \\ \Delta h_{r}=0.1\left( \mathrm{hr\_max} - \mathrm{hr\_min} \right) \end{array}\right. $.
ha_min $\mathcal{M}$ real N/A $\geq \min \left( h_{\mathrm{applied}}\right) $
ha_max $\mathcal{M}$ real N/A $\leq \max \left( h_{\mathrm{applied}}\right) $
nhr $\mathcal{M}$ integer N/A nhr > 0
hr_min $\mathcal{M}$ real N/A $\geq \min \left( h_{\mathrm{reversal}}\right) $
hr_max $\mathcal{M}$ real N/A $\leq \max \left( h_{\mathrm{reversal}}\right) $
nhc $\mathcal{M}$ integer N/A nhc > 0 Only for output_points = hc_hu_regular_grid.

In order to diminish the boundary effects (numerical artifacts), the processing is accomplished (if there are input points) on a larger domain:

$\left[ \mathrm{hc\_min}-\Delta h_{c},\,\mathrm{hc\_max}+\Delta h_{c}\right] \times \left[ \mathrm{hu\_min}-\Delta h_{u},\,\mathrm{hu\_max}+\Delta h_{u}\right] $,

where $\left\{ \begin{array}{l} \Delta h_{c}=0.1\left( \mathrm{hc\_max} - \mathrm{hc\_min} \right) \\ \Delta h_{u}=0.1\left( \mathrm{hu\_max} - \mathrm{hu\_min} \right) \end{array}\right. $.
hc_min $\mathcal{M}$ real N/A $\geq\min \left( h_{\mathrm{coercive}}\right) $
hc_max $\mathcal{M}$ real N/A $\leq\max \left( h_{\mathrm{coercive}}\right) $
nhu $\mathcal{M}$ integer N/A nhu > 0
hu_min $\mathcal{M}$ real N/A $\geq \min \left( h_{\mathrm{interaction}}\right) $
hu_max $\mathcal{M}$ real N/A $\leq \max \left( h_{\mathrm{interaction}}\right) $
nx $\mathcal{M}$ integer N/A nx > 0 Only for output_points = rectangular_grid.

In order to diminish the boundary effects (numerical artifacts), the processing is accomplished (if there are input points) on a larger domain:

$\left[ \mathrm{x\_min}-\Delta x,\,\mathrm{x\_max}+\Delta x\right] \times \left[ \mathrm{y\_min}-\Delta y,\,\mathrm{y\_max}+\Delta y\right] $,

where $\left\{ \begin{array}{l} \Delta x=0.1\left( \mathrm{x\_max} - \mathrm{x\_min} \right) \\ \Delta y=0.1\left( \mathrm{y\_max} - \mathrm{y\_min} \right) \end{array}\right. $.
x_min $\mathcal{M}$ real N/A $\geq \min \left( x\right) $
x_max $\mathcal{M}$ real N/A $\leq \max \left( x\right) $
ny $\mathcal{M}$ integer N/A ny > 0
y_min $\mathcal{M}$ real N/A $\geq \min \left( y\right) $
y_max $\mathcal{M}$ real N/A $\leq \max \left( y\right) $
nsb $\mathcal{O}$ integer 0 $\mathrm{nsb} \geq 0$ Number of points to skip at the border $h_{\mathrm{applied}}=h_{\mathrm{reversal}}$ or $h_{\mathrm{coercive}}=0$ in a regular grid output.

These points are only omitted in the output data, not in the calculations.

This option is useful to hide the possible boundary effects (numerical artifacts).
order_of_derivative $\mathcal{O}$ integer 0 6 0, 1, 2, 3, 4, 5, 6 Order of the partial derivatives to be numerically computed at the output_points
0 zero derivative, i.e., the smoothed (estimated) value $\hat{f}$
1, 2 first order derivatives $\dfrac{\partial \hat{f}}{\partial x}$, $\dfrac{\partial \hat{f}}{\partial y}$
3, 4, 5 second order derivatives $\dfrac{\partial ^{2}\hat{f}}{\partial x^{2}}$, $\dfrac{\partial ^{2}\hat{f}}{\partial x\partial y}$, $\dfrac{\partial ^{2}\hat{f}}{\partial y^{2}}$
6 FORC diagram $=-\dfrac{1}{2}\dfrac{\partial ^{2}\hat{f}}{\partial x\partial y}$
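For intuition, the order_of_derivative = 6 quantity is the mixed second derivative scaled by $-1/2$. Below is a finite-difference sketch on an already-smoothed regular grid; doFORC itself obtains the derivatives analytically from the local polynomial fit, not by finite differences:

```python
def forc_diagram(f, dha, dhr):
    """Central-difference estimate of -(1/2) d2f/(dx dy) on a regular
    grid f[i][j] = f(ha_i, hr_j), with (x, y) = (h_applied, h_reversal).
    An illustration only; boundary rows/columns are left at zero."""
    ni, nj = len(f), len(f[0])
    rho = [[0.0] * nj for _ in range(ni)]
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            mixed = (f[i+1][j+1] - f[i+1][j-1]
                     - f[i-1][j+1] + f[i-1][j-1]) / (4 * dha * dhr)
            rho[i][j] = -0.5 * mixed
    return rho
```

For the test, $f = x\,y$ has $\partial^2 f/\partial x \partial y = 1$, so the diagram value is $-0.5$ everywhere in the interior.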
format $\mathcal{O}$ string g17.5e3 N/A Format at which the values in the output files are to be saved.
Effect (purpose) | Form | Requisite | Description
Exponential form with 'D' exponents | Dw.d | $\mathrm{w} > \mathrm{d}+7$ | printed number has a zero digit as the integral part; the middle d positions are for the number in normalized form; the last three positions are for the exponent, including its sign
Exponential form with 'E' exponents | Ew.d | $\mathrm{w} > \mathrm{d}+7$ |
 | Ew.dEe | $\mathrm{w} > \mathrm{d}+\mathrm{e}+5$ | the only difference from the above is that the exponent part has e positions plus one more for its sign
Engineering form | ENw.d | $\mathrm{w} > \mathrm{d}+9$ | printed number has at least one and no more than three non-zero digits as the integral part, and the exponent is a multiple of three
 | ENw.dEe | $\mathrm{w} > \mathrm{d}+\mathrm{e}+7$ |
Scientific form | ESw.d | $\mathrm{w} > \mathrm{d}+7$ | printed number has one non-zero digit as the integral part
 | ESw.dEe | $\mathrm{w} > \mathrm{d}+\mathrm{e}+5$ |
Decimal form (no exponent) | Fw.d | $\mathrm{w} > \mathrm{d}+2$ | if $d=0$, no fractional part is printed, the right-most position being the decimal point
Mixture of the F and E formats | Gw.d | see above | if a number can reasonably be printed with the F format, it is used; all others (very large and very small) are displayed in the E format
 | Gw.dEe | see above |
where:

w = number of positions to be used to write a number including its sign, decimal point, decimal places, exponent part and leading spaces between two consecutive numbers on a line

d = number of digits to the right of the decimal point

e = number of digits in the exponent part without its sign

Warning: errors in FORMAT are not detected until writing time, when the output may be printed as asterisks (*).
Regression / Least squares fit
regression_method $\mathcal{O}$ string qshep loess, qshep, cshep, tshep
loess method using quadratic ...
qshep quadratic → modified quadratic polynomial Shepard method
cshep cubic → modified cubic polynomial Shepard method
tshep trigonometric → based on a cosine series
kernel $\mathcal{O}$ integer 6 1 – 15 Kernel (weight) function of the scaled distance $u$:
1 uniform (rectangular window) $1$
2 triangular $1-\left\vert u\right\vert $
3 Epanechnikov (quadratic, parabolic) $1-u^{2}$
4 quartic (biweight, bisquare) $\left( 1-u^{2}\right) ^{2}$
5 triweight $\left( 1-u^{2}\right) ^{3}$
6 tricube $\left( 1-\left\vert u\right\vert ^{3}\right) ^{3}$
7 raised cosine (Tukey-Hanning) $\dfrac{1+\cos \left( \pi u\right) }{2}$
8 cosine $\cos \left( \dfrac{\pi }{2}u\right) $
9 Gaussian $\exp\left( -\dfrac{1}{2}\dfrac{u^{2}}{\sigma ^{2}}\right) $
10 exponential $\exp \left( -\lambda \left\vert u\right\vert \right) $
11 inverse distance $\dfrac{1}{1+\left\vert u\right\vert }$
12 Cauchy $\dfrac{1}{1+u^{2}}$
13 Parzen $\left\{ \begin{array}{ll} 1-6u^{2}+6\left\vert u\right\vert ^{3} & \mathrm{if}\quad 0\leq \left\vert u\right\vert \leq \frac{1}{2} \\ 2\left( 1-\left\vert u\right\vert \right) ^{3} & \mathrm{if}\quad \frac{1}{2}<\left\vert u\right\vert \leq 1 \end{array}\right. $
14 McLain $\dfrac{1}{\left( \varepsilon +\left\vert u\right\vert \right) ^{2}}$
15 Franke-Nielson $\dfrac{1-\left\vert u\right\vert }{\left\vert u\right\vert }$
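A few of the kernels above, written out as weight functions of the scaled distance $u$. This is an illustrative sketch covering kernels 1–8 only; the remaining kernels follow the same pattern:

```python
import math

def kernel_weight(k, u):
    """Weight w(u) for kernels 1-8 from the table above, with
    w(u) = 0 outside |u| <= 1.  An illustration, not doFORC code."""
    if abs(u) > 1:
        return 0.0
    table = {
        1: lambda u: 1.0,                              # uniform (rectangular window)
        2: lambda u: 1 - abs(u),                       # triangular
        3: lambda u: 1 - u * u,                        # Epanechnikov
        4: lambda u: (1 - u * u) ** 2,                 # quartic (biweight)
        5: lambda u: (1 - u * u) ** 3,                 # triweight
        6: lambda u: (1 - abs(u) ** 3) ** 3,           # tricube (the default)
        7: lambda u: (1 + math.cos(math.pi * u)) / 2,  # raised cosine
        8: lambda u: math.cos(math.pi * u / 2),        # cosine
    }
    return table[k](u)
```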
uif $\mathcal{O}$ real 1.0 N/A User interpolation factor: scale factor for the weight associated with a node in the least squares system for the corresponding nodal function.

A large weight can be used to force interpolation.

Default value 1 means "pure" fitting.
nrr $\mathcal{O}$ integer 0 nrr ≥ 0 Number of robust locally weighted regressions: the initial fit is followed by nrr iteratively reweighted fits.

Such iterations are appropriate when there are outliers in the data or when the error distribution is a symmetric long-tailed distribution.

If nrr is provided then:
  • nn_list should be provided also
  • CRITERION should not be provided (or should be set to DEFAULT)
  • only (nn, RSS, RSSm) is provided as statistics
Default value 0 means no robust regression.
nn_list / nn_range
Number of nearest neighbors nn (smoothing parameter) = number of data points to be used in the least squares fit for the coefficients defining the nodal functions.
  • the radius of each neighborhood is chosen so that the neighborhood contains the specified number of data points
  • nn in each local neighborhood controls the smoothness of the estimated surface
  • minimum value of nn: loess: 7, qshep: 6, cshep: 10, tshep: 10
  • one of the nn_list and nn_range options is required
  • only one of the nn_list and nn_range options can be used
nn_list $\mathcal{M}$ integer N/A see above nn_list specifies a list of positive integer nn values and it can be given as:
  • a list of values, separated by spaces or by commas
  • a loop construct start : end : increment, where increment is $\mathcal{O}$ (default increment is 1)
  • a combination of the above methods
  • for example, '10 20:30:5 50' will return the values '10,20,25,30,50'

- if no CRITERION is specified → a separate fit is provided for each nn value

- if a CRITERION is specified → all values specified in nn_list are examined, and the value that minimizes the specified CRITERION is selected
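The nn_list syntax (values plus start : end : increment loop constructs) can be parsed as sketched below. This illustrates the documented syntax only; it is not doFORC's parser:

```python
def parse_nn_list(spec):
    """Expand an nn_list specification: values separated by spaces or
    commas, optionally given as start:end:increment loop constructs
    (default increment 1), in any combination."""
    values = []
    for token in spec.replace(',', ' ').split():
        if ':' in token:
            parts = [int(p) for p in token.split(':')]
            start, end = parts[0], parts[1]
            step = parts[2] if len(parts) == 3 else 1
            values.extend(range(start, end + 1, step))
        else:
            values.append(int(token))
    return values
```

The first assertion reproduces the example from the table: '10 20:30:5 50' expands to 10, 20, 25, 30, 50.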
nn_range $\mathcal{M}$ integer N/A see above nn_range specifies two values: lower, upper
  • the two values must be separated by spaces or by a comma
  • only the values lower ≤ nn ≤ upper are examined, and the value that minimizes the specified CRITERION is selected
  • golden section search method is used to find a local minimum of the specified CRITERION in the [lower, upper] range
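A golden-section search over integer nn values might look like the following sketch. It assumes a unimodal criterion on [lower, upper]; the probe-rounding details are this sketch's own choices, not doFORC's:

```python
def golden_section_min(crit, lower, upper):
    """Golden-section search for a local minimum of a unimodal
    criterion crit(nn) over integers nn in [lower, upper]."""
    inv_phi = (5 ** 0.5 - 1) / 2                 # 1/golden ratio ~ 0.618
    a, b = lower, upper
    while b - a > 2:
        c = int(round(b - inv_phi * (b - a)))    # lower probe
        d = int(round(a + inv_phi * (b - a)))    # upper probe
        if c >= d:                               # keep the probes distinct
            c, d = a + (b - a) // 3, b - (b - a) // 3
        if crit(c) <= crit(d):
            b = d                                # minimum lies in [a, d]
        else:
            a = c                                # minimum lies in [c, b]
    return min(range(a, b + 1), key=crit)        # final scan of <= 3 values
```

With an nn_range such as (7, 100), only O(log) criterion evaluations are needed rather than one fit per candidate nn.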
rnnw $\mathcal{O}$ real 1.0 rnnw > 0

Only for 'shep' methods and only for output_points ≠ input_points.

Relative number (with regard to nn) of nearest neighbors used to compute the output in points that are different from the input points.
Statistics
ihat $\mathcal{O}$ integer 1 0, 1, 2 Only for 'loess' method:
  • determines how the statistical quantities are computed
  • can be decreased by the 'nrr' option
  • can be increased if necessary by the CRITERION parameter
ihat | statistics provided
0 | nn, RSS, RSSm
1 | nn, RSS, RSSm, RSE, DF1, delta1, delta2, rho, AICC, GCV
2 | nn, RSS, RSSm, RSE, DF1, DF2, DF3, delta1, delta2, rho, AICC, AICC1, GCV
0 The hat matrix $L$ is not computed
1 Only the diagonal of $L$ matrix is computed → approximate delta
2 Full $L$ matrix is computed:
  • exact delta
  • use only for testing
  • not meant for routine use, because the computation time can be prohibitive
istat $\mathcal{O}$ integer 1 0, 1, 2 Only for 'shep' methods:
  • regardless of 'istat' value, no approximations are used in the statistics computation
  • can be decreased by the 'nrr' option
  • can be increased if necessary by the CRITERION parameter
istat | statistics provided
0 | nn, RSS, RSSm
1 | nn, RSS, RSSm, RSE, DF1, delta1, AICC, GCV
2 | nn, RSS, RSSm, RSE, DF1, DF2, DF3, delta1, delta2, rho, AICC, AICC1, GCV
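For reference, GCV and AICC are commonly defined in the local-regression literature as below, where $n$ is the number of observations, $\mathrm{tr}(L)$ the trace of the hat matrix, and RSS the residual sum of squares; doFORC's exact expressions (and its AICC vs AICC1 distinction) may differ in detail:

```latex
\mathrm{GCV} = \frac{n\,\mathrm{RSS}}{\left( n-\mathrm{tr}(L)\right)^{2}},
\qquad
\mathrm{AICC} = \log\!\left(\frac{\mathrm{RSS}}{n}\right)
  + 1 + \frac{2\left(\mathrm{tr}(L)+1\right)}{n-\mathrm{tr}(L)-2}
```

Both criteria penalize the effective number of parameters $\mathrm{tr}(L)$, so minimizing them over nn balances fit quality against smoothness.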
alpha $\mathcal{O}$ real 0.05 0 < alpha < 1 Significance level for confidence intervals.

Only for ihat = 2 or istat = 2.
smoothresidual $\mathcal{O}$ logical false true, false Add to the smoothed_input file a smoothing fit of the residuals for each smoothing parameter nn.
This fit is computed independently of the fit that is used to obtain the residuals.
CRITERION $\mathcal{O}$ string DEFAULT DEFAULT, AICC, AICC1, GCV, DF1, DF2, DF3 Criterion for automatic smoothing parameter selection.
DEFAULT means:
- no automatic selection for nn_list
- AICC for nn_range
AICC an approximation of AICC1
AICC1 corrected/improved version of the Akaike information criterion (AIC)
GCV generalized cross-validation
DF1, DF2, DF3 degrees of freedom
DFtarget $\mathcal{M}$ real N/A 1 < DFtarget < n Degrees-of-freedom target → only for the 'DF' criteria
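Putting the keywords together, a minimal configuration file might look as follows. This is a hypothetical sketch: the keyword names are the documented ones, but the field values and the keyword = value layout are assumptions, not taken from the doFORC distribution.

```
! FORC diagram on a regular (hc, hu) grid -- hypothetical example
input_file          = sample.frc
input_file_format   = PMC
drift_correction    = true
output_points       = hc_hu_regular_grid
nhc                 = 100
hc_min              = 0.0
hc_max              = 0.12
nhu                 = 100
hu_min              = -0.06
hu_max              = 0.06
order_of_derivative = 6
regression_method   = loess
nn_range            = 20, 200
CRITERION           = AICC
```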