doFORC Parameters

### Configuration file and keywords for doFORC

Acronyms: $\mathcal{O}$ = Optional, $\mathcal{M}$ = Mandatory, N/A = Not Applicable.

Each keyword below is listed as: **keyword** ($\mathcal{O}$ or $\mathcal{M}$, type; default value; accepted values), followed by its description.

#### Input data
**input_file** ($\mathcal{M}$, string; default: N/A; accepted: N/A): File containing the input data.
• the file name and/or its path may contain blank spaces
• comments beginning with an exclamation point (!) are ignored
• non-numeric lines are treated as blank lines
• the line terminator (the character or sequence of characters that marks the end of a line of text) can be CR (usually Macintosh files), LF (usually Unix files), or CRLF (usually Windows files); all lines in a given file must use the same terminator
• the values (columns) in a line can be separated by spaces, tabs, commas, semicolons, or any combination of them
**input_file_format** ($\mathcal{M}$, string; default: N/A; accepted: PMC, h_m, ha_hr_m, hr_ha_m)

| file format | column 1 | column 2 | column 3 | column 4 |
| --- | --- | --- | --- | --- |
| PMC | $h_{\mathrm{applied}}$ | magnetic_moment | user weight (optional) | |
| h_m | $h_{\mathrm{applied}}$ | magnetic_moment | user weight (optional) | |
| ha_hr_m | $h_{\mathrm{applied}}$ | $h_{\mathrm{reversal}}$ | magnetic_moment | user weight (optional) |
| hr_ha_m | $h_{\mathrm{reversal}}$ | $h_{\mathrm{applied}}$ | magnetic_moment | user weight (optional) |

• lines with fewer columns are treated as blank lines
• any additional columns are ignored
• $\left( h_{\mathrm{applied}},h_{\mathrm{reversal}} \right)$ play the role of the independent variables $\left( x,y \right)$, also known as explanatory variables, input, predictor, regressor, feature, etc.
• magnetic_moment plays the role of the dependent variable $f$, also known as output, outcome, response, etc.
**PMC**
• the file can have any of the PMC / Lakeshore file formats
• header lines are not mandatory
• each FORC curve must be preceded by the drift field measurement and a blank line
• each FORC must be followed by a blank (non-numeric) line
• the first line of a FORC is the reversal point
• the first FORC need not contain even a single point

**h_m**
• similar to the PMC format, but without the drift field measurements
• each FORC must be followed by a blank (non-numeric) line
• the first line of a FORC is the reversal point

**ha_hr_m**
• the first three columns represent the $\left( x,y,z \right)$ Cartesian coordinates
• blank and non-numeric lines are ignored

**hr_ha_m**
• the first three columns represent the $\left( x,y,z \right)$ Cartesian coordinates
• blank and non-numeric lines are ignored
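These parsing rules can be illustrated with a short sketch. This is an illustration of the stated rules only, not doFORC's actual reader; the function name and the exact handling of blank lines are assumptions.

```python
import re

def parse_data_lines(text):
    """Illustrate the input rules: '!' starts a comment, non-numeric and
    blank lines act as separators, columns split on spaces/tabs/commas/semicolons."""
    rows = []
    for line in text.splitlines():
        line = line.split("!", 1)[0]                       # drop '!' comments
        fields = [f for f in re.split(r"[,;\s]+", line.strip()) if f]
        if not fields:
            rows.append(None)                              # blank line
            continue
        try:
            rows.append([float(f) for f in fields])
        except ValueError:
            rows.append(None)                              # non-numeric -> blank
    return rows
```

A non-numeric header line therefore behaves exactly like the blank line that must follow each FORC.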
**drift_correction** ($\mathcal{O}$, logical; default: false; accepted: true, false): Only for the PMC / Lakeshore format.
**uw** ($\mathcal{O}$, logical; default: false; accepted: true, false): User weights to give to individual observations in the sum of squared residuals that forms the local fitting criterion.
• false: no user weight is provided
• true: user weights (nonnegative values) are provided as a 3rd (PMC and h_m data) or 4th (otherwise) column in input_file; if an observation's weight is zero or negative, the observation is ignored in the analysis
**atol** ($\mathcal{O}$, real; default: 0.0; accepted: atol ≥ 0)
**rtol** ($\mathcal{O}$, real; default: 0; accepted: rtol ≥ 0)

Remove points that are closer than some tolerance (duplicate or nearby points) from input_file. Only one of the atol and rtol tolerance parameters can be used:
• atol: absolute tolerance
• rtol: tolerance relative to the size of the smallest $h_{\mathrm{applied}}$, $h_{\mathrm{reversal}}$ interval
The default value removes only the duplicate points.
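A minimal sketch of tolerance-based point removal, assuming an absolute Euclidean tolerance in the $\left( h_{\mathrm{applied}},h_{\mathrm{reversal}} \right)$ plane; doFORC's exact distance measure and scan order are not documented here, so this only illustrates the idea:

```python
import math

def remove_close_points(points, atol=0.0):
    """Keep a point only if it is farther than atol (in the first two
    coordinates) from every already-kept point; atol = 0 keeps everything
    except exact duplicates, matching the documented default."""
    kept = []
    for p in points:
        if all(math.dist(p[:2], q[:2]) > atol for q in kept):
            kept.append(p)
    return kept
```

The quadratic pairwise scan is fine for a sketch; a real implementation would use a spatial index for large data sets.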
**fill_x-steps_gt** ($\mathcal{O}$, real; default: 0; accepted: ≥ 0): Resample individual FORCs by filling gaps greater than fill_x-steps_gt on each individual FORC [curves given by the points with the same $y$ ($h_{\mathrm{reversal}}$) coordinate], in the preprocessing step.

This feature is useful in the case of missing data or of large gaps in the input data.

Statistics on the increment $dx$ within each individual curve and on the increment $dy$ between curves are printed in the Command Prompt window.

Default value = no filling.
**merge_x-steps_lt** ($\mathcal{O}$, real; default: 0; accepted: ≥ 0): Resample individual FORCs by merging points that are separated by steps less than merge_x-steps_lt on each individual FORC [curves given by the points with the same $y$ ($h_{\mathrm{reversal}}$) coordinate], in the preprocessing step.

This feature is useful in the case of "very close" points in the input data.

The merging procedure may return points that were not input points, and it can even have a "smoothing effect".

The first point, corresponding to the smallest value of the $x$ ($h_{\mathrm{applied}}$) coordinate on each individual FORC, is not modified.

Default value = no merging.
**nn_iFORC** ($\mathcal{O}$, integer; default: 0; accepted: nn_iFORC = 0 or nn_iFORC ≥ 2): Number of nearest neighbors used to smooth each individual FORC with respect to $h_{\mathrm{applied}}$, in the preprocessing step.

Smoothing is performed with a local regression around each input point, the size of each neighborhood being chosen so that the neighborhood contains at most nn_iFORC+1 data points.

The individually smoothed curves will subsequently be smoothed with respect to both $h_{\mathrm{applied}}$ and $h_{\mathrm{reversal}}$, in the processing step.

Default value = no smoothing.
**noncircularity** ($\mathcal{O}$, real; default: 1.0; accepted: > 0): Scale factor applied to the input data before the smoothing process, in the preprocessing step:

$\;\overline{y}=\dfrac{y}{\mathrm{noncircularity}}$ .

The scale factor changes the shape of the neighborhood: points lying on an ellipse centered at a given point are considered equidistant from that point. This is useful when the variables have different scales.

After smoothing, all data are transformed back to their original state.

Default value = no scaling of the independent variables $\left( h_{\mathrm{applied}},h_{\mathrm{reversal}} \right) \equiv \left( x,y \right)$.
**standardize_data** ($\mathcal{O}$, integer; default: 0; accepted: 0, 1, 2, 3, 4): Standardize the input data before the smoothing process, in the preprocessing step:

$\;\overline{x}=\dfrac{x-x_{\mathrm{mean}}}{\sigma _x}$, $\;\overline{y}=\dfrac{y-y_{\mathrm{mean}}}{\sigma _y}$, $\;\overline{z}=\dfrac{z}{\sigma_z}$,

where $x_{\mathrm{mean}}$, $y_{\mathrm{mean}}$ are the Winsorized mean values of each variable, and $\sigma _x$, $\sigma _y$, $\sigma_z$ the Winsorized standard deviations of each variable.

The Winsorized mean and standard deviation are robust scale estimators in that extreme values of a variable (the smallest and largest 5% of the data) are discarded before estimating the data scaling.

Standardization changes the shape of the neighborhood and is useful when the variables have significantly different scales.

After smoothing, all data are transformed back to their original state.

• 0: no standardization; the data remain unchanged
• 1: the independent variables are divided (scaled) by the same scale factor $\mathrm{max}\left( \sigma _x,\sigma _y\right)$, which does NOT change the shape of the neighborhood; the dependent variable is NOT scaled
• 2: the independent variables are scaled by $\sigma _x$ and $\sigma _y$, respectively, which DOES change the shape of the neighborhood; the dependent variable is NOT scaled
• 3: the independent variables are scaled as for 1; the dependent variable is scaled by $\sigma_z$, which can affect the statistics
• 4: the independent variables are scaled as for 2; the dependent variable is scaled by $\sigma_z$, which can affect the statistics
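The Winsorized estimators can be sketched as below. The classical Winsorized definition clamps the extremes to the nearest retained value; the text above says the extremes are "discarded", so doFORC's exact estimator may differ slightly, and the 5% fraction is taken from the text.

```python
def winsorized_stats(values, frac=0.05):
    """Winsorized mean and standard deviation: the lowest and highest
    `frac` of the sorted data are clamped to the nearest retained value
    before computing the moments."""
    v = sorted(values)
    k = int(frac * len(v))
    w = [min(max(x, v[k]), v[-k - 1]) for x in v]   # clamp both tails
    mean = sum(w) / len(w)
    return mean, (sum((x - mean) ** 2 for x in w) / len(w)) ** 0.5
```

A single wild outlier barely moves the Winsorized mean, which is exactly why such estimators are used for data scaling.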
**curves_to_be_processed** ($\mathcal{O}$, string; default: all; accepted: positive integers, $dy \geq 0$, extend): Curves (FORCs) to be processed in the processing step.

The command has three optional subcommands (parts):

• a list of the curves to be processed, given as a list of values, as a loop construct start:end:increment (increment is optional; the default increment is 1), as a combination of them, or as one of the keywords 'all', 'odd', 'even'
• the minimum increment (step) dy between the curves selected according to the first subcommand; curves separated by an increment smaller than dy will not be processed
• 'extend' (or 'ext'): the first two subcommands can restrain (if necessary) the cropped region defined by ha_in_min, ..., ha_min, ..., hc_min, ..., or x_min, ...; this subcommand extends the cropped region to the maximum values allowed by the first two subcommands

For example, '10:30:2 dy=0.01 extend' will select from the curves 10, 12, ..., 28, 30 those curves for which the increment between them is greater than 0.01, and will extend the cropped region to the maximum allowed values.

Default value = all curves will be processed.
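The list/loop notation for selecting curves can be expanded as in the sketch below ('all'/'odd'/'even', the dy filter, and 'extend' are omitted for brevity; the function name is an illustration, not part of doFORC):

```python
def parse_selection(spec):
    """Expand a spec such as '10 20:30:5 50' into a sorted list of integers.
    Accepts plain values and start:end:increment loops (default increment 1)."""
    values = set()
    for token in spec.replace(",", " ").split():
        if ":" in token:
            parts = [int(p) for p in token.split(":")]
            start, end = parts[0], parts[1]
            inc = parts[2] if len(parts) == 3 else 1
            values.update(range(start, end + 1, inc))
        else:
            values.add(int(token))
    return sorted(values)
```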
#### Output data
**output_points** ($\mathcal{O}$, string; default: input_points; accepted: input_points, ha_hr_regular_grid, hc_hu_regular_grid, rectangular_grid, user_points)

• input_points: output is provided at the input $\left( h_{\mathrm{applied}},h_{\mathrm{reversal}} \right) \equiv \left( x,y \right)$ points. This option is intended both for FORC diagram (or other derivative) calculations and for general use, as the calculation is made at the points provided in the input file.

• ha_hr_regular_grid: output is provided on a regular grid with nha × nhr points, $\mathrm{ha\_min} \leq h_{\mathrm{applied}}^{\mathrm{grid}} \leq \mathrm{ha\_max}$ and $\mathrm{hr\_min} \leq h_{\mathrm{reversal}}^{\mathrm{grid}} \leq \mathrm{hr\_max}$; points with $h_{\mathrm{applied}}^{\mathrm{grid}} < h_{\mathrm{reversal}}^{\mathrm{grid}}$ are ignored. This option is intended for FORC diagram (or other derivative) calculations.

• hc_hu_regular_grid: output is provided on a regular grid with nhc × nhu points, $\mathrm{hc\_min} \leq h_{\mathrm{coercive}}^{\mathrm{grid}} \leq \mathrm{hc\_max}$ and $\mathrm{hu\_min} \leq h_{\mathrm{interaction}}^{\mathrm{grid}} \leq \mathrm{hu\_max}$, where

$\left\{ \begin{array}{l}h_{\mathrm{coercive}}=\dfrac{h_{\mathrm{applied}}-h_{\mathrm{reversal}}}{2} \\ h_{\mathrm{interaction}}=\dfrac{h_{\mathrm{applied}}+h_{\mathrm{reversal}}}{2}\end{array}\right.$, $\left\{ \begin{array}{l}h_{\mathrm{applied}}=h_{\mathrm{interaction}}+h_{\mathrm{coercive}} \\ h_{\mathrm{reversal}}=h_{\mathrm{interaction}}-h_{\mathrm{coercive}}\end{array}\right.$

This option is intended for FORC diagram (or other derivative) calculations.

• rectangular_grid: output is provided on a regular rectangular grid with nx × ny points, $\mathrm{x\_min} \leq x^{\mathrm{grid}} \leq \mathrm{x\_max}$ and $\mathrm{y\_min} \leq y^{\mathrm{grid}} \leq \mathrm{y\_max}$. This option is intended for general use.

• user_points: output is provided at the user-defined points from user_output_points_file. This option is intended both for FORC diagram (or other derivative) calculations and for general use, as the calculation is made at the points provided by the user.
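The field transformations used by the hc_hu_regular_grid option follow directly from the definitions above:

```python
def to_hc_hu(ha, hr):
    """(h_applied, h_reversal) -> (h_coercive, h_interaction)."""
    return (ha - hr) / 2, (ha + hr) / 2

def to_ha_hr(hc, hu):
    """(h_coercive, h_interaction) -> (h_applied, h_reversal)."""
    return hu + hc, hu - hc
```

The two maps are inverses of each other, so a grid built in one coordinate pair can always be evaluated in the other.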
**user_output_points** ($\mathcal{M}$, string; default: N/A; accepted: N/A): File containing the user-defined output points. Only for output_points = user_points.
**ha_in_min** ($\mathcal{O}$, real; accepted: $\geq \min \left( h_{\mathrm{applied}}\right)$)
**ha_in_max** ($\mathcal{O}$, real; accepted: $\leq \max \left( h_{\mathrm{applied}}\right)$)
**hr_in_min** ($\mathcal{O}$, real; accepted: $\geq \min \left( h_{\mathrm{reversal}}\right)$)
**hr_in_max** ($\mathcal{O}$, real; accepted: $\leq \max \left( h_{\mathrm{reversal}}\right)$)

Only for output_points = input_points.

Crop the input points, ignoring the points that are outside the domain:

$\; \left[ \mathrm{ha\_in\_min},\,\mathrm{ha\_in\_max}\right] \times \left[ \mathrm{hr\_in\_min},\,\mathrm{hr\_in\_max}\right]$.

In order to diminish the boundary effects (numerical artifacts), the processing is accomplished (if there are input points) on a larger domain:

$\left[ \mathrm{ha\_in\_min}-\Delta h_{a},\,\mathrm{ha\_in\_max}+\Delta h_{a}\right] \times \left[ \mathrm{hr\_in\_min}-\Delta h_{r},\,\mathrm{hr\_in\_max}+\Delta h_{r}\right],$

where $\left\{ \begin{array}{l} \Delta h_{a}=0.1\left( \max \left( h_{\mathrm{applied}}\right) -\min \left( h_{\mathrm{applied}}\right) \right) \\ \Delta h_{r}=0.1\left( \max \left( h_{\mathrm{reversal}}\right) -\min \left( h_{\mathrm{reversal}}\right) \right) \end{array}\right.$
**nha** ($\mathcal{M}$, integer; accepted: nha > 0)
**ha_min** ($\mathcal{M}$, real; accepted: $\geq \min \left( h_{\mathrm{applied}}\right)$)
**ha_max** ($\mathcal{M}$, real; accepted: $\leq \max \left( h_{\mathrm{applied}}\right)$)
**nhr** ($\mathcal{M}$, integer; accepted: nhr > 0)
**hr_min** ($\mathcal{M}$, real; accepted: $\geq \min \left( h_{\mathrm{reversal}}\right)$)
**hr_max** ($\mathcal{M}$, real; accepted: $\leq \max \left( h_{\mathrm{reversal}}\right)$)

Only for output_points = ha_hr_regular_grid.

In order to diminish the boundary effects (numerical artifacts), the processing is accomplished (if there are input points) on a larger domain:

$\left[ \mathrm{ha\_min}-\Delta h_{a},\,\mathrm{ha\_max}+\Delta h_{a}\right] \times \left[ \mathrm{hr\_min}-\Delta h_{r},\,\mathrm{hr\_max}+\Delta h_{r}\right]$,

where $\left\{ \begin{array}{l} \Delta h_{a}=0.1\left( \mathrm{ha\_max} - \mathrm{ha\_min} \right) \\ \Delta h_{r}=0.1\left( \mathrm{hr\_max} - \mathrm{hr\_min} \right) \end{array}\right.$.
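The ha_hr_regular_grid construction, with the $h_{\mathrm{applied}}^{\mathrm{grid}} < h_{\mathrm{reversal}}^{\mathrm{grid}}$ points ignored, can be sketched as follows (the boundary padding above is omitted, and row-major ordering is an assumption):

```python
def ha_hr_grid(ha_min, ha_max, nha, hr_min, hr_max, nhr):
    """Regular nha x nhr grid of (ha, hr) points; points below the
    ha = hr diagonal are skipped."""
    ha_step = (ha_max - ha_min) / (nha - 1)
    hr_step = (hr_max - hr_min) / (nhr - 1)
    return [(ha_min + i * ha_step, hr_min + j * hr_step)
            for j in range(nhr)
            for i in range(nha)
            if ha_min + i * ha_step >= hr_min + j * hr_step]
```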
**nhc** ($\mathcal{M}$, integer; accepted: nhc > 0)
**hc_min** ($\mathcal{M}$, real; accepted: $\geq \min \left( h_{\mathrm{coercive}}\right)$)
**hc_max** ($\mathcal{M}$, real; accepted: $\leq \max \left( h_{\mathrm{coercive}}\right)$)
**nhu** ($\mathcal{M}$, integer; accepted: nhu > 0)
**hu_min** ($\mathcal{M}$, real; accepted: $\geq \min \left( h_{\mathrm{interaction}}\right)$)
**hu_max** ($\mathcal{M}$, real; accepted: $\leq \max \left( h_{\mathrm{interaction}}\right)$)

Only for output_points = hc_hu_regular_grid.

In order to diminish the boundary effects (numerical artifacts), the processing is accomplished (if there are input points) on a larger domain:

$\left[ \mathrm{hc\_min}-\Delta h_{c},\,\mathrm{hc\_max}+\Delta h_{c}\right] \times \left[ \mathrm{hu\_min}-\Delta h_{u},\,\mathrm{hu\_max}+\Delta h_{u}\right]$,

where $\left\{ \begin{array}{l} \Delta h_{c}=0.1\left( \mathrm{hc\_max} - \mathrm{hc\_min} \right) \\ \Delta h_{u}=0.1\left( \mathrm{hu\_max} - \mathrm{hu\_min} \right) \end{array}\right.$.
**nx** ($\mathcal{M}$, integer; accepted: nx > 0)
**x_min** ($\mathcal{M}$, real; accepted: $\geq \min \left( x\right)$)
**x_max** ($\mathcal{M}$, real; accepted: $\leq \max \left( x\right)$)
**ny** ($\mathcal{M}$, integer; accepted: ny > 0)
**y_min** ($\mathcal{M}$, real; accepted: $\geq \min \left( y\right)$)
**y_max** ($\mathcal{M}$, real; accepted: $\leq \max \left( y\right)$)

Only for output_points = rectangular_grid.

In order to diminish the boundary effects (numerical artifacts), the processing is accomplished (if there are input points) on a larger domain:

$\left[ \mathrm{x\_min}-\Delta x,\,\mathrm{x\_max}+\Delta x\right] \times \left[ \mathrm{y\_min}-\Delta y,\,\mathrm{y\_max}+\Delta y\right]$,

where $\left\{ \begin{array}{l} \Delta x=0.1\left( \mathrm{x\_max} - \mathrm{x\_min} \right) \\ \Delta y=0.1\left( \mathrm{y\_max} - \mathrm{y\_min} \right) \end{array}\right.$.
**nsb** ($\mathcal{O}$, integer; default: 0; accepted: nsb ≥ 0): Number of points to skip at the border $h_{\mathrm{applied}}=h_{\mathrm{reversal}}$ or $h_{\mathrm{coercive}}=0$ in a regular grid output.

These points are omitted only in the output data, not in the calculations.

This option is useful for hiding possible boundary effects (numerical artifacts).
**order_of_derivative** ($\mathcal{O}$, integer; default: 0 6; accepted: 0, 1, 2, 3, 4, 5, 6): Order of the partial derivatives to be numerically computed at the output_points.
• 0: zero derivative, i.e., the smoothed (estimated) value $\hat{f}$
• 1, 2: first-order derivatives $\dfrac{\partial \hat{f}}{\partial x}$, $\dfrac{\partial \hat{f}}{\partial y}$
• 3, 4, 5: second-order derivatives $\dfrac{\partial ^{2}\hat{f}}{\partial x^{2}}$, $\dfrac{\partial ^{2}\hat{f}}{\partial x\partial y}$, $\dfrac{\partial ^{2}\hat{f}}{\partial y^{2}}$
• 6: FORC diagram $=-\dfrac{1}{2}\dfrac{\partial ^{2}\hat{f}}{\partial x\partial y}$
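doFORC obtains these derivatives from the local regression fits; as an independent sanity check, the order-6 quantity can be approximated on a toy analytic surface with a central finite difference (the helper below is an illustration only):

```python
def forc_diagram_point(f, ha, hr, h=1e-4):
    """Approximate -1/2 * d2f / (d ha d hr) with a central finite difference."""
    mixed = (f(ha + h, hr + h) - f(ha + h, hr - h)
             - f(ha - h, hr + h) + f(ha - h, hr - h)) / (4 * h * h)
    return -0.5 * mixed

# Toy moment surface f = ha * hr has mixed derivative 1, so the value is -0.5
# everywhere, independent of the evaluation point.
```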
**format** ($\mathcal{O}$, string; default: g17.5e3; accepted: N/A): Format in which the values in the output files are to be saved.

| Effect (purpose) | Form | Requisite | Description |
| --- | --- | --- | --- |
| Exponential form with 'D' exponents | Dw.d | $\mathrm{w} > \mathrm{d}+7$ | the printed number has a zero digit as the integral part; the middle d positions are for the number in normalized form; the last three positions are for the exponent, including its sign |
| Exponential form with 'E' exponents | Ew.d | $\mathrm{w} > \mathrm{d}+7$ | |
| | Ew.dEe | $\mathrm{w} > \mathrm{d}+\mathrm{e}+5$ | the only difference from the above is that the exponent part has e positions plus one more for its sign |
| Engineering form | ENw.d | $\mathrm{w} > \mathrm{d}+9$ | the printed number has between one and three non-zero digits as the integral part, and the exponent is a multiple of three |
| | ENw.dEe | $\mathrm{w} > \mathrm{d}+\mathrm{e}+7$ | |
| Scientific form | ESw.d | $\mathrm{w} > \mathrm{d}+7$ | the printed number has one non-zero digit as the integral part |
| | ESw.dEe | $\mathrm{w} > \mathrm{d}+\mathrm{e}+5$ | |
| Decimal form (no exponent) | Fw.d | $\mathrm{w} > \mathrm{d}+2$ | if $d=0$ no fractional part is printed, the right-most position being the decimal point |
| Mixture of the F and E formats | Gw.d | see above | if a number can reasonably be printed in F format, that is used; all others (very large and very small) are displayed in E format |
| | Gw.dEe | see above | |

where:

• w = number of positions used to write a number, including its sign, decimal point, decimal places, exponent part, and leading spaces between two consecutive numbers on a line
• d = number of digits to the right of the decimal point
• e = number of digits in the exponent part, without its sign

Warning: errors in FORMAT are not detected until writing time, when the output can be asterisks (*).
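Roughly comparable output can be previewed with Python's format mini-language. This is only an approximation: Python's E notation corresponds to Fortran's ES (scientific) form rather than E, and the exponent width e is not directly controllable.

```python
x = 12345.678
es_like = format(x, "12.5E")   # ~ ES12.5: one nonzero digit before the point
f_like  = format(x, "12.3f")   # ~ F12.3: fixed decimal form
g_like  = format(x, "12.5g")   # ~ G12.5: F-style when reasonable, E-style otherwise
```

For example, the first line yields ' 1.23457E+04': 5 digits after the decimal point, right-aligned in 12 positions.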
#### Regression / Least squares fit
**regression_method** ($\mathcal{O}$, string; default: qshep; accepted: loess, qshep, cshep, tshep)
• loess: LOESS (LOcal regrESSion) method using quadratic polynomials for the local fitting
• qshep: quadratic → modified quadratic polynomial Shepard method
• cshep: cubic → modified cubic polynomial Shepard method
• tshep: trigonometric → modified cosine series Shepard method
**drop_x2**, **drop_y2** ($\mathcal{O}$, logical; default: false; accepted: true, false): Drop square terms; only for regression_method = loess or regression_method = qshep.

Specifies the quadratic monomials to exclude from the local quadratic fits.

For example, 'drop_x2 = false, drop_y2 = true' uses the monomials 1, $x$, $y$, $x^{2}$, and $xy$ in performing the local fitting.
**kernel** ($\mathcal{O}$, integer; default: 6; accepted: 1–15): Kernel (weight) function used in the local fits.

| value | kernel | $K(u)$ |
| --- | --- | --- |
| 1 | uniform (rectangular window) | $1$ |
| 2 | triangular | $1-\left\vert u\right\vert$ |
| 3 | Epanechnikov (quadratic, parabolic) | $1-u^{2}$ |
| 4 | quartic (biweight, bisquare) | $\left( 1-u^{2}\right) ^{2}$ |
| 5 | triweight | $\left( 1-u^{2}\right) ^{3}$ |
| 6 | tricube | $\left( 1-\left\vert u\right\vert ^{3}\right) ^{3}$ |
| 7 | raised cosine (Tukey-Hanning) | $\dfrac{1+\cos \left( \pi u\right) }{2}$ |
| 8 | cosine | $\cos \left( \dfrac{\pi }{2}u\right)$ |
| 9 | Gaussian | $\exp\left( -\dfrac{1}{2}\dfrac{u^{2}}{\sigma ^{2}}\right)$ |
| 10 | exponential | $\exp \left( -\lambda \left\vert u\right\vert \right)$ |
| 11 | inverse distance | $\dfrac{1}{1+\left\vert u\right\vert }$ |
| 12 | Cauchy | $\dfrac{1}{1+u^{2}}$ |
| 13 | Parzen | $1-6u^{2}+6\left\vert u\right\vert ^{3}$ if $0\leq \left\vert u\right\vert <0.5$; $\;2\left( 1-\left\vert u\right\vert \right) ^{3}$ if $0.5\leq \left\vert u\right\vert \leq 1$ |
| 14 | McLain | $\dfrac{1}{\left( \varepsilon +\left\vert u\right\vert \right) ^{2}}$ |
| 15 | Franke-Nielson | $\dfrac{1-\left\vert u\right\vert }{\varepsilon +\left\vert u\right\vert }$ |
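Two of the compactly supported kernels from the table, written out directly (here $u$ is the scaled distance to the neighborhood center, and the weight is taken as zero outside $\left\vert u\right\vert \leq 1$):

```python
def tricube(u):
    """Kernel 6 (the default): (1 - |u|^3)^3 for |u| <= 1, else 0."""
    return (1 - abs(u) ** 3) ** 3 if abs(u) <= 1 else 0.0

def epanechnikov(u):
    """Kernel 3: 1 - u^2 for |u| <= 1, else 0."""
    return 1 - u * u if abs(u) <= 1 else 0.0
```

Both give full weight at the center and vanish at the edge of the neighborhood, which is what makes the local fits vary smoothly from point to point.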
**uif** ($\mathcal{O}$, real; default: 1.0): User interpolation factor = scale factor for the weight associated with a node in the least squares system for the corresponding nodal function.

A large weight can be used to force interpolation.

Default value 1 means "pure" fitting.
**nrr** ($\mathcal{O}$, integer; default: 0; accepted: nrr ≥ 0): Number of robust locally weighted regressions: the initial fit is followed by nrr iteratively reweighted iterations.

Such iterations are appropriate when there are outliers in the data or when the error distribution is a symmetric long-tailed distribution.

If nrr is provided, then:
• nn_list should also be provided
• no CRITERION (or DEFAULT) should be provided
• only (nn, RSS, RSSm) is provided as statistics
Default value 0 means no robust regression.
**nn_list**, **nn_range**

Number of nearest neighbors nn (smoothing parameter): the number of data points used in the least squares fit for the coefficients defining the nodal functions.
• the radius of each neighborhood is chosen so that the neighborhood contains a specified number of data points
• nn in each local neighborhood controls the smoothness of the estimated surface
• minimum value of nn: loess: 7, qshep: 6, cshep: 10, tshep: 10 (each of drop_x2 and drop_y2 decreases by one the minimum value of nn required by the loess and qshep methods, respectively)
• one of the nn_list and nn_range options is required
• only one of the nn_list and nn_range options can be used
**nn_list** ($\mathcal{M}$, integer; accepted: see above): Specifies a list of positive integer nn values; it can be given as:
• a list of values, separated by spaces or by commas
• a loop construct start:end:increment, where increment is optional (the default increment is 1)
• a combination of the above; for example, '10 20:30:5 50' will return the values 10, 20, 25, 30, 50

If no CRITERION is specified, a separate fit is provided for each nn value.

If a CRITERION is specified, all values in nn_list are examined, and the value that minimizes the specified CRITERION is selected.
**nn_range** ($\mathcal{M}$, integer; accepted: see above): Specifies two values: lower, upper.
• the two values must be separated by spaces or by a comma
• only the values lower ≤ nn ≤ upper are examined, and the value that minimizes the specified CRITERION is selected
• the golden section search method is used to find a local minimum of the specified CRITERION in the [lower, upper] range
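The golden section idea over an integer range can be sketched as below. This is a generic illustration for a unimodal criterion, not doFORC's implementation; its rounding and stopping rules are assumptions.

```python
def golden_section_int(crit, lower, upper):
    """Golden-section search for an integer argument that (locally)
    minimizes a unimodal criterion on [lower, upper]."""
    phi = (5 ** 0.5 - 1) / 2                 # inverse golden ratio ~ 0.618
    a, b = lower, upper
    while b - a > 2:
        c = round(b - phi * (b - a))         # interior probe nearer a
        d = round(a + phi * (b - a))         # interior probe nearer b
        if c == d:
            d = c + 1
        if crit(c) < crit(d):
            b = d                            # minimum lies in [a, d]
        else:
            a = c                            # minimum lies in [c, b]
    return min(range(a, b + 1), key=crit)    # final small bracket: check all
```

For a smooth one-minimum criterion this needs far fewer evaluations than scanning every nn in the range, which is the point of using nn_range instead of an exhaustive nn_list.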
**rnnw** ($\mathcal{O}$, real; default: 1.0; accepted: rnnw > 0): Only for the 'shep' methods, and only for output_points ≠ input_points.

Relative number (with regard to nn) of nearest neighbors used to compute the output at points that differ from the input points.
#### Statistics
**ihat** ($\mathcal{O}$, integer; default: 1; accepted: 0, 1, 2): Only for the 'loess' method:
• determines how the statistical quantities are computed
• can be decreased by the 'nrr' option
• can be increased if necessary by the CRITERION parameter

• 0: the hat matrix $L$ is not computed
• 1: only the diagonal of the $L$ matrix is computed → approximate delta
• 2: the full $L$ matrix is computed → exact delta; use only for testing, as it is not meant for routine usage and the computation time can be horrendous
**istat** ($\mathcal{O}$, integer; default: 1; accepted: 0, 1, 2): Only for the 'shep' methods:
• regardless of the 'istat' value, no approximations are used in the statistics computation
• can be decreased by the 'nrr' option
• can be increased if necessary by the CRITERION parameter
**alpha** ($\mathcal{O}$, real; default: 0.05; accepted: 0 < alpha < 1): Significance level for confidence intervals.

Only for ihat = 2 or istat = 2.
**smoothresidual** ($\mathcal{O}$, logical; default: false; accepted: true, false): Add to the smoothed_input file a smoothing fit of the residuals for each smoothing parameter nn.

This fit is computed independently of the fit used to obtain the residuals.
**CRITERION** ($\mathcal{O}$, string; default: DEFAULT): Criterion for automatic smoothing parameter selection.

The DEFAULT value means:
• no automatic selection for nn_list
• AICC for nn_range

• AICC: an approximation of AICC1
• AICC1: corrected/improved version of the Akaike information criterion (AIC)
• GCV: generalized cross validation
• DF1, DF2, DF3: degrees of freedom
**DFtarget** ($\mathcal{M}$, real; default: N/A; accepted: 1 < DFtarget < n): Degrees-of-freedom target; only for the 'DF' criteria.