Validation Metrics
Validation metrics used in cross validation of CausalELM estimators
CausalELM.Metrics — Module

Metrics to evaluate the performance of an extreme learning machine for regression and classification tasks.
CausalELM.Metrics.mse — Function

mse(y, ŷ)

Calculate the mean squared error.
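The computation is the usual mean of squared residuals. For checking the examples below outside Julia, here is a minimal sketch in Python; the function name and plain-list interface are illustrative, not CausalELM's API:

```python
# Mean squared error: average of squared residuals between the
# observations y and the predictions y_hat. Illustrative sketch only;
# CausalELM's mse operates on Julia vectors.
def mse(y, y_hat):
    return sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y)

print(mse([-1.0, -1.0, -1.0], [1.0, 1.0, 1.0]))  # 4.0
```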
See also mae.
Examples
julia> mse([0.0, 0.0, 0.0], [0.0, 0.0, 0.0])
0.0
julia> mse([-1.0, -1.0, -1.0], [1.0, 1.0, 1.0])
4.0

CausalELM.Metrics.mae — Function

mae(y, ŷ)

Calculate the mean absolute error.
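Analogously to mse, this is the mean of absolute residuals. A Python sketch for reference (names and interface are illustrative, not CausalELM's API):

```python
# Mean absolute error: average of |y - y_hat| over all observations.
# Illustrative sketch only; CausalELM's mae operates on Julia vectors.
def mae(y, y_hat):
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

print(mae([-1.0, -1.0, -1.0], [1.0, 1.0, 1.0]))  # 2.0
```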
See also mse.
Examples
julia> mae([-1.0, -1.0, -1.0], [1.0, 1.0, 1.0])
2.0
julia> mae([1.0, 1.0, 1.0], [2.0, 2.0, 2.0])
1.0

CausalELM.Metrics.confusionmatrix — Function

confusionmatrix(y, ŷ)

Generate a confusion matrix.
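In the examples below the matrix is square in the number of distinct labels and the diagonal holds the per-class counts of correct predictions. A Python sketch under the assumption that labels index the matrix in sorted order, with rows as actual classes and columns as predicted classes (the examples use identical y and ŷ, so CausalELM's exact orientation cannot be read off them and may differ):

```python
# Build a k x k confusion matrix, where k is the number of distinct
# labels appearing in either vector. Assumed convention (illustrative):
# rows = actual class, columns = predicted class, labels in sorted order.
def confusion_matrix(y, y_hat):
    labels = sorted(set(y) | set(y_hat))
    index = {label: i for i, label in enumerate(labels)}
    k = len(labels)
    matrix = [[0] * k for _ in range(k)]
    for actual, predicted in zip(y, y_hat):
        matrix[index[actual]][index[predicted]] += 1
    return matrix

print(confusion_matrix([1, 1, 1, 1, 0], [1, 1, 1, 1, 0]))  # [[1, 0], [0, 4]]
```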
Examples
julia> confusionmatrix([1, 1, 1, 1, 0], [1, 1, 1, 1, 0])
2×2 Matrix{Int64}:
1 0
0 4
julia> confusionmatrix([1, 1, 1, 1, 0, 2], [1, 1, 1, 1, 0, 2])
3×3 Matrix{Int64}:
1 0 0
0 4 0
0 0 1

CausalELM.Metrics.accuracy — Function

accuracy(y, ŷ)

Calculate the accuracy for a classification task.
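Accuracy is simply the fraction of predictions that match the observed labels, as the examples below show. A Python sketch (names and interface are illustrative):

```python
# Accuracy: fraction of positions where the prediction equals the label.
def accuracy(y, y_hat):
    return sum(a == p for a, p in zip(y, y_hat)) / len(y)

print(accuracy([1, 1, 1, 1], [0, 1, 1, 0]))  # 0.5
```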
Examples
julia> accuracy([1, 1, 1, 1], [0, 1, 1, 0])
0.5
julia> accuracy([1, 2, 3, 4], [1, 1, 1, 1])
0.25

CausalELM.Metrics.precision — Function

precision(y, ŷ)

Calculate the precision for a classification task.
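Precision is the fraction of predicted positives that are actually positive. The examples below are binary; a Python sketch that assumes class 1 is the positive class (an assumption for illustration — CausalELM may handle multiclass inputs differently):

```python
# Binary precision: of the observations predicted positive, the fraction
# whose true label is positive. The positive=1 default is an assumption.
def precision(y, y_hat, positive=1):
    predicted = [a for a, p in zip(y, y_hat) if p == positive]
    return sum(a == positive for a in predicted) / len(predicted)

print(precision([0, 1, 0, 0], [0, 1, 1, 0]))  # 0.5
```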
See also recall.
Examples
julia> precision([0, 1, 0, 0], [0, 1, 1, 0])
0.5
julia> precision([0, 1, 0, 0], [0, 1, 0, 0])
1.0

CausalELM.Metrics.recall — Function

recall(y, ŷ)

Calculate the recall for a classification task.
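Recall is the fraction of each class's true instances that the model recovers. The multiclass examples below are consistent with a macro average over the classes present in y; a Python sketch under that assumption (CausalELM's exact multiclass convention may differ):

```python
# Macro-averaged recall: per-class recall (correct predictions among the
# observations whose true label is that class), averaged over the classes
# observed in y. Convention assumed for illustration.
def recall(y, y_hat):
    labels = sorted(set(y))
    per_class = []
    for c in labels:
        preds = [p for a, p in zip(y, y_hat) if a == c]
        per_class.append(sum(p == c for p in preds) / len(preds))
    return sum(per_class) / len(per_class)

print(recall([1, 2, 1, 3, 0], [2, 2, 2, 3, 1]))  # 0.5
```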
See also precision.
Examples
julia> recall([1, 2, 1, 3, 0], [2, 2, 2, 3, 1])
0.5
julia> recall([1, 2, 1, 3, 2], [2, 2, 2, 3, 1])
0.5

CausalELM.Metrics.F1 — Function

F1(y, ŷ)

Calculate the F1 score for a classification task.
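F1 is the harmonic mean of precision and recall. The example outputs below are consistent with macro-averaged precision and recall taken over the classes present in y, with a class that is never predicted contributing zero precision; a Python sketch under those assumptions (not necessarily CausalELM's exact implementation):

```python
# F1 as the harmonic mean of macro-averaged precision and recall over
# the classes observed in y. Averaging conventions are assumptions made
# for illustration; CausalELM's implementation may differ in detail.
def f1(y, y_hat):
    labels = sorted(set(y))

    def class_recall(c):
        preds = [p for a, p in zip(y, y_hat) if a == c]
        return sum(p == c for p in preds) / len(preds)

    def class_precision(c):
        actuals = [a for a, p in zip(y, y_hat) if p == c]
        # A class that is never predicted contributes zero precision.
        return sum(a == c for a in actuals) / len(actuals) if actuals else 0.0

    p = sum(class_precision(c) for c in labels) / len(labels)
    r = sum(class_recall(c) for c in labels) / len(labels)
    return 2 * p * r / (p + r)
```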
Examples
julia> F1([1, 2, 1, 3, 0], [2, 2, 2, 3, 1])
0.4
julia> F1([1, 2, 1, 3, 2], [2, 2, 2, 3, 1])
0.47058823529411764