šŸ—‚ļøModel management

Through model management you can train and deploy your models.

Go to Menu > Training > Model management.

The model management module allows you to train models, deploy them, and analyse their accuracy metrics. The following metrics are reported for all types of models:

Precision

Precision summarises how many of the predictions for a certain class actually are of that class. It is the fraction of the retrieved instances of a certain class that actually belong to that class:

P = True Positives / (True Positives + False Positives)

If you have a document classification model that predicts whether a document is an invoice or a purchase order, the precision for the invoice document type is calculated by dividing the number of documents that were predicted to be invoices and really are invoices by the number of documents that were (correctly or wrongly) predicted to be invoices. In other words, it summarises how many of the documents predicted to be invoices really are invoices.

The precision for an invoice date entity is the number of correctly predicted invoice date entities divided by the total number of predicted invoice date entities.

If you want to avoid false positives in the predictions, you should optimise for precision, i.e. set the prediction threshold to a value that yields a high precision. You can analyse the relation between precision and the prediction threshold in the model analytics, described in the next pages of this section.
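To make the calculation concrete, here is a minimal Python sketch that computes precision for the invoice class. The label lists are hypothetical and purely illustrative; they are not tied to the platform's API:

```python
# Hypothetical ground-truth and predicted document types
y_true = ["invoice", "invoice", "purchase_order", "invoice", "purchase_order"]
y_pred = ["invoice", "invoice", "invoice", "purchase_order", "purchase_order"]

# True positives: predicted invoice and really an invoice
true_positives = sum(1 for t, p in zip(y_true, y_pred) if p == "invoice" and t == "invoice")
# False positives: predicted invoice but really something else
false_positives = sum(1 for t, p in zip(y_true, y_pred) if p == "invoice" and t != "invoice")

precision = true_positives / (true_positives + false_positives)
print(f"Precision (invoice): {precision:.2f}")  # 2 / (2 + 1) = 0.67
```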

Recall

Recall is the percentage of examples of a certain class that were found by the model. It is the fraction of relevant instances that were retrieved:

R = True Positives / (True Positives + False Negatives)

If you have a document classification model that predicts whether a document is an invoice or a purchase order, the recall for the invoice document type is calculated by dividing the number of documents that were predicted to be invoices and really are invoices by the number of documents that are invoices. In other words, it summarises how many invoices the model found out of the total number of invoices.

The recall for an invoice date entity is the number of correctly predicted invoice date entities divided by the total number of invoice date entities.

If you want to make sure that all occurrences of a document type or an entity are recognised, even if some of them might be false positives, you should optimise for recall, i.e. set the prediction threshold to a value that yields a high recall. You can analyse the relation between recall and the prediction threshold in the model analytics, described in the next pages of this section.
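The same hypothetical labels from the precision sketch can illustrate recall; only the denominator changes, counting false negatives instead of false positives:

```python
# Hypothetical ground-truth and predicted document types (same as above)
y_true = ["invoice", "invoice", "purchase_order", "invoice", "purchase_order"]
y_pred = ["invoice", "invoice", "invoice", "purchase_order", "purchase_order"]

# True positives: really an invoice and predicted as one
true_positives = sum(1 for t, p in zip(y_true, y_pred) if t == "invoice" and p == "invoice")
# False negatives: really an invoice but predicted as something else
false_negatives = sum(1 for t, p in zip(y_true, y_pred) if t == "invoice" and p != "invoice")

recall = true_positives / (true_positives + false_negatives)
print(f"Recall (invoice): {recall:.2f}")  # 2 / (2 + 1) = 0.67
```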

F1-score

The F1-score is the harmonic mean of precision and recall; a high F1-score is an indication of an accurate model:

F1 = 2 * (P * R) / (P + R)
F1 = (2 * True Positives) / ((2 * True Positives) + False Positives + False Negatives)
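The two formulas are algebraically equivalent, which a short sketch can verify with the counts from the hypothetical examples above:

```python
# Counts taken from the illustrative examples above
true_positives, false_positives, false_negatives = 2, 1, 1

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

# F1 computed from precision and recall
f1_from_pr = 2 * (precision * recall) / (precision + recall)
# F1 computed directly from the raw counts
f1_from_counts = (2 * true_positives) / (2 * true_positives + false_positives + false_negatives)

print(f"F1 from P and R: {f1_from_pr:.2f}")     # 0.67
print(f"F1 from counts:  {f1_from_counts:.2f}")  # 0.67
```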
