Questions tagged [validation]
The validation tag has no summary.
65 questions
6 votes
1 answer
292 views
If my binary classifier results in a negative outcome, is it right to try again with another classifier which has the same FPR but higher recall?
I have two strings that represent two institutions. For instance, a1="University of Milan" a2="University Milan" or ...
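A minimal sketch of the kind of fuzzy string comparison asked about here, using the standard library's `difflib` (the helper name `name_similarity` is illustrative, not from the question):

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two institution names,
    case-insensitively. SequenceMatcher.ratio() is 2*M/T, where M is the
    number of matched characters and T the total length of both strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

a1 = "University of Milan"
a2 = "University Milan"
score = name_similarity(a1, a2)  # high: the strings differ only by "of "
```

For production-scale entity matching, token-based measures (e.g. Jaccard over word sets) or dedicated libraries are common alternatives; this sketch only shows the simplest stdlib baseline.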
3 votes
0 answers
48 views
How to properly conduct A/B testing for AI models with a limited dataset (NLP)
Situation: I want to compare the performance of two models on the same task. I have a dataset of around 400 manually curated samples. The task is relatively niche (targeted sentiment analysis on ...
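One common way to compare two models on the same small evaluation set is a paired bootstrap over per-sample correctness. This sketch assumes each model's predictions have been scored as 0/1 correct per sample (the function name `paired_bootstrap` is illustrative, not from the question):

```python
import random

def paired_bootstrap(correct_a, correct_b, n_boot=2000, seed=0):
    """Resample the evaluation set with replacement and return the
    fraction of resamples in which model A's accuracy strictly beats
    model B's. Values near 1.0 (or 0.0) suggest a robust difference."""
    rng = random.Random(seed)
    n = len(correct_a)
    wins = 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # same indices for both models
        acc_a = sum(correct_a[i] for i in idx) / n
        acc_b = sum(correct_b[i] for i in idx) / n
        if acc_a > acc_b:
            wins += 1
    return wins / n_boot
```

Resampling the *same* indices for both models is what makes the test paired, which matters with only ~400 samples.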
1 vote
0 answers
53 views
How should I handle tuning critical postprocessing parameters when publishing a hybrid deep learning + classical CV pipeline?
I'm preparing my first paper in computer vision, which presents a hybrid pipeline combining deep learning with classical techniques. The training process is fully automated across datasets, except for ...
8 votes
2 answers
368 views
What is the conclusion from this Accuracy / Loss plot for Train and Validation?
What is the conclusion from this Accuracy / Loss plot for Train and Validation? It seems that the best results for Validation come after a few (5) epochs. Also I'm not comfortable with how the Loss and ...
0 votes
0 answers
36 views
Validation metrics plateau from the first few epochs at relatively good values and don't improve
I am working on 6D pose tracking, where the goal is to estimate how 3D position and orientation of an object changes from frame t-1 to t. Train/validation datasets are synthetic and come from a single ...
4 votes
2 answers
83 views
Validation loss zigzagging
I'm training a speech recognition model using the Nvidia NeMo framework. Even early results with the small fastconformer model and two dozen iterations are pretty good; for my data I would say they are ...
1 vote
0 answers
26 views
My validation set gives worse results, even when I use the train set as validation. What may cause that?
I have a question about how Keras handles validation. I have a pre-trained model (ResNet34 with a U-NET architecture) and want to train on a custom dataset for binary segmentation. I created a ...
1 vote
0 answers
29 views
How to improve precision/recall on a multiclass classifier
I am working on an image recognition algorithm to classify images of starch granules to their source plant species. My model right now has 10 classes (plant species). Each class is trained with 600 ...
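Diagnosing a multiclass model usually starts with per-class precision and recall rather than overall accuracy. A minimal stdlib sketch of that computation (the helper name `per_class_metrics` is illustrative; with scikit-learn available, `classification_report` does the same job):

```python
from collections import Counter

def per_class_metrics(y_true, y_pred):
    """Per-class precision and recall from paired label lists."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1   # predicted p, but it was wrong
            fn[t] += 1   # true class t was missed
    classes = set(y_true) | set(y_pred)
    return {
        c: {
            "precision": tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0,
            "recall":    tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0,
        }
        for c in classes
    }
```

Classes with high precision but low recall (or vice versa) point to specific confusions worth inspecting in a full confusion matrix.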
1 vote
1 answer
47 views
Hyperparameter tuning
Jane trains three different classifiers: Logistic Regression, Decision Tree, and Support Vector Machines on the training set. Each classifier has one hyper-parameter (regularisation parameter, depth-...
2 votes
1 answer
575 views
Optimal Number of Epochs for Training Transformer Network on Time series data? Early Stopping and Model Selection Strategies
I have a transformer network that is trained on time series data. The task is to predict if a variable will increase a certain percentage in the next 7 days. The input is data from the 90 previous ...
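The patience-based early stopping this question asks about can be sketched framework-agnostically: track the best validation loss and stop once it hasn't improved for `patience` epochs, keeping the best checkpoint. (In Keras this corresponds to the `EarlyStopping` callback with `restore_best_weights=True`; the function name `best_epoch` below is illustrative.)

```python
def best_epoch(val_losses, patience=3):
    """Return the index of the epoch whose weights early stopping would
    keep: training stops after `patience` epochs without a new best
    validation loss, and the best epoch so far is restored."""
    best_i, best = 0, float("inf")
    for i, loss in enumerate(val_losses):
        if loss < best:
            best_i, best = i, loss
        elif i - best_i >= patience:
            break  # no improvement for `patience` epochs: stop
    return best_i
```

The "optimal number of epochs" is thus not fixed in advance; it falls out of the validation curve for each run.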
0 votes
2 answers
66 views
How to fix increasing validation loss with decreasing training loss?
Here is the code that produced this; please advise me on what to do to correct it. ...
0 votes
0 answers
36 views
Why isn't the validation data loss close to the test data loss?
First I set aside about 15% of my data as test data. Then, I used tensorflow.keras to create a relatively simple neural net model. Then I set the model.fit() parameter validation_split=0.2, so 20% of ...
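One detail worth knowing about the setup described above: Keras's `validation_split` takes the *last* fraction of the training arrays, without shuffling them first, so the validation slice may not be representative unless the data is shuffled beforehand. A stdlib sketch of the resulting three-way split (the helper name `split_data` is illustrative, not from the question):

```python
import random

def split_data(samples, test_frac=0.15, val_frac=0.2, seed=0):
    """Hold out `test_frac` for final testing, then carve `val_frac` of
    the remainder for validation, taking the LAST slice of the training
    data as Keras's validation_split does."""
    samples = samples[:]
    random.Random(seed).shuffle(samples)  # shuffle BEFORE splitting
    n_test = int(len(samples) * test_frac)
    test, rest = samples[:n_test], samples[n_test:]
    n_val = int(len(rest) * val_frac)
    val, train = rest[len(rest) - n_val:], rest[:len(rest) - n_val]
    return train, val, test
```

If the validation and test slices come from differently distributed parts of the data, a gap between validation loss and test loss is expected rather than surprising.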
0 votes
0 answers
26 views
Training with few samples, dropping training loss but constant validation loss
I am training a resnet50-based model using transfer learning. My dataset has 10 classes and about 10 occurrences per class, so it is very small. The training loss is decreasing steadily to 0.07 for ...
0 votes
1 answer
96 views
Is it a problem to use the test dataset for hyperparameter tuning when I want to compare 2 classification algorithms on 10 different datasets?
I know that we should use the validation set to perform hyperparameter tuning, and that the test dataset is no longer really a test set once it has been used for hyperparameter tuning. But is this a problem if I ...
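The protocol the question alludes to can be made explicit in a few lines: choose the hyperparameter on validation scores only, then touch the test set exactly once for the final number. A minimal sketch under assumed toy scoring functions (all names here are illustrative):

```python
def tune_and_evaluate(candidates, val_score, test_score):
    """Pick the hyperparameter value with the best validation score,
    then report its test score once. The test set never influences
    which candidate is chosen, so the test estimate stays unbiased."""
    best = max(candidates, key=val_score)
    return best, test_score(best)

# toy example: validation prefers c=3, test is only consulted afterwards
best_c, final = tune_and_evaluate(
    candidates=[1, 2, 3, 4],
    val_score=lambda c: -(c - 3) ** 2,
    test_score=lambda c: c * 10,
)
```

When comparing two algorithms across 10 datasets, repeating this per dataset keeps each test score honest; selecting on test scores instead would bias the comparison.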