Resize to a large size first so that there is margin to perform the augmentations without creating empty zones, then resize down to the smaller final size. All of the data augmentations that run after the item_tfms resize are performed together on the GPU, with only one interpolation step.
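A rough sketch of what this presizing looks like with the DataBlock API (the dataset path, the parent_label labelling, and the 460/224 sizes are illustrative assumptions, not requirements):

```python
from fastai.vision.all import *

# item_tfms resizes every image to a large 460px square on the CPU,
# then batch_tfms runs all augmentations plus the final 224px resize
# together on the GPU, with a single interpolation step.
dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(seed=42),
    get_y=parent_label,
    item_tfms=Resize(460),
    batch_tfms=aug_transforms(size=224, min_scale=0.75))
dls = dblock.dataloaders(path)  # `path`: a folder of images labelled by parent directory (assumed)
```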
** Not done
Either the label is part of each file's name, or the labels are given by the folder structure: a split into train and valid folders, with one folder per category inside each.
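A rough sketch of the two layouts using fastai's DataBlock pieces (the regex pattern and folder names are just examples):

```python
from fastai.vision.all import *

# 1) Label encoded in the filename, e.g. "great_pyrenees_173.jpg":
labeller = using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name')

# 2) Labels given by folders, e.g. train/beagle/..., valid/beagle/...:
dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=GrandparentSplitter(train_name='train', valid_name='valid'),
    get_y=parent_label)
```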
Look up the documentation for L and try using a few of the new methods that it adds.
** Not done
Done
1) Reflection padding artifacts; 2) parts of the image being interpolated incorrectly.
.show_batch()
.summary() -> e.g. datablock.summary(filePath_with_images)
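A quick usage sketch, assuming the `dls` and `dblock` from the presizing example above and an image folder at `path`:

```python
# Eyeball a few labelled samples from the DataLoaders:
dls.show_batch(nrows=2, ncols=4)

# Step-by-step trace of how one batch is built; very useful when the
# DataBlock fails to collate items (e.g. missing resize):
dblock.summary(path)
```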
No, you can use your trained model to help you clean your data
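A sketch of model-assisted cleaning with fastai's ImageClassifierCleaner widget (assumes a trained `learn` and a `path` of images; the relabel/delete lines are the usual follow-up, shown here as comments since the widget is interactive):

```python
import shutil
from fastai.vision.widgets import ImageClassifierCleaner

# Shows the highest-loss images so you can relabel or delete them:
cleaner = ImageClassifierCleaner(learn)
cleaner
# After reviewing in the notebook:
# for idx in cleaner.delete(): cleaner.fns[idx].unlink()
# for idx, cat in cleaner.change(): shutil.move(str(cleaner.fns[idx]), path/cat)
```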
Softmax and negative log likelihood: in PyTorch, cross-entropy loss is log_softmax followed by nll_loss.
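A small sketch verifying the decomposition on random activations (shapes are illustrative):

```python
import torch
import torch.nn.functional as F

acts  = torch.randn(4, 5)               # batch of 4 items, 5 classes
targs = torch.tensor([0, 2, 4, 1])

ce  = F.cross_entropy(acts, targs)
nll = F.nll_loss(F.log_softmax(acts, dim=1), targs)
print(torch.allclose(ce, nll))           # True: cross-entropy = log_softmax + nll_loss

# Softmax outputs lie in [0, 1] and sum to 1 across the classes:
print(F.softmax(acts, dim=1).sum(dim=1))  # tensor of ones
```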
Softmax ensures that all predictions lie between 0 and 1 and that they sum to 1, so they can be interpreted as probabilities over the categories.
In your loss function, because a probability of 0.99 and one of 0.999 barely differ in their effect on the gradients, even though 0.999 is 10x more confident than 0.99.
Done
torch.where selects between two values based on a true/false condition, so it only handles the binary case directly. You could nest many torch.where statements for more categories, but that is clumsy and not as efficient as using softmax with log likelihood.
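A sketch of the binary-only torch.where loss (in the style of the course's mnist_loss); it works when targets are 0 or 1 but has no natural extension to many categories:

```python
import torch

def binary_where_loss(predictions, targets):
    # Squash raw activations into [0, 1], then measure distance from the target.
    predictions = predictions.sigmoid()
    return torch.where(targets == 1, 1 - predictions, predictions).mean()

preds   = torch.tensor([0.2, 0.9, -1.5])   # raw activations (illustrative)
targets = torch.tensor([1., 1., 0.])
print(binary_where_loss(preds, targets))
```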
nan (undefined), because the logarithm is not defined for negative numbers (and tends to -inf as its input approaches 0).
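A tiny check of this behaviour in PyTorch:

```python
import torch

# log of a negative number is nan; log of 0 is -inf; log of 1 is 0:
print(torch.log(torch.tensor([-2.0, 0.0, 1.0])))  # tensor([nan, -inf, 0.])
```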
Use either a learning rate about 10x less than the one where the loss reaches its minimum, or the point where the downward slope of the loss is steepest.
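Hypothetical lr_find usage, assuming the `dls` from above (the exact return value and suggested points vary by fastai version; older versions use cnn_learner instead of vision_learner):

```python
from fastai.vision.all import *

learn = vision_learner(dls, resnet34, metrics=error_rate)
lrs = learn.lr_find()   # plots loss vs. learning rate and returns suggestion(s)
print(lrs)              # e.g. SuggestedLRs(valley=...)
```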
fit_one_cycle on just the head (with the pretrained body frozen), then unfreeze all the layers and fit_one_cycle again with discriminative learning rates.
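A rough manual equivalent of fine_tune, assuming the `dls` from above (epoch counts and learning rates are illustrative; the slice is the discriminative-learning-rate form described below):

```python
from fastai.vision.all import *

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fit_one_cycle(3, 3e-3)                      # train only the new head
learn.unfreeze()                                   # make the whole network trainable
learn.fit_one_cycle(6, lr_max=slice(1e-6, 1e-4))   # discriminative learning rates
```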
?? before or after a function, e.g. ??learn.fine_tune or learn.fine_tune??
Each layer (or layer group) gets its own learning rate, typically smaller for the early pretrained layers and larger for the later layers; you are discriminating between layers by learning rate.
e.g. slice(1e-6, 1e-4). The first argument goes to the shallowest (earliest) layers and the last argument to the deepest (final) layers; the layer groups in between get multiplicatively equidistant values (evenly spaced on a log scale).
With 1cycle training, the learning rate ramps up and then anneals back down to small values, so early stopping may pick a model from before the learning rate has reached those small values, when the model could still improve. The validation results may get worse, recover a little, and only reach their best towards the end of training.
resnet101 is a deeper version of the same architecture: 101 layers instead of 50, i.e. 51 more layers than resnet50.
Enables mixed-precision training: calculations use less precise (half-precision) numbers where possible, which speeds up training and reduces memory load.
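A sketch of mixed-precision training, assuming the `dls` from above (architecture and epoch counts are illustrative); to_fp16() is simply chained onto the Learner:

```python
from fastai.vision.all import *

learn = vision_learner(dls, resnet50, metrics=error_rate).to_fp16()
learn.fine_tune(6, freeze_epochs=3)
```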