What are cross-validation and bootstrapping?
Cross-validation and bootstrapping are both methods for estimating generalization error based on “resampling”.
In k-fold cross-validation, you divide the data into k subsets of (approximately) equal size. You train the net k times, each time leaving out a different subset from training and using only that omitted subset to compute whatever error criterion interests you. The average of these k error estimates is the cross-validation estimate of generalization error. If k equals the sample size, this is called “leave-one-out” cross-validation.
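
For concreteness, here is a minimal Python sketch of k-fold cross-validation. The fit/predict pair is supplied by the caller; the least-squares linear model in the example is only a stand-in for whatever net you actually train, and mean squared error stands in for whatever error criterion interests you:

    import numpy as np

    def k_fold_cv(X, y, k, fit, predict):
        """Estimate generalization error by k-fold cross-validation.

        fit(X, y) returns a model; predict(model, X) returns predictions.
        Returns the average of the k held-out mean-squared errors.
        """
        n = len(y)
        indices = np.random.permutation(n)   # shuffle before splitting
        folds = np.array_split(indices, k)   # k subsets of ~equal size
        errors = []
        for i in range(k):
            test_idx = folds[i]              # the omitted subset
            train_idx = np.concatenate(
                [folds[j] for j in range(k) if j != i])
            model = fit(X[train_idx], y[train_idx])
            pred = predict(model, X[test_idx])
            errors.append(np.mean((y[test_idx] - pred) ** 2))
        return np.mean(errors)

    # Example: a least-squares linear model standing in for "the net"
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
    fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
    predict = lambda w, X: X @ w
    print(k_fold_cv(X, y, k=10, fit=fit, predict=predict))

Setting k = len(y) in this sketch gives leave-one-out cross-validation: each "fold" is a single case, so the net is trained n times, each time tested on the one omitted case.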