Sure, it’s forecasting, but using the test set would defeat the purpose of validation: no matter what your evaluation metric comes out to on the training set, you’ll just retrain on the entire data set, the weights change, and after that there’s no data left to validate the new model, right?

I understand how ARIMA works: you’re estimating the regression coefficients of the AR and MA terms that fit the data, and more recent time steps might be given higher weight. But normally with prediction algorithms, once the model is trained the weights shouldn’t change; what changes is the new observation data fed into the trained model to make predictions.

I guess my question is: if you’re making/training a new model at each step, why validate only the first one?
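For concreteness, here is a minimal sketch of the retrain-at-each-step scheme I’m describing, i.e. walk-forward (rolling-origin) evaluation. It uses a hand-rolled AR(1) least-squares fit as a stand-in for a full ARIMA, and the series and function names are made up for illustration. Note that every refit model gets scored on exactly one out-of-sample point, so the evaluation covers the whole chain of models, not just the first one:

```python
import numpy as np

def fit_ar1(x):
    # Estimate y_t = c + phi * y_{t-1} by ordinary least squares.
    # (Stand-in for a real ARIMA fit; illustration only.)
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return coef  # [c, phi]

rng = np.random.default_rng(0)
series = 0.8 * np.arange(100) + rng.normal(size=100)  # hypothetical data

train_size = 80
preds = []
for t in range(train_size, len(series)):
    # Refit on everything observed so far -- a "new model" each step...
    c, phi = fit_ar1(series[:t])
    # ...but each model is judged on its one-step-ahead forecast
    # before the true value at time t is folded into the history.
    preds.append(c + phi * series[t - 1])

mae = np.mean(np.abs(np.array(preds) - series[train_size:]))
```

The aggregate error (`mae` here) is computed across all of the refit models, which is what makes this a validation of the retraining procedure rather than of any single set of weights.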

Researcher | Investor | Data Scientist | Curious Observer. Thoughts and insights from the confluence of investing and machine learning.