Assume I have a business whose annual cash flows operate in a 12-month cycle, where each month's credits equal Cn/12 and each month's debits equal Dn/12, with n being the month as a number (i.e. 1 = January). I cannot just use a single distribution of credits and debits, because the inflows and outflows have to follow the cycle. For example, if a business makes most of its inventory in the 1st and 2nd quarters but sells that inventory in the 3rd and 4th quarters, a standard volatility will reflect only the overall distribution, not the annual cycle. So what I am trying to do is (i) set the mean, min, max, and standard deviation of credits and debits respectively for the year, (ii) set the monthly weighting of the cycle (e.g. if November has twice the weighting of January, then November should have twice the expected credits), and (iii) apply the volatility scaled by the monthly weighting. I could set the mean and standard deviation for each month individually and apply an individual volatility to each, but this seems inefficient; I would rather set the overall mean, standard deviation, etc. and multiply by a lookup table of weights. I don't know how to do this in GoldSim, or whether this is even the right approach for what I am trying to accomplish. Any help is appreciated.
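To make the arithmetic concrete, here is a rough Python sketch of what I have in mind (the numbers and the weighting table are made up, and this is just the calculation; in GoldSim I assume it would be built from Stochastic and Lookup Table elements rather than code):

```python
# Illustrative sketch of the weighting idea: sample an annual total, spread it
# over months with a weighting table, and scale month-level volatility by the
# same weights. All numbers below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(42)

annual_credit_mean = 1_200_000.0                 # assumed annual mean of credits
annual_credit_sd = 120_000.0                     # assumed annual standard deviation
credit_min, credit_max = 900_000.0, 1_500_000.0  # assumed bounds

# Monthly weights (the "lookup table"); normalized so they sum to 12,
# so 1.0 is an average month and 2.0 is twice the average.
raw_weights = np.array([1.0, 1.0, 0.8, 0.8, 0.8, 0.8, 1.0, 1.0, 1.2, 1.4, 2.0, 1.2])
weights = raw_weights * 12.0 / raw_weights.sum()

# One realization of the annual total, truncated to the min/max.
annual_credit = np.clip(rng.normal(annual_credit_mean, annual_credit_sd),
                        credit_min, credit_max)

# Expected monthly credits follow the cycle: weight_n * (annual / 12).
expected_monthly = weights * annual_credit / 12.0

# Volatility scaled by the same weights, so November's noise is
# proportionally larger than January's (this scaling is my assumption).
monthly_sd = weights * (annual_credit_sd / 12.0)
simulated_monthly = rng.normal(expected_monthly, monthly_sd)

print(np.round(simulated_monthly, 0))
```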
-
I believe you should first attempt to convert the time series into a stationary series. Model the stationary series, then convert back into seasonally-affected series.
This could be done with seasonal indices, followed by first differences on the adjusted data (or other transformations of your series). The Dickey-Fuller test can then be used to check for stationarity, i.e. to see whether your transforms worked. What you are describing, the monthly weighting factors, is essentially a set of multiplicative seasonal indices, so you are on the right track. One caution: unless you have a long time series with which to estimate the standard deviation/variance/volatility of, say, November credits, the uncertainty in those estimates is likely to be large because of the small sample sizes. You could perhaps aggregate data from similar businesses, though that comes with its own problems.
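If it helps to see the mechanics outside of GoldSim, here is a rough Python sketch of the seasonal-index / differencing / Dickey-Fuller workflow. The series `cash` and its numbers are placeholders, and the ADF test comes from statsmodels:

```python
# Sketch of the stationarity workflow: compute multiplicative seasonal
# indices, deseasonalize, take first differences, then test with ADF.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Placeholder data: five years of monthly credits.
idx = pd.date_range("2015-01-31", periods=60, freq="M")
cash = pd.Series(np.random.default_rng(0).lognormal(11, 0.2, 60), index=idx)

# Multiplicative seasonal indices: mean of each calendar month divided by
# the overall mean (these are the "weighting factors").
seasonal_index = cash.groupby(cash.index.month).mean() / cash.mean()

# Deseasonalize by dividing each observation by its month's index.
deseasonalized = cash / seasonal_index.reindex(cash.index.month).to_numpy()

# First differences of the adjusted series.
differenced = deseasonalized.diff().dropna()

# Augmented Dickey-Fuller test: a small p-value suggests the transformed
# series is stationary, i.e. the transforms did their job.
adf_stat, p_value, *_ = adfuller(differenced)
print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.4f}")
```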
Whatever approach you use, you can test it by running the results into a submodel, calculating the properties of your simulated time series, and comparing them to the properties of the actual time series.
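As a minimal illustration of that comparison step (outside GoldSim, assuming the simulated and historical values are available as pandas Series of monthly cash flows):

```python
# Compare monthly means and standard deviations of a simulated series
# against a historical series; `simulated` and `actual` are assumed to be
# pandas Series with a DatetimeIndex.
import pandas as pd

def monthly_properties(series: pd.Series) -> pd.DataFrame:
    """Mean and standard deviation for each calendar month."""
    grouped = series.groupby(series.index.month)
    return pd.DataFrame({"mean": grouped.mean(), "std": grouped.std()})

def compare(simulated: pd.Series, actual: pd.Series) -> pd.DataFrame:
    """Side-by-side table of simulated vs. actual monthly properties."""
    sim = monthly_properties(simulated).add_prefix("sim_")
    act = monthly_properties(actual).add_prefix("act_")
    return sim.join(act)
```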
Hope that helps.