Monte Carlo simulation with optimized realizations

  • Rick Kossik (Official comment)

    Tim,

    Very tricky stuff you are trying!

    You can export Time Histories from a SubModel. I think this structure (https://goldsim.sharefile.com/d-s74435a4daa04a869) would accomplish what you want. Unfortunately, this only works for scalars. How big are your arrays? If they are not large, you could pull the items out as scalars for the export, and then put them back into vector form outside the SubModel (see the sketch below). A little ugly, but doable (unless your arrays are very large).
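
    If you end up reassembling the exported histories outside GoldSim instead (e.g., in a script), it is straightforward. Here is a minimal Python sketch, assuming a hypothetical export file "histories.csv" with a Time column followed by one column per vector item (the file and column names are placeholders, not anything GoldSim produces by default):

        import csv

        # Rebuild per-timestep vectors from scalar time-history columns.
        # Assumes a hypothetical export "histories.csv": a Time column
        # followed by one column per vector item (Item1, Item2, ...).
        with open("histories.csv", newline="") as f:
            reader = csv.DictReader(f)
            item_cols = [c for c in reader.fieldnames if c != "Time"]
            times, vectors = [], []
            for row in reader:
                times.append(float(row["Time"]))
                vectors.append([float(row[c]) for c in item_cols])

        # vectors[i] is the full array at times[i].
        print(len(times), "timesteps,", len(item_cols), "items per vector")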

    Regarding your second question, that is a bit tricky. In order for that to work, you would need to stage the SubModels so that the outputs from a previous stage serve as the initial conditions for the next stage. Depending on the structure of your model, you might be able to pull this off. But then collecting all the stages together as if they were a single history could be tricky. I would need to think about how that might work.

    But is this really necessary? Why do you feel you need a large number of realizations? Do you have long tails (that result in high-consequence outcomes)? Can you take advantage of Importance Sampling to reduce the number of realizations (see the sketch below)? Alternatively, simply use Distributed Processing on the model. You can use multiple processors on your computer, and even access other computers to do the crunching.
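
    To illustrate why Importance Sampling helps with tails (a generic sketch, not GoldSim-specific), here is a minimal Python example that estimates a small tail probability by sampling from a proposal distribution shifted into the tail and re-weighting each sample by the density ratio; the threshold and shift values are arbitrary:

        import math
        import random

        def norm_pdf(x, mu=0.0, sigma=1.0):
            return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

        # Estimate P(X > 4) for a standard normal (true value ~3.17e-5).
        # Crude Monte Carlo with 10,000 realizations would rarely see even
        # one exceedance; the shifted proposal sees them constantly, and
        # the weights correct for the biased sampling.
        random.seed(1)
        threshold, shift, n = 4.0, 4.0, 10_000  # illustrative values
        total = 0.0
        for _ in range(n):
            x = random.gauss(shift, 1.0)  # sample from the shifted proposal
            if x > threshold:
                total += norm_pdf(x) / norm_pdf(x, shift)  # importance weight
        print("Importance-sampling estimate:", total / n)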

  • Tim Crownshaw

    I forgot to mention that the SubModel outputs I need to view are all arrays, not scalars.

  • Tim Crownshaw

    The arrays are large and, unfortunately, there are too many of them to reassemble from scalar SubModel outputs. However, I can instead run a parallel model at the parent level, using optimized values from the SubModel, in order to access the full time histories of the array elements.

    The reason so many realizations are needed is the large number of stochastic element inputs and the highly non-linear behavior of the model over time. I need to sample these inputs adequately in order to fully explore the possible system behavior. I'm not sure whether importance sampling is appropriate for this.

    This may be a basic question, but for Distributed Processing, how can I configure the network run option to use additional processors on my computer?

  • Rick Kossik

    The number of realizations required is really just a function of how well you need to resolve the tails, and is discussed here: https://goldsimcourses.appspot.com/basic_goldsim/unit?unit=127&lesson=135. As a very general rule, you would typically want on the order of 10 realizations outside the percentile of interest to define that percentile with high confidence (which means that to compute the 99th percentile, you would need on the order of 1,000 realizations; see the sketch below).
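
    As a back-of-envelope illustration of that rule (a sketch of the underlying binomial arithmetic, not part of the course material): the number of realizations falling beyond the p-th percentile is binomially distributed with mean N*(1-p), so aiming for ~10 exceedances gives N ~ 10/(1-p):

        import math

        # Rule of thumb: pick N so that ~10 realizations are expected to
        # fall beyond the p-th percentile; the relative scatter in that
        # count indicates how sharply the percentile is pinned down.
        for p in (0.90, 0.95, 0.99):
            n = round(10 / (1 - p))
            mean = n * (1 - p)                           # expected exceedances
            rel_sd = math.sqrt(n * p * (1 - p)) / mean   # relative std. dev.
            print(f"p={p:.2f}: N={n}, exceedances~{mean:.0f}, scatter~{rel_sd:.0%}")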

    The Distributed Processing Manual discusses in detail how to use additional processors on your (or other) computers: https://179445c431b780d58cf0-90087a6e8515a56df885cd2eb068b585.ssl.cf1.rackcdn.com/DistributedProcessing.pdf

