LHC is a re-scaling function in the domain of a random uniform variate, so as to obtain a better dispersion of the input numbers used to generate the pdf deviates. After searching for a while, I finally found a paper describing LHC sampling (Swiler and Wyss 2004). The paper of Swiler & Wyss presents a detailed example of the algorithm, so anybody can check the results and the algorithm itself (pages 2-9 in the paper).

In essence, the sample size ss serves to divide the sampling space into ss categories, and then the U values are re-scaled to the new limits.

(* = HERE IS THE MATHEMATICA CODE WITH COMMENTS = *)

(* scaleu is the LHS re-scaling, very simple indeed! *)
(* set function as Listable, capable of handling lists *)

So it seems clear that LHC improves the shape of the target distribution, even at the largest sample size (n = 10,000).

Note that LHC applies only to distributions for which the inverse method exists, and for distributions like the normal and lognormal the inverse method is an approximate one, so it may be better to rely on another number-generation method instead. It has also been argued (e.g., by Vose) that LHC is a relic of the past, carrying an overhead of memory and computing time.
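To make the re-scaling idea concrete, here is a minimal sketch in Python (the post's own code is in Mathematica; the function name scaleu and the sample size ss follow the post's comments, while the Python rendering and the normal-deviate example are my own). The unit interval is divided into ss equal strata, one uniform draw is placed inside each stratum, and the strata are shuffled; the stratified uniforms are then pushed through an inverse CDF:

```python
import random
from statistics import NormalDist

def scaleu(ss, rng=None):
    """Latin hypercube re-scaling of ss uniform draws.

    Splits [0, 1) into ss equal strata, places exactly one uniform
    draw inside each stratum, and shuffles the strata order so the
    output is not sorted. (Name follows the post's Mathematica comment.)
    """
    rng = rng or random.Random()
    strata = list(range(ss))
    rng.shuffle(strata)
    # stratum k covers [k/ss, (k+1)/ss); place one draw inside it
    return [(k + rng.random()) / ss for k in strata]

# Feed the stratified uniforms through an inverse CDF
# (standard normal, via the inverse method the post mentions)
ss = 1000
u = scaleu(ss, random.Random(42))
z = [NormalDist().inv_cdf(ui) for ui in u]
```

Because every stratum contributes exactly one point, the empirical distribution of z tracks the target shape much more tightly than ss plain uniform draws would, which is the improvement the post observes.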