Lawrence Livermore National Laboratory researchers described a validation exercise for the simple models used to understand the hot spot conditions reached in an implosion, finding good agreement when the models were compared against a large set of simulations.
Progress toward ignition requires accurately diagnosing current conditions and assessing proximity metrics for implosion experiments at LLNL's National Ignition Facility (NIF). Hot spot conditions are not measured directly; they are inferred using simple 0- and 1-dimensional (0D/1D) models.
LLNL physicist Alex Zylstra explained that in ignition experiments on NIF, the team has a phenomenal array of diagnostics that can measure many aspects of a shot and its performance, but some quantities important to the burn physics, such as the pressure or the amount of energy in the hot spot, are not directly measurable.
Scientists therefore rely on simple models to infer these quantities from the data, and for findings derived from those inferences to be credible, the models themselves must be benchmarked.
The study is also a much more extensive validation exercise for these simple models, drawing on more than 20,000 2D simulations that vary both the performance and the various things that can go "wrong" in an experiment.
Scientists found that the simple models still do quite a good job over a reasonable range of parameters. They have also begun using a new Markov Chain Monte Carlo algorithm, based on the measurement uncertainties of the inputs, to produce probabilistic distributions for the inferred quantities.
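To illustrate the idea, the following is a minimal sketch of how a Markov Chain Monte Carlo sampler can turn a measurement with uncertainty into a probabilistic distribution for an inferred quantity. The model, measurement values, and parameter names here are invented for illustration only; they are not the actual LLNL models or NIF data.

```python
import math
import random

random.seed(0)

# Toy "simple model": a hypothetical relation between the unmeasured
# quantity (here labeled "pressure") and a measurable signal. The
# functional form is illustrative, not a real hot spot model.
def predicted_signal(pressure):
    return 2.0 * pressure ** 1.5

# Hypothetical measured signal and its 1-sigma measurement uncertainty.
signal_obs, signal_err = 16.0, 1.5

def log_likelihood(pressure):
    # Gaussian likelihood of the measurement given a candidate pressure.
    if pressure <= 0:
        return -math.inf
    resid = (predicted_signal(pressure) - signal_obs) / signal_err
    return -0.5 * resid * resid

def metropolis(n_samples, start=4.0, step=0.3):
    # Random-walk Metropolis: propose a nearby value, accept or reject
    # based on the likelihood ratio, and record the chain of samples.
    samples, p, logl = [], start, log_likelihood(start)
    for _ in range(n_samples):
        p_new = p + random.gauss(0.0, step)
        logl_new = log_likelihood(p_new)
        if math.log(random.random()) < logl_new - logl:
            p, logl = p_new, logl_new
        samples.append(p)
    return samples

chain = metropolis(20000)[5000:]  # discard burn-in
mean = sum(chain) / len(chain)
sd = math.sqrt(sum((x - mean) ** 2 for x in chain) / len(chain))
print(f"inferred pressure: {mean:.2f} +/- {sd:.2f}")
```

The histogram of the retained chain approximates the posterior distribution of the inferred quantity, so the measurement uncertainty on the input propagates directly into an uncertainty on the inference, which is the benefit the article attributes to the MCMC approach.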
These simple models, which have been used in the literature for some time, were important for evaluating some of the burning-plasma criteria. What is new here is the development of "ensembles" by the cognitive simulation group within the ICF program, which has now produced simulation sets large enough to support these sorts of studies.