Application of artificial neural networks in reactor physics calculations
Research host
KTH, Department of Physics, Division of Nuclear Engineering
Research done by doctoral student
Yi Meng Chang
Main supervisor
Jan Dufek
Co-supervisor
Christophe Demaziere
Formal project start
2021-08-16
Expected time of completion
2025-08-16
Discipline
Reactor Physics
Keywords
Reactor physics, artificial intelligence, artificial neural networks
Motivation
Nodal diffusion codes, which are used in industry for reactor simulations, require spatially homogenised and energy-collapsed nodal data, such as group macroscopic cross sections, microscopic cross sections for selected nuclides, diffusion coefficients, discontinuity factors, etc. The nodal data depends on both instantaneous and historic state variables, such as fuel depletion, fuel temperature, moderator density, and others. Nodal data generation is carried out by computationally expensive neutron transport codes, and it is impractical to generate nodal data on demand from these codes; it is therefore necessary to build simplified models of the nodal data based on its state dependencies.
These dependencies are usually tabulated or approximated by multivariate functions, mostly polynomials. The general problem with table models is that the tables grow exponentially with the number of state variables; for example, with ten grid points per variable, a table over six state variables already requires on the order of a million entries and lattice calculations to fill. The amount of data stored in the tables and the number of lattice calculations needed to fill them can therefore easily become impractically large, so table models can consider only relatively few state variables.
In this PhD project, we propose the application of Artificial Neural Networks (ANNs) to represent nodal data. The advantage of ANNs is their capacity to represent highly complex and non-smooth functions, which we believe may lead to a more accurate nodal data representation than the models in the current literature. This would allow for more flexible and accurate reactor simulations than are possible with existing data models, which could translate into better optimisation of fuel loading patterns and improved reactor economy.
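As a purely illustrative sketch of this type of data model, the Python snippet below fits a small multilayer perceptron to synthetic nodal data. The choice of scikit-learn, the state variables and their ranges, and the synthetic target function are all assumptions made for illustration; in the project itself, the training targets would come from lattice (neutron transport) calculations.

# Minimal sketch: an ANN regressor mapping state variables to a nodal
# parameter (here, a single macroscopic cross section). The state
# variables, ranges and the synthetic "reference" function below are
# purely illustrative placeholders for lattice-code results.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000

# Hypothetical instantaneous/historic state variables.
burnup = rng.uniform(0.0, 60.0, n)         # MWd/kgU
fuel_temp = rng.uniform(500.0, 1500.0, n)  # K
mod_density = rng.uniform(0.6, 0.8, n)     # g/cm^3
boron = rng.uniform(0.0, 2000.0, n)        # ppm
X = np.column_stack([burnup, fuel_temp, mod_density, boron])

# Placeholder for a cross section that would be evaluated by a lattice code.
y = (0.10 - 5e-4 * burnup
     + 0.05 * mod_density
     - 2e-6 * np.sqrt(fuel_temp)
     + 1e-5 * boron * mod_density)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                     max_iter=2000, random_state=0)
model.fit(scaler.transform(X), y)

# Query the trained surrogate at an arbitrary state point.
x_new = scaler.transform([[30.0, 900.0, 0.71, 600.0]])
print(model.predict(x_new))

Note that the network architecture itself (number of layers, neurons and activation functions) is a design variable, which connects to the architecture-optimisation difficulty mentioned under the key findings below.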
Key findings thus far
- Optimising the neural network architecture was found to be one of the most challenging obstacles to obtaining good performance from the deep learning model.
- So far, the best neural network architectures tested have exhibited performance comparable to that of standard linear regression methods.
- Non-parametric statistical tests were used to quantify the relative performance of the methods across the different test problems (a minimal sketch of such a comparison is given after this list).
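As a minimal sketch of such a comparison, the snippet below applies a Wilcoxon signed-rank test (one common non-parametric paired test; the project description does not specify which tests were used) to per-problem errors of the ANN and a linear regression baseline. The error values are made-up placeholders.

# Minimal sketch: non-parametric comparison of two data models over a set
# of test problems. The errors below are hypothetical placeholders; in the
# project they would be, e.g., per-problem prediction errors of the ANN
# and of a linear regression baseline.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired errors, one entry per test problem.
ann_errors = np.array([0.012, 0.009, 0.015, 0.011, 0.010, 0.013, 0.008])
linreg_errors = np.array([0.013, 0.010, 0.014, 0.012, 0.011, 0.012, 0.009])

# Wilcoxon signed-rank test on the paired differences; the null hypothesis
# is that the two methods' errors come from the same distribution.
stat, p_value = wilcoxon(ann_errors, linreg_errors)
print(f"Wilcoxon statistic = {stat:.3f}, p-value = {p_value:.3f}")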
Publications from this project
None at time of writing.
Collaboration partners
Erwin Müller, Consultant on project
Petri Forslund Guimarães, Consultant on project