How do data-driven machine learning models that generate stable structures compare against conventional methods?

Data-driven machine learning models are increasingly being used to generate stable atomic configurations. These models are trained on databases of stable structures computed with density functional theory (DFT). From this data, architectures such as variational auto-encoders, generative adversarial networks, and (more recently) transformers learn an implicit probability distribution over configurations, which is then used to judge whether a given configuration is likely to be stable.
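As a toy illustration of the "implicit distribution" idea (not any specific model from the literature), one can fit a density estimate over descriptor vectors of known-stable structures and score a candidate by its likelihood under that density. The sketch below uses entirely synthetic 2-D descriptors and a kernel density estimate as a stand-in for a trained generative model:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic stand-in for DFT-derived descriptors of known stable structures:
# two clusters in a 2-D descriptor space (purely illustrative data).
stable_descriptors = np.vstack([
    rng.normal(loc=[1.0, 1.0], scale=0.1, size=(200, 2)),
    rng.normal(loc=[-1.0, 0.5], scale=0.1, size=(200, 2)),
]).T  # gaussian_kde expects shape (n_dims, n_samples)

density = gaussian_kde(stable_descriptors)

# A candidate near a known-stable cluster scores higher than one far away.
candidate_near = np.array([[1.0], [1.0]])
candidate_far = np.array([[3.0], [-2.0]])
print(density(candidate_near)[0] > density(candidate_far)[0])
```

A real generative model (VAE, GAN, or transformer) plays the same role at much higher dimensionality: it assigns high probability to configurations resembling the training database and can also sample new candidates from that distribution.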

In contrast, conventional methods rely on global optimization techniques such as basin hopping and minima hopping. These techniques traverse the potential energy surface of a given atomic system in search of stable (lowest-energy) structures.
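The basin-hopping idea can be sketched with SciPy's `scipy.optimize.basinhopping` on a toy one-dimensional problem: finding the equilibrium separation of a Lennard-Jones pair, whose minimum is known analytically at r = 2^(1/6)·σ. This is a minimal sketch of the optimization loop, not the full structure-search workflow:

```python
import numpy as np
from scipy.optimize import basinhopping

def lj_energy(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy; large penalty for unphysical separations."""
    r = float(np.atleast_1d(r)[0])
    if r < 0.5:
        return 1e6  # penalty keeps the random hops away from r -> 0
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# Basin hopping: random perturbation + local minimization, repeated niter times.
result = basinhopping(
    lj_energy,
    x0=[1.5],
    niter=100,
    minimizer_kwargs={"method": "L-BFGS-B"},
    stepsize=0.3,
    seed=42,
)
print(result.x[0])  # close to 2**(1/6) ~ 1.1225, the known LJ minimum
```

In a real structure search the scalar `r` becomes the full set of atomic coordinates (and possibly lattice vectors), and the energy comes from DFT or an interatomic potential rather than a closed-form expression.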

It is currently unclear whether these data-driven machine learning models outperform conventional algorithms at generating new stable structures for a fixed stoichiometry. In this work, we will apply available machine learning models as well as global optimization techniques to a few prototypical materials, benchmark the stable structures each method generates, and compare their time-to-solution.

This project is well suited for students interested in developing and applying machine learning models. Prior experience in DFT calculations and machine learning techniques is not required. Basic knowledge of programming will be useful but can be picked up during the project. 

UG Project Type
BTP
SLP