Researchers report in the journal Nexus that they have developed a framework for assessing the relative value of rules and data in "informed machine learning" models, which incorporate both. They showed that balancing the two can help AI models incorporate basic laws of the real world and better handle scientific tasks such as solving complex mathematical equations and optimizing experimental conditions in chemistry.
"Embedding human knowledge into AI models has the potential to improve their efficiency and ability to make inferences, but the question is how to balance the influence of data and knowledge," says first author Hao Xu of Peking University. "Our framework can be employed to evaluate different knowledge and rules to enhance the predictive capability of deep learning models."
Generative AI models like ChatGPT and Sora are purely data-driven: the models are given training data and teach themselves by trial and error. With only data to work from, however, these systems have no way to learn physical laws, such as gravity or fluid dynamics, and they struggle in situations that differ from their training data. An alternative approach is informed machine learning, in which researchers give the model some underlying rules to help guide its training, but little is known about the relative importance of rules versus data in driving model accuracy.
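To make the idea concrete, the sketch below illustrates informed machine learning in its most common form: a network is trained on a data-fitting loss plus a penalty for violating an assumed physical rule, with a weight controlling the rule's influence. This is a generic illustration, not the authors' model; the free-fall setup, the constant `g`, and the weight `lambda_rule` are assumptions made for this example only.

```python
# Generic sketch of informed machine learning (not the model from the paper):
# a data loss plus a rule-based penalty enforcing d2y/dt2 = -g.
import torch
import torch.nn as nn

g = 9.81  # assumed physical constant for the illustrative free-fall rule

# Small network mapping time t -> height y(t)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Toy training data: noisy observations of y = y0 - 0.5*g*t^2
t_data = torch.linspace(0, 1, 20).unsqueeze(1)
y_data = 10.0 - 0.5 * g * t_data**2 + 0.05 * torch.randn_like(t_data)

lambda_rule = 0.1  # weight controlling how strongly the rule influences training

for step in range(2000):
    opt.zero_grad()

    # Data term: fit the observations
    loss_data = ((net(t_data) - y_data) ** 2).mean()

    # Rule term: penalize violations of the physics constraint at random points
    t_col = torch.rand(50, 1, requires_grad=True)
    y = net(t_col)
    dy = torch.autograd.grad(y.sum(), t_col, create_graph=True)[0]
    d2y = torch.autograd.grad(dy.sum(), t_col, create_graph=True)[0]
    loss_rule = ((d2y + g) ** 2).mean()

    loss = loss_data + lambda_rule * loss_rule
    loss.backward()
    opt.step()
```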
"We are trying to teach AI models the laws of physics so that they can be more reflective of the real world, which would make them more useful in science and engineering," says senior author Yuntian Chen of the Eastern Institute of Technology, Ningbo.
To improve the performance of informed machine learning, the team developed a framework to calculate the contribution of an individual rule to a given model's predictive accuracy. The researchers also examined interactions between different rules because most informed machine learning models incorporate multiple rules, and having too many rules can cause models to collapse.
This allowed them to optimize models by tweaking the relative influence of different rules and to filter out redundant or interfering rules entirely. They also identified rules that worked synergistically and others whose contribution depended entirely on the presence of other rules.
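The paper defines its own attribution scheme; purely as a hedged sketch of the general idea, one simple way to estimate a rule's contribution and to probe rule interactions is by ablation, as below. Here `train_and_evaluate` is a hypothetical helper that trains an informed model with a given subset of rules and returns validation accuracy.

```python
# Hedged sketch: ablation-style estimates of rule contributions and pairwise
# interactions. This is NOT the framework from the paper, only an illustration.
# `train_and_evaluate(rules)` is a hypothetical helper assumed to return
# validation accuracy for a model trained with the given list of rules.
from itertools import combinations

def rule_contributions(rules, train_and_evaluate):
    """Leave-one-out contribution of each rule to validation accuracy."""
    full = train_and_evaluate(rules)
    return {r: full - train_and_evaluate([s for s in rules if s != r])
            for r in rules}

def pairwise_interactions(rules, train_and_evaluate):
    """Positive values suggest synergy; negative values suggest interference."""
    base = train_and_evaluate([])
    solo = {r: train_and_evaluate([r]) - base for r in rules}
    interactions = {}
    for a, b in combinations(rules, 2):
        joint = train_and_evaluate([a, b]) - base
        interactions[(a, b)] = joint - (solo[a] + solo[b])
    return interactions
```

Under this kind of analysis, a rule with a near-zero or negative leave-one-out contribution could be down-weighted or dropped, while strongly positive interaction terms would flag rule pairs worth keeping together.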
"We found that the rules have different kinds of relationships, and we use these relationships to make model training faster and get higher accuracy," says Chen.
The researchers say their framework has broad practical applications in engineering, physics, and chemistry. In the paper, they demonstrated its potential by using it to optimize machine learning models for solving multivariate equations and for predicting the results of thin-layer chromatography experiments, with the aim of optimizing future experimental conditions in chemistry.
Next, the researchers plan to develop their framework into a plugin tool that can be used by AI developers. Ultimately, they also want to train their models so that the models can extract knowledge and rules directly from data, rather than having rules selected by human researchers.
"We want to make it a closed loop by making the model into a real AI scientist," says Chen. "We are working to develop a model that can directly extract knowledge from the data and then use this knowledge to create rules and improve itself."
Hao Xu, Yuntian Chen, Dongxiao Zhang. Worth of prior knowledge for enhancing deep learning. Nexus, 2024. DOI: 10.1016/j.ynexs.2024.100003