CMSE Dissertation Defense: David Butts
Department of Computational Mathematics, Science & Engineering
Michigan State University
Dissertation Defense Notice
Friday, March 8, 2024, 9:00 AM (EST)
3540 Engineering Building
Meeting ID: 217 732 6832
Passcode: butts
DATA, MACHINE LEARNING, AND POLICY INFORMED AGENT-BASED MODELING
By
David J. Butts
Abstract:
Agent-based models (ABMs) examine emergent phenomena that arise from individual agent rules. This work extends the basic ABM paradigm in three key areas: data integration, policy evaluation, and incorporation of machine learning techniques. I will discuss how data-driven approaches can enhance the accuracy of ABMs, explore the practical applications of ABMs in developing policies for real-world issues, and investigate the fusion of machine learning with ABMs to optimize model design and functionality. These techniques span a range of applications, including infectious disease, disinformation, and conflict, and the resulting projects group naturally into three pairs.
The first pair addresses the integration of GPS deer movement data into a generalized Langevin model and its use in uncertainty quantification of disease spread. Exploratory data analysis revealed a discernible non-parametric trend in the GPS data with non-Gaussian statistics, which led to a model consistent with the observed data. Chronic wasting disease (CWD) and population dynamics were subsequently incorporated to forecast the prevalence of CWD. This extended model was analyzed with a global sensitivity analysis that tied variance in disease prevalence to variance in the model's parameters, yielding predictions of future disease prevalence.
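The abstract does not give the movement model's equations; as a purely illustrative sketch of the general approach, the following integrates a one-dimensional Langevin velocity process with Euler-Maruyama time stepping (all parameter values are hypothetical, and the dissertation's generalized model with non-Gaussian statistics is more elaborate):

```python
import random

def simulate_langevin(steps, dt=0.1, gamma=0.5, sigma=1.0, seed=0):
    """Euler-Maruyama integration of a 1-D Langevin process:
    dv = -gamma * v * dt + sigma * sqrt(dt) * N(0, 1), dx = v * dt.
    Returns the simulated position trajectory."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    path = [x]
    for _ in range(steps):
        # Damping (friction) term plus Gaussian random forcing.
        v += -gamma * v * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        x += v * dt
        path.append(x)
    return path

path = simulate_langevin(1000)
```

A fitted version of such a model would replace the Gaussian forcing and constant coefficients with forms estimated from the GPS data.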
The second pair examines policy evaluation, specifically strategies for mitigating disinformation in social networks. Multiple strategies were evaluated on topologically diverse networks, leading to policy recommendations. Simulations on these graphs revealed challenges associated with large network simulations, particularly computational cost and the influence of network topology. These challenges motivated a method to miniaturize real social networks while preserving key attributes, enabling more efficient and realistic simulations on artificial social networks.
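The miniaturization method itself is not described in this notice; to illustrate the general idea only, here is a naive random-node-sampling sketch that shrinks a graph and tracks a single preserved attribute (mean degree). This is a hypothetical baseline, not the dissertation's technique:

```python
import random

def shrink_graph(edges, keep_fraction, seed=0):
    """Keep a random fraction of nodes; return the induced subgraph's edges."""
    rng = random.Random(seed)
    nodes = sorted({u for edge in edges for u in edge})
    kept = set(rng.sample(nodes, max(1, int(len(nodes) * keep_fraction))))
    return [(u, v) for u, v in edges if u in kept and v in kept]

def mean_degree(edges):
    """Mean degree of the graph defined by an edge list."""
    nodes = {u for edge in edges for u in edge}
    return 2 * len(edges) / len(nodes) if nodes else 0.0

# Toy ring network of 100 nodes (mean degree exactly 2).
ring = [(i, (i + 1) % 100) for i in range(100)]
small = shrink_graph(ring, 0.5)
```

A method designed to preserve structure would instead match statistics such as the degree distribution or clustering of the original network, which naive sampling generally distorts.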
The final pair investigates inverting the ABM paradigm so that agents learn their own rules through environmental interactions. Reinforcement learning was applied to a model of conflict based on capture the flag, in which an agent learned through progressively more difficult competitions. The emergence of deterrence was explored by adding asymmetries between competing teams, and differential equation-based models were created to help interpret the results.
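As a minimal sketch of agents learning rules from interaction, the following tabular Q-learning example trains an agent to reach a "flag" at the end of a one-dimensional grid. The environment, reward, and all hyperparameters are hypothetical simplifications; the dissertation's capture-the-flag competitions are far richer:

```python
import random

def train_flag_agent(grid_size=6, episodes=500, alpha=0.5, gamma=0.9,
                     epsilon=0.1, seed=0):
    """Tabular Q-learning on a 1-D grid: start at cell 0, reward 1 for
    reaching the flag at the last cell. Actions: 0 = left, 1 = right.
    Returns the learned greedy action for each non-terminal cell."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(grid_size)]
    for _ in range(episodes):
        s = 0
        while s != grid_size - 1:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 1 if q[s][1] >= q[s][0] else 0
            s2 = max(0, min(grid_size - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == grid_size - 1 else 0.0
            # Standard Q-learning update.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return [1 if q[s][1] > q[s][0] else 0 for s in range(grid_size - 1)]

policy = train_flag_agent()  # the agent typically learns to always move right
```

The "progressively more difficult competitions" in the abstract correspond to a curriculum over environments; here that would mean retraining with larger grids or an opposing agent.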
Committee Members:
Michael Murillo (chair)
Arika Ligmann-Zielinska
John Luginsland
Yuying Xie