Abstract

Knot theory is a branch of mathematics that studies embeddings of the circle in $\mathbb{R}^3$, considered up to ambient isotopy. A link generalizes this to an embedding of one or more circles; a knot is a link with a single component. Knots and links have invariants: functions that assign to each knot or link a value, such as an integer or a polynomial, that is unchanged by ambient isotopy. Predicting invariants can help us approximate them, understand them better, and find useful representations of knots and useful models for other machine learning tasks in knot theory. We perform supervised learning experiments predicting the signature on a large, wide-ranging dataset using five different models and three different representations. We present a new method of converting knots and links to graph data suitable for input to a graph neural network (GNN). The GNN outperforms the other models, unveiling a new tool for machine learning tasks in knot theory. Additionally, we attempted to use reinforcement learning to find candidate knots for disproving a conjecture concerning an invariant called the Jones polynomial. We present a reinforcement learning environment for building links so as to maximize or minimize different invariants. Despite successfully maximizing and minimizing the desired invariants, the agent always generates multi-component links rather than knots, so no candidates were produced. We suggest future directions for finding candidate knots and applying GNNs to new tasks. The code for this thesis can be found at https://github.com/ndriggs/conditional-link-generation.
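
The abstract does not specify how knots and links are converted to graph data, so the following is a minimal sketch of one plausible encoding, not the thesis's actual method: crossings become nodes and shared arcs become edges, recovering the 4-regular plane graph underlying the diagram. The input format (PD, i.e. planar diagram, codes), the function name pd_code_to_graph, the use of crossing signs as node features, and the choice of PyTorch Geometric are all illustrative assumptions.

```python
# A minimal sketch, assuming PD codes as input and PyTorch Geometric as the
# GNN library; the thesis's actual graph construction may differ.
from collections import defaultdict

import torch
from torch_geometric.data import Data


def pd_code_to_graph(pd_code, signs):
    """pd_code: one 4-tuple of arc labels per crossing.
    signs: one +1/-1 crossing sign per crossing (illustrative node feature)."""
    # Map each arc label to the crossings it touches; in a closed diagram
    # every arc label appears exactly twice across the PD code.
    arc_to_crossings = defaultdict(list)
    for i, crossing in enumerate(pd_code):
        for arc in crossing:
            arc_to_crossings[arc].append(i)

    # Each arc joins two crossings, so it contributes one undirected edge
    # (recorded in both directions, as PyTorch Geometric expects).
    edges = []
    for endpoints in arc_to_crossings.values():
        i, j = endpoints
        edges += [(i, j), (j, i)]

    edge_index = torch.tensor(edges, dtype=torch.long).t().contiguous()
    x = torch.tensor(signs, dtype=torch.float).view(-1, 1)  # node features
    return Data(x=x, edge_index=edge_index)


# Example: the trefoil knot in standard PD notation (three crossings).
trefoil_pd = [(1, 4, 2, 5), (3, 6, 4, 1), (5, 2, 6, 3)]
graph = pd_code_to_graph(trefoil_pd, signs=[1, 1, 1])
```

The resulting Data object can be batched and passed to any standard GNN layer; the signature label would be attached as the regression or classification target.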

Degree

MS

College and Department

Computational, Mathematical, and Physical Sciences; Mathematics

Rights

https://lib.byu.edu/about/copyright/

Date Submitted

2025-04-23

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd13617

Keywords

knot theory, graph neural networks, reinforcement learning

Language

English
