Maxwell Adam

My Resume ↗

Posts

The Learning Process™ (singular influence functions)

Jul 1, 2025

Interpreting Complexity

Mar 3, 2025

A Contrastive Analysis of Features in Transformers that (play) Chess

Oct 1, 2024

Papers

The Loss Kernel: A Geometric Probe for Deep Learning Interpretability

We introduce the loss kernel, an interpretability method for measuring similarity between data points according to a trained neural network. The kernel is the covariance matrix of per-sample losses computed under a distribution of low-loss-preserving parameter perturbations.

Maxwell Adam*, Zach Furman*, Jesse Hoogland
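The loss kernel described above can be illustrated with a minimal sketch. Everything here is my own toy setup (a linear model with hypothetical names), and I use plain isotropic Gaussian parameter noise as a stand-in for the paper's low-loss-preserving perturbation distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: linear regression with squared per-sample loss.
# (Illustrative stand-in; the method targets trained neural networks.)
X = rng.normal(size=(5, 3))   # 5 data points, 3 features
w_star = rng.normal(size=3)   # "trained" parameters
y = X @ w_star                # targets chosen so w_star has zero loss

def per_sample_losses(w):
    """Loss of each data point at parameters w, shape (5,)."""
    return (X @ w - y) ** 2

# Draw parameter perturbations around the trained point. Here: simple
# Gaussian noise, standing in for a low-loss-preserving distribution.
n_draws, sigma = 1000, 0.1
L = np.stack([per_sample_losses(w_star + sigma * rng.normal(size=3))
              for _ in range(n_draws)])          # (n_draws, 5)

# The loss kernel: covariance of per-sample losses across perturbations,
# read as a similarity matrix between data points.
K = np.cov(L, rowvar=False)                      # (5, 5)
```

Points whose losses rise and fall together under perturbation get a large kernel entry, which is the sense in which the kernel measures similarity "according to" the network.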

Bayesian Influence Functions for Hessian-Free Data Attribution (ICLR 2026)

We propose the local Bayesian influence function (BIF), an extension of classical influence functions that replaces Hessian inversion with loss landscape statistics that can be estimated via stochastic-gradient MCMC sampling.

Philipp Alexander Kreer*, Wilson Wu, Maxwell Adam, Zach Furman, Jesse Hoogland
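For context, the classical influence function that the BIF extends scores a training point $z$ on a test point $z_{\text{test}}$ through an inverse-Hessian product (this is the standard textbook formulation, not taken from the paper's text):

```latex
\mathcal{I}(z, z_{\text{test}})
  = -\,\nabla_\theta \ell(z_{\text{test}}, \hat\theta)^\top
     \, H_{\hat\theta}^{-1} \,
     \nabla_\theta \ell(z, \hat\theta),
\qquad
H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^2 \ell(z_i, \hat\theta)
```

Inverting or implicitly solving against $H_{\hat\theta}$ is the expensive step; per the abstract, the BIF replaces it with loss-landscape statistics estimated by stochastic-gradient MCMC sampling.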

Influence Dynamics and Stagewise Data Attribution (ICLR 2026)

We introduce a framework for stagewise data attribution grounded in singular learning theory. We predict that influence can change non-monotonically, including sign flips and sharp peaks at developmental transitions. We first validate these predictions in a toy model, then at scale in language models, where token-level influence changes align with known developmental stages.

Jin Hwa Lee*, Matthew Smith*, Maxwell Adam, Jesse Hoogland
