Mike Lasby’s collaborative work with researchers at Google, MIT, and the Vector Institute, “Dynamic Sparse Training with Structured Sparsity” (Lasby et al., 2024), was accepted at ICLR 2024! Dynamic sparse training (DST) methods learn state-of-the-art sparse masks, but the unstructured sparsity they produce is difficult to accelerate on real hardware. SRigL instead learns structured masks with a constant fan-in per neuron, yielding real-world inference speed-ups on both CPU and GPU!
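
To illustrate the distinction, here is a minimal sketch in PyTorch (our own illustration, not the SRigL code) contrasting an unstructured magnitude mask, where the zeros can land anywhere, with a constant-fan-in structured mask of the kind SRigL learns; the helper function names are hypothetical.

```python
# Minimal sketch (not the SRigL implementation): unstructured vs.
# constant-fan-in structured sparsity masks. Function names are ours.
import torch

def unstructured_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Keep the globally largest-magnitude weights; zeros land anywhere,
    so the per-neuron nonzero counts are irregular."""
    n = weight.numel()
    k = int(n * (1 - sparsity))  # number of weights to keep
    # k-th largest magnitude = (n - k + 1)-th smallest magnitude
    threshold = weight.abs().flatten().kthvalue(n - k + 1).values
    return (weight.abs() >= threshold).float()

def constant_fan_in_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Keep the same number of weights in every row (output neuron),
    i.e. a fixed fan-in per neuron."""
    fan_in = int(weight.shape[1] * (1 - sparsity))  # kept weights per row
    idx = weight.abs().topk(fan_in, dim=1).indices  # largest per row
    mask = torch.zeros_like(weight)
    mask.scatter_(1, idx, 1.0)
    return mask

W = torch.randn(4, 8)
print(unstructured_mask(W, 0.5).sum(dim=1))     # row counts vary
print(constant_fan_in_mask(W, 0.5).sum(dim=1))  # every row keeps exactly 4
```

Because every output neuron keeps the same number of weights under the structured mask, the nonzeros can be packed into a smaller dense matrix and computed with regular dense kernels, which is what makes real-world acceleration practical compared to irregular unstructured masks.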