-
Beyond Compression: How Knowledge Distillation Impacts Fairness and Bias in AI Models
A summary of our research on how knowledge distillation affects the way deep neural networks make decisions, with a particular focus on fairness and bias.
-
Dynamic Sparse Training with Structured Sparsity
Learning Performant and Efficient Representations Suitable for Hardware Acceleration
-
Gradient Flow in Sparse Neural Networks & Why Lottery Tickets Win
An exploration of why sparse neural networks are hard to train, and how understanding gradient flow sheds light on the Lottery Ticket Hypothesis and Dynamic Sparse Training.