Link: arXiv/DOI

Link: Source code

We propose DA-DPFL, a novel sparse-to-sparser training scheme. DA-DPFL initializes training with a subset of the model parameters, which is progressively reduced during training via dynamic aggregation, leading to substantial energy savings while retaining adequate information during critical learning periods.
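
To make the sparse-to-sparser idea concrete, below is a minimal toy sketch of a pruning schedule that retains a shrinking fraction of parameters over training rounds. This is our own illustration, not DA-DPFL's actual method: the linear schedule, the magnitude-based mask, and the names `magnitude_mask` and `density_schedule` are all assumptions for exposition, and the dynamic aggregation step is omitted entirely.

```python
import numpy as np


def magnitude_mask(weights: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Binary mask keeping the largest-magnitude `keep_fraction` of entries.

    Hypothetical helper for illustration; not from the DA-DPFL codebase.
    """
    k = max(1, int(round(keep_fraction * weights.size)))
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)


def density_schedule(initial: float, final: float, rounds: int) -> np.ndarray:
    """Linearly shrink the fraction of retained parameters across rounds.

    An assumed schedule shape; the paper's actual schedule may differ.
    """
    return np.linspace(initial, final, rounds)


# Toy run: one client's weight matrix is pruned more aggressively each round,
# standing in for the sparse-to-sparser progression during training.
rng = np.random.default_rng(seed=0)
weights = rng.normal(size=(16, 16))
for rnd, density in enumerate(density_schedule(0.5, 0.1, rounds=5)):
    mask = magnitude_mask(weights, density)
    weights *= mask  # a local update would train only the surviving weights
    print(f"round {rnd}: kept {mask.mean():.0%} of parameters")
```

Under these assumptions, each round keeps fewer weights than the last, which is the "sparse-to-sparser" progression the scheme relies on for its energy savings.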

First published: 6 September 2023