Updates on Sharpness Aware Minimization, Part 2 (Machine Learning 2023) | by Monodeep Mukherjee | Nov 2023

1. On Memorization and Privacy Risks of Sharpness Aware Minimization (arXiv)
Authors: Young In Kim, Pratiksha Agrawal, Johannes O. Royset, Rajiv Khanna
Abstract: In many recent works, there is an increased focus on designing algorithms that seek flatter optima for neural network loss optimization, as there is empirical evidence that this leads to better generalization performance on many datasets. In this work, we dissect these performance gains through the lens of data memorization in overparameterized models. We define a new metric that helps us identify on which data points, specifically, algorithms seeking flatter optima do better when compared to vanilla SGD. We find that the generalization gains achieved by Sharpness Aware Minimization (SAM) are particularly pronounced for atypical data points, which necessitate memorization. This insight helps us uncover higher privacy risks associated with SAM, which we verify through exhaustive empirical evaluations. Finally, we propose mitigation strategies to achieve a more desirable accuracy vs. privacy tradeoff.

2. RSAM: Learning on Manifolds with Riemannian Sharpness-Aware Minimization (arXiv)
Authors: Tuan Truong, Hoang-Phi Nguyen, Tung Pham, Minh-Tuan Tran, Mehrtash Harandi, Dinh Phung, Trung Le
Abstract: Nowadays, understanding the geometry of the loss landscape shows promise in improving a model's generalization ability. In this work, we draw upon prior works that apply geometric principles to optimization and present a novel approach to improve robustness and generalization for constrained optimization problems. Specifically, this paper aims to generalize the Sharpness-Aware Minimization (SAM) optimizer to Riemannian manifolds. In doing so, we first extend the concept of sharpness and introduce a novel notion of sharpness on manifolds. To support this notion, we present a theoretical analysis characterizing generalization capability with respect to manifold sharpness, which demonstrates a tighter bound on the generalization gap, a result not known before. Motivated by this analysis, we introduce our algorithm, Riemannian Sharpness-Aware Minimization (RSAM). To demonstrate RSAM's ability to enhance generalization, we evaluate and contrast our algorithm on a broad set of problems, such as image classification and contrastive learning across different datasets, including CIFAR100, CIFAR10, and FGVCAircraft. Our code is publicly available at https://t.ly/RiemannianSAM.
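For context on the method both abstracts build on: SAM seeks parameters whose whole neighborhood has low loss by approximately solving min_w max_{||e|| <= rho} L(w + e), replacing the inner maximum with a single normalized gradient-ascent step. Below is a minimal PyTorch-style sketch of that two-step update; the function name, the default rho, and the loop wiring are illustrative assumptions, not code from either paper.

```python
import torch

def sam_step(model, loss_fn, data, target, optimizer, rho=0.05):
    # One SAM update (sketch): ascend to an approximate worst-case point
    # inside an L2 ball of radius rho around the current weights, then
    # descend using the gradient computed there.
    optimizer.zero_grad()
    loss_fn(model(data), target).backward()

    # Perturbation e = rho * g / ||g||, applied in place to the weights.
    with torch.no_grad():
        params = [p for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([p.grad.norm(p=2) for p in params]))
        eps = [rho * p.grad / (grad_norm + 1e-12) for p in params]
        for p, e in zip(params, eps):
            p.add_(e)  # w <- w + e

    # Second forward/backward pass at the perturbed weights.
    optimizer.zero_grad()
    loss_fn(model(data), target).backward()

    # Undo the perturbation, then take the base optimizer's step.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)  # w <- w - e
    optimizer.step()
```

In practice this is wrapped around a base optimizer such as SGD with momentum, and the two forward/backward passes roughly double the cost of each update.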
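Since the RSAM paper only links its code, here is a hedged sketch of the general Riemannian recipe it describes, shown on the simplest possible manifold, the unit sphere: project the gradient onto the tangent space, take the ascent step there, retract back onto the manifold, and descend with the gradient computed at the perturbed point. The sphere, the function names, and the tangent-space simplification are all assumptions for illustration; RSAM's actual algorithm and its manifold-sharpness definition are in the paper.

```python
import torch

def project_to_tangent(w, g):
    # Tangent-space projection on the unit sphere at w:
    # remove the component of g along w, i.e. g_tan = g - <g, w> w.
    return g - (g * w).sum() * w

def retract(w, v):
    # Retraction on the sphere: move in the ambient space, then
    # renormalize so the result lies back on the manifold.
    return torch.nn.functional.normalize(w + v, dim=0)

def rsam_step(w, loss_fn, lr=0.1, rho=0.05):
    # One Riemannian SAM-style update on the unit sphere (sketch only).
    w = w.detach().clone().requires_grad_(True)

    # Riemannian gradient: Euclidean gradient projected to the tangent space.
    g = torch.autograd.grad(loss_fn(w), w)[0]
    g_tan = project_to_tangent(w.detach(), g)

    # Ascend along the manifold to an approximate worst-case point.
    eps = rho * g_tan / (g_tan.norm() + 1e-12)
    w_adv = retract(w.detach(), eps).requires_grad_(True)

    # Gradient at the perturbed point; projecting it at w is a simplification
    # (a full treatment would parallel-transport between tangent spaces).
    g_adv = project_to_tangent(w.detach(),
                               torch.autograd.grad(loss_fn(w_adv), w_adv)[0])

    # Descend with a retraction so the iterate stays on the sphere.
    return retract(w.detach(), -lr * g_adv)

# Example usage on a toy smooth loss; iterates remain unit-norm throughout.
w = torch.nn.functional.normalize(torch.randn(10), dim=0)
for _ in range(100):
    w = rsam_step(w, lambda v: (v ** 3).sum())
```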

