Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
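Scheduling of future-dated posts is controlled by Jekyll's site configuration. A minimal sketch of the relevant _config.yml entry (the future key is a standard Jekyll option; any other keys in your config stay as they are):

```yaml
# _config.yml — when false, posts dated in the future are not built
# until their publish date is reached
future: false
```

Running jekyll build --future on the command line temporarily overrides this setting.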

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

Diet deep generative audio models with structured lottery

Published in Proceedings of the 23rd International Conference on Digital Audio Effects (DAFx-20), 2020

Deep learning models have provided extremely successful solutions in most audio application fields. However, the high accuracy of these models comes at the expense of a tremendous computation cost, an aspect that is almost always overlooked when evaluating the quality of proposed models. Yet models should not be evaluated without taking their complexity into account. This is especially critical in audio applications, which heavily rely on specialized embedded hardware with real-time constraints. In this paper, we build on recent observations that deep models are highly overparameterized by studying the lottery ticket hypothesis on deep generative audio models. This hypothesis states that extremely efficient small sub-networks exist in deep models and would provide higher accuracy than larger models if trained in isolation. However, lottery tickets are found through unstructured masking, which means that the resulting models provide no gain in either disk size or inference time. Instead, we develop a method aimed at performing structured trimming. We show that this requires relying on global selection, and we introduce a specific criterion based on mutual information. First, we confirm the surprising result that smaller models provide higher accuracy than their large counterparts. We further show that we can remove up to 95% of the model weights without significant degradation in accuracy. Hence, we can obtain very light generative audio models across popular methods such as Wavenet, SING, or DDSP that are up to 100 times smaller with commensurate accuracy. We study the theoretical bounds for embedding these models on Raspberry Pi and Arduino, and show that we can obtain generative models on CPU with quality equivalent to that of large GPU models. Finally, we discuss the possibility of implementing deep generative audio models on embedded platforms.
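The structured trimming described in the abstract, i.e. removing whole units so the weight matrices physically shrink, can be illustrated with a minimal sketch. The paper selects units globally with a mutual-information criterion; the L2-norm score below is a simple stand-in for illustration, and all shapes and names are hypothetical, not the paper's code:

```python
import numpy as np

def prune_units(W, keep_ratio):
    """Structured pruning sketch: score each output unit (row of W),
    keep only the top fraction, and physically remove the rest so the
    matrix shrinks on disk and at inference time (unlike unstructured
    masking, which keeps the full matrix with zeroed entries)."""
    scores = np.linalg.norm(W, axis=1)            # one importance score per unit
    n_keep = max(1, int(round(keep_ratio * W.shape[0])))
    keep = np.sort(np.argsort(scores)[-n_keep:])  # indices of retained units
    return W[keep], keep

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))                     # toy layer: 64 units
W_small, kept = prune_units(W, keep_ratio=0.05)   # drop 95% of the units
print(W_small.shape)                              # (3, 32)
```

In a real network, pruning a layer's rows also requires removing the matching columns of the next layer so the shapes stay consistent.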

Recommended citation: P. Esling, N. Devis, A. Bitton, A. Caillon, A. Chemla-Romeu-Santos and C. Douwes, "Diet deep generative audio models with structured lottery," DAFx-20, 2020. https://dafx2020.mdw.ac.at/proceedings/papers/DAFx2020_paper_56.pdf

Is Quality Enough : Integrating Energy Consumption in a Large-Scale Evaluation of Neural Audio Synthesis Models

Published in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023

Deep learning models are now core components of modern audio synthesis, and their use has increased significantly in recent years, leading to highly accurate systems for multiple tasks. However, this quest for quality comes at a tremendous computational cost, which incurs vast energy consumption and greenhouse gas emissions. At the heart of this problem are the standardized evaluation metrics used by the scientific community to compare various contributions. In this paper, we suggest relying on a multi-objective metric based on Pareto optimality, which weighs the accuracy and energy consumption of a model equally. By applying our measure to the current state of the art in generative audio models, we show that it can drastically change the significance of the results. We hope to raise awareness of the need to investigate the energy efficiency of high-quality models more systematically, in order to place computational costs at the center of deep learning research priorities.
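The Pareto-based comparison the abstract proposes can be sketched in a few lines: a model is kept only if no other model is at least as good on both axes and strictly better on one. The model names, accuracy scores, and energy figures below are invented for illustration; this is not the paper's evaluation code:

```python
def pareto_front(models):
    """Return the (name, accuracy, energy) points that are Pareto-optimal:
    no other point has accuracy >= and energy <= with at least one strict."""
    front = []
    for name, acc, energy in models:
        dominated = any(
            a >= acc and e <= energy and (a > acc or e < energy)
            for _, a, e in models
        )
        if not dominated:
            front.append((name, acc, energy))
    return front

# Hypothetical scores: accuracy (higher is better), energy in kWh (lower is better).
models = [
    ("large-gpu", 0.95, 120.0),
    ("medium",    0.93,  40.0),
    ("small-cpu", 0.90,   5.0),
    ("tiny",      0.70,   6.0),  # dominated by small-cpu on both axes
]
print(pareto_front(models))
```

With these toy numbers the front contains large-gpu, medium, and small-cpu: under a single-objective accuracy metric only large-gpu would "win," while the multi-objective view keeps the far cheaper models in contention.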

Recommended citation: C. Douwes, G. Bindi, A. Caillon, P. Esling and J. -P. Briot, "Is Quality Enough : Integrating Energy Consumption in a Large-Scale Evaluation of Neural Audio Synthesis Models," ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 2023, pp. 1-5, doi: 10.1109/ICASSP49357.2023.10096975. https://ieeexplore.ieee.org/document/10096975

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.