CV

Demba Ba

Harvard University
School of Engineering and Applied Sciences
Maxwell Dworkin
33 Oxford Street, Cambridge, MA 02138
Phone: (617) 496-1228
Office: MD 143
Email: demba@seas.harvard.edu
Homepage: http://demba-ba.org/
Group page: http://crisp.seas.harvard.edu/

Education

  • Ph.D. EECS, Massachusetts Institute of Technology, 2011.
  • M.S. EECS, Massachusetts Institute of Technology, 2006.
  • B.S. Electrical Engineering, University of Maryland, College Park, 2004.

Professional Appointments

  • Harvard University, School of Engineering and Applied Sciences, Cambridge, MA
    Dean of Undergraduate Studies for Bioengineering (August 2020–Present)
  • Harvard University, School of Engineering and Applied Sciences, Cambridge, MA
    Associate Professor of Electrical Engineering and Bioengineering (July 2019–Present)
  • Harvard University, School of Engineering and Applied Sciences, Cambridge, MA
    Assistant Professor of Electrical Engineering and Bioengineering (July 2015–June 2019)
  • Manifold AI – Algorithm Design and Development, Oakland, CA
    Senior Data Science Consultant (September 2018–Present)
  • Massachusetts Institute of Technology, Neuroscience Statistics Research Lab, Cambridge, MA
    Research Assistant/Postdoctoral Fellow (Fall 2007–Summer 2014)
  • Google – Anomaly Detection and Trend Estimation, Mountain View, CA
    Summer Intern (June 2010–September 2010)
  • Microsoft Research – Communications and Collaboration Systems Group, Redmond, WA
    Summer Intern (June–September 2006 and June–September 2009)

Awards & Honors

  • 2021 Roslyn Abramson Award for Outstanding Undergraduate Teaching, Harvard Faculty of Arts and Sciences
  • 2016 Alfred P. Sloan Research Fellow in Neuroscience
  • Spotlight Presentation at Advances in Neural Information Processing Systems 25 (NIPS 2012) [< 5% acceptance rate]
  • ICME 2010 Best Student Paper Award (for summer 2009 work at MS Research)
  • University of Maryland Engineering Honors citation
  • A Scholars Program for Industry-oriented Research in Engineering (ASPIRE)

Relevant Coursework

Discrete-time Signal Processing, Stochastic Processes, Detection and Estimation, Statistical Learning and Estimation, High-dimensional Statistics, Dynamic Systems and Control, Advanced Computational Photography, Principles of Digital Communication, Abstract Linear Algebra, Real Analysis, Functional Analysis.

Presentations

Technical Talks

  • “Sparse Coding, Artificial Neural Networks, and the Brain: Toward Substantive Intelligence.” Center for Brain Science, Harvard University, April 2021.
  • “Sparse Coding, Artificial Neural Networks, and the Brain: Toward Substantive Intelligence.” Institute for Artificial Intelligence and Fundamental Interactions Colloquium Series, MIT, April 2021.
  • “Interpretable AI in Computational Neuroscience: Sparse Coding, Artificial Neural Networks, and the Brain.” Princeton Neuroscience Institute, Princeton University, April 2021.
  • “Interpretable AI in Computational Neuroscience: Sparse Coding, Artificial Neural Networks, and the Brain.” Center for Theoretical Neuroscience, Columbia University, March 2021.
  • “Interpretable AI in Computational Neuroscience: Sparse Coding, Artificial Neural Networks, and the Brain.” Computer Science Colloquium, Indiana University Bloomington, March 2021.
  • “Deeply-sparse signal representations.” Physical Mathematics Seminar Series, MIT, February 2021.
  • “Interpretable AI in Computational Neuroscience: Sparse Coding, Artificial Neural Networks, and the Brain.” Center for Computational Neuroscience, University of Washington, January 2021.
  • “Deeply-sparse signal representations.” Conference on the Mathematical Theory of Deep Neural Networks (DeepMath), November 2020.
  • “Learning deeply-sparse signal representations.” 6.S975 Seminar Series (Advanced Topics in Signal Processing), MIT, February 2020.
  • “Auto-encoders for convolutional dictionary learning.” IBRO-SIMONS Computational Neuroscience Imbizo, Muizenberg, South Africa, January 2020.
  • “AI in computational neuroscience: sparsity, artificial neural networks and the brain.” Center for Mind, Brain, Computation and Technology Seminar Series, Stanford University, October 2019.
  • “Learning deeply-sparse signal representations.” Applied Mathematics Seminar Series, Tufts University, September 2019.
  • “Population codes, hierarchical sparse coding and connections to artificial neural networks.” Oxford University Cortexlab Seminar Series, Oxford, UK, May 2019.
  • “AI in computational neuroscience: sparsity, artificial neural networks and the brain.” International Brain Lab (IBL) annual meeting, Paris, France, May 2019.
  • “Learning deeply-sparse signal representations.” Electrical Engineering Seminar Series, Cornell Tech, April 2019.
  • “AI in computational neuroscience: sparsity, artificial neural networks and the brain.” Amazon AWS AI in Practice – DSP, Audio, Speech and Languages, Palo Alto, CA, March 2019.
  • “Learning deeply-sparse signal representations.” Electrical Engineering Seminar Series, Rice University, March 2019.
  • “Learning deeply-sparse signal representations.” Electrical Engineering Seminar Series, Harvard University, March 2019.
  • “Sparse coding, sensory processing in the brain, and artificial neural networks.” IBRO-SIMONS Computational Neuroscience Imbizo, Muizenberg, South Africa, January 2019.
  • “Estimating a separable random field from binary observations.” Department of ECE, University of Maryland – College Park, March 2017.
  • “Estimating a separable random field from binary data.” Center for Brain Science, Harvard University, November 2016.
  • “Estimating a separable random field from binary data.” Department of ECE, SILO Seminar Series, University of Wisconsin – Madison, October 2016.
  • “Estimating structured state-space models from point-process data.” Neurocontrol Workshop, Automatic Control Conference, Boston, MA, July 2016.
  • “Estimating structured state-space models from point-process data.” Second Workshop on Modelling Neural Activity, Waikoloa, HI, June 2016.
  • “New time-frequency tools toward a more precise characterization of rhythms from the brain.” Institute for Applied Computational Science, Harvard University, February 2016.

Conference Proceedings

  1. Abiy Tasissa, Pranay Tankala, and Demba Ba. Weighted ℓ1 on the simplex: compressed sensing meets locality. In 2021 IEEE Statistical Signal Processing Workshop (SSP), pages 1–5.
  2. Andrew Song, Demba Ba, and Emery Brown. PLSO: A generative framework for decomposing nonstationary time series into piecewise stationary oscillatory components. In 37th Conference on Uncertainty in Artificial Intelligence, 2021.
  3. Bahareh Tolooshams, Satish Mulleti, Demba Ba, and Yonina C Eldar. Unfolding neural networks for compressive multichannel blind deconvolution. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2890–2894. IEEE, 2021.
  4. Bahareh Tolooshams, Hao Wu, Paul Masset, Venkatesh Murthy, and Demba Ba. Unsupervised learning of a dictionary of neural impulse responses from spiking data. In Computational and Systems Neuroscience (COSYNE), 2021.
  5. Andrew Song, Francisco Flores, Demba Ba, and Emery Brown. A statistical framework for extracting time-varying oscillations from neural data. In Computational and Systems Neuroscience (COSYNE), 2021.
  6. Bahareh Tolooshams, Andrew H Song, Simona Temereanca, and Demba Ba. Convolutional dictionary learning based auto-encoders for natural exponential-family distributions. In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020. URL: https://arxiv.org/abs/1907.03211.
  7. Andrew Song, Bahareh Tolooshams, Simona Temereanca, and Demba Ba. Convolutional dictionary learning of stimulus from spiking data. In Computational and Systems Neuroscience (COSYNE), 2020.
  8. Thomas Chang, Bahareh Tolooshams, and Demba Ba. RandNet: deep learning with compressed measurements of images. In 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP). URL: https://arxiv.org/abs/1908.09258.
  9. Javier Zazo, Bahareh Tolooshams, and Demba Ba. Convolutional dictionary learning in hierarchical networks. In 2019 IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2019). URL: https://arxiv.org/abs/1907.09881.
  10. Alexander Lin, Yingzhuo Zhang, Jeremy Heng, Stephen A. Allsop, Pierre Jacob, and Demba Ba. Clustering time series with nonlinear dynamics: A Bayesian non-parametric and particle-based approach. In International Conference on Artificial Intelligence and Statistics, 2019. URL: https://arxiv.org/abs/1810.09920.
  11. Taposh Banerjee, Stephen Allsop, Kay M Tye, Demba Ba, and Vahid Tarokh. Sequential detection of regime changes in neural data. In 2019 IEEE 9th International IEEE/EMBS Conference on Neural Engineering (NER). URL: https://arxiv.org/abs/1809.00358.
  12. Bahareh Tolooshams, Sourav Dey, and Demba Ba. Scalable convolutional dictionary learning with constrained recurrent sparse auto-encoders. In 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP).
  13. Taposh Banerjee, John Choi, Bijan Pesaran, Demba Ba, and Vahid Tarokh. Classification of local field potentials using Gaussian sequence model. In 2018 IEEE Statistical Signal Processing Workshop (SSP), pages 683–687.
  14. Taposh Banerjee, John Choi, Bijan Pesaran, Demba Ba, and Vahid Tarokh. Wavelet shrinkage and thresholding based robust classification for brain-computer interface. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 836–840.
  15. Yingzhuo Zhang, Noa Shinitski, Stephen Allsop, Kay Tye, and Demba Ba. A two-dimensional separable random field model of within and cross-trial neural spiking dynamics. In Computational and Systems Neuroscience (COSYNE), 2017.
  16. Noa Malem-Shinitski, Yingzhuo Zhang, Daniel Gray, Sarah Burke, Anne Smith, Carol Barnes, and Demba Ba. Can you teach an old monkey a new trick? In Computational and Systems Neuroscience (COSYNE), 2017.
  17. Gabriel Schamberg, Demba Ba, Mark Wagner, and Todd Coleman. Efficient low-rank spectrotemporal decomposition using ADMM. In 2016 IEEE Statistical Signal Processing Workshop (SSP), pages 1–5.
  18. Demba Ba, Behtash Babadi, Patrick L Purdon, and Emery N Brown. Neural spike train denoising by point process re-weighted iterative smoothing. In 2014 IEEE 48th Asilomar Conference on Signals, Systems and Computers., pages 763–768. doi:10.1109/ACSSC.2014.7094552.
  19. Demba Ba, Behtash Babadi, Patrick L Purdon, and Emery N Brown. Exact and stable recovery of sequences of signals with sparse increments via differential ℓ1-minimization. In Advances in Neural Information Processing Systems 25, pages 2636–2644, 2012.
  20. Flavio Ribeiro, Demba Ba, Cha Zhang, and Dinei Florencio. Turning enemies into friends: Using reflections to improve sound source localization. In 2010 IEEE International Conference on Multimedia and Expo (ICME), pages 731–736. doi:10.1109/ICME.2010.5583886.
  21. Demba Ba, Flavio Ribeiro, Cha Zhang, and Dinei Florencio. L1 regularized room modeling with compact microphone arrays. In 2010 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), pages 157–160. doi:10.1109/ICASSP.2010.5496093.

Refereed Journal Articles

  1. Demba Ba. Deeply-sparse signal representations. IEEE Transactions on Signal Processing, 2020. URL: https://arxiv.org/abs/1807.01958.
  2. Bahareh Tolooshams, Sourav Dey, and Demba Ba. Deep residual auto-encoders for expectation maximization-inspired dictionary learning. IEEE Transactions on Neural Networks, 2020. URL: https://arxiv.org/pdf/1904.08827.
  3. Andrew Song, Francisco Flores, and Demba Ba. Fast convolutional dictionary learning with grid refinement. IEEE Transactions on Signal Processing (Accepted with minor revisions), 2020. URL: https://arxiv.org/abs/1807.01958.
  4. Noa Malem-Shinitski, Yingzhuo Zhang, Daniel T Gray, Sara N Burke, Anne C Smith, Carol A Barnes, and Demba Ba. A separable two-dimensional random field model of binary response data from multi-day behavioral experiments. Journal of neuroscience methods, 307:175–187, 2018. doi:10.1016/j.jneumeth.2018.04.006.
  5. Yingzhuo Zhang, Noa Malem-Shinitski, Stephen A Allsop, Kay M Tye, and Demba Ba. Estimating a separably markov random field from binary observations. Neural computation, 30(4):1046–1079, 2018. doi:10.1162/neco_a_01059.
  6. Gabriel Schamberg, Demba Ba, and Todd Coleman. A modularized efficient framework for non-markov time series estimation. IEEE Transactions on Signal Processing, 66(12):3140–3154, 2018. doi:10.1109/TSP.2018.2793870.
  7. Seong-Eun Kim, Michael Behr, Demba Ba, and Emery N Brown. State-space multitaper time-frequency analysis. Proceedings of the National Academy of Sciences, 115(1):E5–E14, 2018. doi:10.1073/pnas.1702877115.
  8. Seong-Eun Kim, Demba Ba, and Emery N Brown. A multitaper frequency-domain bootstrap method. IEEE Signal Processing Letters, 25(12):1805–1809, 2018.
  9. Gabriela Czanner, Sridevi V Sarma, Demba Ba, Uri T Eden, Wei Wu, Emad Eskandar, Hubert H Lim, Simona Temereanca, Wendy A Suzuki, and Emery N Brown. Measuring the signal-to-noise ratio of a neuron. Proceedings of the National Academy of Sciences, 112(23):7141–7146, 2015. doi:10.1073/pnas.1505545112.
  10. Demba Ba, Behtash Babadi, Patrick L Purdon, and Emery N Brown. Convergence and stability of iteratively re-weighted least squares algorithms. IEEE Transactions on Signal Processing, 62(1):183–195, 2014. doi:10.1109/TSP.2013.2287685.
  11. Demba Ba, Behtash Babadi, Patrick L Purdon, and Emery N Brown. Robust spectrotemporal decomposition by iteratively reweighted least squares. Proceedings of the National Academy of Sciences, 111(50):E5336–E5345, 2014. doi:10.1073/pnas.1320637111.
  12. Demba Ba, Simona Temereanca, and Emery N Brown. Algorithms for the analysis of ensemble neural spiking activity using simultaneous-event multivariate point-process models. Frontiers in computational neuroscience, 8, 2014. doi:10.3389/fncom.2014.00006.
  13. Luca Citi, Demba Ba, Emery N Brown, and Riccardo Barbieri. Likelihood methods for point processes with refractoriness. Neural computation, 26(2):237–263, 2014. doi:10.1162/NECO_a_00548.
  14. Flavio Ribeiro, Dinei Florencio, Demba Ba, and Cha Zhang. Geometrically constrained room modeling with compact microphone arrays. IEEE Transactions on Audio, Speech, and Language Processing, 20(5):1449–1460, 2012. doi:10.1109/TASL.2011.2180897.
  15. Flavio Ribeiro, Cha Zhang, Dinei Florencio, and Demba Ba. Using reverberation to improve range and elevation discrimination for small array sound source localization. IEEE Transactions on Audio, Speech, and Language Processing, 18(7):1781–1792, 2010. doi:10.1109/TASL.2010.2052250.
  16. Cha Zhang, Dinei Florencio, Demba Ba, and Zhengyou Zhang. Maximum likelihood sound source localization and beamforming for directional microphone arrays in distributed meetings. IEEE Transactions on Multimedia, 10(3):538–548, 2008. doi:10.1109/TMM.2008.917406.

Working Papers

  1. Andrew H Song, Bahareh Tolooshams, and Demba Ba. Gaussian process convolutional dictionary learning. Submitted, 2021. URL: https://arxiv.org/abs/2104.00530.
  2. Alexander Lin, Andrew H Song, Berkin Bilgic, and Demba Ba. Covariance-free sparse bayesian learning. Submitted, 2021. URL: https://arxiv.org/abs/2105.10439.
  3. Emmanouil Theodosis, Bahareh Tolooshams, Pranay Tankala, Abiy Tasissa, and Demba Ba. On the convergence of group-sparse autoencoders. Submitted, 2021. URL: https://arxiv.org/abs/2102.07003.
  4. Pranay Tankala, Abiy Tasissa, James M Murphy, and Demba Ba. Manifold learning and deep clustering with local dictionaries. Submitted, 2021. URL: https://arxiv.org/abs/2012.02134.
  5. Abiy Tasissa, Emmanouil Theodosis, Bahareh Tolooshams, and Demba Ba. Dense and sparse coding: theory and architectures. Submitted, 2021. URL: https://arxiv.org/abs/2006.09534.

Courses Taught

  • Harvard Course ES 201: Decision Theory and Model-Based Deep Learning (Lecturer Spring 2021)
  • Harvard Course ES 157: Biomedical Signal Processing and Computing (Lecturer Fall 2020)
  • Harvard Course ES 155: Biomedical Signal Processing and Computing (Lecturer Fall 2019)
  • Harvard Course ES 201: Decision Theory (Lecturer Spring 2019)
  • Harvard Course ES 201: Decision Theory (Lecturer Spring 2018)
  • Harvard Course ES 201: Decision Theory (Lecturer Spring 2017)
  • Harvard Course ES 155: Biomedical Signal Processing and Computing (Lecturer Spring/Fall 2016)
  • MIT Course 9.073: Statistics for Neuroscience Research (Lecturer Spring 2015)
  • MIT Course 9.272J: Topics in Neural Signal Processing (Lecturer Spring 2013/2014)
  • MIT Course 6.003: Signals and Systems (Head Teaching Assistant Fall 2007)
  • MIT Course 6.002: Circuits and Electronics (Teaching Assistant Fall 2004/2006, Spring 2005/2007)

Skills

  • Computer: Proficient: Python, Matlab/Octave. Experienced: C, C++, JavaScript.
  • Spoken Languages: French, English, Spanish.

References

Last updated: 2021/06/26 at 15:06:29