Neil Lindquist

NeilLindquist5@gmail.com

Research Interests

Numerical Linear Algebra

Effects of data representation on performance and accuracy

Approximate computing algorithms

High performance computing

Education

Ph.D. in Computer Science, University of Tennessee, 2023

M.S. in Computer Science, University of Tennessee, 2022

B.A. magna cum laude in Math and Computer Science, Saint John’s University, 2019

Professional Experience

Developer Technology Engineer - NVIDIA (January 2024 through the present)

Research Associate - University of Tennessee (August 2023 through December 2023)

  • Debugged and optimized SLATE (a distributed, heterogeneous, dense linear algebra library) on the Frontier supercomputer.
  • Polished LU alternatives from my graduate work for inclusion in SLATE.

Graduate Research Assistant - University of Tennessee (July 2019 through July 2023)

  • Developed optimizations and algorithms to improve the performance of pivoting in dense, distributed LU factorizations.
    • Primarily focused on SLATE.
    • Approaches included randomization, relaxed pivoting criteria, and additive perturbations.
  • Experimented with mixing double- and single-precision floating-point arithmetic in GMRES, a sparse, iterative linear solver.
  • Contributed to a machine-learning-based workflow for classifying protein structures from XFEL diffraction patterns.

Intern - MathWorks (May 2022 through August 2022)

  • Contributed to the development and optimization of MATLAB linear algebra routines, including sparse matrix multiplication, least squares, and eigenvalue problems.

Givens Associate - Argonne National Laboratory (May 2021 through August 2021)

  • Ported interpolation routines to GPU accelerators using the OCCA runtime in NekRS, a spectral-element based fluid dynamics code.
  • Implemented particle tracking and overlapping simulation domains in NekRS using the new interpolation routines.

Research Assistant - Saint John’s University (May 2017 through May 2019)

  • Explored using data compression in the Conjugate Gradient method on distributed CPU clusters.
  • Investigated using the Julia language for distributed-memory sparse linear algebra.

Honors and Awards

Nominated for the Best Paper Award at ICS’23

Tennessee’s Top 100 Fellowship, University of Tennessee (August 2019 - Present)

Pi Mu Epsilon Mathematics Honor Society (inducted May 2019)

Phi Beta Kappa Honors Society (inducted April 2019)

Eagle Scout (awarded June 2014)

Publications

N. Lindquist, P. Luszczek, and J. Dongarra, “Using Additive Modifications in LU Factorization Instead of Pivoting,” presented at the 2023 ACM International Conference on Supercomputing (ICS), Orlando, FL, USA, June 2023, DOI: 10.1145/3577193.3593731.

N. Lindquist, M. Gates, P. Luszczek, and J. Dongarra, “Threshold Pivoting for Dense LU Factorization,” presented at the 2022 IEEE/ACM 13th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Heterogeneous Systems (ScalAH), Dallas, TX, USA, Nov. 2022, DOI: 10.1109/ScalAH56622.2022.00010.

N. Lindquist, P. Luszczek, and J. Dongarra, “Accelerating Restarted GMRES With Mixed Precision Arithmetic,” in IEEE Transactions on Parallel and Distributed Systems, vol. 33, no. 4, pp. 1027–1037, April 2022, DOI: 10.1109/TPDS.2021.3090757.

N. Lindquist, P. Luszczek, and J. Dongarra, “Replacing Pivoting in Distributed Gaussian Elimination with Randomized Techniques,” presented at the 2020 IEEE/ACM 11th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (ScalA), Atlanta, GA, USA, Nov. 2020, DOI: 10.1109/ScalA51936.2020.00010.

P. Luszczek, Y. Tsai, N. Lindquist, H. Anzt, and J. Dongarra, “Scalable Data Generation for Evaluating Mixed-Precision Solvers,” presented at the 2020 IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, USA, Sep. 2020, pp. 1–6, DOI: 10.1109/HPEC43674.2020.9286145.

N. Lindquist, P. Luszczek, and J. Dongarra, “Improving the Performance of the GMRES Method using Mixed-Precision Techniques,” presented at the 17th Smoky Mountains Computational Sciences and Engineering Conference, SMC 2020, Oak Ridge, TN, USA, Aug. 2020, DOI: 10.1007/978-3-030-63393-6_4.

N. Lindquist, “Replicated Computational Results (RCR) Report for ‘Code Generation for Generally Mapped Finite Elements,’” ACM Trans. Math. Softw., vol. 45, no. 4, pp. 42:1–42:7, Dec. 2019, DOI: 10.1145/3360984.

Preprints

N. Lindquist, P. Luszczek, and J. Dongarra, “Generalizing Random Butterfly Transforms to Arbitrary Matrix Sizes,” December 2023, arXiv:2312.09376.

Presentations

Portable C++ for Modern, Heterogeneous Mixed-Precision Methods

  • 2023 SIAM Conference on Computational Science and Engineering

Modern Mixed-Precision Methods in Portable C++ for Accelerated Hardware Platforms

  • 2022 SIAM Conference on Parallel Processing for Scientific Computing

Replacing Pivoting in Distributed Gaussian Elimination with Randomized Techniques

  • 2021 SIAM Annual Meeting
  • 13th JLESC workshop

Multiprecision Approach in GMRES and its Effects on Performance

  • 2021 SIAM Conference on Applied Linear Algebra

Accelerating GMRES via Mixed Precision

  • 12th JLESC workshop

Improving the Performance of GMRES using Mixed Precision

  • 2020 SIAM Conference on Parallel Processing for Scientific Computing

Reducing Memory Access Latencies using Data Compression in Sparse, Iterative Linear Solvers

  • 2019 CSB/SJU Pi Mu Epsilon Conference

Obtaining Performance from a Julia Implementation of Trilinos Data Libraries

  • 2019 SIAM Conference on Computational Science and Engineering

Teaching Experience

Graduate Teaching Assistant - Scientific Computing For Engineers (Spring 2021)

Graduate Teaching Assistant - Scientific Computing For Engineers (Spring 2020)