Neil Lindquist

NeilLindquist5@gmail.com

Education

Ph.D. student in Computer Science, University of Tennessee

  • Advised by Dr. Jack Dongarra

B.A. magna cum laude in Math and Computer Science, Saint John’s University, 2019

Honors and Awards

Top 500 Fellowship, University of Tennessee (August 2019 - Present)

Pi Mu Epsilon Mathematics Honor Society (inducted May 2019)

Phi Beta Kappa Honors Society (inducted April 2019)

Eagle Scout (awarded June 2014)

Publications

N. Lindquist, “Replicated Computational Results (RCR) Report for ‘Code Generation for Generally Mapped Finite Elements,’” ACM Trans. Math. Softw., vol. 45, no. 4, pp. 42:1–42:7, Dec. 2019.

  • A replication of the computational results in the named paper by Robert C. Kirby and Lawrence Mitchell

Presentations

Improve the Performance of GMRES using Mixed Precision

  • 2020 SIAM Conference on Parallel Processing for Scientific Computing

Reducing Memory Access Latencies using Data Compression in Sparse, Iterative Linear Solvers

  • 2019 CSB/SJU Pi Mu Epsilon Conference

Obtaining Performance from a Julia Implementation of Trilinos Data Libraries

  • 2019 SIAM Conference on Computational Science and Engineering

Research Experience

Graduate Research Assistant - Innovative Computing Laboratory at the University of Tennessee under Dr. Jack Dongarra (July 2019 - present)

  • Experimenting with mixed double- and single-precision floating point arithmetic in GMRES, a sparse, iterative linear solver.

Graduate Research Assistant - Global Computing Laboratory at the University of Tennessee under Dr. Michela Taufer (August 2019 - February 2020)

  • Developed a machine-learning-based workflow for classifying protein structural properties from XFEL diffraction patterns.

Research Assistant - Collegeville Group at Saint John’s University under Dr. Mike Heroux (May 2017 through May 2019)

  • Explored the use of data compression to improve the performance of Conjugate Gradient, a sparse, iterative linear solver.
  • Tested the performance effects of using Julia, a high-level programming language, to implement distributed, sparse linear algebra codes.