
You searched for `subject:(SpMV)`

Showing records 1 – 15 of 15 total matches.


Colorado State University

1. Augustine, Travis. Identification of regular patterns within sparse data structures.

Degree: MS (M.S.), Computer Science, 2020, Colorado State University

URL: http://hdl.handle.net/10217/208429

► Sparse matrix-vector multiplication (*SpMV*) is an essential computation in linear algebra. There is a well-known trade-off between operating on a dense or a sparse structure…
(more)

Subjects/Keywords: SpMV
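The dense-versus-sparse trade-off this abstract alludes to comes down to storing and traversing only the nonzeros. A minimal sketch of the *SpMV* kernel over the common CSR (Compressed Sparse Row) layout, assuming the usual three-array encoding; this is illustrative code, not code from the thesis:

```python
# Illustrative CSR SpMV sketch (not code from any thesis listed here).
# CSR stores a sparse matrix as three arrays: the nonzero values, their
# column indices, and row_ptr offsets delimiting each row's nonzeros.

def spmv_csr(values, col_idx, row_ptr, x):
    """Return y = A @ x for a CSR-encoded matrix A."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        # Row i's nonzeros live in values[row_ptr[i]:row_ptr[i + 1]].
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]  # indirect, irregular access into x
        y[i] = acc
    return y

# A = [[10, 0, 0],
#      [ 0, 0, 2],
#      [ 3, 0, 4]]
values, col_idx, row_ptr = [10.0, 2.0, 3.0, 4.0], [0, 2, 0, 2], [0, 1, 2, 4]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [10.0, 2.0, 7.0]
```

The indirect access `x[col_idx[k]]` is exactly the irregularity that work on identifying regular patterns within sparse structures aims to expose and reduce.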


APA (6th Edition):

Augustine, T. (2020). Identification of regular patterns within sparse data structures. (Masters Thesis). Colorado State University. Retrieved from http://hdl.handle.net/10217/208429

Chicago Manual of Style (16th Edition):

Augustine, Travis. “Identification of regular patterns within sparse data structures.” 2020. Masters Thesis, Colorado State University. Accessed April 13, 2021. http://hdl.handle.net/10217/208429.

MLA Handbook (7th Edition):

Augustine, Travis. “Identification of regular patterns within sparse data structures.” 2020. Web. 13 Apr 2021.

Vancouver:

Augustine T. Identification of regular patterns within sparse data structures. [Internet] [Masters thesis]. Colorado State University; 2020. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/10217/208429.

Council of Science Editors:

Augustine T. Identification of regular patterns within sparse data structures. [Masters Thesis]. Colorado State University; 2020. Available from: http://hdl.handle.net/10217/208429

Iowa State University

2. Groth, Brandon. Using machine learning to improve dense and sparse matrix multiplication kernels.

Degree: 2019, Iowa State University

URL: https://lib.dr.iastate.edu/etd/17688

► This work is comprised of two different projects in numerical linear algebra. The first project is about using machine learning to speed up dense matrix-matrix…
(more)

Subjects/Keywords: BLAS; GEMM; HPC; OpenMP; SpMV; Applied Mathematics; Computer Sciences


APA (6th Edition):

Groth, B. (2019). Using machine learning to improve dense and sparse matrix multiplication kernels. (Thesis). Iowa State University. Retrieved from https://lib.dr.iastate.edu/etd/17688

Note: this citation may be lacking information needed for this citation format:

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Groth, Brandon. “Using machine learning to improve dense and sparse matrix multiplication kernels.” 2019. Thesis, Iowa State University. Accessed April 13, 2021. https://lib.dr.iastate.edu/etd/17688.

Note: this citation may be lacking information needed for this citation format:

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Groth, Brandon. “Using machine learning to improve dense and sparse matrix multiplication kernels.” 2019. Web. 13 Apr 2021.

Vancouver:

Groth B. Using machine learning to improve dense and sparse matrix multiplication kernels. [Internet] [Thesis]. Iowa State University; 2019. [cited 2021 Apr 13]. Available from: https://lib.dr.iastate.edu/etd/17688.

Note: this citation may be lacking information needed for this citation format:

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Groth B. Using machine learning to improve dense and sparse matrix multiplication kernels. [Thesis]. Iowa State University; 2019. Available from: https://lib.dr.iastate.edu/etd/17688

Not specified: Masters Thesis or Doctoral Dissertation

University of Illinois – Urbana-Champaign

3. AlMasri, Mohammad. On implementing sparse matrix-vector multiplication on intel platform.

Degree: MS, Electrical & Computer Engr, 2018, University of Illinois – Urbana-Champaign

URL: http://hdl.handle.net/2142/101729

► Sparse matrix-vector multiplication, *SpMV*, can be a performance bottleneck in iterative solvers and algebraic eigenvalue problems. In this thesis, we present our sparse matrix compressed…
(more)

Subjects/Keywords: SpMV; SIMD; CCF; CSR; I-e; MKL; OpenMP; Skylake; KNL


APA (6th Edition):

AlMasri, M. (2018). On implementing sparse matrix-vector multiplication on intel platform. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/101729

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

AlMasri, Mohammad. “On implementing sparse matrix-vector multiplication on intel platform.” 2018. Thesis, University of Illinois – Urbana-Champaign. Accessed April 13, 2021. http://hdl.handle.net/2142/101729.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

AlMasri, Mohammad. “On implementing sparse matrix-vector multiplication on intel platform.” 2018. Web. 13 Apr 2021.

Vancouver:

AlMasri M. On implementing sparse matrix-vector multiplication on intel platform. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2018. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/2142/101729.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

AlMasri M. On implementing sparse matrix-vector multiplication on intel platform. [Thesis]. University of Illinois – Urbana-Champaign; 2018. Available from: http://hdl.handle.net/2142/101729

Not specified: Masters Thesis or Doctoral Dissertation

The Ohio State University

4. Ashari, Arash. Sparse Matrix-Vector Multiplication on GPU.

Degree: PhD, Computer Science and Engineering, 2014, The Ohio State University

URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100

► Sparse Matrix-Vector multiplication (*SpMV*) is one of the key operations in linear algebra. Overcoming thread divergence, load imbalance and un-coalesced and indirect memory access due…
(more)

Subjects/Keywords: Computer Engineering; Computer Science; GPU; CUDA; Sparse; SpMV; BRC; ACSR


APA (6th Edition):

Ashari, A. (2014). Sparse Matrix-Vector Multiplication on GPU. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100

Chicago Manual of Style (16th Edition):

Ashari, Arash. “Sparse Matrix-Vector Multiplication on GPU.” 2014. Doctoral Dissertation, The Ohio State University. Accessed April 13, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100.

MLA Handbook (7th Edition):

Ashari, Arash. “Sparse Matrix-Vector Multiplication on GPU.” 2014. Web. 13 Apr 2021.

Vancouver:

Ashari A. Sparse Matrix-Vector Multiplication on GPU. [Internet] [Doctoral dissertation]. The Ohio State University; 2014. [cited 2021 Apr 13]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100.

Council of Science Editors:

Ashari A. Sparse Matrix-Vector Multiplication on GPU. [Doctoral Dissertation]. The Ohio State University; 2014. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100

Colorado State University

5. Dinkins, Stephanie. A model for predicting the performance of sparse matrix vector multiply (*SpMV*) using memory bandwidth requirements and data locality.

Degree: MS (M.S.), Computer Science, 2012, Colorado State University

URL: http://hdl.handle.net/10217/65303

► Sparse matrix vector multiply (*SpMV*) is an important computation that is used in many scientific and structural engineering applications. Sparse computations like *SpMV* require the…
(more)

Subjects/Keywords: data locality; Manhattan distance; performance model; sparse matrices; sparse matrix vector multiply; SpMV
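A rough sense of why memory bandwidth dominates *SpMV* performance, as models like this one assume, comes from counting matrix traffic per floating-point operation. The numbers below are illustrative assumptions (8-byte double values, 4-byte column indices, vector and row-pointer traffic ignored), not figures from the thesis:

```python
# Back-of-the-envelope lower bound on CSR SpMV matrix traffic.
# Assumptions (illustrative): 8-byte values, 4-byte column indices;
# x, y, and row_ptr traffic are ignored, so the true ratio is higher.

def csr_bytes_per_flop(nnz):
    bytes_moved = nnz * (8 + 4)  # one value + one column index per nonzero
    flops = 2 * nnz              # one multiply and one add per nonzero
    return bytes_moved / flops

print(csr_bytes_per_flop(1_000_000))  # 6.0, independent of matrix size
```

At roughly 6 bytes of matrix traffic per flop, far more than current machines can stream per arithmetic operation, the kernel is bandwidth-bound, which is why such models predict performance from bandwidth requirements and data locality rather than flop counts.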


APA (6th Edition):

Dinkins, S. (2012). A model for predicting the performance of sparse matrix vector multiply (SpMV) using memory bandwidth requirements and data locality. (Masters Thesis). Colorado State University. Retrieved from http://hdl.handle.net/10217/65303

Chicago Manual of Style (16th Edition):

Dinkins, Stephanie. “A model for predicting the performance of sparse matrix vector multiply (SpMV) using memory bandwidth requirements and data locality.” 2012. Masters Thesis, Colorado State University. Accessed April 13, 2021. http://hdl.handle.net/10217/65303.

MLA Handbook (7th Edition):

Dinkins, Stephanie. “A model for predicting the performance of sparse matrix vector multiply (SpMV) using memory bandwidth requirements and data locality.” 2012. Web. 13 Apr 2021.

Vancouver:

Dinkins S. A model for predicting the performance of sparse matrix vector multiply (SpMV) using memory bandwidth requirements and data locality. [Internet] [Masters thesis]. Colorado State University; 2012. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/10217/65303.

Council of Science Editors:

Dinkins S. A model for predicting the performance of sparse matrix vector multiply (SpMV) using memory bandwidth requirements and data locality. [Masters Thesis]. Colorado State University; 2012. Available from: http://hdl.handle.net/10217/65303

University of Illinois – Chicago

6. Maggioni, Marco. Sparse Convex Optimization on GPUs.

Degree: 2016, University of Illinois – Chicago

URL: http://hdl.handle.net/10027/20173

► Convex optimization is a fundamental mathematical framework used for general problem solving. The computational time taken to optimize problems formulated as Linear Programming, Integer Linear…
(more)

Subjects/Keywords: SpMV; GPU; Interior Point Method; Convex Optimization; Linear Programming; Integer Linear Programming; Adaptive; Conjugate Gradient; Cholesky


APA (6th Edition):

Maggioni, M. (2016). Sparse Convex Optimization on GPUs. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/20173

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Maggioni, Marco. “Sparse Convex Optimization on GPUs.” 2016. Thesis, University of Illinois – Chicago. Accessed April 13, 2021. http://hdl.handle.net/10027/20173.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Maggioni, Marco. “Sparse Convex Optimization on GPUs.” 2016. Web. 13 Apr 2021.

Vancouver:

Maggioni M. Sparse Convex Optimization on GPUs. [Internet] [Thesis]. University of Illinois – Chicago; 2016. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/10027/20173.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Maggioni M. Sparse Convex Optimization on GPUs. [Thesis]. University of Illinois – Chicago; 2016. Available from: http://hdl.handle.net/10027/20173

Not specified: Masters Thesis or Doctoral Dissertation

Virginia Tech

7. Belgin, Mehmet. Structure-based Optimizations for Sparse Matrix-Vector Multiply.

Degree: PhD, Computer Science, 2010, Virginia Tech

URL: http://hdl.handle.net/10919/30260

► This dissertation introduces two novel techniques, OSF and PBR, to improve the performance of Sparse Matrix-vector Multiply (SMVM) kernels, which dominate the runtime of iterative…
(more)

Subjects/Keywords: Code Generators; Vectorization; Sparse; SpMV; SMVM; Matrix Vector Multiply; PBR; OSF; thread pool; parallel SpMV


APA (6th Edition):

Belgin, M. (2010). Structure-based Optimizations for Sparse Matrix-Vector Multiply. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/30260

Chicago Manual of Style (16th Edition):

Belgin, Mehmet. “Structure-based Optimizations for Sparse Matrix-Vector Multiply.” 2010. Doctoral Dissertation, Virginia Tech. Accessed April 13, 2021. http://hdl.handle.net/10919/30260.

MLA Handbook (7th Edition):

Belgin, Mehmet. “Structure-based Optimizations for Sparse Matrix-Vector Multiply.” 2010. Web. 13 Apr 2021.

Vancouver:

Belgin M. Structure-based Optimizations for Sparse Matrix-Vector Multiply. [Internet] [Doctoral dissertation]. Virginia Tech; 2010. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/10919/30260.

Council of Science Editors:

Belgin M. Structure-based Optimizations for Sparse Matrix-Vector Multiply. [Doctoral Dissertation]. Virginia Tech; 2010. Available from: http://hdl.handle.net/10919/30260

Iowa State University

8. Townsend, Kevin Rice. Computing *SpMV* on FPGAs.

Degree: 2016, Iowa State University

URL: https://lib.dr.iastate.edu/etd/15227

► There are hundreds of papers on accelerating sparse matrix vector multiplication (*SpMV*), however, only a handful target FPGAs. Some claim that FPGAs inherently perform inferiorly…
(more)

Subjects/Keywords: Computer Engineering (Computing and Networking Systems); Computer Engineering; Computing and Networking Systems; FPGA; High Performance Reconfigurable Computing; Sparse Matrix Vector Multiplication; SpMV


APA (6th Edition):

Townsend, K. R. (2016). Computing SpMV on FPGAs. (Thesis). Iowa State University. Retrieved from https://lib.dr.iastate.edu/etd/15227

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Townsend, Kevin Rice. “Computing SpMV on FPGAs.” 2016. Thesis, Iowa State University. Accessed April 13, 2021. https://lib.dr.iastate.edu/etd/15227.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Townsend, Kevin Rice. “Computing SpMV on FPGAs.” 2016. Web. 13 Apr 2021.

Vancouver:

Townsend KR. Computing SpMV on FPGAs. [Internet] [Thesis]. Iowa State University; 2016. [cited 2021 Apr 13]. Available from: https://lib.dr.iastate.edu/etd/15227.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Townsend KR. Computing SpMV on FPGAs. [Thesis]. Iowa State University; 2016. Available from: https://lib.dr.iastate.edu/etd/15227

Not specified: Masters Thesis or Doctoral Dissertation

9. Godwin, Jeswin Samuel. High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations.

Degree: MS, Computer Science and Engineering, 2013, The Ohio State University

URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1357280824

► In this thesis, we address efficient sparse matrix-vector multiplication for matrices arising from structured grid problems with high degrees of freedom at each grid node.…
(more)

Subjects/Keywords: Computer Engineering; Computer Science; SPMV; GPU; Structured Grid; Column-Diagonal


APA (6th Edition):

Godwin, J. S. (2013). High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations. (Masters Thesis). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1357280824

Chicago Manual of Style (16th Edition):

Godwin, Jeswin Samuel. “High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations.” 2013. Masters Thesis, The Ohio State University. Accessed April 13, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357280824.

MLA Handbook (7th Edition):

Godwin, Jeswin Samuel. “High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations.” 2013. Web. 13 Apr 2021.

Vancouver:

Godwin JS. High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations. [Internet] [Masters thesis]. The Ohio State University; 2013. [cited 2021 Apr 13]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1357280824.

Council of Science Editors:

Godwin JS. High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations. [Masters Thesis]. The Ohio State University; 2013. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1357280824

Indian Institute of Science

10. Ramesh, Chinthala. Hardware-Software Co-Design Accelerators for Sparse BLAS.

Degree: PhD, Engineering, 2019, Indian Institute of Science

URL: http://etd.iisc.ac.in/handle/2005/4276

► Sparse Basic Linear Algebra Subroutines (Sparse BLAS) is an important library. Sparse BLAS includes three levels of subroutines. Level 1, Level 2, and Level 3 Sparse…
(more)

Subjects/Keywords: Sparse Matrix Storage Formats; Hardware-Software Codesign Accelerators; Sparse BLAS; Hardware Accelerator; Sawtooth Compressed Row Storage; Sparse Vector Vector Multiplication; Sparse Matrix Matrix Multiplication; Sparse Matrix Vector Multiplication; Compressed Row Storage; Sparse Basic Linear Algebra Subroutines; SpMV Multiplication; SpMM Multiplication; Nano Science and Engineering


APA (6th Edition):

Ramesh, C. (2019). Hardware-Software Co-Design Accelerators for Sparse BLAS. (Doctoral Dissertation). Indian Institute of Science. Retrieved from http://etd.iisc.ac.in/handle/2005/4276

Chicago Manual of Style (16th Edition):

Ramesh, Chinthala. “Hardware-Software Co-Design Accelerators for Sparse BLAS.” 2019. Doctoral Dissertation, Indian Institute of Science. Accessed April 13, 2021. http://etd.iisc.ac.in/handle/2005/4276.

MLA Handbook (7th Edition):

Ramesh, Chinthala. “Hardware-Software Co-Design Accelerators for Sparse BLAS.” 2019. Web. 13 Apr 2021.

Vancouver:

Ramesh C. Hardware-Software Co-Design Accelerators for Sparse BLAS. [Internet] [Doctoral dissertation]. Indian Institute of Science; 2019. [cited 2021 Apr 13]. Available from: http://etd.iisc.ac.in/handle/2005/4276.

Council of Science Editors:

Ramesh C. Hardware-Software Co-Design Accelerators for Sparse BLAS. [Doctoral Dissertation]. Indian Institute of Science; 2019. Available from: http://etd.iisc.ac.in/handle/2005/4276

11. Karakasis, Vasileios. Βελτιστοποίηση του υπολογιστικού πυρήνα πολλαπλασιασμού αραιού πίνακα με διάνυσμα σε σύγχρονες πολυπύρηνες αρχιτεκτονικές υπολογιστών [Optimization of the sparse matrix-vector multiplication kernel for modern multicore computer architectures].

Degree: 2012, National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ)

URL: http://hdl.handle.net/10442/hedi/34819

► This thesis focuses on the optimization of the Sparse Matrix-Vector Multiplication kernel (*SpMV*) for modern multicore architectures. We perform an in-depth performance analysis of the…
(more)

Subjects/Keywords: Υπολογιστικά συστήματα υψηλών επιδόσεων; Επιστημονικές εφαρμογές; Πολλαπλασιασμός αραιού πίνακα με διάνυσμα; Πολυπύρηνες αρχιτεκτονικές; Συμπίεση δεδομένων; Ενεργειακή απόδοση; High performance computing; Scientific applications; Sparse matrix-vector multiplication; Multicore; Data compression; Energy-efficiency; SpMV; CSX; HPC


APA (6th Edition):

Karakasis, V. (2012). Βελτιστοποίηση του υπολογιστικού πυρήνα πολλαπλασιασμού αραιού πίνακα με διάνυσμα σε σύγχρονες πολυπύρηνες αρχιτεκτονικές υπολογιστών. (Thesis). National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ). Retrieved from http://hdl.handle.net/10442/hedi/34819

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Karakasis, Vasileios. “Βελτιστοποίηση του υπολογιστικού πυρήνα πολλαπλασιασμού αραιού πίνακα με διάνυσμα σε σύγχρονες πολυπύρηνες αρχιτεκτονικές υπολογιστών.” 2012. Thesis, National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ). Accessed April 13, 2021. http://hdl.handle.net/10442/hedi/34819.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Karakasis, Vasileios. “Βελτιστοποίηση του υπολογιστικού πυρήνα πολλαπλασιασμού αραιού πίνακα με διάνυσμα σε σύγχρονες πολυπύρηνες αρχιτεκτονικές υπολογιστών.” 2012. Web. 13 Apr 2021.

Vancouver:

Karakasis V. Βελτιστοποίηση του υπολογιστικού πυρήνα πολλαπλασιασμού αραιού πίνακα με διάνυσμα σε σύγχρονες πολυπύρηνες αρχιτεκτονικές υπολογιστών. [Internet] [Thesis]. National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ); 2012. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/10442/hedi/34819.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Karakasis V. Βελτιστοποίηση του υπολογιστικού πυρήνα πολλαπλασιασμού αραιού πίνακα με διάνυσμα σε σύγχρονες πολυπύρηνες αρχιτεκτονικές υπολογιστών. [Thesis]. National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ); 2012. Available from: http://hdl.handle.net/10442/hedi/34819

Not specified: Masters Thesis or Doctoral Dissertation

12. Sedaghati Mokhtari, Naseraddin. Performance Optimization of Memory-Bound Programs on Data Parallel Accelerators.

Degree: PhD, Computer Science and Engineering, 2016, The Ohio State University

URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1452255686

► High performance applications depend on high utilization of memory bandwidth and computing resources, and data parallel accelerators have proven to be very effective in providing…
(more)

Subjects/Keywords: Computer Science; Computer Engineering; Engineering; Stencil Computation; GPU; CUDA; SpMV; Graph Processing; Performance Analysis; SIMD

(Full-text matches: *SpMV* representations on GPUs; sparse matrices and features; feature analysis; *SpMV* performance; GPU-specific optimizations for high-performance *SpMV*; Chapter 4 evaluates the *SpMV* kernel performance across application domains and sparsity features.)


APA (6th Edition):

Sedaghati Mokhtari, N. (2016). Performance Optimization of Memory-Bound Programs on Data Parallel Accelerators. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1452255686

Chicago Manual of Style (16th Edition):

Sedaghati Mokhtari, Naseraddin. “Performance Optimization of Memory-Bound Programs on Data Parallel Accelerators.” 2016. Doctoral Dissertation, The Ohio State University. Accessed April 13, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1452255686.

MLA Handbook (7th Edition):

Sedaghati Mokhtari, Naseraddin. “Performance Optimization of Memory-Bound Programs on Data Parallel Accelerators.” 2016. Web. 13 Apr 2021.

Vancouver:

Sedaghati Mokhtari N. Performance Optimization of Memory-Bound Programs on Data Parallel Accelerators. [Internet] [Doctoral dissertation]. The Ohio State University; 2016. [cited 2021 Apr 13]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1452255686.

Council of Science Editors:

Sedaghati Mokhtari N. Performance Optimization of Memory-Bound Programs on Data Parallel Accelerators. [Doctoral Dissertation]. The Ohio State University; 2016. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1452255686

13. Boyer, Brice. Multiplication matricielle efficace et conception logicielle pour la bibliothèque de calcul exact LinBox : Efficient matrix multiplication and design for the exact linear algebra library LinBox.

Degree: Docteur es, Mathématiques, 2012, Université de Grenoble

URL: http://www.theses.fr/2012GRENM019

► In this thesis, we first develop efficient matrix multiplications. We create new schedulings that make it possible to reduce the size of the… (more)

Subjects/Keywords: Algèbre linéaire exacte; Bibliothèque mathématique générique; Multiplication matricielle dense/SpMV; Matrice dense/creuse; Ordonnancements/jeu de galet; Patrons de conception; Exact linear algebra; Generic mathematic library; Dense matrix multiplication/SpMV; Sparse/dense matrix; Schedulings/pebble games; Design patterns


APA (6th Edition):

Boyer, B. (2012). Multiplication matricielle efficace et conception logicielle pour la bibliothèque de calcul exact LinBox : Efficient matrix multiplication and design for the exact linear algebra library LinBox. (Doctoral Dissertation). Université de Grenoble. Retrieved from http://www.theses.fr/2012GRENM019

Chicago Manual of Style (16th Edition):

Boyer, Brice. “Multiplication matricielle efficace et conception logicielle pour la bibliothèque de calcul exact LinBox : Efficient matrix multiplication and design for the exact linear algebra library LinBox.” 2012. Doctoral Dissertation, Université de Grenoble. Accessed April 13, 2021. http://www.theses.fr/2012GRENM019.

MLA Handbook (7th Edition):

Boyer, Brice. “Multiplication matricielle efficace et conception logicielle pour la bibliothèque de calcul exact LinBox : Efficient matrix multiplication and design for the exact linear algebra library LinBox.” 2012. Web. 13 Apr 2021.

Vancouver:

Boyer B. Multiplication matricielle efficace et conception logicielle pour la bibliothèque de calcul exact LinBox : Efficient matrix multiplication and design for the exact linear algebra library LinBox. [Internet] [Doctoral dissertation]. Université de Grenoble; 2012. [cited 2021 Apr 13]. Available from: http://www.theses.fr/2012GRENM019.

Council of Science Editors:

Boyer B. Multiplication matricielle efficace et conception logicielle pour la bibliothèque de calcul exact LinBox : Efficient matrix multiplication and design for the exact linear algebra library LinBox. [Doctoral Dissertation]. Université de Grenoble; 2012. Available from: http://www.theses.fr/2012GRENM019

14. Hong, Changwan. Code Optimization on GPUs.

Degree: PhD, Computer Science and Engineering, 2019, The Ohio State University

URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1557123832601533

► Graphic Processing Units (GPUs) have become popular in the last decade due to their high memory bandwidth and powerful computing capacity. Nevertheless, achieving high-performance on…
(more)

Subjects/Keywords: Computer Science; GPU; performance; modeling; optimization; SpMV; SpMM; SDDMM; sparse matrix; graph processing; tiling; multicore; manycore; matrix multiplication; tensor; stencil; SIMD; data locality; CSR; parallel; load balance; shared memory; graph analytics

(Full-text matches: cuSPARSE *SpMV*/SpMM performance and upper bound on an NVIDIA Pascal P100 GPU; performance profiles of RS-SpMM and loop-over-*SpMV*, single and double precision, K=8,32,128,512; SpMM is a generalization of *SpMV* that multiplies multiple vectors by a sparse matrix, and while repeated *SpMV* can perform SpMM, better data reuse can be achieved by devising…)
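The full-text matches above contrast repeated *SpMV* with SpMM, which multiplies a sparse matrix by several vectors at once so each loaded nonzero is reused across all output columns. A hypothetical CSR-based sketch of that reuse (not code from the dissertation):

```python
# Illustrative CSR SpMM sketch: Y = A @ X for dense X with k columns.
# Each nonzero (a, j) is loaded once and reused for all k columns,
# unlike k independent SpMV passes over the matrix.

def spmm_csr(values, col_idx, row_ptr, X):
    n_rows, k = len(row_ptr) - 1, len(X[0])
    Y = [[0.0] * k for _ in range(n_rows)]
    for i in range(n_rows):
        for p in range(row_ptr[i], row_ptr[i + 1]):
            a, j = values[p], col_idx[p]
            for c in range(k):  # reuse the loaded nonzero k times
                Y[i][c] += a * X[j][c]
    return Y

# A 3x3 example (rows [10,0,0], [0,0,2], [3,0,4]) with two right-hand sides.
values, col_idx, row_ptr = [10.0, 2.0, 3.0, 4.0], [0, 2, 0, 2], [0, 1, 2, 4]
print(spmm_csr(values, col_idx, row_ptr, [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]))
# [[10.0, 0.0], [2.0, 2.0], [7.0, 4.0]]
```

Because the inner loop over columns amortizes each matrix load, SpMM lowers the bytes moved per flop relative to running *SpMV* once per vector.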


APA (6th Edition):

Hong, C. (2019). Code Optimization on GPUs. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1557123832601533

Chicago Manual of Style (16th Edition):

Hong, Changwan. “Code Optimization on GPUs.” 2019. Doctoral Dissertation, The Ohio State University. Accessed April 13, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1557123832601533.

MLA Handbook (7th Edition):

Hong, Changwan. “Code Optimization on GPUs.” 2019. Web. 13 Apr 2021.

Vancouver:

Hong C. Code Optimization on GPUs. [Internet] [Doctoral dissertation]. The Ohio State University; 2019. [cited 2021 Apr 13]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1557123832601533.

Council of Science Editors:

Hong C. Code Optimization on GPUs. [Doctoral Dissertation]. The Ohio State University; 2019. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1557123832601533

15. Ross, Christine Anne Haines. Accelerating induction machine finite-element simulation with parallel processing.

Degree: MS, Electrical & Computer Engineering, 2015, University of Illinois – Urbana-Champaign

URL: http://hdl.handle.net/2142/88070

► Finite element analysis used for detailed electromagnetic analysis and design of electric machines is computationally intensive. A means of accelerating two-dimensional transient finite element analysis,…
(more)

Subjects/Keywords: finite element; simulation; MATLAB; Graphics Processing Unit (GPU); parallel processing; linear; nonlinear; transient; eddy current; induction machine; electrical machine; speedup; electromagnetic; Compute Unified Device Architecture (CUDA); sparse matrix-vector multiplication; Sparse Matrix-vector Multiply (SpMV); Krylov; iterative solver; Finite Element Method (FEM); Finite Element Analysis (FEA); Galerkin


APA (6th Edition):

Ross, C. A. H. (2015). Accelerating induction machine finite-element simulation with parallel processing. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/88070

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Ross, Christine Anne Haines. “Accelerating induction machine finite-element simulation with parallel processing.” 2015. Thesis, University of Illinois – Urbana-Champaign. Accessed April 13, 2021. http://hdl.handle.net/2142/88070.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Ross, Christine Anne Haines. “Accelerating induction machine finite-element simulation with parallel processing.” 2015. Web. 13 Apr 2021.

Vancouver:

Ross CAH. Accelerating induction machine finite-element simulation with parallel processing. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2015. [cited 2021 Apr 13]. Available from: http://hdl.handle.net/2142/88070.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Ross CAH. Accelerating induction machine finite-element simulation with parallel processing. [Thesis]. University of Illinois – Urbana-Champaign; 2015. Available from: http://hdl.handle.net/2142/88070

Not specified: Masters Thesis or Doctoral Dissertation