Advanced search options

Sorted by: relevance · author · university · date | New search

You searched for `subject:(sparse matrix vector multiply)`

Showing records 1 – 30 of 13510 total matches.

◁ [1] [2] [3] [4] [5] … [451] ▶

Search Limiters

Dates

- 2017 – 2021 (4019)
- 2012 – 2016 (5227)
- 2007 – 2011 (3087)
- 2002 – 2006 (1059)
- 1997 – 2001 (339)
- 1992 – 1996 (157)
- 1987 – 1991 (95)
- 1982 – 1986 (55)
- 1977 – 1981 (23)
- 1972 – 1976 (30)

Universities

- Brno University of Technology (721)
- University of São Paulo (507)
- NSYSU (259)
- Universidade Estadual de Campinas (238)
- Texas A&M University (206)
- National University of Singapore (205)
- Virginia Tech (185)
- Georgia Tech (176)
- University of Illinois – Urbana-Champaign (174)
- Indian Institute of Science (167)
- University of Michigan (158)
- Brazil (157)
- Delft University of Technology (137)
- University of Manchester (134)
- The Ohio State University (125)

Department

- Electrical Engineering (264)
- Computer Science (218)
- Mathematics (201)
- Electrical and Computer Engineering (200)
- Mechanical Engineering (179)
- Physics (124)
- Faculty of Engineering (110)
- Biomedical Engineering (97)
- Informatique (93)
- Statistics (84)
- Chemistry (81)
- Materials Science and Engineering (62)
- Economics (61)
- Engineering (61)
- Computer Science and Engineering (57)

Degrees

Levels

- doctoral (5155)
- masters (2716)
- thesis (215)
- doctor of philosophy ph.d. (14)
- doctor of philosophy (ph.d.) (10)

Languages

Country

- US (4327)
- Brazil (1953)
- France (1005)
- Canada (792)
- Czech Republic (725)
- Sweden (569)
- UK (463)
- India (391)
- Greece (374)
- Australia (363)
- Netherlands (357)
- Taiwan (259)
- South Africa (209)
- Singapore (205)
- Germany (203)


Colorado State University

1. Dinkins, Stephanie. Model for predicting the performance of *sparse* *matrix* *vector* *multiply* (SpMV) using memory bandwidth requirements and data locality, A.

Degree: MS, Computer Science, 2012, Colorado State University

URL: http://hdl.handle.net/10217/65303

► *Sparse* *matrix* *vector* *multiply* (SpMV) is an important computation that is used in many scientific and structural engineering applications. *Sparse* computations like SpMV require the…
(more)
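For context on what records like this one study: the SpMV kernel in CSR (compressed sparse row) format can be written in a few lines. This is a generic textbook sketch, not code from the thesis; the function name `spmv_csr` and the toy matrix are illustrative.

```python
# Generic CSR-format sparse matrix-vector multiply: y = A @ x.
# values/col_idx hold the nonzeros row by row; row_ptr[i]:row_ptr[i+1]
# spans the nonzeros of row i.

def spmv_csr(values, col_idx, row_ptr, x):
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# Toy matrix A = [[1, 0, 2],
#                 [0, 3, 0],
#                 [4, 0, 5]]
values  = [1.0, 2.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

Performance models of the kind this thesis proposes ask how the irregular `x[col_idx[k]]` accesses interact with memory bandwidth and data locality.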

Subjects/Keywords: data locality; Manhattan distance; performance model; sparse matrices; sparse matrix vector multiply; SpMV


APA (6th Edition):

Dinkins, S. (2012). Model for predicting the performance of sparse matrix vector multiply (SpMV) using memory bandwidth requirements and data locality, A. (Masters Thesis). Colorado State University. Retrieved from http://hdl.handle.net/10217/65303

Chicago Manual of Style (16th Edition):

Dinkins, Stephanie. “Model for predicting the performance of sparse matrix vector multiply (SpMV) using memory bandwidth requirements and data locality, A.” 2012. Masters Thesis, Colorado State University. Accessed April 11, 2021. http://hdl.handle.net/10217/65303.

MLA Handbook (7th Edition):

Dinkins, Stephanie. “Model for predicting the performance of sparse matrix vector multiply (SpMV) using memory bandwidth requirements and data locality, A.” 2012. Web. 11 Apr 2021.

Vancouver:

Dinkins S. Model for predicting the performance of sparse matrix vector multiply (SpMV) using memory bandwidth requirements and data locality, A. [Internet] [Masters thesis]. Colorado State University; 2012. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/10217/65303.

Council of Science Editors:

Dinkins S. Model for predicting the performance of sparse matrix vector multiply (SpMV) using memory bandwidth requirements and data locality, A. [Masters Thesis]. Colorado State University; 2012. Available from: http://hdl.handle.net/10217/65303

Virginia Tech

2. Belgin, Mehmet. Structure-based Optimizations for *Sparse* *Matrix*-*Vector* *Multiply*.

Degree: PhD, Computer Science, 2010, Virginia Tech

URL: http://hdl.handle.net/10919/30260

► This dissertation introduces two novel techniques, OSF and PBR, to improve the performance of *Sparse* *Matrix*-*vector* *Multiply* (SMVM) kernels, which dominate the runtime of iterative…
(more)

Subjects/Keywords: Code Generators; Vectorization; Sparse; SpMV; SMVM; Matrix Vector Multiply; PBR; OSF; thread pool; parallel SpMV


APA (6th Edition):

Belgin, M. (2010). Structure-based Optimizations for Sparse Matrix-Vector Multiply. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/30260

Chicago Manual of Style (16th Edition):

Belgin, Mehmet. “Structure-based Optimizations for Sparse Matrix-Vector Multiply.” 2010. Doctoral Dissertation, Virginia Tech. Accessed April 11, 2021. http://hdl.handle.net/10919/30260.

MLA Handbook (7th Edition):

Belgin, Mehmet. “Structure-based Optimizations for Sparse Matrix-Vector Multiply.” 2010. Web. 11 Apr 2021.

Vancouver:

Belgin M. Structure-based Optimizations for Sparse Matrix-Vector Multiply. [Internet] [Doctoral dissertation]. Virginia Tech; 2010. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/10919/30260.

Council of Science Editors:

Belgin M. Structure-based Optimizations for Sparse Matrix-Vector Multiply. [Doctoral Dissertation]. Virginia Tech; 2010. Available from: http://hdl.handle.net/10919/30260

Universiteit Utrecht

3. Kurt, H. Improving the Mondriaan *vector* distribution.

Degree: 2016, Universiteit Utrecht

URL: http://dspace.library.uu.nl:8080/handle/1874/327906

► Mondriaan is a hypergraph based *matrix* partitioner, used to distribute the *matrix* and vectors in parallel *sparse* *matrix*-*vector* multiplication (SpMV) when calculating the product u=Av.…
(more)

Subjects/Keywords: mondriaan; parallel algorithms; sparse matrix vector multiplication; vector distribution


APA (6th Edition):

Kurt, H. (2016). Improving the Mondriaan vector distribution. (Masters Thesis). Universiteit Utrecht. Retrieved from http://dspace.library.uu.nl:8080/handle/1874/327906

Chicago Manual of Style (16th Edition):

Kurt, H. “Improving the Mondriaan vector distribution.” 2016. Masters Thesis, Universiteit Utrecht. Accessed April 11, 2021. http://dspace.library.uu.nl:8080/handle/1874/327906.

MLA Handbook (7th Edition):

Kurt, H. “Improving the Mondriaan vector distribution.” 2016. Web. 11 Apr 2021.

Vancouver:

Kurt H. Improving the Mondriaan vector distribution. [Internet] [Masters thesis]. Universiteit Utrecht; 2016. [cited 2021 Apr 11]. Available from: http://dspace.library.uu.nl:8080/handle/1874/327906.

Council of Science Editors:

Kurt H. Improving the Mondriaan vector distribution. [Masters Thesis]. Universiteit Utrecht; 2016. Available from: http://dspace.library.uu.nl:8080/handle/1874/327906

Indian Institute of Science

4. Ramesh, Chinthala. Hardware-Software Co-Design Accelerators for *Sparse* BLAS.

Degree: PhD, Engineering, 2019, Indian Institute of Science

URL: http://etd.iisc.ac.in/handle/2005/4276

► *Sparse* Basic Linear Algebra Subroutines (*Sparse* BLAS) is an important library. *Sparse* BLAS includes three levels of subroutines: Level 1, Level 2, and Level 3 *Sparse*…
(more)

Subjects/Keywords: Sparse Matrix Storage Formats; Hardware-Software Codesign Accelerators; Sparse BLAS; Hardware Accelerator; Sawtooth Compressed Row Storage; Sparse Vector Vector Multiplication; Sparse Matrix Matrix Multiplication; Sparse Matrix Vector Multiplication; Compressed Row Storage; Sparse Basic Linear Algebra Subroutines; SpMV Multiplication; SpMM Multiplication; Nano Science and Engineering


APA (6th Edition):

Ramesh, C. (2019). Hardware-Software Co-Design Accelerators for Sparse BLAS. (Doctoral Dissertation). Indian Institute of Science. Retrieved from http://etd.iisc.ac.in/handle/2005/4276

Chicago Manual of Style (16th Edition):

Ramesh, Chinthala. “Hardware-Software Co-Design Accelerators for Sparse BLAS.” 2019. Doctoral Dissertation, Indian Institute of Science. Accessed April 11, 2021. http://etd.iisc.ac.in/handle/2005/4276.

MLA Handbook (7th Edition):

Ramesh, Chinthala. “Hardware-Software Co-Design Accelerators for Sparse BLAS.” 2019. Web. 11 Apr 2021.

Vancouver:

Ramesh C. Hardware-Software Co-Design Accelerators for Sparse BLAS. [Internet] [Doctoral dissertation]. Indian Institute of Science; 2019. [cited 2021 Apr 11]. Available from: http://etd.iisc.ac.in/handle/2005/4276.

Council of Science Editors:

Ramesh C. Hardware-Software Co-Design Accelerators for Sparse BLAS. [Doctoral Dissertation]. Indian Institute of Science; 2019. Available from: http://etd.iisc.ac.in/handle/2005/4276

Penn State University

5. Kestur Vyasa Prasanna, Srinidhi. Domain-specific Accelerators on Reconfigurable Platforms.

Degree: 2012, Penn State University

URL: https://submit-etda.libraries.psu.edu/catalog/13147

► With the increasing number of transistors available on a chip, microprocessors have evolved from large monolithic cores to multiple cores on a chip. However, to…
(more)

Subjects/Keywords: accelerators; NuFFT; FPGA; N-body problem; matrix vector; sparse; saliency; HMAX; attention; recognition; neuromorphic; vision


APA (6th Edition):

Kestur Vyasa Prasanna, S. (2012). Domain-specific Accelerators on Reconfigurable Platforms. (Thesis). Penn State University. Retrieved from https://submit-etda.libraries.psu.edu/catalog/13147

Note: this citation may be lacking information needed for this citation format:

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Kestur Vyasa Prasanna, Srinidhi. “Domain-specific Accelerators on Reconfigurable Platforms.” 2012. Thesis, Penn State University. Accessed April 11, 2021. https://submit-etda.libraries.psu.edu/catalog/13147.

Note: this citation may be lacking information needed for this citation format:

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Kestur Vyasa Prasanna, Srinidhi. “Domain-specific Accelerators on Reconfigurable Platforms.” 2012. Web. 11 Apr 2021.

Vancouver:

Kestur Vyasa Prasanna S. Domain-specific Accelerators on Reconfigurable Platforms. [Internet] [Thesis]. Penn State University; 2012. [cited 2021 Apr 11]. Available from: https://submit-etda.libraries.psu.edu/catalog/13147.

Note: this citation may be lacking information needed for this citation format:

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Kestur Vyasa Prasanna S. Domain-specific Accelerators on Reconfigurable Platforms. [Thesis]. Penn State University; 2012. Available from: https://submit-etda.libraries.psu.edu/catalog/13147

Not specified: Masters Thesis or Doctoral Dissertation

Delft University of Technology

6. Stathis, P.T. *Sparse* *Matrix* *Vector* Processing Formats.

Degree: 2004, Delft University of Technology

URL: http://resolver.tudelft.nl/uuid:51b11f1c-699a-42f4-9373-b5c9697fde74 ; urn:NBN:nl:ui:24-uuid:51b11f1c-699a-42f4-9373-b5c9697fde74

► In this dissertation we have identified *vector* processing shortcomings related to the efficient storing and processing of *sparse* matrices. To alleviate existent problems we propose…
(more)

Subjects/Keywords: vector processor; sparse matrix; storage formats


APA (6th Edition):

Stathis, P. T. (2004). Sparse Matrix Vector Processing Formats. (Doctoral Dissertation). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:51b11f1c-699a-42f4-9373-b5c9697fde74

Chicago Manual of Style (16th Edition):

Stathis, P. T. “Sparse Matrix Vector Processing Formats.” 2004. Doctoral Dissertation, Delft University of Technology. Accessed April 11, 2021. http://resolver.tudelft.nl/uuid:51b11f1c-699a-42f4-9373-b5c9697fde74.

MLA Handbook (7th Edition):

Stathis, P T. “Sparse Matrix Vector Processing Formats.” 2004. Web. 11 Apr 2021.

Vancouver:

Stathis PT. Sparse Matrix Vector Processing Formats. [Internet] [Doctoral dissertation]. Delft University of Technology; 2004. [cited 2021 Apr 11]. Available from: http://resolver.tudelft.nl/uuid:51b11f1c-699a-42f4-9373-b5c9697fde74.

Council of Science Editors:

Stathis PT. Sparse Matrix Vector Processing Formats. [Doctoral Dissertation]. Delft University of Technology; 2004. Available from: http://resolver.tudelft.nl/uuid:51b11f1c-699a-42f4-9373-b5c9697fde74

University of Illinois – Urbana-Champaign

7. Ravi, Vishal Jagannath. Automated methods for checking differential privacy.

Degree: MS, Computer Science, 2019, University of Illinois – Urbana-Champaign

URL: http://hdl.handle.net/2142/104913

► Differential privacy is a de facto standard for statistical computations over databases that contain private data. The strength of differential privacy lies in a rigorous…
(more)

Subjects/Keywords: differential privacy; sparse vector


APA (6th Edition):

Ravi, V. J. (2019). Automated methods for checking differential privacy. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/104913

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Ravi, Vishal Jagannath. “Automated methods for checking differential privacy.” 2019. Thesis, University of Illinois – Urbana-Champaign. Accessed April 11, 2021. http://hdl.handle.net/2142/104913.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Ravi, Vishal Jagannath. “Automated methods for checking differential privacy.” 2019. Web. 11 Apr 2021.

Vancouver:

Ravi VJ. Automated methods for checking differential privacy. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2019. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/2142/104913.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Ravi VJ. Automated methods for checking differential privacy. [Thesis]. University of Illinois – Urbana-Champaign; 2019. Available from: http://hdl.handle.net/2142/104913

Not specified: Masters Thesis or Doctoral Dissertation

8. Ross, Christine Anne Haines. Accelerating induction machine finite-element simulation with parallel processing.

Degree: MS, Electrical & Computer Engineering, 2015, University of Illinois – Urbana-Champaign

URL: http://hdl.handle.net/2142/88070

► Finite element analysis used for detailed electromagnetic analysis and design of electric machines is computationally intensive. A means of accelerating two-dimensional transient finite element analysis,…
(more)

Subjects/Keywords: finite element; simulation; finite; element; MATLAB; Graphics Processing Unit (GPU); parallel; parallel; processing; linear; nonlinear; transient; eddy current; eddy; induction; Machine; induction machine; electrical machine; speedup; electromagnetic; Compute Unified Device Architecture (CUDA); sparse matrix-vector multiplication; Sparse Matrix-vector Multiply (SpMV); Krylov; iterative solver; Finite Element Method (FEM); Finite Element Analysis (FEA); Galerkin

…*matrix* A is *sparse*, b is a *vector*, and the system is solved for the *vector* x. For the *sparse*… …depends on the number and ordering of nonzero entries in the *matrix*. *Sparse* iterative… …parallel programming to reduce the simulation time… …CHAPTER 2: MAGNETIC *VECTOR* POTENTIAL… …FORMULATION AND FINITE ELEMENT IMPLEMENTATION. 2.1 Magnetic *Vector* Potential Formulation. The… …expressed as J = Js + σE + σv × B (2.8). The magnetic *vector* potential A is used…
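The indexed full-text snippets for this record describe solving Ax = b with sparse iterative methods, whose inner loop is dominated by the matrix-vector product. Below is a minimal, unpreconditioned conjugate-gradient sketch (generic, not this thesis's solver); the `matvec` callback is where an SpMV kernel would plug in.

```python
# Minimal conjugate gradient for SPD systems Ax = b. The matrix is touched
# only through matvec(p), which is why SpMV dominates iterative-solver time.
# Illustrative sketch; real FE codes use sparse formats and preconditioning.

def cg(matvec, b, iters=50, tol=1e-10):
    x = [0.0] * len(b)
    r = list(b)                      # residual r = b - A x, with x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small SPD example: A = [[4, 1], [1, 3]], b = [1, 2].
A = [[4.0, 1.0], [1.0, 3.0]]
matvec = lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A]
x = cg(matvec, [1.0, 2.0])           # converges to [1/11, 7/11]
```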


APA (6th Edition):

Ross, C. A. H. (2015). Accelerating induction machine finite-element simulation with parallel processing. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/88070

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Ross, Christine Anne Haines. “Accelerating induction machine finite-element simulation with parallel processing.” 2015. Thesis, University of Illinois – Urbana-Champaign. Accessed April 11, 2021. http://hdl.handle.net/2142/88070.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Ross, Christine Anne Haines. “Accelerating induction machine finite-element simulation with parallel processing.” 2015. Web. 11 Apr 2021.

Vancouver:

Ross CAH. Accelerating induction machine finite-element simulation with parallel processing. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2015. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/2142/88070.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Ross CAH. Accelerating induction machine finite-element simulation with parallel processing. [Thesis]. University of Illinois – Urbana-Champaign; 2015. Available from: http://hdl.handle.net/2142/88070

Not specified: Masters Thesis or Doctoral Dissertation

Universidade do Estado do Rio de Janeiro

9. Daniel Estrela Lima Fonseca. Comparação do desempenho spMv entre formatos de armazenamento de matrizes esparsas provenientes do método AIM de simulação de reservatórios.

Degree: Master, 2016, Universidade do Estado do Rio de Janeiro

URL: http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=11153

► This work evaluates the performance of sparse-matrix by dense-vector multiplication (spMv), comparing two storage formats for the sparse matrices… (more)

Subjects/Keywords: Engenharia Mecânica; Matriz esparsa; Multiplicação matriz vetor; Simulação de reservatórios; Diferenças finitas; Mechanical Engineering; Sparse matrix; Matrix vector multiplication; Reservoir Simulation; Finite diferences; ENGENHARIA MECANICA


APA (6th Edition):

Fonseca, D. E. L. (2016). Comparação do desempenho spMv entre formatos de armazenamento de matrizes esparsas provenientes do método AIM de simulação de reservatórios. (Masters Thesis). Universidade do Estado do Rio de Janeiro. Retrieved from http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=11153

Chicago Manual of Style (16th Edition):

Fonseca, Daniel Estrela Lima. “Comparação do desempenho spMv entre formatos de armazenamento de matrizes esparsas provenientes do método AIM de simulação de reservatórios.” 2016. Masters Thesis, Universidade do Estado do Rio de Janeiro. Accessed April 11, 2021. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=11153.

MLA Handbook (7th Edition):

Fonseca, Daniel Estrela Lima. “Comparação do desempenho spMv entre formatos de armazenamento de matrizes esparsas provenientes do método AIM de simulação de reservatórios.” 2016. Web. 11 Apr 2021.

Vancouver:

Fonseca DEL. Comparação do desempenho spMv entre formatos de armazenamento de matrizes esparsas provenientes do método AIM de simulação de reservatórios. [Internet] [Masters thesis]. Universidade do Estado do Rio de Janeiro; 2016. [cited 2021 Apr 11]. Available from: http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=11153.

Council of Science Editors:

Fonseca DEL. Comparação do desempenho spMv entre formatos de armazenamento de matrizes esparsas provenientes do método AIM de simulação de reservatórios. [Masters Thesis]. Universidade do Estado do Rio de Janeiro; 2016. Available from: http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=11153

Texas A&M University

10. Belsare, Aditya Sanjay. *Sparse* LU Factorization for Large Circuit Matrices on Heterogenous Parallel Computing Platforms.

Degree: MS, Computer Engineering, 2014, Texas A&M University

URL: http://hdl.handle.net/1969.1/153210

► Direct *sparse* solvers are traditionally known to be robust, yet difficult to parallelize. In the context of circuit simulators, they present an important bottleneck where…
(more)

Subjects/Keywords: Sparse matrix solver; LU Factorization


APA (6th Edition):

Belsare, A. S. (2014). Sparse LU Factorization for Large Circuit Matrices on Heterogenous Parallel Computing Platforms. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/153210

Chicago Manual of Style (16th Edition):

Belsare, Aditya Sanjay. “Sparse LU Factorization for Large Circuit Matrices on Heterogenous Parallel Computing Platforms.” 2014. Masters Thesis, Texas A&M University. Accessed April 11, 2021. http://hdl.handle.net/1969.1/153210.

MLA Handbook (7th Edition):

Belsare, Aditya Sanjay. “Sparse LU Factorization for Large Circuit Matrices on Heterogenous Parallel Computing Platforms.” 2014. Web. 11 Apr 2021.

Vancouver:

Belsare AS. Sparse LU Factorization for Large Circuit Matrices on Heterogenous Parallel Computing Platforms. [Internet] [Masters thesis]. Texas A&M University; 2014. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/1969.1/153210.

Council of Science Editors:

Belsare AS. Sparse LU Factorization for Large Circuit Matrices on Heterogenous Parallel Computing Platforms. [Masters Thesis]. Texas A&M University; 2014. Available from: http://hdl.handle.net/1969.1/153210

Texas A&M University

11. Hoxha, Dielli. *Sparse* Matrices and Summa Matrix Multiplication Algorithm in STAPL Matrix Framework.

Degree: MS, Computer Engineering, 2016, Texas A&M University

URL: http://hdl.handle.net/1969.1/157157

► Applications of matrices are found in most scientific fields, such as physics, computer graphics, numerical analysis, etc. The high applicability of *matrix* algorithms and representations…
(more)

Subjects/Keywords: Parallel Computing; Sparse Matrix


APA (6th Edition):

Hoxha, D. (2016). Sparse Matrices and Summa Matrix Multiplication Algorithm in STAPL Matrix Framework. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/157157

Chicago Manual of Style (16th Edition):

Hoxha, Dielli. “Sparse Matrices and Summa Matrix Multiplication Algorithm in STAPL Matrix Framework.” 2016. Masters Thesis, Texas A&M University. Accessed April 11, 2021. http://hdl.handle.net/1969.1/157157.

MLA Handbook (7th Edition):

Hoxha, Dielli. “Sparse Matrices and Summa Matrix Multiplication Algorithm in STAPL Matrix Framework.” 2016. Web. 11 Apr 2021.

Vancouver:

Hoxha D. Sparse Matrices and Summa Matrix Multiplication Algorithm in STAPL Matrix Framework. [Internet] [Masters thesis]. Texas A&M University; 2016. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/1969.1/157157.

Council of Science Editors:

Hoxha D. Sparse Matrices and Summa Matrix Multiplication Algorithm in STAPL Matrix Framework. [Masters Thesis]. Texas A&M University; 2016. Available from: http://hdl.handle.net/1969.1/157157

Texas State University – San Marcos

12. Chaudhary, Anjani. Conversion of *Sparse* *Matrix* to Band *Matrix* Using FPGA for High-Performance Computing.

Degree: MS, Engineering, 2020, Texas State University – San Marcos

URL: https://digital.library.txstate.edu/handle/10877/13033

► Low power and high computation speed with less memory storage are essential for a real-time scientific computational application. Applications such as image processing, power system,…
(more)

Subjects/Keywords: Sparse Matrix; Band Matrix; RCM algorithm


APA (6th Edition):

Chaudhary, A. (2020). Conversion of Sparse Matrix to Band Matrix Using FPGA for High-Performance Computing. (Masters Thesis). Texas State University – San Marcos. Retrieved from https://digital.library.txstate.edu/handle/10877/13033

Chicago Manual of Style (16th Edition):

Chaudhary, Anjani. “Conversion of Sparse Matrix to Band Matrix Using FPGA for High-Performance Computing.” 2020. Masters Thesis, Texas State University – San Marcos. Accessed April 11, 2021. https://digital.library.txstate.edu/handle/10877/13033.

MLA Handbook (7th Edition):

Chaudhary, Anjani. “Conversion of Sparse Matrix to Band Matrix Using FPGA for High-Performance Computing.” 2020. Web. 11 Apr 2021.

Vancouver:

Chaudhary A. Conversion of Sparse Matrix to Band Matrix Using FPGA for High-Performance Computing. [Internet] [Masters thesis]. Texas State University – San Marcos; 2020. [cited 2021 Apr 11]. Available from: https://digital.library.txstate.edu/handle/10877/13033.

Council of Science Editors:

Chaudhary A. Conversion of Sparse Matrix to Band Matrix Using FPGA for High-Performance Computing. [Masters Thesis]. Texas State University – San Marcos; 2020. Available from: https://digital.library.txstate.edu/handle/10877/13033

Iowa State University

13. Townsend, Kevin Rice. Computing SpMV on FPGAs.

Degree: 2016, Iowa State University

URL: https://lib.dr.iastate.edu/etd/15227

► There are hundreds of papers on accelerating *sparse* *matrix* *vector* multiplication (SpMV); however, only a handful target FPGAs. Some claim that FPGAs inherently perform inferiorly…
(more)

Subjects/Keywords: Computer Engineering (Computing and Networking Systems); Computer Engineering; Computing and Networking Systems; FPGA; High Performance Reconfigurable Computing; Sparse Matrix Vector Multiplication; SpMV; Computer Engineering


APA (6th Edition):

Townsend, K. R. (2016). Computing SpMV on FPGAs. (Thesis). Iowa State University. Retrieved from https://lib.dr.iastate.edu/etd/15227

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Townsend, Kevin Rice. “Computing SpMV on FPGAs.” 2016. Thesis, Iowa State University. Accessed April 11, 2021. https://lib.dr.iastate.edu/etd/15227.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Townsend, Kevin Rice. “Computing SpMV on FPGAs.” 2016. Web. 11 Apr 2021.

Vancouver:

Townsend KR. Computing SpMV on FPGAs. [Internet] [Thesis]. Iowa State University; 2016. [cited 2021 Apr 11]. Available from: https://lib.dr.iastate.edu/etd/15227.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Townsend KR. Computing SpMV on FPGAs. [Thesis]. Iowa State University; 2016. Available from: https://lib.dr.iastate.edu/etd/15227

Not specified: Masters Thesis or Doctoral Dissertation

Delft University of Technology

14. Taouil, M. A hardware Accelerator for the OpenFOAM *Sparse* *Matrix*-*Vector* Product.

Degree: 2009, Delft University of Technology

URL: http://resolver.tudelft.nl/uuid:ce583533-45ea-4237-b18d-fe31272ea1ee

► One of the key kernels in scientific applications is the *Sparse* *Matrix* *Vector* Multiplication (SMVM). Profiling OpenFOAM, a sophisticated scientific Computational Fluid Dynamics tool, proved…
(more)

Subjects/Keywords: FPGA; Double Precision Floating Point; Sparse Matrix dense Vector Product; OpenFOAM


APA (6th Edition):

Taouil, M. (2009). A hardware Accelerator for the OpenFOAM Sparse Matrix-Vector Product. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:ce583533-45ea-4237-b18d-fe31272ea1ee

Chicago Manual of Style (16th Edition):

Taouil, M. “A hardware Accelerator for the OpenFOAM Sparse Matrix-Vector Product.” 2009. Masters Thesis, Delft University of Technology. Accessed April 11, 2021. http://resolver.tudelft.nl/uuid:ce583533-45ea-4237-b18d-fe31272ea1ee.

MLA Handbook (7th Edition):

Taouil, M. “A hardware Accelerator for the OpenFOAM Sparse Matrix-Vector Product.” 2009. Web. 11 Apr 2021.

Vancouver:

Taouil M. A hardware Accelerator for the OpenFOAM Sparse Matrix-Vector Product. [Internet] [Masters thesis]. Delft University of Technology; 2009. [cited 2021 Apr 11]. Available from: http://resolver.tudelft.nl/uuid:ce583533-45ea-4237-b18d-fe31272ea1ee.

Council of Science Editors:

Taouil M. A hardware Accelerator for the OpenFOAM Sparse Matrix-Vector Product. [Masters Thesis]. Delft University of Technology; 2009. Available from: http://resolver.tudelft.nl/uuid:ce583533-45ea-4237-b18d-fe31272ea1ee

15. Black Silva, Edgar. *Sparse* *matrix*-*vector* multiplication by specialization.

Degree: MS, 0112, 2013, University of Illinois – Urbana-Champaign

URL: http://hdl.handle.net/2142/45518

► Program specialization is the process of generating optimized programs based on available inputs. It is particularly applicable when some input data are used repeatedly while…
(more)

Subjects/Keywords: sparse matrix-vector multiplication; program specialization; run-time code generation.

…performing *sparse* *matrix*–dense *vector* multiplication, including methods that are specialized… …Unfolding. The simplest *sparse* *matrix*-*vector* multiplication method is to create a straight-line… …*vector* multiplication by specialization relative to the *matrix* M, using matrices of… …apply in general to *sparse* matrices of the kind found in the *Matrix* Market [5] or… …the University of Florida *Sparse* *Matrix* Collection [6]. The structure of the…
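The indexed snippets for this record describe "unfolding": generating straight-line code specialized to one fixed matrix M. A toy version of that idea is sketched below; this is a hypothetical illustration, not the code generator described in the thesis.

```python
# Toy SpMV specialization by "unfolding": emit straight-line code for one
# fixed sparse matrix M, so the loop and index arrays disappear at run time.
# Illustrative sketch only; not the thesis's actual generator.

def specialize_spmv(M):
    """M is a dense list-of-lists; returns a function computing M @ x
    via generated straight-line statements (one per nonzero)."""
    lines = ["def spmv(x):", f"    y = [0.0] * {len(M)}"]
    for i, row in enumerate(M):
        for j, v in enumerate(row):
            if v != 0:
                lines.append(f"    y[{i}] += {v!r} * x[{j}]")
    lines.append("    return y")
    ns = {}
    exec("\n".join(lines), ns)   # compile the unfolded kernel
    return ns["spmv"]

M = [[1.0, 0.0, 2.0],
     [0.0, 3.0, 0.0]]
spmv = specialize_spmv(M)
print(spmv([1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

The generated function contains only the statements `y[0] += 1.0 * x[0]`, `y[0] += 2.0 * x[2]`, and `y[1] += 3.0 * x[1]`, which is the specialization payoff the snippet alludes to.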


APA (6th Edition):

Black Silva, E. (2013). Sparse matrix-vector multiplication by specialization. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/45518

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Black Silva, Edgar. “Sparse matrix-vector multiplication by specialization.” 2013. Thesis, University of Illinois – Urbana-Champaign. Accessed April 11, 2021. http://hdl.handle.net/2142/45518.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Black Silva, Edgar. “Sparse matrix-vector multiplication by specialization.” 2013. Web. 11 Apr 2021.

Vancouver:

Black Silva E. Sparse matrix-vector multiplication by specialization. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2013. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/2142/45518.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Black Silva E. Sparse matrix-vector multiplication by specialization. [Thesis]. University of Illinois – Urbana-Champaign; 2013. Available from: http://hdl.handle.net/2142/45518

Not specified: Masters Thesis or Doctoral Dissertation

University of Southern California

16.
Morris, Gerald Roger.
Mapping *sparse* *matrix* scientific applications onto
FPGA-augmented reconfigurable supercomputers.

Degree: PhD, Electrical Engineering, 2006, University of Southern California

URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/29573/rec/3948

► The large capacity of field programmable gate arrays (FPGAs) has prompted researchers to map computational kernels onto FPGAs. In some instances, these kernels achieve significant…
(more)

Subjects/Keywords: reconfigurable computer; sparse matrix; Jacobi method; FPGA; conjugate gradient; vector reduction


APA (6^{th} Edition):

Morris, G. R. (2006). Mapping sparse matrix scientific applications onto FPGA-augmented reconfigurable supercomputers. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/29573/rec/3948

Chicago Manual of Style (16^{th} Edition):

Morris, Gerald Roger. “Mapping sparse matrix scientific applications onto FPGA-augmented reconfigurable supercomputers.” 2006. Doctoral Dissertation, University of Southern California. Accessed April 11, 2021. http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/29573/rec/3948.

MLA Handbook (7^{th} Edition):

Morris, Gerald Roger. “Mapping sparse matrix scientific applications onto FPGA-augmented reconfigurable supercomputers.” 2006. Web. 11 Apr 2021.

Vancouver:

Morris GR. Mapping sparse matrix scientific applications onto FPGA-augmented reconfigurable supercomputers. [Internet] [Doctoral dissertation]. University of Southern California; 2006. [cited 2021 Apr 11]. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/29573/rec/3948.

Council of Science Editors:

Morris GR. Mapping sparse matrix scientific applications onto FPGA-augmented reconfigurable supercomputers. [Doctoral Dissertation]. University of Southern California; 2006. Available from: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/29573/rec/3948

17.
Flegar, Goran.
*Sparse* Linear System Solvers on GPUs: Parallel Preconditioning, Workload Balancing, and Communication Reduction.

Degree: Programa de Doctorat en Informàtica, 2019, Universitat Jaume I

URL: http://hdl.handle.net/10803/667096

► With the end of Dennard scaling and the approaching end of Moore's law, the high-performance computing community…
(more)

Subjects/Keywords: High Performance Computing; Graphics Processing Units; Adaptive Precision; Krylov Methods; Sparse Matrix-Vector Product; Preconditioning; Tecnologies de la informació i les comunicacions (TIC); 004


APA (6^{th} Edition):

Flegar, G. (2019). Sparse Linear System Solvers on GPUs: Parallel Preconditioning, Workload Balancing, and Communication Reduction. (Doctoral Dissertation). Universitat Jaume I. Retrieved from http://hdl.handle.net/10803/667096

Chicago Manual of Style (16^{th} Edition):

Flegar, Goran. “Sparse Linear System Solvers on GPUs: Parallel Preconditioning, Workload Balancing, and Communication Reduction.” 2019. Doctoral Dissertation, Universitat Jaume I. Accessed April 11, 2021. http://hdl.handle.net/10803/667096.

MLA Handbook (7^{th} Edition):

Flegar, Goran. “Sparse Linear System Solvers on GPUs: Parallel Preconditioning, Workload Balancing, and Communication Reduction.” 2019. Web. 11 Apr 2021.

Vancouver:

Flegar G. Sparse Linear System Solvers on GPUs: Parallel Preconditioning, Workload Balancing, and Communication Reduction. [Internet] [Doctoral dissertation]. Universitat Jaume I; 2019. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/10803/667096.

Council of Science Editors:

Flegar G. Sparse Linear System Solvers on GPUs: Parallel Preconditioning, Workload Balancing, and Communication Reduction. [Doctoral Dissertation]. Universitat Jaume I; 2019. Available from: http://hdl.handle.net/10803/667096

Penn State University

18. Bangalore Srinivasmurthy, Sowmyalatha. Impact of soft errors on scientific simulations.

Degree: 2011, Penn State University

URL: https://submit-etda.libraries.psu.edu/catalog/12404

► The trends in computing processor technology are driving toward multicores through miniaturization that can pack many processors in a given chip area. This miniaturization has…
(more)

Subjects/Keywords: sparse matrix; iterative linear solvers; soft error


APA (6^{th} Edition):

Bangalore Srinivasmurthy, S. (2011). Impact of soft errors on scientific simulations. (Thesis). Penn State University. Retrieved from https://submit-etda.libraries.psu.edu/catalog/12404

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16^{th} Edition):

Bangalore Srinivasmurthy, Sowmyalatha. “Impact of soft errors on scientific simulations.” 2011. Thesis, Penn State University. Accessed April 11, 2021. https://submit-etda.libraries.psu.edu/catalog/12404.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7^{th} Edition):

Bangalore Srinivasmurthy, Sowmyalatha. “Impact of soft errors on scientific simulations.” 2011. Web. 11 Apr 2021.

Vancouver:

Bangalore Srinivasmurthy S. Impact of soft errors on scientific simulations. [Internet] [Thesis]. Penn State University; 2011. [cited 2021 Apr 11]. Available from: https://submit-etda.libraries.psu.edu/catalog/12404.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bangalore Srinivasmurthy S. Impact of soft errors on scientific simulations. [Thesis]. Penn State University; 2011. Available from: https://submit-etda.libraries.psu.edu/catalog/12404

Not specified: Masters Thesis or Doctoral Dissertation

Delft University of Technology

19.
Sigurbergsson, Bjorn (author).
A Hardware/Software Co-designed Partitioning Algorithm of *Sparse* *Matrix* *Vector* Multiplication into Multiple Independent Streams for Parallel Processing.

Degree: 2018, Delft University of Technology

URL: http://resolver.tudelft.nl/uuid:92cbecec-aed8-40c6-b70c-c4ab7e8e548e

►

The trend of computing faster and more efficiently has been a driver for the computing industry since its beginning. However, it is increasingly difficult to… (more)

Subjects/Keywords: Big data; Sparse matrix; HLS; FPGA; Zynq


APA (6^{th} Edition):

Sigurbergsson, B. (2018). A Hardware/Software Co-designed Partitioning Algorithm of Sparse Matrix Vector Multiplication into Multiple Independent Streams for Parallel Processing. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:92cbecec-aed8-40c6-b70c-c4ab7e8e548e

Chicago Manual of Style (16^{th} Edition):

Sigurbergsson, Bjorn (author). “A Hardware/Software Co-designed Partitioning Algorithm of Sparse Matrix Vector Multiplication into Multiple Independent Streams for Parallel Processing.” 2018. Masters Thesis, Delft University of Technology. Accessed April 11, 2021. http://resolver.tudelft.nl/uuid:92cbecec-aed8-40c6-b70c-c4ab7e8e548e.

MLA Handbook (7^{th} Edition):

Sigurbergsson, Bjorn (author). “A Hardware/Software Co-designed Partitioning Algorithm of Sparse Matrix Vector Multiplication into Multiple Independent Streams for Parallel Processing.” 2018. Web. 11 Apr 2021.

Vancouver:

Sigurbergsson B. A Hardware/Software Co-designed Partitioning Algorithm of Sparse Matrix Vector Multiplication into Multiple Independent Streams for Parallel Processing. [Internet] [Masters thesis]. Delft University of Technology; 2018. [cited 2021 Apr 11]. Available from: http://resolver.tudelft.nl/uuid:92cbecec-aed8-40c6-b70c-c4ab7e8e548e.

Council of Science Editors:

Sigurbergsson B. A Hardware/Software Co-designed Partitioning Algorithm of Sparse Matrix Vector Multiplication into Multiple Independent Streams for Parallel Processing. [Masters Thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:92cbecec-aed8-40c6-b70c-c4ab7e8e548e

Virginia Tech

20. Kang, Xiaoning. Contributions to Large Covariance and Inverse Covariance Matrices Estimation.

Degree: PhD, Statistics, 2016, Virginia Tech

URL: http://hdl.handle.net/10919/82150

► Estimation of covariance *matrix* and its inverse is of great importance in multivariate statistics with broad applications such as dimension reduction, portfolio optimization, linear discriminant…
(more)

Subjects/Keywords: Covariance matrix; modified Cholesky decomposition; sparse estimation


APA (6^{th} Edition):

Kang, X. (2016). Contributions to Large Covariance and Inverse Covariance Matrices Estimation. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/82150

Chicago Manual of Style (16^{th} Edition):

Kang, Xiaoning. “Contributions to Large Covariance and Inverse Covariance Matrices Estimation.” 2016. Doctoral Dissertation, Virginia Tech. Accessed April 11, 2021. http://hdl.handle.net/10919/82150.

MLA Handbook (7^{th} Edition):

Kang, Xiaoning. “Contributions to Large Covariance and Inverse Covariance Matrices Estimation.” 2016. Web. 11 Apr 2021.

Vancouver:

Kang X. Contributions to Large Covariance and Inverse Covariance Matrices Estimation. [Internet] [Doctoral dissertation]. Virginia Tech; 2016. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/10919/82150.

Council of Science Editors:

Kang X. Contributions to Large Covariance and Inverse Covariance Matrices Estimation. [Doctoral Dissertation]. Virginia Tech; 2016. Available from: http://hdl.handle.net/10919/82150

University of Illinois – Urbana-Champaign

21.
Wolf, Michael M.
Hypergraph-Based Combinatorial Optimization of *Matrix*-*Vector* Multiplication.

Degree: PhD, Computer Science, 2009, University of Illinois – Urbana-Champaign

URL: http://hdl.handle.net/2142/13069

► Combinatorial scientific computing plays an important enabling role in computational science, particularly in high performance scientific computing. In this thesis, we will describe our work…
(more)

Subjects/Keywords: matrix-vector multiplication; hypergraphs; combinatorial optimization; parallel data distributions; finite elements; sparse matrix computations; combinatorial scientific computing


APA (6^{th} Edition):

Wolf, M. M. (2009). Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/13069

Chicago Manual of Style (16^{th} Edition):

Wolf, Michael M. “Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication.” 2009. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed April 11, 2021. http://hdl.handle.net/2142/13069.

MLA Handbook (7^{th} Edition):

Wolf, Michael M. “Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication.” 2009. Web. 11 Apr 2021.

Vancouver:

Wolf MM. Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2009. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/2142/13069.

Council of Science Editors:

Wolf MM. Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2009. Available from: http://hdl.handle.net/2142/13069

University of Lethbridge

22.
University of Lethbridge. Faculty of Arts and Science.
Bi-directional determination of *sparse* Jacobian matrices: algorithms and lower bounds.

Degree: 2015, University of Lethbridge

URL: http://hdl.handle.net/10133/3760

► Efficient estimation of large *sparse* Jacobian matrices is a requisite in many large-scale scientific and engineering problems. It is known that estimation of non-zeroes of…
(more)

Subjects/Keywords: sparse matrix; Jacobian matrix; row and column compressions; bi-directional partitioning


APA (6^{th} Edition):

Science, U. o. L. F. o. A. a. (2015). Bi-directional determination of sparse Jacobian matrices: algorithms and lower bounds. (Thesis). University of Lethbridge. Retrieved from http://hdl.handle.net/10133/3760

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16^{th} Edition):

Science, University of Lethbridge. Faculty of Arts and. “Bi-directional determination of sparse Jacobian matrices: algorithms and lower bounds.” 2015. Thesis, University of Lethbridge. Accessed April 11, 2021. http://hdl.handle.net/10133/3760.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7^{th} Edition):

Science, University of Lethbridge. Faculty of Arts and. “Bi-directional determination of sparse Jacobian matrices: algorithms and lower bounds.” 2015. Web. 11 Apr 2021.

Vancouver:

Science UoLFoAa. Bi-directional determination of sparse Jacobian matrices: algorithms and lower bounds. [Internet] [Thesis]. University of Lethbridge; 2015. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/10133/3760.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Science UoLFoAa. Bi-directional determination of sparse Jacobian matrices: algorithms and lower bounds. [Thesis]. University of Lethbridge; 2015. Available from: http://hdl.handle.net/10133/3760

Not specified: Masters Thesis or Doctoral Dissertation

Georgia State University

23.
Wu, Xiaolong.
Optimizing *Sparse* *Matrix*-Matrix Multiplication on a Heterogeneous CPU-GPU Platform.

Degree: MS, Computer Science, 2015, Georgia State University

URL: https://scholarworks.gsu.edu/cs_theses/84

► *Sparse* *Matrix*-Matrix multiplication (SpMM) is a fundamental operation over irregular data, which is widely used in graph algorithms, such as finding minimum spanning trees…
(more)

Subjects/Keywords: Sparse matrix-matrix multiplication; Data locality; Pipelining; GPU


APA (6^{th} Edition):

Wu, X. (2015). Optimizing Sparse Matrix-Matrix Multiplication on a Heterogeneous CPU-GPU Platform. (Thesis). Georgia State University. Retrieved from https://scholarworks.gsu.edu/cs_theses/84

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16^{th} Edition):

Wu, Xiaolong. “Optimizing Sparse Matrix-Matrix Multiplication on a Heterogeneous CPU-GPU Platform.” 2015. Thesis, Georgia State University. Accessed April 11, 2021. https://scholarworks.gsu.edu/cs_theses/84.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7^{th} Edition):

Wu, Xiaolong. “Optimizing Sparse Matrix-Matrix Multiplication on a Heterogeneous CPU-GPU Platform.” 2015. Web. 11 Apr 2021.

Vancouver:

Wu X. Optimizing Sparse Matrix-Matrix Multiplication on a Heterogeneous CPU-GPU Platform. [Internet] [Thesis]. Georgia State University; 2015. [cited 2021 Apr 11]. Available from: https://scholarworks.gsu.edu/cs_theses/84.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Wu X. Optimizing Sparse Matrix-Matrix Multiplication on a Heterogeneous CPU-GPU Platform. [Thesis]. Georgia State University; 2015. Available from: https://scholarworks.gsu.edu/cs_theses/84

Not specified: Masters Thesis or Doctoral Dissertation

University of Tennessee – Knoxville

24. Peyton, Jonathan Lawrence. Programming Dense Linear Algebra Kernels on Vectorized Architectures.

Degree: MS, Computer Engineering, 2013, University of Tennessee – Knoxville

URL: https://trace.tennessee.edu/utk_gradthes/1666

► The high performance computing (HPC) community is obsessed over the general *matrix*-matrix *multiply* (GEMM) routine. This obsession is not without reason. Most, if not…
(more)

Subjects/Keywords: MIC; Vectorization; Linear Algebra; Matrix Multiply; Cholesky; Computer and Systems Architecture; Computer Engineering; Numerical Analysis and Scientific Computing


APA (6^{th} Edition):

Peyton, J. L. (2013). Programming Dense Linear Algebra Kernels on Vectorized Architectures. (Thesis). University of Tennessee – Knoxville. Retrieved from https://trace.tennessee.edu/utk_gradthes/1666

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16^{th} Edition):

Peyton, Jonathan Lawrence. “Programming Dense Linear Algebra Kernels on Vectorized Architectures.” 2013. Thesis, University of Tennessee – Knoxville. Accessed April 11, 2021. https://trace.tennessee.edu/utk_gradthes/1666.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7^{th} Edition):

Peyton, Jonathan Lawrence. “Programming Dense Linear Algebra Kernels on Vectorized Architectures.” 2013. Web. 11 Apr 2021.

Vancouver:

Peyton JL. Programming Dense Linear Algebra Kernels on Vectorized Architectures. [Internet] [Thesis]. University of Tennessee – Knoxville; 2013. [cited 2021 Apr 11]. Available from: https://trace.tennessee.edu/utk_gradthes/1666.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Peyton JL. Programming Dense Linear Algebra Kernels on Vectorized Architectures. [Thesis]. University of Tennessee – Knoxville; 2013. Available from: https://trace.tennessee.edu/utk_gradthes/1666

Not specified: Masters Thesis or Doctoral Dissertation

25. Karakasis, Vasileios. Βελτιστοποίηση του υπολογιστικού πυρήνα πολλαπλασιασμού αραιού πίνακα με διάνυσμα σε σύγχρονες πολυπύρηνες αρχιτεκτονικές υπολογιστών [Optimization of the sparse matrix-vector multiplication computational kernel on modern multicore computer architectures].

Degree: 2012, National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ)

URL: http://hdl.handle.net/10442/hedi/34819

►

This thesis focuses on the optimization of the *Sparse* *Matrix*-*Vector* Multiplication kernel (SpMV) for modern multicore architectures. We perform an in-depth performance analysis of the…
(more)
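For context, the SpMV kernel these records optimize is, in its generic compressed-sparse-row (CSR) form, a short doubly nested loop whose indirect accesses make it memory-bound. A plain sketch of the standard formulation, not code from any listed thesis:

```python
# Generic CSR sparse matrix-vector multiply, y = A @ x: the memory-bound
# kernel studied throughout these theses. Plain-Python sketch for illustration.

def csr_spmv(row_ptr, col_idx, vals, x):
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += vals[k] * x[col_idx[k]]   # indirect load of x drives the bandwidth cost
        y[i] = s
    return y

# 3x3 example matrix [[2, 0, 1], [0, 3, 0], [0, 0, 4]] in CSR form
row_ptr = [0, 2, 3, 4]
col_idx = [0, 2, 1, 2]
vals    = [2.0, 1.0, 3.0, 4.0]
print(csr_spmv(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 4.0]
```

Most of the optimizations surveyed in these records (alternative formats, compression, blocking, partitioning) target exactly the irregular `x[col_idx[k]]` access pattern and the index-array traffic of this loop.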

Subjects/Keywords: Υπολογιστικά συστήματα υψηλών επιδόσεων; Επιστημονικές εφαρμογές; Πολλαπλασιασμός αραιού πίνακα με διάνυσμα; Πολυπύρηνες αρχιτεκτονικές; Συμπίεση δεδομένων; Ενεργειακή απόδοση; High performance computing; Scientific applications; Sparse matrix-vector multiplication; Multicore; Data compression; Energy-efficiency; SpMV; CSX; HPC


APA (6^{th} Edition):

Karakasis, V. (2012). Βελτιστοποίηση του υπολογιστικού πυρήνα πολλαπλασιασμού αραιού πίνακα με διάνυσμα σε σύγχρονες πολυπύρηνες αρχιτεκτονικές υπολογιστών. (Thesis). National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ). Retrieved from http://hdl.handle.net/10442/hedi/34819

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16^{th} Edition):

Karakasis, Vasileios. “Βελτιστοποίηση του υπολογιστικού πυρήνα πολλαπλασιασμού αραιού πίνακα με διάνυσμα σε σύγχρονες πολυπύρηνες αρχιτεκτονικές υπολογιστών.” 2012. Thesis, National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ). Accessed April 11, 2021. http://hdl.handle.net/10442/hedi/34819.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7^{th} Edition):

Karakasis, Vasileios. “Βελτιστοποίηση του υπολογιστικού πυρήνα πολλαπλασιασμού αραιού πίνακα με διάνυσμα σε σύγχρονες πολυπύρηνες αρχιτεκτονικές υπολογιστών.” 2012. Web. 11 Apr 2021.

Vancouver:

Karakasis V. Βελτιστοποίηση του υπολογιστικού πυρήνα πολλαπλασιασμού αραιού πίνακα με διάνυσμα σε σύγχρονες πολυπύρηνες αρχιτεκτονικές υπολογιστών. [Internet] [Thesis]. National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ); 2012. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/10442/hedi/34819.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Karakasis V. Βελτιστοποίηση του υπολογιστικού πυρήνα πολλαπλασιασμού αραιού πίνακα με διάνυσμα σε σύγχρονες πολυπύρηνες αρχιτεκτονικές υπολογιστών. [Thesis]. National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ); 2012. Available from: http://hdl.handle.net/10442/hedi/34819

Not specified: Masters Thesis or Doctoral Dissertation

University of Newcastle

26. Fitzpatrick, Chris. Firmwares for high-speed signal processing applications.

Degree: MPhil, 2016, University of Newcastle

URL: http://hdl.handle.net/1959.13/1312018

►

Masters Research - Master of Philosophy (MPhil)

*Matrix*-*vector* multiplication is widely used in science and engineering. With the constant increase in data throughput rates, computing…
(more)

Subjects/Keywords: VHDL; FPGA; matrix; vector; floating point; MAC


APA (6^{th} Edition):

Fitzpatrick, C. (2016). Firmwares for high-speed signal processing applications. (Masters Thesis). University of Newcastle. Retrieved from http://hdl.handle.net/1959.13/1312018

Chicago Manual of Style (16^{th} Edition):

Fitzpatrick, Chris. “Firmwares for high-speed signal processing applications.” 2016. Masters Thesis, University of Newcastle. Accessed April 11, 2021. http://hdl.handle.net/1959.13/1312018.

MLA Handbook (7^{th} Edition):

Fitzpatrick, Chris. “Firmwares for high-speed signal processing applications.” 2016. Web. 11 Apr 2021.

Vancouver:

Fitzpatrick C. Firmwares for high-speed signal processing applications. [Internet] [Masters thesis]. University of Newcastle; 2016. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/1959.13/1312018.

Council of Science Editors:

Fitzpatrick C. Firmwares for high-speed signal processing applications. [Masters Thesis]. University of Newcastle; 2016. Available from: http://hdl.handle.net/1959.13/1312018

Rice University

27.
Luo, Shangyu.
Adding *Vector* and *Matrix* Support to SimSQL.

Degree: MS, Engineering, 2016, Rice University

URL: http://hdl.handle.net/1911/96227

► In this thesis, I consider the problem of making linear algebra simple to use and efficient to run in a relational database management system. Relational…
(more)

Subjects/Keywords: Vector/Matrix; Linear Algebra; RDBMS; SQL


APA (6^{th} Edition):

Luo, S. (2016). Adding Vector and Matrix Support to SimSQL. (Masters Thesis). Rice University. Retrieved from http://hdl.handle.net/1911/96227

Chicago Manual of Style (16^{th} Edition):

Luo, Shangyu. “Adding Vector and Matrix Support to SimSQL.” 2016. Masters Thesis, Rice University. Accessed April 11, 2021. http://hdl.handle.net/1911/96227.

MLA Handbook (7^{th} Edition):

Luo, Shangyu. “Adding Vector and Matrix Support to SimSQL.” 2016. Web. 11 Apr 2021.

Vancouver:

Luo S. Adding Vector and Matrix Support to SimSQL. [Internet] [Masters thesis]. Rice University; 2016. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/1911/96227.

Council of Science Editors:

Luo S. Adding Vector and Matrix Support to SimSQL. [Masters Thesis]. Rice University; 2016. Available from: http://hdl.handle.net/1911/96227

Penn State University

28.
Kabir, Humayun.
HIERARCHICAL *SPARSE* GRAPH COMPUTATIONS ON MULTICORE PLATFORMS.

Degree: 2018, Penn State University

URL: https://submit-etda.libraries.psu.edu/catalog/15185hzk134

► Graph analysis is widely used to study connectivity, centrality, community and path analysis of social networks, biological networks, communication networks and any interacting objects that…
(more)

Subjects/Keywords: k-core; k-truss; multicore; sparse matrix; network analysis; graph analysis


APA (6^{th} Edition):

Kabir, H. (2018). HIERARCHICAL SPARSE GRAPH COMPUTATIONS ON MULTICORE PLATFORMS. (Thesis). Penn State University. Retrieved from https://submit-etda.libraries.psu.edu/catalog/15185hzk134

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16^{th} Edition):

Kabir, Humayun. “HIERARCHICAL SPARSE GRAPH COMPUTATIONS ON MULTICORE PLATFORMS.” 2018. Thesis, Penn State University. Accessed April 11, 2021. https://submit-etda.libraries.psu.edu/catalog/15185hzk134.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7^{th} Edition):

Kabir, Humayun. “HIERARCHICAL SPARSE GRAPH COMPUTATIONS ON MULTICORE PLATFORMS.” 2018. Web. 11 Apr 2021.

Vancouver:

Kabir H. HIERARCHICAL SPARSE GRAPH COMPUTATIONS ON MULTICORE PLATFORMS. [Internet] [Thesis]. Penn State University; 2018. [cited 2021 Apr 11]. Available from: https://submit-etda.libraries.psu.edu/catalog/15185hzk134.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Kabir H. HIERARCHICAL SPARSE GRAPH COMPUTATIONS ON MULTICORE PLATFORMS. [Thesis]. Penn State University; 2018. Available from: https://submit-etda.libraries.psu.edu/catalog/15185hzk134

Not specified: Masters Thesis or Doctoral Dissertation

29. bi, xiaofei. Compressed Sampling for High Frequency Receivers Applications.

Degree: Mathematics and Natural Sciences, 2011, University of Gävle

URL: http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-10877

► In digital signal processing field, for recovering the signal without distortion, Shannon sampling theory must be fulfilled in the traditional signal sampling. However, in…
(more)

Subjects/Keywords: Compressive Sampling (CS); sparse representation; measurement matrix; signal reconstruction.


APA (6^{th} Edition):

bi, x. (2011). Compressed Sampling for High Frequency Receivers Applications. (Thesis). University of Gävle. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-10877

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16^{th} Edition):

bi, xiaofei. “Compressed Sampling for High Frequency Receivers Applications.” 2011. Thesis, University of Gävle. Accessed April 11, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-10877.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7^{th} Edition):

bi, xiaofei. “Compressed Sampling for High Frequency Receivers Applications.” 2011. Web. 11 Apr 2021.

Vancouver:

bi x. Compressed Sampling for High Frequency Receivers Applications. [Internet] [Thesis]. University of Gävle; 2011. [cited 2021 Apr 11]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-10877.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

bi x. Compressed Sampling for High Frequency Receivers Applications. [Thesis]. University of Gävle; 2011. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-10877

Not specified: Masters Thesis or Doctoral Dissertation

30.
Yang, Tao.
The gap between necessity and sufficiency for stability of *sparse* *matrix* systems: simulation studies.

Degree: MS, Electrical & Computer Engr, 2015, University of Illinois – Urbana-Champaign

URL: http://hdl.handle.net/2142/78554

► *Sparse* *matrix* systems (SMSs) are potentially very useful for graph analysis and topological representations of interaction and communication among elements within a system. Such systems’…
(more)

Subjects/Keywords: Sparse Matrix Systems

…the Motivation and Algorithms. 3.1 Symmetric *Sparse* *Matrix* Systems: As suggested above, the… …necessary condition is not able to guarantee that the *sparse* *matrix* system is stable. However, for… …of zeros in this symmetric *sparse* *matrix* system is less than or equal to the planned number… …zeros in this *sparse* *matrix* system is odd, there also will be an odd number of diagonal… …configuration of symmetric *sparse* *matrix* system will fail. Second, if there is only one available…


APA (6^{th} Edition):

Yang, T. (2015). The gap between necessity and sufficiency for stability of sparse matrix systems: simulation studies. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/78554

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16^{th} Edition):

Yang, Tao. “The gap between necessity and sufficiency for stability of sparse matrix systems: simulation studies.” 2015. Thesis, University of Illinois – Urbana-Champaign. Accessed April 11, 2021. http://hdl.handle.net/2142/78554.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7^{th} Edition):

Yang, Tao. “The gap between necessity and sufficiency for stability of sparse matrix systems: simulation studies.” 2015. Web. 11 Apr 2021.

Vancouver:

Yang T. The gap between necessity and sufficiency for stability of sparse matrix systems: simulation studies. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2015. [cited 2021 Apr 11]. Available from: http://hdl.handle.net/2142/78554.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Yang T. The gap between necessity and sufficiency for stability of sparse matrix systems: simulation studies. [Thesis]. University of Illinois – Urbana-Champaign; 2015. Available from: http://hdl.handle.net/2142/78554

Not specified: Masters Thesis or Doctoral Dissertation