26. Sparse Matrix Operations
When a matrix is mostly zeros (say, >90% sparsity), storing it as a dense tensor wastes enormous memory. Deepbox's CSRMatrix class uses the Compressed Sparse Row (CSR) format: it stores only the non-zero values, their column indices, and one pointer per row marking where that row's entries begin.
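As a sketch of what CSR storage looks like (plain TypeScript, not the Deepbox API; the array names here are illustrative), encoding the 4x4 example matrix used below yields three short arrays:

```typescript
// Hand-rolled CSR encoding — illustrates what a CSR structure holds internally.
const dense = [
  [1, 0, 0, 2],
  [0, 3, 0, 0],
  [0, 0, 4, 0],
  [5, 0, 0, 6],
];

const values: number[] = [];     // non-zero entries, row by row
const colIndices: number[] = []; // column of each stored value
const rowPtr: number[] = [0];    // rowPtr[i]..rowPtr[i+1] spans row i's values

for (const row of dense) {
  row.forEach((v, j) => {
    if (v !== 0) {
      values.push(v);
      colIndices.push(j);
    }
  });
  rowPtr.push(values.length);
}

console.log(values);     // [1, 2, 3, 4, 5, 6]
console.log(colIndices); // [0, 3, 1, 2, 0, 3]
console.log(rowPtr);     // [0, 2, 3, 4, 6]
```

Note that `values` and `colIndices` match the `values` and `colIndices` arguments passed to `fromCOO()` in the example below; only the row coordinates are compressed into `rowPtr`.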
Deepbox Modules Used
deepbox/ndarray

What You Will Learn
- CSR format stores only non-zero values — huge memory savings for sparse data
- CSRMatrix.fromCOO() creates from coordinate triplets (row, col, value)
- matvec() multiplies sparse matrix by dense vector efficiently
- toDense() converts back when you need the full matrix
- Use for adjacency matrices, document-term matrices, and feature matrices with many zeros
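To make the memory savings concrete, here is a back-of-envelope comparison (assuming float64 values and int32 indices, the same element types the `fromCOO()` example below uses):

```typescript
// Dense vs. CSR storage for a 10,000 x 10,000 matrix at 1% density.
const rows = 10_000;
const cols = 10_000;
const nnz = 1_000_000; // 1% of entries are non-zero

const denseBytes = rows * cols * 8;                 // every entry, 8 bytes each
const csrBytes = nnz * 8 + nnz * 4 + (rows + 1) * 4; // values + col indices + row pointers

console.log(`dense: ${(denseBytes / 1e6).toFixed(0)} MB`); // dense: 800 MB
console.log(`CSR:   ${(csrBytes / 1e6).toFixed(0)} MB`);   // CSR:   12 MB
```

At 1% density CSR needs roughly 1.5% of the dense footprint; the advantage shrinks as density grows, and past roughly one-third non-zeros the index overhead makes dense storage cheaper.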
Source Code
26-sparse-matrices/index.ts
```ts
import { CSRMatrix, tensor } from "deepbox/ndarray";

const expectFloat64Array = (value: unknown): Float64Array => {
  if (!(value instanceof Float64Array)) {
    throw new Error("Expected Float64Array");
  }
  return value;
};

console.log("=== Sparse Matrix Operations ===\n");

// Create a sparse matrix using COO (Coordinate) format
console.log("1. Creating sparse matrices");
// Matrix:
// [1, 0, 0, 2]
// [0, 3, 0, 0]
// [0, 0, 4, 0]
// [5, 0, 0, 6]
const sparse = CSRMatrix.fromCOO({
  rows: 4,
  cols: 4,
  rowIndices: new Int32Array([0, 0, 1, 2, 3, 3]),
  colIndices: new Int32Array([0, 3, 1, 2, 0, 3]),
  values: new Float64Array([1, 2, 3, 4, 5, 6]),
});
console.log("   Created 4x4 sparse matrix with 6 non-zero elements");
console.log(`   Sparsity: ${((1 - sparse.nnz / (sparse.rows * sparse.cols)) * 100).toFixed(1)}%`);

// Element access
console.log("\n2. Element access");
console.log(`   Element at (0,0): ${sparse.get(0, 0)}`);
console.log(`   Element at (0,3): ${sparse.get(0, 3)}`);
console.log(`   Element at (1,1): ${sparse.get(1, 1)}`);

// Arithmetic operations
console.log("\n3. Sparse matrix addition");
// Matrix:
// [0, 1, 0, 0]
// [2, 0, 0, 0]
// [0, 0, 0, 3]
// [0, 4, 0, 0]
const sparse2 = CSRMatrix.fromCOO({
  rows: 4,
  cols: 4,
  rowIndices: new Int32Array([0, 1, 2, 3]),
  colIndices: new Int32Array([1, 0, 3, 1]),
  values: new Float64Array([1, 2, 3, 4]),
});
const sum = sparse.add(sparse2);
console.log(`   Result has ${sum.nnz} non-zero elements`);

console.log("\n4. Scalar multiplication");
const scaled = sparse.scale(2);
console.log(`   Scaled matrix has ${scaled.nnz} non-zero elements`);
console.log(`   Element at (0,0): ${scaled.get(0, 0)} (was ${sparse.get(0, 0)})`);

console.log("\n5. Element-wise multiplication (Hadamard product)");
const product = sparse.multiply(sparse2);
console.log(`   Product has ${product.nnz} non-zero elements`);

// Matrix-vector multiplication
console.log("\n6. Matrix-vector multiplication");
const vec = tensor([1, 2, 3, 4]);
const result = sparse.matvec(vec);
const resultData = expectFloat64Array(result.data);
console.log(`   Result: [${Array.from(resultData).join(", ")}]`);

// Matrix-matrix multiplication
console.log("\n7. Matrix-matrix multiplication");
// Matrix B (4x2):
// [1, 0]
// [0, 1]
// [1, 1]
// [0, 1]
const B = tensor([
  [1, 0],
  [0, 1],
  [1, 1],
  [0, 1],
]);
const matmul = sparse.matmul(B);
console.log(`   Result shape: [${matmul.shape.join(", ")}]`);
console.log(`   Result is a dense tensor`);

// Transpose
console.log("\n8. Transpose");
const transposed = sparse.transpose();
console.log(`   Original: ${sparse.rows}x${sparse.cols}`);
console.log(`   Transposed: ${transposed.rows}x${transposed.cols}`);
console.log(`   Non-zero elements preserved: ${transposed.nnz}`);

// Convert back to dense
console.log("\n9. Convert to dense");
const densified = sparse.toDense();
console.log("   Converted back to dense tensor");
console.log(`   Shape: [${densified.shape.join(", ")}]`);

console.log("\n=== Benefits of Sparse Matrices ===");
console.log("- Memory efficient for matrices with many zeros");
console.log("- Faster operations when sparsity is high");
console.log("- Common in scientific computing, ML, and graph algorithms");
```

Console Output
```
$ npx tsx 26-sparse-matrices/index.ts
=== Sparse Matrix Operations ===

1. Creating sparse matrices
   Created 4x4 sparse matrix with 6 non-zero elements
   Sparsity: 62.5%

2. Element access
   Element at (0,0): 1
   Element at (0,3): 2
   Element at (1,1): 3

3. Sparse matrix addition
   Result has 10 non-zero elements

4. Scalar multiplication
   Scaled matrix has 6 non-zero elements
   Element at (0,0): 2 (was 1)

5. Element-wise multiplication (Hadamard product)
   Product has 0 non-zero elements

6. Matrix-vector multiplication
   Result: [9, 6, 12, 29]

7. Matrix-matrix multiplication
   Result shape: [4, 2]
   Result is a dense tensor

8. Transpose
   Original: 4x4
   Transposed: 4x4
   Non-zero elements preserved: 6

9. Convert to dense
   Converted back to dense tensor
   Shape: [4, 4]

=== Benefits of Sparse Matrices ===
- Memory efficient for matrices with many zeros
- Faster operations when sparsity is high
- Common in scientific computing, ML, and graph algorithms
```
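The speed of `matvec()` on sparse data comes from visiting only the stored entries. A minimal sketch of CSR matrix-vector multiplication over raw arrays (not the Deepbox implementation; `csrMatvec` and its parameters are illustrative) reproduces the step-6 result:

```typescript
// CSR matvec: for each row, accumulate stored values times matching x entries.
function csrMatvec(
  values: number[],
  colIndices: number[],
  rowPtr: number[],
  x: number[],
): number[] {
  const y: number[] = new Array(rowPtr.length - 1).fill(0);
  for (let i = 0; i < y.length; i++) {
    // Only row i's non-zeros are touched: O(nnz) total, not O(rows * cols).
    for (let k = rowPtr[i]; k < rowPtr[i + 1]; k++) {
      y[i] += values[k] * x[colIndices[k]];
    }
  }
  return y;
}

// The same 4x4 matrix and vector as in the example:
console.log(csrMatvec([1, 2, 3, 4, 5, 6], [0, 3, 1, 2, 0, 3], [0, 2, 3, 4, 6], [1, 2, 3, 4]));
// [9, 6, 12, 29] — matches the matvec() output above
```

The inner loop never reads a zero, which is why sparse matvec cost scales with the number of non-zeros rather than with the full matrix size.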