Numerical Tensors

Working with high-performance numerical tensor calculations in iTensor for data processing and computational science.

Introduction to Numerical Tensors

Numerical tensors in iTensor are designed for high-performance computation with concrete numeric data. Unlike symbolic tensors, which work with abstract mathematical expressions, numerical tensors operate on actual numbers, making them ideal for data processing, scientific computing, and machine learning applications.

iTensor's numerical tensor module is built on NumPy, providing:

  • Fast array-based computations optimized for performance
  • Support for large datasets and complex numerical operations
  • Efficient memory management for handling large tensors
  • Hardware acceleration capabilities
  • Interoperability with other scientific computing libraries

Numerical tensors excel at processing concrete data and performing complex calculations that require floating-point precision rather than symbolic manipulation.

Basic Structure of Numerical Tensors

In iTensor, numerical tensors are represented as multi-dimensional arrays of concrete numeric values. These arrays have a defined shape and data type that determine their structure and memory requirements.

Components of a Numerical Tensor

Shape

The dimensions of the tensor, represented as an array of integers. For example, a scalar has shape [], a 3-element vector has shape [3], a 2×2 matrix has shape [2, 2], and so on.

Values

The actual numeric data stored in the tensor. These can be integers, floating-point numbers, complex numbers, or boolean values. All values in a tensor typically share the same data type.

Data Type

The type of values stored in the tensor, such as float32, float64, int32, or complex64. The data type affects precision, memory usage, and computation speed.

Name (Optional)

A label for the tensor that can be useful for tracking different tensors in complex calculations.
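
Because the numerical module is built on NumPy, these components map directly onto NumPy array attributes. A minimal illustration in plain NumPy (not the iTensor API; NumPy arrays have no built-in name, so the label is just a variable):

import numpy as np

# A 2x2 matrix of 64-bit floats
A = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float64)

print(A.shape)  # (2, 2)   -- the Shape component
print(A.dtype)  # float64  -- the Data Type component
print(A)        # the Values; the variable name "A" stands in for Name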

Creating Numerical Tensors

iTensor provides several methods to create numerical tensors for your calculations. Here are the main approaches:

Method 1: Direct Value Specification

{ "operation": "create", "tensor": { "name": "A", "shape": [2, 2], "values": [ [1.0, 2.0], [3.0, 4.0] ], "dtype": "float64" } }

In this example, we create a 2×2 matrix with explicit numeric values at each position. The "dtype" parameter specifies that these values should be stored as 64-bit floating-point numbers.

Method 2: Creating Special Tensors

{ "operation": "zeros", "tensor": { "name": "Z", "shape": [3, 3], "dtype": "float32" } }

This creates a 3×3 tensor filled with zeros. Other special creation operations include (NumPy equivalents are sketched after this list):

  • ones - Create a tensor filled with ones
  • identity - Create an identity matrix
  • random - Create a tensor with random values
  • arange - Create a tensor with a sequence of values
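
For reference, here is a sketch of the corresponding NumPy constructors (plain NumPy, not the iTensor JSON API; iTensor's exact options for each operation may differ):

import numpy as np

Z = np.zeros((3, 3), dtype=np.float32)        # "zeros"
O = np.ones((2, 2))                           # "ones"
I = np.eye(3)                                 # "identity": 3x3 identity matrix
R = np.random.default_rng(0).random((2, 2))   # "random": uniform values in [0, 1)
S = np.arange(0, 10, 2)                       # "arange": [0, 2, 4, 6, 8]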

Method 3: Loading from Data

{ "operation": "load", "source": { "type": "csv", "path": "data/measurements.csv", "options": { "delimiter": ",", "skipHeader": true } }, "tensor": { "name": "D", "dtype": "float32" } }

You can load numerical tensors directly from data sources such as CSV files, JSON, HDF5, or other formats. This is particularly useful for real-world data processing tasks.
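
For the CSV case, the underlying NumPy call looks roughly like this (a sketch assuming a purely numeric file with one header row, mirroring the options above):

import numpy as np

# Load a numeric CSV, skipping the single header row
D = np.loadtxt("data/measurements.csv", delimiter=",", skiprows=1, dtype=np.float32)
print(D.shape, D.dtype)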

Method 4: Converting from Symbolic Tensors

{ "operation": "evaluate", "symbolic_tensor": { "name": "S", "shape": [2, 2], "values": [["a", "b"], ["c", "d"]] }, "values": { "a": 1.0, "b": 2.0, "c": 3.0, "d": 4.0 }, "result_tensor": { "name": "N", "dtype": "float64" } }

If you have a symbolic tensor and want to convert it to a numerical tensor by substituting values for the symbols, you can use the "evaluate" operation.

Operations with Numerical Tensors

iTensor supports a comprehensive set of operations for numerical tensors. Here are some of the most common:

Arithmetic Operations

Basic arithmetic operations like addition, subtraction, multiplication, and division.

{ "operation": "add", "operands": [ { "name": "A", "shape": [2, 2], "values": [[1.0, 2.0], [3.0, 4.0]] }, { "name": "B", "shape": [2, 2], "values": [[5.0, 6.0], [7.0, 8.0]] } ] }

Matrix Operations

Matrix-specific operations like matrix multiplication, inverse, determinant, etc.

{ "operation": "matmul", "operands": [ { "name": "A", "shape": [2, 3], "values": [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]] }, { "name": "B", "shape": [3, 2], "values": [[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]] } ] }

Tensor Contraction

Contracting indices of tensors, generalizing matrix multiplication to higher dimensions.

{ "operation": "contract", "operands": [ { "name": "T1", "shape": [2, 3, 4] }, { "name": "T2", "shape": [4, 3, 5] } ], "subscripts": "ijk,kjl->il" }

Reduction Operations

Operations that reduce dimensions, like sum, mean, max, min, etc.

{ "operation": "sum", "tensor": { "name": "A", "shape": [3, 3], "values": [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]] }, "axis": 0 }

Elementwise Operations

Operations that apply to each element independently.

{ "operation": "apply", "function": "exp", "tensor": { "name": "A", "shape": [2, 2], "values": [[0.0, 1.0], [2.0, 3.0]] } }

Reshaping Operations

Operations that change the shape of tensors.

{ "operation": "reshape", "tensor": { "name": "A", "shape": [2, 3], "values": [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]] }, "new_shape": [3, 2] }

Performance Considerations

When working with numerical tensors, especially large ones, performance becomes a critical consideration. Here are some tips for optimizing your tensor operations:

Choose the Right Data Type

Using lower precision data types (like float32 instead of float64) can significantly improve performance and reduce memory usage, but at the cost of precision. Choose the appropriate data type based on your application's requirements.
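
The memory saving is easy to quantify, since halving the precision halves the bytes per element:

import numpy as np

x64 = np.zeros((1000, 1000), dtype=np.float64)
x32 = np.zeros((1000, 1000), dtype=np.float32)
print(x64.nbytes)  # 8000000 bytes (8 MB)
print(x32.nbytes)  # 4000000 bytes (4 MB)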

Vectorize Operations

Whenever possible, use vectorized operations instead of explicit loops. iTensor's operations are optimized for array-based calculations, and vectorized code typically runs much faster.
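
A small NumPy illustration of the difference (the loop runs element by element in the Python interpreter; the vectorized form runs in optimized native code):

import numpy as np

x = np.random.default_rng(0).random(1_000_000)

# Slow: an explicit Python loop
total = 0.0
for v in x:
    total += v * v

# Fast: one vectorized expression computing the same sum of squares
total_vec = np.dot(x, x)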

Use In-place Operations

When applicable, use in-place operations to modify tensors instead of creating new ones. This can save memory and reduce overhead from allocating new arrays.

{ "operation": "add_inplace", "target": { "name": "A", "shape": [2, 2] }, "operand": { "name": "B", "shape": [2, 2] } }

Consider Hardware Acceleration

iTensor can leverage hardware acceleration like CUDA for GPUs or MKL for CPUs. For large-scale computations, using hardware acceleration can provide dramatic performance improvements.

Integration with Symbolic Tensors

One of iTensor's powerful features is the ability to integrate numerical and symbolic calculations. Here are some ways you can combine both approaches:

Evaluating Symbolic Expressions

You can define relationships symbolically and then evaluate them with numerical values:

// First define the symbolic relationship
{
  "operation": "create",
  "tensor": {
    "name": "S",
    "shape": [2, 2],
    "values": [["sin(theta)", "cos(theta)"], ["-cos(theta)", "sin(theta)"]]
  }
}

// Then evaluate with numerical values
{
  "operation": "evaluate",
  "symbolic_tensor": "S",
  "values": { "theta": 0.5 },
  "result_tensor": { "name": "R", "dtype": "float64" }
}
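
iTensor's symbolic backend is not named here, but the same substitute-then-evaluate pattern can be sketched in plain Python with SymPy and NumPy (illustrative only, not the iTensor API):

import numpy as np
import sympy as sp

theta = sp.Symbol("theta")
S = sp.Matrix([[sp.sin(theta), sp.cos(theta)],
               [-sp.cos(theta), sp.sin(theta)]])

# Compile the symbolic matrix into a NumPy-backed function, then evaluate it
evaluate_S = sp.lambdify(theta, S, "numpy")
R = evaluate_S(0.5)  # concrete 2x2 float array
print(R)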

Optimizing Functions

You can symbolically differentiate a function and then use numerical optimization on the result:

// Define the function symbolically
{
  "operation": "create",
  "function": {
    "name": "f",
    "variables": ["x", "y"],
    "expression": "x^2 + y^2"
  }
}

// Compute the gradient symbolically
{
  "operation": "gradient",
  "function": "f",
  "result_function": "grad_f"
}

// Use both in a numerical optimization
{
  "operation": "minimize",
  "function": "f",
  "gradient": "grad_f",
  "initial_point": [1.0, 1.0],
  "method": "gradient_descent"
}
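
The numerical half of that pipeline is ordinary gradient descent. A minimal sketch for f(x, y) = x^2 + y^2, whose symbolically derived gradient is (2x, 2y) (plain Python, not the iTensor API):

import numpy as np

def f(p):
    return p[0] ** 2 + p[1] ** 2

def grad_f(p):
    # Gradient of x^2 + y^2: (2x, 2y)
    return np.array([2.0 * p[0], 2.0 * p[1]])

p = np.array([1.0, 1.0])  # initial_point
lr = 0.1                  # step size
for _ in range(100):
    p = p - lr * grad_f(p)

print(p, f(p))  # converges toward the minimum at (0, 0)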

Hybrid Approaches

Some problems are best solved using a hybrid approach, where parts of the computation are handled symbolically and others numerically:

  • Use symbolic tensors to derive equations and simplify expressions
  • Convert to numerical tensors for efficient concrete calculations
  • Switch back to symbolic representation for further analysis

Example Applications

Numerical tensors are widely used in various domains. Here are some common applications:

Machine Learning

Numerical tensors are the foundation of machine learning models, especially deep learning, where weight matrices and feature vectors are represented as tensors.

Scientific Computing

In physics, engineering, and other scientific fields, tensors represent physical quantities and their transformations.

Image Processing

Images can be represented as tensors (height × width × channels), and operations like convolutions are naturally expressed as tensor operations.

Signal Processing

Signals and their transformations (like Fourier transforms) can be efficiently computed using tensor operations.

Data Analysis

Multi-dimensional data analysis often requires tensor operations to identify patterns and relationships.

Robotics and Computer Vision

Spatial transformations, sensor data processing, and object detection all rely on efficient tensor operations.