Quantum Machine Learning

This briefing document summarizes key concepts in Quantum Machine Learning (QML), drawing from lectures and associated materials. It covers:

  • 🧱 Fundamental building blocks of quantum computation
  • 🌀 Construction and parameterization of quantum circuits
  • 🏋️‍♂️ Methods for training QML models
  • 🛠️ Challenges of working with current quantum hardware

The lectures emphasize a systems-level approach, considering both algorithm design and hardware limitations.


🔑 Key Themes and Ideas

⚛️ Quantum Computing Fundamentals

  • Qubits: The basic unit of quantum information. Unlike a classical bit, which is either 0 or 1, a qubit can exist in a superposition of both states simultaneously.

    “…with only one qubit, we can represent many different states, not just 0 or 1, but all combinations between them.”

  • Quantum Gates: Operations that manipulate qubit states, analogous to classical logic gates but reversible. Examples:
    • Pauli Gates (X, Y, Z): π rotations about the corresponding axes of the Bloch sphere; X acts as a bit flip.
    • Hadamard Gate (H): Creates equal superposition states.
    • Controlled-NOT (CNOT) Gate: Creates entanglement between qubits.
  • Quantum Circuits: Sequences of quantum gates applied to qubits to perform a computation (see the sketch after this list).

    “…you can have a bunch of qubits and quantum gates to manipulate their states…”

  • Measurement: Extracts information from a qubit, collapsing its state to 0 or 1 with probabilities determined by its amplitudes.

    “…when we measure a qubit, it collapses to either 0 or 1.”
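
To make these building blocks concrete, here is a minimal NumPy sketch (an illustration, not code from the lectures) that applies a Hadamard and a CNOT to two qubits to create a Bell state, then samples measurements:

```python
import numpy as np

# Standard gate matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # flips qubit 1 when qubit 0 is 1

# Two qubits initialized to |00>.
state = np.zeros(4)
state[0] = 1.0

# Circuit: H on qubit 0, then CNOT -> Bell state (|00> + |11>)/sqrt(2).
state = CNOT @ np.kron(H, I) @ state

# Measurement is probabilistic: outcome probabilities are squared amplitudes.
probs = np.abs(state) ** 2
print(probs)                                                     # [0.5 0. 0. 0.5]
print(np.random.choice(["00", "01", "10", "11"], size=8, p=probs))
```

Only “00” and “11” ever appear in the samples: measuring one qubit fixes the other, which is exactly the entanglement the CNOT introduced.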


🤖 Current Quantum Computing Era (NISQ)

  • Noisy Intermediate-Scale Quantum (NISQ): The current era is characterized by devices with limited numbers of noisy qubits.

    “…we are still in an early stage with a lot of noise and limited qubits.”

  • Challenges:
    • Noise: Environmental disturbances lead to high error rates (e.g., 10⁻³ for single-qubit gates).
    • Limited Qubits: Not enough qubits for advanced algorithms.
    • Connectivity: Limited qubit connectivity forces two-qubit gates between distant qubits to be routed via SWAP operations, introducing overhead.
  • Focus on Error Correction: Logical qubits encode information redundantly across many physical qubits to tolerate errors.

    “…recent advancements focus on logical qubits leveraging error correction codes.”


🔄 Parameterized Quantum Circuits (PQCs)

  • Tunable Gates: Contain adjustable parameters (e.g., rotation angles); see the sketch after this list.
  • Expressivity: Measures how thoroughly a PQC can explore the Hilbert space.

    “The extent to which the states generated deviate from the uniform distribution.”

  • Entangling Capability: Measures qubit entanglement via the Meyer-Wallach measure, ranging from 0 (product states) to 1 (maximally entangled).
  • Hardware Efficiency: Assesses how well PQCs map to hardware.
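
As a minimal sketch of a PQC (my own illustration in plain PyTorch, not a quantum framework): a single qubit passes through one RY rotation whose angle is a trainable parameter, and the simulation is differentiable end to end.

```python
import torch

theta = torch.tensor(0.3, requires_grad=True)  # tunable gate parameter

def ry(angle):
    """RY rotation: a single-qubit gate parameterized by an angle."""
    c, s = torch.cos(angle / 2), torch.sin(angle / 2)
    return torch.stack([torch.stack([c, -s]),
                        torch.stack([s, c])])

state = ry(theta) @ torch.tensor([1.0, 0.0])   # apply RY(theta) to |0>
prob_one = state[1] ** 2                       # probability of measuring 1
prob_one.backward()                            # classical-simulator backprop
print(theta.grad)                              # d P(1) / d theta
```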

🧩 Data Encoding Techniques

  1. Basis Encoding: Encodes classical data into binary quantum states.

    “Similar to binary representation in classical machines: X = 2 → binary ‘10’ → quantum state |10⟩.”
  2. Amplitude Encoding: Maps values to quantum state amplitudes.

    “Numbers are encoded as the state vector of the qubits.”

  3. Angle Encoding: Maps values to rotation angles of quantum gates.

    “Encodes classical values using rotation gates.”

  4. Arbitrary Encoding: Freely combines the other encoding methods (see the sketch after this list).
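
A small NumPy sketch (my own, following the standard definitions) of the first three encodings for toy inputs:

```python
import numpy as np

# Basis encoding: X = 2 -> binary "10" -> basis state |10>,
# i.e. index 2 of a 4-dimensional (2-qubit) state vector.
basis_state = np.zeros(4)
basis_state[2] = 1.0

# Amplitude encoding: normalize the data so it is a valid state vector;
# 4 values fit in the amplitudes of just 2 qubits.
x = np.array([0.5, 1.5, 2.0, 1.0])
amp_state = x / np.linalg.norm(x)
print(np.sum(amp_state ** 2))  # 1.0: a valid quantum state

# Angle encoding: each value becomes the rotation angle of an RY gate
# on its own qubit (one qubit per feature).
def ry(angle):
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

qubit_states = [ry(xi) @ np.array([1.0, 0.0]) for xi in x]
print(qubit_states[0])  # first qubit, rotated by angle x[0]
```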

🏋️‍♂️ Training Quantum Models

  • Gradient Descent: Optimizes PQC parameters.
  • Parameter Shift Rule: Computes exact gradients on real hardware by evaluating the circuit at shifted parameter values (see the first sketch after this list).

    “Shift θ twice to calculate gradients.”

  • SPSA (Simultaneous Perturbation Stochastic Approximation): Approximates the full gradient vector from only two circuit evaluations by randomly perturbing all parameters at once, speeding up training (see the second sketch after this list).

    “…perturb all parameters together to obtain the gradient vector.”

  • Backpropagation: Used on classical simulators of quantum systems.

    “Only usable on classical simulators via differentiable linear algebra.”
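
For gates generated by a Pauli operator (such as RX, RY, RZ), the parameter shift rule is exact: ∂f/∂θ = [f(θ + π/2) − f(θ − π/2)] / 2, where f is the measured expectation value. A minimal NumPy sketch (my own illustration) checks it against the analytic derivative:

```python
import numpy as np

def ry(angle):
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def f(theta):
    """Expectation of Z after RY(theta)|0>; analytically this is cos(theta)."""
    state = ry(theta) @ np.array([1.0, 0.0])
    return state[0] ** 2 - state[1] ** 2      # <Z> = P(0) - P(1)

theta = 0.7
# "Shift theta twice": run the circuit at theta + pi/2 and theta - pi/2.
grad_shift = (f(theta + np.pi / 2) - f(theta - np.pi / 2)) / 2
print(grad_shift, -np.sin(theta))             # identical: both are d cos/d theta
```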
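
SPSA needs only two circuit evaluations per step no matter how many parameters the PQC has, because all parameters are perturbed together along one random ±1 direction. A toy sketch (my own, with a stand-in loss instead of a real circuit):

```python
import numpy as np

def loss(params):
    return np.sum(np.sin(params) ** 2)   # stand-in for a measured circuit loss

rng = np.random.default_rng(0)
params = np.array([0.4, 1.2, -0.8])
eps = 0.1

# Perturb ALL parameters at once with a random +/-1 direction.
delta = rng.choice([-1.0, 1.0], size=params.shape)
g_hat = (loss(params + eps * delta) - loss(params - eps * delta)) / (2 * eps) * delta

print(g_hat)               # SPSA estimate from just 2 evaluations
print(np.sin(2 * params))  # true gradient, for comparison
```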


🧗 Challenges in QML Training

  • Barren Plateaus: Gradients vanish as circuit size grows.

    “…with more qubits, gradient variance decreases exponentially.”

  • Noise Impact: Leads to unreliable gradients and reduced accuracy.

🎯 Noise-Aware Training Techniques:

  1. Probabilistic Gradient Pruning: Skips evaluating gradients that are likely noise-dominated, accelerating training (see the sketch after this list).

    “Small gradients have large relative errors. Use pruning windows.”

  2. On-Chip Training: Trains directly on quantum devices to avoid simulation limits.
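
The exact windowing scheme comes from the on-chip training work; the sketch below (my own loose illustration, not the paper’s algorithm) only shows the core idea: parameters whose accumulated gradient magnitude is small are noise-dominated, so their gradient evaluations are skipped during a pruning window.

```python
import numpy as np

def prune_mask(accumulated_magnitudes, prune_ratio=0.5):
    """Keep only the largest gradients; small ones have large relative
    error under shot/hardware noise, so skip evaluating them this window."""
    k = int(len(accumulated_magnitudes) * prune_ratio)
    threshold = np.sort(accumulated_magnitudes)[k]
    return accumulated_magnitudes >= threshold

rng = np.random.default_rng(0)
accumulated = np.abs(rng.normal(size=8))  # stand-in: |grad| accumulated per parameter
print(prune_mask(accumulated))            # False = pruned during this window
```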

⚡ TorchQuantum (TQ) Library

  • PyTorch-based: QML models are built and trained as native PyTorch modules (see the example after this list).
  • Features:
    • Various data encoders and quantum gates.
    • Conversion to frameworks like Qiskit.
    • GPU acceleration.
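
A minimal usage sketch, based on the interface shown in TorchQuantum’s README (exact names may differ across versions):

```python
import torchquantum as tq

# A 2-qubit device; gate calls are PyTorch ops under the hood, so they
# support autograd and can run on GPU.
qdev = tq.QuantumDevice(n_wires=2)

qdev.h(wires=0)                        # Hadamard on qubit 0
qdev.cnot(wires=[0, 1])                # entangle the two qubits

# A rotation gate whose angle is a trainable PyTorch parameter.
rx = tq.RX(has_params=True, trainable=True)
rx(qdev, wires=0)

print(tq.measure(qdev, n_shots=1024))  # sample measurement outcomes
```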

🔍 Quantum Architecture Search (QNAS)

  • Challenge: Finding optimal quantum circuits for tasks.
  • SuperCircuit Approach: Trains one SuperCircuit that contains every design choice; candidate subcircuits inherit its trained parameters for fast evaluation.

    “SuperCircuit = circuit with the most gates in the design space.”

  • Noise-Aware Evolutionary Search: Evolves candidate subcircuits of the SuperCircuit while accounting for hardware noise (see the toy sketch after this list).
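
A toy sketch of the search loop (my own illustration with made-up gate names and a placeholder fitness, not QuantumNAS itself): subcircuits are subsets of the SuperCircuit’s gates, and an evolutionary loop keeps and mutates the fittest candidates, penalizing noisy two-qubit gates.

```python
import random

random.seed(0)

# Hypothetical design space: each SuperCircuit gate may be kept or dropped.
supercircuit = ["ry0", "rz0", "cnot01", "ry1", "cnot10", "rz1"]

def fitness(sub):
    """Placeholder for evaluating a subcircuit (with inherited parameters)
    on noisy hardware: reward gates, penalize noisy two-qubit gates."""
    return len(sub) - 1.5 * sum(g.startswith("cnot") for g in sub)

def mutate(sub):
    """Flip one gate's inclusion, preserving the SuperCircuit's gate order."""
    keep = set(sub) ^ {random.choice(supercircuit)}
    return [g for g in supercircuit if g in keep]

population = [mutate(supercircuit) for _ in range(16)]
for _ in range(20):  # evolutionary search over subcircuits
    population.sort(key=fitness, reverse=True)
    population = population[:4] + [mutate(random.choice(population[:4]))
                                   for _ in range(12)]
print(population[0])  # best subcircuit under the toy fitness
```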

🎯 Key Takeaways

  • Quantum computing provides unique opportunities for machine learning but poses significant challenges.
  • Current hardware limitations (noise, limited qubit counts) demand error correction and noise-aware training techniques.
  • Parameterized quantum circuits (PQCs) allow flexible QML model development.
  • Efficient data encoding is essential for leveraging quantum systems.
  • Rapid advancements are underway in hardware, algorithms, and software tools to improve QML.