Numerical Quantum Circuit Learner

Module name: qmlt.numerical

Code author: Maria Schuld <maria@xanadu.ai>

This module contains a class to train models for machine learning and optimization based on variational quantum circuits. The optimization is executed by SciPy's numerical optimization library. The user defines a function that computes the outputs of the variational circuit, as well as the training objective, and specifies the model and training hyperparameters.

There are three basic functionalities: the circuit can be trained, run with the current circuit parameters, and scored.

The numerical learner module has been designed for training continuous-variable circuits written in Strawberry Fields or Blackbird (using any backend), but it is in principle able to train any user-provided model coded in Python.
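
As a quick orientation, a minimal end-to-end sketch for an ‘optimization’ task could look as follows. The toy circuit below is a plain Python stand-in rather than a real Strawberry Fields program, and the hyperparameter values are illustrative only:

    import numpy as np
    from qmlt.numerical import CircuitLearner

    # Toy stand-in for a variational circuit: for task='optimization' the
    # function receives only the list of circuit parameters.
    def circuit(params):
        return np.sin(params[0]) * np.cos(params[1])

    # Loss for task='optimization' receives only the circuit output.
    def myloss(circuit_output):
        return circuit_output

    hyperparams = {'circuit': circuit,
                   'init_circuit_params': 2,  # two parameters, default initialization
                   'task': 'optimization',
                   'loss': myloss,
                   'optimizer': 'SGD',
                   'init_learning_rate': 0.1}

    learner = CircuitLearner(hyperparams=hyperparams)
    learner.train_circuit(steps=50)

    outcomes = learner.run_circuit()
    print(outcomes['outputs'])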

Note

Numerical differentiation is not robust, which means that some models cannot be trained successfully. For example, the gradient approximations used by gradient-based methods may not be precise enough to find the direction of steepest descent in plateaus of the optimization landscape. This can sometimes be rectified by choosing good hyperparameters, but it ultimately limits the training of quantum circuits with numerical methods.
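
The following stand-alone snippet (pure Python, not part of the QMLT API) illustrates the effect: on a plateau the function values at x ± epsilon agree to machine precision, so a central finite-difference estimate returns exactly zero even though the true gradient does not vanish, and a gradient-based optimizer makes no progress:

    def fd_gradient(f, x, epsilon=1e-6):
        # Central finite-difference approximation of df/dx.
        return (f(x + epsilon) - f(x - epsilon)) / (2 * epsilon)

    def plateau(x):
        # Objective with a very flat region around x = 0.
        return 1.0 + x**9

    print(fd_gradient(plateau, 0.5))   # ~0.0352, close to the true derivative 9*0.5**8
    print(fd_gradient(plateau, 0.01))  # 0.0 -- the true derivative (9e-16) is lost to rounding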

CircuitLearner class

train_circuit([X, Y, steps, batch_size, seed]) Train the learner, optionally using input data X and target outputs Y.
run_circuit([X, outputs_to_predictions]) Get the outcomes when running the circuit with the current circuit parameters.
score_circuit([X, Y, outputs_to_predictions]) Score the circuit.
get_circuit_parameters([only_print]) Get the current circuit parameters of the learner.

Helper methods

check(hp) Checks if the hyperparameter dictionary has all required keys, and adds default settings for missing entries.
check_X(X) Checks if inputs have the right format.
check_Y(Y, X) Checks if targets have the right format.
check_steps(steps) Checks if steps argument has the right format.
check_batch_size(batch_size, X) Checks if batch_size argument has the right format.
check_logs(logs) Checks if logs argument has the right format.

Code details

class qmlt.numerical.CircuitLearner(hyperparams, model_dir='logsNUM/')[source]

Defines a circuit learner based on numerical differentiation. The core model is a variational quantum circuit provided by the user.

Parameters:
  • hyperparams (dict) –

    Dictionary of the following keys (a short construction sketch follows this parameter list):

    • circuit (python function): Function that computes the output of the variational circuit, with the following call signatures:
      • If task=’optimization’ use circuit(params)
      • If task=’unsupervised’ use circuit(params)
      • If task=’supervised’ use circuit(X, params)

      Here, params is a list of scalar circuit parameters, and X is an ndarray representing a batch of training inputs. The function itself can have any name. It can return a dictionary of variables to log as a second return value, where the key is the name displayed during logging and the value is the variable itself.

    • init_circuit_params (int or list of dictionaries): If int is given, it is interpreted as the number of circuit parameters that are generated with the default values. If list of dictionaries is given, each dictionary represents a circuit parameter and specifies the following optional keys:
      • ’init’ (float): Initial value of this parameter. Defaults to a value drawn from a normal distribution with mean 0 and variance 0.1.
      • ’name’ (string): Name given to this parameter. Defaults to a random string.
      • ’regul’ (boolean): Indicates whether this parameter is regularized. Defaults to False.
      • ’monitor’ (boolean): Indicates whether this parameter is monitored for visualisation. Defaults to False.
    • task (str): One of ‘optimization’, ‘unsupervised’ or ‘supervised’.
    • loss (python function): Loss function that outputs a scalar measuring the quality of a model. Default is a lambda function that returns zero. The function itself can have any name, but it must have one of the following call signatures:
      • If task=’optimization’, use myloss(circuit_output)
      • If task=’unsupervised’, use myloss(circuit_output, X)
      • If task=’supervised’, use myloss(circuit_output, targets)

      Here, circuit_output is the (first) return value of circuit(), X is a 2-d ndarray representing a batch of inputs, and targets are the corresponding target outputs.

    • optimizer (string): ‘SGD’ or the name of one of the following optimizers accepted by SciPy’s scipy.optimize.minimize() method: ‘Nelder-Mead’, ‘CG’, ‘BFGS’, ‘L-BFGS-B’, ‘TNC’, ‘COBYLA’, ‘SLSQP’. Defaults to ‘SGD’.
    • regularizer (function): Regularizer function of the form
      • myregularizer(regularized_params)

      that maps a 1-d list of the circuit parameters marked for regularization to a scalar. The function itself can have any name. Default is a lambda function that returns zero.

    • regularization_strength (float): Strength of regularization. Defaults to 0.
    • init_learning_rate (float): Initial learning rate used if optimizer=’SGD’. Defaults to 0.1.
    • decay (float): Decay the SGD learning rate to init_learning_rate/(1+decay*step), where step is the current training step. Defaults to 0 (no decay).
    • adaptive_learning_rate_threshold (float): If optimizer=’SGD’ and all gradients are smaller than this value, multiply the learning rate by a factor of 10. Defaults to 0.
    • print_log (boolean): If True, print information on settings and training progress. Defaults to True.
    • log_every (int): Log results to file every log_every training step. Defaults to 1.
    • warm_start (boolean): If True, load the initial parameters from the last stored model. Defaults to False.
    • epsilon (float): Step size of the finite-difference method used to estimate gradients for the SGD optimizer. Defaults to 1e-06.
    • plot (boolean): If True, plot default values and monitored circuit parameters every log_every steps. If False, do not plot. Defaults to False.
  • model_dir (str) – Relative path to directory in which the circuit parameters are saved. If the directory does not exist, it is created. Defaults to ‘logsNUM/’ in current working directory.
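
To illustrate the keys above for a ‘supervised’ task, a hyperparameter dictionary could be assembled as in the following sketch. The circuit body, the loss and the regularizer are user-defined placeholders, not functions provided by the module:

    import numpy as np
    from qmlt.numerical import CircuitLearner

    def circuit(X, params):
        # Placeholder for a variational circuit acting on a batch of inputs X.
        return np.array([np.sin(params[0] * x[0] + params[1]) for x in X])

    def myloss(circuit_output, targets):
        # Mean squared error between circuit outputs and targets.
        return np.mean((circuit_output - targets) ** 2)

    def myregularizer(regularized_params):
        # Maps the parameters marked with 'regul': True to a scalar penalty.
        return np.sum(np.array(regularized_params) ** 2)

    init_params = [{'init': 0.1, 'name': 'weight', 'regul': True, 'monitor': True},
                   {'init': 0.0, 'name': 'bias'}]

    hyperparams = {'circuit': circuit,
                   'init_circuit_params': init_params,
                   'task': 'supervised',
                   'loss': myloss,
                   'regularizer': myregularizer,
                   'regularization_strength': 0.05,
                   'optimizer': 'SGD',
                   'init_learning_rate': 0.1}

    learner = CircuitLearner(hyperparams=hyperparams, model_dir='logsNUM/')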
train_circuit(X=None, Y=None, steps=None, batch_size=None, seed=None)[source]

Train the learner, optionally using input data X and target outputs Y.

Parameters:
  • X (ndarray) – Array of inputs.
  • Y (ndarray) – Array of targets.
  • steps (int) – Maximum number of steps of the algorithm.
  • batch_size (int) – Number of training inputs that are subsampled and used in each training step. Must be smaller than the first dimension of X.
  • seed (float) – Seed for sampling batches of training data in each SGD step.
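
Continuing the supervised sketch above, a training call could look like this; the data and the values for steps, batch_size and seed are illustrative only:

    import numpy as np

    X_train = np.array([[0.2], [0.4], [0.6], [0.8]])
    Y_train = np.array([0., 0., 1., 1.])

    # Subsample 2 of the 4 training inputs in each step, with a fixed seed.
    learner.train_circuit(X=X_train, Y=Y_train, steps=100, batch_size=2, seed=42)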
run_circuit(X=None, outputs_to_predictions=None)[source]

Get the outcomes when running the circuit with the current circuit parameters.

Parameters:
  • X (ndarray) – Array of inputs.
  • outputs_to_predictions (function) – Function of the form outputs_to_predictions(outps) that takes a single output and maps it to a prediction that can be compared to the targets in order to compute the accuracy of a classification task. If None, run_circuit will return the outputs only.
Returns:

Dictionary of different outcomes. Always contains the key ‘outputs’, which is the (first) return value of the circuit function.

Return type:

Dictionary
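
For a classification task, a possible outputs_to_predictions function simply rounds each output to the nearest class label. Only the documented ‘outputs’ key is accessed below, since the key under which the predictions are returned is not specified here (continuing the sketch above):

    def outputs_to_predictions(outp):
        # Map a single continuous circuit output to a binary class label.
        return round(outp)

    outcomes = learner.run_circuit(X=X_train,
                                   outputs_to_predictions=outputs_to_predictions)
    print(outcomes['outputs'])  # the (first) return value of the circuit function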

score_circuit(X=None, Y=None, outputs_to_predictions=None)[source]

Score the circuit. For unsupervised and supervised learning, the score is computed with respect to some input data.

Parameters:
  • X (ndarray) – Array of inputs.
  • Y (ndarray) – Array of targets.
  • outputs_to_predictions (function) – Function of the form outputs_to_predictions(outps) that takes a single output and maps it to a prediction that can be compared to the targets in order to compute the accuracy of a classification task. If None, no ‘accuracy’ is added to score metrics.
Returns:

Dictionary with the score metrics ‘cost’, ‘loss’, ‘regularization’, ‘accuracy’ (if outputs_to_predictions is given), and the logs indicated by custom logging.

Return type:

Dictionary
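
Continuing the same sketch, a scoring call returns the metrics listed above:

    score = learner.score_circuit(X=X_train, Y=Y_train,
                                  outputs_to_predictions=outputs_to_predictions)
    print(score['cost'], score['loss'], score['regularization'])
    print(score['accuracy'])  # present because outputs_to_predictions was given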

get_circuit_parameters(only_print=False)[source]

Get the current circuit parameters of the learner.

Parameters: only_print (boolean) – If True, print the variables and return nothing. Defaults to False.
Returns: If only_print is False, a dictionary of variable names and values; else None.
Return type: None or Dictionary
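
For example, the trained parameters can be inspected under the names given in init_circuit_params (continuing the sketch above):

    params = learner.get_circuit_parameters()
    for name, value in params.items():
        print(name, value)

    learner.get_circuit_parameters(only_print=True)  # prints the parameters, returns None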
qmlt.numerical.check(hp)

Checks if the hyperparameter dictionary has all required keys, and adds default settings for missing entries.

The final hyperparameters are printed.

Parameters: hp (dict) – Dictionary of hyperparameters.
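
A small sketch of how check could be called on a partial hyperparameter dictionary; whether the defaults are filled into hp in place, as assumed here, is not stated above:

    from qmlt.numerical import check

    hp = {'circuit': lambda params: params[0] ** 2,
          'init_circuit_params': 1,
          'task': 'optimization'}

    # Assumed to add defaults (e.g. optimizer and learning rate) for the missing
    # keys and to print the final hyperparameter settings.
    check(hp)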
qmlt.numerical.check_X(X)

Checks if inputs have the right format.

qmlt.numerical.check_Y(Y, X)

Checks if targets have the right format.

qmlt.numerical.check_steps(steps)

Checks if steps argument has the right format.

qmlt.numerical.check_batch_size(batch_size, X)

Checks if batch_size argument has the right format.

qmlt.numerical.check_logs(logs)

Checks if logs argument has the right format.