Cornell & NTT’s Physical Neural Networks: A “Radical Alternative for Implementing Deep Neural Networks” That Enables the Training of Arbitrary Physical Systems

Deep neural networks (DNNs) already provide the best solutions for many complex problems in image recognition, speech recognition, and natural language processing. Now, DNNs are entering the physical arena. DNNs and physical processes share numerous structural similarities, such as hierarchy, approximate symmetries, redundancy, and nonlinearity, suggesting the potential for DNNs to operate effectively on data from the physical world.

In the paper Deep Physical Neural Networks Enabled by a Backpropagation Algorithm for Arbitrary Physical Systems, a research team from Cornell University and NTT Research proposes that the controlled evolutions of physical systems are well-suited to the realization of deep learning models, and introduces Physical Neural Networks (PNNs), a novel framework that leverages a backpropagation algorithm to train arbitrary, real physical systems to execute deep neural networks.

The principle behind backpropagation algorithms is to model mathematical operations by modifying input signal weights so that the system produces an expected output signal. Backpropagation computes the gradient of the error with respect to each weight, and gradient descent then uses these gradients to determine the parameter updates that improve model performance.
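To make the idea concrete, here is a minimal sketch of weight modification via gradient descent. The single linear model, data, and learning rate below are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Illustrative model: learn weights w so that x @ w matches a target signal.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))          # input signals
true_w = np.array([1.5, -2.0, 0.5])    # hypothetical "true" weights
y = x @ true_w                          # expected output signals

w = np.zeros(3)                         # trainable weights, initialized to zero
lr = 0.1                                # learning rate
for _ in range(200):
    y_pred = x @ w
    # Gradient of the mean squared error with respect to w
    grad = 2 * x.T @ (y_pred - y) / len(x)
    w -= lr * grad                      # gradient-descent update

print(np.round(w, 2))                   # converges toward true_w
```

Each iteration nudges the weights in the direction that most steeply reduces the error, which is the same update rule that PAT applies to the controllable parameters of a physical system.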

The proposed PNN framework is enabled by a Physics-Aware Training (PAT) approach based on a novel hybrid physical-digital algorithm that can execute the backpropagation algorithm efficiently and accurately on any sequence of physical input-output transformations. In essence, backpropagation is used to train sequences of real physical operations to perform desired functions.


The PAT training process comprises five steps:

  1. Training input data is fed into the physical system along with the controllable parameters.
  2. In a forward pass, the physical system applies its transformation to produce an output.
  3. The physical output is compared to the intended output to compute the error.
  4. Using a differentiable digital model to estimate the gradients of the physical system, the gradient of the loss is computed with respect to the controllable parameters.
  5. The parameters are updated according to the inferred gradient.

The process is repeated during training, iterating over training examples until the error falls below a pre-defined threshold.


The researchers evaluated PNNs’ generality using three diverse physical systems — optical, mechanical, and electrical.

In one experiment, the team tested a PNN that uses broadband optical second harmonic generation (SHG) with shaped femtosecond pulses. The PNN was tasked with classifying spoken vowels from 12-dimensional input vectors of formant frequencies extracted from audio recordings. The results showed that the proposed SHG-PNN is able to classify vowels with 93 percent accuracy.

On the MNIST handwritten digit classification task, the trainable SHG transformations boost the performance of digital operations from roughly 90 percent accuracy to 97 percent.

The team believes PNNs provide a basis for hardware-physics-software co-design in ML and have the potential to facilitate the development of novel ML hardware that is orders of magnitude faster and more energy-efficient than conventional electronic processors.

The paper Deep Physical Neural Networks Enabled by a Backpropagation Algorithm for Arbitrary Physical Systems is on arXiv.

Author: Hecate He | Editor: Michael Sarazen, Chain Zhang

We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.
