tensor-on-tensor regression: riemannian optimization, over-parameterization, statistical-computational gap and their interplay



  1. The tensor-on-tensor regression problem we examine aims to relate tensor responses to tensor covariates through a parameter tensor/matrix of low Tucker rank, without knowing the intrinsic rank in advance (the assumed model is written out below).
    To handle the unknown rank, we propose the Riemannian Gradient Descent (RGD) and Riemannian Gauss-Newton (RGN) methods and study the effect of rank over-parameterization. We establish the first convergence guarantee for general tensor-on-tensor regression by showing that RGD and RGN converge linearly and quadratically, respectively, to a statistically optimal estimate in both the correctly-parameterized and over-parameterized settings. Our theory also shows that Riemannian optimization methods adapt to over-parameterization automatically, without any change to their implementation; a minimal numerical sketch of such an iteration follows after this comment.
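    For concreteness, one standard way to write the model described above is given next. The notation (X_j for covariate tensors, Y_j for response tensors, A for the low Tucker rank parameter, E_j for noise) is assumed for this note rather than copied from the paper.

    \[
      \mathcal{Y}_j \;=\; \langle \mathcal{X}_j,\, \mathcal{A} \rangle + \mathcal{E}_j,
      \qquad j = 1, \dots, n,
      \qquad \mathrm{rank}_{\mathrm{Tucker}}(\mathcal{A}) \le (r_1, \dots, r_d),
    \]

    where the inner product contracts \(\mathcal{A}\) along the covariate modes, so each entry of \(\mathcal{Y}_j\) is a linear functional of \(\mathcal{X}_j\).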

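Below is a minimal, self-contained numerical sketch (not the authors' code) of the idea in the comment: a gradient iteration with a truncated-HOSVD retraction, run at an over-parameterized Tucker rank on synthetic data. All names, dimensions, the zero initialization, and the step-size choice are illustrative assumptions; in particular, the paper's RGD additionally projects the gradient onto the tangent space of the Tucker manifold before retracting, a step omitted here for brevity.

import numpy as np

# Sketch setup (illustrative sizes): n samples, matrix covariates X_j in R^{q1 x q2},
# vector responses y_j in R^p, parameter tensor A in R^{q1 x q2 x p} of Tucker rank
# (r_true, r_true, r_true); the iteration is run at the larger rank r_over.
rng = np.random.default_rng(0)
n, q1, q2, p = 400, 10, 10, 8
r_true, r_over = 2, 4  # r_over > r_true: rank over-parameterized

def tucker_truncate(T, ranks):
    """Truncated HOSVD: compress T to (at most) the given multilinear ranks."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):          # project each mode onto its subspace
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    out = core
    for mode, U in enumerate(factors):          # map the core back to the full space
        out = np.moveaxis(np.tensordot(U, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out

# Ground-truth low Tucker rank parameter and synthetic tensor-on-tensor data.
A_true = tucker_truncate(rng.standard_normal((q1, q2, p)), (r_true,) * 3)
X = rng.standard_normal((n, q1, q2))
Y = np.tensordot(X, A_true, axes=([1, 2], [0, 1])) + 0.01 * rng.standard_normal((n, p))

# Gradient iteration with an HOSVD retraction at the over-parameterized rank.
# (The paper's RGD also projects the gradient onto the tangent space of the
# Tucker manifold before retracting; that step is omitted in this sketch.)
L = np.linalg.norm(X.reshape(n, -1), ord=2) ** 2   # Lipschitz constant of the gradient
step = 1.0 / L
A_hat = np.zeros((q1, q2, p))                      # simple zero initialization
for _ in range(200):
    resid = np.tensordot(X, A_hat, axes=([1, 2], [0, 1])) - Y   # n x p residuals
    grad = np.tensordot(X, resid, axes=([0], [0]))              # gradient in R^{q1 x q2 x p}
    A_hat = tucker_truncate(A_hat - step * grad, (r_over,) * 3)

print("relative estimation error:", np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))

Running the sketch with r_over equal to r_true or to any larger value illustrates the adaptation to over-parameterization described in the comment: the iteration itself is unchanged, only the truncation rank differs.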
