A General Derivative Identity for the Conditional Mean Estimator in Gaussian Noise and Some Applications

5 April 2021
Alex Dytso
H. Vincent Poor
S. Shamai
arXiv:2104.01883 (abs · PDF · HTML)
Abstract

Consider a channel ${\bf Y}={\bf X}+{\bf N}$, where ${\bf X}$ is an $n$-dimensional random vector and ${\bf N}$ is a Gaussian vector with covariance matrix ${\bf \mathsf{K}}_{\bf N}$. The object under consideration in this paper is the conditional mean of ${\bf X}$ given ${\bf Y}={\bf y}$, that is, the map ${\bf y} \to E[{\bf X}\,|\,{\bf Y}={\bf y}]$. Several identities in the literature connect $E[{\bf X}\,|\,{\bf Y}={\bf y}]$ to other quantities such as the conditional variance, score functions, and higher-order conditional moments. The objective of this paper is to provide a unifying view of these identities. In the first part of the paper, a general derivative identity for the conditional mean is derived. Specifically, for the Markov chain ${\bf U} \leftrightarrow {\bf X} \leftrightarrow {\bf Y}$, it is shown that the Jacobian of $E[{\bf U}\,|\,{\bf Y}={\bf y}]$ is given by ${\bf \mathsf{K}}_{\bf N}^{-1}\,{\bf Cov}({\bf X},{\bf U}\,|\,{\bf Y}={\bf y})$. In the second part of the paper, via various choices of ${\bf U}$, the new identity is used to generalize many of the known identities and derive some new ones. First, a simple proof of the Hatsell and Nolte identity for the conditional variance is given. Second, a simple proof of the recursive identity due to Jaffer is provided. Third, a new connection between the conditional cumulants and the conditional expectation is shown. In particular, in the scalar case, it is shown that the $k$-th derivative of $E[X\,|\,Y=y]$ is the $(k+1)$-th conditional cumulant. The third part of the paper considers some applications. In the first application, the power series and the compositional inverse of $E[X\,|\,Y=y]$ are derived. In the second application, the distribution of the estimator error $(X-E[X\,|\,Y])$ is derived. In the third application, consistent estimators (empirical Bayes estimators) of the conditional cumulants are constructed from an i.i.d. sequence $Y_1,\ldots,Y_n$.
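In the scalar case with ${\bf U}={\bf X}$, the derivative identity above reduces to the Hatsell–Nolte relation $\frac{d}{dy}E[X\,|\,Y=y]=\mathrm{Var}(X\,|\,Y=y)/\sigma_N^2$. The following minimal sketch (not from the paper) checks this numerically for a two-point prior on $X$; the prior, noise level, and evaluation point are illustrative assumptions.

```python
# Numerical check of the scalar derivative identity
#   d/dy E[X | Y = y] = Var(X | Y = y) / sigma_N^2
# for Y = X + N, N ~ N(0, sigma_N^2), with an assumed two-point prior on X.
import numpy as np

sigma_n = 0.7                  # noise standard deviation (assumed value)
xs = np.array([-1.0, 1.0])     # support of a symmetric two-point prior on X
px = np.array([0.5, 0.5])      # prior probabilities

def gauss(y, x):
    """Gaussian likelihood p(y | x) for additive noise N(0, sigma_n^2)."""
    return np.exp(-(y - x) ** 2 / (2 * sigma_n ** 2)) / (np.sqrt(2 * np.pi) * sigma_n)

def posterior_moments(y):
    """Return E[X | Y = y] and Var(X | Y = y) under the discrete prior."""
    w = px * gauss(y, xs)
    w = w / w.sum()
    mean = np.sum(w * xs)
    var = np.sum(w * (xs - mean) ** 2)
    return mean, var

y0, h = 0.3, 1e-5                                  # evaluation point (assumed) and step
m_plus, _ = posterior_moments(y0 + h)
m_minus, _ = posterior_moments(y0 - h)
deriv_fd = (m_plus - m_minus) / (2 * h)            # finite-difference left-hand side
_, var0 = posterior_moments(y0)
deriv_identity = var0 / sigma_n ** 2               # right-hand side of the identity

print(f"finite-difference d/dy E[X|Y=y]: {deriv_fd:.6f}")
print(f"Var(X|Y=y)/sigma_N^2           : {deriv_identity:.6f}")
```

The two printed values should agree to within the finite-difference error, illustrating the ${\bf U}={\bf X}$ specialization of the general Jacobian identity.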
