KOUNADES-BASTIAN et al.: VARIATIONAL EM ALGORITHM FOR THE SEPARATION OF TIME-VARYING CONVOLUTIVE AUDIO MIXTURES, p. 1409

… is reminiscent of pioneering works such as (9). This allows one to drastically reduce the number of model parameters and to alleviate the source permutation problem.

This paper addresses the problem of separating audio sources from time-varying convolutive mixtures. We propose a probabilistic framework based on the local complex-Gaussian model combined with non-negative matrix factorization (NMF).

The sound sources are separated by means of Wiener filters, built from the estimators provided by the proposed VEM algorithm. Preliminary experiments with simulated data show that, while for static sources we obtain results comparable with the baseline method of Ozerov et al., in the case of moving sources our method outperforms a piece-wise version of the baseline method.
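To illustrate the Wiener-filter step, here is a minimal single-channel sketch in the STFT domain. The per-bin source PSDs `v1` and `v2` are synthetic stand-ins for the estimates a VEM algorithm would provide, and the bin sizes are made up for the example; this is not the paper's full multichannel filter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-bin power spectral densities (PSDs) of two sources,
# standing in for VEM estimates; here they are simply synthetic.
F, N = 4, 6                      # frequency bins x time frames (arbitrary)
v1 = rng.uniform(0.5, 2.0, (F, N))
v2 = rng.uniform(0.5, 2.0, (F, N))

# Mixture STFT: under the local complex-Gaussian model each TF bin is a
# sum of independent zero-mean complex Gaussians with variances v1, v2.
s1 = np.sqrt(v1 / 2) * (rng.standard_normal((F, N)) + 1j * rng.standard_normal((F, N)))
s2 = np.sqrt(v2 / 2) * (rng.standard_normal((F, N)) + 1j * rng.standard_normal((F, N)))
x = s1 + s2

# Wiener filter: the posterior mean of source 1 given the mixture scales
# each bin by the ratio of its PSD to the total PSD.
g1 = v1 / (v1 + v2)
s1_hat = g1 * x
# The complementary gain (1 - g1) recovers the estimate of source 2,
# so the two estimates sum back exactly to the mixture.
```

In the multichannel case of the paper, the scalar gain becomes a matrix built from the mixing filters and source PSDs, but the principle is the same bin-wise posterior mean.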

Instead of the original variational Bayes EM algorithm (VBEM) for solving this model, we propose a new variational EM algorithm (VEM). Compared with the VBEM algorithm, the proposed VEM algorithm reduces the number of parameters to be fitted, which lowers the computational complexity and simplifies implementation in practice.

EM and variational inference are not the same. In EM, you maximize the likelihood or posterior with respect to the parameters, with the hidden variables marginalized out. In VI, the parameters are also regarded as hidden variables, and you approximate the posterior over all hidden variables with a variational distribution.
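To make the EM side of that distinction concrete, here is a minimal sketch of EM for a two-component 1-D Gaussian mixture (unit variances assumed for brevity; the data and initialization are made up). The E-step computes the exact posterior of the hidden labels, which is how the latent variables get marginalized, and the M-step maximizes the expected complete-data log-likelihood with respect to the point-estimated parameters. A VB treatment would instead place priors on `mu` and `pi` and approximate their posteriors too.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two well-separated 1-D Gaussian clusters with unit variance.
x = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])

mu = np.array([-1.0, 1.0])   # component means: point estimates, not latents
pi = np.array([0.5, 0.5])    # mixing weights

for _ in range(50):
    # E-step: exact posterior responsibility of each component for each
    # point (the hidden labels are marginalized, not sampled or fixed).
    log_p = np.log(pi) - 0.5 * (x[:, None] - mu) ** 2
    r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)

    # M-step: maximize the expected complete-data log-likelihood
    # w.r.t. the parameters mu and pi.
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    pi = nk / len(x)
# mu converges near the true means (-3, 3); pi near (0.5, 0.5).
```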

I rarely leave answers on Quora these days, but the answer that Salfo Bikienga gave is quite misleading. The EM algorithm can be used for complex models with many latent variables. The difference lies mainly in that the EM algorithm is a gener…

Variational EM is useful when the conditional distributions of the latent variables can't be written down explicitly, so exact EM would require numerical integration or Monte Carlo sampling of some kind. Variational Bayes additionally gives you an approximate posterior over your parameters, essentially treating everything as a latent variable in the usual Bayesian way.
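A small sketch of what the variational approximation buys: a toy posterior over two coupled binary latents (the biases and coupling strength are made up) whose exact marginals happen to be computable by enumeration, approximated by a factorized mean-field distribution via coordinate ascent, which is the same kind of update a variational E-step performs when the exact posterior is intractable.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy posterior over binary latents z1, z2 in {0, 1}:
#   p(z1, z2) ∝ exp(b1*z1 + b2*z2 + J*z1*z2)
# The coupling J makes the posterior non-factorized, which is exactly
# when mean-field replaces the exact E-step. Parameters are arbitrary.
b1, b2, J = 0.5, -0.3, 1.2

# Exact marginal of z1 by enumeration (feasible only for this tiny toy).
states = [(z1, z2) for z1 in (0, 1) for z2 in (0, 1)]
w = np.array([np.exp(b1 * z1 + b2 * z2 + J * z1 * z2) for z1, z2 in states])
w /= w.sum()
exact_q1 = sum(p for (z1, _), p in zip(states, w) if z1 == 1)

# Mean-field approximation q(z1)q(z2): each factor is updated against the
# other's current *expectation* rather than its actual value.
q1, q2 = 0.5, 0.5
for _ in range(100):
    q1 = sigmoid(b1 + J * q2)
    q2 = sigmoid(b2 + J * q1)
# q1 ends up close to exact_q1, at a fraction of the cost of enumeration.
```

The same coordinate-ascent pattern scales to models where enumeration (or integration) over the latents is impossible, which is the situation the paragraph above describes.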