Abstract: The Expectation Maximization (EM) algorithm is of key importance for inference in latent variable models, including mixtures of regressors and experts and models with missing observations. This paper introduces a novel EM algorithm, called SPIDER-EM, for inference from a training set of size $n$, $n \gg 1$. At the core of our algorithm is an estimator of the full conditional expectation in the E-step, adapted from the stochastic path-integrated differential estimator (SPIDER) technique. We derive finite-time complexity bounds for smooth non-convex likelihoods: we show that for convergence to an $\epsilon$-approximate stationary point, the complexity scales as $\mathsf{K}_{\mathrm{Opt}}(n,\epsilon) = \mathcal{O}(\epsilon^{-1})$ and $\mathsf{K}_{\mathrm{CE}}(n,\epsilon) = n + \sqrt{n}\,\mathcal{O}(\epsilon^{-1})$, where $\mathsf{K}_{\mathrm{Opt}}(n,\epsilon)$ and $\mathsf{K}_{\mathrm{CE}}(n,\epsilon)$ are, respectively, the number of M-steps and the number of per-sample conditional expectation evaluations. This improves over the state-of-the-art algorithms. Numerical results support our findings.
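
For intuition, the following is a minimal sketch of a SPIDER-type variance-reduced E-step, obtained by applying the standard SPIDER recursion to the running statistic of the E-step; the notation ($\widehat{S}_k$ for the statistic estimate, $\bar{s}_i$ for the per-sample conditional expectation, $\mathcal{B}_{k+1}$ for a mini-batch of size $b$, $\mathrm{T}$ for the M-step map) is illustrative and may differ from the paper's:
\[
\widehat{S}_{0} \;=\; \frac{1}{n}\sum_{i=1}^{n} \bar{s}_i\big(\widehat{\theta}_{0}\big)
\qquad \text{(full-pass refresh at the start of each epoch),}
\]
\[
\widehat{S}_{k+1} \;=\; \widehat{S}_{k} \;+\; \frac{1}{b}\sum_{i\in\mathcal{B}_{k+1}} \Big( \bar{s}_i\big(\widehat{\theta}_{k}\big) - \bar{s}_i\big(\widehat{\theta}_{k-1}\big) \Big)
\qquad \text{(mini-batch difference update),}
\]
followed by the M-step $\widehat{\theta}_{k+1} = \mathrm{T}\big(\widehat{S}_{k+1}\big)$. Only the differences of per-sample conditional expectations are recomputed within an epoch, which is what yields the $n + \sqrt{n}\,\mathcal{O}(\epsilon^{-1})$ scaling of $\mathsf{K}_{\mathrm{CE}}$.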