Dense 3D reconstruction from RGB images traditionally assumes static camera pose estimates. This assumption has endured even as recent works have increasingly focused on real-time methods for mobile devices. However, the assumption of one pose per image does not hold for online execution: poses from real-time SLAM are dynamic and may be updated following events such as bundle adjustment and loop closure. This has been addressed in the RGB-D setting, by de-integrating past views and re-integrating them with updated poses, but it remains largely untreated in the RGB-only setting. We formalize this problem to define the new task of online reconstruction from dynamically-posed images. To support further research, we introduce a dataset called LivePose containing the dynamic poses produced by a SLAM system running on ScanNet. We select three recent reconstruction systems and apply a framework based on de-integration to adapt each one to the dynamic-pose setting. In addition, we propose a novel, non-linear de-integration module that learns to remove stale scene content. We show that responding to pose updates is essential for high-quality reconstruction, and that our de-integration framework is an effective solution.
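The de-integration idea above can be illustrated with a minimal sketch. This is not the paper's method: it models fusion as a classic per-voxel weighted running average (as in TSDF fusion), where a view's contribution can be subtracted exactly when its pose is updated, then re-added under the new pose. The `FusionVolume` class and `render_view` helper are hypothetical stand-ins for illustration only.

```python
import numpy as np

class FusionVolume:
    """Hypothetical per-voxel weighted-average volume.

    Integration is a weighted sum, so any view's contribution can be
    de-integrated (subtracted) exactly and re-integrated with an
    updated pose when SLAM revises it.
    """
    def __init__(self, shape):
        self.value_sum = np.zeros(shape)   # sum of weight * value
        self.weight_sum = np.zeros(shape)  # sum of weights

    def integrate(self, values, weights):
        self.value_sum += weights * values
        self.weight_sum += weights

    def deintegrate(self, values, weights):
        # Exact inverse of integrate: removes a stale contribution.
        self.value_sum -= weights * values
        self.weight_sum -= weights

    def fused(self):
        # Weighted average; guard against empty voxels.
        w = np.maximum(self.weight_sum, 1e-8)
        return self.value_sum / w

def render_view(pose_offset, shape):
    # Stand-in for projecting a view into the volume under a given
    # pose; here just a synthetic pose-dependent field.
    x = np.arange(shape[0], dtype=float) + pose_offset
    return np.broadcast_to(x[:, None, None], shape)

# Usage: integrate a view under a stale pose, then respond to a
# SLAM pose update (e.g. after loop closure) by de-integrating the
# stale contribution and re-integrating under the updated pose.
shape = (4, 4, 4)
vol = FusionVolume(shape)
w = np.ones(shape)

stale = render_view(pose_offset=0.0, shape=shape)
vol.integrate(stale, w)

updated = render_view(pose_offset=0.5, shape=shape)
vol.deintegrate(stale, w)
vol.integrate(updated, w)

assert np.allclose(vol.fused(), updated)
```

In a linear fusion scheme like this, de-integration is exact; the abstract's learned, non-linear de-integration module addresses systems whose feature accumulation is not invertible in this simple way.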