Residential demand response (DR) programs have been validated as a viable approach to improving energy efficiency and the reliability of electric power distribution. However, various technical and organizational challenges hinder their full techno-economic potential. In practice, these challenges stem from the small-scale, distributed, heterogeneous, and stochastic nature of residential DR resources. This article investigates state-of-the-art online and reinforcement learning methods capable of overcoming these challenges in the context of DR pricing, scheduling, and cybersecurity.