Automatic video understanding is becoming increasingly important for applications where real-time performance is crucial and compute is limited. Yet accurate solutions to date have been computationally intensive. We propose Tiny Video Networks: efficient video architectures that are automatically designed to meet tight runtime budgets while remaining effective at video recognition tasks. Tiny Video Networks run at faster-than-real-time speeds and demonstrate strong performance across several video benchmarks. These models not only provide new tools for real-time video applications, but also enable fast research and development in video understanding. Code and models are available.