Building end-to-end speech synthesisers for Indian languages is challenging, given the lack of adequate clean training data and the multiple grapheme representations used across languages. This work explores the importance of training multilingual and multi-speaker text-to-speech (TTS) systems based on language families. Such a study is crucial not only from a low-resource perspective but also for reducing the overall consumption of computing power. TTS systems are trained separately for the Indo-Aryan and Dravidian language families, and their performance is compared to that of a combined Indo-Aryan+Dravidian voice. We also investigate the amount of training data required per language in a multilingual setting, to determine whether less data suffices without compromising the quality and intelligibility of the synthesised speech. The resulting voices are easily extended to new languages with limited data. Same-family and cross-family synthesis, as well as adaptation to new languages, are analysed. The analyses show that family-wise training of Indic TTS systems is the way forward for the Indian subcontinent, where a large number of languages are spoken.