In this study, we present a novel approach to enable high-throughput characterization of transition-metal dichalcogenides (TMDs) across various layer numbers, including mono-, bi-, tri-, tetra-, and multilayers, using a generative deep learning-based image-to-image translation method. Graphical features, including contrast, color, shapes, flake sizes, and their distributions, were extracted through color-based segmentation of optical images, complemented by Raman and photoluminescence spectra of chemical vapor deposition-grown and mechanically exfoliated TMDs. Labeled images for identifying and characterizing TMDs were generated using the pix2pix conditional generative adversarial network (cGAN), trained on only a limited data set. Furthermore, our model demonstrated versatility by successfully characterizing TMD heterostructures, showing adaptability across diverse material compositions.

Impact statement

Deep learning has been used to identify and characterize transition-metal dichalcogenides (TMDs). Although studies leveraging convolutional neural networks have shown promise in analyzing the optical, physical, and electronic properties of TMDs, they require extensive data sets and generalize poorly when trained on smaller ones. This work introduces a transformative approach, a generative deep learning (DL)-based image-to-image translation method, for high-throughput TMD characterization. Our method, employing a DL-based pix2pix cGAN, overcomes these limitations by offering insights into the graphical features, layer numbers, and distributions of TMDs, even with limited data sets. Notably, we demonstrate the scalability of our model through the successful characterization of different heterostructures, showcasing its adaptability across diverse material compositions.
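To make the color-based segmentation step concrete, the sketch below shows one common way such feature extraction can be set up: per-thickness color windows isolate flakes in an optical micrograph, and connected-component statistics yield flake counts and size distributions. The file name, HSV thresholds, and layer classes are hypothetical placeholders, not values from this work; real thresholds depend on the substrate, oxide thickness, and illumination.

```python
import cv2
import numpy as np

# Load an optical micrograph of TMD flakes on a substrate.
# "flake.png" is an illustrative placeholder.
image = cv2.imread("flake.png")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Hypothetical per-thickness HSV windows: optical contrast shifts with
# layer number, so each thickness class gets its own color range.
layer_ranges = {
    "monolayer": ((100, 40, 120), (130, 120, 200)),
    "bilayer":   ((100, 60, 90),  (130, 150, 160)),
}

for label, (lo, hi) in layer_ranges.items():
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    # Connected components give flake counts and pixel areas,
    # from which size distributions can be accumulated.
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    areas = stats[1:, cv2.CC_STAT_AREA]  # skip the background component
    print(label, "flakes:", n - 1,
          "mean area (px):", areas.mean() if n > 1 else 0)
```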
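For the pix2pix translation itself, a minimal sketch of one generator update is given below, assuming a PyTorch setup in which `G` is a U-Net generator and `D` a conditional PatchGAN discriminator defined elsewhere. The variable names, optimizer, and L1 weight follow the original pix2pix formulation (Isola et al.), not implementation details reported in this paper.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
lambda_l1 = 100.0  # L1 weight from the original pix2pix paper

def generator_step(G, D, optical, labeled, opt_G):
    """One generator update: optical micrograph -> labeled layer map."""
    fake = G(optical)
    # As in pix2pix, the discriminator sees the (input, output) pair,
    # concatenated along the channel dimension.
    pred = D(torch.cat([optical, fake], dim=1))
    # Adversarial term pushes outputs toward the real-label distribution;
    # the L1 term keeps the translation pixel-wise faithful to the target.
    loss = bce(pred, torch.ones_like(pred)) + lambda_l1 * l1(fake, labeled)
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss.item()
```

The L1 term is what lets this family of models learn a usable mapping from a limited data set: it anchors every output pixel to the paired ground-truth label map, so the adversarial loss only has to refine texture and boundaries rather than learn the full mapping alone.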