Commenting on the document is possible without registration, but to edit you need to:

1. Register on Authorea: https://www.authorea.com/
2. Join the DiversityNet group: https://www.authorea.com/inst/18886
3. Come back here

Code: https://github.com/startcrowd/DiversityNet
Blog post: https://medium.com/the-ai-lab/diversitynet-a-collaborative-benchmark-for-generative-ai-models-in-chemistry-f1b9cc669cba
Telegram chat: https://t.me/joinchat/Go4mTw0drJBrCdal0JWu1g

Generative AI models in chemistry are increasingly popular in the research community. They have applications in drug discovery and in organic materials (solar cells, semiconductors). Their goal is to generate virtual molecules with desired chemical properties (more details in this blog post). However, this flourishing literature still lacks a unified benchmark. Such a benchmark would provide a common framework for evaluating and comparing different generative models. It would also make it possible to formulate best practices for the emerging industry of 'AI molecule generators': how much training data is needed, how long a model should be trained, and so on.

That is what the DiversityNet benchmark is about. DiversityNet continues the tradition of data science benchmarks, following the MoleculeNet benchmark (Stanford) for predictive models in chemistry and the ImageNet challenge (Stanford) in computer vision.
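To make the idea of a common evaluation framework concrete, here is a minimal sketch of how a benchmark might score a batch of generated molecules, assuming RDKit is available. The metric choices (validity, uniqueness) and the function name are illustrative assumptions, not the DiversityNet specification.

```python
# Illustrative sketch only: simple benchmark-style metrics for generated molecules.
# Assumes RDKit is installed; the metrics and names are hypothetical, not the
# DiversityNet specification.
from rdkit import Chem


def basic_generation_metrics(smiles_list):
    """Return the fraction of valid SMILES and the fraction of unique molecules."""
    canonical = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)  # returns None when the SMILES is invalid
        if mol is not None:
            canonical.append(Chem.MolToSmiles(mol))  # canonical form for deduplication
    validity = len(canonical) / len(smiles_list) if smiles_list else 0.0
    uniqueness = len(set(canonical)) / len(canonical) if canonical else 0.0
    return {"validity": validity, "uniqueness": uniqueness}


if __name__ == "__main__":
    generated = ["CCO", "c1ccccc1", "CCO", "not_a_smiles"]
    print(basic_generation_metrics(generated))
```

A real benchmark would of course standardize the datasets, the number of sampled molecules, and a richer set of property and diversity metrics, which is exactly the kind of agreement DiversityNet aims to establish.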