Industry strives to build trustworthy AI systems and, to that end, to recognize and implement Responsible AI principles. It is therefore important to find solutions that help put trustworthy AI systems into practice. Standardization activities are a key element in this regard, as they provide a platform for industry discussion and help work out practical rules and requirements. This article first provides an overview of the bias issue in Artificial Intelligence and presents the concept of Responsible AI. Second, we present examples of notable ongoing work by international standardization organizations on bias in AI systems. We then identify a gap between the principles defined by high-level legal texts on Responsible AI, including the European AI Act proposal, and practical implementations of these principles. Finally, we discuss how international standardization may fill this gap, and the distance still to be traveled to do so.