This paper addresses the distributed optimal coordination problem for a network of heterogeneous agents. To the best of our knowledge, few works in the literature offer control designs that require no information about the agents' dynamics. Moreover, those works restrict agents to relative degree one, whereas this paper extends the results to networks that mix agents of different relative degrees. The main goal is for all agents' outputs to converge globally to the minimizer of the global cost function. To this end, we introduce a novel distributed two-layer control policy. The top layer, identical for all agents, searches for the minimizer and generates reference signals for the bottom layer. The bottom layer provides each agent with an adaptive controller that tracks its associated reference signal. The local cost functions are assumed to be strictly convex with smooth gradients. The proposed control policy is fully distributed: each agent relies only on its own information and that of its neighbors to reach consensus on the global minimizer. Numerical simulations validate the theoretical results.
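The top layer described above, a distributed search for the minimizer of the sum of local costs, can be sketched with a standard consensus-based scheme. The sketch below uses gradient tracking over a doubly stochastic weight matrix; the agent count, ring topology, step size, and quadratic local costs f_i(x) = 0.5*(x - a_i)^2 are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a gradient-tracking "top layer": each agent keeps an
# estimate x[i] of the global minimizer and a tracker y[i] of the average
# gradient, exchanging values only with neighbors (nonzero W[i][j]).
# All problem data below are assumptions for illustration.

def gradient_tracking(a, W, alpha=0.05, iters=1000):
    """Drive all agents' estimates to the minimizer of sum_i f_i,
    where f_i(x) = 0.5*(x - a[i])**2, so grad f_i(x) = x - a[i]."""
    n = len(a)
    x = [0.0] * n
    grad = [xi - ai for xi, ai in zip(x, a)]
    y = grad[:]                      # initialize y_i(0) = grad f_i(x_i(0))
    for _ in range(iters):
        # Consensus step on estimates, corrected by the gradient tracker.
        x_new = [sum(W[i][j] * x[j] for j in range(n)) - alpha * y[i]
                 for i in range(n)]
        grad_new = [xi - ai for xi, ai in zip(x_new, a)]
        # Consensus step on trackers plus local gradient innovation.
        y = [sum(W[i][j] * y[j] for j in range(n)) + grad_new[i] - grad[i]
             for i in range(n)]
        x, grad = x_new, grad_new
    return x

# Four agents on a ring graph; Metropolis weights yield a doubly
# stochastic, symmetric W.
W = [[1/3, 1/3, 0.0, 1/3],
     [1/3, 1/3, 1/3, 0.0],
     [0.0, 1/3, 1/3, 1/3],
     [1/3, 0.0, 1/3, 1/3]]
a = [1.0, 2.0, 3.0, 6.0]   # global minimizer of the summed costs: mean(a) = 3
estimates = gradient_tracking(a, W)
print(estimates)           # all entries close to 3.0
```

In a two-layer architecture like the paper's, each `estimates[i]` would serve as the reference signal handed to agent i's bottom-layer tracking controller.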