In this letter, we investigate dynamic resource selection in dense deployments of the recently proposed 6G mobile in-X subnetworks (inXSs). We cast resource selection in inXSs as a multi-objective optimization problem involving maximization of the per-inXS sum capacity. Since inXSs are expected to be autonomous, selection decisions are made by each inXS based on its local information, without signalling from other inXSs. A multi-agent Q-learning (MAQL) method based on limited sensing information (SI) is then developed, resulting in a significant reduction in the overhead associated with intra-subnetwork SI exchange. We perform simulations focusing on two similar but distinct resource allocation problems: joint channel and transmit power selection, and channel selection with aggregation. The results indicate that: 1) appropriate settings of the Q-learning parameters lead to fast convergence of the MAQL method even with 1-bit quantization of the SI; 2) the proposed MAQL approach offers performance similar to that of the best baseline heuristic with full SI while being more robust to sensing delays.
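To make the core idea concrete, the sketch below illustrates independent multi-agent Q-learning for channel selection where each agent observes only 1-bit quantized SI per channel (interference above or below a threshold). The agent count, channel count, Q-learning parameters, interference model, and all helper names are illustrative assumptions for exposition, not the letter's actual simulation setup.

```python
# Minimal sketch: MAQL channel selection with 1-bit quantized sensing
# information (SI). All values and names below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_CHANNELS = 4, 3          # inXSs and shared channels (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.6, 0.05   # Q-learning parameters (assumed)
NOISE, SI_THRESHOLD = 1e-3, 0.5      # noise power and 1-bit SI threshold

# One Q-table per agent: rows index the 1-bit SI pattern across channels
# (2**N_CHANNELS local states), columns index the channel to select.
Q = np.zeros((N_AGENTS, 2 ** N_CHANNELS, N_CHANNELS))

def sense(choices, gains):
    """Per-agent 1-bit SI: is the aggregate interference on each channel
    above SI_THRESHOLD? The bit pattern is encoded as a state index."""
    states = np.zeros(N_AGENTS, dtype=int)
    for i in range(N_AGENTS):
        bits = 0
        for c in range(N_CHANNELS):
            interf = sum(gains[j, i] for j in range(N_AGENTS)
                         if j != i and choices[j] == c)
            bits = (bits << 1) | int(interf > SI_THRESHOLD)
        states[i] = bits
    return states

def step(choices, gains):
    """Per-agent reward: Shannon capacity on the selected channel."""
    rewards = np.zeros(N_AGENTS)
    for i in range(N_AGENTS):
        interf = sum(gains[j, i] for j in range(N_AGENTS)
                     if j != i and choices[j] == choices[i])
        rewards[i] = np.log2(1 + gains[i, i] / (interf + NOISE))
    return rewards

gains = rng.exponential(0.3, (N_AGENTS, N_AGENTS))  # cross-link gains
np.fill_diagonal(gains, 1.0)                         # desired-link gains
choices = rng.integers(N_CHANNELS, size=N_AGENTS)
states = sense(choices, gains)

for t in range(5000):
    # Epsilon-greedy action selection from each agent's local Q-table;
    # no information is exchanged between agents.
    explore = rng.random(N_AGENTS) < EPS
    greedy = Q[np.arange(N_AGENTS), states].argmax(axis=1)
    choices = np.where(explore,
                       rng.integers(N_CHANNELS, size=N_AGENTS), greedy)
    rewards = step(choices, gains)
    next_states = sense(choices, gains)
    # Standard Q-learning update, performed independently per agent.
    for i in range(N_AGENTS):
        best_next = Q[i, next_states[i]].max()
        Q[i, states[i], choices[i]] += ALPHA * (
            rewards[i] + GAMMA * best_next - Q[i, states[i], choices[i]])
    states = next_states

print("final sum capacity:", step(choices, gains).sum())
```

Because each agent's state space collapses to 2**N_CHANNELS entries under 1-bit quantization, the Q-tables stay tiny and convergence is fast, which is the intuition behind result 1) above; extending the action space to (channel, power) pairs or channel subsets would cover the two problems studied in the letter.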