Harnessing the potential of smart grid networks relies on efficient and secure communication between distributed energy resources (DERs) and the energy management system (EMS), particularly under variable channel conditions and adversarial interference. This challenge is further exacerbated by the advent of artificial intelligence (AI)-driven adversarial devices capable of adaptively disrupting communication in real time. To address these challenges, in this study, we formulate the distributed channel access problem, incorporating dynamic channels and intelligent adversarial interference, as a partially observable Markov game (POMG). Specifically, we propose a distributed framework based on multi-agent reinforcement learning (MARL) with centralized training and decentralized execution (CTDE), enabling DERs to autonomously adapt to dynamic channels and mitigate intelligent interference using only local observations, i.e., without direct information sharing among DERs. Our simulation results indicate that, by integrating an advanced policy evaluation technique and a tailored utility maximization strategy, DERs can collaboratively optimize their transmission decisions, improving the network's aggregate packet success rate (APSR) and resilience. Additionally, the proposed framework outperforms existing methods, ensuring robust and scalable communication in smart grids under diverse conditions.
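To make the CTDE structure referenced above concrete, the following is a minimal illustrative sketch in PyTorch: each DER keeps its own actor that acts on its local observation only, while a single centralized critic sees the joint observations and actions during training. All dimensions, class names, and network choices here are hypothetical placeholders and do not reflect the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical problem sizes for illustration only.
N_AGENTS = 4        # number of DERs
OBS_DIM = 8         # per-DER local observation size
N_ACTIONS = 3       # e.g., candidate channels / stay idle

class Actor(nn.Module):
    """Decentralized policy: maps a local observation to action logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))
    def forward(self, local_obs):
        return self.net(local_obs)

class CentralCritic(nn.Module):
    """Centralized value function: consumes the joint observations and
    actions, but is used only during training, not at execution time."""
    def __init__(self):
        super().__init__()
        joint_dim = N_AGENTS * (OBS_DIM + N_ACTIONS)
        self.net = nn.Sequential(nn.Linear(joint_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, joint_obs, joint_actions_onehot):
        return self.net(torch.cat([joint_obs, joint_actions_onehot], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

# Execution: each DER samples its action from its own actor using only its
# local observation (no direct information sharing among DERs).
local_obs = torch.randn(N_AGENTS, OBS_DIM)          # placeholder observations
dists = [torch.distributions.Categorical(logits=actor(obs))
         for actor, obs in zip(actors, local_obs)]
actions = torch.stack([d.sample() for d in dists])

# Training: the centralized critic scores the joint observation-action pair;
# its value estimate would drive the actors' policy-gradient updates.
joint_obs = local_obs.reshape(1, -1)
joint_act = torch.nn.functional.one_hot(actions, N_ACTIONS).float().reshape(1, -1)
value = critic(joint_obs, joint_act)
print(value.shape)  # torch.Size([1, 1])
```

The design choice this sketch highlights is the asymmetry between training and execution: the critic may exploit global information to stabilize learning, while each deployed actor depends only on what its DER can observe locally.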