In the burgeoning field of human-AI interaction, trust is widely regarded as a cornerstone, critical to the effectiveness of collaboration and the acceptance of AI systems. Traditional methods of assessing trust have predominantly relied on self-reported measures, requiring participants to articulate their perceptions and attitudes through questionnaires. However, these explicit methods may not fully capture the nuanced dynamics of trust, especially in real-time, complex interaction environments. This paper introduces an innovative approach to evaluating trust in human-AI teams, pivoting from the conventional reliance on verbal or written feedback to analyzing gameplay behaviors as implicit indicators of trust. Utilizing the Overcooked-AI environment, our study explores how participants' interactions with AI agents of varying performance levels can reveal underlying trust mechanisms without posing a single query to the human players. This approach not only bypasses the efficiency challenges posed by repetitive and lengthy trust assessment methods, but also yields insights comparable to those such methods provide. By comparing the predictive accuracy of questionnaire-based models with that of models derived from gameplay behavior analysis, we highlight the potential of non-verbal cues and action patterns as reliable trust indicators. Furthermore, our findings suggest that these implicit measures can be integrated into adaptive systems and algorithms for real-time trust calibration in human-agent teaming settings. This shift towards action-oriented trust assessment challenges existing paradigms and opens new avenues for understanding and enhancing human-AI collaboration.