This position paper conjectures that for AGI to be beneficial, it is both necessary and sufficient that AGI systems care about people and employ goals whose success is collaboratively determined by the others involved in the situation. Moreover, I posit that any goal whose success can be determined without the consensual feedback of those concerned is likely to lead to the manifestation of dark factor traits. Integrating care reduces the risk that an AGI will be incentivized to seek harmful shortcuts to obtain satisfactory feedback. Employing collaborative goals reduces the risk that an AGI will optimize for superficial markers of success or for proxy goals. Together, these ideas mark a fundamental shift away from traditional control-centric AI Safety strategies. This paradigm not only promotes more beneficial outcomes but also enables AGIs to learn from and adapt to complex moral landscapes, continuously improving their capacity to contribute positively to the wellbeing of humans and other sentient beings.