Do Large Language Models (LLMs) have cognitive abilities? Do they possess understanding? Is the correct recognition of verbal contexts or visual objects, acquired through pre-training on a large dataset, a manifestation of the ability to solve cognitive tasks? Or is any LLM merely a statistical approximator that assembles averaged text from its vast training corpus to match a given prompt? Answering these questions requires rigorous formal definitions of the cognitive concepts of "knowledge", "understanding", and related terms.