Adequate use of AI is essential for user companies to remain competitive. Nevertheless, studies show that many companies are hesitant in this regard. Based on the assumption that people’s ability to act is influenced by a lack of trust, particularly in the context of AI, we conducted a study as part of the TrustKI research project to analyze which factors are relevant to documenting trustworthiness in the context of AI. Our evaluation revealed that users demand holistic transparency: providing relevant information about the AI solution and proof of technical expertise is not sufficient to build trust; users also demand specific information about the respective company. Building on the generally recognized components, we identified further dimensions that allow the required information to be provided even more precisely. The study thus enables us to propose a preliminary set of information requirements for AI providers.