Generative artificial intelligence (AI) tools such as ChatGPT and Claude are now widely used in academic writing and publishing. This article examines what this means for language diversity in global research. Through a scholarly dialogue involving five sociolinguists, the paper explores how AI tools influence the way academic texts are written, edited and reviewed. While AI can help researchers draft and improve manuscripts, the article shows that these tools are largely trained on dominant forms of English, especially American English. As a result, they tend to favour a single standard style of writing and may overlook or misrepresent other legitimate varieties of English used around the world. The discussion highlights both opportunities and risks: AI may support multilingual researchers who lack access to editorial support, but it may also reinforce existing inequalities in global academic publishing. The article therefore calls for more critical and responsible use of AI, and for greater attention to linguistic diversity in how these technologies are developed and deployed.