Responsible Use of Artificial Intelligence

The Journal of Leadership, Ethics, Governance, and Sustainable Innovation (JLEGSI) recognises the significant role that Artificial Intelligence (AI), including Large Language Models (LLMs), plays in contemporary research, data analysis and academic writing. While such tools can enhance productivity, clarity and discovery, their use must be transparent, ethical and accountable.

The misuse of AI has implications for research integrity, authorship accountability and data privacy. Therefore, JLEGSI upholds strict ethical standards in line with COPE’s “Ethical Guidelines for the Use of AI in Research and Publication” and international best practices.

Disclosure of AI Use

All authors are required to provide a full and transparent disclosure of any AI tool or LLM used in the preparation of their manuscript. This disclosure must appear in both the Acknowledgements and the Methods (or Data and Tools) section of the article, as appropriate.

Examples of acceptable disclosures include:

  • “Portions of the manuscript’s grammar and syntax were refined using Grammarly Premium (version 1.2, Grammarly Inc.).”
  • “The literature review summary was supported by Scite.ai and Elicit to identify relevant peer-reviewed sources; all references were manually verified.”
  • “ChatGPT (GPT-4) was used to rephrase specific sections for clarity and coherence under the direct supervision of the authors. All factual content and interpretations were validated independently.”
  • “Statistical code suggestions were generated using GitHub Copilot and validated by the research team before analysis.”

Examples of AI/LLM tools that must be disclosed (non-exhaustive):

  • Large Language Models (LLMs): ChatGPT (OpenAI), Gemini (Google DeepMind), Claude (Anthropic), LLaMA (Meta AI), Mistral, Pi (Inflection AI), Copilot (Microsoft) and Perplexity AI.
  • AI-supported Writing Tools: Grammarly, QuillBot, Jasper, Wordtune, Sudowrite, Writefull.
  • AI Research and Analysis Tools: Scite.ai, Elicit.org, Consensus, Research Rabbit, Litmaps, Semantic Scholar AI, Scispace.
  • AI Coding Tools: GitHub Copilot, Amazon CodeWhisperer, Replit Ghostwriter, Tabnine.

All such tools must be acknowledged, specifying the exact nature of their contribution (e.g., grammar checking, data visualisation, code suggestion, literature retrieval).

AI Authorship and Accountability

AI systems cannot be considered or listed as authors, co-authors, or contributors under any circumstances.

  • Authorship is reserved for humans who can take public responsibility for the work’s accuracy, originality and ethical integrity.
  • Human authors remain fully responsible for verifying the correctness of all content, including AI-generated text, translations, figures, or data analyses.
  • Accountability extends to ensuring that no hallucinated, biased, or unverified information produced by AI tools is included in the publication.

Example statement for authorship disclosure:

“The authors confirm that no AI system or LLM was listed as a co-author and all intellectual responsibility for the manuscript’s content lies with the human authors.”

Acceptable Use of AI Tools

JLEGSI supports the ethical and limited use of AI where it serves as a supporting instrument rather than a creative or interpretative agent. Acceptable uses include:

  • Language polishing, spelling and grammar correction (e.g., Grammarly, Writefull).
  • Formatting references or citation management (e.g., EndNote AI, Zotero AI plugin).
  • Assisting with idea organisation, outline generation, or summarisation of large text sets, provided the author reviews and confirms accuracy.
  • Querying large datasets through AI interfaces for discovery purposes, as long as sources are verified and cited appropriately.

Authors must always ensure human oversight, manual verification and intellectual interpretation of AI-assisted outputs.

Prohibited and Unethical AI Use

The following constitute unethical or prohibited uses of AI in research and publishing:

  1. Data Fabrication or Manipulation:
    Using AI to generate synthetic, falsified, or unverified data, figures, or statistical results.
  2. Fake or Hallucinated Citations:
    Including AI-invented references, URLs, or DOI numbers.
  3. Peer-Review Manipulation:
    Using AI tools to generate or falsify peer-review reports, reviewer identities, or submission correspondence.
  4. Content Creation Without Disclosure:
    Incorporating AI-generated sections (e.g., abstracts, discussions, literature summaries) without full disclosure.
  5. Bias Amplification:
    Using AI outputs that reinforce bias or discrimination in sensitive research topics such as gender, race, or socio-economic studies.
  6. Ethical Decision Automation:
    Delegating ethical or methodological judgements to AI instead of qualified researchers.
  7. Privacy Breach:
    Uploading confidential manuscripts, personal data, or unpublished results into public AI systems without consent.

Violations may result in manuscript rejection, retraction, or notification of institutional misconduct, depending on severity.

AI Use Disclosure Template (for Authors)

Authors should include a statement at the end of their manuscript such as:

“The authors used ChatGPT (GPT-4) to improve the clarity and language of the manuscript’s Introduction and Conclusion. The tool was used under author supervision and all content was reviewed and verified for accuracy and originality. No data, analysis, or citations were generated by the AI system.”

or

“AI tools such as Elicit and Consensus were used to identify peer-reviewed articles relevant to the literature review. All references were manually validated and the final synthesis reflects independent human interpretation.”


AI Policy Enforcement

  • Manuscripts that show evidence of undisclosed AI-generated content, fake citations, or manipulated data will be flagged for investigation.
  • Depending on the severity, sanctions may include revision requests, rejection, or institutional notification under COPE’s Misconduct Flowcharts.
  • JLEGSI reserves the right to employ AI-detection tools (e.g., GPTZero, Turnitin AI Writing Detector) to assess manuscript integrity.

Continuous Policy Review

Given the rapid evolution of AI technologies, this policy will be reviewed annually, or whenever new guidance from COPE, ICMJE, or other recognised bodies emerges. The journal will update its author guidelines to reflect these changes transparently.

How can we help you?

Should you require further assistance, please don’t hesitate to reach out.