Generative AI Policy

Introduction

This policy is established to ensure scientific integrity, transparency, and accountability in the use of Generative Artificial Intelligence (AI) tools throughout all stages of the publication process at the International Journal of Didactic Mathematics in Distance Education (IJDMDE). This policy is aligned with standards set by Elsevier, Springer Nature, the Committee on Publication Ethics (COPE), and the requirements for Scopus indexing.

1. Definitions and Scope

Generative Artificial Intelligence (Generative AI), within the context of this policy, refers to systems based on Large Language Models (LLMs) and similar tools capable of generating text, images, code, or other content. Examples of tools covered by this definition include, but are not limited to:

  • Text-based AI tools: ChatGPT, Claude, Gemini, Microsoft Copilot, and equivalents

  • Image-based AI tools: DALL·E, Midjourney, Stable Diffusion, and equivalents

  • AI-based code generation tools: GitHub Copilot, Cursor, and equivalents

  • AI-based paraphrasing and grammar tools: Grammarly AI, QuillBot, and equivalents

 This policy applies to all components of a scholarly manuscript, including but not limited to: abstract, introduction, literature review, methodology, data analysis, discussion, conclusion, and references.

2. Policy for Authors

2.1 Core Principles

Authors bear full responsibility for the accuracy, originality, and integrity of all content submitted in their manuscripts, including any sections generated or assisted by Generative AI tools. The accountability for the work cannot be transferred or delegated to any AI system.

2.2 Permitted Uses

Authors are permitted to use Generative AI tools for the following purposes, provided that such use is explicitly and transparently disclosed:

  • Improving grammar, writing style, and spelling (language editing and proofreading)

  • Enhancing readability and clarity of text written by the authors

  • Assisting with translation from the authors' native language into English as a supplementary tool, subject to thorough human review and revision

  • Generating a preliminary outline or structural framework that is subsequently developed entirely by the authors

  • Assisting with visualization and formatting of data already collected and analyzed by the authors

2.3 Prohibited Uses

Authors are strictly prohibited from:

  • Listing any Generative AI tool as an author or co-author of the manuscript

  • Generating research data, empirical findings, or statistical analysis results entirely through AI tools

  • Creating, fabricating, or manipulating research figures, tables, or graphs using AI without full disclosure

  • Using AI to produce a literature review without manual verification against primary sources

  • Submitting a manuscript whose content is wholly or substantially generated by AI without meaningful intellectual contribution from the human authors

  • Using AI to circumvent plagiarism detection (e.g., automated mass paraphrasing of existing works)

2.4 Disclosure Requirements

All use of Generative AI tools must be explicitly disclosed within the manuscript. The disclosure statement must be placed in a dedicated section after the Conclusion and before the References, using the following format:

Sample AI Use Disclosure Statement:

 

"During the preparation of this manuscript, the authors used [name of AI tool, e.g., ChatGPT (GPT-4o, OpenAI)] for [specific purpose, e.g., language editing of the entire manuscript]. All AI-assisted content was subsequently reviewed and revised by the authors, who take full responsibility for the content of the publication. The authors declare that no AI tool was used as an author or co-author in this work."

If no AI tools were used, authors must still include a negative declaration: "The authors declare that no Generative AI tools were used in the preparation of this manuscript."

2.5 Authorship Statement

In the Author Contribution Statement, each author must specifically identify their individual intellectual contribution using the CRediT (Contributor Roles Taxonomy) framework, and explicitly affirm that scientific accountability cannot be delegated to any AI tool.

3. Policy for Reviewers

3.1 Principles of Confidentiality and Integrity

Reviewers receive manuscripts in a strictly confidential capacity. The use of Generative AI tools during the peer review process poses serious risks of breaching the confidentiality of unpublished manuscript content. Reviewers are therefore subject to heightened restrictions regarding AI use.

3.2 Permitted Uses

Reviewers are permitted to use Generative AI tools solely for:

  • Improving the language and writing quality of review reports that have been independently written by the reviewer

  • Searching for general contextual information from publicly available and already-published literature

3.3 Prohibited Uses

Reviewers are strictly prohibited from:

  • Uploading, copying, or entering any part or the entirety of the manuscript's content into any Generative AI platform

  • Using AI to automatically generate review comments, evaluations, or editorial recommendations

  • Delegating the intellectual process of peer review to any AI system

  • Using AI in any manner that could expose confidential manuscript content to third parties

3.4 Reporting Obligations

If a reviewer uses AI tools for language editing purposes within their review report, this must be reported to the handling editor through the journal management system. Reviewers who are unable to comply with this policy must promptly return the manuscript to the editor without conducting a review.

4. Policy for Editors

4.1 Editorial Responsibilities

Editors are responsible for upholding and enforcing this AI policy at all stages of the editorial process, from initial desk review through to the final publication decision. Editors serve as the primary guardians of the journal's integrity standards.

4.2 Permitted Uses

Editors may use Generative AI tools for:

  • Checking language clarity and readability of editorial communications (decision letters, reviewer invitations)

  • Assisting with the identification of potential reviewers based on general keyword analysis from publicly available information

  • Analysis of submission metadata and editorial trends for planning purposes

4.3 Prohibited Uses

Editors are prohibited from:

  • Using AI to make editorial decisions (accept, revise, reject) without substantive human intellectual evaluation

  • Entering manuscript content that is under review into any AI system without explicit author consent

  • Using AI to replace or substitute for the genuine peer review process

4.4 Editorial Verification Procedures

Editors must carry out the following verification procedures for each manuscript received:

  1. Verify the completeness and adequacy of the AI use disclosure statement provided by the authors

  2. Run AI detection checks using approved tools (e.g., iThenticate, Turnitin AI Detection, or equivalent) as part of the initial desk review screening

  3. Evaluate the consistency between the declared AI use and the actual content of the manuscript

  4. Contact the corresponding author if there are indications of undisclosed AI use

  5. Document all decisions related to AI use in the editorial management system

 

5. Violations and Consequences

Violations of this policy will be handled in accordance with COPE guidelines and may result in the following consequences:

 For Authors:

  • Immediate rejection of the manuscript without further review

  • Retraction of the published article if a violation is discovered post-publication

  • A submission ban for a period of two (2) years

  • Formal notification to the author's affiliated institution

 For Reviewers:

  • Withdrawal of the reviewer invitation and removal from the reviewer database

  • Notification to editors of other journals within the publishing consortium

 For Editors:

  • Re-examination of the relevant editorial decision

  • Disciplinary action in accordance with the policies of the journal's managing institution

 

6. Legal Basis and Reference Standards

This policy has been developed with reference to the following international standards and guidelines:

  • COPE Guidelines on AI and Authorship (2023): Ethical guidance on authorship and the use of AI in scholarly publications

  • Elsevier AI Policy (2024): Prohibition of AI as author; mandatory disclosure requirements

  • Springer Nature AI Policy (2023): Standards for transparency in the use of generative AI

  • Scopus Content Policy: Scientific integrity requirements for indexing eligibility

  • ICMJE Recommendations: Authorship criteria and author responsibilities

  • UNESCO Recommendation on AI Ethics (2021): Ethical framework for the development and use of AI

7. Policy Review and Updates

Given the rapid pace of development in Generative AI technology, this policy will be reviewed and updated on a regular basis, at a minimum every twelve (12) months, or whenever there are significant changes in scholarly publishing industry standards or Scopus indexing requirements.

All journal stakeholders (authors, reviewers, and editors) will be formally notified of any policy changes through the journal's official website and via official electronic correspondence.

8. Agreement and Acceptance of Policy

By submitting a manuscript, accepting an invitation to serve as a reviewer, or carrying out editorial duties at IJDMDE, all parties are deemed to have read, understood, and fully agreed to this policy.

 

Issued by:

Editorial Board of the International Journal of Didactic Mathematics in Distance Education

Policy Edition: 2025  |  Next Review: 2026