Generative AI and AI-Assisted Tools Policy
Ettisal : Journal of Communication is committed to upholding the highest standards of research integrity, transparency, and academic ethics. In recognition of the growing role of generative artificial intelligence (AI) and AI-assisted technologies (“AI Tools”) in scholarly work, this policy outlines the permitted and prohibited uses of these tools by authors, reviewers, and editors involved in the publication workflow of Ettisal : Journal of Communication.
This policy applies to all submissions to Ettisal : Journal of Communication, including original articles, reviews, and other scholarly contributions.
1. Policy for Author(s)
Ettisal : Journal of Communication recognizes that AI Tools—when used responsibly—can support authors in various research tasks, such as literature synthesis, idea generation, organization of content, and language refinement. However, AI may not replace human critical thinking, evaluation, and originality in any part of the scholarly process.
1.1 Author Accountability
Authors are fully responsible for the integrity and accuracy of their manuscripts. Authors are responsible for verifying all AI-generated content to ensure its accuracy, completeness, neutrality, and proper sourcing, as AI tools may produce incorrect statements, incomplete explanations, or fabricated references that could undermine the credibility of the manuscript. Any text, ideas, or suggestions produced by AI must be thoroughly edited, adapted, and meaningfully integrated into the authors’ own scholarly reasoning rather than being used verbatim, ensuring that the final work reflects the authors’ critical thinking. Maintaining originality is essential; the manuscript must represent the authors’ unique intellectual contribution and not rely excessively on AI-generated material. Authors must also safeguard rights, privacy, and confidentiality by carefully reviewing the terms of use of any AI tool they employ—for instance, avoiding the upload of proprietary research data into platforms that lack clear privacy protections. Additionally, authors must ensure that AI tools are not granted rights over their unpublished materials or outputs in ways that could limit future publication or transfer control of their intellectual property, such as through terms-of-service clauses allowing the tool to reuse or train on user-submitted content.
1.2 Responsible Use of AI Tools
Authors must use AI tools responsibly by ensuring that no sensitive, personal, confidential, or unpublished data—such as private survey responses, patient information, internal documents, or draft manuscripts—is uploaded into AI systems that do not guarantee privacy, as doing so may expose protected information. Authors must also refrain from generating or reproducing copyrighted or identifiable images, voices, or personal likenesses using AI, for example by creating images that resemble real individuals without consent or replicating proprietary visual materials. Any AI-generated output must be thoroughly checked for factual accuracy, logical coherence, and potential bias, since AI systems can produce fabricated references, incorrect interpretations, or culturally biased statements. Finally, all AI tools must be used in compliance with scholarly publishing ethics and applicable laws, meaning authors should follow journal policies, respect intellectual property rights, avoid misuse of proprietary content, and ensure that AI involvement does not compromise the integrity, originality, or ethical soundness of the research.
1.3 Disclosure Requirements
All uses of AI tools in the preparation of a manuscript must be transparently disclosed so that readers, reviewers, and editors understand how the technology contributed to the work. Authors are required to include an AI Usage Declaration at the time of submission, specifying the name of the AI tool used, the purpose for which it was employed—such as language refinement, summarizing large bodies of literature, or generating an outline—and describing the extent of author oversight to demonstrate that human judgment and verification guided the process. Minor proofreading through automated grammar or spell-check features, such as those built into word processors, does not require disclosure because these tools do not generate substantive content. However, when AI tools are used as part of the research process itself, for example in data analysis, coding assistance, or methodological steps, this use must be thoroughly described in the Methods section to ensure transparency, reproducibility, and accountability.
1.4 Authorship
AI tools may not be listed as authors or co-authors under any circumstances, as authorship requires responsibilities that only humans can fulfill, including accountability for the integrity, interpretation, and originality of the work. Likewise, AI tools cannot be cited as authors of any content, since they do not possess independent agency, cannot consent to authorship, and cannot take responsibility for the material they generate. Human authors must therefore retain full responsibility for all aspects of the manuscript—its accuracy, scholarly contribution, ethical compliance, and submission—ensuring that any AI-assisted content has been critically reviewed, verified, and integrated through the authors’ own intellectual judgment.
1.5 Use of AI in Figures, Images, and Artwork
Ettisal : Journal of Communication does not permit the use of AI-generated or AI-modified images in submitted manuscripts, except in specific cases where an illustration or a reconstruction of historical content is necessary—such as the visual preservation or enhancement of old manuscripts—and in such cases the author must clearly declare this use in the image description. Aside from this limited exception, authors may not use AI tools to insert, remove, or alter any visual elements within an image, as such modifications can compromise authenticity and scholarly integrity. Only basic, non-manipulative adjustments—such as changes to brightness, contrast, or color balance—are permitted, and only when they do not distort, obscure, or alter the original meaning or informational value of the image.
Exception: AI-generated or AI-enhanced images may be used in Ettisal : Journal of Communication only when they are an essential part of the research methodology itself—for example, when machine-learning imaging techniques are applied to analyze old images or other historical artefacts—and such use must be described in a clear and reproducible manner in the Methods section. Authors are required to specify the name and version of the AI model or tool used, the developer, and any relevant parameters, and they must be prepared to provide the raw or original data upon request to ensure transparency and verifiability. However, AI-generated artwork that is not part of the research process, such as graphical abstracts or decorative imagery, is not permitted. AI-generated cover art may be considered only with prior written approval from the editor and after the author has demonstrated that all necessary rights, permissions, and licenses have been properly secured.
Clause on the Use of Generative Artificial Intelligence (Generative AI)
- All authors are required to clearly state in the Author’s Declaration or Statement of Concern if any part of the writing, preparation, or development of the manuscript was carried out with the assistance of generative artificial intelligence (Generative AI), whether partially or entirely.
- Authors are prohibited from using Generative AI to create or automatically generate the content of the article without an explicit declaration.
- If, during evaluation or after publication, it is found that the article was written or produced with, or relied upon, Generative AI without such a declaration, the author shall be subject to an administrative penalty in the form of a fine of Rp1,000,000 (one million rupiah) or USD 80.
- The journal reserves the right to take additional actions, including but not limited to article retraction, rejection of future submissions, or reporting the violation to the author’s affiliated institution.
2. Policy for Reviewer(s)
The peer review process in Ettisal : Journal of Communication is confidential and anchored in human scholarly judgment.
2.1 Confidentiality
Reviewers must not upload a manuscript or any portion of it into an AI tool under any circumstances, as doing so can breach the strict confidentiality of the peer-review process, violate the intellectual property rights of the authors, and risk exposing personally identifiable or sensitive data contained within the submission. This restriction applies not only to the full manuscript draft but also to tables, figures, short excerpts, or even the reviewer’s own report, since these materials may still contain unpublished ideas or information not intended for public release. For example, a reviewer who uploads a manuscript paragraph into a generative AI tool to “summarize it more quickly” could unintentionally cause that text to be stored, used for model training, or displayed to other users, thereby revealing the authors’ unpublished arguments or data before official publication. To protect author rights and preserve the integrity of the review process, all evaluation, analysis, and report writing must be conducted without sharing manuscript content with external AI systems.
2.2 Use of AI Tools in Review
Reviewers must not use AI tools to evaluate, summarize, critique, or provide scientific assessments of submitted manuscripts, as doing so replaces the reviewer’s scholarly judgment with machine-generated interpretations that may be incomplete, inaccurate, or biased; the only allowable use of AI in the review process is to check whether an author’s draft contains signs of generative AI usage when journal policy permits such screening. Reviewers must also refrain from relying on AI to draft or edit their peer-review reports, since the quality, accuracy, confidentiality, and fairness of the review must remain entirely the responsibility of the human reviewer. For example, a reviewer who uploads a full manuscript into an AI system to “generate a quick critique” may inadvertently expose confidential data, receive an inaccurate evaluation that misrepresents the methodology, or produce a report shaped by AI-generated bias rather than professional expertise. To preserve the integrity of peer review, all substantive analysis, interpretation, and written feedback must be produced solely by the reviewer.
2.3 Reviewers' Responsibilities
Reviewers must conduct all manuscript evaluations personally, relying on their own scholarly expertise rather than delegating analytical or interpretive tasks to AI tools. Their feedback must be original, unbiased, and free from AI-generated phrasing or influence, ensuring that the review reflects an independent and professionally informed assessment of the work. Reviewers are also responsible for notifying the editorial office if they suspect inappropriate or undisclosed use of AI by authors, such as detecting unusually formulaic writing patterns or identifying fabricated references. For instance, if a reviewer notices that a manuscript contains references that do not exist or text that appears machine-generated, they should report this to the editor so the issue can be investigated according to journal policy.
3. Policy for Editor(s)
Editors play a central role in maintaining research integrity at Ettisal : Journal of Communication. Their work involves confidential handling of manuscripts and sensitive decision-making.
3.1 Confidentiality Obligations
Editors must not upload manuscripts or any manuscript-related communications into non-secure AI tools, as doing so can compromise author confidentiality, violate intellectual property rights, and expose sensitive information contained in the editorial workflow. This prohibition applies to all forms of editorial communication, including decision letters, reviewer invitations, internal editorial notes, and any portion of the manuscript itself, since these materials often contain unpublished data, private correspondence, or confidential assessments. For example, if an editor were to paste an author’s revised manuscript or a decision letter into an AI platform to “improve the wording,” the text could be stored or used for model training, inadvertently revealing private peer-review comments or unpublished research. To protect the integrity of the editorial process and the confidentiality entrusted to the journal, all editorial analysis and communication must be handled securely and without sharing content with external AI systems.
3.2 Editorial Judgment
Editorial decision-making in Ettisal : Journal of Communication must be carried out solely by human editors, and AI tools may not be used to evaluate submissions, judge scholarly merit, determine acceptance or rejection, or analyze the methodological validity of a manuscript, as these tasks require expert judgment, contextual understanding, and ethical reasoning that AI cannot reliably provide. Editors may use AI only for limited, non-interpretive purposes—such as technical checks approved by the journal—but AI must never replace the editor’s critical thinking or common sense in assessing academic quality. For example, an editor must not rely on an AI system to “score” a manuscript’s originality or scientific soundness or to recommend a final decision, because such automated assessments may be biased, incomplete, or misleading. Human oversight, expert evaluation, and professional discretion remain essential to maintaining the integrity and fairness of the editorial process.
3.3 Allowed Internal AI Tools
Editors may use only journal-approved, in-house AI systems that meet strict privacy, ethical, and data-security standards, including tools designed for reviewer matching, detecting duplicate submissions, performing technical checks, and screening for plagiarism, all of which operate within secure, identity-protected environments that safeguard confidential manuscript data. These tools assist with administrative and integrity-related tasks but never replace editorial judgment. If an editor suspects that an author or reviewer has misused AI—for instance, noticing that a manuscript contains fabricated references typical of generative AI output or that a reviewer’s report appears machine-generated—they must immediately inform the editorial office or the publisher so that appropriate investigation and corrective action can be taken in accordance with journal policy.
4. Ettisal's Use of AI in Publication Workflows
Ettisal : Journal of Communication may employ AI-assisted tools internally to improve publication quality and workflow efficiency, using only systems that comply with strict privacy, confidentiality, and research-integrity standards to ensure that manuscript data remains secure. These approved tools may be used for technical submission checks—such as verifying format, completeness, and adherence to author guidelines—conducting research integrity screening through plagiarism detection software, suggesting potential reviewers based on expertise, and supporting post-acceptance processes like copy-editing consistency checks and proofing aids. For example, an in-house plagiarism detector may flag sections that resemble previously published material, allowing editors to investigate further while still maintaining full confidentiality. These internal tools operate strictly as assistants, never as substitutes for human judgment in the editorial or peer-review process.

