Using Retrieved Sources for Semantic and Lexical Plagiarism Detection

DOI:

https://doi.org/10.24996/ijs.2023.64.6.41

Keywords:

Plagiarism, Plagiarism detection, Term Frequency-Inverse Document Frequency, Doc2vec, Sentence Bidirectional Encoder Representations from Transformers

Abstract

Plagiarism is the use of someone else's ideas or work without their permission. Drawing on lexical and semantic notions of text similarity, this paper presents a plagiarism detection system that examines suspicious texts against sources available on the Web. The user can upload suspicious files in PDF or DOCX format. The system searches three popular search engines (Google, Bing, and Yahoo) for candidate source texts and identifies the top five results on the first page returned by each engine. The corpus consists of the downloaded files and the text scraped from these result web pages. The corpus text and the suspicious documents are then encoded as vectors. For lexical plagiarism detection, the system uses Jaccard similarity and Term Frequency-Inverse Document Frequency (TF-IDF), while for semantic plagiarism detection it uses the Doc2Vec and Sentence Bidirectional Encoder Representations from Transformers (SBERT) text representation models. The system then compares the suspicious text against the corpus text. Finally, a generated plagiarism report shows the total plagiarism ratio, the plagiarism ratio from each source, and other details.
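
To illustrate the two similarity notions named in the abstract, the sketch below computes Jaccard and TF-IDF cosine scores (lexical) and SBERT cosine scores (semantic) between a suspicious text and candidate source texts. This is a minimal sketch under stated assumptions, not the authors' implementation: the SBERT model name ("all-MiniLM-L6-v2"), the example texts, and the function names are illustrative, and the paper's Doc2Vec step and report generation are omitted.

```python
# Minimal sketch of lexical (Jaccard, TF-IDF cosine) and semantic (SBERT cosine)
# similarity scoring between a suspicious text and candidate sources.
# Assumptions: scikit-learn and sentence-transformers are installed; the model
# name and example texts are illustrative, not taken from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer


def jaccard_similarity(a: str, b: str) -> float:
    """Lexical overlap: |intersection| / |union| of the two token sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0


def tfidf_similarities(suspicious: str, sources: list[str]) -> list[float]:
    """Cosine similarity between the TF-IDF vector of the suspicious text and each source."""
    vectors = TfidfVectorizer().fit_transform([suspicious] + sources)
    return cosine_similarity(vectors[0:1], vectors[1:])[0].tolist()


def sbert_similarities(suspicious: str, sources: list[str]) -> list[float]:
    """Cosine similarity between SBERT embeddings (model choice is an assumption)."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode([suspicious] + sources)
    return cosine_similarity(embeddings[0:1], embeddings[1:])[0].tolist()


if __name__ == "__main__":
    suspicious = "Plagiarism is using someone else's ideas or work without permission."
    sources = [
        "Plagiarism means taking another person's ideas or work without their consent.",
        "Web scraping collects text from the result pages retrieved by a search engine.",
    ]
    print("Jaccard:      ", [round(jaccard_similarity(suspicious, s), 3) for s in sources])
    print("TF-IDF cosine:", [round(x, 3) for x in tfidf_similarities(suspicious, sources)])
    print("SBERT cosine: ", [round(x, 3) for x in sbert_similarities(suspicious, sources)])
```

In such a setup, the lexical scores reward exact word overlap, while the SBERT scores can remain high for paraphrased text with little shared vocabulary, which is why the paper combines both kinds of measures.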

Published

2023-06-30

Issue

Vol. 64 No. 6 (2023)

Section

Computer Science

How to Cite

Using Retrieved Sources for Semantic and Lexical Plagiarism Detection. (2023). Iraqi Journal of Science, 64(6), 3136-3152. https://doi.org/10.24996/ijs.2023.64.6.41
