I use LLM agents to fill in the LaTeX template in a GitHub repo (this automates formatting, and you can use an IDE to view diffs). Then I run ChatGPT Pro to collect all papers relevant to my topic, and how they're relevant. Then I download the ones whose PDFs are available online. I keep everything in a structured folder of plain files, mostly Markdown and JSON.
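For illustration, a quote record in such a folder might look something like this. This is purely my guess at a plausible schema; the file layout and every field name here are hypothetical, not the OP's actual structure:

```json
{
  "paper_id": "smith2021",
  "pdf": "pdfs/smith2021.pdf",
  "quote": "the exact sentence copied verbatim from the paper",
  "claim": "Smith (2021) argues that the effect holds under the stated condition.",
  "relevance": "Supports the dissertation's second research question.",
  "reviewed": true,
  "verified": false
}
```

Keeping one record per quote in plain JSON makes it trivial for both a dashboard and a verification script to read the same data.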
The idea of the dashboard is this: I run Codex through a web chat to identify quotes relevant to my dissertation topic, along with how they are relevant, and it combines them into a set of claims, each linked to its supporting quote. Then I manually review each quote and each claim and tick the boxes. There is also a button that runs a verification script, which checks that the exact quote really appears in the PDF. This way I collect real evidence and pick up new insights while reading.
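A minimal sketch of what such a quote-verification check might look like. Everything here is my assumption, not the OP's actual script: the function names are invented, and I've assumed the third-party `pypdf` package for text extraction. The normalization step matters because PDF extraction mangles whitespace and hyphenation, so a naive substring check would produce false negatives:

```python
import re


def normalize(text: str) -> str:
    """Collapse whitespace and strip soft hyphens so PDF line breaks
    and hyphenation don't cause false negatives."""
    return re.sub(r"\s+", " ", text.replace("\u00ad", "")).strip().lower()


def quote_in_text(quote: str, page_text: str) -> bool:
    """True if the normalized quote appears in the normalized text."""
    return normalize(quote) in normalize(page_text)


def verify_quote(quote: str, pdf_path: str) -> bool:
    """Check that a quote really appears somewhere in the PDF.
    Assumes the third-party pypdf package; extraction quality
    varies by PDF, so a failure here still warrants a manual look."""
    from pypdf import PdfReader

    full_text = " ".join(
        page.extract_text() or "" for page in PdfReader(pdf_path).pages
    )
    return quote_in_text(quote, full_text)
```

Even with normalization, some PDFs (scans, ligature-heavy fonts) extract badly, so a sensible dashboard would flag verification failures for manual inspection rather than auto-rejecting the quote.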
I remember doing all of this manually during my master's degree in the UK. It was a terrible, tedious experience, partly because I have ADHD.
So my question is, is it dishonest?
Because I can defend every claim in the review: I built the verification pipeline and manually reviewed each item. I arguably understand the literature better than if I had read and highlighted everything myself. But I know that many universities would consider any AI-generated text academic misconduct.
I really don't quite understand the principle behind this position. Because if you outsource the task of proofreading, nobody would care. When you use Grammarly, the same thing. But if I use an LLM to create text from verified, structured, human-reviewed evidence — it might be considered dishonest.
AI is a great tool for literature review, but I think the onus is on you both to verify the output and to disclose your usage of the tool. Clearly describing your methodology is an important skill for writing papers anyway.
I've even drafted the acknowledgements section with a brief explanation of how I used AI tools.
The only part I'm concerned about is the stigma around AI use, and that it could be treated as misconduct.
LLMs are a useful tool if you want to generate text. But in the context of research, this is quite dangerous. Think of a calculator that spits out the wrong answer 10% of the time: would you trust it in an exam? How about 5%? 1%? 0.1%? The business of research is the business of factual knowledge. Every piece of information should be, and is expected to be, scrutinized. That's why dishonesty is so severely looked down upon (falsifying data, plagiarism, etc.).
I would say your use case is not dishonest, but I would also ask you to think from the university's perspective. How would they know whether their students are using it honestly, as you did? How can they, with their limited resources, make sure that research integrity is upheld in the face of automated hallucinations?
At the end of the day, the question is not whether using AI is dishonest; it's whether you can walk into an antagonistic panel and defend your claim that you understand the knowledge of your field (without live AI help). If you can do that, and also make sure the contents are not hallucinated, then I don't see why not.
That being said, I feel like I'm more productive in terms of generating insights beyond what the AI said. I also have a chat interface where I can basically ask anything I want about the PDFs (and yes, I'm aware of NotebookLM, I just don't trust Gemini).
At the same time, I would recommend you document your methodology explicitly in the dissertation: describe the verification pipeline, and make it clear what you reviewed manually versus what was automated. That transparency turns "dishonest?" into "methodologically rigorous."
Here's the thing: academic policy is not really about honesty, it's about trust. Universities cannot distinguish your workflow from that of someone who prompted GPT to write their lit review wholesale.
More than any ethical distinction, I believe the rules around AI usage are blunt because enforcement is hard.
I really don't get why typing it all out manually is supposed to be essential. Can you explain?
> I run ChatGPT Pro to collect all relevant papers
Any literature review must be reproducible. If you can't say exactly what queries you ran against exactly what databases, you'll get into trouble. Whether or not that's the way things should be is irrelevant: it's the way things are.
You should ask your supervisor whether your approach is okay. If necessary, frame it hypothetically: "would it be okay if I were to....?" If your supervisor is unavailable, seek advice from their colleagues.
Since you mention ADHD, you're likely to be strongly motivated by novelty. Don't spend time building a dashboard that you could spend writing your thesis. If you're not getting support from your university, get it now. It might not help, but it signals to the university that you're engaging with the system.
That's totally at odds with my understanding, but perhaps this differs between fields.
I thought it's the experiments that have to be reproducible, not the literature review.
As for the experiments: yes, in experimental fields. But in all (most?) fields, including non-experimental ones, the whole process should be documented well enough that it could be reproduced end to end where possible. If it's not reproducible, there should be good, well-explained reasons why not.
Note that reproducibility does not necessarily mean the exact same answer will emerge, just that the methods can be followed closely.
My advice is to talk to your dissertation committee chair to understand whether they think it is dishonest. Furthermore, read your university's AI usage policies. If they don't consider what you are doing a permissible use of AI, no amount of assurance on HN or any online forum is gonna help you.