Automated Scholarly Paper Review: Possibility and Challenges [Lin+ 2022]
Can Large Language Models Provide Useful Feedback on Research Papers? A Large-Scale Empirical Analysis [Liang+ 2023]
ReviewerGPT? An Exploratory Study on Using Large Language Models for Paper Reviewing [Liu+ 2023]
ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews [D'Arcy+ 2023]
GPT4 is Slightly Helpful for Peer-Review Assistance: A Pilot Study [Robertson 2023]
AgentReview: Exploring Peer Review Dynamics with LLM Agents [Jin+ 2024]
Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions [Tan+ 2024]
RelevAI-Reviewer: A Benchmark on AI Reviewers for Survey Paper Relevance [Couto+ 2024]
MARG: Multi-Agent Review Generation for Scientific Papers [D'Arcy+ 2024]
Generative Adversarial Reviews: When LLMs Become the Critic [Bougie+ 2024]
The AI Review Lottery: Widespread AI-Assisted Peer Reviews Boost Paper Scores and Acceptance Rates [Latona+ 2024]
Usefulness of LLMs as an Author Checklist Assistant for Scientific Papers: NeurIPS'24 Experiment [Goldberg+ 2024]
What Can Natural Language Processing Do for Peer Review? [Kuznetsov+ 2024]
ReviewFlow: Intelligent Scaffolding to Support Academic Peer Reviewing [Sun+ 2024]
Prompting LLMs to Compose Meta-Review Drafts from Peer-Review Narratives of Scholarly Manuscripts [Santu+ 2024]
OpenReviewer: A Specialized Large Language Model for Generating Critical Scientific Paper Reviews [Idahl+ 2024]
LLMs Assist NLP Researchers: Critique Paper (Meta-)Reviewing [Du+ 2024]
Are We There Yet? Revealing the Risks of Utilizing Large Language Models in Scholarly Peer Review [Ye+ 2024]
Is LLM a Reliable Reviewer? A Comprehensive Evaluation of LLM on Automatic Paper Reviewing Tasks [Zhou+ 2024]
... and more! There is also a large body of work on automating peer review (i.e., research evaluation) and on evaluating that automation.