papers
Papers that I am reading, have read, or will read
Research papers are more than printed pages; they are portals into someone else’s thinking. As Zora Neale Hurston once said, “Research is formalized curiosity. It is poking and prying with a purpose.” And as Albert Szent-Györgyi reminds us, “Research is to see what everybody else has seen, and to think what nobody else has thought.”
Reasoning Papers
- Explaining Answers with Entailment Trees - Dalvi et al., EMNLP 2021
- Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning - Tafjord et al., EMNLP 2022
- Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic - Weir et al., ACL 2024
Neuro-Symbolic Papers
- Learning to Compose Neural Networks for Question Answering - Andreas et al., NAACL 2016
- ADAPT: As-Needed Decomposition and Planning with Language Models - Prasad et al., Findings of NAACL 2024
- code2vec: Learning Distributed Representations of Code - Alon et al., POPL 2019
- Imposing Relation Structure in Language-Model Embeddings Using Contrastive Learning - Theodoropoulos et al., CoNLL 2021
- Weakly-Supervised Modeling of Contextualized Event Embedding for Discourse Relations - Lee and Goldwasser, Findings of EMNLP 2020
- EventRAG: Enhancing LLM Generation with Event Knowledge Graphs - Yang et al., ACL 2025
- Augmenting Neural Networks with First-order Logic - Li and Srikumar, ACL 2019
- Logical Transformers: Infusing Logical Structures into Pre-Trained Language Models - Wang et al., Findings of ACL 2023
- A Logic-Driven Framework for Consistency of Neural Models - Li et al., EMNLP 2019
- Logically Consistent Language Models via Neuro-Symbolic Integration - Calanzone et al., ICLR 2025
- Harnessing Deep Neural Networks with Logic Rules - Hu et al., ACL 2016
- Symbolic Knowledge Distillation: from General Language Models to Commonsense Models - West et al., NAACL 2022
- Autoregressive Structured Prediction with Language Models - Liu et al., Findings of EMNLP 2022