obed junias
I’m a graduate researcher in Computer Science at the University of Colorado Boulder, where I work within the BLAST Lab under the supervision of Dr. Maria L. Pacheco. My work sits at the intersection of natural language reasoning, LLM safety, and agentic systems.
A primary focus of my current research is the long-term reliability of foundation models. I study data quality and model collapse, investigating how the proliferation of synthetic data affects model reliability over successive training generations. In particular, I examine how this collapse manifests during foundation-model pretraining, approaching it from a systems perspective to understand how robust pretraining pipelines can be implemented and evaluated at scale within open, transparent, and sustainable model ecosystems. I aim to extend these investigations to reflective post-training frameworks, including reinforcement learning (RL)-based approaches.
This research builds on my foundation in natural language processing. I develop interpretable reasoning systems and benchmarks for commonsense and logical inference [ACL 2026], exploring structured frameworks as a way to make machine reasoning more transparent and logically grounded.
In parallel, I work with Dr. Theodora Chaspari on bias detection and fairness evaluation in LLMs, particularly within the mental health domain—work that reflects my broader commitment to building AI systems that remain trustworthy and equitable across diverse populations.
Looking ahead, my overarching vision is to advance the frontiers of interpretable reasoning and LLM safety. I strive to develop methodologies that ensure AI systems are not only highly capable but also transparent, logically grounded, and deeply aligned with positive human outcomes.
research interests
- Interpretable Reasoning & Natural Language Understanding: Commonsense and logical inference, structured and neuro-symbolic reasoning
- LLM Safety & Model Collapse: Fundamental reliability of pretraining, synthetic data proliferation, recursive training degradation
- Responsible AI & Fairness in ML: Bias mitigation in healthcare, social and moral alignment, ethical development of AI
- Agentic Systems & Post-Training Frameworks: Self-reflective agents, reinforcement learning (RL) for alignment, transparent model ecosystems
I’m actively seeking opportunities in NLP and related areas.
Feel free to reach out about research collaborations or anything else.
news
| Apr 07, 2026 | Thrilled to announce that our work, “LOGICAL-COMMONSENSEQA: A Benchmark for Logical Commonsense Reasoning,” has been accepted for publication at the ACL 2026 Main Conference. See you in person! arXiv:2601.16504 |
|---|---|
| Jan 27, 2026 | New preprint alert! We introduce LOGICAL-COMMONSENSEQA, a benchmark designed to isolate multi-fact and compositional inference in LLMs using logical operators (AND, OR, NEITHER/NOR). Our evaluation reveals that fluent reasoning often masks underlying logical failures, especially in negation-based tasks. Check the paper on arXiv! |
| Dec 08, 2025 | Our paper, Assessing Algorithmic Bias in Language-Based Depression Detection, has been officially published in the 2025 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI) proceedings. You can find the full paper on IEEE Xplore. |
| Oct 20, 2025 | I will be in Atlanta for the IEEE-EMBS BHI Conference from October 26–29, 2025, where I will be presenting my work on evaluating and mitigating bias in LLMs. |
| Oct 15, 2025 | I have been selected by IEEE-BHI as an NSF–EMBS–Google Young Professional NextGen Scholar, an award for early-career researchers. |