obed junias

I’m a graduate researcher in Computer Science at the University of Colorado Boulder, where I work within the BLAST Lab under the supervision of Dr. Maria L. Pacheco. My work sits at the intersection of natural language reasoning, LLM safety, and agentic systems.

My current research focuses on the long-term reliability of foundation models, specifically data quality and model collapse. I investigate how the proliferation of synthetic data degrades models across successive training generations, and how this collapse manifests during foundation model pretraining from a systems perspective. The goal is to implement and evaluate robust pretraining pipelines for open, transparent, and sustainable model ecosystems. This line of work extends into reflective post-training frameworks built on reinforcement learning (RL)-based approaches.

This work builds upon my background in natural language processing. I develop interpretable reasoning systems and benchmarks for commonsense and logical inference [ACL 2026], exploring structured frameworks to make machine reasoning more transparent and logically grounded.

In parallel, I collaborate with Dr. Theodora Chaspari on bias detection and fairness evaluation in LLMs within the mental health domain—work that reflects my broader commitment to building AI systems that remain trustworthy and equitable across diverse populations.

My broader objective is to advance interpretable reasoning and LLM safety, developing methodologies to ensure AI systems are highly capable, transparent, logically grounded, and aligned with positive human outcomes.

research interests

  • Interpretable Reasoning & Natural Language Understanding: Commonsense and logical inference, structured and neuro-symbolic reasoning
  • LLM Safety & Model Collapse: Fundamental reliability of pretraining, synthetic data proliferation, recursive training degradation
  • Responsible AI & Fairness in ML: Bias mitigation in healthcare, social and moral alignment, ethical development of AI
  • Agentic Systems & Post-Training Frameworks: Self-reflective agents, reinforcement learning (RL) for alignment, transparent model ecosystems

I’m actively seeking opportunities in NLP and related areas.

Feel free to reach out for research collaborations or other opportunities.

news

Apr 07, 2026 Thrilled to announce that our work, “LOGICAL-COMMONSENSEQA: A Benchmark for Logical Commonsense Reasoning,” has been accepted for publication at the ACL 2026 Main Conference. See you in person! arXiv:2601.16504
Jan 27, 2026 New preprint alert! We introduce LOGICAL-COMMONSENSEQA, a benchmark designed to isolate multi-fact and compositional inference in LLMs using logical operators (AND, OR, NEITHER/NOR). Our evaluation reveals that fluent reasoning often masks underlying logical failures, especially in negation-based tasks. Check the paper on arXiv!
Dec 08, 2025 Our paper, Assessing Algorithmic Bias in Language-Based Depression Detection, has been published in the proceedings of the 2025 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI). You can find the full paper on IEEE Xplore.

selected publications

  1. Assessing Algorithmic Bias in Language-Based Depression Detection: A Comparison of DNN and LLM Approaches
    Obed Junias, Prajakta Kini, and Theodora Chaspari
    In 2025 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI), 2025
  2. LOGICAL-COMMONSENSEQA: A Benchmark for Logical Commonsense Reasoning
    Obed Junias and Maria Leonor Pacheco
    arXiv preprint arXiv:2601.16504, 2026