My name is Jorge Quesada. I am a PhD candidate in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. I work in the OLIVES Lab with Professor Ghassan AlRegib on machine learning for scientific imaging, where my current focus is on seismic interpretation.
My current work at the OLIVES lab centers on building self-supervised and domain-robust representation learning methods and stress-testing them under real distribution shift. In past projects, I have also explored human-in-the-loop systems, such as uncertainty-aware annotations that capture labeler expertise and prompting strategies for the Segment Anything Model in order to make workflows more reliable and interpretable in practice.
Beyond these, I have also worked on representation learning for neuroimaging, as well as inverse problems and mathematical optimization. I’m broadly motivated by questions at the intersection of natural and artificial intelligence—how brains and machines perceive, learn, and generalize—and I’m always excited to collaborate across domains.
Jorge Quesada, Chen Zhou, Prithwijit Chowdhury, Mohammad Alotaibi, Ahmad Mustafa, Yusuf Kumakov, Mohit Prabhushankar, Ghassan AlRegib
Submitted to IEEE Access 2025
We present the first large-scale benchmarking study for geological fault delineation. The benchmark evaluates over 200 model–dataset–strategy combinations under varying domain shift conditions, providing new insights into generalizability, training dynamics, and evaluation practices in seismic interpretation.
Jorge Quesada*, Zoe Fowler*, Mohammad Alotaibi, Mohit Prabhushankar, Ghassan AlRegib (* equal contribution)
IEEE International Conference on Big Data 2024
We compare human-driven and automated prompting strategies in the Segment Anything Model (SAM). Through large-scale benchmarking, we identify prompting patterns that maximize segmentation accuracy across diverse visual domains.
Jorge Quesada, Lakshmi Sathidevi, Ran Liu, Nauman Ahad, Joy M. Jackson, Mehdi Azabou, Christopher Liding, Matthew Jin, Carolina Urzay, William Gray-Roncal, Erik Johnson, Eva Dyer
NeurIPS Datasets and Benchmarks Track 2022
We introduce MTNeuro, a multi-task neuroimaging benchmark built on volumetric, micrometer-resolution X-ray microtomography of mouse thalamocortical regions. The benchmark spans diverse prediction tasks—including brain-region classification and microstructure segmentation—and offers insights into the representation capabilities of supervised and self-supervised models across multiple abstraction levels.