Jorge Quesada
PhD Student at the Georgia Institute of Technology

My name is Jorge Quesada. I am a PhD candidate in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. I work in the OLIVES Lab with Professor Ghassan AlRegib on machine learning for scientific imaging, where my current focus is on seismic interpretation.

My current work at the OLIVES Lab centers on building self-supervised and domain-robust representation learning methods and stress-testing them under real distribution shift. In past projects, I have also explored human-in-the-loop systems, such as uncertainty-aware annotation schemes that capture labeler expertise and prompting strategies for the Segment Anything Model, in order to make workflows more reliable and interpretable in practice.

Beyond these, I have also worked on representation learning for neuroimaging, as well as on inverse problems and mathematical optimization. I’m broadly motivated by questions at the intersection of natural and artificial intelligence—how brains and machines perceive, learn, and generalize—and I’m always excited to collaborate across domains.

Curriculum Vitae
Resume

Education
  • Georgia Institute of Technology
    Department of Electrical and Computer Engineering
    Ph.D. Student in Machine Learning
    Aug 2021 - present
  • Pontifical Catholic University of Peru
    M.S. in Signal Processing
    Graduated 2018
  • Pontifical Catholic University of Peru
    B.S. in Electrical Engineering
    Graduated 2015
Experience
  • Georgia Institute of Technology
    Graduate Research Assistant
    Aug 2021 - present
  • Sentinel
    Computer Vision Data Scientist
    Jul 2020 - Dec 2020
  • Pontifical Catholic University of Peru
    Graduate Researcher and Lecturer
    Mar 2016 - Jun 2020
  • Los Alamos National Laboratory
    Research Intern
    Jan 2017 - Apr 2017
  • Jicamarca Radio Observatory
    Research Assistant
    Jan 2015 - Apr 2015
Teaching
  • Generative and Geometric Deep Learning ECE 8803
    Course Developer and Graduate Teaching Assistant
    Fall 2023
  • Fundamentals of Machine Learning ECE 4252/8803
    Graduate Teaching Assistant
    Spring 2024
  • Senior Analog Laboratory ECE 4043
    Graduate Teaching Assistant
    Spring/Summer 2023
  • Digital Signal Processing
    Main Instructor
    Spring/Fall 2019
Honors & Awards
  • Cadence Diversity in Technology Scholarship Recipient
    2023
  • Computational Neural Engineering Training Program (CNTP) Scholar
    2021
  • ICASSP 2018 Student Travel Grant
    2018
  • Marco Polo Scholarship
    2016
  • ICASSP 2016 Student Travel Grant
    2016
Selected Publications
A Large-scale Benchmark on Geological Fault Delineation Models: Domain Shift, Training Dynamics, Generalizability, Evaluation and Inferential Behavior

Jorge Quesada, Chen Zhou, Prithwijit Chowdhury, Mohammad Alotaibi, Ahmad Mustafa, Yusuf Kumakov, Mohit Prabhushankar, Ghassan AlRegib

Submitted to IEEE Access 2025

We present the first large-scale benchmarking study for geological fault delineation. The benchmark evaluates over 200 model–dataset–strategy combinations under varying domain shift conditions, providing new insights into generalizability, training dynamics, and evaluation practices in seismic interpretation.

Benchmarking Human and Automated Prompting in the Segment Anything Model

Jorge Quesada*, Zoe Fowler*, Mohammad Alotaibi, Mohit Prabhushankar, Ghassan AlRegib (* equal contribution)

IEEE International Conference on Big Data 2024

We compare human-driven and automated prompting strategies in the Segment Anything Model (SAM). Through large-scale benchmarking, we identify prompting patterns that maximize segmentation accuracy across diverse visual domains.

MTNeuro: A Benchmark for Evaluating Representations of Brain Structure Across Multiple Levels of Abstraction

Jorge Quesada, Lakshmi Sathidevi, Ran Liu, Nauman Ahad, Joy M. Jackson, Mehdi Azabou, Christopher Liding, Matthew Jin, Carolina Urzay, William Gray-Roncal, Erik Johnson, Eva Dyer

NeurIPS Datasets and Benchmarks Track 2022

We introduce MTNeuro, a multi-task neuroimaging benchmark built on volumetric, micrometer-resolution X-ray microtomography of mouse thalamocortical regions. The benchmark spans diverse prediction tasks—including brain-region classification and microstructure segmentation—and offers insights into the representation capabilities of supervised and self-supervised models across multiple abstraction levels.
