Dr. Bryar Shareef is an Assistant Professor in the Department of Computer Science at the University of Nevada, Las Vegas. He received his Ph.D. in Computer Science from the University of Idaho. His research interests center on practical and trustworthy AI for healthcare and forensics, with an emphasis on multimodal learning, medical imaging, and clinical decision support.

Dr. Shareef directs the Advanced AI Research Lab (AAR Lab) in the Howard R. Hughes College of Engineering at UNLV, which supports Ph.D., M.S., and undergraduate researchers. The lab focuses on large vision–language models, multimodal deep learning, multitask learning, graph neural networks, agentic AI, and human-in-the-loop systems for real-world deployment.

Research Interests

Large Vision Models (Vision–Language) and Multimodal Learning

Multimodal learning methods that integrate images, text, structured descriptors, and clinical context, with current directions in vision–language learning for segmentation and reporting, robust fusion under missing modalities, and grounding strategies that reduce hallucinations in clinical outputs.

Medical Image Analysis across Modalities

Deep learning for diagnosis, segmentation, and risk prediction using medical imaging modalities including ultrasound, mammography, and cone-beam computed tomography (CBCT). Healthcare applications include breast cancer, Alzheimer’s disease, brain cancer, histopathology, and image-guided clinical workflows.

Multitask Learning and Graph Neural Networks

Architectures for multitask prediction (e.g., detection + segmentation + reporting) and graph-based modeling of relationships among anatomical structures, clinical findings, and longitudinal patterns across patients and time.

Electronic Health Records (EHRs) and Clinical Time-Series Modeling

Clinical risk modeling using EHR time series for early warning and decision support, including in-hospital cardiac arrest (IHCA) prediction. Key challenges include missingness, imbalance, interpretability, and evaluation under realistic deployment constraints.

Agentic AI and Human-in-the-Loop Systems with AR/VR/MR

Agentic AI pipelines for evidence-aware reasoning and structured outputs, paired with human-in-the-loop workflows that support interactive review, guided annotation, and explainable decision support. AR/VR/MR interfaces are of interest for interactive visualization and annotation in both healthcare and forensic settings.

Forensic Analysis Using AI

AI-driven analysis of craniofacial and dental imaging (including CBCT) for morphometric measurement, population/sex/age estimation, and structured reporting, with an emphasis on multimodal and evidence-centered pipelines that incorporate contextual metadata.

🚀 Prospective Students

The AAR Lab is recruiting motivated M.S. and Ph.D. students with strong programming and mathematical foundations. Interests aligned with vision–language models, multimodal learning, medical imaging, EHR modeling, graph neural networks, agentic AI, and human-in-the-loop AR/VR/MR applications are especially welcome. Please email bryar.shareef[at]unlv.edu with a CV and a brief description of your research goals.