Building intelligent systems at the intersection of computer vision, deep learning, and edge computing. 4+ years of experience turning research into real-world impact — from air force runways to autonomous vehicles.
I'm a Master's student in Artificial Intelligence at Northeastern University's Khoury College of Computer Sciences, graduating May 2027. My journey began with building robots that could see and understand the world — and that curiosity has only deepened since.
My most significant work to date is a Foreign Object Debris detection system for the Indian Air Force Academy — a custom YOLOv8 model running on an NVIDIA Jetson Xavier NX achieving 93% precision at ~21 FPS, deployed on an autonomous EV runway patrol vehicle.
I'm deeply interested in the intersection of computer vision, GANs, and edge computing — building systems that work in the real world, not just on benchmarks. I hold an AWS certification and have filed 5 patents, one of which has been commercialised.
Providing real-time technical and AV support to professors during classroom instruction at Khoury College of Computer Sciences. Managing ServiceNow ticketing for AV and classroom tech issues, ensuring zero-downtime teaching experiences across the department.
Led the design and deployment of an autonomous Foreign Object Debris detection vehicle to enhance runway safety. Built a custom YOLOv8 model with hyperparameter tuning deployed on an NVIDIA Jetson Xavier NX, paired with e-con Systems cameras for real-time video processing. The vehicle rides on an electric platform for wobble-free movement and autonomously patrols airstrips, identifying hazards that could damage aircraft.
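For the curious, the detection side of a system like this boils down to filtering raw model outputs by confidence and suppressing overlapping boxes. YOLOv8 handles this internally; the sketch below is an illustrative, dependency-free version of that post-processing idea, with the thresholds chosen as assumptions rather than the values used in the deployed system.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_detections(dets, conf_thresh=0.5, iou_thresh=0.45):
    """Confidence filter plus greedy non-max suppression.
    dets: list of (box, score) with box = (x1, y1, x2, y2)."""
    dets = sorted((d for d in dets if d[1] >= conf_thresh),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        # Keep a box only if it doesn't heavily overlap a higher-scoring one.
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
    return kept
```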
Designed and executed a full computer vision pipeline for retail shelf product detection. Defined image collection criteria, annotated datasets with LabelImg, stored image data on Amazon S3, and ingested product metadata into MongoDB. Built a custom TensorFlow Object Detection API model using transfer learning, achieving strong mAP scores for real-time product recognition on retail shelves.
Designed and built a greeting robot combining facial recognition, computer vision, and IoT hardware. Implemented OpenCV for visual processing, reducing processing time by 25%, and used SVMs for facial classification achieving a 15% accuracy improvement. Integrated Raspberry Pi, Pi cameras, servo motors, and temperature sensors for a fully interactive visitor experience.
A full ADAS suite with six components: drowsiness detection, lane detection, lane departure warning, lane keeping assist, object recognition, and collision warning — all running on Jetson hardware with real-time inference.
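Drowsiness detection is commonly built on the eye aspect ratio (EAR) over facial landmarks — whether this project used exactly that approach isn't stated above, so treat the following as a minimal sketch of the standard technique. The threshold and frame count are assumed values that would be tuned per camera and driver in practice.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks (p1..p6), ordered as in the
    classic 68-point facial landmark layout: low EAR = closed eye."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

EAR_THRESHOLD = 0.21      # assumed cutoff between open and closed
CLOSED_FRAMES_ALERT = 48  # assumed: roughly 2 s of closure at 24 FPS

def update_drowsiness(ear, closed_frames):
    """Per-frame state update; returns (new_closed_frames, alert)."""
    closed_frames = closed_frames + 1 if ear < EAR_THRESHOLD else 0
    return closed_frames, closed_frames >= CLOSED_FRAMES_ALERT
```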
IoT-integrated visitor management robot. Detects and identifies faces using SVM classifiers, greets known visitors with a handshake, measures body temperature via sensor integration, and marks attendance automatically. Published at IEEE ICCCI 2024.
Real-time hand gesture recognition system that replaces the physical mouse. Index finger controls the cursor, multi-finger combinations trigger left click and drag-drop, while thumb-index distance modulates system volume — no hardware required.
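The volume-control part of a system like this reduces to mapping the thumb–index fingertip distance onto a 0–100 level. The sketch below assumes landmarks in normalized image coordinates (as hand-tracking libraries such as MediaPipe provide) and uses assumed calibration bounds; the real system would also call an OS volume API, which is omitted here.

```python
import math

def pinch_distance(thumb_tip, index_tip):
    """Euclidean distance between two (x, y) fingertip landmarks."""
    return math.hypot(thumb_tip[0] - index_tip[0],
                      thumb_tip[1] - index_tip[1])

def distance_to_volume(dist, d_min=0.03, d_max=0.25):
    """Linearly map a normalized pinch distance to a 0-100 volume level.
    d_min/d_max are assumed calibration bounds, clamped at both ends."""
    t = (dist - d_min) / (d_max - d_min)
    return round(100 * min(1.0, max(0.0, t)))
```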
Webcam-based Rock Paper Scissors game where the player competes against the computer using live hand gestures captured and classified in real-time. Finger configuration detection with score tracking and animated UI.
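Once finger states are extracted from hand landmarks, the game logic itself is tiny. This sketch assumes each finger is reported as a 0/1 extended flag (thumb through pinky); the pose-to-move mapping is an illustration, not necessarily the exact configurations the project recognizes.

```python
RPS = {
    (0, 0, 0, 0, 0): "rock",     # all fingers folded
    (1, 1, 1, 1, 1): "paper",    # all fingers extended
    (0, 1, 1, 0, 0): "scissors", # index and middle extended
}
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def classify(fingers):
    """fingers: (thumb, index, middle, ring, pinky) as 0/1 flags.
    Returns the move name, or None for an unrecognized pose."""
    return RPS.get(tuple(fingers))

def round_winner(player, computer):
    """Decide a round between two moves."""
    if player == computer:
        return "draw"
    return "player" if BEATS[player] == computer else "computer"
```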
An intelligent rover that autonomously patrols farmland, captures crop imagery, detects disease at early growth stages using CNNs, alerts farmers with disease names and treatments, and connects them to the nearest testing facilities.
Autonomous electric vehicle for airstrip safety. Custom YOLOv8 with hyperparameter tuning, live inference on NVIDIA Jetson Xavier NX, capturing high-res imagery via e-con cameras. Achieved 93% precision, recognised by the Indian Air Force Academy.
Graduating May 2027. Actively seeking AI/ML Engineer roles in Boston and beyond.