I am a Master's student in Robotics & AI at Georgia Tech. Currently, I am a researcher at the LIDAR Lab, supervised by Prof. Ye Zhao and Zhaoyuan Gu, working on learning-based control and motion planning for embodied AI. My work spans the full robotics pipeline, from algorithm design and simulation to real-world deployment, developing scalable sim-to-real frameworks for loco-manipulation tasks.
Previously, I worked at Samsung as a Robotics & ML Software Engineer for two years, where I led end-to-end development of autonomous systems, from perception and motion planning to real-world deployment on a variety of robots. I received my Bachelor's in Mechanical Engineering from Seoul National University, during which I was a research intern at the Dynamic Robotics System Lab, advised by Prof. Jaeheung Park and Dr. Daegyu Lim, focusing on reinforcement learning guided by model-based priors for legged robots. I also interned at the Soft Robotics & Bionics Lab under the guidance of Prof. Yong-Lae Park, working on soft robotic multi-modal sensing for industrial robots.
I also co-organized AI Tech Play, a non-profit organization dedicated to AI education, and hosted the first nationwide AI camp focused on autonomous racing competitions for high school students.
I'm currently looking for internships, feel free to reach out!
Developed hierarchical control framework combining diffusion policies with RL fine-tuning to address distribution shift in contact-rich humanoid tasks, achieving 85% success rate on door opening, object transport (up to 5kg), and dynamic climbing on Booster platform (Submitted to RA-L/IROS).
Built Dockerized cloud infrastructure and custom Isaac Lab environments for 31-DOF humanoid training, enabling 2x faster iteration cycles and reproducible training across distributed compute clusters.
Designed VR teleoperation system with real-time retargeting for expert demonstration collection, reducing data collection time by 60% and generating 500+ high-quality demonstrations for policy training.
PyTorch · Isaac Sim/Lab · USD · Docker · VR Systems · MuJoCo · Diffusion Models · PPO/SAC
Experience
Samsung Robotics & ML Software Engineer Mar 2024 - Aug 2025 (1 yr 6 mos)
Spearheaded end-to-end YOLOv8 perception pipeline for mobile robots in harsh industrial environments, from dataset creation (10K+ images) to on-device optimization and CI/CD integration, achieving 92% detection accuracy with 30ms inference latency across 5+ Samsung sites.
Led development of precision control and visual SLAM-based localization for a 7-DOF manipulator in GPS-denied environments, reducing positioning error by 15% and earning a $10,000 award in the Korean government's Smart Construction Challenge.
Engineered ROS2 autonomous navigation stack combining RRT* and Hybrid A* planning with real-time obstacle avoidance, achieving 92% successful delivery rate across 300+ robot fleet handling 200kg payloads in dynamic factory environments.
Built Isaac Sim digital twin workflows with domain randomization for synthetic data generation, reducing data collection costs by 40% while improving model generalization through cross-team sim-to-real validation.
PyTorch · ROS2 · Isaac Sim · YOLOv8 · SLAM · Embedded ML · CI/CD
Developed real-time heat anomaly detection system for semiconductor manufacturing equipment using custom ML architecture and GPU-accelerated pipelines, achieving 95% detection accuracy with <100ms latency for proactive maintenance alerts.
Deployed and optimized Segment Anything (SAM) on industrial edge hardware for collision-aware cluttered-bin retrieval, achieving 3x inference speedup (1.2s to 400ms) through quantization and TensorRT acceleration while maintaining 92% segmentation IoU.
Engineered sensor-fusion collision avoidance system for AGVs combining IMU, LiDAR, and camera data, reducing collision incidents by 80% and enabling safe operation at 1.5m/s in dynamic factory environments.
PyTorch · SAM · OpenCV · GPU Optimization · Sensor Fusion · Real-Time Systems
Proposed novel model-free RL sim-to-real framework for energy-efficient bipedal locomotion with weak actuators, combining meta-RL optimization with physics-based actuator models to achieve 19% speed improvement and 22% energy cost reduction.
Developed software and system integration for capacitive touch-sensing grid as force-control interface for industrial sewing robots, increasing operation speed by 20% and reducing operator training time by 30% through real-time feedback optimization.
Designed sensor-fusion algorithms with filtering and calibration for stable control signals, collaborating cross-functionally on Software-in-the-Loop validation to reduce control latency by 40% (50ms to 30ms) and improve measurement stability by 25%.
Python · C++ · Sensor Fusion · Capacitive Sensing · Real-Time Control
Research
My research centers on robot learning for contact-rich manipulation in real-world environments. I develop scalable frameworks that combine physics-aware learning with learned representations for long-horizon reasoning, enabling robots to acquire dexterous and agile motor skills. Ultimately, I aim to bridge the gap between human and robot capabilities—empowering machines to perform complex tasks in unpredictable settings with human-like adaptability.
Developed joint optimization framework combining diffusion policies with reinforcement learning for humanoid loco-manipulation. The approach fine-tunes offline diffusion policies with online RL interaction, adapting to new scenarios beyond training data. Achieved 85% success rate on door opening, box transport (up to 5kg), and table climbing tasks on Booster humanoid platform. System demonstrates robust whole-body coordination in contact-rich scenarios beyond training distribution.
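The fine-tuning idea above can be illustrated with a minimal sketch. Everything here is a toy stand-in, not the actual system: dimensions, network sizes, the Euler-style denoising loop, and the REINFORCE-style reward-weighted objective are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, N_STEPS = 8, 4, 5   # toy sizes, not the real humanoid setup

class Denoiser(nn.Module):
    """Small conditional denoiser standing in for the diffusion policy."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM + 1, 64), nn.ReLU(),
            nn.Linear(64, ACT_DIM))

    def forward(self, obs, noisy_act, t):
        return self.net(torch.cat([obs, noisy_act, t], dim=-1))

def sample_action(policy, obs):
    """Iteratively denoise Gaussian noise into an action, conditioned on obs."""
    act = torch.randn(obs.shape[0], ACT_DIM)
    for k in reversed(range(N_STEPS)):
        t = torch.full((obs.shape[0], 1), k / N_STEPS)
        act = act - policy(obs, act, t) / N_STEPS   # crude Euler-style update
    return act

def finetune_step(policy, opt, obs, taken_acts, rewards):
    """One online update: treat the denoised action as a Gaussian mean and
    apply a reward-weighted log-likelihood loss (a stand-in objective)."""
    mean = sample_action(policy, obs)
    logp = -((taken_acts - mean) ** 2).sum(-1)      # unnormalized log-prob
    loss = -(rewards * logp).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)

policy = Denoiser()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
obs, acts, rew = torch.randn(16, OBS_DIM), torch.randn(16, ACT_DIM), torch.randn(16)
loss = finetune_step(policy, opt, obs, acts, rew)
```

The key point the sketch captures is that the same denoiser trained offline is reused as the policy during online interaction, so RL gradients flow back through the sampling loop.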
Led development of an autonomous drilling robot for construction sites, deploying rule-based computer vision and motion planning for precise surface drilling with ±2mm accuracy. System handles 20 kg payloads and operates in GPS-denied cluttered environments, reducing human exposure to hazardous tasks by 80%. Deployed across 5+ Samsung factory sites.
Led development and deployment of adaptive AMR fleet (300+ robots) for material transport in construction sites. Implemented safety-aware navigation achieving 92% successful delivery rate in dynamic, GPS-denied environments. System handles 200kg payloads with real-time obstacle avoidance and multi-robot coordination.
Developed meta-RL optimization framework combining model-free learning with physics-based actuator models for bipedal locomotion. Achieved 40% energy reduction in simulated bipedal walking while optimizing parallel elastic actuator stiffness parameters. Framework demonstrated successful sim-to-real transfer potential for weak actuation scenarios.
Developed Task-Invariant Agent (TIA) network for multi-task RL, enabling rapid adaptation to new tasks using model dynamics. The architecture integrates a modified DQN policy network, an encoder for latent task representation from experience sequences, and a model predictor for system dynamics. Achieved 3x faster adaptation to new reward functions compared to baseline DQN, demonstrating robust generalization across CartPole task variants.
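The three components described above fit together roughly as in this sketch (toy dimensions; the GRU encoder and layer sizes are assumptions for illustration, not the original network):

```python
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS, LATENT_DIM = 4, 2, 8   # CartPole-like toy sizes

class TaskEncoder(nn.Module):
    """Encode a sequence of (obs, action, reward) transitions into a latent task vector."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(OBS_DIM + N_ACTIONS + 1, LATENT_DIM, batch_first=True)

    def forward(self, transitions):       # (batch, seq_len, obs + act + reward)
        _, h = self.rnn(transitions)
        return h[-1]                      # (batch, LATENT_DIM)

class TIA(nn.Module):
    """Task-conditioned Q-network plus a dynamics (model) predictor."""
    def __init__(self):
        super().__init__()
        self.encoder = TaskEncoder()
        self.q_net = nn.Sequential(
            nn.Linear(OBS_DIM + LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))
        self.dynamics = nn.Sequential(
            nn.Linear(OBS_DIM + N_ACTIONS + LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, OBS_DIM))

    def forward(self, obs, transitions, act_onehot):
        z = self.encoder(transitions)                                 # latent task
        q = self.q_net(torch.cat([obs, z], dim=-1))                   # task-aware Q
        next_obs = self.dynamics(torch.cat([obs, act_onehot, z], dim=-1))
        return q, next_obs

model = TIA()
obs = torch.randn(3, OBS_DIM)
seq = torch.randn(3, 10, OBS_DIM + N_ACTIONS + 1)
act = torch.eye(N_ACTIONS)[torch.tensor([0, 1, 0])]
q, pred = model(obs, seq, act)
```

Because the Q-head is conditioned on the inferred latent, adapting to a new reward function reduces to re-inferring the task code from a short experience sequence rather than retraining the policy.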
Implemented Heuristics Integrated Deep RL approach for online 2D bin packing with placement constraints. Trained PPO agent to learn optimal packing strategies that outperform traditional heuristics. Achieved 15% improvement in space utilization over baseline greedy algorithms.
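One common way to integrate heuristics with a learned packing policy is a feasibility mask over placements, which the agent's logits are restricted to. A small sketch under assumed bin dimensions and a skyline state representation (not the original environment):

```python
import numpy as np

BIN_W, BIN_H = 10, 10   # assumed bin size for illustration

def placement_mask(skyline, w, h):
    """For each left edge x, check whether a (w x h) item fits: it rests on
    the max skyline height beneath it and must not exceed the bin height."""
    mask = np.zeros(BIN_W, dtype=bool)
    for x in range(BIN_W - w + 1):
        base = skyline[x:x + w].max()
        mask[x] = base + h <= BIN_H
    return mask

def place(skyline, x, w, h):
    """Drop the item at column x and update the skyline."""
    base = skyline[x:x + w].max()
    out = skyline.copy()
    out[x:x + w] = base + h
    return out

sk = np.zeros(BIN_W, dtype=int)
m = placement_mask(sk, 3, 4)   # feasible left edges for a 3x4 item
sk = place(sk, 0, 3, 4)
```

In a PPO setup, the mask would zero out (or set to -inf) the logits of infeasible placements before sampling, so the agent only explores constraint-satisfying actions.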
Implemented 10+ humanoid control algorithms including ZMP-based walking pattern generation, Linear Inverted Pendulum Model, preview control, and whole-body operational space control. Developed CoM estimation using complementary filters and capture point-based stabilization for dynamic walking on simulated bipedal robots.
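The LIPM and capture-point ideas mentioned above reduce to a few lines. A minimal sketch, assuming a constant CoM height of 0.8 m and 1D dynamics:

```python
import math

G, Z_COM = 9.81, 0.8              # gravity; assumed constant CoM height
OMEGA = math.sqrt(G / Z_COM)      # LIPM natural frequency

def lipm_step(x, xdot, p, dt):
    """One Euler step of the Linear Inverted Pendulum Model:
    x_ddot = omega^2 * (x - p), where p is the ZMP location."""
    xddot = OMEGA ** 2 * (x - p)
    return x + xdot * dt, xdot + xddot * dt

def capture_point(x, xdot):
    """Instantaneous capture point: where to place the foot to come to rest."""
    return x + xdot / OMEGA

# Placing the ZMP at the capture point drives the CoM velocity to zero
x, xdot = 0.0, 0.3
for _ in range(2000):
    x, xdot = lipm_step(x, xdot, capture_point(x, xdot), 0.001)
```

A nice property visible even in this discrete simulation: under this control law the capture point itself stays constant while the velocity decays exponentially, which is the basis of capture-point stabilization.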
C++ · MATLAB · Whole-Body Control · QP Solvers · Trajectory Optimization
Developed autonomous driving system for RC Car Racing Challenge using LiDAR-only perception for mapless navigation. Implemented behavior cloning with Gaussian Process Regression to learn driving policy from expert demonstrations. Trained end-to-end control policy mapping raw sensor observations to steering and throttle commands. Achieved top-3 finish in class competition with average lap speed of 2.5 m/s while maintaining safe wall clearance of 15cm.
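Behavior cloning with GP regression amounts to fitting (scan, expert command) pairs and predicting the posterior mean at test time. A self-contained sketch with synthetic stand-in data (the feature dimensions, kernel length scale, and noise level are assumptions, not the competition setup):

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

class GPPolicy:
    """Behavior cloning via GP regression over expert (scan -> command) pairs."""
    def __init__(self, scans, commands, noise=1e-3):
        self.X = scans
        K = rbf(scans, scans) + noise * np.eye(len(scans))
        self.alpha = np.linalg.solve(K, commands)   # precompute K^-1 y

    def act(self, scan):
        return rbf(scan[None, :], self.X) @ self.alpha   # posterior mean

rng = np.random.default_rng(0)
scans = rng.normal(size=(100, 16))            # stand-in for downsampled LiDAR
expert = np.stack([np.tanh(scans[:, 0]),      # steering
                   0.5 + 0.1 * scans[:, 1]],  # throttle
                  axis=1)
policy = GPPolicy(scans, expert)
cmd = policy.act(scans[0])                    # (1, 2) steering/throttle command
```

One practical advantage of the GP formulation is that the posterior variance (not computed here) gives an uncertainty signal that can gate a safety fallback when the policy is far from the demonstrations.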
Led research estimating soil health from satellite imagery for agricultural policy enforcement. Managed $8K grant and directed field surveys across 50+ sites, collecting 100GB of GIS and satellite data. Engineered 30+ novel features from multi-spectral satellite data and GIS sources, training machine learning models (XGBoost, Random Forest) for regression. Achieved 60% improvement over baseline estimates (R² = 0.78) using ensemble methods.
Developed novel video generation algorithm that synthesizes realistic video sequences from a single input image using sequential structure learning. Integrated optical flow estimation with temporal consistency constraints to eliminate awkward motion artifacts common in frame-by-frame generation approaches.