SLAM Robotics

Simultaneous Localization and Mapping
helps robots envision the future

SLAM Demo (coming soon) · 3D Gaussian Splatting Demo (coming soon) · Code

Vision

SLAM Robotics specializes in developing advanced SLAM solutions integrating LiDAR and visual sensors on both quadruped and wheeled robotic platforms. Our systems support robust localization and mapping in dynamic and unstructured environments through tightly coupled sensor fusion.

We are actively researching 3D Gaussian Splatting to accelerate high-fidelity reconstruction and simulation of real-world environments, enabling scalable, photorealistic scene understanding for autonomous agents.
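For readers unfamiliar with the rendering step, the sketch below shows the front-to-back alpha compositing rule at the heart of 3D Gaussian Splatting. The function name and array layout are illustrative for exposition, not our production rasterizer.

```python
# Minimal sketch of 3D Gaussian Splatting's per-pixel compositing:
# C = sum_i c_i * alpha_i * prod_{j<i} (1 - alpha_j), over Gaussians
# sorted front to back. Names and data layout are illustrative.
import numpy as np

def composite_pixel(colors: np.ndarray, alphas: np.ndarray) -> np.ndarray:
    """Blend depth-sorted Gaussians covering one pixel.

    colors: (N, 3) RGB of each Gaussian after projection to this pixel.
    alphas: (N,)   per-pixel opacity, already modulated by the projected
                   2D Gaussian falloff.
    """
    pixel = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for c, a in zip(colors, alphas):
        pixel += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:  # early termination, as in the 3DGS rasterizer
            break
    return pixel
```

The early-termination check mirrors the design choice that makes tile-based splatting rasterizers fast: once a pixel is effectively opaque, the remaining Gaussians contribute nothing visible.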

Our work combines modern SLAM algorithms with real-time semantic 3D mapping, scene graph construction, and navigation-aware perception. We deploy these capabilities on mobile robotic platforms with real-time monitoring and control via a custom iOS interface.
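As one illustration of the scene graph idea, a minimal node layout might look like the following. The class and field names are assumptions made for exposition, not our actual schema.

```python
# Illustrative semantic scene graph: nodes carry a class label and a pose
# in the map frame; containment is modeled as parent/child edges.
# All names here are hypothetical, chosen only to sketch the structure.
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    node_id: int
    label: str                       # semantic class, e.g. "chair"
    pose: tuple                      # (x, y, z) in the map frame
    children: list = field(default_factory=list)

# A room node containing two detected objects:
room = SceneNode(0, "room", (0.0, 0.0, 0.0))
room.children.append(SceneNode(1, "table", (1.2, 0.4, 0.0)))
room.children.append(SceneNode(2, "chair", (1.5, 0.9, 0.0)))
```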

What We Do

We focus on the perception stack of autonomous robots—covering SLAM, semantic mapping, and 3D reconstruction—with emphasis on ROS 2-based systems integration. Our SLAM pipeline incorporates multi-sensor fusion (LiDAR, RGB-D, IMU, and GNSS) and integrates with Navigation2 for fully autonomous exploration and path planning.
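As a concrete example of the Navigation2 integration, the following rclpy sketch sends a single goal through Nav2's standard NavigateToPose action. The node name, variable names, and goal coordinates are placeholders.

```python
# Hedged sketch: sending one navigation goal to Nav2's NavigateToPose
# action server (the standard Nav2 interface) from a rclpy node.
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose
from geometry_msgs.msg import PoseStamped

class GoalSender(Node):
    def __init__(self):
        super().__init__('goal_sender')
        self._client = ActionClient(self, NavigateToPose, 'navigate_to_pose')

    def send_goal(self, x: float, y: float):
        goal = NavigateToPose.Goal()
        goal.pose = PoseStamped()
        goal.pose.header.frame_id = 'map'   # goal expressed in the map frame
        goal.pose.pose.position.x = x
        goal.pose.pose.position.y = y
        goal.pose.pose.orientation.w = 1.0  # identity orientation
        self._client.wait_for_server()
        return self._client.send_goal_async(goal)

def main():
    rclpy.init()
    node = GoalSender()
    future = node.send_goal(2.0, 1.0)       # placeholder goal
    rclpy.spin_until_future_complete(node, future)
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```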

We also explore online 3D reconstruction techniques for map densification, loop closure with semantic consistency, and real-time relocalization using learned descriptors. Our team actively contributes to scalable 3D map representation formats compatible with long-term autonomy and simulation-in-the-loop systems.
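The matching core of such a relocalization step can be summarized in a few lines: compare the current frame's learned global descriptor against stored keyframe descriptors and accept the best match above a threshold. The function, database layout, and threshold below are illustrative assumptions, not our deployed system.

```python
# Minimal sketch of descriptor-based relocalization via cosine similarity
# over L2-normalized learned descriptors. Threshold and layout are
# illustrative assumptions.
import numpy as np

def relocalize(query: np.ndarray, db_descs: np.ndarray, min_score: float = 0.8):
    """Return the index of the best-matching keyframe, or None.

    query:    (D,)   L2-normalized descriptor of the current frame.
    db_descs: (N, D) L2-normalized descriptors of mapped keyframes.
    """
    scores = db_descs @ query          # cosine similarity (unit vectors)
    best = int(np.argmax(scores))
    return best if scores[best] >= min_score else None
```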

A key part of our platform is the integration of robotics middleware with mobile systems. We leverage the ROS2iOS bridge to develop iOS applications for real-time state monitoring, teleoperation, and visualization of robot maps and trajectories.
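On the robot side, teleoperation commands from a mobile client ultimately land on a standard velocity interface. The sketch below shows a conventional geometry_msgs/Twist publisher on /cmd_vel; the topic name follows common ROS 2 convention, and the node name and values are placeholders.

```python
# Sketch of the robot-side velocity interface a mobile teleop client
# would drive: a standard geometry_msgs/Twist publisher on /cmd_vel.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class TeleopRelay(Node):
    def __init__(self):
        super().__init__('teleop_relay')
        self._pub = self.create_publisher(Twist, 'cmd_vel', 10)

    def drive(self, linear_x: float, angular_z: float):
        msg = Twist()
        msg.linear.x = linear_x    # forward speed, m/s
        msg.angular.z = angular_z  # yaw rate, rad/s
        self._pub.publish(msg)

def main():
    rclpy.init()
    node = TeleopRelay()
    node.drive(0.3, 0.0)  # placeholder: nudge forward
    rclpy.spin_once(node, timeout_sec=0.1)
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```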

Core Areas: SLAM (LiDAR, vision sensors), Semantic 3D Mapping, 3D Gaussian Splatting, ROS 2-to-iOS Integration
