[Multiple Versions Selection] The ROSMASTER X3 is available in multiple versions; please check the version information carefully to avoid purchasing the wrong one.
[Features of ROSMASTER X3] The ROSMASTER X3 features a suspended aluminum alloy chassis, 65 mm Mecanum wheels, and a SLAMTEC lidar, and is offered in configurations that include the Astra Pro Plus depth camera, a voice module, and a touch screen. The Mecanum wheels give this compact robot car effortless omnidirectional movement for autonomous driving.
[Deep Learning with ROS1 and ROS2] The ROSMASTER X3 robot kit for adults can be used with Raspberry Pi, Jetson Nano, or Jetson Orin development boards running the Robot Operating System (ROS) for deep learning. Combined with MediaPipe development, it supports YOLO model training and uses TensorRT for acceleration, enabling a variety of 3D machine vision applications, including autonomous driving, human feature recognition, and KCF object tracking.
[Impressive Functions] The powerful configuration of the ROSMASTER X3 enables precise multi-point navigation, dynamic obstacle avoidance, and TEB path planning. Using 3D vision, it can capture point clouds of the environment and perform RTAB-Map 3D mapping and navigation. The custom voice module enables engaging human-robot interaction applications, including voice wake-up, 360° sound source localization, and voice-controlled map navigation.
[Detailed Tutorials] We provide an extensive collection of bilingual tutorials and online technical support. The tutorials cover a wide range of topics, including setup, the Linux operating system, ROS, OpenCV, depth cameras, lidar mapping and navigation, SLAM algorithms, 3D visual interaction, and voice interaction, helping you delve into robotics from the basics and gradually advance to more complex concepts.
Product Specifications
Product Dimensions: 2 x 2 x 2 inches
ASIN: B0CCYHY8QJ
Manufacturer recommended age: 18 years and up
Best Sellers Rank: #1,447,539 in Toys & Games; #1,634 in Remote- & App-Controlled Robots
Manufacturer: Yahboom
Product description
ROSMASTER X3 is an educational robot based on the Robot Operating System (ROS) with Mecanum wheels, compatible with the Jetson NANO/Xavier NX/TX2 NX and Raspberry Pi 4B. It is equipped with a lidar, a depth camera, a voice interaction module, and other high-performance hardware modules. Using Python programming, ROSMASTER X3 can perform mapping and navigation, following and obstacle avoidance, autopilot, and human body posture detection. It supports APP remote control, APP mapping navigation, gamepad remote control, ROS PC control, and other cross-platform control methods. We provide 103 video courses and a large amount of example code, allowing users to learn artificial intelligence programming and the ROS system.
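As a minimal illustration of the Python-plus-ROS workflow, the sketch below publishes velocity commands from a rospy node. It assumes a ROS1 environment and a standard /cmd_vel topic; the actual topic name used by the ROSMASTER X3 driver may differ.

```python
#!/usr/bin/env python
# Minimal sketch: drive a Mecanum-wheel base by publishing velocity commands.
# Assumes ROS1 (rospy) and a /cmd_vel topic; the X3's actual topic may differ.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("x3_demo_drive")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

cmd = Twist()
cmd.linear.x = 0.2   # forward speed (m/s)
cmd.linear.y = 0.1   # sideways speed, possible thanks to the Mecanum wheels
cmd.angular.z = 0.0  # no rotation

rate = rospy.Rate(10)          # publish at 10 Hz
start = rospy.Time.now()
while not rospy.is_shutdown() and (rospy.Time.now() - start).to_sec() < 3.0:
    pub.publish(cmd)           # send the velocity command
    rate.sleep()

pub.publish(Twist())           # stop the robot
```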
ORB-SLAM2 + Octomap mapping: ORB-SLAM2 is an open-source SLAM framework that supports monocular, stereo, and RGB-D cameras. It computes the camera pose in real time while building a sparse 3D reconstruction of the surrounding environment. In RGB-D mode, the true scale of the scene can be recovered.
RTAB-Map 3D visual mapping and navigation: Using the RTAB-Map algorithm to fuse vision and lidar, the robot performs 3D visual mapping, navigation, and obstacle avoidance, with support for global relocalization and autonomous localization.
MediaPipe development: Using the MediaPipe framework, the robot implements hand detection, pose detection, holistic detection, face detection, and 3D object detection and recognition.
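As a rough sketch of the kind of MediaPipe code involved, the example below runs hand detection on a webcam stream; the camera index and window handling are assumptions for illustration, not the X3 course code.

```python
# Minimal sketch: MediaPipe hand detection on a webcam stream.
# Assumes the mediapipe and opencv-python packages; camera index 0 is an assumption.
import cv2
import mediapipe as mp

cap = cv2.VideoCapture(0)
hands = mp.solutions.hands.Hands(max_num_hands=2)
drawer = mp.solutions.drawing_utils

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB images; OpenCV captures BGR
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for lm in results.multi_hand_landmarks:
            drawer.draw_landmarks(frame, lm, mp.solutions.hands.HAND_CONNECTIONS)
    cv2.imshow("hands", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```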
Lidar mapping, navigation, and obstacle avoidance: Supports the gmapping, hector, karto, and cartographer mapping algorithms, along with path planning, dynamic obstacle avoidance, and single-point and multi-point navigation.
Multi-robot navigation: Multiple robots share the same map and perform single-point navigation, multi-point navigation, and dynamic obstacle avoidance.
ORB-SLAM2 mapping: Fully automatic initialization using the ORB feature extraction method.
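For illustration, the sketch below extracts the same kind of ORB features with OpenCV; the image path is a placeholder, and this only shows the feature step, not the full SLAM pipeline.

```python
# Minimal sketch: ORB feature extraction with OpenCV, the same kind of
# features ORB-SLAM2 uses for initialization and tracking.
import cv2

img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder image path
orb = cv2.ORB_create(nfeatures=1000)                   # detector + descriptor
keypoints, descriptors = orb.detectAndCompute(img, None)

vis = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imwrite("orb_keypoints.jpg", vis)
print("detected", len(keypoints), "ORB keypoints")
```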
Depth image data / point cloud: The camera's depth map, color image, and point cloud can be obtained through the corresponding ROS nodes.
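A minimal sketch of reading those images in ROS1 is shown below. The topic names are typical Astra/ROS defaults and are assumptions; check rostopic list on the robot for the exact names.

```python
# Minimal sketch: subscribing to the depth camera's color and depth images.
# Topic names are assumptions; verify them with `rostopic list` on the robot.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_color(msg):
    color = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    rospy.loginfo_throttle(1.0, "color image %dx%d" % (color.shape[1], color.shape[0]))

def on_depth(msg):
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    center = depth[depth.shape[0] // 2, depth.shape[1] // 2]
    rospy.loginfo_throttle(1.0, "depth at image center: %s" % str(center))

rospy.init_node("camera_listener")
rospy.Subscriber("/camera/rgb/image_raw", Image, on_color)
rospy.Subscriber("/camera/depth/image_raw", Image, on_depth)
rospy.spin()
```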
Multi-robot synchronized control: One gamepad controls multiple robots in real time, producing neat, uniform, synchronized movements.
Multi-robot formation show: Multiple robots maintain three different formations in real time.
RRT autonomous exploration and mapping: Set an exploration area, and the RRT algorithm explores and maps it autonomously, saves the map, and returns the robot to its starting point.
KCF target tracking: Based on the KCF correlation-filter algorithm, you can select any object in the image and the robot follows that target in real time.
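A minimal OpenCV sketch of this idea is shown below; it uses the KCF tracker from opencv-contrib-python and a webcam, which are assumptions rather than the X3's own tracking node.

```python
# Minimal sketch: select a region and track it with OpenCV's KCF tracker.
# Requires opencv-contrib-python; on some versions the factory is
# cv2.legacy.TrackerKCF_create instead of cv2.TrackerKCF_create.
import cv2

cap = cv2.VideoCapture(0)                  # camera index 0 is an assumption
ok, frame = cap.read()
bbox = cv2.selectROI("select target", frame)
tracker = cv2.TrackerKCF_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)    # updated bounding box each frame
    if found:
        x, y, w, h = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("KCF tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```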
Autopilot: Supports custom color selection; the robot automatically identifies the colored area and follows the line.
Color recognition/tracking: Select a specific color area on the screen and the robot tracks that color in real time.
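The sketch below illustrates the general approach with an HSV threshold in OpenCV; the color range and camera index are example assumptions, and the X3's own node adds motion control on top of this.

```python
# Minimal sketch: locate a colored region via an HSV threshold (OpenCV 4).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)             # camera index 0 is an assumption
lower = np.array([0, 120, 100])       # example lower HSV bound (red-ish hues)
upper = np.array([10, 255, 255])      # example upper HSV bound

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # box the largest matching region
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("color tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```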
AR tag recognition: Supports dynamic detection and tracking of AR tags (QR-code-style markers) and obtains the tag's pose coordinates in real time.
Augmented reality: Select a graphic in the APP, and AR rendering makes it appear on the checkerboard paper.
Visual image beautification: The video stream is transformed with OpenCV, applying the corresponding algorithm to achieve an image beautification effect.
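As one hedged example of such a transform, the sketch below applies an edge-preserving bilateral filter; the actual filters used in the X3 courses may differ, and the image path is a placeholder.

```python
# Minimal sketch: a simple "beautification" pass using a bilateral filter,
# which smooths textures while preserving edges.
import cv2

frame = cv2.imread("face.jpg")                    # placeholder image path
smooth = cv2.bilateralFilter(frame, 9, 75, 75)    # diameter, sigmaColor, sigmaSpace
cv2.imwrite("face_beautified.jpg", smooth)
```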
Lidar guard: The target closest to the lidar is locked on, and the front of the robot car always turns to face it.
Lidar obstacle avoidance: The lidar scans the surrounding environment in real time and plans a path to avoid obstacles.
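The sketch below shows a toy version of this behavior in ROS1: it turns away when the lidar sees something close ahead. The /scan and /cmd_vel topic names and the zero-radians-forward convention are assumptions; the real X3 planner is more sophisticated.

```python
# Minimal sketch: reactive lidar obstacle avoidance with rospy.
# Assumes a LaserScan on /scan, a Twist command on /cmd_vel, and 0 rad = forward.
import math
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

rospy.init_node("simple_lidar_avoid")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

def on_scan(scan):
    cmd = Twist()
    # keep only valid ranges within a 40-degree cone in front of the robot
    front = [r for i, r in enumerate(scan.ranges)
             if abs(scan.angle_min + i * scan.angle_increment) < math.radians(20)
             and not math.isinf(r) and not math.isnan(r)]
    if front and min(front) < 0.4:      # obstacle closer than 0.4 m ahead
        cmd.angular.z = 0.6             # turn away from it
    else:
        cmd.linear.x = 0.2              # otherwise keep driving forward
    pub.publish(cmd)

rospy.Subscriber("/scan", LaserScan, on_scan)
rospy.spin()
```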