Code supplement

main
zhaoxin 2 months ago
parent 7d064d8294
commit a4754b1c71

@ -0,0 +1,101 @@
# Details
Date : 2025-06-30 19:06:11
Directory d:\\软件工程\\实践代码\\UVPTCCS
Total : 86 files, 3306 codes, 337 comments, 605 blanks, all 4248 lines
[Summary](results.md) / Details / [Diff Summary](diff.md) / [Diff Details](diff-details.md)
## Files
| filename | language | code | comment | blank | total |
| :--- | :--- | ---: | ---: | ---: | ---: |
| [doc/README.md](/doc/README.md) | Markdown | 1 | 0 | 2 | 3 |
| [src/laserscan\_to\_point\_pulisher/README.md](/src/laserscan_to_point_pulisher/README.md) | Markdown | 74 | 0 | 17 | 91 |
| [src/laserscan\_to\_point\_pulisher/laserscan\_to\_point\_pulisher/\_\_init\_\_.py](/src/laserscan_to_point_pulisher/laserscan_to_point_pulisher/__init__.py) | Python | 0 | 0 | 1 | 1 |
| [src/laserscan\_to\_point\_pulisher/laserscan\_to\_point\_pulisher/laserscan\_to\_point\_publish.py](/src/laserscan_to_point_pulisher/laserscan_to_point_pulisher/laserscan_to_point_publish.py) | Python | 47 | 3 | 16 | 66 |
| [src/laserscan\_to\_point\_pulisher/package.xml](/src/laserscan_to_point_pulisher/package.xml) | XML | 16 | 0 | 3 | 19 |
| [src/laserscan\_to\_point\_pulisher/setup.cfg](/src/laserscan_to_point_pulisher/setup.cfg) | Properties | 4 | 0 | 1 | 5 |
| [src/laserscan\_to\_point\_pulisher/setup.py](/src/laserscan_to_point_pulisher/setup.py) | Python | 24 | 0 | 3 | 27 |
| [src/laserscan\_to\_point\_pulisher/test/test\_copyright.py](/src/laserscan_to_point_pulisher/test/test_copyright.py) | Python | 7 | 13 | 4 | 24 |
| [src/laserscan\_to\_point\_pulisher/test/test\_flake8.py](/src/laserscan_to_point_pulisher/test/test_flake8.py) | Python | 9 | 13 | 4 | 26 |
| [src/laserscan\_to\_point\_pulisher/test/test\_pep257.py](/src/laserscan_to_point_pulisher/test/test_pep257.py) | Python | 7 | 13 | 4 | 24 |
| [src/yahboom\_app\_save\_map/README.md](/src/yahboom_app_save_map/README.md) | Markdown | 89 | 0 | 30 | 119 |
| [src/yahboom\_app\_save\_map/launch/yahboom\_app\_save\_map.launch.py](/src/yahboom_app_save_map/launch/yahboom_app_save_map.launch.py) | Python | 17 | 1 | 5 | 23 |
| [src/yahboom\_app\_save\_map/package.xml](/src/yahboom_app_save_map/package.xml) | XML | 17 | 0 | 3 | 20 |
| [src/yahboom\_app\_save\_map/setup.cfg](/src/yahboom_app_save_map/setup.cfg) | Properties | 4 | 0 | 1 | 5 |
| [src/yahboom\_app\_save\_map/setup.py](/src/yahboom_app_save_map/setup.py) | Python | 26 | 0 | 3 | 29 |
| [src/yahboom\_app\_save\_map/test/test\_copyright.py](/src/yahboom_app_save_map/test/test_copyright.py) | Python | 7 | 13 | 4 | 24 |
| [src/yahboom\_app\_save\_map/test/test\_flake8.py](/src/yahboom_app_save_map/test/test_flake8.py) | Python | 9 | 13 | 4 | 26 |
| [src/yahboom\_app\_save\_map/test/test\_pep257.py](/src/yahboom_app_save_map/test/test_pep257.py) | Python | 7 | 13 | 4 | 24 |
| [src/yahboom\_app\_save\_map/yahboom\_app\_save\_map/\_\_init\_\_.py](/src/yahboom_app_save_map/yahboom_app_save_map/__init__.py) | Python | 0 | 0 | 1 | 1 |
| [src/yahboom\_app\_save\_map/yahboom\_app\_save\_map/yahboom\_app\_save\_map.py](/src/yahboom_app_save_map/yahboom_app_save_map/yahboom_app_save_map.py) | Python | 45 | 1 | 12 | 58 |
| [src/yahboom\_app\_save\_map/yahboom\_app\_save\_map/yahboom\_app\_save\_map\_client.py](/src/yahboom_app_save_map/yahboom_app_save_map/yahboom_app_save_map_client.py) | Python | 36 | 0 | 12 | 48 |
| [src/yahboomcar\_ctrl/launch/keyboart\_ctrl\_launch-Copy1.py](/src/yahboomcar_ctrl/launch/keyboart_ctrl_launch-Copy1.py) | Python | 9 | 0 | 2 | 11 |
| [src/yahboomcar\_ctrl/launch/yahboomcar\_joy\_launch.py](/src/yahboomcar_ctrl/launch/yahboomcar_joy_launch.py) | Python | 9 | 0 | 2 | 11 |
| [src/yahboomcar\_ctrl/package.xml](/src/yahboomcar_ctrl/package.xml) | XML | 16 | 0 | 3 | 19 |
| [src/yahboomcar\_ctrl/setup.cfg](/src/yahboomcar_ctrl/setup.cfg) | Properties | 4 | 0 | 1 | 5 |
| [src/yahboomcar\_ctrl/setup.py](/src/yahboomcar_ctrl/setup.py) | Python | 28 | 0 | 3 | 31 |
| [src/yahboomcar\_ctrl/test/test\_copyright.py](/src/yahboomcar_ctrl/test/test_copyright.py) | Python | 7 | 13 | 4 | 24 |
| [src/yahboomcar\_ctrl/test/test\_flake8.py](/src/yahboomcar_ctrl/test/test_flake8.py) | Python | 9 | 13 | 4 | 26 |
| [src/yahboomcar\_ctrl/test/test\_pep257.py](/src/yahboomcar_ctrl/test/test_pep257.py) | Python | 7 | 13 | 4 | 24 |
| [src/yahboomcar\_ctrl/yahboomcar\_ctrl/\_\_init\_\_.py](/src/yahboomcar_ctrl/yahboomcar_ctrl/__init__.py) | Python | 0 | 0 | 1 | 1 |
| [src/yahboomcar\_ctrl/yahboomcar\_ctrl/yahboom\_joy\_R2.py](/src/yahboomcar_ctrl/yahboomcar_ctrl/yahboom_joy_R2.py) | Python | 126 | 19 | 17 | 162 |
| [src/yahboomcar\_ctrl/yahboomcar\_ctrl/yahboom\_joy\_X3.py](/src/yahboomcar_ctrl/yahboomcar_ctrl/yahboom_joy_X3.py) | Python | 126 | 18 | 15 | 159 |
| [src/yahboomcar\_ctrl/yahboomcar\_ctrl/yahboom\_keyboard.py](/src/yahboomcar_ctrl/yahboomcar_ctrl/yahboom_keyboard.py) | Python | 122 | 4 | 9 | 135 |
| [src/yahboomcar\_nav/launch/cartographer\_launch.py](/src/yahboomcar_nav/launch/cartographer_launch.py) | Python | 54 | 0 | 9 | 63 |
| [src/yahboomcar\_nav/launch/display\_map\_launch.py](/src/yahboomcar_nav/launch/display_map_launch.py) | Python | 23 | 0 | 6 | 29 |
| [src/yahboomcar\_nav/launch/display\_nav\_launch.py](/src/yahboomcar_nav/launch/display_nav_launch.py) | Python | 23 | 0 | 6 | 29 |
| [src/yahboomcar\_nav/launch/display\_rtabmap\_map\_launch.py](/src/yahboomcar_nav/launch/display_rtabmap_map_launch.py) | Python | 23 | 0 | 6 | 29 |
| [src/yahboomcar\_nav/launch/display\_rtabmap\_nav\_launch.py](/src/yahboomcar_nav/launch/display_rtabmap_nav_launch.py) | Python | 23 | 0 | 6 | 29 |
| [src/yahboomcar\_nav/launch/laser\_bringup\_launch.py](/src/yahboomcar_nav/launch/laser_bringup_launch.py) | Python | 65 | 1 | 6 | 72 |
| [src/yahboomcar\_nav/launch/map\_cartographer\_launch.py](/src/yahboomcar_nav/launch/map_cartographer_launch.py) | Python | 14 | 0 | 5 | 19 |
| [src/yahboomcar\_nav/launch/map\_gmapping\_4ros\_s2\_launch.py](/src/yahboomcar_nav/launch/map_gmapping_4ros_s2_launch.py) | Python | 27 | 1 | 5 | 33 |
| [src/yahboomcar\_nav/launch/map\_gmapping\_a1\_launch.py](/src/yahboomcar_nav/launch/map_gmapping_a1_launch.py) | Python | 16 | 0 | 6 | 22 |
| [src/yahboomcar\_nav/launch/map\_gmapping\_launch.py](/src/yahboomcar_nav/launch/map_gmapping_launch.py) | Python | 34 | 0 | 5 | 39 |
| [src/yahboomcar\_nav/launch/map\_rtabmap\_launch.py](/src/yahboomcar_nav/launch/map_rtabmap_launch.py) | Python | 14 | 0 | 5 | 19 |
| [src/yahboomcar\_nav/launch/navigation\_dwa\_launch.py](/src/yahboomcar_nav/launch/navigation_dwa_launch.py) | Python | 31 | 0 | 6 | 37 |
| [src/yahboomcar\_nav/launch/navigation\_rtabmap\_launch.py](/src/yahboomcar_nav/launch/navigation_rtabmap_launch.py) | Python | 21 | 0 | 6 | 27 |
| [src/yahboomcar\_nav/launch/navigation\_teb\_launch.py](/src/yahboomcar_nav/launch/navigation_teb_launch.py) | Python | 31 | 0 | 6 | 37 |
| [src/yahboomcar\_nav/launch/occupancy\_grid\_launch.py](/src/yahboomcar_nav/launch/occupancy_grid_launch.py) | Python | 29 | 0 | 6 | 35 |
| [src/yahboomcar\_nav/launch/rtabmap\_localization\_launch.py](/src/yahboomcar_nav/launch/rtabmap_localization_launch.py) | Python | 66 | 5 | 16 | 87 |
| [src/yahboomcar\_nav/launch/rtabmap\_nav\_launch.py](/src/yahboomcar_nav/launch/rtabmap_nav_launch.py) | Python | 31 | 0 | 6 | 37 |
| [src/yahboomcar\_nav/launch/rtabmap\_sync\_launch.py](/src/yahboomcar_nav/launch/rtabmap_sync_launch.py) | Python | 66 | 13 | 17 | 96 |
| [src/yahboomcar\_nav/launch/rtabmap\_viz\_launch.py](/src/yahboomcar_nav/launch/rtabmap_viz_launch.py) | Python | 40 | 2 | 10 | 52 |
| [src/yahboomcar\_nav/launch/save\_map\_launch.py](/src/yahboomcar_nav/launch/save_map_launch.py) | Python | 26 | 1 | 8 | 35 |
| [src/yahboomcar\_nav/maps/yahboomcar.yaml](/src/yahboomcar_nav/maps/yahboomcar.yaml) | YAML | 7 | 0 | 0 | 7 |
| [src/yahboomcar\_nav/maps/yahboomcar2.yaml](/src/yahboomcar_nav/maps/yahboomcar2.yaml) | YAML | 7 | 0 | 0 | 7 |
| [src/yahboomcar\_nav/package.xml](/src/yahboomcar_nav/package.xml) | XML | 19 | 0 | 4 | 23 |
| [src/yahboomcar\_nav/params/dwa\_nav\_params.yaml](/src/yahboomcar_nav/params/dwa_nav_params.yaml) | YAML | 246 | 12 | 16 | 274 |
| [src/yahboomcar\_nav/params/lds\_2d.lua](/src/yahboomcar_nav/params/lds_2d.lua) | Lua | 38 | 1 | 7 | 46 |
| [src/yahboomcar\_nav/params/rtabmap\_nav\_params.yaml](/src/yahboomcar_nav/params/rtabmap_nav_params.yaml) | YAML | 191 | 3 | 11 | 205 |
| [src/yahboomcar\_nav/params/teb\_nav\_params.yaml](/src/yahboomcar_nav/params/teb_nav_params.yaml) | YAML | 305 | 3 | 21 | 329 |
| [src/yahboomcar\_nav/setup.cfg](/src/yahboomcar_nav/setup.cfg) | Properties | 4 | 0 | 1 | 5 |
| [src/yahboomcar\_nav/setup.py](/src/yahboomcar_nav/setup.py) | Python | 30 | 0 | 3 | 33 |
| [src/yahboomcar\_nav/test/test\_copyright.py](/src/yahboomcar_nav/test/test_copyright.py) | Python | 7 | 13 | 4 | 24 |
| [src/yahboomcar\_nav/test/test\_flake8.py](/src/yahboomcar_nav/test/test_flake8.py) | Python | 9 | 13 | 4 | 26 |
| [src/yahboomcar\_nav/test/test\_pep257.py](/src/yahboomcar_nav/test/test_pep257.py) | Python | 7 | 13 | 4 | 24 |
| [src/yahboomcar\_nav/yahboomcar\_nav/\_\_init\_\_.py](/src/yahboomcar_nav/yahboomcar_nav/__init__.py) | Python | 0 | 0 | 1 | 1 |
| [src/yahboomcar\_nav/yahboomcar\_nav/scan\_filter.py](/src/yahboomcar_nav/yahboomcar_nav/scan_filter.py) | Python | 35 | 4 | 7 | 46 |
| [src/yahboomcar\_slam/CMakeLists.txt](/src/yahboomcar_slam/CMakeLists.txt) | CMake | 62 | 0 | 14 | 76 |
| [src/yahboomcar\_slam/include/yahboomcar\_slam/point\_cloud.h](/src/yahboomcar_slam/include/yahboomcar_slam/point_cloud.h) | C++ | 101 | 2 | 31 | 134 |
| [src/yahboomcar\_slam/launch/camera\_octomap\_launch.py](/src/yahboomcar_slam/launch/camera_octomap_launch.py) | Python | 27 | 0 | 5 | 32 |
| [src/yahboomcar\_slam/launch/display\_octomap\_launch.py](/src/yahboomcar_slam/launch/display_octomap_launch.py) | Python | 23 | 0 | 6 | 29 |
| [src/yahboomcar\_slam/launch/display\_pcl\_launch.py](/src/yahboomcar_slam/launch/display_pcl_launch.py) | Python | 23 | 0 | 6 | 29 |
| [src/yahboomcar\_slam/launch/octomap\_server\_launch.py](/src/yahboomcar_slam/launch/octomap_server_launch.py) | Python | 20 | 0 | 3 | 23 |
| [src/yahboomcar\_slam/launch/orbslam\_base\_launch.py](/src/yahboomcar_slam/launch/orbslam_base_launch.py) | Python | 26 | 5 | 5 | 36 |
| [src/yahboomcar\_slam/launch/orbslam\_pcl\_map\_launch.py](/src/yahboomcar_slam/launch/orbslam_pcl_map_launch.py) | Python | 29 | 0 | 5 | 34 |
| [src/yahboomcar\_slam/launch/orbslam\_pcl\_octomap\_launch.py](/src/yahboomcar_slam/launch/orbslam_pcl_octomap_launch.py) | Python | 34 | 5 | 6 | 45 |
| [src/yahboomcar\_slam/launch/orbslam\_pose\_launch.py](/src/yahboomcar_slam/launch/orbslam_pose_launch.py) | Python | 16 | 0 | 3 | 19 |
| [src/yahboomcar\_slam/launch/orbslam\_ros\_launch.py](/src/yahboomcar_slam/launch/orbslam_ros_launch.py) | Python | 38 | 3 | 6 | 47 |
| [src/yahboomcar\_slam/package.xml](/src/yahboomcar_slam/package.xml) | XML | 25 | 0 | 6 | 31 |
| [src/yahboomcar\_slam/params/mono.yaml](/src/yahboomcar_slam/params/mono.yaml) | YAML | 27 | 19 | 12 | 58 |
| [src/yahboomcar\_slam/params/pointcloud\_map.yaml](/src/yahboomcar_slam/params/pointcloud_map.yaml) | YAML | 16 | 0 | 2 | 18 |
| [src/yahboomcar\_slam/params/pointcloud\_octomap.yaml](/src/yahboomcar_slam/params/pointcloud_octomap.yaml) | YAML | 16 | 0 | 2 | 18 |
| [src/yahboomcar\_slam/params/rgbd.yaml](/src/yahboomcar_slam/params/rgbd.yaml) | YAML | 32 | 22 | 17 | 71 |
| [src/yahboomcar\_slam/params/usb\_cam\_param.yaml](/src/yahboomcar_slam/params/usb_cam_param.yaml) | YAML | 0 | 0 | 1 | 1 |
| [src/yahboomcar\_slam/src/point\_cloud.cpp](/src/yahboomcar_slam/src/point_cloud.cpp) | C++ | 307 | 33 | 44 | 384 |
| [src/yahboomcar\_slam/src/point\_cloud\_main.cpp](/src/yahboomcar_slam/src/point_cloud_main.cpp) | C++ | 13 | 0 | 3 | 16 |
[Summary](results.md) / Details / [Diff Summary](diff.md) / [Diff Details](diff-details.md)

@ -0,0 +1,15 @@
# Diff Details
Date : 2025-06-30 19:06:11
Directory d:\\软件工程\\实践代码\\UVPTCCS
Total : 0 files, 0 codes, 0 comments, 0 blanks, all 0 lines
[Summary](results.md) / [Details](details.md) / [Diff Summary](diff.md) / Diff Details
## Files
| filename | language | code | comment | blank | total |
| :--- | :--- | ---: | ---: | ---: | ---: |
[Summary](results.md) / [Details](details.md) / [Diff Summary](diff.md) / Diff Details

@ -0,0 +1,2 @@
"filename", "language", "", "comment", "blank", "total"
"Total", "-", , 0, 0, 0

@ -0,0 +1,19 @@
# Diff Summary
Date : 2025-06-30 19:06:11
Directory d:\\软件工程\\实践代码\\UVPTCCS
Total : 0 files, 0 codes, 0 comments, 0 blanks, all 0 lines
[Summary](results.md) / [Details](details.md) / Diff Summary / [Diff Details](diff-details.md)
## Languages
| language | files | code | comment | blank | total |
| :--- | ---: | ---: | ---: | ---: | ---: |
## Directories
| path | files | code | comment | blank | total |
| :--- | ---: | ---: | ---: | ---: | ---: |
[Summary](results.md) / [Details](details.md) / Diff Summary / [Diff Details](diff-details.md)

@ -0,0 +1,22 @@
Date : 2025-06-30 19:06:11
Directory : d:\软件工程\实践代码\UVPTCCS
Total : 0 files, 0 codes, 0 comments, 0 blanks, all 0 lines
Languages
+----------+------------+------------+------------+------------+------------+
| language | files | code | comment | blank | total |
+----------+------------+------------+------------+------------+------------+
+----------+------------+------------+------------+------------+------------+
Directories
+------+------------+------------+------------+------------+------------+
| path | files | code | comment | blank | total |
+------+------------+------------+------------+------------+------------+
+------+------------+------------+------------+------------+------------+
Files
+----------+----------+------------+------------+------------+------------+
| filename | language | code | comment | blank | total |
+----------+----------+------------+------------+------------+------------+
| Total | | 0 | 0 | 0 | 0 |
+----------+----------+------------+------------+------------+------------+

@ -0,0 +1,88 @@
"filename", "language", "YAML", "Python", "Properties", "Markdown", "XML", "Lua", "CMake", "C++", "comment", "blank", "total"
"d:\软件工程\实践代码\UVPTCCS\doc\README.md", "Markdown", 0, 0, 0, 1, 0, 0, 0, 0, 0, 2, 3
"d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\README.md", "Markdown", 0, 0, 0, 74, 0, 0, 0, 0, 0, 17, 91
"d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\laserscan_to_point_pulisher\__init__.py", "Python", 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1
"d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\laserscan_to_point_pulisher\laserscan_to_point_publish.py", "Python", 0, 47, 0, 0, 0, 0, 0, 0, 3, 16, 66
"d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\package.xml", "XML", 0, 0, 0, 0, 16, 0, 0, 0, 0, 3, 19
"d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\setup.cfg", "Properties", 0, 0, 4, 0, 0, 0, 0, 0, 0, 1, 5
"d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\setup.py", "Python", 0, 24, 0, 0, 0, 0, 0, 0, 0, 3, 27
"d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\test\test_copyright.py", "Python", 0, 7, 0, 0, 0, 0, 0, 0, 13, 4, 24
"d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\test\test_flake8.py", "Python", 0, 9, 0, 0, 0, 0, 0, 0, 13, 4, 26
"d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\test\test_pep257.py", "Python", 0, 7, 0, 0, 0, 0, 0, 0, 13, 4, 24
"d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\README.md", "Markdown", 0, 0, 0, 89, 0, 0, 0, 0, 0, 30, 119
"d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\launch\yahboom_app_save_map.launch.py", "Python", 0, 17, 0, 0, 0, 0, 0, 0, 1, 5, 23
"d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\package.xml", "XML", 0, 0, 0, 0, 17, 0, 0, 0, 0, 3, 20
"d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\setup.cfg", "Properties", 0, 0, 4, 0, 0, 0, 0, 0, 0, 1, 5
"d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\setup.py", "Python", 0, 26, 0, 0, 0, 0, 0, 0, 0, 3, 29
"d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\test\test_copyright.py", "Python", 0, 7, 0, 0, 0, 0, 0, 0, 13, 4, 24
"d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\test\test_flake8.py", "Python", 0, 9, 0, 0, 0, 0, 0, 0, 13, 4, 26
"d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\test\test_pep257.py", "Python", 0, 7, 0, 0, 0, 0, 0, 0, 13, 4, 24
"d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\yahboom_app_save_map\__init__.py", "Python", 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1
"d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\yahboom_app_save_map\yahboom_app_save_map.py", "Python", 0, 45, 0, 0, 0, 0, 0, 0, 1, 12, 58
"d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\yahboom_app_save_map\yahboom_app_save_map_client.py", "Python", 0, 36, 0, 0, 0, 0, 0, 0, 0, 12, 48
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\launch\keyboart_ctrl_launch-Copy1.py", "Python", 0, 9, 0, 0, 0, 0, 0, 0, 0, 2, 11
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\launch\yahboomcar_joy_launch.py", "Python", 0, 9, 0, 0, 0, 0, 0, 0, 0, 2, 11
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\package.xml", "XML", 0, 0, 0, 0, 16, 0, 0, 0, 0, 3, 19
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\setup.cfg", "Properties", 0, 0, 4, 0, 0, 0, 0, 0, 0, 1, 5
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\setup.py", "Python", 0, 28, 0, 0, 0, 0, 0, 0, 0, 3, 31
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\test\test_copyright.py", "Python", 0, 7, 0, 0, 0, 0, 0, 0, 13, 4, 24
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\test\test_flake8.py", "Python", 0, 9, 0, 0, 0, 0, 0, 0, 13, 4, 26
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\test\test_pep257.py", "Python", 0, 7, 0, 0, 0, 0, 0, 0, 13, 4, 24
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\yahboomcar_ctrl\__init__.py", "Python", 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\yahboomcar_ctrl\yahboom_joy_R2.py", "Python", 0, 126, 0, 0, 0, 0, 0, 0, 19, 17, 162
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\yahboomcar_ctrl\yahboom_joy_X3.py", "Python", 0, 126, 0, 0, 0, 0, 0, 0, 18, 15, 159
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\yahboomcar_ctrl\yahboom_keyboard.py", "Python", 0, 122, 0, 0, 0, 0, 0, 0, 4, 9, 135
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\cartographer_launch.py", "Python", 0, 54, 0, 0, 0, 0, 0, 0, 0, 9, 63
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\display_map_launch.py", "Python", 0, 23, 0, 0, 0, 0, 0, 0, 0, 6, 29
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\display_nav_launch.py", "Python", 0, 23, 0, 0, 0, 0, 0, 0, 0, 6, 29
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\display_rtabmap_map_launch.py", "Python", 0, 23, 0, 0, 0, 0, 0, 0, 0, 6, 29
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\display_rtabmap_nav_launch.py", "Python", 0, 23, 0, 0, 0, 0, 0, 0, 0, 6, 29
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\laser_bringup_launch.py", "Python", 0, 65, 0, 0, 0, 0, 0, 0, 1, 6, 72
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\map_cartographer_launch.py", "Python", 0, 14, 0, 0, 0, 0, 0, 0, 0, 5, 19
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\map_gmapping_4ros_s2_launch.py", "Python", 0, 27, 0, 0, 0, 0, 0, 0, 1, 5, 33
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\map_gmapping_a1_launch.py", "Python", 0, 16, 0, 0, 0, 0, 0, 0, 0, 6, 22
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\map_gmapping_launch.py", "Python", 0, 34, 0, 0, 0, 0, 0, 0, 0, 5, 39
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\map_rtabmap_launch.py", "Python", 0, 14, 0, 0, 0, 0, 0, 0, 0, 5, 19
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\navigation_dwa_launch.py", "Python", 0, 31, 0, 0, 0, 0, 0, 0, 0, 6, 37
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\navigation_rtabmap_launch.py", "Python", 0, 21, 0, 0, 0, 0, 0, 0, 0, 6, 27
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\navigation_teb_launch.py", "Python", 0, 31, 0, 0, 0, 0, 0, 0, 0, 6, 37
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\occupancy_grid_launch.py", "Python", 0, 29, 0, 0, 0, 0, 0, 0, 0, 6, 35
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\rtabmap_localization_launch.py", "Python", 0, 66, 0, 0, 0, 0, 0, 0, 5, 16, 87
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\rtabmap_nav_launch.py", "Python", 0, 31, 0, 0, 0, 0, 0, 0, 0, 6, 37
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\rtabmap_sync_launch.py", "Python", 0, 66, 0, 0, 0, 0, 0, 0, 13, 17, 96
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\rtabmap_viz_launch.py", "Python", 0, 40, 0, 0, 0, 0, 0, 0, 2, 10, 52
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\save_map_launch.py", "Python", 0, 26, 0, 0, 0, 0, 0, 0, 1, 8, 35
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\maps\yahboomcar.yaml", "YAML", 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\maps\yahboomcar2.yaml", "YAML", 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\package.xml", "XML", 0, 0, 0, 0, 19, 0, 0, 0, 0, 4, 23
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\params\dwa_nav_params.yaml", "YAML", 246, 0, 0, 0, 0, 0, 0, 0, 12, 16, 274
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\params\lds_2d.lua", "Lua", 0, 0, 0, 0, 0, 38, 0, 0, 1, 7, 46
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\params\rtabmap_nav_params.yaml", "YAML", 191, 0, 0, 0, 0, 0, 0, 0, 3, 11, 205
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\params\teb_nav_params.yaml", "YAML", 305, 0, 0, 0, 0, 0, 0, 0, 3, 21, 329
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\setup.cfg", "Properties", 0, 0, 4, 0, 0, 0, 0, 0, 0, 1, 5
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\setup.py", "Python", 0, 30, 0, 0, 0, 0, 0, 0, 0, 3, 33
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\test\test_copyright.py", "Python", 0, 7, 0, 0, 0, 0, 0, 0, 13, 4, 24
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\test\test_flake8.py", "Python", 0, 9, 0, 0, 0, 0, 0, 0, 13, 4, 26
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\test\test_pep257.py", "Python", 0, 7, 0, 0, 0, 0, 0, 0, 13, 4, 24
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\yahboomcar_nav\__init__.py", "Python", 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\yahboomcar_nav\scan_filter.py", "Python", 0, 35, 0, 0, 0, 0, 0, 0, 4, 7, 46
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\CMakeLists.txt", "CMake", 0, 0, 0, 0, 0, 0, 62, 0, 0, 14, 76
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\include\yahboomcar_slam\point_cloud.h", "C++", 0, 0, 0, 0, 0, 0, 0, 101, 2, 31, 134
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\camera_octomap_launch.py", "Python", 0, 27, 0, 0, 0, 0, 0, 0, 0, 5, 32
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\display_octomap_launch.py", "Python", 0, 23, 0, 0, 0, 0, 0, 0, 0, 6, 29
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\display_pcl_launch.py", "Python", 0, 23, 0, 0, 0, 0, 0, 0, 0, 6, 29
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\octomap_server_launch.py", "Python", 0, 20, 0, 0, 0, 0, 0, 0, 0, 3, 23
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\orbslam_base_launch.py", "Python", 0, 26, 0, 0, 0, 0, 0, 0, 5, 5, 36
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\orbslam_pcl_map_launch.py", "Python", 0, 29, 0, 0, 0, 0, 0, 0, 0, 5, 34
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\orbslam_pcl_octomap_launch.py", "Python", 0, 34, 0, 0, 0, 0, 0, 0, 5, 6, 45
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\orbslam_pose_launch.py", "Python", 0, 16, 0, 0, 0, 0, 0, 0, 0, 3, 19
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\orbslam_ros_launch.py", "Python", 0, 38, 0, 0, 0, 0, 0, 0, 3, 6, 47
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\package.xml", "XML", 0, 0, 0, 0, 25, 0, 0, 0, 0, 6, 31
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\params\mono.yaml", "YAML", 27, 0, 0, 0, 0, 0, 0, 0, 19, 12, 58
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\params\pointcloud_map.yaml", "YAML", 16, 0, 0, 0, 0, 0, 0, 0, 0, 2, 18
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\params\pointcloud_octomap.yaml", "YAML", 16, 0, 0, 0, 0, 0, 0, 0, 0, 2, 18
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\params\rgbd.yaml", "YAML", 32, 0, 0, 0, 0, 0, 0, 0, 22, 17, 71
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\params\usb_cam_param.yaml", "YAML", 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\src\point_cloud.cpp", "C++", 0, 0, 0, 0, 0, 0, 0, 307, 33, 44, 384
"d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\src\point_cloud_main.cpp", "C++", 0, 0, 0, 0, 0, 0, 0, 13, 0, 3, 16
"Total", "-", 847, 1665, 16, 164, 93, 38, 62, 421, 337, 605, 4248

File diff suppressed because one or more lines are too long

@ -0,0 +1,58 @@
# Summary
Date : 2025-06-30 19:06:11
Directory d:\\软件工程\\实践代码\\UVPTCCS
Total : 86 files, 3306 codes, 337 comments, 605 blanks, all 4248 lines
Summary / [Details](details.md) / [Diff Summary](diff.md) / [Diff Details](diff-details.md)
## Languages
| language | files | code | comment | blank | total |
| :--- | ---: | ---: | ---: | ---: | ---: |
| Python | 59 | 1,665 | 242 | 352 | 2,259 |
| YAML | 10 | 847 | 59 | 82 | 988 |
| C++ | 3 | 421 | 35 | 78 | 534 |
| Markdown | 3 | 164 | 0 | 49 | 213 |
| XML | 5 | 93 | 0 | 19 | 112 |
| CMake | 1 | 62 | 0 | 14 | 76 |
| Lua | 1 | 38 | 1 | 7 | 46 |
| Properties | 4 | 16 | 0 | 4 | 20 |
## Directories
| path | files | code | comment | blank | total |
| :--- | ---: | ---: | ---: | ---: | ---: |
| . | 86 | 3,306 | 337 | 605 | 4,248 |
| doc | 1 | 1 | 0 | 2 | 3 |
| src | 85 | 3,305 | 337 | 603 | 4,245 |
| src\\laserscan_to_point_pulisher | 9 | 188 | 42 | 53 | 283 |
| src\\laserscan_to_point_pulisher (Files) | 4 | 118 | 0 | 24 | 142 |
| src\\laserscan_to_point_pulisher\\laserscan_to_point_pulisher | 2 | 47 | 3 | 17 | 67 |
| src\\laserscan_to_point_pulisher\\test | 3 | 23 | 39 | 12 | 74 |
| src\\yahboom_app_save_map | 11 | 257 | 41 | 79 | 377 |
| src\\yahboom_app_save_map (Files) | 4 | 136 | 0 | 37 | 173 |
| src\\yahboom_app_save_map\\launch | 1 | 17 | 1 | 5 | 23 |
| src\\yahboom_app_save_map\\test | 3 | 23 | 39 | 12 | 74 |
| src\\yahboom_app_save_map\\yahboom_app_save_map | 3 | 81 | 1 | 25 | 107 |
| src\\yahboomcar_ctrl | 12 | 463 | 80 | 65 | 608 |
| src\\yahboomcar_ctrl (Files) | 3 | 48 | 0 | 7 | 55 |
| src\\yahboomcar_ctrl\\launch | 2 | 18 | 0 | 4 | 22 |
| src\\yahboomcar_ctrl\\test | 3 | 23 | 39 | 12 | 74 |
| src\\yahboomcar_ctrl\\yahboomcar_ctrl | 4 | 374 | 41 | 42 | 457 |
| src\\yahboomcar_nav | 34 | 1,562 | 85 | 229 | 1,876 |
| src\\yahboomcar_nav (Files) | 3 | 53 | 0 | 8 | 61 |
| src\\yahboomcar_nav\\launch | 20 | 657 | 23 | 146 | 826 |
| src\\yahboomcar_nav\\maps | 2 | 14 | 0 | 0 | 14 |
| src\\yahboomcar_nav\\params | 4 | 780 | 19 | 55 | 854 |
| src\\yahboomcar_nav\\test | 3 | 23 | 39 | 12 | 74 |
| src\\yahboomcar_nav\\yahboomcar_nav | 2 | 35 | 4 | 8 | 47 |
| src\\yahboomcar_slam | 19 | 835 | 89 | 177 | 1,101 |
| src\\yahboomcar_slam (Files) | 2 | 87 | 0 | 20 | 107 |
| src\\yahboomcar_slam\\include | 1 | 101 | 2 | 31 | 134 |
| src\\yahboomcar_slam\\include\\yahboomcar_slam | 1 | 101 | 2 | 31 | 134 |
| src\\yahboomcar_slam\\launch | 9 | 236 | 13 | 45 | 294 |
| src\\yahboomcar_slam\\params | 5 | 91 | 41 | 34 | 166 |
| src\\yahboomcar_slam\\src | 2 | 320 | 33 | 47 | 400 |
Summary / [Details](details.md) / [Diff Summary](diff.md) / [Diff Details](diff-details.md)

@ -0,0 +1,147 @@
Date : 2025-06-30 19:06:11
Directory : d:\软件工程\实践代码\UVPTCCS
Total : 86 files, 3306 codes, 337 comments, 605 blanks, all 4248 lines
Languages
+------------+------------+------------+------------+------------+------------+
| language | files | code | comment | blank | total |
+------------+------------+------------+------------+------------+------------+
| Python | 59 | 1,665 | 242 | 352 | 2,259 |
| YAML | 10 | 847 | 59 | 82 | 988 |
| C++ | 3 | 421 | 35 | 78 | 534 |
| Markdown | 3 | 164 | 0 | 49 | 213 |
| XML | 5 | 93 | 0 | 19 | 112 |
| CMake | 1 | 62 | 0 | 14 | 76 |
| Lua | 1 | 38 | 1 | 7 | 46 |
| Properties | 4 | 16 | 0 | 4 | 20 |
+------------+------------+------------+------------+------------+------------+
Directories
+----------------------------------------------------------------------------------------------------------------+------------+------------+------------+------------+------------+
| path | files | code | comment | blank | total |
+----------------------------------------------------------------------------------------------------------------+------------+------------+------------+------------+------------+
| . | 86 | 3,306 | 337 | 605 | 4,248 |
| doc | 1 | 1 | 0 | 2 | 3 |
| src | 85 | 3,305 | 337 | 603 | 4,245 |
| src\laserscan_to_point_pulisher | 9 | 188 | 42 | 53 | 283 |
| src\laserscan_to_point_pulisher (Files) | 4 | 118 | 0 | 24 | 142 |
| src\laserscan_to_point_pulisher\laserscan_to_point_pulisher | 2 | 47 | 3 | 17 | 67 |
| src\laserscan_to_point_pulisher\test | 3 | 23 | 39 | 12 | 74 |
| src\yahboom_app_save_map | 11 | 257 | 41 | 79 | 377 |
| src\yahboom_app_save_map (Files) | 4 | 136 | 0 | 37 | 173 |
| src\yahboom_app_save_map\launch | 1 | 17 | 1 | 5 | 23 |
| src\yahboom_app_save_map\test | 3 | 23 | 39 | 12 | 74 |
| src\yahboom_app_save_map\yahboom_app_save_map | 3 | 81 | 1 | 25 | 107 |
| src\yahboomcar_ctrl | 12 | 463 | 80 | 65 | 608 |
| src\yahboomcar_ctrl (Files) | 3 | 48 | 0 | 7 | 55 |
| src\yahboomcar_ctrl\launch | 2 | 18 | 0 | 4 | 22 |
| src\yahboomcar_ctrl\test | 3 | 23 | 39 | 12 | 74 |
| src\yahboomcar_ctrl\yahboomcar_ctrl | 4 | 374 | 41 | 42 | 457 |
| src\yahboomcar_nav | 34 | 1,562 | 85 | 229 | 1,876 |
| src\yahboomcar_nav (Files) | 3 | 53 | 0 | 8 | 61 |
| src\yahboomcar_nav\launch | 20 | 657 | 23 | 146 | 826 |
| src\yahboomcar_nav\maps | 2 | 14 | 0 | 0 | 14 |
| src\yahboomcar_nav\params | 4 | 780 | 19 | 55 | 854 |
| src\yahboomcar_nav\test | 3 | 23 | 39 | 12 | 74 |
| src\yahboomcar_nav\yahboomcar_nav | 2 | 35 | 4 | 8 | 47 |
| src\yahboomcar_slam | 19 | 835 | 89 | 177 | 1,101 |
| src\yahboomcar_slam (Files) | 2 | 87 | 0 | 20 | 107 |
| src\yahboomcar_slam\include | 1 | 101 | 2 | 31 | 134 |
| src\yahboomcar_slam\include\yahboomcar_slam | 1 | 101 | 2 | 31 | 134 |
| src\yahboomcar_slam\launch | 9 | 236 | 13 | 45 | 294 |
| src\yahboomcar_slam\params | 5 | 91 | 41 | 34 | 166 |
| src\yahboomcar_slam\src | 2 | 320 | 33 | 47 | 400 |
+----------------------------------------------------------------------------------------------------------------+------------+------------+------------+------------+------------+
Files
+----------------------------------------------------------------------------------------------------------------+------------+------------+------------+------------+------------+
| filename | language | code | comment | blank | total |
+----------------------------------------------------------------------------------------------------------------+------------+------------+------------+------------+------------+
| d:\软件工程\实践代码\UVPTCCS\doc\README.md | Markdown | 1 | 0 | 2 | 3 |
| d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\README.md | Markdown | 74 | 0 | 17 | 91 |
| d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\laserscan_to_point_pulisher\__init__.py | Python | 0 | 0 | 1 | 1 |
| d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\laserscan_to_point_pulisher\laserscan_to_point_publish.py | Python | 47 | 3 | 16 | 66 |
| d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\package.xml | XML | 16 | 0 | 3 | 19 |
| d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\setup.cfg | Properties | 4 | 0 | 1 | 5 |
| d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\setup.py | Python | 24 | 0 | 3 | 27 |
| d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\test\test_copyright.py | Python | 7 | 13 | 4 | 24 |
| d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\test\test_flake8.py | Python | 9 | 13 | 4 | 26 |
| d:\软件工程\实践代码\UVPTCCS\src\laserscan_to_point_pulisher\test\test_pep257.py | Python | 7 | 13 | 4 | 24 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\README.md | Markdown | 89 | 0 | 30 | 119 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\launch\yahboom_app_save_map.launch.py | Python | 17 | 1 | 5 | 23 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\package.xml | XML | 17 | 0 | 3 | 20 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\setup.cfg | Properties | 4 | 0 | 1 | 5 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\setup.py | Python | 26 | 0 | 3 | 29 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\test\test_copyright.py | Python | 7 | 13 | 4 | 24 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\test\test_flake8.py | Python | 9 | 13 | 4 | 26 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\test\test_pep257.py | Python | 7 | 13 | 4 | 24 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\yahboom_app_save_map\__init__.py | Python | 0 | 0 | 1 | 1 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\yahboom_app_save_map\yahboom_app_save_map.py | Python | 45 | 1 | 12 | 58 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboom_app_save_map\yahboom_app_save_map\yahboom_app_save_map_client.py | Python | 36 | 0 | 12 | 48 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\launch\keyboart_ctrl_launch-Copy1.py | Python | 9 | 0 | 2 | 11 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\launch\yahboomcar_joy_launch.py | Python | 9 | 0 | 2 | 11 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\package.xml | XML | 16 | 0 | 3 | 19 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\setup.cfg | Properties | 4 | 0 | 1 | 5 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\setup.py | Python | 28 | 0 | 3 | 31 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\test\test_copyright.py | Python | 7 | 13 | 4 | 24 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\test\test_flake8.py | Python | 9 | 13 | 4 | 26 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\test\test_pep257.py | Python | 7 | 13 | 4 | 24 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\yahboomcar_ctrl\__init__.py | Python | 0 | 0 | 1 | 1 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\yahboomcar_ctrl\yahboom_joy_R2.py | Python | 126 | 19 | 17 | 162 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\yahboomcar_ctrl\yahboom_joy_X3.py | Python | 126 | 18 | 15 | 159 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_ctrl\yahboomcar_ctrl\yahboom_keyboard.py | Python | 122 | 4 | 9 | 135 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\cartographer_launch.py | Python | 54 | 0 | 9 | 63 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\display_map_launch.py | Python | 23 | 0 | 6 | 29 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\display_nav_launch.py | Python | 23 | 0 | 6 | 29 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\display_rtabmap_map_launch.py | Python | 23 | 0 | 6 | 29 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\display_rtabmap_nav_launch.py | Python | 23 | 0 | 6 | 29 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\laser_bringup_launch.py | Python | 65 | 1 | 6 | 72 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\map_cartographer_launch.py | Python | 14 | 0 | 5 | 19 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\map_gmapping_4ros_s2_launch.py | Python | 27 | 1 | 5 | 33 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\map_gmapping_a1_launch.py | Python | 16 | 0 | 6 | 22 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\map_gmapping_launch.py | Python | 34 | 0 | 5 | 39 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\map_rtabmap_launch.py | Python | 14 | 0 | 5 | 19 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\navigation_dwa_launch.py | Python | 31 | 0 | 6 | 37 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\navigation_rtabmap_launch.py | Python | 21 | 0 | 6 | 27 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\navigation_teb_launch.py | Python | 31 | 0 | 6 | 37 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\occupancy_grid_launch.py | Python | 29 | 0 | 6 | 35 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\rtabmap_localization_launch.py | Python | 66 | 5 | 16 | 87 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\rtabmap_nav_launch.py | Python | 31 | 0 | 6 | 37 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\rtabmap_sync_launch.py | Python | 66 | 13 | 17 | 96 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\rtabmap_viz_launch.py | Python | 40 | 2 | 10 | 52 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\launch\save_map_launch.py | Python | 26 | 1 | 8 | 35 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\maps\yahboomcar.yaml | YAML | 7 | 0 | 0 | 7 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\maps\yahboomcar2.yaml | YAML | 7 | 0 | 0 | 7 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\package.xml | XML | 19 | 0 | 4 | 23 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\params\dwa_nav_params.yaml | YAML | 246 | 12 | 16 | 274 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\params\lds_2d.lua | Lua | 38 | 1 | 7 | 46 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\params\rtabmap_nav_params.yaml | YAML | 191 | 3 | 11 | 205 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\params\teb_nav_params.yaml | YAML | 305 | 3 | 21 | 329 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\setup.cfg | Properties | 4 | 0 | 1 | 5 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\setup.py | Python | 30 | 0 | 3 | 33 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\test\test_copyright.py | Python | 7 | 13 | 4 | 24 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\test\test_flake8.py | Python | 9 | 13 | 4 | 26 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\test\test_pep257.py | Python | 7 | 13 | 4 | 24 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\yahboomcar_nav\__init__.py | Python | 0 | 0 | 1 | 1 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_nav\yahboomcar_nav\scan_filter.py | Python | 35 | 4 | 7 | 46 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\CMakeLists.txt | CMake | 62 | 0 | 14 | 76 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\include\yahboomcar_slam\point_cloud.h | C++ | 101 | 2 | 31 | 134 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\camera_octomap_launch.py | Python | 27 | 0 | 5 | 32 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\display_octomap_launch.py | Python | 23 | 0 | 6 | 29 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\display_pcl_launch.py | Python | 23 | 0 | 6 | 29 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\octomap_server_launch.py | Python | 20 | 0 | 3 | 23 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\orbslam_base_launch.py | Python | 26 | 5 | 5 | 36 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\orbslam_pcl_map_launch.py | Python | 29 | 0 | 5 | 34 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\orbslam_pcl_octomap_launch.py | Python | 34 | 5 | 6 | 45 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\orbslam_pose_launch.py | Python | 16 | 0 | 3 | 19 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\launch\orbslam_ros_launch.py | Python | 38 | 3 | 6 | 47 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\package.xml | XML | 25 | 0 | 6 | 31 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\params\mono.yaml | YAML | 27 | 19 | 12 | 58 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\params\pointcloud_map.yaml | YAML | 16 | 0 | 2 | 18 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\params\pointcloud_octomap.yaml | YAML | 16 | 0 | 2 | 18 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\params\rgbd.yaml | YAML | 32 | 22 | 17 | 71 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\params\usb_cam_param.yaml | YAML | 0 | 0 | 1 | 1 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\src\point_cloud.cpp | C++ | 307 | 33 | 44 | 384 |
| d:\软件工程\实践代码\UVPTCCS\src\yahboomcar_slam\src\point_cloud_main.cpp | C++ | 13 | 0 | 3 | 16 |
| Total | | 3,306 | 337 | 605 | 4,248 |
+----------------------------------------------------------------------------------------------------------------+------------+------------+------------+------------+------------+

@ -0,0 +1,99 @@
# ROS2 Node README
## Node Name
**robot_pose_publisher**
## Functional Description
The main function of this node is to subscribe to the robot's pose estimate (`PoseWithCovarianceStamped`), convert it into a simpler pose message (`PoseStamped`), and publish it on the `/robot_pose` topic. The node also sets up the TF transform library to handle coordinate transforms so that the published pose is expressed in the correct frame. A timer makes the node republish the robot's pose periodically, ensuring that other nodes always have access to the latest position information.
## Subscribed Topics
| Topic | Message Type | Callback | Description |
|:------------|:--------------------------------------|:--------------|:-------------------------|
| `/amcl_pose`| `geometry_msgs/msg/PoseWithCovarianceStamped` | `odom_callback` | The robot's pose estimate |
## Published Topics
| Topic | Message Type | Description |
|:--------------|:--------------------------|:-------------------------------|
| `/robot_pose` | `geometry_msgs/msg/PoseStamped` | The robot's pose in the map frame |
## Additional Notes
### Issues in the Code
1. **Incomplete exception handling**
   - In `odom_callback`, the exception handler only logs a generic warning without the underlying error, which makes problems hard to pin down when debugging.
2. **Overly simple timer callback**
   - `robot_pose_callback` simply republishes the previously stored pose without checking whether that data is still fresh or valid.
3. **TF transforms created but never used**
   - The node creates a TF buffer and listener, but the current code never actually uses them, which wastes resources.
4. **Variable naming could be improved**
   - Some variable names are not very descriptive; for example, `robot_pose` could be renamed `current_pose` to improve readability.
### Suggested Changes
1. **Improve exception handling**
   - Include the specific error message in the exception handler to make debugging and troubleshooting easier.
   - Modified code:
```python
def odom_callback(self, msg):
try:
        # data-processing logic (copying and publishing the pose) goes here
        ...
except Exception as e:
self.get_logger().error(f"Error processing odometry data: {e}")
```
2. **Improve the timer callback**
   - Add a check in the timer callback so that only fresh, valid pose data gets published.
   - Example modification:
```python
def robot_pose_callback(self):
    # Only publish if a new pose has arrived since the last publication
if self.robot_pose.header.stamp == self.last_published_stamp:
return
self.robot_pose_publisher.publish(self.robot_pose)
self.last_published_stamp = self.robot_pose.header.stamp
```
3. **Remove the unused TF code**
   - If the node does not actually need TF transforms, the related code can be removed to reduce resource usage and code complexity.
   - Modified code:
```python
class RobotPosePublisher(Node):
def __init__(self):
super().__init__('robot_pose_publisher')
self.subscription = self.create_subscription(
PoseWithCovarianceStamped,
'/amcl_pose',
self.odom_callback,
10)
self.robot_pose_publisher = self.create_publisher(
PoseStamped,
'/robot_pose',
10)
        timer_period = 2  # 2-second timer interval
self.timer = self.create_timer(timer_period, self.robot_pose_callback)
self.robot_pose = PoseStamped()
self.last_published_stamp = None
```
4. **Improve variable naming and code structure**
   - Use more descriptive variable names to improve readability.
   - Add comments explaining each function and key step to ease future maintenance and extension; a brief sketch follows after this list.
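As an illustration of suggestion 4, the sketch below renames `robot_pose` to `current_pose` and documents each step. It is only an example of the naming and commenting style, not code from the package; `current_pose` is a name proposed here, and it assumes `current_pose` is initialised as a `PoseStamped` in `__init__`.
```python
def odom_callback(self, msg):
    """Copy the latest AMCL pose estimate into a PoseStamped and republish it."""
    try:
        # current_pose holds the most recent /amcl_pose estimate, expressed in the map frame
        self.current_pose.header.frame_id = 'map'
        self.current_pose.header.stamp = self.get_clock().now().to_msg()
        # PoseWithCovarianceStamped nests the pose one level deeper, so copy msg.pose.pose
        self.current_pose.pose = msg.pose.pose
        self.robot_pose_publisher.publish(self.current_pose)
    except Exception as e:
        self.get_logger().error(f"Error processing pose message: {e}")
```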
### Usage
1. **Run the node**
   - Make sure the ROS2 environment is configured correctly and the required dependencies are installed.
   - Run the node with:
```bash
ros2 run <package_name> robot_pose_publisher
```
2. **Verify the node**
   - Use `ros2 topic echo /robot_pose` to inspect the published pose data.
   - Confirm that the subscribed pose data is processed correctly and that the published pose is expressed in the correct frame.
3. **Adjust parameters**
   - Depending on the application, values such as the subscribed topic name, the published message type, and the timer interval can be adjusted; a sketch of exposing them as ROS parameters follows below.
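The node currently hardcodes the topic name and timer interval, so adjusting them means editing the source. A minimal sketch of exposing them as ROS parameters instead is shown below; the parameter names `pose_topic` and `publish_period` are illustrative and are not defined by the existing code.
```python
# Imports as in robot_pose_publisher.py (rclpy, Node, PoseStamped, PoseWithCovarianceStamped)
class RobotPosePublisher(Node):
    def __init__(self):
        super().__init__('robot_pose_publisher')
        # Declare parameters with defaults that match the current hardcoded values
        self.declare_parameter('pose_topic', '/amcl_pose')
        self.declare_parameter('publish_period', 2.0)
        pose_topic = self.get_parameter('pose_topic').value
        period = self.get_parameter('publish_period').value
        self.subscription = self.create_subscription(
            PoseWithCovarianceStamped, pose_topic, self.odom_callback, 10)
        self.robot_pose_publisher = self.create_publisher(PoseStamped, '/robot_pose', 10)
        self.timer = self.create_timer(period, self.robot_pose_callback)
        self.robot_pose = PoseStamped()
```
With this in place the values can be overridden at startup, e.g. `ros2 run <package_name> robot_pose_publisher --ros-args -p publish_period:=1.0`, assuming a ROS2 distribution whose CLI supports `--ros-args`.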
With the changes above, the node becomes more stable and maintainable and better suited to real-world use.

@ -0,0 +1,18 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>robot_pose_publisher</name>
<version>0.0.0</version>
<description>TODO: Package description</description>
<maintainer email="parallels@todo.todo">parallels</maintainer>
<license>TODO: License declaration</license>
<test_depend>ament_copyright</test_depend>
<test_depend>ament_flake8</test_depend>
<test_depend>ament_pep257</test_depend>
<test_depend>python3-pytest</test_depend>
<export>
<build_type>ament_python</build_type>
</export>
</package>

@ -0,0 +1,58 @@
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped, TransformStamped, PoseWithCovarianceStamped
from nav_msgs.msg import Odometry
import tf2_ros
import math
class RobotPosePublisher(Node):
def __init__(self):
super().__init__('robot_pose_publisher')
self.subscription = self.create_subscription(
PoseWithCovarianceStamped,
'/amcl_pose',
self.odom_callback,
10)
self.tf_buffer = tf2_ros.Buffer()
self.tf_listener = tf2_ros.TransformListener(self.tf_buffer, self)
self.robot_pose_publisher = self.create_publisher(
PoseStamped,
'/robot_pose',
10)
        timer_period = 2  # 2-second timer interval
self.timer = self.create_timer(timer_period, self.robot_pose_callback)
self.robot_pose = PoseStamped()
def odom_callback(self, msg):
try:
self.robot_pose.header.frame_id = 'map'
self.robot_pose.header.stamp = self.get_clock().now().to_msg()
self.robot_pose.pose.position.x = msg.pose.pose.position.x
self.robot_pose.pose.position.y = msg.pose.pose.position.y
self.robot_pose.pose.position.z = msg.pose.pose.position.z
self.robot_pose.pose.orientation.x = msg.pose.pose.orientation.x
self.robot_pose.pose.orientation.y = msg.pose.pose.orientation.y
self.robot_pose.pose.orientation.z = msg.pose.pose.orientation.z
self.robot_pose.pose.orientation.w = msg.pose.pose.orientation.w
self.robot_pose_publisher.publish(self.robot_pose)
except :
self.get_logger().warn("msg err")
def robot_pose_callback(self):
self.robot_pose_publisher.publish(self.robot_pose)
def main(args=None):
rclpy.init(args=args)
robot_pose_publisher = RobotPosePublisher()
rclpy.spin(robot_pose_publisher)
robot_pose_publisher.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()

@ -0,0 +1,4 @@
[develop]
script-dir=$base/lib/robot_pose_publisher
[install]
install-scripts=$base/lib/robot_pose_publisher

@ -0,0 +1,26 @@
from setuptools import setup
package_name = 'robot_pose_publisher'
setup(
name=package_name,
version='0.0.0',
packages=[package_name],
data_files=[
('share/ament_index/resource_index/packages',
['resource/' + package_name]),
('share/' + package_name, ['package.xml']),
],
install_requires=['setuptools'],
zip_safe=True,
maintainer='parallels',
maintainer_email='parallels@todo.todo',
description='TODO: Package description',
license='TODO: License declaration',
tests_require=['pytest'],
entry_points={
'console_scripts': [
"robot_pose_publisher=robot_pose_publisher.robot_pose_publisher:main"
],
},
)

@ -0,0 +1,23 @@
# Copyright 2015 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_copyright.main import main
import pytest
@pytest.mark.copyright
@pytest.mark.linter
def test_copyright():
rc = main(argv=['.', 'test'])
assert rc == 0, 'Found errors'

@ -0,0 +1,25 @@
# Copyright 2017 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_flake8.main import main_with_errors
import pytest
@pytest.mark.flake8
@pytest.mark.linter
def test_flake8():
rc, errors = main_with_errors(argv=[])
assert rc == 0, \
'Found %d code style errors / warnings:\n' % len(errors) + \
'\n'.join(errors)

@ -0,0 +1,23 @@
# Copyright 2015 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_pep257.main import main
import pytest
@pytest.mark.linter
@pytest.mark.pep257
def test_pep257():
rc = main(argv=['.', 'test'])
assert rc == 0, 'Found code style errors / warnings'

@ -0,0 +1,30 @@
cmake_minimum_required(VERSION 3.5)
project(robot_pose_publisher_ros2)
# Default to C++14
if(NOT CMAKE_CXX_STANDARD)
set(CMAKE_CXX_STANDARD 14)
endif()
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
find_package(ament_cmake REQUIRED)
find_package(rclcpp REQUIRED)
find_package(geometry_msgs REQUIRED)
find_package(tf2_ros REQUIRED)
add_executable(robot_pose_publisher src/robot_pose_publisher.cpp)
ament_target_dependencies(robot_pose_publisher rclcpp geometry_msgs tf2_ros)
install(TARGETS
robot_pose_publisher
DESTINATION lib/${PROJECT_NAME})
install(
DIRECTORY launch
DESTINATION share/${PROJECT_NAME}
)
ament_package()

@ -0,0 +1,117 @@
# robot_pose_publisher_ros2
This is the ROS2 version of [robot_pose_publisher](https://github.com/GT-RAIL/robot_pose_publisher)
# ROS2 Node README
## Node Name
**robot_pose_publisher**
## Description
This node uses the TF transform framework to obtain the robot's pose in the map frame (`/map`) and publishes it as a `geometry_msgs/msg/PoseStamped` or `geometry_msgs/msg/Pose` message. A timer periodically looks up the TF transform, reads the robot's position and orientation in the map frame, packs them into the corresponding message type, and publishes them for other nodes to use.
## Subscribed Interfaces
The node has no explicit topic subscriptions; it relies on the TF framework to obtain the robot pose. A `tf2_ros::TransformListener` listens for TF data, and the transform from `/map` to `/base_link` is used to compute the robot pose.
## Published Interfaces
| Topic | Message Type | Description |
|:--------------|:----------------------------------------------|:-------------------------------|
| `/robot_pose` | `geometry_msgs/msg/PoseStamped` or `geometry_msgs/msg/Pose` | Robot pose in the map frame |
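For quick verification, a minimal subscriber sketch (not part of this package) that consumes `/robot_pose` when `is_stamped` is `true`, i.e. the topic carries `geometry_msgs/msg/PoseStamped`:
```cpp
// Minimal /robot_pose listener sketch; assumes is_stamped=true (PoseStamped messages).
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "geometry_msgs/msg/pose_stamped.hpp"

class RobotPoseListener : public rclcpp::Node
{
public:
    RobotPoseListener() : Node("robot_pose_listener")
    {
        sub_ = this->create_subscription<geometry_msgs::msg::PoseStamped>(
            "robot_pose", 10,
            [this](geometry_msgs::msg::PoseStamped::SharedPtr msg) {
                RCLCPP_INFO(this->get_logger(), "robot at x=%.2f y=%.2f",
                            msg->pose.position.x, msg->pose.position.y);
            });
    }

private:
    rclcpp::Subscription<geometry_msgs::msg::PoseStamped>::SharedPtr sub_;
};

int main(int argc, char *argv[])
{
    rclcpp::init(argc, argv);
    rclcpp::spin(std::make_shared<RobotPoseListener>());
    rclcpp::shutdown();
    return 0;
}
```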
## Additional Notes
### Issues in the Code
1. **Incomplete exception handling**
   - In `timer_callback`, if the TF lookup fails the exception is caught but no error details are logged, which makes problems hard to pin down when debugging.
2. **Insufficient parameter validation**
   - The node reads the parameters `map_frame`, `base_frame`, and `is_stamped` but never validates them, which can lead to unexpected behavior at runtime.
3. **Resource management**
   - The constructor creates several shared pointers (`tf_buffer_`, `tf_listener_`, `publisher_stamp`, `publisher_`) that are not explicitly released when the node is destroyed, which may leak resources.
4. **Sparse logging**
   - The node produces little log output while running, which makes its state and data flow hard to monitor.
### Suggested Changes
1. **Improve exception handling**
   - Log the concrete error message in the exception handler to ease debugging and fault localization.
   - Modified code:
```cpp
void timer_callback()
{
geometry_msgs::msg::TransformStamped transformStamped;
try
{
transformStamped = tf_buffer_->lookupTransform(map_frame, base_frame, this->now());
}
catch (tf2::TransformException &ex)
{
RCLCPP_ERROR(this->get_logger(), "Failed to lookup transform: %s", ex.what());
return;
}
// remaining logic...
}
```
2. **Add parameter validation**
   - After reading the parameters, add validation logic to make sure the values are sensible.
   - Example change:
```cpp
RobotPosePublisher() : Node("robot_pose_publisher")
{
// declare and read parameters...
if (map_frame.empty() || base_frame.empty())
{
RCLCPP_ERROR(this->get_logger(), "Invalid frame parameters");
throw std::runtime_error("Invalid frame parameters");
}
// remaining logic...
}
```
3. **Tighten resource management**
   - Explicitly release shared resources in the node's destructor to rule out leaks.
   - Modified code:
```cpp
~RobotPosePublisher()
{
tf_buffer_.reset();
tf_listener_.reset();
publisher_stamp.reset();
publisher_.reset();
}
```
4. **Add detailed logging**
   - Add log output at the key steps so the node's runtime state is easy to monitor.
   - Example change:
```cpp
void timer_callback()
{
// look up the TF transform...
RCLCPP_INFO(this->get_logger(), "Published robot pose: (%.2f, %.2f, %.2f)",
pose_stamped.pose.position.x,
pose_stamped.pose.position.y,
pose_stamped.pose.position.z);
// publish the message...
}
```
### Usage
1. **Run the node**
   - Make sure the ROS2 environment is configured correctly and the required dependencies are installed.
   - Run the node with:
```bash
ros2 run <package_name> robot_pose_publisher
```
2. **Verify the node**
   - Use `ros2 topic echo /robot_pose` to inspect the published pose data.
   - Make sure a transform from `/map` to `/base_link` exists in the TF tree; otherwise the node cannot work properly.
3. **Adjust parameters**
   - Depending on the application, the parameters `map_frame`, `base_frame`, and `is_stamped` can be adjusted via the command line or a configuration file.
With the changes and optimizations above, the node becomes more stable and maintainable and better suited to real-world use.

@ -0,0 +1,19 @@
from launch import LaunchDescription
from launch_ros.actions import Node
def generate_launch_description():
return LaunchDescription([
Node(
package="robot_pose_publisher_ros2",
node_executable="robot_pose_publisher",
node_name="robot_pose_publisher",
output="screen",
emulate_tty=True,
parameters=[
{"use_sim_time": True},
{"is_stamped": True},
{"map_frame": "map"},
{"base_frame": "base_link"}
]
)
])

@ -0,0 +1,22 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>robot_pose_publisher_ros2</name>
<version>0.0.0</version>
<description>TODO: Package description</description>
<maintainer email="milan.madathiparambil@gmail.com">milan</maintainer>
<license>TODO: License declaration</license>
<buildtool_depend>ament_cmake</buildtool_depend>
<test_depend>ament_lint_auto</test_depend>
<test_depend>ament_lint_common</test_depend>
<depend>rclcpp</depend>
<depend>geometry_msgs</depend>
<depend>tf2_ros</depend>
<export>
<build_type>ament_cmake</build_type>
</export>
</package>

@ -0,0 +1,98 @@
/*!
* \file robot_pose_publisher.cpp
* \brief Publishes the robot's position in a geometry_msgs/Pose message.
*
* Publishes the robot's position in a geometry_msgs/Pose message based on the TF
* difference between /map and /base_link.
*
* \author Milan - milan.madathiparambil@gmail.com
* \date April 20 2020
*/
#include <chrono>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "geometry_msgs/msg/pose_stamped.hpp"
#include "geometry_msgs/msg/pose.hpp"
#include "geometry_msgs/msg/transform_stamped.hpp"
#include "tf2_ros/transform_listener.h"
using namespace std::chrono_literals;
/* This example creates a subclass of Node and uses std::bind() to register a
* member function as a callback from the timer. */
class RobotPosePublisher : public rclcpp::Node
{
public:
RobotPosePublisher() : Node("robot_pose_publisher")
{
tf_buffer_ = std::make_shared<tf2_ros::Buffer>(this->get_clock());
tf_listener_ = std::make_shared<tf2_ros::TransformListener>(*tf_buffer_);
this->declare_parameter<std::string>("map_frame","map");
this->declare_parameter<std::string>("base_frame","base_link");
this->declare_parameter<bool>("is_stamped",false);
this->get_parameter("map_frame", map_frame);
this->get_parameter("base_frame", base_frame);
this->get_parameter("is_stamped", is_stamped);
if (is_stamped)
publisher_stamp = this->create_publisher<geometry_msgs::msg::PoseStamped>("robot_pose", 1);
else
publisher_ = this->create_publisher<geometry_msgs::msg::Pose>("robot_pose", 1);
timer_ = this->create_wall_timer(
50ms, std::bind(&RobotPosePublisher::timer_callback, this));
}
private:
void timer_callback()
{
geometry_msgs::msg::TransformStamped transformStamped;
try
{
transformStamped = tf_buffer_->lookupTransform(map_frame, base_frame,
this->now());
}
catch (tf2::TransformException &ex)
{
return;
}
geometry_msgs::msg::PoseStamped pose_stamped;
pose_stamped.header.frame_id = map_frame;
pose_stamped.header.stamp = this->now();
pose_stamped.pose.orientation.x = transformStamped.transform.rotation.x;
pose_stamped.pose.orientation.y = transformStamped.transform.rotation.y;
pose_stamped.pose.orientation.z = transformStamped.transform.rotation.z;
pose_stamped.pose.orientation.w = transformStamped.transform.rotation.w;
pose_stamped.pose.position.x = transformStamped.transform.translation.x;
pose_stamped.pose.position.y = transformStamped.transform.translation.y;
pose_stamped.pose.position.z = transformStamped.transform.translation.z;
if (is_stamped)
publisher_stamp->publish(pose_stamped);
else
publisher_->publish(pose_stamped.pose);
}
rclcpp::TimerBase::SharedPtr timer_;
rclcpp::Publisher<geometry_msgs::msg::PoseStamped>::SharedPtr publisher_stamp;
rclcpp::Publisher<geometry_msgs::msg::Pose>::SharedPtr publisher_;
size_t count_;
bool is_stamped = false;
std::string base_frame = "base_link";
std::string map_frame = "map";
std::shared_ptr<tf2_ros::TransformListener> tf_listener_;
std::shared_ptr<tf2_ros::Buffer> tf_buffer_;
};
int main(int argc, char *argv[])
{
rclcpp::init(argc, argv);
rclcpp::spin(std::make_shared<RobotPosePublisher>());
rclcpp::shutdown();
return 0;
}

@ -0,0 +1,48 @@
cmake_minimum_required(VERSION 3.5)
project(yahboom_web_savmap_interfaces)
# Default to C99
if(NOT CMAKE_C_STANDARD)
set(CMAKE_C_STANDARD 99)
endif()
# Default to C++14
if(NOT CMAKE_CXX_STANDARD)
set(CMAKE_CXX_STANDARD 14)
endif()
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
# find dependencies
find_package(ament_cmake REQUIRED)
# uncomment the following section in order to fill in
# further dependencies manually.
# find_package(<dependency> REQUIRED)
find_package(geometry_msgs REQUIRED)
find_package(rosidl_default_generators REQUIRED)
rosidl_generate_interfaces(${PROJECT_NAME}
"src/srv/WebSaveMap.srv"
)
if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
# the following line skips the linter which checks for copyrights
# uncomment the line when a copyright and license is not present in all source files
#set(ament_cmake_copyright_FOUND TRUE)
# the following line skips cpplint (only works in a git repo)
# uncomment the line when this package is not in a git repo
#set(ament_cmake_cpplint_FOUND TRUE)
ament_lint_auto_find_test_dependencies()
endif()
ament_package()

@ -0,0 +1,24 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>yahboom_web_savmap_interfaces</name>
<version>0.0.0</version>
<description>TODO: Package description</description>
<maintainer email="yahboom@todo.todo">yahboom</maintainer>
<license>TODO: License declaration</license>
<buildtool_depend>ament_cmake</buildtool_depend>
<test_depend>ament_lint_auto</test_depend>
<test_depend>ament_lint_common</test_depend>
<depend>geometry_msgs</depend>
<buildtool_depend>rosidl_default_generators</buildtool_depend>
<exec_depend>rosidl_default_runtime</exec_depend>
<member_of_group>rosidl_interface_packages</member_of_group>
<export>
<build_type>ament_cmake</build_type>
</export>
</package>

@ -0,0 +1,3 @@
string mapname
---
string response
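The generated `WebSaveMap` type can be used like any other ROS2 service interface. A minimal C++ client sketch follows; the service name `/save_map` and the node name are assumptions, since the server node is not part of this interface package.
```cpp
// Hypothetical WebSaveMap client sketch; the service name "/save_map" is an assumption.
#include <chrono>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "yahboom_web_savmap_interfaces/srv/web_save_map.hpp"

using WebSaveMap = yahboom_web_savmap_interfaces::srv::WebSaveMap;
using namespace std::chrono_literals;

int main(int argc, char *argv[])
{
    rclcpp::init(argc, argv);
    auto node = rclcpp::Node::make_shared("web_save_map_client");
    auto client = node->create_client<WebSaveMap>("/save_map");

    auto request = std::make_shared<WebSaveMap::Request>();
    request->mapname = "my_map";  // request field defined in WebSaveMap.srv

    if (!client->wait_for_service(2s)) {
        RCLCPP_ERROR(node->get_logger(), "save-map service not available");
        rclcpp::shutdown();
        return 1;
    }
    auto result = client->async_send_request(request);
    // rclcpp::FutureReturnCode is the Foxy-and-later name; older distros use rclcpp::executor::FutureReturnCode.
    if (rclcpp::spin_until_future_complete(node, result) == rclcpp::FutureReturnCode::SUCCESS) {
        RCLCPP_INFO(node->get_logger(), "response: %s", result.get()->response.c_str());
    }
    rclcpp::shutdown();
    return 0;
}
```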

@ -0,0 +1,72 @@
cmake_minimum_required(VERSION 3.5)
project(yahboomcar_KCFTracker)
# Default to C99
if(NOT CMAKE_C_STANDARD)
set(CMAKE_C_STANDARD 99)
endif()
# Default to C++14
if(NOT CMAKE_CXX_STANDARD)
set(CMAKE_CXX_STANDARD 14)
endif()
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
# find dependencies
find_package(ament_cmake REQUIRED)
find_package(geometry_msgs REQUIRED)
find_package(sensor_msgs REQUIRED)
find_package(std_msgs REQUIRED)
find_package(rclcpp REQUIRED)
find_package(OpenCV REQUIRED )
find_package(cv_bridge REQUIRED )
# uncomment the following section in order to fill in
# further dependencies manually.
# find_package(<dependency> REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
link_directories(${OpenCV_LIBRARY_DIRS})
include_directories(include/yahboomcar_KCFTracker)
if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
# the following line skips the linter which checks for copyrights
# uncomment the line when a copyright and license is not present in all source files
#set(ament_cmake_copyright_FOUND TRUE)
# the following line skips cpplint (only works in a git repo)
# uncomment the line when this package is not in a git repo
#set(ament_cmake_cpplint_FOUND TRUE)
ament_lint_auto_find_test_dependencies()
endif()
#add_subdirectory(include/yahboomcar_KCFTracker)
#add_executable(KCF_Tracker_Node src/KCF_Tracker_main.cpp src/KCF_Tracker.cpp)
#ament_target_dependencies(KCF_Tracker_Node rclcpp sensor_msgs geometry_msgs std_msgs)
#install(
#TARGETS
#KCF_Tracker_Node
#DESTINATION lib/${PROJECT_NAME}
#)
add_executable( KCF_Tracker_Node src/KCF_Tracker_main.cpp src/KCF_Tracker.cpp include/yahboomcar_KCFTracker/kcftracker.cpp include/yahboomcar_KCFTracker/fhog.cpp include/yahboomcar_KCFTracker/PID.cpp)
#target_link_libraries( KCF_Tracker_Node ${OpenCV_LIBRARIES} )
#add_subdirectory(include/yahboomcar_KCFTracker)
ament_target_dependencies(KCF_Tracker_Node rclcpp sensor_msgs geometry_msgs std_msgs cv_bridge)
install(
TARGETS
KCF_Tracker_Node
DESTINATION lib/${PROJECT_NAME}
)
install(
DIRECTORY
launch
DESTINATION share/${PROJECT_NAME}
)
ament_package()

@ -0,0 +1,6 @@
FILE(GLOB_RECURSE sourcefiles "*.cpp" "*.h")
add_library(kcftracker ${sourcefiles})
target_link_libraries(kcftracker ${OpenCV_LIBS}) # the tracker sources depend on OpenCV

@ -0,0 +1,130 @@
//
// Created by yahboom on 2021/7/30.
//
#ifndef TRANSBOT_ASTRA_KCF_TRACKER_H
#define TRANSBOT_ASTRA_KCF_TRACKER_H
#include <iostream>
#include <algorithm>
#include <dirent.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include "sensor_msgs/msg/image.hpp"
#include "sensor_msgs/image_encodings.hpp"
#include "geometry_msgs/msg/twist.hpp"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "kcftracker.h"
#include "PID.h"
#include <rclcpp/rclcpp.hpp>
#include "std_msgs/msg/bool.hpp"
#include <time.h>
using namespace std;
using namespace cv;
using std::placeholders::_1;
class ImageConverter :public rclcpp::Node{
rclcpp::Publisher<sensor_msgs::msg::Image>::SharedPtr image_pub_;
rclcpp::Publisher<geometry_msgs::msg::Twist>::SharedPtr vel_pub_;
rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr image_sub_;
rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr depth_sub_;
rclcpp::Subscription<std_msgs::msg::Bool>::SharedPtr Joy_sub_;
public:
ImageConverter():Node("image_converter")
{
float linear_KP=3.0;
float linear_KI=0.0;
float linear_KD=1.0;
float angular_KP=0.5;
float angular_KI=0.0;
float angular_KD=2.0;
float minDist = 1.0;
bool refresh = false;
this->declare_parameter<float>("linear_KP_",3.0);
this->declare_parameter<float>("linear_KI_",0.0);
this->declare_parameter<float>("linear_KD_",1.0);
this->declare_parameter<float>("angular_KP_",0.5);
this->declare_parameter<float>("angular_KI_",0.0);
this->declare_parameter<float>("angular_KD_",2.0);
this->declare_parameter<float>("minDist_",1.0);
this->declare_parameter<bool>("refresh_",false);
this->get_parameter<float>("linear_KP_",linear_KP);
this->get_parameter<float>("linear_KI_",linear_KI);
this->get_parameter<float>("linear_KD_",linear_KD);
this->get_parameter<float>("angular_KP_",angular_KP);
this->get_parameter<float>("angular_KI_",angular_KI);
this->get_parameter<float>("angular_KD_",angular_KD);
this->get_parameter<float>("minDist_",minDist);
this->get_parameter<bool>("refresh_",refresh);
this->linear_PID = new PID(linear_KP, linear_KI, linear_KD);
this->angular_PID = new PID(angular_KP, angular_KI, angular_KD);
//sub
image_sub_=this->create_subscription<sensor_msgs::msg::Image>("/camera/color/image_raw",1,std::bind(&ImageConverter::imageCb,this,_1));
depth_sub_=this->create_subscription<sensor_msgs::msg::Image>("/camera/depth/image_raw",1,std::bind(&ImageConverter::depthCb,this,_1));
Joy_sub_=this->create_subscription<std_msgs::msg::Bool>("JoyState",1,std::bind(&ImageConverter::JoyCb,this,_1));
//pub
image_pub_=this->create_publisher<sensor_msgs::msg::Image>("/KCF_image",1);
vel_pub_ =this->create_publisher<geometry_msgs::msg::Twist>("/cmd_vel",1);
}
//ros::Publisher pub;
PID *linear_PID;
PID *angular_PID;
/*ros::NodeHandle n;
ros::Subscriber image_sub_;
ros::Subscriber depth_sub_;
ros::Subscriber Joy_sub_;
ros::Publisher image_pub_;*/
const char *RGB_WINDOW = "rgb_img";
const char *DEPTH_WINDOW = "depth_img";
float minDist = 1.0;
float linear_speed = 0;
float rotation_speed = 0;
bool enable_get_depth = false;
float dist_val[5];
bool HOG = true;
bool FIXEDWINDOW = false;
bool MULTISCALE = true;
bool LAB = false;
int center_x;
KCFTracker tracker;
//dynamic_reconfigure::Server<yahboomcar_astra::KCFTrackerPIDConfig> server;
//dynamic_reconfigure::Server<yahboomcar_astra::KCFTrackerPIDConfig>::CallbackType f;
//ImageConverter(ros::NodeHandle &n);
//ImageConverter(Node);
//~ImageConverter();
void PIDcallback();
void Reset();
void Cancel();
void imageCb(const std::shared_ptr<sensor_msgs::msg::Image> msg) ;
void depthCb(const std::shared_ptr<sensor_msgs::msg::Image> msg) ;
void JoyCb(const std::shared_ptr<std_msgs::msg::Bool> msg) ;
//void depthCb(const sensor_msgs::ImageConstPtr &msg);
//void JoyCb(const std_msgs::BoolConstPtr &msg);
};
#endif //TRANSBOT_ASTRA_KCF_TRACKER_H
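The node's entry point lives in `src/KCF_Tracker_main.cpp`, which is not included in this change; a minimal sketch of how the `ImageConverter` node would typically be spun up (the header name is assumed here):
```cpp
// Hypothetical entry-point sketch for KCF_Tracker_Node; the real KCF_Tracker_main.cpp is not shown here.
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include "KCF_Tracker.h"  // assumed header name for the ImageConverter declaration above

int main(int argc, char *argv[])
{
    rclcpp::init(argc, argv);
    // ImageConverter subscribes to the RGB/depth images and publishes /cmd_vel and /KCF_image.
    rclcpp::spin(std::make_shared<ImageConverter>());
    rclcpp::shutdown();
    return 0;
}
```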

@ -0,0 +1,40 @@
#include "PID.h"
#include <iostream>
using namespace std;
PID::PID(float kp, float ki, float kd) {
this->kp = kp;
this->ki = ki;
this->kd = kd;
}
float PID::compute(float target, float current) {
// 计算误差
//Calculation error
float error = target - current;
// 误差的累计
//Accumulation of errors
intergral += error;
// 本次误差和上一次误差的差异
//The difference between this error and the previous error
derivative = error - prevError;
// 套用pid的公式
//Apply the formula of pid
targetpoint = kp * error + ki * intergral + kd * derivative;
// 记录上一次的误差
//Record the last error
prevError = error;
return targetpoint;
}
void PID::reset() {
targetpoint = 0;
intergral = 0;
derivative = 0;
prevError = 0;
}
void PID::Set_PID(float kp, float ki, float kd) {
this->kp = kp;
this->ki = ki;
this->kd = kd;
}

@ -0,0 +1,35 @@
class PID {
public:
float kp;
float ki;
float kd;
float targetpoint;
// 上一次的误差
//Last time error
float prevError;
// 积分
//integral
float intergral;
// 微分
//differential
float derivative;
PID(float kp, float ki, float kd);
void Set_PID(float kp, float ki, float kd);
/**
* PID calculation function
* @param target
* @param current
* @return pwm
*/
float compute(float target, float current);
/**
* Reset all errors: when the set speed is different from the last time
*/
void reset();
};
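A minimal usage sketch of the PID class; the gains and measurements below are illustrative only:
```cpp
// Illustrative use of the PID class above; the numbers are arbitrary.
#include <cstdio>
#include "PID.h"

int main()
{
    PID linear_pid(3.0f, 0.0f, 1.0f);  // kp, ki, kd (the linear defaults used by the tracker node)
    float target = 1.0f;               // desired distance to the target (m)
    float current = 1.6f;              // measured distance (m)

    // One control step: output = kp*e + ki*sum(e) + kd*(e - e_prev), with e = target - current.
    float cmd = linear_pid.compute(target, current);
    std::printf("control output: %f\n", cmd);

    linear_pid.reset();                // clear accumulated error when the setpoint changes
    return 0;
}
```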

@ -0,0 +1,237 @@
/*
Author: Christian Bailer
Contact address: Christian.Bailer@dfki.de
Department Augmented Vision DFKI
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are disclaimed.
In no event shall copyright holders or contributors be liable for any direct,
indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#pragma once
//#include <cv.h>
#ifndef _OPENCV_FFTTOOLS_HPP_
#define _OPENCV_FFTTOOLS_HPP_
#endif
//NOTE: FFTW support is still shaky, disabled for now.
/*#ifdef USE_FFTW
#include <fftw3.h>
#endif*/
namespace FFTTools
{
// Previous declarations, to avoid warnings
cv::Mat fftd(cv::Mat img, bool backwards = false);
cv::Mat real(cv::Mat img);
cv::Mat imag(cv::Mat img);
cv::Mat magnitude(cv::Mat img);
cv::Mat complexMultiplication(cv::Mat a, cv::Mat b);
cv::Mat complexDivision(cv::Mat a, cv::Mat b);
void rearrange(cv::Mat &img);
void normalizedLogTransform(cv::Mat &img);
cv::Mat fftd(cv::Mat img, bool backwards)
{
/*
#ifdef USE_FFTW
fftw_complex * fm = (fftw_complex*) fftw_malloc(sizeof (fftw_complex) * img.cols * img.rows);
fftw_plan p = fftw_plan_dft_2d(img.rows, img.cols, fm, fm, backwards ? 1 : -1, 0 * FFTW_ESTIMATE);
if (img.channels() == 1)
{
for (int i = 0; i < img.rows; i++)
for (int j = 0; j < img.cols; j++)
{
fm[i * img.cols + j][0] = img.at<float>(i, j);
fm[i * img.cols + j][1] = 0;
}
}
else
{
assert(img.channels() == 2);
for (int i = 0; i < img.rows; i++)
for (int j = 0; j < img.cols; j++)
{
fm[i * img.cols + j][0] = img.at<cv::Vec2d > (i, j)[0];
fm[i * img.cols + j][1] = img.at<cv::Vec2d > (i, j)[1];
}
}
fftw_execute(p);
cv::Mat res(img.rows, img.cols, CV_64FC2);
for (int i = 0; i < img.rows; i++)
for (int j = 0; j < img.cols; j++)
{
res.at<cv::Vec2d > (i, j)[0] = fm[i * img.cols + j][0];
res.at<cv::Vec2d > (i, j)[1] = fm[i * img.cols + j][1];
// _iout(fm[i * img.cols + j][0]);
}
if (backwards)res *= 1.d / (float) (res.cols * res.rows);
fftw_free(p);
fftw_free(fm);
return res;
#else
*/
if (img.channels() == 1)
{
cv::Mat planes[] = {cv::Mat_<float> (img), cv::Mat_<float>::zeros(img.size())};
//cv::Mat planes[] = {cv::Mat_<double> (img), cv::Mat_<double>::zeros(img.size())};
cv::merge(planes, 2, img);
}
cv::dft(img, img, backwards ? (cv::DFT_INVERSE | cv::DFT_SCALE) : 0 );
return img;
/*#endif*/
}
cv::Mat real(cv::Mat img)
{
std::vector<cv::Mat> planes;
cv::split(img, planes);
return planes[0];
}
cv::Mat imag(cv::Mat img)
{
std::vector<cv::Mat> planes;
cv::split(img, planes);
return planes[1];
}
cv::Mat magnitude(cv::Mat img)
{
cv::Mat res;
std::vector<cv::Mat> planes;
cv::split(img, planes); // planes[0] = Re(DFT(I), planes[1] = Im(DFT(I))
if (planes.size() == 1) res = cv::abs(img);
else if (planes.size() == 2) cv::magnitude(planes[0], planes[1], res); // planes[0] = magnitude
else assert(0);
return res;
}
cv::Mat complexMultiplication(cv::Mat a, cv::Mat b)
{
std::vector<cv::Mat> pa;
std::vector<cv::Mat> pb;
cv::split(a, pa);
cv::split(b, pb);
std::vector<cv::Mat> pres;
pres.push_back(pa[0].mul(pb[0]) - pa[1].mul(pb[1]));
pres.push_back(pa[0].mul(pb[1]) + pa[1].mul(pb[0]));
cv::Mat res;
cv::merge(pres, res);
return res;
}
cv::Mat complexDivision(cv::Mat a, cv::Mat b)
{
std::vector<cv::Mat> pa;
std::vector<cv::Mat> pb;
cv::split(a, pa);
cv::split(b, pb);
cv::Mat divisor = 1. / (pb[0].mul(pb[0]) + pb[1].mul(pb[1]));
std::vector<cv::Mat> pres;
pres.push_back((pa[0].mul(pb[0]) + pa[1].mul(pb[1])).mul(divisor));
pres.push_back((pa[1].mul(pb[0]) + pa[0].mul(pb[1])).mul(divisor));
cv::Mat res;
cv::merge(pres, res);
return res;
}
void rearrange(cv::Mat &img)
{
// img = img(cv::Rect(0, 0, img.cols & -2, img.rows & -2));
int cx = img.cols / 2;
int cy = img.rows / 2;
cv::Mat q0(img, cv::Rect(0, 0, cx, cy)); // Top-Left - Create a ROI per quadrant
cv::Mat q1(img, cv::Rect(cx, 0, cx, cy)); // Top-Right
cv::Mat q2(img, cv::Rect(0, cy, cx, cy)); // Bottom-Left
cv::Mat q3(img, cv::Rect(cx, cy, cx, cy)); // Bottom-Right
cv::Mat tmp; // swap quadrants (Top-Left with Bottom-Right)
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);
q1.copyTo(tmp); // swap quadrant (Top-Right with Bottom-Left)
q2.copyTo(q1);
tmp.copyTo(q2);
}
/*
template < typename type>
cv::Mat fouriertransFull(const cv::Mat & in)
{
return fftd(in);
cv::Mat planes[] = {cv::Mat_<type > (in), cv::Mat_<type>::zeros(in.size())};
cv::Mat t;
assert(planes[0].depth() == planes[1].depth());
assert(planes[0].size == planes[1].size);
cv::merge(planes, 2, t);
cv::dft(t, t);
//cv::normalize(a, a, 0, 1, CV_MINMAX);
//cv::normalize(t, t, 0, 1, CV_MINMAX);
// cv::imshow("a",real(a));
// cv::imshow("b",real(t));
// cv::waitKey(0);
return t;
}*/
void normalizedLogTransform(cv::Mat &img)
{
img = cv::abs(img);
img += cv::Scalar::all(1);
cv::log(img, img);
// cv::normalize(img, img, 0, 1, CV_MINMAX);
}
}
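These helpers operate on the 2-channel spectra produced by `cv::dft`. A small sketch of the typical round trip (forward DFT, element-wise complex product, inverse DFT); note that this header expects the OpenCV headers to be included before it:
```cpp
// Sketch of the FFTTools round trip: forward DFT, complex product, inverse DFT.
#include <iostream>
#include <opencv2/core/core.hpp>
#include "ffttools.hpp"  // relies on OpenCV types being declared already

int main()
{
    cv::Mat a = (cv::Mat_<float>(2, 2) << 1, 2, 3, 4);
    cv::Mat b = (cv::Mat_<float>(2, 2) << 0, 1, 0, 0);

    cv::Mat A = FFTTools::fftd(a.clone());                      // 2-channel spectrum of a
    cv::Mat B = FFTTools::fftd(b.clone());                      // 2-channel spectrum of b
    cv::Mat prod = FFTTools::complexMultiplication(A, B);       // element-wise complex product
    cv::Mat back = FFTTools::real(FFTTools::fftd(prod, true));  // inverse DFT, real part

    std::cout << back << std::endl;  // circular convolution of a and b
    return 0;
}
```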

@ -0,0 +1,511 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2010-2013, University of Nizhny Novgorod, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
//Modified from latentsvm module's "lsvmc_featurepyramid.cpp".
//#include "precomp.hpp"
//#include "_lsvmc_latentsvm.h"
//#include "_lsvmc_resizeimg.h"
#include "fhog.h"
#ifdef HAVE_TBB
#include <tbb/tbb.h>
#include "tbb/parallel_for.h"
#include "tbb/blocked_range.h"
#endif
#ifndef max
#define max(a,b) (((a) > (b)) ? (a) : (b))
#endif
#ifndef min
#define min(a,b) (((a) < (b)) ? (a) : (b))
#endif
/*
// Getting feature map for the selected subimage
//
// API
// int getFeatureMaps(const IplImage * image, const int k, featureMap **map);
// INPUT
// image - selected subimage
// k - size of cells
// OUTPUT
// map - feature map
// RESULT
// Error status
*/
int getFeatureMaps(const IplImage* image, const int k, CvLSVMFeatureMapCaskade **map)
{
int sizeX, sizeY;
int p, px, stringSize;
int height, width, numChannels;
int i, j, kk, c, ii, jj, d;
float * datadx, * datady;
int ch;
float magnitude, x, y, tx, ty;
IplImage * dx, * dy;
int *nearest;
float *w, a_x, b_x;
float kernel[3] = {-1.f, 0.f, 1.f};
CvMat kernel_dx = cvMat(1, 3, CV_32F, kernel);
CvMat kernel_dy = cvMat(3, 1, CV_32F, kernel);
float * r;
int * alfa;
float boundary_x[NUM_SECTOR + 1];
float boundary_y[NUM_SECTOR + 1];
float max, dotProd;
int maxi;
height = image->height;
width = image->width ;
numChannels = image->nChannels;
dx = cvCreateImage(cvSize(image->width, image->height),
IPL_DEPTH_32F, 3);
dy = cvCreateImage(cvSize(image->width, image->height),
IPL_DEPTH_32F, 3);
sizeX = width / k;
sizeY = height / k;
px = 3 * NUM_SECTOR;
p = px;
stringSize = sizeX * p;
allocFeatureMapObject(map, sizeX, sizeY, p);
cvFilter2D(image, dx, &kernel_dx, cvPoint(-1, 0));
cvFilter2D(image, dy, &kernel_dy, cvPoint(0, -1));
float arg_vector;
for(i = 0; i <= NUM_SECTOR; i++)
{
arg_vector = ( (float) i ) * ( (float)(PI) / (float)(NUM_SECTOR) );
boundary_x[i] = cosf(arg_vector);
boundary_y[i] = sinf(arg_vector);
}/*for(i = 0; i <= NUM_SECTOR; i++) */
r = (float *)malloc( sizeof(float) * (width * height));
alfa = (int *)malloc( sizeof(int ) * (width * height * 2));
for(j = 1; j < height - 1; j++)
{
datadx = (float*)(dx->imageData + dx->widthStep * j);
datady = (float*)(dy->imageData + dy->widthStep * j);
for(i = 1; i < width - 1; i++)
{
c = 0;
x = (datadx[i * numChannels + c]);
y = (datady[i * numChannels + c]);
r[j * width + i] =sqrtf(x * x + y * y);
for(ch = 1; ch < numChannels; ch++)
{
tx = (datadx[i * numChannels + ch]);
ty = (datady[i * numChannels + ch]);
magnitude = sqrtf(tx * tx + ty * ty);
if(magnitude > r[j * width + i])
{
r[j * width + i] = magnitude;
c = ch;
x = tx;
y = ty;
}
}/*for(ch = 1; ch < numChannels; ch++)*/
max = boundary_x[0] * x + boundary_y[0] * y;
maxi = 0;
for (kk = 0; kk < NUM_SECTOR; kk++)
{
dotProd = boundary_x[kk] * x + boundary_y[kk] * y;
if (dotProd > max)
{
max = dotProd;
maxi = kk;
}
else
{
if (-dotProd > max)
{
max = -dotProd;
maxi = kk + NUM_SECTOR;
}
}
}
alfa[j * width * 2 + i * 2 ] = maxi % NUM_SECTOR;
alfa[j * width * 2 + i * 2 + 1] = maxi;
}/*for(i = 0; i < width; i++)*/
}/*for(j = 0; j < height; j++)*/
nearest = (int *)malloc(sizeof(int ) * k);
w = (float*)malloc(sizeof(float) * (k * 2));
for(i = 0; i < k / 2; i++)
{
nearest[i] = -1;
}/*for(i = 0; i < k / 2; i++)*/
for(i = k / 2; i < k; i++)
{
nearest[i] = 1;
}/*for(i = k / 2; i < k; i++)*/
for(j = 0; j < k / 2; j++)
{
b_x = k / 2 + j + 0.5f;
a_x = k / 2 - j - 0.5f;
w[j * 2 ] = 1.0f/a_x * ((a_x * b_x) / ( a_x + b_x));
w[j * 2 + 1] = 1.0f/b_x * ((a_x * b_x) / ( a_x + b_x));
}/*for(j = 0; j < k / 2; j++)*/
for(j = k / 2; j < k; j++)
{
a_x = j - k / 2 + 0.5f;
b_x =-j + k / 2 - 0.5f + k;
w[j * 2 ] = 1.0f/a_x * ((a_x * b_x) / ( a_x + b_x));
w[j * 2 + 1] = 1.0f/b_x * ((a_x * b_x) / ( a_x + b_x));
}/*for(j = k / 2; j < k; j++)*/
for(i = 0; i < sizeY; i++)
{
for(j = 0; j < sizeX; j++)
{
for(ii = 0; ii < k; ii++)
{
for(jj = 0; jj < k; jj++)
{
if ((i * k + ii > 0) &&
(i * k + ii < height - 1) &&
(j * k + jj > 0) &&
(j * k + jj < width - 1))
{
d = (k * i + ii) * width + (j * k + jj);
(*map)->map[ i * stringSize + j * (*map)->numFeatures + alfa[d * 2 ]] +=
r[d] * w[ii * 2] * w[jj * 2];
(*map)->map[ i * stringSize + j * (*map)->numFeatures + alfa[d * 2 + 1] + NUM_SECTOR] +=
r[d] * w[ii * 2] * w[jj * 2];
if ((i + nearest[ii] >= 0) &&
(i + nearest[ii] <= sizeY - 1))
{
(*map)->map[(i + nearest[ii]) * stringSize + j * (*map)->numFeatures + alfa[d * 2 ] ] +=
r[d] * w[ii * 2 + 1] * w[jj * 2 ];
(*map)->map[(i + nearest[ii]) * stringSize + j * (*map)->numFeatures + alfa[d * 2 + 1] + NUM_SECTOR] +=
r[d] * w[ii * 2 + 1] * w[jj * 2 ];
}
if ((j + nearest[jj] >= 0) &&
(j + nearest[jj] <= sizeX - 1))
{
(*map)->map[i * stringSize + (j + nearest[jj]) * (*map)->numFeatures + alfa[d * 2 ] ] +=
r[d] * w[ii * 2] * w[jj * 2 + 1];
(*map)->map[i * stringSize + (j + nearest[jj]) * (*map)->numFeatures + alfa[d * 2 + 1] + NUM_SECTOR] +=
r[d] * w[ii * 2] * w[jj * 2 + 1];
}
if ((i + nearest[ii] >= 0) &&
(i + nearest[ii] <= sizeY - 1) &&
(j + nearest[jj] >= 0) &&
(j + nearest[jj] <= sizeX - 1))
{
(*map)->map[(i + nearest[ii]) * stringSize + (j + nearest[jj]) * (*map)->numFeatures + alfa[d * 2 ] ] +=
r[d] * w[ii * 2 + 1] * w[jj * 2 + 1];
(*map)->map[(i + nearest[ii]) * stringSize + (j + nearest[jj]) * (*map)->numFeatures + alfa[d * 2 + 1] + NUM_SECTOR] +=
r[d] * w[ii * 2 + 1] * w[jj * 2 + 1];
}
}
}/*for(jj = 0; jj < k; jj++)*/
}/*for(ii = 0; ii < k; ii++)*/
}/*for(j = 1; j < sizeX - 1; j++)*/
}/*for(i = 1; i < sizeY - 1; i++)*/
cvReleaseImage(&dx);
cvReleaseImage(&dy);
free(w);
free(nearest);
free(r);
free(alfa);
return LATENT_SVM_OK;
}
/*
// Feature map Normalization and Truncation
//
// API
// int normalizeAndTruncate(featureMap *map, const float alfa);
// INPUT
// map - feature map
// alfa - truncation threshold
// OUTPUT
// map - truncated and normalized feature map
// RESULT
// Error status
*/
int normalizeAndTruncate(CvLSVMFeatureMapCaskade *map, const float alfa)
{
int i,j, ii;
int sizeX, sizeY, p, pos, pp, xp, pos1, pos2;
float * partOfNorm; // norm of C(i, j)
float * newData;
float valOfNorm;
sizeX = map->sizeX;
sizeY = map->sizeY;
partOfNorm = (float *)malloc (sizeof(float) * (sizeX * sizeY));
p = NUM_SECTOR;
xp = NUM_SECTOR * 3;
pp = NUM_SECTOR * 12;
for(i = 0; i < sizeX * sizeY; i++)
{
valOfNorm = 0.0f;
pos = i * map->numFeatures;
for(j = 0; j < p; j++)
{
valOfNorm += map->map[pos + j] * map->map[pos + j];
}/*for(j = 0; j < p; j++)*/
partOfNorm[i] = valOfNorm;
}/*for(i = 0; i < sizeX * sizeY; i++)*/
sizeX -= 2;
sizeY -= 2;
newData = (float *)malloc (sizeof(float) * (sizeX * sizeY * pp));
//normalization
for(i = 1; i <= sizeY; i++)
{
for(j = 1; j <= sizeX; j++)
{
valOfNorm = sqrtf(
partOfNorm[(i )*(sizeX + 2) + (j )] +
partOfNorm[(i )*(sizeX + 2) + (j + 1)] +
partOfNorm[(i + 1)*(sizeX + 2) + (j )] +
partOfNorm[(i + 1)*(sizeX + 2) + (j + 1)]) + FLT_EPSILON;
pos1 = (i ) * (sizeX + 2) * xp + (j ) * xp;
pos2 = (i-1) * (sizeX ) * pp + (j-1) * pp;
for(ii = 0; ii < p; ii++)
{
newData[pos2 + ii ] = map->map[pos1 + ii ] / valOfNorm;
}/*for(ii = 0; ii < p; ii++)*/
for(ii = 0; ii < 2 * p; ii++)
{
newData[pos2 + ii + p * 4] = map->map[pos1 + ii + p] / valOfNorm;
}/*for(ii = 0; ii < 2 * p; ii++)*/
valOfNorm = sqrtf(
partOfNorm[(i )*(sizeX + 2) + (j )] +
partOfNorm[(i )*(sizeX + 2) + (j + 1)] +
partOfNorm[(i - 1)*(sizeX + 2) + (j )] +
partOfNorm[(i - 1)*(sizeX + 2) + (j + 1)]) + FLT_EPSILON;
for(ii = 0; ii < p; ii++)
{
newData[pos2 + ii + p ] = map->map[pos1 + ii ] / valOfNorm;
}/*for(ii = 0; ii < p; ii++)*/
for(ii = 0; ii < 2 * p; ii++)
{
newData[pos2 + ii + p * 6] = map->map[pos1 + ii + p] / valOfNorm;
}/*for(ii = 0; ii < 2 * p; ii++)*/
valOfNorm = sqrtf(
partOfNorm[(i )*(sizeX + 2) + (j )] +
partOfNorm[(i )*(sizeX + 2) + (j - 1)] +
partOfNorm[(i + 1)*(sizeX + 2) + (j )] +
partOfNorm[(i + 1)*(sizeX + 2) + (j - 1)]) + FLT_EPSILON;
for(ii = 0; ii < p; ii++)
{
newData[pos2 + ii + p * 2] = map->map[pos1 + ii ] / valOfNorm;
}/*for(ii = 0; ii < p; ii++)*/
for(ii = 0; ii < 2 * p; ii++)
{
newData[pos2 + ii + p * 8] = map->map[pos1 + ii + p] / valOfNorm;
}/*for(ii = 0; ii < 2 * p; ii++)*/
valOfNorm = sqrtf(
partOfNorm[(i )*(sizeX + 2) + (j )] +
partOfNorm[(i )*(sizeX + 2) + (j - 1)] +
partOfNorm[(i - 1)*(sizeX + 2) + (j )] +
partOfNorm[(i - 1)*(sizeX + 2) + (j - 1)]) + FLT_EPSILON;
for(ii = 0; ii < p; ii++)
{
newData[pos2 + ii + p * 3 ] = map->map[pos1 + ii ] / valOfNorm;
}/*for(ii = 0; ii < p; ii++)*/
for(ii = 0; ii < 2 * p; ii++)
{
newData[pos2 + ii + p * 10] = map->map[pos1 + ii + p] / valOfNorm;
}/*for(ii = 0; ii < 2 * p; ii++)*/
}/*for(j = 1; j <= sizeX; j++)*/
}/*for(i = 1; i <= sizeY; i++)*/
//truncation
for(i = 0; i < sizeX * sizeY * pp; i++)
{
if(newData [i] > alfa) newData [i] = alfa;
}/*for(i = 0; i < sizeX * sizeY * pp; i++)*/
//swop data
map->numFeatures = pp;
map->sizeX = sizeX;
map->sizeY = sizeY;
free (map->map);
free (partOfNorm);
map->map = newData;
return LATENT_SVM_OK;
}
/*
// Feature map reduction
// In each cell we reduce dimension of the feature vector
// according to original paper special procedure
//
// API
// int PCAFeatureMaps(featureMap *map)
// INPUT
// map - feature map
// OUTPUT
// map - feature map
// RESULT
// Error status
*/
int PCAFeatureMaps(CvLSVMFeatureMapCaskade *map)
{
int i,j, ii, jj, k;
int sizeX, sizeY, p, pp, xp, yp, pos1, pos2;
float * newData;
float val;
float nx, ny;
sizeX = map->sizeX;
sizeY = map->sizeY;
p = map->numFeatures;
pp = NUM_SECTOR * 3 + 4;
yp = 4;
xp = NUM_SECTOR;
nx = 1.0f / sqrtf((float)(xp * 2));
ny = 1.0f / sqrtf((float)(yp ));
newData = (float *)malloc (sizeof(float) * (sizeX * sizeY * pp));
for(i = 0; i < sizeY; i++)
{
for(j = 0; j < sizeX; j++)
{
pos1 = ((i)*sizeX + j)*p;
pos2 = ((i)*sizeX + j)*pp;
k = 0;
for(jj = 0; jj < xp * 2; jj++)
{
val = 0;
for(ii = 0; ii < yp; ii++)
{
val += map->map[pos1 + yp * xp + ii * xp * 2 + jj];
}/*for(ii = 0; ii < yp; ii++)*/
newData[pos2 + k] = val * ny;
k++;
}/*for(jj = 0; jj < xp * 2; jj++)*/
for(jj = 0; jj < xp; jj++)
{
val = 0;
for(ii = 0; ii < yp; ii++)
{
val += map->map[pos1 + ii * xp + jj];
}/*for(ii = 0; ii < yp; ii++)*/
newData[pos2 + k] = val * ny;
k++;
}/*for(jj = 0; jj < xp; jj++)*/
for(ii = 0; ii < yp; ii++)
{
val = 0;
for(jj = 0; jj < 2 * xp; jj++)
{
val += map->map[pos1 + yp * xp + ii * xp * 2 + jj];
}/*for(jj = 0; jj < xp; jj++)*/
newData[pos2 + k] = val * nx;
k++;
} /*for(ii = 0; ii < yp; ii++)*/
}/*for(j = 0; j < sizeX; j++)*/
}/*for(i = 0; i < sizeY; i++)*/
//swop data
map->numFeatures = pp;
free (map->map);
map->map = newData;
return LATENT_SVM_OK;
}
//modified from "lsvmc_routine.cpp"
int allocFeatureMapObject(CvLSVMFeatureMapCaskade **obj, const int sizeX,
const int sizeY, const int numFeatures)
{
int i;
(*obj) = (CvLSVMFeatureMapCaskade *)malloc(sizeof(CvLSVMFeatureMapCaskade));
(*obj)->sizeX = sizeX;
(*obj)->sizeY = sizeY;
(*obj)->numFeatures = numFeatures;
(*obj)->map = (float *) malloc(sizeof (float) *
(sizeX * sizeY * numFeatures));
for(i = 0; i < sizeX * sizeY * numFeatures; i++)
{
(*obj)->map[i] = 0.0f;
}
return LATENT_SVM_OK;
}
int freeFeatureMapObject (CvLSVMFeatureMapCaskade **obj)
{
if(*obj == NULL) return LATENT_SVM_MEM_NULL;
free((*obj)->map);
free(*obj);
(*obj) = NULL;
return LATENT_SVM_OK;
}

@ -0,0 +1,178 @@
/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2010-2013, University of Nizhny Novgorod, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/
//Modified from latentsvm module's "_lsvmc_latentsvm.h".
/*****************************************************************************/
/* Latent SVM prediction API */
/*****************************************************************************/
#ifndef _FHOG_H_
#define _FHOG_H_
#include <stdio.h>
//#include "_lsvmc_types.h"
//#include "_lsvmc_error.h"
//#include "_lsvmc_routine.h"
//#include "opencv2/imgproc.hpp"
#include "opencv2/imgproc/imgproc_c.h"
//modified from "_lsvmc_types.h"
// DataType: STRUCT featureMap
// FEATURE MAP DESCRIPTION
// Rectangular map (sizeX x sizeY),
// every cell stores feature vector (dimension = numFeatures)
// map - matrix of feature vectors
// to set and get feature vectors (i,j)
// used formula map[(j * sizeX + i) * p + k], where
// k - component of feature vector in cell (i, j)
typedef struct{
int sizeX;
int sizeY;
int numFeatures;
float *map;
} CvLSVMFeatureMapCaskade;
#include "float.h"
#define PI CV_PI
#define EPS 0.000001
#define F_MAX FLT_MAX
#define F_MIN -FLT_MAX
// The number of elements in bin
// The number of sectors in gradient histogram building
#define NUM_SECTOR 9
// The number of levels in image resize procedure
// We need Lambda levels to resize image twice
#define LAMBDA 10
// Block size. Used in feature pyramid building procedure
#define SIDE_LENGTH 8
#define VAL_OF_TRUNCATE 0.2f
//modified from "_lsvm_error.h"
#define LATENT_SVM_OK 0
#define LATENT_SVM_MEM_NULL 2
#define DISTANCE_TRANSFORM_OK 1
#define DISTANCE_TRANSFORM_GET_INTERSECTION_ERROR -1
#define DISTANCE_TRANSFORM_ERROR -2
#define DISTANCE_TRANSFORM_EQUAL_POINTS -3
#define LATENT_SVM_GET_FEATURE_PYRAMID_FAILED -4
#define LATENT_SVM_SEARCH_OBJECT_FAILED -5
#define LATENT_SVM_FAILED_SUPERPOSITION -6
#define FILTER_OUT_OF_BOUNDARIES -7
#define LATENT_SVM_TBB_SCHEDULE_CREATION_FAILED -8
#define LATENT_SVM_TBB_NUMTHREADS_NOT_CORRECT -9
#define FFT_OK 2
#define FFT_ERROR -10
#define LSVM_PARSER_FILE_NOT_FOUND -11
/*
// Getting feature map for the selected subimage
//
// API
// int getFeatureMaps(const IplImage * image, const int k, featureMap **map);
// INPUT
// image - selected subimage
// k - size of cells
// OUTPUT
// map - feature map
// RESULT
// Error status
*/
int getFeatureMaps(const IplImage * image, const int k, CvLSVMFeatureMapCaskade **map);
/*
// Feature map Normalization and Truncation
//
// API
// int normalizationAndTruncationFeatureMaps(featureMap *map, const float alfa);
// INPUT
// map - feature map
// alfa - truncation threshold
// OUTPUT
// map - truncated and normalized feature map
// RESULT
// Error status
*/
int normalizeAndTruncate(CvLSVMFeatureMapCaskade *map, const float alfa);
/*
// Feature map reduction
// In each cell we reduce dimension of the feature vector
// according to original paper special procedure
//
// API
// int PCAFeatureMaps(featureMap *map)
// INPUT
// map - feature map
// OUTPUT
// map - feature map
// RESULT
// Error status
*/
int PCAFeatureMaps(CvLSVMFeatureMapCaskade *map);
//modified from "lsvmc_routine.h"
int allocFeatureMapObject(CvLSVMFeatureMapCaskade **obj, const int sizeX, const int sizeY,
const int p);
int freeFeatureMapObject (CvLSVMFeatureMapCaskade **obj);
#endif
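A compact sketch of the FHOG pipeline declared above (extract, normalize and truncate, PCA-reduce, free); the input patch here is a dummy image:
```cpp
// Sketch of the FHOG feature pipeline; the 96x96 patch is a placeholder input.
#include <opencv2/core/core.hpp>
#include "fhog.h"

int main()
{
    cv::Mat patch(96, 96, CV_8UC3, cv::Scalar(127, 127, 127));  // dummy 3-channel patch
    IplImage ipl = IplImage(patch);  // legacy C header view of the Mat (OpenCV 4 uses cvIplImage)

    CvLSVMFeatureMapCaskade *map = nullptr;
    getFeatureMaps(&ipl, 4, &map);    // 4-pixel cells
    normalizeAndTruncate(map, 0.2f);  // VAL_OF_TRUNCATE
    PCAFeatureMaps(map);              // reduces each cell to NUM_SECTOR*3+4 = 31 features

    // map->map now holds sizeX * sizeY * numFeatures floats, one feature vector per cell.
    freeFeatureMapObject(&map);
    return 0;
}
```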

@ -0,0 +1,510 @@
/*
Tracker based on Kernelized Correlation Filter (KCF) [1] and Circulant Structure with Kernels (CSK) [2].
CSK is implemented by using raw gray level features, since it is a single-channel filter.
KCF is implemented by using HOG features (the default), since it extends CSK to multiple channels.
[1] J. F. Henriques, R. Caseiro, P. Martins, J. Batista,
"High-Speed Tracking with Kernelized Correlation Filters", TPAMI 2015.
[2] J. F. Henriques, R. Caseiro, P. Martins, J. Batista,
"Exploiting the Circulant Structure of Tracking-by-detection with Kernels", ECCV 2012.
Authors: Joao Faro, Christian Bailer, Joao F. Henriques
Contacts: joaopfaro@gmail.com, Christian.Bailer@dfki.de, henriques@isr.uc.pt
Institute of Systems and Robotics - University of Coimbra / Department Augmented Vision DFKI
Constructor parameters, all boolean:
hog: use HOG features (default), otherwise use raw pixels
fixed_window: fix window size (default), otherwise use ROI size (slower but more accurate)
multiscale: use multi-scale tracking (default; cannot be used with fixed_window = true)
Default values are set for all properties of the tracker depending on the above choices.
Their values can be customized further before calling init():
interp_factor: linear interpolation factor for adaptation
sigma: gaussian kernel bandwidth
lambda: regularization
cell_size: HOG cell size
padding: area surrounding the target, relative to its size
output_sigma_factor: bandwidth of gaussian target
template_size: template size in pixels, 0 to use ROI size
scale_step: scale step for multi-scale estimation, 1 to disable it
scale_weight: to downweight detection scores of other scales for added stability
For speed, the value (template_size/cell_size) should be a power of 2 or a product of small prime numbers.
Inputs to init():
image is the initial frame.
roi is a cv::Rect with the target positions in the initial frame
Inputs to update():
image is the current frame.
Outputs of update():
cv::Rect with target positions for the current frame
By downloading, copying, installing or using the software you agree to this license.
If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are disclaimed.
In no event shall copyright holders or contributors be liable for any direct,
indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#include <iostream>
#ifndef _KCFTRACKER_HEADERS
#include "kcftracker.h"
#include "ffttools.h"
#include "recttools.h"
#include "fhog.h"
#include "labdata.h"
#endif
// Constructor
KCFTracker::KCFTracker(bool hog, bool fixed_window, bool multiscale, bool lab) {
// Parameters equal in all cases
lambda = 0.0001;
padding = 2.5;
//output_sigma_factor = 0.1;
output_sigma_factor = 0.125;
if (hog) { // HOG
// VOT
interp_factor = 0.012;
sigma = 0.6;
// TPAMI
//interp_factor = 0.02;
//sigma = 0.5;
cell_size = 4;
_hogfeatures = true;
if (lab) {
interp_factor = 0.005;
sigma = 0.4;
//output_sigma_factor = 0.025;
output_sigma_factor = 0.1;
_labfeatures = true;
_labCentroids = cv::Mat(nClusters, 3, CV_32FC1, &data);
cell_sizeQ = cell_size * cell_size;
} else {
_labfeatures = false;
}
} else { // RAW
interp_factor = 0.075;
sigma = 0.2;
cell_size = 1;
_hogfeatures = false;
if (lab) {
printf("Lab features are only used with HOG features.\n");
_labfeatures = false;
}
}
if (multiscale) { // multiscale
template_size = 96;
//template_size = 100;
scale_step = 1.05;
scale_weight = 0.95;
if (!fixed_window) {
//printf("Multiscale does not support non-fixed window.\n");
fixed_window = true;
}
} else if (fixed_window) { // fit correction without multiscale
template_size = 96;
//template_size = 100;
scale_step = 1;
} else {
template_size = 1;
scale_step = 1;
}
}
// Initialize tracker
void KCFTracker::init(const cv::Rect &roi, cv::Mat image) {
_roi = roi;
assert(roi.width >= 0 && roi.height >= 0);
_tmpl = getFeatures(image, 1);
_prob = createGaussianPeak(size_patch[0], size_patch[1]);
_alphaf = cv::Mat(size_patch[0], size_patch[1], CV_32FC2, float(0));
//_num = cv::Mat(size_patch[0], size_patch[1], CV_32FC2, float(0));
//_den = cv::Mat(size_patch[0], size_patch[1], CV_32FC2, float(0));
train(_tmpl, 1.0); // train with initial frame
}
// Update position based on the new frame
cv::Rect KCFTracker::update(cv::Mat image) {
if (_roi.x + _roi.width <= 0) _roi.x = -_roi.width + 1;
if (_roi.y + _roi.height <= 0) _roi.y = -_roi.height + 1;
if (_roi.x >= image.cols - 1) _roi.x = image.cols - 2;
if (_roi.y >= image.rows - 1) _roi.y = image.rows - 2;
float cx = _roi.x + _roi.width / 2.0f;
float cy = _roi.y + _roi.height / 2.0f;
float peak_value;
cv::Point2f res = detect(_tmpl, getFeatures(image, 0, 1.0f), peak_value);
if (scale_step != 1) {
// Test at a smaller _scale
float new_peak_value;
cv::Point2f new_res = detect(_tmpl, getFeatures(image, 0, 1.0f / scale_step), new_peak_value);
if (scale_weight * new_peak_value > peak_value) {
res = new_res;
peak_value = new_peak_value;
_scale /= scale_step;
_roi.width /= scale_step;
_roi.height /= scale_step;
}
// Test at a bigger _scale
new_res = detect(_tmpl, getFeatures(image, 0, scale_step), new_peak_value);
if (scale_weight * new_peak_value > peak_value) {
res = new_res;
peak_value = new_peak_value;
_scale *= scale_step;
_roi.width *= scale_step;
_roi.height *= scale_step;
}
}
// Adjust by cell size and _scale
_roi.x = cx - _roi.width / 2.0f + ((float) res.x * cell_size * _scale);
_roi.y = cy - _roi.height / 2.0f + ((float) res.y * cell_size * _scale);
if (_roi.x >= image.cols - 1) _roi.x = image.cols - 1;
if (_roi.y >= image.rows - 1) _roi.y = image.rows - 1;
if (_roi.x + _roi.width <= 0) _roi.x = -_roi.width + 2;
if (_roi.y + _roi.height <= 0) _roi.y = -_roi.height + 2;
assert(_roi.width >= 0 && _roi.height >= 0);
cv::Mat x = getFeatures(image, 0);
train(x, interp_factor);
return _roi;
}
// Detect object in the current frame.
cv::Point2f KCFTracker::detect(cv::Mat z, cv::Mat x, float &peak_value) {
using namespace FFTTools;
cv::Mat k = gaussianCorrelation(x, z);
cv::Mat res = (real(fftd(complexMultiplication(_alphaf, fftd(k)), true)));
//minMaxLoc only accepts doubles for the peak, and integer points for the coordinates
cv::Point2i pi;
double pv;
cv::Point2i pi_min;
double pv_min;
cv::minMaxLoc(res, &pv_min, &pv, &pi_min, &pi);
peak_value = (float) pv;
// std::cout << "min response : " << pv_min << " max response :" << pv << std::endl;
//subpixel peak estimation, coordinates will be non-integer
cv::Point2f p((float) pi.x, (float) pi.y);
if (pi.x > 0 && pi.x < res.cols - 1) {
p.x += subPixelPeak(res.at<float>(pi.y, pi.x - 1), peak_value, res.at<float>(pi.y, pi.x + 1));
}
if (pi.y > 0 && pi.y < res.rows - 1) {
p.y += subPixelPeak(res.at<float>(pi.y - 1, pi.x), peak_value, res.at<float>(pi.y + 1, pi.x));
}
p.x -= (res.cols) / 2;
p.y -= (res.rows) / 2;
return p;
}
// train tracker with a single image
void KCFTracker::train(cv::Mat x, float train_interp_factor) {
using namespace FFTTools;
cv::Mat k = gaussianCorrelation(x, x);
cv::Mat alphaf = complexDivision(_prob, (fftd(k) + lambda));
_tmpl = (1 - train_interp_factor) * _tmpl + (train_interp_factor) * x;
_alphaf = (1 - train_interp_factor) * _alphaf + (train_interp_factor) * alphaf;
/*cv::Mat kf = fftd(gaussianCorrelation(x, x));
cv::Mat num = complexMultiplication(kf, _prob);
cv::Mat den = complexMultiplication(kf, kf + lambda);
_tmpl = (1 - train_interp_factor) * _tmpl + (train_interp_factor) * x;
_num = (1 - train_interp_factor) * _num + (train_interp_factor) * num;
_den = (1 - train_interp_factor) * _den + (train_interp_factor) * den;
_alphaf = complexDivision(_num, _den);*/
}
// Evaluates a Gaussian kernel with bandwidth SIGMA for all relative shifts between input images X and Y, which must both be MxN. They must also be periodic (ie., pre-processed with a cosine window).
cv::Mat KCFTracker::gaussianCorrelation(cv::Mat x1, cv::Mat x2) {
using namespace FFTTools;
cv::Mat c = cv::Mat(cv::Size(size_patch[1], size_patch[0]), CV_32F, cv::Scalar(0));
// HOG features
if (_hogfeatures) {
cv::Mat caux;
cv::Mat x1aux;
cv::Mat x2aux;
for (int i = 0; i < size_patch[2]; i++) {
x1aux = x1.row(i); // Procedure to deal with cv::Mat multichannel bug
x1aux = x1aux.reshape(1, size_patch[0]);
x2aux = x2.row(i).reshape(1, size_patch[0]);
cv::mulSpectrums(fftd(x1aux), fftd(x2aux), caux, 0, true);
caux = fftd(caux, true);
rearrange(caux);
caux.convertTo(caux, CV_32F);
c = c + real(caux);
}
}
// Gray features
else {
cv::mulSpectrums(fftd(x1), fftd(x2), c, 0, true);
c = fftd(c, true);
rearrange(c);
c = real(c);
}
cv::Mat d;
cv::max(((cv::sum(x1.mul(x1))[0] + cv::sum(x2.mul(x2))[0]) - 2. * c) /
(size_patch[0] * size_patch[1] * size_patch[2]), 0, d);
cv::Mat k;
cv::exp((-d / (sigma * sigma)), k);
return k;
}
// Create Gaussian Peak. Function called only in the first frame.
cv::Mat KCFTracker::createGaussianPeak(int sizey, int sizex) {
cv::Mat_<float> res(sizey, sizex);
int syh = (sizey) / 2;
int sxh = (sizex) / 2;
float output_sigma = std::sqrt((float) sizex * sizey) / padding * output_sigma_factor;
float mult = -0.5 / (output_sigma * output_sigma);
for (int i = 0; i < sizey; i++)
for (int j = 0; j < sizex; j++) {
int ih = i - syh;
int jh = j - sxh;
res(i, j) = std::exp(mult * (float) (ih * ih + jh * jh));
}
return FFTTools::fftd(res);
}
// Obtain sub-window from image, with replication-padding and extract features
cv::Mat KCFTracker::getFeatures(const cv::Mat &image, bool inithann, float scale_adjust) {
cv::Rect extracted_roi;
float cx = _roi.x + _roi.width / 2;
float cy = _roi.y + _roi.height / 2;
if (inithann) {
int padded_w = _roi.width * padding;
int padded_h = _roi.height * padding;
if (template_size > 1) { // Fit largest dimension to the given template size
if (padded_w >= padded_h) //fit to width
_scale = padded_w / (float) template_size;
else
_scale = padded_h / (float) template_size;
_tmpl_sz.width = padded_w / _scale;
_tmpl_sz.height = padded_h / _scale;
} else { //No template size given, use ROI size
_tmpl_sz.width = padded_w;
_tmpl_sz.height = padded_h;
_scale = 1;
// original code from paper:
/*if (sqrt(padded_w * padded_h) >= 100) { //Normal size
_tmpl_sz.width = padded_w;
_tmpl_sz.height = padded_h;
_scale = 1;
}
else { //ROI is too big, track at half size
_tmpl_sz.width = padded_w / 2;
_tmpl_sz.height = padded_h / 2;
_scale = 2;
}*/
}
if (_hogfeatures) {
// Round to cell size and also make it even
_tmpl_sz.width = (((int) (_tmpl_sz.width / (2 * cell_size))) * 2 * cell_size) + cell_size * 2;
_tmpl_sz.height = (((int) (_tmpl_sz.height / (2 * cell_size))) * 2 * cell_size) + cell_size * 2;
} else { //Make number of pixels even (helps with some logic involving half-dimensions)
_tmpl_sz.width = (_tmpl_sz.width / 2) * 2;
_tmpl_sz.height = (_tmpl_sz.height / 2) * 2;
}
}
extracted_roi.width = scale_adjust * _scale * _tmpl_sz.width;
extracted_roi.height = scale_adjust * _scale * _tmpl_sz.height;
// center roi with new size
extracted_roi.x = cx - extracted_roi.width / 2;
extracted_roi.y = cy - extracted_roi.height / 2;
cv::Mat FeaturesMap;
cv::Mat z = RectTools::subwindow(image, extracted_roi, cv::BORDER_REPLICATE);
if (z.cols != _tmpl_sz.width || z.rows != _tmpl_sz.height) {
cv::resize(z, z, _tmpl_sz);
}
// HOG features
if (_hogfeatures) {
IplImage z_ipl = IplImage(z);
// IplImage z_ipl = cvIplImage(z);
CvLSVMFeatureMapCaskade *map;
getFeatureMaps(&z_ipl, cell_size, &map);
normalizeAndTruncate(map, 0.2f);
PCAFeatureMaps(map);
size_patch[0] = map->sizeY;
size_patch[1] = map->sizeX;
size_patch[2] = map->numFeatures;
FeaturesMap = cv::Mat(cv::Size(map->numFeatures, map->sizeX * map->sizeY), CV_32F,
map->map); // Procedure to deal with cv::Mat multichannel bug
FeaturesMap = FeaturesMap.t();
freeFeatureMapObject(&map);
// Lab features
if (_labfeatures) {
cv::Mat imgLab;
cvtColor(z, imgLab, CV_BGR2Lab);
unsigned char *input = (unsigned char *) (imgLab.data);
// Sparse output vector
cv::Mat outputLab = cv::Mat(_labCentroids.rows, size_patch[0] * size_patch[1], CV_32F, float(0));
int cntCell = 0;
// Iterate through each cell
for (int cY = cell_size; cY < z.rows - cell_size; cY += cell_size) {
for (int cX = cell_size; cX < z.cols - cell_size; cX += cell_size) {
// Iterate through each pixel of cell (cX,cY)
for (int y = cY; y < cY + cell_size; ++y) {
for (int x = cX; x < cX + cell_size; ++x) {
// Lab components for each pixel
float l = (float) input[(z.cols * y + x) * 3];
float a = (float) input[(z.cols * y + x) * 3 + 1];
float b = (float) input[(z.cols * y + x) * 3 + 2];
// Iterate through each centroid
float minDist = FLT_MAX;
int minIdx = 0;
float *inputCentroid = (float *) (_labCentroids.data);
for (int k = 0; k < _labCentroids.rows; ++k) {
float dist = ((l - inputCentroid[3 * k]) * (l - inputCentroid[3 * k]))
+ ((a - inputCentroid[3 * k + 1]) * (a - inputCentroid[3 * k + 1]))
+ ((b - inputCentroid[3 * k + 2]) * (b - inputCentroid[3 * k + 2]));
if (dist < minDist) {
minDist = dist;
minIdx = k;
}
}
// Store result at output
outputLab.at<float>(minIdx, cntCell) += 1.0 / cell_sizeQ;
//((float*) outputLab.data)[minIdx * (size_patch[0]*size_patch[1]) + cntCell] += 1.0 / cell_sizeQ;
}
}
cntCell++;
}
}
// Update size_patch[2] and add features to FeaturesMap
size_patch[2] += _labCentroids.rows;
FeaturesMap.push_back(outputLab);
}
} else {
FeaturesMap = RectTools::getGrayImage(z);
FeaturesMap -= (float) 0.5; // In Paper;
size_patch[0] = z.rows;
size_patch[1] = z.cols;
size_patch[2] = 1;
}
if (inithann) {
createHanningMats();
}
FeaturesMap = hann.mul(FeaturesMap);
return FeaturesMap;
}
// Initialize Hanning window. Function called only in the first frame.
void KCFTracker::createHanningMats() {
cv::Mat hann1t = cv::Mat(cv::Size(size_patch[1], 1), CV_32F, cv::Scalar(0));
cv::Mat hann2t = cv::Mat(cv::Size(1, size_patch[0]), CV_32F, cv::Scalar(0));
for (int i = 0; i < hann1t.cols; i++)
hann1t.at<float>(0, i) = 0.5 * (1 - std::cos(2 * 3.14159265358979323846 * i / (hann1t.cols - 1)));
for (int i = 0; i < hann2t.rows; i++)
hann2t.at<float>(i, 0) = 0.5 * (1 - std::cos(2 * 3.14159265358979323846 * i / (hann2t.rows - 1)));
cv::Mat hann2d = hann2t * hann1t;
// HOG features
if (_hogfeatures) {
cv::Mat hann1d = hann2d.reshape(1, 1); // Procedure to deal with cv::Mat multichannel bug
hann = cv::Mat(cv::Size(size_patch[0] * size_patch[1], size_patch[2]), CV_32F, cv::Scalar(0));
for (int i = 0; i < size_patch[2]; i++) {
for (int j = 0; j < size_patch[0] * size_patch[1]; j++) {
hann.at<float>(i, j) = hann1d.at<float>(0, j);
}
}
}
// Gray features
else {
hann = hann2d;
}
}
// Calculate sub-pixel peak for one dimension
float KCFTracker::subPixelPeak(float left, float center, float right) {
float divisor = 2 * center - right - left;
if (divisor == 0)
return 0;
return 0.5 * (right - left) / divisor;
}
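Putting the pieces together, a minimal tracking loop along the lines of the header comment above (the video source and the initial ROI are placeholders):
```cpp
// Minimal KCFTracker loop sketch; camera index and initial ROI are placeholders.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "kcftracker.h"

int main()
{
    // hog=true, fixed_window=false, multiscale=true, lab=false
    // (matches the HOG/FIXEDWINDOW/MULTISCALE/LAB defaults declared in KCF_Tracker.h)
    KCFTracker tracker(true, false, true, false);

    cv::VideoCapture cap(0);          // placeholder video source
    cv::Mat frame;
    if (!cap.read(frame)) return 1;

    cv::Rect roi(100, 100, 80, 80);   // placeholder initial target box
    tracker.init(roi, frame);

    while (cap.read(frame)) {
        cv::Rect result = tracker.update(frame);  // target position in the current frame
        cv::rectangle(frame, result, cv::Scalar(0, 255, 0), 2);
        cv::imshow("KCF", frame);
        if (cv::waitKey(1) == 27) break;          // ESC to quit
    }
    return 0;
}
```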

@ -0,0 +1,150 @@
/*
Tracker based on Kernelized Correlation Filter (KCF) [1] and Circulant Structure with Kernels (CSK) [2].
CSK is implemented by using raw gray level features, since it is a single-channel filter.
KCF is implemented by using HOG features (the default), since it extends CSK to multiple channels.
[1] J. F. Henriques, R. Caseiro, P. Martins, J. Batista,
"High-Speed Tracking with Kernelized Correlation Filters", TPAMI 2015.
[2] J. F. Henriques, R. Caseiro, P. Martins, J. Batista,
"Exploiting the Circulant Structure of Tracking-by-detection with Kernels", ECCV 2012.
Authors: Joao Faro, Christian Bailer, Joao F. Henriques
Contacts: joaopfaro@gmail.com, Christian.Bailer@dfki.de, henriques@isr.uc.pt
Institute of Systems and Robotics - University of Coimbra / Department Augmented Vision DFKI
Constructor parameters, all boolean:
hog: use HOG features (default), otherwise use raw pixels
fixed_window: fix window size (default), otherwise use ROI size (slower but more accurate)
multiscale: use multi-scale tracking (default; cannot be used with fixed_window = true)
Default values are set for all properties of the tracker depending on the above choices.
Their values can be customized further before calling init():
interp_factor: linear interpolation factor for adaptation
sigma: gaussian kernel bandwidth
lambda: regularization
cell_size: HOG cell size
padding: horizontal area surrounding the target, relative to its size
output_sigma_factor: bandwidth of gaussian target
template_size: template size in pixels, 0 to use ROI size
scale_step: scale step for multi-scale estimation, 1 to disable it
scale_weight: to downweight detection scores of other scales for added stability
For speed, the value (template_size/cell_size) should be a power of 2 or a product of small prime numbers.
Inputs to init():
image is the initial frame.
roi is a cv::Rect with the target positions in the initial frame
Inputs to update():
image is the current frame.
Outputs of update():
cv::Rect with target positions for the current frame
By downloading, copying, installing or using the software you agree to this license.
If you do not agree to this license, do not download, install,
copy or use the software.
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are disclaimed.
In no event shall copyright holders or contributors be liable for any direct,
indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#pragma once
#include "tracker.h"
#ifndef _OPENCV_KCFTRACKER_HPP_
#define _OPENCV_KCFTRACKER_HPP_
#endif
class KCFTracker : public Tracker{
public:
// Constructor
KCFTracker(bool hog = true, bool fixed_window = true, bool multiscale = true, bool lab = true);
// Initialize tracker
virtual void init(const cv::Rect &roi, cv::Mat image);
// Update position based on the new frame
virtual cv::Rect update(cv::Mat image);
float interp_factor; // linear interpolation factor for adaptation
float sigma; // gaussian kernel bandwidth
float lambda; // regularization
int cell_size; // HOG cell size
int cell_sizeQ; // cell size^2, to avoid repeated operations
float padding; // extra area surrounding the target
float output_sigma_factor; // bandwidth of gaussian target
int template_size; // template size
float scale_step; // scale step for multi-scale estimation
float scale_weight; // to downweight detection scores of other scales for added stability
protected:
// Detect object in the current frame.
cv::Point2f detect(cv::Mat z, cv::Mat x, float &peak_value);
// train tracker with a single image
void train(cv::Mat x, float train_interp_factor);
// Evaluates a Gaussian kernel with bandwidth SIGMA for all relative shifts between input images X and Y, which must both be MxN. They must also be periodic (ie., pre-processed with a cosine window).
cv::Mat gaussianCorrelation(cv::Mat x1, cv::Mat x2);
// Create Gaussian Peak. Function called only in the first frame.
cv::Mat createGaussianPeak(int sizey, int sizex);
// Obtain sub-window from image, with replication-padding and extract features
cv::Mat getFeatures(const cv::Mat & image, bool inithann, float scale_adjust = 1.0f);
// Initialize Hanning window. Function called only in the first frame.
void createHanningMats();
// Calculate sub-pixel peak for one dimension
float subPixelPeak(float left, float center, float right);
cv::Mat _alphaf;
cv::Mat _prob;
cv::Mat _tmpl;
cv::Mat _num;
cv::Mat _den;
cv::Mat _labCentroids;
private:
int size_patch[3];
cv::Mat hann;
cv::Size _tmpl_sz;
float _scale;
int _gaussian_size;
bool _hogfeatures;
bool _labfeatures;
};
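/* Note (added for clarity): gaussianCorrelation() above is the kernelised correlation step
   of KCF. For two cosine-windowed, possibly multi-channel patches x and x', the Gaussian
   kernel over all cyclic shifts can be evaluated in one pass through the Fourier domain:

       $$ k^{xx'} = \exp\!\Big(-\tfrac{1}{\sigma^{2}}\big(\lVert x\rVert^{2} + \lVert x'\rVert^{2}
                    - 2\,\mathcal{F}^{-1}\big(\textstyle\sum_{c}\hat{x}_{c}\odot\hat{x}^{\prime *}_{c}\big)\big)\Big), $$

   where \hat{x} = F(x), * denotes complex conjugation and \odot the element-wise product
   (Henriques et al., TPAMI 2015, reference [1] in the header comment). Implementations
   usually also normalise the exponent by the patch size; constant factors are omitted here. */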

@ -0,0 +1,17 @@
const int nClusters = 15;
float data[nClusters][3] = {
{161.317504, 127.223401, 128.609333},
{142.922425, 128.666965, 127.532319},
{67.879757, 127.721830, 135.903311},
{92.705062, 129.965717, 137.399500},
{120.172257, 128.279647, 127.036493},
{195.470568, 127.857070, 129.345415},
{41.257102, 130.059468, 132.675336},
{12.014861, 129.480555, 127.064714},
{226.567086, 127.567831, 136.345727},
{154.664210, 131.676606, 156.481669},
{121.180447, 137.020793, 153.433743},
{87.042204, 137.211742, 98.614874},
{113.809537, 106.577104, 157.818094},
{81.083293, 170.051905, 148.904079},
{45.015485, 138.543124, 102.402528}};

@ -0,0 +1,141 @@
/*
Author: Christian Bailer
Contact address: Christian.Bailer@dfki.de
Department Augmented Vision DFKI
License Agreement
For Open Source Computer Vision Library
(3-clause BSD License)
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
This software is provided by the copyright holders and contributors "as is" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are disclaimed.
In no event shall copyright holders or contributors be liable for any direct,
indirect, incidental, special, exemplary, or consequential damages
(including, but not limited to, procurement of substitute goods or services;
loss of use, data, or profits; or business interruption) however caused
and on any theory of liability, whether in contract, strict liability,
or tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of such damage.
*/
#pragma once
#include <opencv2/highgui/highgui.hpp>
#include <math.h>
#ifndef _OPENCV_RECTTOOLS_HPP_
#define _OPENCV_RECTTOOLS_HPP_
#endif
namespace RectTools
{
template <typename t>
inline cv::Vec<t, 2 > center(const cv::Rect_<t> &rect)
{
return cv::Vec<t, 2 > (rect.x + rect.width / (t) 2, rect.y + rect.height / (t) 2);
}
template <typename t>
inline t x2(const cv::Rect_<t> &rect)
{
return rect.x + rect.width;
}
template <typename t>
inline t y2(const cv::Rect_<t> &rect)
{
return rect.y + rect.height;
}
template <typename t>
inline void resize(cv::Rect_<t> &rect, float scalex, float scaley = 0)
{
if (!scaley)scaley = scalex;
rect.x -= rect.width * (scalex - 1.f) / 2.f;
rect.width *= scalex;
rect.y -= rect.height * (scaley - 1.f) / 2.f;
rect.height *= scaley;
}
template <typename t>
inline void limit(cv::Rect_<t> &rect, cv::Rect_<t> limit)
{
if (rect.x + rect.width > limit.x + limit.width)rect.width = (limit.x + limit.width - rect.x);
if (rect.y + rect.height > limit.y + limit.height)rect.height = (limit.y + limit.height - rect.y);
if (rect.x < limit.x)
{
rect.width -= (limit.x - rect.x);
rect.x = limit.x;
}
if (rect.y < limit.y)
{
rect.height -= (limit.y - rect.y);
rect.y = limit.y;
}
if(rect.width<0)rect.width=0;
if(rect.height<0)rect.height=0;
}
template <typename t>
inline void limit(cv::Rect_<t> &rect, t width, t height, t x = 0, t y = 0)
{
limit(rect, cv::Rect_<t > (x, y, width, height));
}
template <typename t>
inline cv::Rect getBorder(const cv::Rect_<t > &original, cv::Rect_<t > & limited)
{
cv::Rect_<t > res;
res.x = limited.x - original.x;
res.y = limited.y - original.y;
res.width = x2(original) - x2(limited);
res.height = y2(original) - y2(limited);
assert(res.x >= 0 && res.y >= 0 && res.width >= 0 && res.height >= 0);
return res;
}
inline cv::Mat subwindow(const cv::Mat &in, const cv::Rect & window, int borderType = cv::BORDER_CONSTANT)
{
cv::Rect cutWindow = window;
RectTools::limit(cutWindow, in.cols, in.rows);
if (cutWindow.height <= 0 || cutWindow.width <= 0)assert(0); //return cv::Mat(window.height,window.width,in.type(),0) ;
cv::Rect border = RectTools::getBorder(window, cutWindow);
cv::Mat res = in(cutWindow);
if (border != cv::Rect(0, 0, 0, 0))
{
cv::copyMakeBorder(res, res, border.y, border.height, border.x, border.width, borderType);
}
return res;
}
inline cv::Mat getGrayImage(cv::Mat img)
{
    cv::cvtColor(img, img, cv::COLOR_BGR2GRAY);
img.convertTo(img, CV_32F, 1 / 255.f);
return img;
}
}

@ -0,0 +1,39 @@
/*
* File: BasicTracker.h
* Author: Joao F. Henriques, Joao Faro, Christian Bailer
* Contact address: henriques@isr.uc.pt, joaopfaro@gmail.com, Christian.Bailer@dfki.de
 * Institute of Systems and Robotics - University of Coimbra / Department Augmented Vision DFKI
*
 * This source code is provided for research purposes only. For a commercial license or a different use case please contact us.
* You are not allowed to publish the unmodified sourcecode on your own e.g. on your webpage. Please refer to the official download page instead.
* If you want to publish a modified/extended version e.g. because you wrote a publication with a modified version of the sourcecode you need our
* permission (Please contact us for the permission).
*
* We reserve the right to change the license of this sourcecode anytime to BSD, GPL or LGPL.
* By using the sourcecode you agree to possible restrictions and requirements of these three license models so that the license can be changed
 * anytime without your knowledge.
*/
#pragma once
#include <opencv2/opencv.hpp>
#include <string>
class Tracker
{
public:
Tracker() {}
virtual ~Tracker() { }
virtual void init(const cv::Rect &roi, cv::Mat image) = 0;
virtual cv::Rect update( cv::Mat image)=0;
protected:
cv::Rect_<float> _roi;
};

@ -0,0 +1,21 @@
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node
def generate_launch_description():
KCFTracker_node = Node(
package='yahboomcar_KCFTracker',
executable='KCF_Tracker_Node',
)
uabcam_node = IncludeLaunchDescription(PythonLaunchDescriptionSource([os.path.join(get_package_share_directory('usb_cam'), 'launch'),
'/demo_launch.py'])
)
launch_description = LaunchDescription([uabcam_node,KCFTracker_node])
return launch_description

@ -0,0 +1,23 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>yahboomcar_KCFTracker</name>
<version>0.0.0</version>
<description>TODO: Package description</description>
<maintainer email="yahboom@todo.todo">yahboom</maintainer>
<license>TODO: License declaration</license>
<buildtool_depend>ament_cmake</buildtool_depend>
<test_depend>ament_lint_auto</test_depend>
<test_depend>ament_lint_common</test_depend>
<depend>geometry_msgs</depend>
<depend>sensor_msgs</depend>
<depend>std_msgs</depend>
<export>
<build_type>ament_cmake</build_type>
</export>
</package>

@ -0,0 +1,190 @@
#include <iostream>
#include "KCF_Tracker.h"
#include <rclcpp/rclcpp.hpp>
#include <cv_bridge/cv_bridge.h>
#include "kcftracker.h"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
Rect selectRect;
Point origin;
Rect result;
bool select_flag = false;
bool bRenewROI = false;
bool bBeginKCF = false;
Mat rgbimage;
Mat depthimage;
const int ACTION_ESC = 27;
const int ACTION_SPACE = 32;
void onMouse(int event, int x, int y, int, void *) {
if (select_flag) {
selectRect.x = MIN(origin.x, x);
selectRect.y = MIN(origin.y, y);
selectRect.width = abs(x - origin.x);
selectRect.height = abs(y - origin.y);
selectRect &= Rect(0, 0, rgbimage.cols, rgbimage.rows);
}
    if (event == cv::EVENT_LBUTTONDOWN) {
        bBeginKCF = false;
        select_flag = true;
        origin = Point(x, y);
        selectRect = Rect(x, y, 0, 0);
    } else if (event == cv::EVENT_LBUTTONUP) {
        select_flag = false;
        bRenewROI = true;
    }
select_flag = false;
bRenewROI = true;
}
}
void ImageConverter::Reset() {
bRenewROI = false;
bBeginKCF = false;
selectRect.x = 0;
selectRect.y = 0;
selectRect.width = 0;
selectRect.height = 0;
linear_speed = 0;
rotation_speed = 0;
enable_get_depth = false;
this->linear_PID->reset();
this->angular_PID->reset();
vel_pub_->publish(geometry_msgs::msg::Twist());
}
void ImageConverter::Cancel() {
    this->Reset();
    destroyWindow(RGB_WINDOW);
    // destroyWindow(DEPTH_WINDOW);
    delete RGB_WINDOW;
    delete DEPTH_WINDOW;
    //delete this->linear_PID;
    //delete this->angular_PID;
}
void ImageConverter::PIDcallback() {
this->minDist=1.0;
this->linear_PID->Set_PID(3.0, 0.0, 1.0);
this->angular_PID->Set_PID(0.5, 0.0, 2.0);
this->linear_PID->reset();
this->angular_PID->reset();
}
void ImageConverter::imageCb(const std::shared_ptr<sensor_msgs::msg::Image> msg) {
cv_bridge::CvImagePtr cv_ptr;
try {
cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
}
catch (cv_bridge::Exception &e) {
std::cout<<"cv_bridge exception"<<std::endl;
return;
}
cv_ptr->image.copyTo(rgbimage);
setMouseCallback(RGB_WINDOW, onMouse, 0);
if (bRenewROI) {
if (selectRect.width <= 0 || selectRect.height <= 0)
{
bRenewROI = false;
return;
}
tracker.init(selectRect, rgbimage);
bBeginKCF = true;
bRenewROI = false;
enable_get_depth = false;
}
if (bBeginKCF) {
result = tracker.update(rgbimage);
rectangle(rgbimage, result, Scalar(0, 255, 255), 1, 8);
circle(rgbimage, Point(result.x + result.width / 2, result.y + result.height / 2), 3, Scalar(0, 0, 255),-1);
} else rectangle(rgbimage, selectRect, Scalar(255, 0, 0), 2, 8, 0);
//sensor_msgs::ImagePtr kcf_imagemsg = cv_bridge::CvImage(std_msgs::Header(), "bgr8", rgbimage).toImageMsg();
//mage_pub_ -> publish(kcf_imagemsg.get());
sensor_msgs::msg::Image kcf_imagemsg;
std_msgs::msg::Header _header;
cv_bridge::CvImage _cv_bridge;
_header.stamp = this->get_clock() -> now();
_cv_bridge = cv_bridge::CvImage(_header, sensor_msgs::image_encodings::BGR8, rgbimage);
_cv_bridge.toImageMsg(kcf_imagemsg);
image_pub_-> publish(kcf_imagemsg);
imshow(RGB_WINDOW, rgbimage);
int action = waitKey(1) & 0xFF;
if (action == 'q' || action == ACTION_ESC) this->Cancel();
else if (action == 'r') this->Reset();
else if (action == ACTION_SPACE) enable_get_depth = true;
}
void ImageConverter::depthCb(const std::shared_ptr<sensor_msgs::msg::Image> msg) {
this->get_parameter<float>("minDist_",this->minDist);
cv_bridge::CvImagePtr cv_ptr;
try {
cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::TYPE_32FC1);
cv_ptr->image.copyTo(depthimage);
}
catch (cv_bridge::Exception &e) {
std::cout<<"Could not convert from to 'TYPE_32FC1'."<<std::endl;
}
if (enable_get_depth) {
int center_x = (int)(result.x + result.width / 2);
std::cout<<"center_x: "<<center_x<<std::endl;
int center_y = (int)(result.y + result.height / 2);
std::cout<<"center_y: "<<center_y<<std::endl;
dist_val[0] = depthimage.at<float>(center_y - 5, center_x - 5)/1000.0;
dist_val[1] = depthimage.at<float>(center_y - 5, center_x + 5)/1000.0;
dist_val[2] = depthimage.at<float>(center_y + 5, center_x + 5)/1000.0;
dist_val[3] = depthimage.at<float>(center_y + 5, center_x - 5)/1000.0;
        dist_val[4] = depthimage.at<float>(center_y, center_x)/1000.0;
std::cout<<"dist_val[0]: "<<dist_val[0]<<std::endl;
std::cout<<"dist_val[1]: "<<dist_val[1]<<std::endl;
std::cout<<"dist_val[2]: "<<dist_val[2]<<std::endl;
std::cout<<"dist_val[3]: "<<dist_val[3]<<std::endl;
std::cout<<"dist_val[4]: "<<dist_val[4]<<std::endl;
float distance = 0;
int num_depth_points = 5;
for (int i = 0; i < 5; i++) {
if (dist_val[i] > 0.4 && dist_val[i] < 10.0) distance += dist_val[i];
else num_depth_points--;
}
        if (num_depth_points != 0) {
            distance /= num_depth_points; // average only the valid depth samples (metres)
            std::cout<<"distance: "<<distance<<std::endl;
            std::cout<<"minDist: "<<minDist<<std::endl;
            if (std::fabs(distance - this->minDist) < 0.1) linear_speed = 0;
            else linear_speed = -linear_PID->compute(this->minDist, distance);
        }
        rotation_speed = angular_PID->compute(320 / 100.0, center_x / 100.0);
        if (std::fabs(rotation_speed) < 0.1) rotation_speed = 0;
geometry_msgs::msg::Twist twist;
twist.linear.x = linear_speed;
twist.angular.z = rotation_speed;
vel_pub_->publish(twist);
}
else{
geometry_msgs::msg::Twist twist;
vel_pub_->publish(twist);
}
// imshow(DEPTH_WINDOW, depthimage);
waitKey(1);
}
void ImageConverter::JoyCb(const std::shared_ptr<std_msgs::msg::Bool> msg) {
enable_get_depth = msg->data;
}

@ -0,0 +1,11 @@
#include <iostream>
#include "KCF_Tracker.h"
#include <rclcpp/rclcpp.hpp>
int main(int argc,char **argv)
{
rclcpp::init(argc, argv);
std::cout<<"start"<<std::endl;
    rclcpp::spin(std::make_shared<ImageConverter>());
    rclcpp::shutdown();
    return 0;
}

@ -0,0 +1,31 @@
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node
def generate_launch_description():
'''astra_node = IncludeLaunchDescription(
PythonLaunchDescriptionSource([os.path.join(
get_package_share_directory('astra_camera'),'launch'),'/astra.launch.xml'])
)'''
driver_node = IncludeLaunchDescription(
PythonLaunchDescriptionSource([os.path.join(
get_package_share_directory('yahboomcar_bringup'),'launch'),'/yahboomcar_bringup_X3_launch.py']
))
    colorIdentify_node = Node(
        package='yahboomcar_astra',
        executable='colorHSV',
        name='coloridentify'
    )
    '''colorTracker_node = Node(
        package='yahboomcar_astra',
        executable='colorTracker',
        name='colortracker'
    )'''
return LaunchDescription([#astra_node,
driver_node,
colorIdentify_node,
#colorTracker_node
])

@ -0,0 +1,18 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>yahboomcar_astra</name>
<version>0.0.0</version>
<description>TODO: Package description</description>
<maintainer email="13377528435@sina.cn">nx-ros2</maintainer>
<license>TODO: License declaration</license>
<test_depend>ament_copyright</test_depend>
<test_depend>ament_flake8</test_depend>
<test_depend>ament_pep257</test_depend>
<test_depend>python3-pytest</test_depend>
<export>
<build_type>ament_python</build_type>
</export>
</package>

@ -0,0 +1,11 @@
import cv2 as cv
if __name__ == '__main__':
img = cv.imread('yahboom.jpg')
while True :
cv.imshow("frame",img)
action = cv.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv.destroyAllWindows()

@ -0,0 +1,14 @@
import cv2 as cv
if __name__ == '__main__':
img = cv.imread('yahboom.jpg')
cv.imwrite("yahboom_new.jpg",img) #新建文件yahboom_new.jpg并且把yahboom.jpg写进去
new_img = cv.imread('yahboom_new.jpg') #读取新写入的图片
while True :
cv.imshow("frame",img)
cv.imshow("new_frame",new_img)
action = cv.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv.destroyAllWindows()

@ -0,0 +1,11 @@
import cv2 as cv
if __name__ == '__main__':
frame = cv.VideoCapture(0)
while frame.isOpened():
ret,img = frame.read()
cv.imshow('frame',img)
action = cv.waitKey(10) & 0xFF
        if action == ord('q'):
break
frame.release()
cv.destroyAllWindows()

@ -0,0 +1,17 @@
import cv2
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
(b,g,r) = img[100,100]
print(b,g,r)
    i = j = 0
    for j in range(1,255):
        img[i,j] = (0,0,0)
while True :
cv2.imshow("frame",img)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,14 @@
import cv2
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
print(img.shape)
x, y = img.shape[0:2]
img_test1 = cv2.resize(img, (int(y / 2), int(x / 2)))
while True :
cv2.imshow("frame",img)
cv2.imshow('resize0', img_test1)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,12 @@
import cv2
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
dst = img[0:100,100:200]
while True :
cv2.imshow("frame",img)
cv2.imshow('dst', dst)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,17 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
imgInfo = img.shape
height = imgInfo[0]
width = imgInfo[1]
matShift = np.float32([[1,0,10],[0,1,10]])# 2*3
dst = cv2.warpAffine(img, matShift, (width,height))
while True :
cv2.imshow("frame",img)
cv2.imshow('dst', dst)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,22 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
imgInfo = img.shape
height = imgInfo[0]
width = imgInfo[1]
deep = imgInfo[2]
newImgInfo = (height*2,width,deep)
dst = np.zeros(newImgInfo,np.uint8)#uint8
for i in range(0,height):
for j in range(0,width):
dst[i,j] = img[i,j]
dst[height*2-i-1,j] = img[i,j]
while True :
cv2.imshow("frame",img)
cv2.imshow('dst', dst)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,13 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
while True :
cv2.imshow("frame",img)
cv2.imshow('gray', gray)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,15 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret,thresh1=cv2.threshold(gray,180,255,cv2.THRESH_BINARY_INV)
while True :
cv2.imshow("frame",img)
cv2.imshow('gray', gray)
cv2.imshow("binary",thresh1)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,16 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
imgG = cv2.GaussianBlur(gray,(3,3),0)
dst = cv2.Canny(imgG,50,50)
while True :
cv2.imshow("frame",img)
cv2.imshow('gray', gray)
cv2.imshow("canny",dst)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,12 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
line = cv2.line(img, (50,20), (20,100), (255,0,255), 10)
while True :
cv2.imshow("line",line)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,12 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
rect = cv2.rectangle(img, (50,20), (100,100), (255,0,255), 10)
while True :
cv2.imshow("line",rect)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,12 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
circle = cv2.circle(img, (80,80), 50, (255,0,255), 10)
while True :
cv2.imshow("circle",circle)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,12 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
ellipse = cv2.ellipse(img, (80,80), (20,50),0,0, 360,(255,0,255), 5)
while True :
cv2.imshow("ellipse",ellipse)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,13 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
points = np.array([[120,50], [40,140], [120,70], [110,110], [50,50]], np.int32)
polylines = cv2.polylines(img, [points],True,(255,0,255), 5)
while True :
cv2.imshow("polylines",polylines)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,12 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
cv2.putText(img,'This is Yahboom!',(50,50),cv2.FONT_HERSHEY_SIMPLEX,1,(0,200,0),2)
while True :
cv2.imshow("img",img)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,21 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
for i in range(50,100):
img[i,50] = (0,0,0)
img[i,50+1] = (0,0,0)
img[i,50-1] = (0,0,0)
for i in range(100,150):
img[150,i] = (0,0,0)
        img[150+1,i] = (0,0,0)
img[150-1,i] = (0,0,0)
cv2.imwrite("damaged.jpg",img)
dam_img = cv2.imread('damaged.jpg')
while True :
cv2.imshow("dam_img",dam_img)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,26 @@
import cv2
import numpy as np
if __name__ == '__main__':
dam_img = cv2.imread('damaged.jpg')
imgInfo = dam_img.shape
height = imgInfo[0]
width = imgInfo[1]
paint = np.zeros((height,width,1),np.uint8)
for i in range(50,100):
paint[i,50] = 255
paint[i,50+1] = 255
paint[i,50-1] = 255
for i in range(100,150):
paint[150,i] = 255
paint[150+1,i] = 255
paint[150-1,i] = 255
dst_img = cv2.inpaint(dam_img,paint,3,cv2.INPAINT_TELEA)
while True :
cv2.imshow("dam_img",dam_img)
cv2.imshow("paint",paint)
cv2.imshow("dst",dst_img)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,28 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
imgInfo = img.shape
height = imgInfo[0]
width = imgInfo[1]
dst = np.zeros((height,width,3),np.uint8)
for i in range(0,height):
for j in range(0,width):
(b,g,r) = img[i,j]
bb = int(b) + 100
gg = int(g) + 100
rr = int(r) + 100
if bb > 255:
bb = 255
if gg > 255:
gg = 255
if rr > 255:
rr = 255
dst[i,j] = (bb,gg,rr)
while True :
cv2.imshow("dst",dst)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

@ -0,0 +1,26 @@
import cv2
import numpy as np
if __name__ == '__main__':
img = cv2.imread('yahboom.jpg')
imgInfo = img.shape
height = imgInfo[0]
width = imgInfo[1]
dst = np.zeros((height,width,3),np.uint8)
for i in range(0,height):
for j in range(0,width):
(b,g,r) = img[i,j]
bb = int(b*1.4) + 5
gg = int(g*1.4) + 5
if bb > 255:
bb = 255
if gg > 255:
gg = 255
dst[i,j] = (bb,gg,r)
while True :
cv2.imshow("origin",img)
cv2.imshow("dst",dst)
action = cv2.waitKey(10) & 0xFF
        if action == ord('q'):
            break
    cv2.destroyAllWindows()

Binary file not shown.


Binary file not shown.


Binary file not shown.


@ -0,0 +1,4 @@
[develop]
script-dir=$base/lib/yahboomcar_astra
[install]
install-scripts=$base/lib/yahboomcar_astra

@ -0,0 +1,29 @@
from setuptools import setup
import os
from glob import glob
package_name = 'yahboomcar_astra'
setup(
name=package_name,
version='0.0.0',
packages=[package_name],
data_files=[
('share/ament_index/resource_index/packages',
['resource/' + package_name]),
('share/' + package_name, ['package.xml']),
(os.path.join('share',package_name,'launch'),glob(os.path.join('launch','*launch.py'))),
],
install_requires=['setuptools'],
zip_safe=True,
maintainer='nx-ros2',
maintainer_email='13377528435@sina.cn',
description='TODO: Package description',
license='TODO: License declaration',
tests_require=['pytest'],
entry_points={
'console_scripts': [
'colorHSV = yahboomcar_astra.colorHSV:main',
'colorTracker = yahboomcar_astra.colorTracker:main'
],
},
)

@ -0,0 +1,23 @@
# Copyright 2015 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_copyright.main import main
import pytest
@pytest.mark.copyright
@pytest.mark.linter
def test_copyright():
rc = main(argv=['.', 'test'])
assert rc == 0, 'Found errors'

@ -0,0 +1,25 @@
# Copyright 2017 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_flake8.main import main_with_errors
import pytest
@pytest.mark.flake8
@pytest.mark.linter
def test_flake8():
rc, errors = main_with_errors(argv=[])
assert rc == 0, \
'Found %d code style errors / warnings:\n' % len(errors) + \
'\n'.join(errors)

@ -0,0 +1,23 @@
# Copyright 2015 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_pep257.main import main
import pytest
@pytest.mark.linter
@pytest.mark.pep257
def test_pep257():
rc = main(argv=['.', 'test'])
assert rc == 0, 'Found code style errors / warnings'

@ -0,0 +1,230 @@
#!/usr/bin/env python
# encoding: utf-8
import time
import cv2 as cv
import numpy as np
def write_HSV(wf_path, value):
with open(wf_path, "w") as wf:
wf_str = str(value[0][0]) + ', ' + str(
value[0][1]) + ', ' + str(value[0][2]) + ', ' + str(
value[1][0]) + ', ' + str(value[1][1]) + ', ' + str(
value[1][2])
wf.write(wf_str)
wf.flush()
def read_HSV(rf_path):
    with open(rf_path, "r") as rf:
        line = rf.readline()
    if len(line) == 0: return ()
    values = line.split(',')
    if len(values) != 6: return ()
    hsv = ((int(values[0]), int(values[1]), int(values[2])),
           (int(values[3]), int(values[4]), int(values[5])))
    return hsv
# 定义函数,第一个参数是缩放比例,第二个参数是需要显示的图片组成的元组或者列表
# Define the function, the first parameter is the zoom ratio, and the second parameter is a tuple or list of pictures to be displayed
def ManyImgs(scale, imgarray):
rows = len(imgarray) # 元组或者列表的长度 Length of tuple or list
cols = len(imgarray[0]) # 如果imgarray是列表返回列表里第一幅图像的通道数如果是元组返回元组里包含的第一个列表的长度
# If imgarray is a list, return the number of channels of the first image in the list, if it is a tuple, return the length of the first list contained in the tuple
# print("rows=", rows, "cols=", cols)
# 判断imgarray[0]的类型是否是list,
# 是list表明imgarray是一个元组需要垂直显示
# Determine whether the type of imgarray[0] is list,
# It is a list, indicating that imgarray is a tuple and needs to be displayed vertically
rowsAvailable = isinstance(imgarray[0], list)
# 第一张图片的宽高
# The width and height of the first picture
width = imgarray[0][0].shape[1]
height = imgarray[0][0].shape[0]
# print("width=", width, "height=", height)
# 如果传入的是一个元组
# If the incoming is a tuple
if rowsAvailable:
for x in range(0, rows):
for y in range(0, cols):
# 遍历元组,如果是第一幅图像,不做变换
# Traverse the tuple, if it is the first image, do not transform
if imgarray[x][y].shape[:2] == imgarray[0][0].shape[:2]:
imgarray[x][y] = cv.resize(imgarray[x][y], (0, 0), None, scale, scale)
# 将其他矩阵变换为与第一幅图像相同大小缩放比例为scale
# Transform other matrices to the same size as the first image, and the zoom ratio is scale
else:
imgarray[x][y] = cv.resize(imgarray[x][y], (imgarray[0][0].shape[1], imgarray[0][0].shape[0]), None,
scale, scale)
# 如果图像是灰度图,将其转换成彩色显示
# If the image is grayscale, convert it to color display
if len(imgarray[x][y].shape) == 2:
imgarray[x][y] = cv.cvtColor(imgarray[x][y], cv.COLOR_GRAY2BGR)
# 创建一个空白画布,与第一张图片大小相同
# Create a blank canvas, the same size as the first picture
imgBlank = np.zeros((height, width, 3), np.uint8)
hor = [imgBlank] * rows # 与第一张图片大小相同,与元组包含列表数相同的水平空白图像
# The same size as the first picture, and the same number of horizontal blank images as the tuple contains the list
for x in range(0, rows):
# 将元组里第x个列表水平排列
# Arrange the x-th list in the tuple horizontally
hor[x] = np.hstack(imgarray[x])
ver = np.vstack(hor) # 将不同列表垂直拼接 Concatenate different lists vertically
# 如果传入的是一个列表 If the incoming is a list
else:
# 变换操作,与前面相同
# Transformation operation, same as before
for x in range(0, rows):
if imgarray[x].shape[:2] == imgarray[0].shape[:2]:
imgarray[x] = cv.resize(imgarray[x], (0, 0), None, scale, scale)
else:
imgarray[x] = cv.resize(imgarray[x], (imgarray[0].shape[1], imgarray[0].shape[0]), None, scale, scale)
if len(imgarray[x].shape) == 2:
imgarray[x] = cv.cvtColor(imgarray[x], cv.COLOR_GRAY2BGR)
# 将列表水平排列
# Arrange the list horizontally
hor = np.hstack(imgarray)
ver = hor
return ver
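# A minimal usage sketch for ManyImgs(), added for illustration only: _manyimgs_demo and the
# 'yahboom.jpg' path are not part of the original module. A plain list is tiled horizontally,
# while a tuple of lists is laid out row by row; grayscale inputs are converted back to BGR
# inside ManyImgs before stacking.
def _manyimgs_demo(image_path='yahboom.jpg'):
    frame = cv.imread(image_path)
    if frame is None:
        raise FileNotFoundError(image_path)
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    row = ManyImgs(0.5, [frame, gray])                    # one list  -> a single horizontal strip at half scale
    grid = ManyImgs(0.5, ([frame, gray], [gray, frame]))  # tuple of lists -> a 2x2 grid
    cv.imshow('ManyImgs row', row)
    cv.imshow('ManyImgs grid', grid)
    cv.waitKey(0)
    cv.destroyAllWindows()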
class color_follow:
def __init__(self):
'''
初始化一些参数
Initialize some parameters
'''
self.Center_x = 0
self.Center_y = 0
self.Center_r = 0
def object_follow(self, img, hsv_msg):
src = img.copy()
# 由颜色范围创建NumPy数组
# Create NumPy array from color range
src = cv.cvtColor(src, cv.COLOR_BGR2HSV)
lower = np.array(hsv_msg[0], dtype="uint8")
upper = np.array(hsv_msg[1], dtype="uint8")
# 根据特定颜色范围创建mask
# Create a mask based on a specific color range
mask = cv.inRange(src, lower, upper)
color_mask = cv.bitwise_and(src, src, mask=mask)
# 将图像转为灰度图
# Convert the image to grayscale
gray_img = cv.cvtColor(color_mask, cv.COLOR_RGB2GRAY)
# 获取不同形状的结构元素
# Get structure elements of different shapes
kernel = cv.getStructuringElement(cv.MORPH_RECT, (5, 5))
# 形态学闭操作
# Morphological closed operation
gray_img = cv.morphologyEx(gray_img, cv.MORPH_CLOSE, kernel)
# 图像二值化操作
# Image binarization operation
ret, binary = cv.threshold(gray_img, 10, 255, cv.THRESH_BINARY)
# 获取轮廓点集(坐标)
# Get the set of contour points (coordinates)
find_contours = cv.findContours(binary, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
if len(find_contours) == 3:
contours = find_contours[1]
else:
contours = find_contours[0]
if len(contours) != 0:
areas = []
for c in range(len(contours)): areas.append(cv.contourArea(contours[c]))
max_id = areas.index(max(areas))
max_rect = cv.minAreaRect(contours[max_id])
max_box = cv.boxPoints(max_rect)
max_box = np.int0(max_box)
(color_x, color_y), color_radius = cv.minEnclosingCircle(max_box)
# 将检测到的颜色用原形线圈标记出来
# Mark the detected color with the original shape coil
self.Center_x = int(color_x)
self.Center_y = int(color_y)
self.Center_r = int(color_radius)
cv.circle(img, (self.Center_x, self.Center_y), self.Center_r, (255, 0, 255), 2)
cv.circle(img, (self.Center_x, self.Center_y), 2, (0, 0, 255), -1)
else:
self.Center_x = 0
self.Center_y = 0
self.Center_r = 0
return img, binary, (self.Center_x, self.Center_y, self.Center_r)
def Roi_hsv(self, img, Roi):
'''
获取某一区域的HSV的范围
Get the range of HSV in a certain area
:param img: Color map 彩色图
:param Roi: (x_min, y_min, x_max, y_max)
Roi=(290,280,350,340)
:return: 图像和HSV的范围 例如(0,0,90)(177,40,150)
Image and HSV range E.g(0,0,90)(177,40,150)
'''
H = [];S = [];V = []
# 将彩色图转成HSV
# Convert color image to HSV
HSV = cv.cvtColor(img, cv.COLOR_BGR2HSV)
# 画矩形框
# Draw a rectangular frame
# cv.rectangle(img, (Roi[0], Roi[1]), (Roi[2], Roi[3]), (0, 255, 0), 2)
# 依次取出每行每列的H,S,V值放入容器中
# Take out the H, S, V values of each row and each column in turn and put them into the container
for i in range(Roi[0], Roi[2]):
for j in range(Roi[1], Roi[3]):
H.append(HSV[j, i][0])
S.append(HSV[j, i][1])
V.append(HSV[j, i][2])
# 分别计算出H,S,V的最大最小
# Calculate the maximum and minimum of H, S, and V respectively
H_min = min(H); H_max = max(H)
S_min = min(S); S_max = 253
V_min = min(V); V_max = 255
# HSV范围调整
# HSV range adjustment
if H_max + 5 > 255: H_max = 255
else: H_max += 5
if H_min - 5 < 0: H_min = 0
else: H_min -= 5
if S_min - 20 < 0: S_min = 0
else: S_min -= 20
if V_min - 20 < 0: V_min = 0
else: V_min -= 20
lowerb = 'lowerb : (' + str(H_min) + ' ,' + str(S_min) + ' ,' + str(V_min) + ')'
upperb = 'upperb : (' + str(H_max) + ' ,' + str(S_max) + ' ,' + str(V_max) + ')'
txt1 = 'Learning ...'
txt2 = 'OK !!!'
if S_min < 5 or V_min < 5:
cv.putText(img, txt1, (30, 50), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
else:
cv.putText(img, txt2, (30, 50), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv.putText(img, lowerb, (150, 30), cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)
cv.putText(img, upperb, (150, 50), cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)
hsv_range = ((int(H_min), int(S_min), int(V_min)), (int(H_max), int(S_max), int(V_max)))
return img, hsv_range
class simplePID:
def __init__(self, kp, ki, kd):
self.kp = kp
self.ki = ki
self.kd = kd
self.targetpoint = 0
self.intergral = 0
self.derivative = 0
self.prevError = 0
def compute(self, target, current):
error = target - current
self.intergral += error
self.derivative = error - self.prevError
self.targetpoint = self.kp * error + self.ki * self.intergral + self.kd * self.derivative
self.prevError = error
return self.targetpoint
def reset(self):
self.targetpoint = 0
self.intergral = 0
self.derivative = 0
self.prevError = 0
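# A short usage sketch of simplePID, mirroring the gain scaling done in PID_init() of
# colorTracker.py below; the gains and distances are illustrative only, and this block runs
# only when the module is executed directly, not when the ROS2 nodes import it.
if __name__ == '__main__':
    linear_pid = simplePID(3.0 / 1000.0, 0.0, 1.0 / 1000.0)  # Kp, Ki, Kd after the /1000 scaling
    target_mm = 1000                                         # desired following distance in millimetres
    for measured_mm in (1800, 1600, 1400, 1200, 1000):
        cmd = linear_pid.compute(measured_mm, target_mm)     # same argument order as the tracker node
        print('measured:', measured_mm, 'mm -> linear command:', round(cmd, 3))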

@ -0,0 +1,177 @@
#ros lib
import rclpy
from rclpy.node import Node
from std_msgs.msg import Bool
from geometry_msgs.msg import Twist
from sensor_msgs.msg import CompressedImage, LaserScan, Image
from yahboomcar_msgs.msg import Position
#common lib
import os
import threading
import math
from yahboomcar_astra.astra_common import *
print("import finish")
cv_edition = cv.__version__
print("cv_edition: ",cv_edition)
class Color_Identify(Node):
def __init__(self,name):
super().__init__(name)
#create a publisher
self.pub_position = self.create_publisher(Position,"/Current_point", 10)
self.pub_cmdVel = self.create_publisher(Twist, '/cmd_vel', 10)
self.index = 2
self.Roi_init = ()
self.hsv_range = ()
self.circle = (0, 0, 0)
self.point_pose = (0, 0, 0)
self.dyn_update = True
self.Start_state = True
self.select_flags = False
self.gTracker_state = False
self.windows_name = 'frame'
self.Track_state = 'identify'
self.color = color_follow()
self.cols, self.rows = 0, 0
self.Mouse_XY = (0, 0)
self.declare_param()
self.hsv_text = "/root/yahboomcar_ros2_ws/yahboomcar_ws/src/yahboomcar_astra/yahboomcar_astra/colorHSV.text"
self.capture = cv.VideoCapture(0)
if cv_edition[0]=='3': self.capture.set(cv.CAP_PROP_FOURCC, cv.VideoWriter_fourcc(*'XVID'))
else: self.capture.set(cv.CAP_PROP_FOURCC, cv.VideoWriter.fourcc('M', 'J', 'P', 'G'))
self.timer = self.create_timer(0.001, self.on_timer)
def declare_param(self):
#HSV
self.declare_parameter("Hmin",0)
self.Hmin = self.get_parameter('Hmin').get_parameter_value().integer_value
self.declare_parameter("Smin",85)
self.Smin = self.get_parameter('Smin').get_parameter_value().integer_value
self.declare_parameter("Vmin",126)
self.Vmin = self.get_parameter('Vmin').get_parameter_value().integer_value
self.declare_parameter("Hmax",9)
self.Hmax = self.get_parameter('Hmax').get_parameter_value().integer_value
self.declare_parameter("Smax",253)
self.Smax = self.get_parameter('Smax').get_parameter_value().integer_value
self.declare_parameter("Vmax",253)
self.Vmax = self.get_parameter('Vmax').get_parameter_value().integer_value
self.declare_parameter('refresh',False)
self.refresh = self.get_parameter('refresh').get_parameter_value().bool_value
def on_timer(self):
        start = time.time()
        ret, frame = self.capture.read()
        action = cv.waitKey(10) & 0xFF
        frame, binary = self.process(frame, action)
        end = time.time()
        fps = 1 / (end - start) if end > start else 0
        text = "FPS : " + str(int(fps))
cv.putText(frame, text, (30, 30), cv.FONT_HERSHEY_SIMPLEX, 0.6, (100, 200, 200), 1)
if len(binary) != 0: cv.imshow('frame', ManyImgs(1, ([frame, binary])))
else:cv.imshow('frame', frame)
if action == ord('q') or action == 113:
self.capture.release()
cv.destroyAllWindows()
def process(self, rgb_img, action):
self.get_param()
rgb_img = cv.resize(rgb_img, (640, 480))
binary = []
if action == 32: self.Track_state = 'tracking'
elif action == ord('i') or action == ord('I'): self.Track_state = "identify"
elif action == ord('r') or action == ord('R'): self.Reset()
elif action == ord('q') or action == ord('Q'): self.cancel()
if self.Track_state == 'init':
cv.namedWindow(self.windows_name, cv.WINDOW_AUTOSIZE)
cv.setMouseCallback(self.windows_name, self.onMouse, 0)
if self.select_flags == True:
cv.line(rgb_img, self.cols, self.rows, (255, 0, 0), 2)
cv.rectangle(rgb_img, self.cols, self.rows, (0, 255, 0), 2)
if self.Roi_init[0] != self.Roi_init[2] and self.Roi_init[1] != self.Roi_init[3]:
rgb_img, self.hsv_range = self.color.Roi_hsv(rgb_img, self.Roi_init)
self.gTracker_state = True
self.dyn_update = True
else: self.Track_state = 'init'
elif self.Track_state == "identify":
if os.path.exists(self.hsv_text): self.hsv_range = read_HSV(self.hsv_text)
else: self.Track_state = 'init'
if self.Track_state != 'init':
if len(self.hsv_range) != 0:
rgb_img, binary, self.circle = self.color.object_follow(rgb_img, self.hsv_range)
if self.dyn_update == True:
write_HSV(self.hsv_text, self.hsv_range)
self.Hmin = rclpy.parameter.Parameter('Hmin',rclpy.Parameter.Type.INTEGER,self.hsv_range[0][0])
self.Smin = rclpy.parameter.Parameter('Smin',rclpy.Parameter.Type.INTEGER,self.hsv_range[0][1])
self.Vmin = rclpy.parameter.Parameter('Vmin',rclpy.Parameter.Type.INTEGER,self.hsv_range[0][2])
self.Hmax = rclpy.parameter.Parameter('Hmax',rclpy.Parameter.Type.INTEGER,self.hsv_range[1][0])
self.Smax = rclpy.parameter.Parameter('Smax',rclpy.Parameter.Type.INTEGER,self.hsv_range[1][1])
self.Vmax = rclpy.parameter.Parameter('Vmax',rclpy.Parameter.Type.INTEGER,self.hsv_range[1][2])
all_new_parameters = [self.Hmin,self.Smin,self.Vmin,self.Hmax,self.Smax,self.Vmax]
self.set_parameters(all_new_parameters)
self.dyn_update = False
if self.Track_state == 'tracking':
self.Start_state = True
if self.circle[2] != 0: threading.Thread(
target=self.execute, args=(self.circle[0], self.circle[1], self.circle[2])).start()
if self.point_pose[0] != 0 and self.point_pose[1] != 0: threading.Thread(
target=self.execute, args=(self.point_pose[0], self.point_pose[1], self.point_pose[2])).start()
else:
if self.Start_state == True:
self.pub_cmdVel.publish(Twist())
self.Start_state = False
return rgb_img, binary
def execute(self, x, y, z):
position = Position()
position.anglex = x * 1.0
position.angley = y * 1.0
position.distance = z * 1.0
self.pub_position.publish(position)
def get_param(self):
#hsv
self.Hmin = self.get_parameter('Hmin').get_parameter_value().integer_value
self.Smin = self.get_parameter('Smin').get_parameter_value().integer_value
self.Vmin = self.get_parameter('Vmin').get_parameter_value().integer_value
self.Hmax = self.get_parameter('Hmax').get_parameter_value().integer_value
self.Smax = self.get_parameter('Smax').get_parameter_value().integer_value
self.Vmax = self.get_parameter('Vmax').get_parameter_value().integer_value
self.refresh = self.get_parameter('refresh').get_parameter_value().bool_value
def Reset(self):
self.hsv_range = ()
self.circle = (0, 0, 0)
self.Mouse_XY = (0, 0)
self.Track_state = 'init'
for i in range(3): self.pub_position.publish(Position())
print("succes!!!")
def cancel(self):
print("Shutting down this node.")
cv.destroyAllWindows()
def onMouse(self, event, x, y, flags, param):
if event == 1:
self.Track_state = 'init'
self.select_flags = True
self.Mouse_XY = (x, y)
if event == 4:
self.select_flags = False
self.Track_state = 'mouse'
if self.select_flags == True:
self.cols = min(self.Mouse_XY[0], x), min(self.Mouse_XY[1], y)
self.rows = max(self.Mouse_XY[0], x), max(self.Mouse_XY[1], y)
self.Roi_init = (self.cols[0], self.cols[1], self.rows[0], self.rows[1])
def main():
rclpy.init()
color_identify = Color_Identify("ColorIdentify")
print("start it")
rclpy.spin(color_identify)

@ -0,0 +1,172 @@
#ros lib
import rclpy
from rclpy.node import Node
from std_msgs.msg import Bool
from geometry_msgs.msg import Twist
from sensor_msgs.msg import CompressedImage, LaserScan, Image
from yahboomcar_msgs.msg import Position
#common lib
import os
import threading
import math
from yahboomcar_astra.astra_common import *
from cv_bridge import CvBridge
print("import done")
class color_Tracker(Node):
def __init__(self,name):
super().__init__(name)
#create the publisher
self.pub_cmdVel = self.create_publisher(Twist,'/cmd_vel',10)
#create the subscriber
self.sub_depth = self.create_subscription(Image,"/camera/depth/image_raw", self.depth_img_Callback, 1)
self.sub_JoyState = self.create_subscription(Bool,'/JoyState', self.JoyStateCallback,1)
self.sub_position = self.create_subscription(Position,"/Current_point",self.positionCallback,1)
self.bridge = CvBridge()
self.minDist = 1500
self.Center_x = 0
self.Center_y = 0
self.Center_r = 0
self.Center_prevx = 0
self.Center_prevr = 0
self.prev_time = 0
self.prev_dist = 0
self.prev_angular = 0
self.Joy_active = False
self.Robot_Run = False
self.dist = []
self.encoding = ['16UC1', '32FC1']
self.linear_PID = (3.0, 0.0, 1.0)
self.angular_PID = (0.5, 0.0, 2.0)
self.scale = 1000
self.PID_init()
self.declare_param()
print("init done")
def declare_param(self):
self.declare_parameter("linear_Kp",3.0)
self.linear_Kp = self.get_parameter('linear_Kp').get_parameter_value().double_value
self.declare_parameter("linear_Ki",0.0)
self.linear_Ki = self.get_parameter('linear_Ki').get_parameter_value().double_value
self.declare_parameter("linear_Kd",1.0)
self.linear_Kd = self.get_parameter('linear_Kd').get_parameter_value().double_value
self.declare_parameter("angular_Kp",0.5)
self.angular_Kp = self.get_parameter('angular_Kp').get_parameter_value().double_value
self.declare_parameter("angular_Ki",0.0)
self.angular_Ki = self.get_parameter('angular_Ki').get_parameter_value().double_value
self.declare_parameter("angular_Kd",2.0)
self.angular_Kd = self.get_parameter('angular_Kd').get_parameter_value().double_value
self.declare_parameter("scale",1000)
self.scale = self.get_parameter('scale').get_parameter_value().integer_value
self.declare_parameter("minDistance",1.0)
self.minDistance = self.get_parameter('minDistance').get_parameter_value().double_value
def get_param(self):
self.linear_Kp = self.get_parameter('linear_Kp').get_parameter_value().double_value
self.linear_Ki = self.get_parameter('linear_Ki').get_parameter_value().double_value
self.linear_Kd = self.get_parameter('linear_Kd').get_parameter_value().double_value
self.angular_Kp = self.get_parameter('angular_Kp').get_parameter_value().double_value
self.angular_Ki = self.get_parameter('angular_Ki').get_parameter_value().double_value
self.angular_Kd = self.get_parameter('angular_Kd').get_parameter_value().double_value
self.scale = self.get_parameter('scale').get_parameter_value().integer_value
self.minDistance = self.get_parameter('minDistance').get_parameter_value().double_value
self.linear_PID = (self.linear_Kp, self.linear_Ki, self.linear_Kd)
self.angular_PID = (self.angular_Kp, self.angular_Ki, self.angular_Kd)
self.minDist = self.minDistance * 1000
def PID_init(self):
self.linear_pid = simplePID(self.linear_PID[0] / 1000.0, self.linear_PID[1] / 1000.0, self.linear_PID[2] / 1000.0)
self.angular_pid = simplePID(self.angular_PID[0] / 100.0, self.angular_PID[1] / 100.0, self.angular_PID[2] / 100.0)
def depth_img_Callback(self, msg):
if not isinstance(msg, Image): return
depthFrame = self.bridge.imgmsg_to_cv2(msg, desired_encoding=self.encoding[1])
self.action = cv.waitKey(1)
if self.Center_r != 0:
now_time = time.time()
if now_time - self.prev_time > 5:
if self.Center_prevx == self.Center_x and self.Center_prevr == self.Center_r: self.Center_r = 0
self.prev_time = now_time
distance = [0, 0, 0, 0, 0]
if 0 < int(self.Center_y - 3) and int(self.Center_y + 3) < 480 and 0 < int(
self.Center_x - 3) and int(self.Center_x + 3) < 640:
# print("depthFrame: ", len(depthFrame), len(depthFrame[0]))
distance[0] = depthFrame[int(self.Center_y - 3)][int(self.Center_x - 3)]
distance[1] = depthFrame[int(self.Center_y + 3)][int(self.Center_x - 3)]
distance[2] = depthFrame[int(self.Center_y - 3)][int(self.Center_x + 3)]
distance[3] = depthFrame[int(self.Center_y + 3)][int(self.Center_x + 3)]
distance[4] = depthFrame[int(self.Center_y)][int(self.Center_x)]
                distance_ = 0.0  # accumulate from zero so the average is not biased
num_depth_points = 5
for i in range(5):
if 40 < distance[i] < 80000: distance_ += distance[i]
else: num_depth_points -= 1
if num_depth_points == 0: distance_ = self.minDist
else: distance_ /= num_depth_points
#print("Center_x: {}, Center_y: {}, distance_: {}".format(self.Center_x, self.Center_y, distance_))
self.execute(self.Center_x, distance_)
self.Center_prevx = self.Center_x
self.Center_prevr = self.Center_r
else:
if self.Robot_Run ==True:
self.pub_cmdVel.publish(Twist())
self.Robot_Run = False
if self.action == ord('q') or self.action == 113: self.cleanup()
def JoyStateCallback(self, msg):
if not isinstance(msg, Bool): return
self.Joy_active = msg.data
self.pub_cmdVel.publish(Twist())
def positionCallback(self, msg):
if not isinstance(msg, Position): return
self.Center_x = msg.anglex
self.Center_y = msg.angley
self.Center_r = msg.distance
def cleanup(self):
self.pub_cmdVel.publish(Twist())
print ("Shutting down this node.")
cv.destroyAllWindows()
def execute(self, point_x, dist):
self.get_param()
if abs(self.prev_dist - dist) > 300:
self.prev_dist = dist
return
if abs(self.prev_angular - point_x) > 300:
self.prev_angular = point_x
return
if self.Joy_active == True: return
linear_x = self.linear_pid.compute(dist, self.minDist)
angular_z = self.angular_pid.compute(320, point_x)
if abs(dist - self.minDist) < 30: linear_x = 0
if abs(point_x - 320.0) < 30: angular_z = 0
twist = Twist()
if angular_z>2.0:
angular_z = 2.0
if angular_z<-2.0:
angular_z = -2.0
if linear_x > 1.0:
linear_x = 1.0
if linear_x <-1.0:
linear_x = -1.0
twist.angular.z = angular_z * 1.0
twist.linear.x = linear_x * 1.0
print("twist.linear.x: ",twist.linear.x)
print("twist.angular.z: ",twist.angular.z)
self.pub_cmdVel.publish(twist)
self.Robot_Run = True
def main():
rclpy.init()
color_tracker = color_Tracker("ColorTracker")
print("start it")
rclpy.spin(color_tracker)

@ -0,0 +1,52 @@
cmake_minimum_required(VERSION 3.5)
project(yahboomcar_base_node)
# Default to C99
if(NOT CMAKE_C_STANDARD)
set(CMAKE_C_STANDARD 99)
endif()
# Default to C++14
if(NOT CMAKE_CXX_STANDARD)
set(CMAKE_CXX_STANDARD 14)
endif()
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
# find dependencies
find_package(ament_cmake REQUIRED)
# uncomment the following section in order to fill in
# further dependencies manually.
find_package(tf2 REQUIRED)
find_package(tf2_ros REQUIRED)
find_package(nav_msgs REQUIRED)
find_package(geometry_msgs REQUIRED)
find_package(rclcpp REQUIRED)
find_package(turtlesim REQUIRED)
if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
# the following line skips the linter which checks for copyrights
# uncomment the line when a copyright and license is not present in all source files
#set(ament_cmake_copyright_FOUND TRUE)
# the following line skips cpplint (only works in a git repo)
# uncomment the line when this package is not in a git repo
#set(ament_cmake_cpplint_FOUND TRUE)
ament_lint_auto_find_test_dependencies()
endif()
add_executable(base_node_X3 src/base_node_X3.cpp)
add_executable(base_node_x1 src/base_node_x1.cpp)
add_executable(base_node_R2 src/base_node_R2.cpp)
ament_target_dependencies(base_node_X3 rclcpp tf2 tf2_ros nav_msgs geometry_msgs)
ament_target_dependencies(base_node_x1 rclcpp tf2 tf2_ros nav_msgs geometry_msgs)
ament_target_dependencies(base_node_R2 rclcpp tf2 tf2_ros nav_msgs geometry_msgs)
install(TARGETS
base_node_X3
base_node_x1
base_node_R2
DESTINATION lib/${PROJECT_NAME}
)
ament_package()

@ -0,0 +1,25 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>yahboomcar_base_node</name>
<version>0.0.0</version>
<description>TODO: Package description</description>
<maintainer email="nx-ros2@todo.todo">nx-ros2</maintainer>
<license>TODO: License declaration</license>
<buildtool_depend>ament_cmake</buildtool_depend>
<test_depend>ament_lint_auto</test_depend>
<test_depend>ament_lint_common</test_depend>
<depend>geometry_msgs</depend>
<depend>rclcpp</depend>
<depend>tf2</depend>
<depend>tf2_ros</depend>
<depend>turtlesim</depend>
<depend>nav_msgs</depend>
<export>
<build_type>ament_cmake</build_type>
</export>
</package>

@ -0,0 +1,159 @@
#include <chrono>
#include <cmath>
#include <functional>
#include <memory>
#include <string>
#include <geometry_msgs/msg/transform_stamped.hpp>
#include "geometry_msgs/msg/twist.hpp"
#include "nav_msgs/msg/odometry.hpp"
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"
#include <tf2/LinearMath/Quaternion.h>
#include <tf2_ros/transform_broadcaster.h>
#include <turtlesim/msg/pose.hpp>
using std::placeholders::_1;
class OdomPublisher : public rclcpp::Node
{
rclcpp::Subscription<geometry_msgs::msg::Twist>::SharedPtr subscription_;
rclcpp::Publisher<nav_msgs::msg::Odometry>::SharedPtr odom_publisher_;
std::unique_ptr<tf2_ros::TransformBroadcaster> tf_broadcaster_;
double linear_scale_x_ = 0.0 ;
double linear_scale_y_ = 0.0;
double vel_dt_ = 0.0;
double x_pos_ = 0.0;
double y_pos_ = 0.0;
double heading_ = 0.0;
double linear_velocity_x_ = 0.0;
double linear_velocity_y_ = 0.0;
double wheelbase_ = 0.25;
bool pub_odom_tf_ = false;
rclcpp::Time last_vel_time_ ;
public:
OdomPublisher()
: Node("base_node")
{
this->declare_parameter<double>("wheelbase",0.25);
this->declare_parameter<std::string>("odom_frame","odom");
this->declare_parameter<std::string>("base_footprint_frame","base_footprint");
this->declare_parameter<double>("linear_scale_x",1.0);
this->declare_parameter<double>("linear_scale_y",1.0);
this->declare_parameter<bool>("pub_odom_tf",true);
this->get_parameter<double>("linear_scale_x",linear_scale_x_);
this->get_parameter<double>("linear_scale_y",linear_scale_y_);
this->get_parameter<double>("wheelbase",wheelbase_);
this->get_parameter<bool>("pub_odom_tf",pub_odom_tf_);
tf_broadcaster_ = std::make_unique<tf2_ros::TransformBroadcaster>(*this);
subscription_ = this->create_subscription<geometry_msgs::msg::Twist>("vel_raw",50,std::bind(&OdomPublisher::handle_vel,this,_1));
odom_publisher_ = this->create_publisher<nav_msgs::msg::Odometry>("/odom_raw", 50);
}
private:
void handle_vel(const std::shared_ptr<geometry_msgs::msg::Twist > msg)
{
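// Odometry update for an Ackermann-steered chassis (presumably the R2 variant; the
// vel_raw mapping below is inferred from the math in this handler):
//   linear.x -> forward speed v [m/s], linear.y -> steering angle delta [degrees]
//   turning radius R = wheelbase / tan(delta), yaw rate w = v / R
//   x += v*cos(theta)*dt,  y += v*sin(theta)*dt,  theta += w*dt   (Euler integration)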
rclcpp::Time current_time = rclcpp::Clock().now();
// Scale the raw velocities reported by the driver (scale factors default to 1).
linear_velocity_x_ = msg->linear.x * linear_scale_x_;
linear_velocity_y_ = msg->linear.y * linear_scale_y_;
vel_dt_ = (current_time - last_vel_time_).seconds();
last_vel_time_ = current_time;
double steer_angle = linear_velocity_y_;
double turning_radius = wheelbase_ / tan(steer_angle / 180.0 * M_PI); // infinite when driving straight
double angular_velocity_z_ = linear_velocity_x_ / turning_radius;     // yaw rate = v / R
double delta_heading = angular_velocity_z_ * vel_dt_; //radians
double delta_x = (linear_velocity_x_ * cos(heading_)) * vel_dt_; //m
double delta_y = (linear_velocity_x_ * sin(heading_)) * vel_dt_; //m
x_pos_ += delta_x;
y_pos_ += delta_y;
heading_ += delta_heading;
tf2::Quaternion myQuaternion;
geometry_msgs::msg::Quaternion odom_quat;
myQuaternion.setRPY(0.0, 0.0, heading_);
odom_quat.x = myQuaternion.x();
odom_quat.y = myQuaternion.y();
odom_quat.z = myQuaternion.z();
odom_quat.w = myQuaternion.w();
nav_msgs::msg::Odometry odom;
odom.header.stamp = current_time;
odom.header.frame_id = "odom";
odom.child_frame_id = "base_footprint";
// robot's position in x,y and z
odom.pose.pose.position.x = x_pos_;
odom.pose.pose.position.y = y_pos_;
odom.pose.pose.position.z = 0.0;
// robot's heading in quaternion
odom.pose.pose.orientation = odom_quat;
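// Covariance fields are row-major 6x6 matrices over (x, y, z, roll, pitch, yaw):
// index 0 = x, 7 = y, 35 = yaw.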
odom.pose.covariance[0] = 0.001;
odom.pose.covariance[7] = 0.001;
odom.pose.covariance[35] = 0.001;
// linear speed from encoders
odom.twist.twist.linear.x = linear_velocity_x_;
//odom.twist.twist.linear.y = linear_velocity_y_;
odom.twist.twist.linear.y = 0.0; // vy = 0.0
odom.twist.twist.linear.z = 0.0;
odom.twist.twist.angular.x = 0.0;
odom.twist.twist.angular.y = 0.0;
// angular speed from encoders
odom.twist.twist.angular.z = angular_velocity_z_;
odom.twist.covariance[0] = 0.0001;
odom.twist.covariance[7] = 0.0001;
odom.twist.covariance[35] = 0.0001;
// ROS_INFO("ODOM PUBLISH");
odom_publisher_ -> publish(odom);
if (pub_odom_tf_)
{
geometry_msgs::msg::TransformStamped t;
rclcpp::Time now = this->get_clock()->now();
t.header.stamp = now;
t.header.frame_id = "odom";
t.child_frame_id = "base_footprint";
t.transform.translation.x = x_pos_;
t.transform.translation.y = y_pos_;
t.transform.translation.z = 0.0;
t.transform.rotation.x = myQuaternion.x();
t.transform.rotation.y = myQuaternion.y();
t.transform.rotation.z = myQuaternion.z();
t.transform.rotation.w = myQuaternion.w();
tf_broadcaster_->sendTransform(t);
}
}
};
int main(int argc, char * argv[])
{
rclcpp::init(argc, argv);
rclcpp::spin(std::make_shared<OdomPublisher>());
rclcpp::shutdown();
return 0;
}

@ -0,0 +1,160 @@
#include <cmath>
#include <functional>
#include <memory>
#include <string>
#include <geometry_msgs/msg/transform_stamped.hpp>
#include <geometry_msgs/msg/twist.hpp>
#include <nav_msgs/msg/odometry.hpp>
#include <rclcpp/rclcpp.hpp>
#include <tf2/LinearMath/Quaternion.h>
#include <tf2_ros/transform_broadcaster.h>
using std::placeholders::_1;
class OdomPublisher : public rclcpp::Node
{
rclcpp::Subscription<geometry_msgs::msg::Twist>::SharedPtr subscription_;
rclcpp::Publisher<nav_msgs::msg::Odometry>::SharedPtr odom_publisher_;
std::unique_ptr<tf2_ros::TransformBroadcaster> tf_broadcaster_;
double linear_scale_x_ = 0.0 ;
double linear_scale_y_ = 0.0;
double vel_dt_ = 0.0;
double x_pos_ = 0.0;
double y_pos_ = 0.0;
double heading_ = 0.0;
double linear_velocity_x_ = 0.0;
double linear_velocity_y_ = 0.0;
double angular_velocity_z_ = 0.0;
double wheelbase_ = 0.25;
bool pub_odom_tf_ = false;
rclcpp::Time last_vel_time_ ;
public:
OdomPublisher()
: Node("base_node")
{
this->declare_parameter<double>("wheelbase",0.25);
this->declare_parameter<std::string>("odom_frame","odom");
this->declare_parameter<std::string>("base_footprint_frame","base_footprint");
this->declare_parameter<double>("linear_scale_x",1.0);
this->declare_parameter<double>("linear_scale_y",1.0);
this->declare_parameter<bool>("pub_odom_tf",true);
this->get_parameter<double>("linear_scale_x",linear_scale_x_);
this->get_parameter<double>("linear_scale_y",linear_scale_y_);
this->get_parameter<double>("wheelbase",wheelbase_);
this->get_parameter<bool>("pub_odom_tf",pub_odom_tf_);
tf_broadcaster_ = std::make_unique<tf2_ros::TransformBroadcaster>(*this);
subscription_ = this->create_subscription<geometry_msgs::msg::Twist>("vel_raw",50,std::bind(&OdomPublisher::handle_vel,this,_1));
odom_publisher_ = this->create_publisher<nav_msgs::msg::Odometry>("/odom_raw", 50);
}
private:
void handle_vel(const std::shared_ptr<geometry_msgs::msg::Twist > msg)
{
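// Odometry update for a holonomic base (mecanum wheels, presumably the X3 variant):
// vel_raw carries body-frame velocities vx = linear.x, vy = linear.y, wz = angular.z.
//   dx = (vx*cos(theta) - vy*sin(theta)) * dt
//   dy = (vx*sin(theta) + vy*cos(theta)) * dt
//   dtheta = wz * dt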
rclcpp::Time current_time = rclcpp::Clock().now();
linear_velocity_x_ = msg->linear.x * linear_scale_x_;
linear_velocity_y_ = msg->linear.y * linear_scale_y_;
angular_velocity_z_ = msg->angular.z;
vel_dt_ = (current_time - last_vel_time_).seconds();
last_vel_time_ = current_time;
double delta_heading = angular_velocity_z_ * vel_dt_; //radians
double delta_x = (linear_velocity_x_ * cos(heading_) - linear_velocity_y_ * sin(heading_)) * vel_dt_; //m
double delta_y = (linear_velocity_x_ * sin(heading_) + linear_velocity_y_ * cos(heading_)) * vel_dt_; //m
x_pos_ += delta_x;
y_pos_ += delta_y;
heading_ += delta_heading;
tf2::Quaternion myQuaternion;
geometry_msgs::msg::Quaternion odom_quat;
myQuaternion.setRPY(0.0, 0.0, heading_);
odom_quat.x = myQuaternion.x();
odom_quat.y = myQuaternion.y();
odom_quat.z = myQuaternion.z();
odom_quat.w = myQuaternion.w();
nav_msgs::msg::Odometry odom;
odom.header.stamp = current_time;
odom.header.frame_id = "odom";
odom.child_frame_id = "base_footprint";
// robot's position in x,y and z
odom.pose.pose.position.x = x_pos_;
odom.pose.pose.position.y = y_pos_;
odom.pose.pose.position.z = 0.0;
// robot's heading in quaternion
odom.pose.pose.orientation = odom_quat;
odom.pose.covariance[0] = 0.001;
odom.pose.covariance[7] = 0.001;
odom.pose.covariance[35] = 0.001;
// linear speed from encoders
odom.twist.twist.linear.x = linear_velocity_x_;
odom.twist.twist.linear.y = 0.0; // vy reported as zero; the measured vy is still used for the pose integration above
odom.twist.twist.linear.z = 0.0;
odom.twist.twist.angular.x = 0.0;
odom.twist.twist.angular.y = 0.0;
// angular speed from encoders
odom.twist.twist.angular.z = angular_velocity_z_;
odom.twist.covariance[0] = 0.0001;
odom.twist.covariance[7] = 0.0001;
odom.twist.covariance[35] = 0.0001;
// ROS_INFO("ODOM PUBLISH");
odom_publisher_ -> publish(odom);
if (pub_odom_tf_)
{
geometry_msgs::msg::TransformStamped t;
rclcpp::Time now = this->get_clock()->now();
t.header.stamp = now;
t.header.frame_id = "odom";
t.child_frame_id = "base_footprint";
t.transform.translation.x = x_pos_;
t.transform.translation.y = y_pos_;
t.transform.translation.z = 0.0;
t.transform.rotation.x = myQuaternion.x();
t.transform.rotation.y = myQuaternion.y();
t.transform.rotation.z = myQuaternion.z();
t.transform.rotation.w = myQuaternion.w();
tf_broadcaster_->sendTransform(t);
}
}
};
int main(int argc, char * argv[])
{
rclcpp::init(argc, argv);
rclcpp::spin(std::make_shared<OdomPublisher>());
rclcpp::shutdown();
return 0;
}

@ -0,0 +1,161 @@
#include <cmath>
#include <functional>
#include <memory>
#include <string>
#include <geometry_msgs/msg/transform_stamped.hpp>
#include <geometry_msgs/msg/twist.hpp>
#include <nav_msgs/msg/odometry.hpp>
#include <rclcpp/rclcpp.hpp>
#include <tf2/LinearMath/Quaternion.h>
#include <tf2_ros/transform_broadcaster.h>
using std::placeholders::_1;
class OdomPublisher : public rclcpp::Node
{
rclcpp::Subscription<geometry_msgs::msg::Twist>::SharedPtr subscription_;
rclcpp::Publisher<nav_msgs::msg::Odometry>::SharedPtr odom_publisher_;
std::unique_ptr<tf2_ros::TransformBroadcaster> tf_broadcaster_;
double linear_scale_x_ = 0.0 ;
double linear_scale_y_ = 0.0;
double vel_dt_ = 0.0;
double x_pos_ = 0.0;
double y_pos_ = 0.0;
double heading_ = 0.0;
double linear_velocity_x_ = 0.0;
double linear_velocity_y_ = 0.0;
double angular_velocity_z_ = 0.0;
double wheelbase_ = 0.25;
bool pub_odom_tf_ = false;
rclcpp::Time last_vel_time_ ;
public:
OdomPublisher()
: Node("base_node_x1")
{
this->declare_parameter<double>("wheelbase",0.25);
this->declare_parameter<std::string>("odom_frame","odom");
this->declare_parameter<std::string>("base_footprint_frame","base_footprint");
this->declare_parameter<double>("linear_scale_x",1.0);
this->declare_parameter<double>("linear_scale_y",1.0);
this->declare_parameter<bool>("pub_odom_tf",false);
this->get_parameter<double>("linear_scale_x",linear_scale_x_);
this->get_parameter<double>("linear_scale_y",linear_scale_y_);
this->get_parameter<double>("wheelbase",wheelbase_);
this->get_parameter<bool>("pub_odom_tf",pub_odom_tf_);
tf_broadcaster_ = std::make_unique<tf2_ros::TransformBroadcaster>(*this);
subscription_ = this->create_subscription<geometry_msgs::msg::Twist>("/vel_raw",50,std::bind(&OdomPublisher::handle_vel,this,_1));
odom_publisher_ = this->create_publisher<nav_msgs::msg::Odometry>("/odom_raw", 50);
}
private:
void handle_vel(const std::shared_ptr<geometry_msgs::msg::Twist > msg)
{
// RCLCPP_INFO(this->get_logger(),"linear_velocity_x_ = %f",msg->linear.x);
// RCLCPP_INFO(this->get_logger(),"linear_velocity_y_ = %f",msg->linear.y);
// RCLCPP_INFO(this->get_logger(),"angular_velocity_z_ = %f",msg->angular.z);
// ROS_INFO("ODOM PUBLISH %.2f,%.2f,%.2f", twist.linear.x, twist.linear.y, twist.angular.z);
rclcpp::Time current_time = rclcpp::Clock().now();
linear_velocity_x_ = msg->linear.x * linear_scale_x_;
linear_velocity_y_ = msg->linear.y * linear_scale_y_;
angular_velocity_z_ = msg->angular.z;
vel_dt_ = (current_time - last_vel_time_).seconds();
last_vel_time_ = current_time;
//compute odometry in a typical way given the velocities of the robot
double delta_heading = angular_velocity_z_ * vel_dt_; //radians
double delta_x = (linear_velocity_x_ * cos(heading_) - linear_velocity_y_ * sin(heading_)) * vel_dt_; //m
double delta_y = (linear_velocity_x_ * sin(heading_) + linear_velocity_y_ * cos(heading_)) * vel_dt_; //m
//calculate current position of the robot
x_pos_ += delta_x;
y_pos_ += delta_y;
heading_ += delta_heading;
tf2::Quaternion myQuaternion;
geometry_msgs::msg::Quaternion odom_quat ;
myQuaternion.setRPY(0.00,0.00,heading_ );
odom_quat.x = myQuaternion.x();
odom_quat.y = myQuaternion.y();
odom_quat.z = myQuaternion.z();
odom_quat.w = myQuaternion.w();
nav_msgs::msg::Odometry odom;
odom.header.stamp = current_time;
odom.header.frame_id = "odom";
odom.child_frame_id = "base_footprint";
// robot's position in x,y and z
odom.pose.pose.position.x = x_pos_;
odom.pose.pose.position.y = y_pos_;
odom.pose.pose.position.z = 0.0;
// robot's heading in quaternion
odom.pose.pose.orientation = odom_quat;
odom.pose.covariance[0] = 0.001;
odom.pose.covariance[7] = 0.001;
odom.pose.covariance[35] = 0.001;
// linear speed from encoders
odom.twist.twist.linear.x = linear_velocity_x_;
odom.twist.twist.linear.y = linear_velocity_y_;
odom.twist.twist.linear.z = 0.0;
odom.twist.twist.angular.x = 0.0;
odom.twist.twist.angular.y = 0.0;
// angular speed from encoders
odom.twist.twist.angular.z = angular_velocity_z_;
odom.twist.covariance[0] = 0.0001;
odom.twist.covariance[7] = 0.0001;
odom.twist.covariance[35] = 0.0001;
// ROS_INFO("ODOM PUBLISH");
// RCLCPP_INFO(this->get_logger(),"odom.pose.pose.position.x = %f",odom.pose.pose.position.x);
// RCLCPP_INFO(this->get_logger(),"odom.pose.pose.position.y = %f",odom.pose.pose.position.y);
// RCLCPP_INFO(this->get_logger(),"odom.pose.pose.position.z = %f",odom.pose.pose.position.z);
odom_publisher_ -> publish(odom);
if (pub_odom_tf_)
{
geometry_msgs::msg::TransformStamped t;
rclcpp::Time now = this->get_clock()->now();
t.header.stamp = now;
t.header.frame_id = "odom";
t.child_frame_id = "base_footprint";
t.transform.translation.x = x_pos_;
t.transform.translation.y = y_pos_;
t.transform.translation.z = 0.0;
t.transform.rotation.x = myQuaternion.x();
t.transform.rotation.y = myQuaternion.y();
t.transform.rotation.z = myQuaternion.z();
t.transform.rotation.w = myQuaternion.w();
tf_broadcaster_->sendTransform(t);
}
}
};
int main(int argc, char * argv[])
{
rclcpp::init(argc, argv);
rclcpp::spin(std::make_shared<OdomPublisher>());
rclcpp::shutdown();
return 0;
}

@ -0,0 +1,42 @@
#include <chrono>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"
using namespace std::chrono_literals;
/* This example creates a subclass of Node and uses std::bind() to register a
* member function as a callback from the timer. */
class MinimalPublisher : public rclcpp::Node
{
public:
MinimalPublisher()
: Node("minimal_publisher"), count_(0)
{
publisher_ = this->create_publisher<std_msgs::msg::String>("topic", 10);
timer_ = this->create_wall_timer(
500ms, std::bind(&MinimalPublisher::timer_callback, this));
}
private:
void timer_callback()
{
auto message = std_msgs::msg::String();
message.data = "Hello, world! " + std::to_string(count_++);
RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());
publisher_->publish(message);
}
rclcpp::TimerBase::SharedPtr timer_;
rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;
size_t count_;
};
int main(int argc, char * argv[])
{
rclcpp::init(argc, argv);
rclcpp::spin(std::make_shared<MinimalPublisher>());
rclcpp::shutdown();
return 0;
}

@ -0,0 +1,53 @@
from ament_index_python.packages import get_package_share_path
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.conditions import IfCondition, UnlessCondition
from launch.substitutions import Command, LaunchConfiguration
from launch_ros.actions import Node
from launch_ros.parameter_descriptions import ParameterValue
def generate_launch_description():
urdf_tutorial_path = get_package_share_path('yahboomcar_description')
default_model_path = urdf_tutorial_path / 'urdf/yahboomcar_R2.urdf'
default_rviz_config_path = urdf_tutorial_path / 'rviz/yahboomcar.rviz'
gui_arg = DeclareLaunchArgument(name='gui', default_value='true', choices=['true', 'false'],
description='Flag to enable joint_state_publisher_gui')
model_arg = DeclareLaunchArgument(name='model', default_value=str(default_model_path),
description='Absolute path to robot urdf file')
rviz_arg = DeclareLaunchArgument(name='rvizconfig', default_value=str(default_rviz_config_path))
robot_description = ParameterValue(Command(['xacro ', LaunchConfiguration('model')]),
value_type=str)
return LaunchDescription([
gui_arg,
model_arg,
rviz_arg,
Node(
package='yahboomcar_bringup',
executable='Mcnamu_driver',
),
Node(
package='yahboomcar_base_node',
executable='base_node',
),
Node(
package='robot_state_publisher',
executable='robot_state_publisher',
parameters=[{'robot_description': robot_description}]
),
Node(
package='joint_state_publisher',
executable='joint_state_publisher',
condition=UnlessCondition(LaunchConfiguration('gui'))
),
Node(
package='joint_state_publisher_gui',
executable='joint_state_publisher_gui',
condition=IfCondition(LaunchConfiguration('gui'))
),
# Node(
#     package='rviz2',
#     executable='rviz2',
#     name='rviz2',
#     output='screen',
#     arguments=['-d', LaunchConfiguration('rvizconfig')],
# ),
])

@ -0,0 +1,111 @@
from ament_index_python.packages import get_package_share_path
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.conditions import IfCondition, UnlessCondition
from launch.substitutions import Command, LaunchConfiguration
from launch_ros.actions import Node
from launch_ros.parameter_descriptions import ParameterValue
import os
from ament_index_python.packages import get_package_share_directory
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
print("---------------------robot_type = r2---------------------")
def generate_launch_description():
urdf_tutorial_path = get_package_share_path('yahboomcar_description')
default_model_path = urdf_tutorial_path / 'urdf/yahboomcar_R2.urdf.xacro'
default_rviz_config_path = urdf_tutorial_path / 'rviz/yahboomcar.rviz'
gui_arg = DeclareLaunchArgument(name='gui', default_value='false', choices=['true', 'false'],
description='Flag to enable joint_state_publisher_gui')
model_arg = DeclareLaunchArgument(name='model', default_value=str(default_model_path),
description='Absolute path to robot urdf file')
rviz_arg = DeclareLaunchArgument(name='rvizconfig', default_value=str(default_rviz_config_path),
description='Absolute path to rviz config file')
pub_odom_tf_arg = DeclareLaunchArgument('pub_odom_tf', default_value='false',
description='Whether to publish the tf from the original odom to the base_footprint')
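# pub_odom_tf defaults to false because the robot_localization EKF included below is
# expected to publish the odom -> base_footprint transform; enabling both would create
# two competing publishers for the same TF.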
robot_description = ParameterValue(Command(['xacro ', LaunchConfiguration('model')]),
value_type=str)
robot_state_publisher_node = Node(
package='robot_state_publisher',
executable='robot_state_publisher',
parameters=[{'robot_description': robot_description}]
)
# Depending on gui parameter, either launch joint_state_publisher or joint_state_publisher_gui
joint_state_publisher_node = Node(
package='joint_state_publisher',
executable='joint_state_publisher',
condition=UnlessCondition(LaunchConfiguration('gui'))
)
joint_state_publisher_gui_node = Node(
package='joint_state_publisher_gui',
executable='joint_state_publisher_gui',
condition=IfCondition(LaunchConfiguration('gui'))
)
rviz_node = Node(
package='rviz2',
executable='rviz2',
name='rviz2',
output='screen',
arguments=['-d', LaunchConfiguration('rvizconfig')],
)
imu_filter_config = os.path.join(
get_package_share_directory('yahboomcar_bringup'),
'param',
'imu_filter_param.yaml'
)
driver_node = Node(
package='yahboomcar_bringup',
executable='Ackman_driver_R2',
)
base_node = Node(
package='yahboomcar_base_node',
executable='base_node_R2',
# When EKF fusion is used, this TF is published by the EKF node instead.
parameters=[{'pub_odom_tf': LaunchConfiguration('pub_odom_tf')}]
)
imu_filter_node = Node(
package='imu_filter_madgwick',
executable='imu_filter_madgwick_node',
parameters=[imu_filter_config]
)
ekf_node = IncludeLaunchDescription(
PythonLaunchDescriptionSource([os.path.join(
get_package_share_directory('robot_localization'), 'launch'),
'/ekf_x1_x3_launch.py'])
)
yahboom_joy_node = Node(
package='yahboomcar_ctrl',
executable='yahboom_joy_R2',
)
return LaunchDescription([
gui_arg,
model_arg,
rviz_arg,
pub_odom_tf_arg,
joint_state_publisher_node,
joint_state_publisher_gui_node,
robot_state_publisher_node,
#rviz_node,
driver_node,
base_node,
imu_filter_node,
ekf_node,
yahboom_joy_node
])

@ -0,0 +1,111 @@
from ament_index_python.packages import get_package_share_path
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.conditions import IfCondition, UnlessCondition
from launch.substitutions import Command, LaunchConfiguration
from launch_ros.actions import Node
from launch_ros.parameter_descriptions import ParameterValue
import os
from ament_index_python.packages import get_package_share_directory
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
print("---------------------robot_type = x1---------------------")
def generate_launch_description():
urdf_tutorial_path = get_package_share_path('yahboomcar_description_x1')
default_model_path = urdf_tutorial_path / 'urdf/yahboomcar_X1.urdf'
default_rviz_config_path = urdf_tutorial_path / 'rviz/yahboomcar.rviz'
gui_arg = DeclareLaunchArgument(name='gui', default_value='false', choices=['true', 'false'],
description='Flag to enable joint_state_publisher_gui')
model_arg = DeclareLaunchArgument(name='model', default_value=str(default_model_path),
description='Absolute path to robot urdf file')
rviz_arg = DeclareLaunchArgument(name='rvizconfig', default_value=str(default_rviz_config_path),
description='Absolute path to rviz config file')
pub_odom_tf_arg = DeclareLaunchArgument('pub_odom_tf', default_value='false',
description='Whether to publish the tf from the original odom to the base_footprint')
robot_description = ParameterValue(Command(['xacro ', LaunchConfiguration('model')]),
value_type=str)
robot_state_publisher_node = Node(
package='robot_state_publisher',
executable='robot_state_publisher',
parameters=[{'robot_description': robot_description}]
)
# Depending on gui parameter, either launch joint_state_publisher or joint_state_publisher_gui
joint_state_publisher_node = Node(
package='joint_state_publisher',
executable='joint_state_publisher',
condition=UnlessCondition(LaunchConfiguration('gui'))
)
joint_state_publisher_gui_node = Node(
package='joint_state_publisher_gui',
executable='joint_state_publisher_gui',
condition=IfCondition(LaunchConfiguration('gui'))
)
rviz_node = Node(
package='rviz2',
executable='rviz2',
name='rviz2',
output='screen',
arguments=['-d', LaunchConfiguration('rvizconfig')],
)
imu_filter_config = os.path.join(  # build the full path to the IMU filter parameter file
get_package_share_directory('yahboomcar_bringup'),
'param',
'imu_filter_param.yaml'
)
driver_node = Node(
package='yahboomcar_bringup',
executable='Mcnamu_driver_x1',
)
base_node = Node(
package='yahboomcar_base_node',
executable='base_node_x1',
# When EKF fusion is used, this TF is published by the EKF node instead.
parameters=[{'pub_odom_tf': LaunchConfiguration('pub_odom_tf')}]
)
imu_filter_node = Node(
package='imu_filter_madgwick',
executable='imu_filter_madgwick_node',
parameters=[imu_filter_config]
)
ekf_node = IncludeLaunchDescription(
PythonLaunchDescriptionSource([os.path.join(
get_package_share_directory('robot_localization'), 'launch'),
'/ekf_x1_x3_launch.py'])
)
yahboom_joy_node = Node(
package='yahboomcar_ctrl',
executable='yahboom_joy_X3',
)
return LaunchDescription([
gui_arg,
model_arg,
rviz_arg,
pub_odom_tf_arg,
joint_state_publisher_node,
joint_state_publisher_gui_node,
robot_state_publisher_node,
# rviz_node
driver_node,
base_node,
imu_filter_node,
ekf_node,
yahboom_joy_node
])

@ -0,0 +1,111 @@
from ament_index_python.packages import get_package_share_path
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.conditions import IfCondition, UnlessCondition
from launch.substitutions import Command, LaunchConfiguration
from launch_ros.actions import Node
from launch_ros.parameter_descriptions import ParameterValue
import os
from ament_index_python.packages import get_package_share_directory
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
print("---------------------robot_type = x3---------------------")
def generate_launch_description():
urdf_tutorial_path = get_package_share_path('yahboomcar_description')
default_model_path = urdf_tutorial_path / 'urdf/yahboomcar_X3.urdf'
default_rviz_config_path = urdf_tutorial_path / 'rviz/yahboomcar.rviz'
gui_arg = DeclareLaunchArgument(name='gui', default_value='false', choices=['true', 'false'],
description='Flag to enable joint_state_publisher_gui')
model_arg = DeclareLaunchArgument(name='model', default_value=str(default_model_path),
description='Absolute path to robot urdf file')
rviz_arg = DeclareLaunchArgument(name='rvizconfig', default_value=str(default_rviz_config_path),
description='Absolute path to rviz config file')
pub_odom_tf_arg = DeclareLaunchArgument('pub_odom_tf', default_value='false',
description='Whether to publish the tf from the original odom to the base_footprint')
robot_description = ParameterValue(Command(['xacro ', LaunchConfiguration('model')]),
value_type=str)
robot_state_publisher_node = Node(
package='robot_state_publisher',
executable='robot_state_publisher',
parameters=[{'robot_description': robot_description}]
)
# Depending on gui parameter, either launch joint_state_publisher or joint_state_publisher_gui
joint_state_publisher_node = Node(
package='joint_state_publisher',
executable='joint_state_publisher',
condition=UnlessCondition(LaunchConfiguration('gui'))
)
joint_state_publisher_gui_node = Node(
package='joint_state_publisher_gui',
executable='joint_state_publisher_gui',
condition=IfCondition(LaunchConfiguration('gui'))
)
rviz_node = Node(
package='rviz2',
executable='rviz2',
name='rviz2',
output='screen',
arguments=['-d', LaunchConfiguration('rvizconfig')],
)
imu_filter_config = os.path.join(
get_package_share_directory('yahboomcar_bringup'),
'param',
'imu_filter_param.yaml'
)
driver_node = Node(
package='yahboomcar_bringup',
executable='Mcnamu_driver_X3',
)
base_node = Node(
package='yahboomcar_base_node',
executable='base_node_X3',
# When EKF fusion is used, this TF is published by the EKF node instead.
parameters=[{'pub_odom_tf': LaunchConfiguration('pub_odom_tf')}]
)
imu_filter_node = Node(
package='imu_filter_madgwick',
executable='imu_filter_madgwick_node',
parameters=[imu_filter_config]
)
ekf_node = IncludeLaunchDescription(
PythonLaunchDescriptionSource([os.path.join(
get_package_share_directory('robot_localization'), 'launch'),
'/ekf_x1_x3_launch.py'])
)
yahboom_joy_node = Node(
package='yahboomcar_ctrl',
executable='yahboom_joy_X3',
)
return LaunchDescription([
gui_arg,
model_arg,
rviz_arg,
pub_odom_tf_arg,
joint_state_publisher_node,
joint_state_publisher_gui_node,
robot_state_publisher_node,
# rviz_node
driver_node,
base_node,
imu_filter_node,
ekf_node,
yahboom_joy_node
])

@ -0,0 +1,19 @@
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>yahboomcar_bringup</name>
<version>0.0.0</version>
<description>TODO: Package description</description>
<maintainer email="nx-ros2@todo.todo">nx-ros2</maintainer>
<license>TODO: License declaration</license>
<test_depend>ament_copyright</test_depend>
<test_depend>ament_flake8</test_depend>
<test_depend>ament_pep257</test_depend>
<test_depend>python3-pytest</test_depend>
<depend>rclcpp</depend>
<depend>std_msgs</depend>
<export>
<build_type>ament_python</build_type>
</export>
</package>

@ -0,0 +1,7 @@
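# imu_filter_madgwick configuration (loaded by the bringup launch files above).
# use_mag: false     -> fuse gyroscope and accelerometer only, no magnetometer
# publish_tf: false  -> the filter does not broadcast a TF frame itself
# world_frame: "enu" -> East-North-Up convention (REP-103)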
imu_filter_madgwick:
ros__parameters:
fixed_frame: "base_link"
use_mag: false
publish_tf: false
world_frame: "enu"
orientation_stddev: 0.05

@ -0,0 +1,40 @@
Panels:
- Class: rviz_common/Displays
Name: Displays
- Class: rviz_common/Views
Name: Views
Visualization Manager:
Class: ""
Displays:
- Class: rviz_default_plugins/Grid
Name: Grid
Value: true
- Alpha: 0.8
Class: rviz_default_plugins/RobotModel
Description Source: Topic
Description Topic:
Value: /robot_description
Enabled: true
Name: RobotModel
Value: true
- Class: rviz_default_plugins/TF
Name: TF
Value: true
Global Options:
Fixed Frame: base_link
Frame Rate: 30
Name: root
Tools:
- Class: rviz_default_plugins/MoveCamera
Value: true
Views:
Current:
Class: rviz_default_plugins/Orbit
Distance: 1.7
Name: Current View
Pitch: 0.33
Value: Orbit (rviz)
Yaw: 5.5
Window Geometry:
Height: 800
Width: 1200

@ -0,0 +1,4 @@
[develop]
script-dir=$base/lib/yahboomcar_bringup
[install]
install-scripts=$base/lib/yahboomcar_bringup

@ -0,0 +1,43 @@
from setuptools import setup
import os
from glob import glob
package_name = 'yahboomcar_bringup'
setup(
name=package_name,
version='0.0.0',
packages=[package_name],
data_files=[
('share/ament_index/resource_index/packages',
['resource/' + package_name]),
('share/' + package_name, ['package.xml']),
(os.path.join('share',package_name,'launch'),glob(os.path.join('launch','*launch.py'))),
#(os.path.join('share','yahboomcar_description','urdf'),glob(os.path.join('urdf','*.*'))),
#(os.path.join('share','yahboomcar_description','meshes'),glob(os.path.join('meshes','*.*'))),
(os.path.join('share','yahboomcar_description','rviz'),glob(os.path.join('rviz','*.rviz*'))),
(os.path.join('share', package_name, 'param'), glob(os.path.join('param', '*.yaml'))),
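# param/*.yaml is installed to share/<package>/param, which is where the bringup
# launch files resolve imu_filter_param.yaml via get_package_share_directory().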
],
install_requires=['setuptools'],
zip_safe=True,
maintainer='nx-ros2',
maintainer_email='nx-ros2@todo.todo',
description='TODO: Package description',
license='TODO: License declaration',
tests_require=['pytest'],
entry_points={
'console_scripts': [
'Mcnamu_driver_X3 = yahboomcar_bringup.Mcnamu_driver_X3:main',
'Mcnamu_driver_x1 = yahboomcar_bringup.Mcnamu_driver_x1:main',
'calibrate_linear_X3 = yahboomcar_bringup.calibrate_linear_X3:main',
'calibrate_angular_X3 = yahboomcar_bringup.calibrate_angular_X3:main',
'patrol_4ROS = yahboomcar_bringup.patrol_4ROS:main',
'patrol_a1_X3 = yahboomcar_bringup.patrol_a1_X3:main',
'Ackman_driver_R2 = yahboomcar_bringup.Ackman_driver_R2:main',
'calibrate_linear_R2 = yahboomcar_bringup.calibrate_linear_R2:main',
'calibrate_angular_R2 = yahboomcar_bringup.calibrate_angular_R2:main',
'patrol_4ROS_R2 = yahboomcar_bringup.patrol_4ROS_R2:main',
'patrol_a1_R2 = yahboomcar_bringup.patrol_a1_R2:main',
],
},
)

@ -0,0 +1,23 @@
# Copyright 2015 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_copyright.main import main
import pytest
@pytest.mark.copyright
@pytest.mark.linter
def test_copyright():
rc = main(argv=['.', 'test'])
assert rc == 0, 'Found errors'

@ -0,0 +1,25 @@
# Copyright 2017 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_flake8.main import main_with_errors
import pytest
@pytest.mark.flake8
@pytest.mark.linter
def test_flake8():
rc, errors = main_with_errors(argv=[])
assert rc == 0, \
'Found %d code style errors / warnings:\n' % len(errors) + \
'\n'.join(errors)

@ -0,0 +1,23 @@
# Copyright 2015 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ament_pep257.main import main
import pytest
@pytest.mark.linter
@pytest.mark.pep257
def test_pep257():
rc = main(argv=['.', 'test'])
assert rc == 0, 'Found code style errors / warnings'
