fix: Changed the explanation to focus on XT32 LiDAR related to discontinue… #27

Merged: 2 commits, Dec 13, 2024
8 changes: 6 additions & 2 deletions docs/tutorials/01_hardware_setup.md
@@ -4,6 +4,10 @@

## Sample hardware configuration

> [!WARNING]
>
> HESAI AT128 is deprecated because it has been discontinued.

The following hardware configuration is used throughout this tutorial.

- ECU setup
@@ -12,10 +16,10 @@
- Sensor setup
- Sample configuration 1
- Camera: TIER IV Automotive HDR Camera C1 (x2)
- LiDAR: HESAI AT128 (x1)
- LiDAR: HESAI Pandar XT32 (x1)
- Sample configuration 2
- Camera: TIER IV Automotive HDR Camera C1 (x2)
- LiDAR: HESAI Pandar XT32 (x1)
- LiDAR: HESAI AT128 (x1)

### Connection diagram

@@ -39,16 +43,16 @@

### Sensor driver

Edge.Auto supports a variety of sensor types. The following repositories are used to make those sensors available in your ROS2 environment.

Please refer to each repository for more details; a quick sanity check using standard CLI tools is sketched after this list.

- Camera driver
- [tier4/tier4_automotive_hdr_camera](https://github.com/tier4/tier4_automotive_hdr_camera): Kernel driver for using TIER IV cameras with Video4Linux2 interface.
- [tier4/ros2_v4l2_camera](https://github.com/tier4/ros2_v4l2_camera): ROS2 package for camera driver using Video4Linux2.

- LiDAR driver
- [tier4/nebula](https://github.com/tier4/nebula): ROS2 package for unified ethernet-based LiDAR driver.

- Sensor synchronization
- [tier4/sensor_trigger](https://github.com/tier4/sensor_trigger): ROS2 package for generating sensor trigger signals.
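
Once the drivers above are installed, a quick sanity check can confirm that the cameras are visible from the ECU and that the driver nodes are publishing. The following is a minimal sketch using only standard Linux and ROS 2 CLI tools; actual device paths and topic names depend on your wiring and launch configuration.

```sh
# Check that the camera kernel driver exposes video devices (numbering varies by setup)
ls /dev/video*

# After launching the sensor drivers, confirm the driver nodes and topics are present
cd edge-auto
source install/setup.bash
ros2 node list
ros2 topic list
```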


### Sensor/ECU synchronization

8 changes: 4 additions & 4 deletions docs/tutorials/03_sensor_calibration.md
@@ -96,9 +96,9 @@ for detailed operation on the tool.
cd edge-auto
source install/setup.bash

ros2 launch edge_auto_launch calibration_extrinsic_at128_sample.launch.xml
## or
ros2 launch edge_auto_launch calibration_extrinsic_xt32_sample.launch.xml
## or
ros2 launch edge_auto_launch calibration_extrinsic_at128_sample.launch.xml
```

After executing the above launch file, two windows will appear as shown below:
@@ -115,6 +115,6 @@ After calculating extrinsic parameters, put the result in the appropriate files
```sh
edge-auto/src/individual_params/individual_params/config/
└── default
├── at128_to_camera0.json # <- replace this file with your calculated results
└── at128_to_camera1.json # <- replace this file with your calculated results
├── xt32_to_camera0.json # <- replace this file with your calculated results
└── xt32_to_camera1.json # <- replace this file with your calculated results
```
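
As a concrete sketch of this step, assuming your calibration output was saved under `~/calib_results/` (a placeholder path; use wherever your results actually live), the default files can be overwritten as follows:

```sh
# Replace the default extrinsic files with your calculated results.
# Destination paths come from the tree above; ~/calib_results/ is only a placeholder.
cd edge-auto
cp ~/calib_results/xt32_to_camera0.json src/individual_params/individual_params/config/default/xt32_to_camera0.json
cp ~/calib_results/xt32_to_camera1.json src/individual_params/individual_params/config/default/xt32_to_camera1.json
```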
6 changes: 3 additions & 3 deletions docs/tutorials/04_launch_application.md
@@ -33,16 +33,16 @@
![demo_construction](figures/demo_construction.drawio.svg "demo_construction.svg")

The launch files can take an argument named `sensor_height` that represents LiDAR height from the ground in meters (default: 1.0).
Because some functions in Autoware are designed to perform best with a previously known LiDAR height,

it is recommended to adjust this value according to your actual sensor setup to achieve better performance.

```sh
cd edge-auto
source install/setup.bash

ros2 launch edge_auto_launch perception_at128_sample.launch.xml sensor_height:=[sensor height from the ground]
## or
ros2 launch edge_auto_launch perception_xt32_sample.launch.xml sensor_height:=[sensor height from the ground]
## or
ros2 launch edge_auto_launch perception_at128_sample.launch.xml sensor_height:=[sensor height from the ground]
```
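
As a concrete example, if the XT32 in the sample configuration is mounted 1.2 m above the ground (an illustrative value; measure your own mount height), the launch command would look like this:

```sh
cd edge-auto
source install/setup.bash

# sensor_height is in meters; 1.2 is only an example value
ros2 launch edge_auto_launch perception_xt32_sample.launch.xml sensor_height:=1.2
```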

This sample mainly leverages [pointcloud_preprocessor](https://github.com/autowarefoundation/autoware.universe/tree/main/sensing/pointcloud_preprocessor), [centerpoint](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/lidar_centerpoint), and [image_projection_based_fusion](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/image_projection_based_fusion) packages
@@ -51,14 +51,14 @@

In addition to the perception stack, this sample also launches viewers so that users can check perception results visually.

As an example, the following picture shows the perception results in the case of a system configuration that consists of one AT128 and one C1 camera.
As an example, the following picture shows the perception results.

![Example: perception result](../sample.png "Example: perception result")
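
If the viewers do not show anything, a quick way to check whether the perception stack is producing output is to inspect its topics from the command line. This is only a rough sketch; the exact topic names depend on your launch configuration, so pick the detected-objects or image topic from the output of `ros2 topic list`.

```sh
source install/setup.bash

# List the available topics, then confirm that one of them is actually being published
ros2 topic list
ros2 topic hz [a detected-objects or image topic from the list above]
```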

Note: The default models used in this tutorial are tuned for outdoor environments
(especially for autonomous driving contexts).
If you try this tutorial in an indoor environment, for example one where the room ceiling is within the sensor's FoV,
additional preprocessing, such as cropping the range to be processed, may be required to get better results.


## (Experimental) Launch experimental repositories

2 changes: 1 addition & 1 deletion docs/tutorials/figures/connection.drawio.svg
2 changes: 1 addition & 1 deletion docs/tutorials/figures/demo_construction.drawio.svg