doc/integrating_gps.rst (26 additions, 20 deletions)
@@ -11,17 +11,25 @@ Notes on Fusing GPS Data
Before beginning this tutorial, users should be sure to familiarize themselves with `REP-105 <http://www.ros.org/reps/rep-0105.html>`_. It is important for users to realize that a position estimate that includes GPS data will likely be unfit for use by navigation modules, because GPS data is subject to discrete discontinuities ("jumps"). If you want to fuse data from a GPS into your position estimate, one potential solution is to do the following:
1. Run one instance of a ``robot_localization`` state estimation node that fuses only continuous data, such as odometry and IMU data. Set the ``world_frame`` parameter for this instance to the same value as the ``odom_frame`` parameter. Execute local path plans and motions in this frame.
2. Run another instance of a ``robot_localization`` state estimation node that fuses all sources of data, including the GPS. Set the ``world_frame`` parameter for this instance to the same value as the ``map_frame`` parameter.
This is just a suggestion, however, and users are free to fuse the GPS data into a single instance of a ``robot_localization`` state estimation node.
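To make that two-instance setup concrete, here is a minimal sketch of a launch file. It is only an illustration: the node names and sensor topics are placeholders, ``odometry/gps`` is assumed to be the output of ``navsat_transform_node`` (described below), and the required ``odom0_config``/``imu0_config`` matrices are omitted for brevity::

    <launch>
      <!-- Instance 1: fuses only continuous data (wheel odometry, IMU).
           world_frame equals odom_frame, so this output is suitable for local planning. -->
      <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
        <param name="map_frame"       value="map"/>
        <param name="odom_frame"      value="odom"/>
        <param name="base_link_frame" value="base_link"/>
        <param name="world_frame"     value="odom"/>
        <param name="odom0" value="example/odometry"/>
        <param name="imu0"  value="example/imu"/>
      </node>

      <!-- Instance 2: fuses all sources, including the GPS-derived odometry.
           world_frame equals map_frame, so GPS jumps only affect this estimate. -->
      <node pkg="robot_localization" type="ekf_localization_node" name="ekf_map">
        <param name="map_frame"       value="map"/>
        <param name="odom_frame"      value="odom"/>
        <param name="base_link_frame" value="base_link"/>
        <param name="world_frame"     value="map"/>
        <param name="odom0" value="example/odometry"/>
        <param name="odom1" value="odometry/gps"/>
        <param name="imu0"  value="example/imu"/>
        <remap from="odometry/filtered" to="odometry/filtered_map"/>
      </node>
    </launch>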
Using navsat_transform_node
***************************
The diagram below illustrates an example setup:
.. image:: images/navsat_transform_workflow.png
   :width: 800px
   :align: center
   :alt: navsat_transform workflow
Required Inputs
===============
``navsat_transform_node`` requires three sources of information: the robot's current pose estimate in its world frame, an earth-referenced heading, and a geographic coordinate expressed as a latitude/longitude pair (with optional altitude).
These data can be obtained in three different ways:
@@ -31,7 +39,7 @@ These data can be obtained in three different ways:
* A `sensor_msgs/Imu <http://docs.ros.org/api/sensor_msgs/html/msg/Imu.html>`_ message with an absolute (earth-referenced) heading.
* A `nav_msgs/Odometry <http://docs.ros.org/api/nav_msgs/html/msg/Odometry.html>`_ message that contains the robot's current position estimate in the frame specified by its start location (typically the output of a ``robot_localization`` state estimation node).
2. The datum (global frame origin) can be specified via the ``datum`` parameter.
.. note:: In order to use this mode, the ``wait_for_datum`` parameter must be set to *true*.
@@ -41,15 +49,15 @@ These data can be obtained in three different ways:
The parameter order is ``latitude`` in decimal degrees, ``longitude`` in decimal degrees, ``heading`` in radians, the ``frame_id`` of your robot's world frame (i.e., the value of the ``world_frame`` parameter in a ``robot_localization`` state estimation node), and the ``frame_id`` of your robot's body frame (i.e., the value of the ``base_link_frame`` parameter in a ``robot_localization`` state estimation node). When this mode is used, the node assumes that your robot's world frame origin is at the specified latitude and longitude, with a heading of :math:`0` (east). See the sketch following this list for an example.
3. The datum can be set manually via the ``set_datum`` service, using the `robot_localization/SetDatum <http://docs.ros.org/api/robot_localization/html/srv/SetDatum.html>`_ service message.
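For example, a hedged sketch of the ``datum`` parameter mode (the coordinates are arbitrary placeholders, and the frame IDs assume the usual ``map`` and ``base_link`` values from a ``robot_localization`` configuration)::

    <node pkg="robot_localization" type="navsat_transform_node" name="navsat_transform">
      <param name="wait_for_datum" value="true"/>
      <!-- latitude (deg), longitude (deg), heading (rad), world frame_id, base_link frame_id -->
      <rosparam param="datum">[55.944904, -3.186693, 0.0, map, base_link]</rosparam>
    </node>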
GPS Data
^^^^^^^^
Please note that all development of ``navsat_transform_node`` was done using a Garmin 18x GPS unit, so there may be intricacies of the data generated by other units that need to be handled.
The excellent `nmea_navsat_driver <http://wiki.ros.org/nmea_navsat_driver>`_ package provides the required `sensor_msgs/NavSatFix <http://docs.ros.org/api/sensor_msgs/html/msg/NavSatFix.html>`_ message. Here is the ``nmea_navsat_driver`` launch file we'll use for this tutorial:
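The launch file itself is not reproduced in this diff. A rough sketch, assuming the package's ``nmea_serial_driver`` node and a serial GPS unit (the port and baud rate are placeholders for your own hardware), might look like::

    <launch>
      <node pkg="nmea_navsat_driver" type="nmea_serial_driver" name="navsat_driver" respawn="true">
        <param name="port" value="/dev/ttyUSB0"/>
        <param name="baud" value="4800"/>
      </node>
    </launch>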
@@ -110,9 +118,9 @@ Integration with ``robot_localization`` is straightforward at this point. Simply
@@ -129,29 +137,27 @@ You should have no need to modify the ``_differential`` setting within the state
Details
=======
We'll start with a picture. Consider a robot that starts at some latitude and longitude and with some heading. We assume in this tutorial that the heading comes from an IMU that reads 0 when facing east, and increases according to the ROS spec (i.e., counter-clockwise). The remainder of this tutorial will refer to Figure 2:
.. image:: images/figure2.png
   :width: 800px
   :align: center
   :alt: Figure 2
`REP-105 <http://www.ros.org/reps/rep-0105.html>`_ suggests four coordinate frames: *base_link*, *odom*, *map*, and *earth*. *base_link* is the coordinate frame that is rigidly attached to the vehicle. The *odom* and *map* frames are world-fixed frames and generally have their origins at the vehicle's start position and orientation. The *earth* frame is used as a common reference frame for multiple map frames, and is not yet supported by ``navsat_transform_node``. Note that in Figure 2, the robot has just started (``t = 0``), and so its *base_link*, *odom*, and *map* frames are aligned. We can also define a coordinate frame for the UTM grid, which we will call *utm*. Therefore, what we want to do is create a *utm*->*map* transform.
Referring to Figure 2, these ideas are (hopefully) made clear. The UTM origin is the :math:`(0_{UTM}, 0_{UTM})` point of the UTM zone that is associated with the robot's GPS location. The robot begins somewhere within the UTM zone at location :math:`(x_{UTM}, y_{UTM})`. The robot's initial orientation is some angle :math:`\theta` above the UTM grid's :math:`X`-axis. Our transform will therefore require that we know :math:`x_{UTM}, y_{UTM}` and :math:`\theta`.
We now need to convert our latitude and longitude to UTM coordinates. The UTM grid assumes that the :math:`X`-axis faces east, the :math:`Y`-axis faces (true) north, and the :math:`Z`-axis points up out of the ground. This complies with the right-handed coordinate frame as dictated by `REP-105 <http://www.ros.org/reps/rep-0105.html>`_. The REP also states that a yaw angle of :math:`0` means that we are facing straight down the :math:`X`-axis, and that the yaw increases counter-clockwise. ``navsat_transform_node`` assumes your heading data conforms to this standard. However, there are two factors that need to be considered:
1. The IMU driver may not allow the user to apply the magnetic declination correction factor.
2. The IMU driver may incorrectly report :math:`0` when facing north, and not when facing east (even though its headings increase and decrease correctly).

Fortunately, ``navsat_transform_node`` exposes two parameters to address these possible shortcomings in IMU data: ``magnetic_declination_radians`` and ``yaw_offset``. Referring to Figure 2, for an IMU that is currently measuring a yaw value of ``imu_yaw``,
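The continuation of that sentence is not shown in this hunk. Assuming both corrections are simple additive offsets, which is how the two parameters are described here, the corrected, earth-referenced heading (the angle :math:`\theta` of Figure 2) would be roughly:

.. math::

    \theta \approx \text{imu\_yaw} + \text{magnetic\_declination\_radians} + \text{yaw\_offset}

For instance, ``yaw_offset`` would be approximately :math:`\pi/2` (about :math:`1.5707963`) for an IMU that reads :math:`0` when facing north rather than east.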
We now have a translation :math:`(x_{UTM}, y_{UTM})` and rotation :math:`\theta`, which we can use to create the required *utm* -> *map* transform. We use the transform to convert all future GPS positions into the robot's local coordinate frame. ``navsat_transform_node`` will also broadcast this transform if the ``broadcast_utm_transform`` parameter is set to *true*.
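Putting these pieces together, a sketch of a ``navsat_transform_node`` configuration might look like the following. The declination and offset values are placeholders for your own setup, and the remappings assume the node's ``imu/data``, ``gps/fix``, and ``odometry/filtered`` input topics::

    <node pkg="robot_localization" type="navsat_transform_node" name="navsat_transform" output="screen">
      <!-- Your local declination, in radians (see the parameter documentation). -->
      <param name="magnetic_declination_radians" value="0.0"/>
      <!-- pi/2 if your IMU reads 0 when facing north instead of east. -->
      <param name="yaw_offset" value="1.5707963"/>
      <param name="broadcast_utm_transform" value="true"/>
      <remap from="imu/data"          to="example/imu"/>
      <remap from="gps/fix"           to="example/gps/fix"/>
      <remap from="odometry/filtered" to="example/odometry/filtered"/>
    </node>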
If you have any questions about this tutorial, please feel free to ask questions on `answers.ros.org <http://answers.ros.org>`_.
doc/navsat_transform_node.rst (1 addition, 1 deletion)
@@ -18,7 +18,7 @@ The time, in seconds, to wait before calculating the transform from GPS coordina
~magnetic_declination_radians
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Enter the magnetic declination for your location. If you don't know it, see `http://www.ngdc.noaa.gov/geomag-web <http://www.ngdc.noaa.gov/geomag-web>`_ (make sure to convert the value to radians). This parameter is needed if your IMU provides its orientation with respect to magnetic north.
doc/state_estimation_nodes.rst (8 additions, 4 deletions)
@@ -166,6 +166,14 @@ Starts the filter with the specified state. The state is given as a 15-D vector
0.0, 0.0, 0.0,
0.0, 0.0, 0.0]</rosparam>
~permit_corrected_publication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When the state estimation nodes publish the state at time ``t``, but then receive a measurement with a timestamp < ``t``, they re-publish the corrected state, with the same timestamp as the previous publication. Setting this parameter to *false* disables that behavior. Defaults to *false*.
~print_diagnostics
^^^^^^^^^^^^^^^^^^
If true, the state estimation node will publish diagnostic messages to the ``/diagnostics`` topic. This is useful for debugging your configuration and sensor data.
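As a small sketch, both parameters can be set inside an existing state estimation node's launch block (the node name here is a placeholder)::

    <node pkg="robot_localization" type="ekf_localization_node" name="ekf_se">
      <param name="permit_corrected_publication" value="false"/>
      <param name="print_diagnostics" value="true"/>
    </node>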
~publish_tf
^^^^^^^^^^^
If *true*, the state estimation node will publish the transform from the frame specified by the ``world_frame`` parameter to the frame specified by the ``base_link_frame`` parameter. Defaults to *true*.
@@ -174,10 +182,6 @@ If *true*, the state estimation node will publish the transform from the frame s
~publish_acceleration
^^^^^^^^^^^^^^^^^^^^^
If *true*, the state estimation node will publish the linear acceleration state. Defaults to *false*.