# AI
The main function of the AI on the sub is to decide what the sub should do based on data from topics such as sensors and vision. It uses a hierarchical state machine, which lets it cover the majority of cases it will face.
## Overview
The current AI uses the [SMACH](http://wiki.ros.org/smach) package from ROS. SMACH is a task-level architecture for rapidly creating complex robot behavior. At its core, SMACH is a ROS-independent Python library for building hierarchical state machines; a minimal sketch is shown below.

Advantages of SMACH:

- rapid development of complex state machines
- the ability to quickly change state machines without big code changes
- explicitly defined outcomes for every state, covering most or all possible situations
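
To make this concrete, here is a minimal sketch of a SMACH state machine; the state, outcome, and transition names are made up for illustration:

```python
import smach

class TouchGate(smach.State):
    def __init__(self):
        # Every state declares its possible outcomes up front.
        smach.State.__init__(self, outcomes=['success', 'failure'])

    def execute(self, userdata):
        # Task logic goes here; execute() must return a declared outcome.
        return 'success'

if __name__ == '__main__':
    sm = smach.StateMachine(outcomes=['done', 'aborted'])
    with sm:
        smach.StateMachine.add('TOUCH_GATE', TouchGate(),
                               transitions={'success': 'done',
                                            'failure': 'aborted'})
    sm.execute()
```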
## Current AI
Our current AI was rewritten using SMACH. There are several utility files:
- gate_util.py - generic states used by the gate AI.
- util.py - utility functions for vision: filtering labels, getting the N most probable detections, normalizing coordinates from vision, and wrapping yaw. **Note that vision will be changed in the future; some of these functions will no longer be useful.**
- basic_states.py - all of the states for the roulette and dice AIs.
- control_wrapper.py - a wrapper that eases communication with the control system, making it easy to send basic commands such as dive, yaw, pitch, roll, and move forward (a hypothetical usage sketch follows this list).
- start_switch.py - every high-level state machine **must** have start_switch as its first state. It is a state that waits for a ROS message on the topic /start_switch to be true at least 3 times.
- blind_movement.py - contains the move_forward state, which moves forward at speed ***x*** for ***y*** seconds.
- SubscribeState.py - a state that additionally accepts a topic to subscribe to. It has also been modified to pass through any input/output keys. **In the future this file will also contain SynchronousSubscribeState, which subscribes to two topics and moves on once it has received both.**
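
As a rough illustration of how these utilities fit together, here is a hypothetical movement state built on the control wrapper. The import path and the setDepth() call are assumptions for illustration; forwardError() is the method named in the development notes below, and the real interface lives in control_wrapper.py:

```python
import rospy
import smach
from control_wrapper import control_wrapper  # assumed import path

class BlindForward(smach.State):
    """Hypothetical state: drive forward at a fixed speed for a few seconds."""
    def __init__(self, speed, duration):
        smach.State.__init__(self, outcomes=['success'])
        self.speed = speed
        self.duration = duration

    def execute(self, userdata):
        control = control_wrapper()
        control.setDepth(1.5)  # assumed setter; depth must be re-set for every new instance
        control.forwardError(self.speed)
        rospy.sleep(self.duration)
        # Zero out movement right before returning the final outcome,
        # as required by the development notes below.
        control.forwardError(0.0)
        return 'success'
```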
There is a useful tool for viewing a state machine and its transitions called [smach_viewer](http://wiki.ros.org/smach_viewer). To run it:
```bash
rosrun smach_viewer smach_viewer.py
```
An example of what our AI looks like in smach_viewer is shown in the screenshot below:

![Gate AI state machine in smach_viewer](gate_ai.png)
## Things to know when developing AI
* When inheriting from **SubscribeState** instead of **SmachState**, you need to use **self.exit("outcome")** instead of **return "outcome"**.
* Every Python script that is going to run a state machine **must have** the lines of code below in its main function.
```python
# Wait until ROS time has been initialized and returns a nonzero value.
while rospy.get_time() == 0:
    continue
```
* Every time you create a control_wrapper instance, you need to set the depth value again.
* If your state uses the control wrapper to move, make sure to set any changed yaw/roll/pitch/forward values back to 0 right before the final outcome.
* The control wrapper's forward and strafe do not use relative mode the same way yaw and pitch do; use **strafeLeftError()** and **forwardError()** instead.
* Some current AI files use parameters from the roscore parameter server for different configurations of values. They will crash if they fail to load their parameters. Parameters are located in `/ros/robosub/param/`, though some may be located in individual utility folders. To load parameters, run
```bash
rosparam load [param_file_name].yaml
```
* Every time you restart roscore you need to reload the parameters.
* Our vision detection requires undistortion to be running.
# Cameras

* The cameras that we use for the sub are Point Grey Flea3 GigE Vision Ethernet cameras. Specs and downloads are available at [Flea3](http://www.ptgrey.com/flea3-14-mp-color-gige-vision-sony-icx267-camera).
* The cameras come with an SDK that contains configuration software and programmable interfaces for the cameras. You can access it on Point Grey's website.
* Since the cameras are Ethernet cameras, the only things you need for setup are an Ethernet connection and a 12V power supply. The cameras take a few seconds to boot up and usually give themselves a default IP that is not on the network. You can use the FlyCap software to detect the cameras and add them to the network.
## Configuration
* Before you edit any camera settings, be sure to read the ENTIRE guide on the FlyCap configuration software as well as the IEEE-1394 camera standard. These documents give great insight into the design, terminology, and typical settings of the cameras.
* Note that the Flea3 cameras are somewhat tricky to configure correctly. With factory settings, the cameras will most likely produce image-consistency errors due to packet loss. Since most Ethernet packet sizes are not suitable for streaming, it is almost necessary to increase the packet size to jumbo (9000 bytes) to get the smoothest streaming possible.
* On Windows this can be done in Network Devices by configuring the driver to use jumbo packets. On Linux this can be configured with `sudo ifconfig eth0 mtu 9000`.
* The packet delay should also be changed to account for the packet-size increase. It should be the lowest delay that does not cause consistency errors for the given setup. This can be tested by configuring the delay and running the cameras for about 5 minutes; if the cameras report no loss, those settings are optimal.
* Be aware that the jumbo packet size usually cannot be maxed out (on my system I could only get to ~7000-byte packets before the stream would freeze). To find optimal settings you have to play with the packet size and delay until you get the highest estimated bandwidth without errors.
#### Serial Numbers
* Left:
* Right: 14406634
## Camera Settings
* The cameras can use multiple encodings depending on bandwidth and the desired end format. For low-bandwidth setups, Raw8 is the best encoding; you can configure post-processing to convert the Raw8 into RGB8, and the Rigorous option returns the best results. For medium bandwidth, YUV422 returns great results, and for high bandwidth RGB8 is best, as it requires no additional conversion once it reaches the computer.
* Since the cameras use orthographic fisheye lenses, the best image is a square centered on the image circle. Since the Flea3 only supports resolutions up to 1032p, the best image is 1032x1032 centered (I guessed around 162px right).
* Since conditions can change at the competition, it seems best to allow the cameras to auto-adjust exposure, gain, and white balance. Although this makes the vision input somewhat unpredictable, you can stamp each frame with this information in the top-left corner. The FlyCap SDK can save it once it reaches the computer if you want to analyze the footage later and figure out what settings the camera had at that point in time.
## Problems / Fixes
* PGR cameras have issues on Linux with large packet types. To fix this, the kernel receive buffer needs to be increased.
#### Linux Streaming Fix
Derived from here: [Linux Streaming Fix](https://www.ptgrey.com/KB/10016)

**CAUSE:** When streaming images from a GigE Vision camera on Linux Ubuntu 8.04 systems, a high number of lost data packets may be observed. In FlyCapture SDK applications, dropped packets result in IMAGE_CONSISTENCY_ERRORS being returned.

The following sysctl command updates the receive buffer memory settings:
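
```bash
# Raise the kernel receive buffer sizes; the values match the
# persisted settings listed below.
sudo sysctl -w net.core.rmem_max=1048576 net.core.rmem_default=1048576
```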
Note: In order for these changes to persist after system reboots, the following lines must be manually added to the bottom of the /etc/sysctl.conf file:
```
net.core.rmem_max=1048576
net.core.rmem_default=1048576
```
Once the changes are persisted, they can be reloaded at any time by running the following sysctl command:
```bash
sudo sysctl -p
```
### Running With ROS
Point Grey supplies drivers for our GigE cameras. You will need to have them installed.

The cameras should begin publishing on the **/camera/[left|right|bottom]/image** topics. In the future, a downward-facing camera is also planned to be added, though many of the steps will be similar.

One more note: when using the Point Grey drivers, the cameras publish a [WFOVImage](https://github.com/ros-drivers/pointgrey_camera_driver/blob/master/wfov_camera_msgs/msg/WFOVImage.msg) message, so many standard ROS systems may not be able to use the data without it being republished. A republisher has been implemented in the Robosub repository and can be run with the following command:
```bash
rosrun robosub camera_repub
```
This will republish the [WFOVImage](https://github.com/ros-drivers/pointgrey_camera_driver/blob/master/wfov_camera_msgs/msg/WFOVImage.msg) messages as [sensor_msgs/Image](http://docs.ros.org/api/sensor_msgs/html/msg/Image.html) messages on the **/camera/[left|right|bottom]/undistorted** topics. This aligns with the undistortion nodes, so downstream nodes are agnostic as to whether or not undistortion is being performed.
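
For reference, a republisher of this kind can be quite small. The sketch below is a hypothetical illustration rather than the actual camera_repub source; it assumes the WFOVImage message embeds the raw frame in its image field, as the linked message definition shows:

```python
#!/usr/bin/env python
# Hypothetical WFOVImage -> sensor_msgs/Image republisher sketch.
import rospy
from sensor_msgs.msg import Image
from wfov_camera_msgs.msg import WFOVImage

def relay(msg, pub):
    # WFOVImage embeds a standard sensor_msgs/Image; forward just that part.
    pub.publish(msg.image)

if __name__ == '__main__':
    rospy.init_node('camera_repub_sketch')
    pub = rospy.Publisher('/camera/left/undistorted', Image, queue_size=1)
    rospy.Subscriber('/camera/left/image', WFOVImage, relay, callback_args=pub)
    rospy.spin()
```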
# Coding Conventions

Due to this being a collaborative software project among new developers, it's important to make sure that the code you write is clean and usable by others. We predominantly use two languages in this project, C++ and Python. Below are the coding conventions for each respective language. There may be exceptions to these rules, but otherwise they should be followed. A code linter will be created to ensure that code conforms to these standards.
# roslint
To check if your code passes the standard checks, compile using:
```bash
rsmake roslint
```
This will invoke the compiler with the code linter turned on. The C++ linter runs first; if it fails with an error, the Python linter will not be run. If the C++ code passes the linter, then the Python linter will run.
## All Languages
#### Indentation
Spaces shall be used for indentation, not tabs.
**Rationale:** mixing tabs and spaces can cause major bugs in Python and can create badly formatted code in all other languages, so consistency is important. Although I prefer all tabs in Python, spaces make more sense for other languages.
#### Line Length
The maximum length of a line shall be 80 characters.
**Rationale:** this results in code that is readable on most screens without wrapping lines.
#### Trailing Spaces
Trailing whitespace at the end of a line is not allowed.
**Rationale:** most text editors will automatically trim trailing whitespace when you open a file. If people aren't paying attention, this can result in them committing a file with no functional changes other than removed whitespace, resulting in a confusing source control history. In addition, it's just a cleanliness thing.
#### Extra Newlines
More than one blank line in a row is not allowed.
**Rationale:** if you need to break up your code into logical chunks, it is better to use a single blank line and a comment describing the next code block.
## C++
Running `astyle -A1 -N -n <file name>` on your C++ files should clean them up nicely.
## Python
Nodes should be defined in classes, with the constructor called in the "main" section.
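
A minimal sketch of this convention, with made-up node and topic names:

```python
#!/usr/bin/env python
# Minimal sketch of the node-in-a-class convention; names are illustrative.
import rospy
from std_msgs.msg import String

class ExampleNode(object):
    def __init__(self):
        # All setup happens in the constructor.
        self.pub = rospy.Publisher('example_topic', String, queue_size=10)
        self.timer = rospy.Timer(rospy.Duration(1.0), self.tick)

    def tick(self, event):
        self.pub.publish(String(data='hello'))

if __name__ == '__main__':
    rospy.init_node('example_node')
    ExampleNode()
    rospy.spin()
```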