
Commit 23f9fdc

Merge pull request #19 from sichkar-valentyn/updating-readme
Update README.md
2 parents e87f185 + 09b26be commit 23f9fdc

File tree: 1 file changed (+27, -3 lines)

README.md

Lines changed: 27 additions & 3 deletions
# Reinforcement Learning in Python
Implementing Reinforcement Learning (RL) Algorithms for global path planning in tasks of mobile robot navigation

### Reference to:

[1] Valentyn N Sichkar. Reinforcement Learning Algorithms for global path planning // GitHub platform [Electronic resource]. URL: https://github.com/sichkar-valentyn/Reinforcement_Learning_in_Python (date of access: XX.XX.XXXX)

## Description
RL Algorithms implemented in Python for the task of global path planning for a mobile robot. Such a system is said to have feedback: the agent acts on the environment, and the environment acts on the agent. At each step the agent:

* Executes an action.
* Receives an observation (the new state).
* Receives a reward.

The environment:

* Receives the action.
* Emits an observation (the new state).
* Emits a reward.
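
This feedback loop can be sketched in a few lines of Python. The `env` interface (`reset()`/`step()`) and the `RandomAgent` placeholder below are assumptions for illustration, not the classes used in this repository:

```python
# A minimal sketch of the agent-environment loop described above. The
# Environment interface (reset()/step()) and RandomAgent are hypothetical
# placeholders, not the classes used in this repository.
import random

class RandomAgent:
    """Baseline agent that executes a random action in every state."""
    def __init__(self, actions):
        self.actions = actions

    def choose_action(self, state):
        return random.choice(self.actions)

def run_episode(env, agent):
    state = env.reset()          # initial observation from the environment
    done = False
    total_reward = 0.0
    while not done:
        action = agent.choose_action(state)      # agent executes an action
        state, reward, done = env.step(action)   # environment emits new state and reward
        total_reward += reward                   # agent receives the reward
    return total_reward
```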

The goal is to learn how to take actions in order to maximize the reward. The objective function is the following:

Q[s, a] = Q[s, a] + λ * (r + γ * max(Q[s_, a_]) - Q[s, a]),

where
<br/>s – current position of the agent,
<br/>a – current action,
<br/>λ – learning rate,
<br/>r – reward received in the current position,
<br/>γ – gamma (reward decay, discount factor),
<br/>s_ – next position reached according to the next chosen action,
<br/>a_ – next chosen action.
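
In code, this update is a single line. The sketch below assumes a dict-based Q-table with hashable states and actions; the learning-rate and discount values are illustrative only:

```python
# A sketch of the update rule above with a dict-based Q-table; states and
# actions are assumed hashable. Parameter values are illustrative only.
from collections import defaultdict

Q = defaultdict(float)   # Q[s, a], zero for unseen state-action pairs
lam = 0.1                # λ, learning rate
gamma = 0.9              # γ, reward decay (discount factor)

def q_update(s, a, r, s_, actions):
    """Q[s, a] += λ * (r + γ * max(Q[s_, a_]) - Q[s, a])."""
    max_next = max(Q[(s_, a_)] for a_ in actions)   # best next weight
    Q[(s, a)] += lam * (r + gamma * max_next - Q[(s, a)])
```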

The major component of the RL method is the table of weights – the Q-table of the system states. The matrix Q is the set of all possible states of the system together with the weights of the system's response to different actions. While trying to go through the given environment, the mobile robot learns how to avoid obstacles and how to find the path to the destination point. As a result, the Q-table is built; looking at its values, it is possible to see the decision for the next action made by the agent (mobile robot).
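
For example, reading that decision from a learned Q-table might look like the following sketch (the Q-values are made-up numbers, not results produced by this repository):

```python
# A sketch of reading the agent's decision from a learned Q-table. The
# Q-values below are made-up illustrative numbers, not repository results.
ACTIONS = ['up', 'down', 'left', 'right']
Q = {((2, 3), 'up'): 0.12, ((2, 3), 'down'): -0.05,
     ((2, 3), 'left'): 0.02, ((2, 3), 'right'): 0.47}

def best_action(Q, s, actions):
    """Return the action with the largest weight in state s."""
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

print(best_action(Q, (2, 3), ACTIONS))   # -> 'right'
```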

<br/>Experimental results with different Environments are shown and described below.
<br/>The code is supported with a lot of comments. It will guide you step by step through the entire idea of the implementation.
<br/>
<br/>Each example consists of three files:
