Error in solving QuickMDP using DiscreteValueIteration #448
Unanswered
Manavvora asked this question in Debugging Help
Replies: 1 comment · 8 replies
Hi, this appears to be because your transition function allows transitions outside of your defined state space:

```julia
s = (100, 91)
s ∈ states(m)                 # true
a = 1
tdist = transition(m, s, a)   # deterministic
sp = tdist.val                # (100, 101)
sp ∈ states(m)                # false
```
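One hedged way to fix this is to clamp every proposed successor back into the declared grid before returning it from the transition function, so the solver never sees a state outside `states(m)`. A minimal sketch, assuming the state is `(i, j)` with `i` in `0:101` and `j` in `0:100` as in the definition below (`clamp_state` is a hypothetical helper, not part of the POMDPs.jl API):

```julia
# Hedged sketch: clamp a proposed successor state back into the declared
# grid (i in 0:101, j in 0:100). In the transition function you would wrap
# the successor with this before building the returned distribution,
# e.g. Deterministic(clamp_state(sp)).
clamp_state(sp::Tuple{Int,Int}) = (clamp(sp[1], 0, 101), clamp(sp[2], 0, 100))

clamp_state((100, 101))   # (100, 100) — back inside the state space
```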
Hi,
I am trying to solve an MDP declared using QuickMDP where I've explicitly declared n_states = 101*102 = 10302. However, when I try to solve the MDP using DiscreteValueIteration, I get the following error message:
```
Warning: Problem creating an ordered vector of states in ordered_states(...).
There is likely a mistake in stateindex(...) or n_states(...).

n_states(...) was 20604.
states corresponding to the following indices were missing from states(...): (A list of indices here)
```
I have not explicitly defined the stateindex in the QuickMDP.
This is how I have defined the MDP:
```julia
a = Tuple{Int, Int}[]
for i in 0:101
    for j in 0:100
        push!(a, (i, j))
    end
end

m = QuickMDP(
    states = a,
    actions = [0, 1],
    statetype = Tuple{Int, Int},
    # obstype = Tuple{Int64, Int64},
    discount = 0.95,
)
```
Edit 1: The previous error has been sorted out, but I am now facing a new one:

```
KeyError: key (100, 101) not found
```
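This KeyError is consistent with the state enumeration in the definition above: `j` only runs over `0:100`, so the state `(100, 101)` is never pushed into the state list, and any transition that produces it lands outside the state space. A quick standalone check, rebuilding the list exactly as the loop above does:

```julia
# Rebuild the state list the same way the QuickMDP definition does.
a = Tuple{Int, Int}[]
for i in 0:101, j in 0:100
    push!(a, (i, j))
end

length(a)          # 10302 (102 values of i × 101 values of j)
(100, 91) in a     # true
(100, 101) in a    # false — j never reaches 101, hence the KeyError
```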