
Custom system dynamics for 6-DoF robot MPC #220

@francescoo99


Hi everyone,
I'm trying to implement a Cartesian model predictive controller for a 6-DoF robot manipulator. The idea is to give a Cartesian pose as the target and exploit the robot's differential kinematics (x_dot = J * q_dot), where:

  • the state (x) is the end-effector Cartesian pose
  • the control input (u) is the vector of joint velocities (q_dot)
  • the state derivative (x_dot) is the end-effector Cartesian velocity

This way I can describe my nonlinear system dynamics as x_dot = f(x, u, t).
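Written out explicitly (x is the 6D end-effector pose, u = q_dot are the joint velocities, and J(q) is the manipulator Jacobian; the second line is just the forward-Euler discretization implied by the solver settings further down):

```latex
\dot{x} = f(x, u, t) = J(q)\,\dot{q} = J(q)\,u
x_{k+1} = x_k + \Delta t \, J(q_k)\, u_k, \qquad \Delta t = 1/\text{rate}
```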

As explained in the documentation, to implement my own system dynamics I created a class (let's call it mySystemDynamics) that inherits from ControlledSystem, and I implemented the computeControlledDynamics method as follows:

```cpp
virtual void computeControlledDynamics(const StateVector<state_dim_, double>& state,
    const time_t& t,
    const ControlVector<control_dim_, double>& control,
    StateVector<state_dim_, double>& derivative) override
{
    // x_dot = J * q_dot: map the commanded joint velocities through the Jacobian
    derivative = J_ * control;
}
```
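For completeness, the whole class is structured roughly like this (a simplified sketch, not my exact code; J_ is a member that I update from the current joint configuration before each solve):

```cpp
#include <ct/core/core.h>

// Sketch: 6D end-effector pose as state, 6 joint velocities as control input.
class mySystemDynamics : public ct::core::ControlledSystem<6, 6, double>
{
public:
    static const size_t state_dim_ = 6;
    static const size_t control_dim_ = 6;

    mySystemDynamics() = default;

    mySystemDynamics(const mySystemDynamics& other)
        : ct::core::ControlledSystem<6, 6, double>(other), J_(other.J_) {}

    mySystemDynamics* clone() const override { return new mySystemDynamics(*this); }

    // x_dot = J * q_dot
    void computeControlledDynamics(const ct::core::StateVector<state_dim_, double>& state,
        const double& t,
        const ct::core::ControlVector<control_dim_, double>& control,
        ct::core::StateVector<state_dim_, double>& derivative) override
    {
        derivative = J_ * control;
    }

    // called from outside whenever the robot configuration (and hence the Jacobian) changes
    void setJacobian(const Eigen::Matrix<double, 6, 6>& J) { J_ = J; }

private:
    Eigen::Matrix<double, 6, 6> J_ = Eigen::Matrix<double, 6, 6>::Identity();
};
```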

The problem is that when I send a Cartesian pose as the target, the system initially converges towards the desired pose, but after a short time it starts behaving erratically, and I couldn't find any pattern that would help me debug the issue. Moreover, I found out that if I check the last control action with

mySystemDynamics.getLastControlAction()

I obtain a zero vector, even though I set the controller at every control loop iteration with:

mySystemDynamics.setController(new_policy_)

In the main control loop I followed every step of this example. The only difference is that I'm using a CostFunctionQuadraticSimple as the cost function rather than the one used in the example; everything else is the same.
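Schematically, each control cycle then does the following (again just a sketch with placeholder names; the structure is the one from the example, and it assumes ct/optcon/optcon.h is included):

```cpp
// One MPC cycle (sketch): mpc_ is a ct::optcon::MPC<ct::optcon::NLOptConSolver<6, 6>>,
// new_policy_ is a ct::core::StateFeedbackController<6, 6>, x_ is the measured end-effector pose.
ct::core::Time t = 0.0;  // elapsed time since the MPC was started (taken from the robot clock in my code)

mpc_.prepareIteration(t);

ct::core::Time ts_newPolicy;
bool success = mpc_.finishIteration(x_, t, new_policy_, ts_newPolicy);

// hand the updated policy to my system, as mentioned above
system_->setController(std::make_shared<ct::core::StateFeedbackController<6, 6>>(new_policy_));

// evaluate the policy to get the joint-velocity command sent to the robot
// (exact handling of the policy time offset omitted in this sketch)
ct::core::ControlVector<6> u_cmd;
new_policy_.computeControl(x_, t - ts_newPolicy, u_cmd);
```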

These are my optimal control and MPC settings for both initial guess computation and actual MPC:

```cpp
// First optimal control problem options (initial guess)
ilqr_options_.dt = 1.0 / (double)rate_;  // control discretization in [sec], here 0.002
ilqr_options_.integrator = ct::core::IntegrationType::EULER;
ilqr_options_.discretization = NLOptConSettings::APPROXIMATION::FORWARD_EULER;
ilqr_options_.max_iterations = 10;
ilqr_options_.nlocp_algorithm = NLOptConSettings::NLOCP_ALGORITHM::ILQR;
ilqr_options_.lqocp_solver = NLOptConSettings::LQOCP_SOLVER::GNRICCATI_SOLVER;  // LQ problems solved with the Gauss-Newton Riccati solver
ilqr_options_.printSummary = true;
ilqr_options_.min_cost_improvement = 1e-11;

// MPC solver options
ilqr_options_mpc_.dt = 1.0 / (double)rate_;  // control discretization in [sec], here 0.002
ilqr_options_mpc_.integrator = ct::core::IntegrationType::EULER;
ilqr_options_mpc_.discretization = NLOptConSettings::APPROXIMATION::FORWARD_EULER;
ilqr_options_mpc_.max_iterations = 1;
ilqr_options_mpc_.nlocp_algorithm = NLOptConSettings::NLOCP_ALGORITHM::ILQR;
ilqr_options_mpc_.lqocp_solver = NLOptConSettings::LQOCP_SOLVER::GNRICCATI_SOLVER;  // LQ problems solved with the Gauss-Newton Riccati solver
ilqr_options_mpc_.printSummary = true;

// MPC options
mpc_options_.stateForwardIntegration_ = true;
mpc_options_.coldStart_ = false;
mpc_options_.stateForwardIntegration_dt_ = 1.0 / (double)rate_;
mpc_options_.postTruncation_ = true;
mpc_options_.mpc_mode = ct::optcon::MPC_MODE::CONSTANT_RECEDING_HORIZON;
mpc_options_.stateForwardIntegratorType_ = ct::core::IntegrationType::EULER;
mpc_options_.measureDelay_ = true;
mpc_options_.print();
```
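For reference, this is roughly how everything is wired together before the loop starts (again a sketch with placeholder names: Q_, R_, Q_final_, x_ref_, u_ref_ are my weights and references, x_current_ is the measured end-effector pose; requires ct/optcon/optcon.h):

```cpp
const size_t state_dim = 6;
const size_t control_dim = 6;

// system + numerical linearization
std::shared_ptr<mySystemDynamics> system_(new mySystemDynamics());
std::shared_ptr<ct::core::SystemLinearizer<state_dim, control_dim>> linearizer(
    new ct::core::SystemLinearizer<state_dim, control_dim>(system_));

// quadratic cost: intermediate Q/R, nominal state/input, final state and final weight
std::shared_ptr<ct::optcon::CostFunctionQuadratic<state_dim, control_dim>> cost(
    new ct::optcon::CostFunctionQuadraticSimple<state_dim, control_dim>(
        Q_, R_, x_ref_, u_ref_, x_ref_, Q_final_));

// optimal control problem over the receding horizon
ct::core::Time time_horizon = 1.0;  // horizon length in [sec] (my value)
ct::optcon::ContinuousOptConProblem<state_dim, control_dim> problem(
    time_horizon, x_current_, system_, cost, linearizer);

// initial guess computed with the first set of options
ct::optcon::NLOptConSolver<state_dim, control_dim> solver(problem, ilqr_options_);
solver.solve();

// MPC instance with the single-iteration options
ct::optcon::MPC<ct::optcon::NLOptConSolver<state_dim, control_dim>> mpc_(
    problem, ilqr_options_mpc_, mpc_options_);
mpc_.setInitialGuess(solver.getSolution());
```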
