Horizon
These simulations show how the prediction horizon affects the vehicles' motion. The simulation on the left illustrates a scenario in which two vehicles attempt to reach their final positions while avoiding each other and static obstacles. With a horizon of 5, the vehicles cannot see far enough into the future, so they minimize the cost greedily. As a result, they get stuck at a local minimum and stop.
However, when the time horizon is increased to 10, things begin to change. The predicted motion, shown in the right simulation, extends much further, and the vehicles can plan ahead to avoid each other. Since both vehicles optimize their predicted motion 10 steps into the future, they no longer get stuck at the local minimum.
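As a rough illustration of this effect, consider a one-dimensional toy problem: a vehicle must cross a high-cost obstacle region to reach its goal, and an exhaustive search over control sequences stands in for the MPC optimizer. The cost weights, goal position, and action set below are made-up toy values, not the ones from our simulations, and the horizons at which the vehicle escapes the local minimum differ from the 5-vs-10 case above:

```python
import itertools
import math

GOAL, BARRIER = 10.0, 3.0

def stage_cost(x):
    # Distance-to-goal term plus a Gaussian obstacle penalty (toy numbers).
    return (x - GOAL) ** 2 + 200.0 * math.exp(-((x - BARRIER) ** 2) / 2.0)

def best_first_action(x, N):
    """Enumerate every control sequence of length N and return the first step
    of the cheapest one (an exhaustive stand-in for the MPC optimizer)."""
    def total(seq):
        xi, c = x, 0.0
        for u in seq:
            xi += u
            c += stage_cost(xi)
        return c
    return min(itertools.product((0.0, 1.0), repeat=N), key=total)[0]

def run(N, steps=15):
    """Receding-horizon loop: replan at every step, apply only the first input."""
    x = 0.0
    for _ in range(steps):
        x += best_first_action(x, N)
    return x
```

With a short horizon (e.g. `run(3)`) the immediate obstacle penalty outweighs any visible progress, so the vehicle stays put; with a long horizon (e.g. `run(10)`) the optimizer sees the payoff beyond the barrier and drives through to the goal.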
Constraints Handler
The left and right figures show the states and controls versus time. With MPC, the controller can enforce the state and control constraints while optimizing a specified performance index. In our simulation results, the control inputs, positions, and velocities of both vehicles all remain within their bounds.
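A minimal sketch of how such bounds can be imposed, using a double-integrator model and SciPy's SLSQP solver as a stand-in for the actual MPC optimizer. The bounds, weights, goal, and horizon length below are assumed for illustration, not taken from the simulations:

```python
import numpy as np
from scipy.optimize import minimize

DT, N = 0.2, 20
U_MAX, V_MAX = 1.0, 2.0   # input and velocity bounds (assumed values)

def rollout(u):
    """Double-integrator rollout: state = (position, velocity)."""
    p, v, traj = 0.0, 0.0, []
    for ui in u:
        v += DT * ui
        p += DT * v
        traj.append((p, v))
    return np.array(traj)

def cost(u, goal=5.0):
    traj = rollout(u)
    return np.sum((traj[:, 0] - goal) ** 2) + 0.1 * np.sum(np.asarray(u) ** 2)

def vel_limits(u):
    # State constraint |v| <= V_MAX, expressed as two one-sided inequalities.
    v = rollout(u)[:, 1]
    return np.concatenate([V_MAX - v, v + V_MAX])

res = minimize(
    cost, np.zeros(N), method="SLSQP",
    bounds=[(-U_MAX, U_MAX)] * N,                     # control constraints
    constraints=[{"type": "ineq", "fun": vel_limits}],  # state constraints
)
```

The input bounds are enforced directly by the solver, while the velocity limit enters as an inequality constraint on the predicted trajectory, mirroring how state constraints appear in the MPC formulation.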
Cost Function
Our cost function minimizes the error between the goal state and the predicted states up to the horizon N; in addition, the control input is penalized to save the vehicles' power. The plot of the cost function versus time shows that as the prediction horizon increases, the cost drops dramatically to zero. With N less than 6, both vehicles get stuck at a local minimum and cannot move further.
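One common form for such a cost, assuming quadratic penalties with weight matrices Q and R (the exact weights used in the simulations are not stated here), is:

```latex
J = \sum_{k=0}^{N-1} \left[ (x_k - x_{\text{goal}})^\top Q \,(x_k - x_{\text{goal}}) + u_k^\top R \, u_k \right]
```

The first term drives the predicted states toward the goal over the horizon, and the second penalizes control effort.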
Dynamic Obstacles
With dynamic obstacles, the original MPC treats the obstacles as static at each iteration. Without a prediction of the obstacles' motion, the vehicles easily collide with them, leaving the optimization problem with no feasible solution. To solve this problem, we estimate each obstacle's velocity and account for its future motion when solving the MPC. The result is promising.
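A minimal sketch of such a constant-velocity estimate, where the obstacle's velocity is inferred from its last two observed positions (the function name, time step, and coordinates are illustrative):

```python
def predict_obstacle(p, p_prev, N, dt=0.1):
    """Constant-velocity extrapolation of an obstacle over the horizon.

    p, p_prev -- the obstacle's current and previous observed positions
    Returns the predicted position at each of the next N time steps.
    """
    # Finite-difference velocity estimate from the two most recent observations.
    v = [(a - b) / dt for a, b in zip(p, p_prev)]
    return [tuple(pi + vi * dt * (k + 1) for pi, vi in zip(p, v))
            for k in range(N)]
```

The N predicted positions can then replace the single static obstacle position in the collision-avoidance constraints at each step of the horizon.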
Dynamic Targets
With dynamic targets, the MILP algorithm shows that it can handle both vehicles' state and control constraints while simultaneously avoiding other vehicles and obstacles and tracking dynamic targets. The left simulation shows formation control: the red vehicle attempts to track the red point, the green vehicle tracks the red vehicle, and so on. The right simulation shows that even with obstacles, both vehicles can track their own dynamic targets without collision.
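The chain structure of the formation can be sketched as follows, with a simple proportional pursuit rule standing in for the full MPC tracker (the gain and circular reference trajectory are made up for illustration):

```python
import math

def formation_step(leader_ref, positions, gain=0.3):
    """One update of a chain formation: vehicle 0 tracks the moving reference
    point, and each later vehicle tracks the vehicle ahead of it."""
    targets = [leader_ref] + positions[:-1]
    return [(p[0] + gain * (t[0] - p[0]), p[1] + gain * (t[1] - p[1]))
            for p, t in zip(positions, targets)]

# The moving reference travels along the unit circle; the vehicles
# fall into a chain behind it.
positions = [(0.0, 0.0), (-1.0, 0.0), (-2.0, 0.0)]
for k in range(200):
    ref = (math.cos(0.05 * k), math.sin(0.05 * k))
    positions = formation_step(ref, positions)
```

After the transient dies out, every vehicle settles onto the reference circle, each lagging slightly behind the one it tracks, which is the qualitative behavior seen in the formation-control simulation.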