Learning Shepherding Behavior
Abstract: Artificial shepherding strategies, i.e., using robots to move individuals to given locations, have many applications: mobile robots can guide people away from dangerous places, and swimming robots may help to clean up oil spills. This thesis models the robots and sheep as a multiagent system. We analyze the complexity of the shepherding task and present a greedy algorithm that computes, in linear time, a solution that is provably close to optimal. Additionally, we analyze to what extent such strategies can be learned, since learning usually provides powerful solutions. This thesis focuses on reinforcement learning (RL) as the learning method. To enable RL agents to use their knowledge efficiently in continuous or large state spaces (as in the shepherding task), methods are required that transfer knowledge to unseen but similar situations. The approaches developed in this thesis, GNG-Q and I-GNG-Q, combine RL with adaptive neural algorithms and enable the agent to learn its behavior in parallel with its state representation. Both are based on the growing neural gas, an unsupervised learning approach that learns a vector quantization by placing and adjusting units in the input space. GNG-Q groups states that are spatially close and share the same behavior, while I-GNG-Q combines the learned behavior from a larger area of the approximation, which results in smoother value functions. Thus, GNG-Q performs a state-space abstraction, and I-GNG-Q approximates the value function. Both methods monitor the agent's policy during learning to find regions of the approximation that must be refined. Among their core advantages, our approaches need no model of the environment, and the resolution of the approximation is determined automatically. The experimental evaluation shows that the behaviors learned with our approaches are highly efficient and effective.
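To make the GNG-Q idea above concrete, the following is a minimal sketch, not the thesis implementation: a growing-neural-gas-style set of units quantizes a continuous state space, each unit carries a Q-vector over the discrete actions, and units can be split to refine the abstraction. All class, method, and parameter names (GNGQSketch, lr_winner, refine, etc.) are assumptions made for illustration; the thesis's actual update rules and refinement criterion may differ.

import numpy as np

class GNGQSketch:
    """Hypothetical sketch of GNG-Q: vector quantization + tabular Q-learning."""

    def __init__(self, state_dim, n_actions, lr_winner=0.05, lr_neighbor=0.005,
                 alpha=0.1, gamma=0.95):
        # Start with two random units, as in the standard growing neural gas.
        self.units = np.random.rand(2, state_dim)
        self.q = np.zeros((2, n_actions))          # one Q-vector per unit
        self.lr_winner, self.lr_neighbor = lr_winner, lr_neighbor
        self.alpha, self.gamma = alpha, gamma

    def winner(self, state):
        # Nearest unit = the discrete "abstract state" for this observation.
        return int(np.argmin(np.linalg.norm(self.units - state, axis=1)))

    def adapt(self, state):
        # Move the winning unit (and, more weakly, the runner-up) toward the
        # input: the unit-placement/adjustment step of the growing neural gas.
        d = np.linalg.norm(self.units - state, axis=1)
        w, s = np.argsort(d)[:2]
        self.units[w] += self.lr_winner * (state - self.units[w])
        self.units[s] += self.lr_neighbor * (state - self.units[s])
        return int(w)

    def update(self, state, action, reward, next_state):
        # Model-free Q-learning update on the abstract (quantized) states.
        w, w2 = self.adapt(state), self.winner(next_state)
        target = reward + self.gamma * self.q[w2].max()
        self.q[w, action] += self.alpha * (target - self.q[w, action])

    def refine(self, unit):
        # Split a unit whose region mixes conflicting behavior. The thesis
        # detects such regions by monitoring the policy during learning;
        # that criterion is omitted here for brevity.
        noise = np.random.normal(scale=0.01, size=self.units[unit].shape)
        self.units = np.vstack([self.units, self.units[unit] + noise])
        self.q = np.vstack([self.q, self.q[unit].copy()])

Under this reading, new units are added only where the monitored policy demands finer distinctions, which matches the abstract's claim that no environment model is needed and that the resolution of the approximation is determined automatically.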