New Contributions to Vision-Based Human Computer Interaction in Local and Global
Annotation: Vision-based human-computer interaction uses computer-vision technology to let a user interact with a computer-based application. This idea has recently attracted particular research interest. Among the many possible forms of interaction, we focus on hand-based interaction, expressed by single hand postures, sequences of hand postures, and pointing. Two system architectures are presented that address different interaction scenarios and establish the framework for several problems for which solutions are worked out. The system ZYKLOP treats hand gestures performed in a local environment, for example on a limited area of a table top. The goal in this classical scenario is more reliable system behaviour. Contributions concern color-based segmentation, forearm-hand separation as a precondition for shape-based hand gesture classification, and the classification of static and dynamic gestures. The ARGUS concept takes a first step towards the systematic analysis of hand-gesture-based interaction combined with pointing in a spatial environment with sensitive regions. Special topics addressed within the architectural framework of ARGUS include the recognition of details from a distance, compensation for varying illumination, changing orientation of the hand with respect to the cameras, estimation of pointing directions, and object recognition.
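To illustrate the kind of color-based segmentation mentioned among the ZYKLOP contributions, the sketch below thresholds pixels in normalized-rg chromaticity space, a common approach for skin detection under varying brightness. The specific threshold bounds are illustrative assumptions, not parameters taken from the work itself.

```python
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Boolean mask of likely skin pixels via chromaticity thresholds.

    rgb: uint8 array of shape (H, W, 3).
    The bounds below are hypothetical, chosen only for illustration.
    """
    f = rgb.astype(np.float64)
    s = f.sum(axis=2) + 1e-9          # per-pixel intensity sum; avoids div-by-zero
    r = f[..., 0] / s                 # normalized red chromaticity
    g = f[..., 1] / s                 # normalized green chromaticity
    # Rectangular skin region in normalized-rg space (assumed bounds).
    return (r > 0.36) & (r < 0.50) & (g > 0.26) & (g < 0.37)

# Tiny synthetic image: one skin-like pixel, one blue background pixel.
img = np.array([[[200, 140, 110], [30, 60, 200]]], dtype=np.uint8)
mask = skin_mask(img)
```

Normalizing by the intensity sum discards overall brightness, which makes such a classifier less sensitive to illumination changes than thresholds in raw RGB; a real system would typically learn the skin region from training pixels rather than fix it by hand.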