Volume 11 Issue 10 - December 18, 2009
Motion Overview of Human Actions
Jackie Assa1, Daniel Cohen-Or1, I-Cheng Yeh2 and Tong-Yee Lee2,*

1School of Computer Science, Tel Aviv University
2Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science, National Cheng Kung University

Jackie Assa, Daniel Cohen-Or, I-Cheng Yeh, and Tong-Yee Lee, "Motion Overview of Human Actions," ACM Transactions on Graphics (SIGGRAPH Asia 2008 issue), Vol. 27, No. 5, Dec. 2008, Article 115.

Project Web Site: http://graphics.csie.ncku.edu.tw/Motion_Overview/

Motion capture (mocap) data currently plays a leading role in animation, games, and 3D environments. Many of these applications require expressive overview video clips that capture the human motion. For this purpose, an automatic method of computing camera control paths is required to generate a video that expresses and highlights the detailed human actions. This problem is considered hard because of the large configuration space and the huge number of factors that can affect camera control [1]. Many previous studies of this problem [2][3] formulated it as an optimization problem that maximizes viewpoint properties, such as the subject's visibility and the angle to its movement axis, while considering global properties such as camera speed and director guidelines. Applied without any analysis of the character motion, however, these methods prove ineffective for human action scenes: they may select poor viewpoints, as shown in Figure 1, and in extreme cases impose fast or shaky camera control paths.
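To make the kind of objective used by such optimization formulations concrete, the sketch below scores a single camera position by combining a visibility term, an angle-to-movement-axis term, and a camera-speed penalty. The function name, the exact terms, and the weights are hypothetical illustrations of the factors named above, not the formulation of [2] or [3].

```python
import numpy as np

def viewpoint_score(cam_pos, subject_pos, move_dir, prev_cam_pos,
                    visible_fraction, w_vis=1.0, w_angle=0.5, w_speed=0.3):
    """Toy per-frame viewpoint objective; all terms and weights are hypothetical.

    visible_fraction: fraction of the subject left unoccluded, in [0, 1].
    """
    # Side-on views (viewing direction roughly perpendicular to the
    # movement axis) tend to show the motion best.
    view_dir = subject_pos - cam_pos
    view_dir = view_dir / np.linalg.norm(view_dir)
    move_dir = move_dir / np.linalg.norm(move_dir)
    angle_term = 1.0 - abs(float(np.dot(view_dir, move_dir)))  # 1 side-on, 0 head-on

    # Global property: penalize fast camera movement between frames.
    speed_penalty = float(np.linalg.norm(cam_pos - prev_cam_pos))

    return w_vis * visible_fraction + w_angle * angle_term - w_speed * speed_penalty
```

With equal visibility, a side-on view scores higher than a head-on one, which is exactly the kind of preference a purely geometric objective encodes even when it ignores what the character is doing.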
Figure 1: Examples of poor selections from camera control algorithms which do not consider the human actions. (top row) Poor selection of viewpoint and occlusion of significant body parts for the actions. (bottom row) An image sequence caused by an abrupt camera movement does not illustrate well the action (fall).

In this paper [4], we present a novel technique for generating an overview video based on the analysis of motion capture data. Our method uses quantum annealing optimization with an objective function that respects both the analysis of the character motion and the camera movement constraints. It automatically generates a smooth camera control path, splitting it into several shots if required. To evaluate our method, we introduce a novel camera placement metric, evaluate it against previous work, and conduct a user study comparing our results with those of various systems. Our results and the user study can be found on our project web site: http://graphics.csie.ncku.edu.tw/Motion_Overview/ .

Our technique consists of the following stages. First, we analyze the motion clip for salient segments and detect the salient body parts within these segments (Figure 2). Next, we calculate the viewpoint potential metric (Figure 3) and fuse it with the saliency information to generate a viewpoint potential space. Our optimization technique processes this potential space and designs a multi-shot path (Figure 4) that follows a predefined set of camera movement constraints. The last stage of our algorithm sets the field of view and renders the overview video clip. We show two results in Figure 4; for a better impression, we recommend that the reader examine the supplementary video of this work on our project web site.
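A highly simplified sketch of the path-design stage: given a potential space (imagined here as one saliency-weighted viewpoint quality value per frame and per discretized viewpoint), a search picks one viewpoint per frame while trading total potential against path smoothness. This uses a classic simulated-annealing loop as a generic stand-in for the paper's optimizer; the discretization, smoothness weight, and cooling schedule are all invented for illustration.

```python
import math
import random

def plan_path(potential, n_iters=20000, seed=0):
    """Pick one viewpoint index per frame, trading potential against smoothness.

    potential: list of lists, potential[frame][viewpoint] in [0, 1]
    (imagined here as saliency-weighted viewpoint quality).
    """
    rng = random.Random(seed)
    n_frames, n_views = len(potential), len(potential[0])
    path = [rng.randrange(n_views) for _ in range(n_frames)]

    def energy(p):
        gain = sum(potential[f][p[f]] for f in range(n_frames))
        # Smoothness: penalize large viewpoint jumps between adjacent frames.
        jump = sum(abs(p[f] - p[f - 1]) for f in range(1, n_frames))
        return -gain + 0.1 * jump

    e = energy(path)
    temp = 1.0
    for _ in range(n_iters):
        # Propose changing one frame's viewpoint; accept improvements always,
        # worsenings with a probability that shrinks as the temperature cools.
        f = rng.randrange(n_frames)
        old = path[f]
        path[f] = rng.randrange(n_views)
        e_new = energy(path)
        if e_new < e or rng.random() < math.exp(-(e_new - e) / max(temp, 1e-9)):
            e = e_new
        else:
            path[f] = old
        temp *= 0.9995
    return path
```

In practice the paper works in a continuous camera configuration space and also decides where to cut between shots; this toy version only shows the potential-versus-smoothness trade-off at the heart of the search.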
Figure 2. The generated saliency function for a given clip. The lower graph indicates frame saliency; some of the keyframes are shown with their respective poses. The upper graph illustrates the relative significance of the various body parts (shown on the colored character on the left). The relative significance of the body parts is considered only during the salient motion segments, shown in yellow.

Figure 3: We measure viewpoint quality as a combination of generic descriptors and pose-specific ones. We visualize the quality map of each resulting descriptor (yellow indicates a high descriptor value).

To evaluate our method, we also conducted a user study involving 30 students with no professional computer graphics background. The users were asked to grade attributes of video clips showing the same animation under different camera control methods. The test was blind, with counterbalancing to avoid order effects. Users graded how informative each clip was (how well it describes the presented motion), how professional the camera control looks, and how expressive it is, each on a scale of 1-5, where 5 is the most informative, professional, and expressive. The results indicate that our method usually generates satisfactory paths, matching the informative, professional, and expressiveness grades of camera paths generated by a professional animator.
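For concreteness, summarizing such 1-5 grades per method reduces to a mean and spread per attribute. The grades below are invented placeholders, not data from the actual study (which is available on the project web site).

```python
import statistics

# Hypothetical 1-5 grades from three participants per attribute;
# invented placeholders, not the study's data.
grades = {
    "ours":     {"informative": [4, 5, 4], "professional": [4, 4, 5], "expressive": [5, 4, 4]},
    "animator": {"informative": [4, 4, 5], "professional": [5, 4, 4], "expressive": [4, 5, 4]},
    "baseline": {"informative": [3, 2, 3], "professional": [2, 3, 3], "expressive": [3, 2, 2]},
}

for method, attrs in grades.items():
    # Mean grade and sample standard deviation per attribute.
    summary = {attr: (round(statistics.mean(v), 2), round(statistics.stdev(v), 2))
               for attr, v in attrs.items()}
    print(method, summary)
```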
Figure 4. Some of our results. Due to space limitations, we show a sample set of images taken from viewpoints along the path whose potential values reach a ratio of 80% or better. We also show the multi-shot paths for these two examples.

We consider this work to be among the first steps in camera path planning that is guided by the captured motion and the finer details of its actions. We argue that such motion awareness is essential to the expressiveness of the generated path. Future work in this direction can extend this work by providing additional attributes that can be translated into potential maps, additional motifs, and techniques for evaluating complex scenes with more than a single character.

[1] CHRISTIE, M., AND OLIVIER, P. 2006. Camera control in computer graphics. In Eurographics 2006 Star Report, 89–113.
[2] DRUCKER, S. M., AND ZELTZER, D. 1994. Intelligent camera control in a virtual environment. In Proceedings of Graphics Interface ’94, 190–199.
[3] HALPER, N., HELBING, R., AND STROTHOTTE, T. 2001. A camera engine for computer games: Managing the trade-off between constraint satisfaction and frame coherence. In EG 2001 Proceedings, Blackwell Publishing, vol. 20(3), 174–183.
[4] ASSA, J., COHEN-OR, D., YEH, I.-C., AND LEE, T.-Y. 2008. Motion overview of human actions. ACM Transactions on Graphics (SIGGRAPH Asia 2008 issue) 27, 5 (Dec.), Article 115.
Copyright National Cheng Kung University