Technical details

To visualize the traces of multiple balls moving in 3D space, this system incorporates the following three technologies. The processing flow is to extract the balls from multi-view videos, calculate their 3D positions from the positions and orientations of the cameras, and finally render the balls and their traces in CG over the live video.
・Tracking balls with multi-view videos
 Balls are automatically extracted and tracked on the basis of appearance and motion features taken from the
 multi-view camera videos. A ball's past positions are used to predict its position in the next frame; by limiting
 the search to the neighborhood of this prediction, both tracking accuracy and processing speed are improved.
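The prediction-and-limited-search step can be sketched as follows. This is an illustrative sketch, not the exhibited system's code: the constant-velocity model, the 30-pixel search radius, and the nearest-detection matching rule are all assumptions made for the example.

```python
# Sketch of tracking by prediction with a limited search range (assumptions:
# constant-velocity motion, nearest-detection matching, illustrative radius).

def predict_next(prev, curr):
    """Predict the next 2D position assuming constant velocity."""
    return (2 * curr[0] - prev[0], 2 * curr[1] - prev[1])

def track_step(prev, curr, detections, search_radius=30.0):
    """Pick the detection closest to the predicted position, searching
    only within `search_radius` pixels of the prediction."""
    px, py = predict_next(prev, curr)
    best, best_d2 = None, search_radius ** 2
    for (x, y) in detections:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 <= best_d2:
            best, best_d2 = (x, y), d2
    return best  # None if no detection falls inside the search window
```

Restricting the match to a small window around the prediction is what yields both benefits named above: fewer candidates to score (speed) and fewer spurious matches to distant detections (accuracy).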
・Calibrating multi-view pan-tilt cameras
 To calculate 3D positions accurately and composite CG over video, we improved the accuracy of multi-view camera
 calibration by using multiple calibration patterns. We also developed a calibration technique that makes it easy
 to adjust the camera installation on site.
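Calibration quality is typically judged by reprojection error: a known 3D pattern point is projected through the estimated camera model and compared with where it was actually observed. The sketch below assumes a plain pinhole model x = K[R|t]X with row-major 3x3 lists; it is a minimal illustration, not the calibration procedure used in the exhibit.

```python
# Sketch of a reprojection-error check for a calibrated camera
# (assumed pinhole model; K = intrinsics, R/t = pose, all plain lists).

def project(K, R, t, X):
    """Project 3D point X to pixel coordinates via x = K[R|t]X."""
    # Camera coordinates: Xc = R @ X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return (u, v)

def reprojection_error(K, R, t, X, observed):
    """Pixel distance between the projected and observed pattern point."""
    u, v = project(K, R, t, X)
    return ((u - observed[0]) ** 2 + (v - observed[1]) ** 2) ** 0.5
```

Averaging this error over many pattern points from several calibration patterns gives a single figure of merit, which is one way the improvement described above can be verified on location.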
・Compositing 3D CG in real-time
 The 3D positions of the balls are calculated in real time by triangulation from the camera parameters and the
 2D ball positions. CG compositing can then be performed accurately by using these results and by simulating lens
 distortion. From the 3D positions and their transitions over time, each ball's height and speed can also be
 displayed.
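One common form of two-view triangulation finds the point midway between the two viewing rays back-projected from the cameras. The sketch below assumes the camera centers and ray directions have already been recovered from the calibration and the 2D detections; it illustrates the geometry, not the exhibited system's actual solver.

```python
# Sketch of two-ray midpoint triangulation (assumed inputs: camera centers
# C1, C2 and viewing-ray directions d1, d2 as 3-element lists).

def triangulate_midpoint(C1, d1, C2, d2):
    """Return the 3D point midway between the closest points on the
    rays P(s) = C1 + s*d1 and Q(t) = C2 + t*d2."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    w0 = [C1[i] - C2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p = [C1[i] + s * d1[i] for i in range(3)]
    q = [C2[i] + t * d2[i] for i in range(3)]
    return [(p[i] + q[i]) / 2 for i in range(3)]
```

With more than two cameras, the same idea generalizes to a least-squares intersection of all viewing rays, which averages out detection noise.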

NHK STRL

Exhibition content

We developed a system that tracks multiple balls in 3D space and visualizes their movements in CG in real time. The system extracts the balls using only the videos taken by multi-view cameras and calculates their 3D positions from the positions and orientations of the cameras. Traces of the balls' movements are then rendered over the video in CG. By calculating each ball's speed from its positional information, both position and speed data can be displayed.
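The displayed speed follows directly from consecutive 3D positions and the frame interval. A minimal sketch, assuming positions in metres and a broadcast-style speed readout in km/h (the 3.6 factor converts m/s to km/h):

```python
# Sketch of the speed readout from consecutive 3D positions
# (assumed units: metres for positions, seconds for the frame interval).

def speed_kmh(p_prev, p_curr, dt):
    """Speed in km/h from two 3D positions measured `dt` seconds apart."""
    dist = sum((a - b) ** 2 for a, b in zip(p_curr, p_prev)) ** 0.5
    return dist / dt * 3.6
```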