4-Dimensional Measurement System - 3D Measurement of the Foot While Walking -
Most existing systems for measuring the shape of human body parts apply 3D reconstruction methods that use controlled light sources (e.g. laser-beam scanning). Such methods take several seconds to acquire data, so the subject must stand still during the measurement. The purpose of our research is to record the changes in foot shape while walking.
In the field of computer vision, 3D reconstruction from multiple images is a common challenge, and much research has been conducted. We have constructed a system that takes multiple, trigger-synchronized movies, and we are working on reconstructing a 3D shape from each set of frames to produce 4-dimensional measurement data: 3D shapes charted through time. While our ultimate goal is to reconstruct the 4-dimensional shape of the whole foot, our current focus is on the basic feature lines (ball, instep, and heel) of the foot while walking.
The 3 lines in the figure are called "ball", "instep", and "heel", and each line is defined by anatomical landmarks. In the Japanese Industrial Standards (JIS), shoe size is defined by 3 parameters: "length of foot", "perimeter of ball", and "width of ball". In addition, the "instep" and "heel" lines are used to evaluate how well a shoe fits. These 3 features are therefore basic and important. In this research, we apply tape along these feature lines and reconstruct the 3D shape of the lines as outlined by the tape.
- Camera calibration
- We created an easy-to-use interface for applying Tsai's calibration method.
- Feature region detection in each image
- Feature regions are detected by subtracting the background image, checking color values, and other cues.
- Edge-based stereo matching
- To obtain stable matching, we search for corresponding points only along the edges of the detected feature regions.
- Using the zero-crossing method, we estimate the edges of the feature regions with sub-pixel precision.
- Selection of camera pairs for stereo matching
- Selection using the direction of cameras
- Selection using epipolar geometry
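To illustrate the sub-pixel step above, the zero-crossing idea can be sketched as follows: an edge corresponds to a zero crossing of the second derivative of the intensity profile, and linear interpolation between the two samples that bracket the sign change gives a sub-pixel position. This is a hypothetical NumPy sketch, not our actual implementation.

```python
import numpy as np

def subpixel_zero_crossing(profile):
    """Locate edges in a 1-D intensity profile with sub-pixel precision.

    Computes the second difference of the profile and linearly
    interpolates the position where it changes sign.
    (Illustrative sketch only.)
    """
    d2 = np.diff(profile.astype(float), n=2)  # second difference; d2[i] ~ f''(i+1)
    positions = []
    for i in range(len(d2) - 1):
        a, b = d2[i], d2[i + 1]
        if a == 0.0:
            continue
        if a * b < 0:  # sign change -> zero crossing between samples i+1 and i+2
            t = a / (a - b)              # linear interpolation weight in [0, 1]
            positions.append(i + 1 + t)  # +1 accounts for the double differencing
    return positions
```

For a ramp edge such as `[0, 0, 0, 1, 3, 4, 4, 4]`, the sketch returns a single crossing in the middle of the ramp.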
We use six IEEE 1394 cameras synchronized by a trigger signal. Each camera is connected to a Windows PC, and the capturing programs are remotely controlled via TCP/IP. This system captures multiple synchronized movies at 14 FPS (70 ms interval). The captured images are XGA-sized (1024x768 pixels) with 16 bpp YUV pixels (4 bytes per 2 pixels, i.e. packed 4:2:2).
I. Input images
II. Detected feature regions in each image
III. Principles for selecting camera pairs for stereo matching
III-A. Do not use a pair of cameras that view the scene from opposing directions, as shown below.
III-B. Do not use a camera pair for a point where the edge direction runs nearly parallel to the epipolar line, since the corresponding point along that line is then poorly localized.
III-C. Only use points that have a properly matching correspondence.
IV. Reconstructed 3D shape from the above input images
V. Input movies (MPEG at half resolution) and a movie of the reconstructed 3D shape
Input Camera #1(160kB)
Input Camera #2(160kB)
Input Camera #3(160kB)
Input Camera #4(160kB)
Input Camera #5(160kB)
Input Camera #6(160kB)
Reconstructed 3D shape(9MB)
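The epipolar selection principle (III-B above) can be sketched as a per-point test: given the fundamental matrix of a camera pair, reject the pair for a point whenever the edge direction at the candidate match runs nearly parallel to the epipolar line. The function name, the convention that `F` maps points in image 1 to epipolar lines in image 2, and the threshold value are all illustrative assumptions, not our implementation.

```python
import numpy as np

def pair_usable(F, x1, edge_dir2, min_sin=0.3):
    """Decide whether a camera pair should be used for one feature point.

    F         : 3x3 fundamental matrix with l2 = F @ x1 (assumed convention).
    x1        : homogeneous point (3,) in image 1.
    edge_dir2 : unit 2-D edge direction at the candidate match in image 2.
    min_sin   : reject when |sin(angle between edge and epipolar line)|
                is below this threshold, i.e. edge nearly parallel to the
                epipolar line. (Hypothetical threshold.)
    """
    l = F @ x1                        # epipolar line a*x + b*y + c = 0 in image 2
    d = np.array([-l[1], l[0]])       # direction vector of the line
    d = d / np.linalg.norm(d)
    sin_angle = abs(d[0] * edge_dir2[1] - d[1] * edge_dir2[0])  # |2-D cross product|
    return bool(sin_angle >= min_sin)
```

An edge perpendicular to the epipolar line gives `sin_angle` near 1 (well localized intersection, pair accepted); an edge parallel to it gives a value near 0 (pair rejected for that point).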