
Record


Released

Conference Paper

High Accuracy Optical Flow Serves 3-D Pose Tracking: Exploiting Contour and Flow Based Constraints

MPG Authors
/persons/resource/persons45312

Rosenhahn,  Bodo
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons45449

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resources

https://rdcu.be/dHSST
(Publisher's version)

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe.
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available.
Citation

Brox, T., Rosenhahn, B., Cremers, D., & Seidel, H.-P. (2006). High Accuracy Optical Flow Serves 3-D Pose Tracking: Exploiting Contour and Flow Based Constraints. In A. Leonardis, H. Bischof, & A. Pinz (Eds.), Computer Vision -- ECCV 2006 (pp. 98-111). Berlin, Germany: Springer.


Citation link: https://hdl.handle.net/11858/00-001M-0000-000F-2312-A
Abstract
Tracking the 3-D pose of an object requires correspondences between 2-D features in the image and their 3-D counterparts in the object model. A large variety of such features has been suggested in the literature. All of them have drawbacks in one situation or another, since their extraction in the image and/or the matching is prone to errors. In this paper, we propose to use two complementary types of features for pose tracking, such that one type makes up for the shortcomings of the other. Aside from the object contour, which is matched to a free-form object surface, we suggest employing the optic flow to compute additional point correspondences. Optic flow estimation is a mature research field with sophisticated algorithms available; using a high-quality method here ensures reliable matching. In our experiments we demonstrate the performance of our method and, in particular, the improvements due to the optic flow.
We gratefully acknowledge funding by the German Research Foundation (DFG) and
the Max Planck Center for Visual Computing and Communication.
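
The abstract describes coupling contour-based pose constraints with point correspondences propagated by optic flow. The following is a minimal sketch of the flow-based half only, under assumptions not taken from this record: it substitutes OpenCV's Farneback flow for the high-accuracy variational flow used in the paper and re-estimates the rigid pose with PnP rather than the paper's combined contour-and-flow optimization. All function and variable names are illustrative.

# Illustrative sketch, not the authors' method: propagate known 2-D/3-D
# correspondences from the previous frame with dense optic flow, then
# re-estimate the rigid pose from the propagated pairs.
import cv2
import numpy as np

def propagate_and_estimate_pose(prev_gray, next_gray, pts_2d, pts_3d, K):
    """pts_2d: Nx2 image points matched to pts_3d (Nx3 model points) in the
    previous frame; K: 3x3 camera intrinsics matrix."""
    # Dense optic flow between consecutive frames (a stand-in for the
    # high-accuracy variational method used in the paper).
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=4, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Move each 2-D point along its flow vector to obtain correspondences
    # in the new frame; the associated 3-D model points stay fixed.
    xs = pts_2d[:, 0].round().astype(int).clip(0, flow.shape[1] - 1)
    ys = pts_2d[:, 1].round().astype(int).clip(0, flow.shape[0] - 1)
    pts_2d_next = pts_2d + flow[ys, xs]

    # Rigid pose (rotation, translation) from the propagated 2-D/3-D pairs.
    ok, rvec, tvec = cv2.solvePnP(
        pts_3d.astype(np.float32), pts_2d_next.astype(np.float32),
        K.astype(np.float32), distCoeffs=None)
    return ok, rvec, tvec, pts_2d_next

The contour term of the original method, which matches the projected silhouette of the free-form object surface to the image contour, is omitted in this sketch.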