
Travel distance estimation from visual motion by leaky path integration

Research Article · Experimental Brain Research

Abstract

Visual motion can be a cue to travel distance when the motion signals are integrated. Distance estimates from visually simulated self-motion are imprecise, however. Previous work in our labs has given conflicting results on the imprecision: experiments by Frenz and Lappe had suggested a general underestimation of travel distance, while results from Redlick, Jenkin and Harris had shown an overestimation of travel distance. Here we describe a collaborative study that resolves the conflict by tracing it to differences in the tasks given to the subjects. With an identical set of subjects and identical visual motion simulation we show that underestimation of travel distance occurs when the task involves a judgment of distance from the starting position, and that overestimation of travel distance occurs when the task requires a judgment of the remaining distance to a particular target position. We present a leaky integrator model that explains both effects with a single mechanism. In this leaky integrator model we introduce the idea that, depending on the task, either the distance from start, or the distance to target is used as a state variable. The state variable is updated during the movement by integration over the space covered by the movement, rather than over time. In this model, travel distance mis-estimation occurs because the integration leaks and because the transformation of visual motion to travel distance involves a gain factor. Mis-estimates in both tasks can be explained with the same leak rate and gain in both conditions. Our results thus suggest that observers do not simply integrate traveled distance and then relate it to the task. Instead, the internally represented variable is either distance from the origin or distance to the goal, whichever is relevant.

References

  • Bremmer F, Lappe M (1999) The use of optical velocities for distance discrimination and reproduction during visually simulated self-motion. Exp Brain Res 127:33–42

  • Etienne AS, Maurer R, Seguinot V (1999) Path integration in mammals and its interaction with visual landmarks. J Exp Biol 199:201–209

  • Foley JM (1980) Binocular distance perception. Psychol Rev 87:411–434

  • Frenz H, Lappe M (2005) Absolute travel distances from optic flow. Vis Res 45:1679–1692

  • Frenz H, Bremmer F, Lappe M (2003) Discrimination of travel distances from ‘situated’ optic flow. Vis Res 43:2173–2183

  • Gillner S, Mallot HA (1998) Navigation and acquisition of spatial knowledge in a virtual maze. J Cogn Neurosci 10:445–463

  • Judd SPD, Collett TS (1998) Multiple stored views and landmark guidance in ants. Nature 392:710–714

  • Knapp JM, Loomis JM (2004) Limited field of view of head-mounted displays is not the cause of distance underestimation in virtual environments. Presence 13(5):572–577

  • Lappe M, Frenz H, Bührmann T, Kolesnik M (2005) Virtual odometry from visual flow. In: Rogowitz BE, Pappas TN, Daly SJ (eds.) Proceedings of SPIE/IS&T Conference Human Vision and Electr. Imag. X, SPIE, vol 5666, pp 493–502

  • Loomis JM, Da Silva JA, Fujita N, Fukusima SS (1992) Visual space perception and visually directed action. J Exp Psychol Hum Perc Perform 18:906–921

  • Loomis JM, Klatzky RL, Golledge RG, Cicinelli JG, Pellegrino JW, Fry PA (1993) Nonvisual navigation by blind and sighted: assessment of path integration ability. J Exp Psychol Gen 122:73–91

  • Loomis JM, Klatzky RL, Golledge RG, Philbeck JW (1999) Human navigation by path integration. In: Golledge RG (ed) Wayfinding: Cognitive mapping and other spatial processes. Johns Hopkins, Baltimore, pp 125–151

  • Luneburg RK (1950) The metric of binocular visual space. J Opt Soc Am 40:627–642

  • Maurer R, Seguinot V (1995) What is modelling for? A critical review of the models of path integration. J Theor Biol 175:457–475

  • Mittelstaedt ML, Glasauer S (1991) Idiothetic navigation in gerbils and humans. Zool Jb Physiol 95:427–435

  • Mittelstaedt H, Mittelstaedt ML (1973) Mechanismen der Orientierung ohne richtende Außenreize. Fort Zool 21:46–58

  • Mittelstaedt ML, Mittelstaedt H (1980) Homing by path integration in a mammal. Naturwissenschaften 67:566–567

  • Mittelstaedt M, Mittelstaedt H (2001) Idiothetic navigation in humans: estimation of path length. Exp Brain Res 139:318–332

  • Peruch P, May M, Wartenberg F (1997) Homing in virtual environments: Effects of field of view and path layout. Perception 26:301–311

  • Plumert JM, Kearney JK, Cremer JF, Recker K (2005) Distance perception in real and virtual environments. ACM Trans Appl Percept 2(3):216–233

  • Redlick FP, Jenkin M, Harris LR (2001) Humans can use optic flow to estimate distance of travel. Vis Res 41:213–219

  • Riecke BE, van Veen HAHC, Bülthoff HH (2002) Visual homing is possible without landmarks: a path integration study in virtual reality. Presence 11:443–473

  • Robinson M, Laurence J, Hogue A, Zacher JE, German A, Jenkin M (2002) Ivy: basic design and construction details. In: Proceedings of ICAT, Tokyo

  • Sun H-J, Campos JL, Young M, Chan GSW (2004) The contributions of static visual cues, nonvisual cues, and optic flow in distance estimation. Perception 33:49–65

  • Thompson WB, Willemsen P, Gooch AA, Creem-Regehr SH, Loomis JM, Beall AC (2004) Does the quality of the computer graphics matter when judging distances in visually immersive environments? Presence 13(5):560–571

Acknowledgments

M.L. is supported by the German Science Foundation DFG LA-952/2 and LA-952/3, the German Federal Ministry of Education and Research BioFuture Prize, and the EC Project Drivsco. L.R.H. and M.J. are supported by the Natural Sciences and Engineering Research Council of Canada.

Author information

Correspondence to Markus Lappe.

Appendix

In the adjust-target condition, the subjects first experience the visual motion for a particular travel distance. Thereafter the subjects adjust the target such that its distance from the observer matches the travel distance. We assume that the subjects monitor their current perceived position p(x), measured from the starting position p(0) = 0, during the movement, and adjust the target such that its distance equals the value of p at the end of the movement. The current position is updated during the movement according to the following leaky integrator differential equation:

$$ \frac{{\rm d}p}{{\rm d}x} = -\alpha p + k, $$
(1)

where dx is the change of position of the subject along the trajectory of the movement, α is the rate of decay (the leak) of the integrator, and k is the gain of the sensory (visual) input. If k = 1, the visual motion is transformed perfectly into the instantaneous travel distance. In this equation, in each step dx the state variable p is reduced in proportion to its current value (due to the leak) and incremented by the step distance scaled by the gain k.
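As a concrete illustration (not part of the original study), Eq. 1 can be integrated numerically along the traveled path. The following minimal Python sketch uses hypothetical values for α and k, not the parameters fitted in the study:

```python
# Minimal numerical sketch of Eq. 1 (illustration only, not the authors' code):
# Euler integration of dp/dx = -alpha * p + k over the distance actually traveled.
# alpha and k below are hypothetical values, not the fitted parameters of the study.

alpha = 0.1   # leak rate per meter of travel (hypothetical)
k = 1.0       # gain of the sensory (visual) input (hypothetical)
dx = 0.01     # integration step along the trajectory, in meters

def perceived_distance(x_total, alpha=alpha, k=k, dx=dx):
    """Integrate the leaky integrator of Eq. 1 from p(0) = 0 up to x_total."""
    p = 0.0
    x = 0.0
    while x < x_total:
        p += (-alpha * p + k) * dx   # leak proportional to p, plus gained input
        x += dx
    return p

# With a leak (alpha > 0) the state variable falls short of the distance
# actually traveled, in line with the underestimation described in the paper.
print(perceived_distance(10.0))   # approx. 6.3 m for these illustrative values
```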

The general solution to this differential equation is

$$ p(x) = c\,{\rm e}^{-\alpha x} + \frac{k}{\alpha}. $$

The value of the constant c is constrained by the starting position of the integrator, p(0) = 0. Therefore,

$$ c + \frac{k}{\alpha} = 0, $$

or

$$ c = -\frac{k}{\alpha}. $$

Thus the solution to Eq. 1 is given by

$$ p(x) = -\frac{k}{\alpha}\,{\rm e}^{-\alpha x} + \frac{k}{\alpha} = \frac{k}{\alpha} \left(1-{\rm e}^{-\alpha x}\right). $$
(2)
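For comparison, the closed form of Eq. 2 can be evaluated directly. The short sketch below, again with hypothetical parameter values, shows that the perceived distance from the start grows more slowly than the true distance and saturates at k/α:

```python
import math

# Closed-form perceived distance of Eq. 2, with the same hypothetical alpha and k.
alpha, k = 0.1, 1.0

def p_closed(x, alpha=alpha, k=k):
    """Perceived distance from the start after truly traveling x (Eq. 2)."""
    return (k / alpha) * (1.0 - math.exp(-alpha * x))

for x in (2.0, 5.0, 10.0, 20.0):
    print(x, round(p_closed(x), 2))   # grows more slowly than x and saturates at k/alpha
```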

In the move-to-target condition, the subjects first see the target at a particular distance. Then the target is extinguished and the subjects move towards it. The subjects stop the movement by pressing a button when they feel that the target has been reached. We assume that the subjects monitor the current perceived distance D(x) to the target during the movement and press the button when this distance becomes zero. We denote the initial distance to the target by D(0) = D_0. The current distance is updated during the movement according to the following differential equation:

$$ \frac{{\rm d}D}{{\rm d}x} = -\alpha D - k. $$
(3)

This is similar to Eq. 1, but now the state variable is the distance to the target rather than the perceived position along the trajectory, and this state variable is decremented over the course of the movement, as expressed by the −k term.
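As an illustration with hypothetical parameter values (not the authors' implementation), Eq. 3 can be stepped forward numerically until the perceived remaining distance reaches zero, which corresponds to the simulated button press:

```python
# Numerical sketch of Eq. 3 (illustration only): the remaining distance D is
# stepped forward along the movement until it reaches zero, which corresponds
# to the simulated button press. Parameter values are again hypothetical.

alpha, k, dx = 0.1, 1.0, 0.01   # leak rate, gain, and step size (hypothetical)
D0 = 10.0                       # initial distance to the target in meters (hypothetical)

D = D0
x = 0.0
while D > 0.0:
    D += (-alpha * D - k) * dx   # leak proportional to D, plus the gained decrement
    x += dx

# With a leak, D reaches zero before the full distance D0 has been covered,
# i.e. travel distance is overestimated in the move-to-target task.
print(x)   # approx. 6.9 m, well short of D0 = 10 m
```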

The general solution to this differential equation is

$$ D(x) = c\,{\rm e}^{-\alpha x} - \frac{k}{\alpha}. $$
(4)

Again, the value of the constant c is constrained by the starting value of the integrator, which is now D(0) = D_0. Therefore,

$$ c - \frac{k}{\alpha} = D_0, $$

and

$$ c = D_0 + \frac{k}{\alpha}. $$

Thus the solution to Eq. 3 is

$$ D(x) = \left( D_0 + \frac{k}{\alpha} \right) {\rm e}^{-\alpha x} - \frac{k}{\alpha} = D_0\,{\rm e}^{-\alpha x} - \frac{k}{\alpha} \left(1-{\rm e}^{-\alpha x}\right). $$
(5)
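The closed form of Eq. 5 can likewise be evaluated directly; a minimal sketch with the same hypothetical parameter values:

```python
import math

# Closed-form remaining distance of Eq. 5 (hypothetical parameter values).
alpha, k, D0 = 0.1, 1.0, 10.0

def D_closed(x, alpha=alpha, k=k, D0=D0):
    """Perceived remaining distance to the target after truly traveling x (Eq. 5)."""
    return D0 * math.exp(-alpha * x) - (k / alpha) * (1.0 - math.exp(-alpha * x))

for x in (0.0, 2.0, 5.0, 7.0):
    print(x, round(D_closed(x), 2))   # crosses zero between x = 5 and x = 7
```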

Equation 5 gives the current distance to the target as a function of the true position x along the movement trajectory. To calculate the position at which the subject presses the button we have to find the position p_hit at which D(x) becomes zero:

$$ D(p_{\rm hit}) = 0. $$

From

$$ \left( D_0 + \frac{k}{\alpha} \right) {\rm e}^{-\alpha p_{\rm hit}} - \frac{k}{\alpha} = 0 $$

we find

$$ -\alpha p_{\rm hit} + \ln \left( D_0 + \frac{k}{\alpha}\right) = \ln \left({k \over \alpha} \right), $$

and finally

$$ p_{\rm hit}(D_0) = \frac{1}{\alpha} \left[ \ln \left( D_0 + \frac{k}{\alpha}\right) - \ln \left( \frac{k}{\alpha} \right) \right] = \frac{1}{\alpha} \ln \left( \frac{\alpha D_0}{k} + 1 \right). $$
(6)
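A short sketch with hypothetical parameter values evaluates Eq. 6 and checks it against Eq. 5; for these illustrative values p_hit is smaller than D_0, consistent with the overestimation of travel distance in the move-to-target task:

```python
import math

# Evaluation of Eq. 6 and a consistency check against Eq. 5 (hypothetical values).
alpha, k = 0.1, 1.0

def p_hit(D0, alpha=alpha, k=k):
    """True distance traveled when the perceived distance to the target reaches zero (Eq. 6)."""
    return (1.0 / alpha) * math.log(alpha * D0 / k + 1.0)

for D0 in (4.0, 8.0, 16.0):
    x = p_hit(D0)
    D_at_hit = D0 * math.exp(-alpha * x) - (k / alpha) * (1.0 - math.exp(-alpha * x))
    # D_at_hit is zero up to rounding; x < D0, so the simulated subject stops
    # short of the target for these illustrative parameter values.
    print(D0, round(x, 2), round(D_at_hit, 10))
```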

Equations 2 and 6 are used to fit the data from the adjust-target and the move-to-target conditions, respectively, with α and k as free parameters. The prediction for the part-of-the-way condition is then derived by using the best-fit parameters α and k in Eq. 5.
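One possible way to carry out such a fit is sketched below using scipy.optimize.curve_fit; the data arrays are placeholders introduced for illustration, not the measurements of the study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sketch of the fitting step. Eq. 2 is fit to the adjusted target
# distances and Eq. 6 to the move-to-target stopping positions, each with alpha
# and k as free parameters. The data arrays below are placeholders only, not
# the measurements reported in the paper.

def eq2(x, alpha, k):      # adjust-target: perceived distance from the start
    return (k / alpha) * (1.0 - np.exp(-alpha * x))

def eq6(D0, alpha, k):     # move-to-target: true distance at the button press
    return (1.0 / alpha) * np.log(alpha * D0 / k + 1.0)

simulated_x = np.array([2.0, 4.0, 8.0, 16.0])   # simulated travel distances (m), placeholder
adjusted_d  = np.array([1.8, 3.4, 6.1, 10.2])   # adjusted target distances (m), placeholder
target_D0   = np.array([2.0, 4.0, 8.0, 16.0])   # initial target distances (m), placeholder
stop_x      = np.array([1.7, 3.2, 5.9, 10.5])   # stopping positions (m), placeholder

p_adjust, _ = curve_fit(eq2, simulated_x, adjusted_d, p0=(0.1, 1.0), bounds=(1e-6, np.inf))
p_move, _   = curve_fit(eq6, target_D0, stop_x, p0=(0.1, 1.0), bounds=(1e-6, np.inf))
print(p_adjust, p_move)   # similar (alpha, k) across tasks would support a single mechanism
```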

Cite this article

Lappe, M., Jenkin, M. & Harris, L.R. Travel distance estimation from visual motion by leaky path integration. Exp Brain Res 180, 35–48 (2007). https://doi.org/10.1007/s00221-006-0835-6

Keywords

Navigation