Summary: Memory-based human navigation is a complex task that remains poorly understood. Prominent theories of human spatial memory disagree about the nature of the long-term spatial representations accessed to recognize one's present position and to locate an unseen goal, while implicitly assuming that both behaviors are subserved by the same type of representation (McNamara, 2003; Mou, Zhang, & McNamara, 2004; Sholl, 2001; Wang & Spelke, 2000, 2002). Recent experiments, however, indicate that the mental representations accessed to recognize locations differ from those used to recall the locations of unseen goals (McNamara, 2003; Mou, Zhang et al., 2004; Shelton & McNamara, 2004a, 2004b; Sholl, 2001; Valiquette & McNamara, 2007; Wang & Spelke, 2000, 2002). A possible explanation for these findings is that snapshot-like visual representations are employed in recognition tasks, whereas goal localization depends on amodal spatial representations. The experiment presented here explored the nature of these representations. Participants viewed a layout composed of objects whose visual characteristics differed markedly with viewing direction, allowing visual representations to be decoupled from spatial representations in judgments of relative direction, scene recognition, and priming tasks.