
Redirected Walking for Exploring Immersive Virtual Spaces With HMD: A Comprehensive Review and Recent Advances

Published: 05/31/2022

TL;DR Summary

This paper reviews Redirected Walking (RDW) techniques, addressing how to achieve immersive virtual walking experiences within limited physical space. It categorizes redirection manipulations, discusses redirection controller methods, and incorporates emerging technologies like deep learning, summarizing evaluation metrics and open challenges.

Abstract

Real walking techniques can provide the user with a more natural, highly immersive walking experience compared to the experience of other locomotion techniques. In contrast to the direct mapping between the virtual space and an equal-sized physical space that can be simply realized, the nonequivalent mapping that enables the user to explore a large virtual space by real walking within a confined physical space is complex. To address this issue, the redirected walking (RDW) technique is proposed by many works to adjust the user’s virtual and physical movements based on some redirection manipulations. In this manner, subtle or overt motion deviations can be injected between the user’s virtual and physical movements, allowing the user to undertake real walking in large virtual spaces by using different redirection controller methods. In this paper, we present a brief review to describe major concepts and methodologies in the field of redirected walking. First, we provide the fundamentals and basic criteria of RDW, and then we describe the redirection manipulations that can be applied to adjust the user’s movements during virtual exploration. Furthermore, we clarify the redirection controller methods that properly adopt strategies for combining different redirection manipulations and present a classification of these methods by several categories. Finally, we summarize several experimental metrics to evaluate the performance of redirection controller methods and discuss current challenges and future work. Our study systematically classifies the relevant theories, concepts, and methods of RDW, and provides assistance to the newcomers in understanding and implementing the RDW technique.

In-depth Reading

English Analysis

1. Bibliographic Information

1.1. Title

Redirected Walking for Exploring Immersive Virtual Spaces With HMD: A Comprehensive Review and Recent Advances

1.2. Authors

  • Linwei Fan: Associate Professor, School of Computer Science and Technology, Shandong University of Finance and Economics. Member of Shandong Provincial Key Laboratory of Digital Media Technology. Research interests: computer graphics and image processing.
  • Huiyu Li: Teacher, School of Management Science and Engineering, Shandong University of Finance and Economics. Research interests: virtual reality and human-computer interaction.
  • Miaowen Shi: PhD candidate, School of Software, Shandong University. Research interests: low-rank theory, sparse representation, and image restoration.

1.3. Journal/Conference

The provided metadata lists only the publication date (May 31, 2022) and does not name the venue. Judging by the formatting and reference style, the paper was likely published in IEEE Access or a similar IEEE journal covering virtual reality and computer graphics.

1.4. Publication Year

2022

1.5. Abstract

This paper presents a comprehensive review of Redirected Walking (RDW) techniques in Virtual Reality (VR). While "real walking" (physically walking to move in VR) offers the highest immersion, it is limited by the size of the physical room. RDW solves this by imperceptibly manipulating the user's view to steer them away from physical obstacles while they perceive themselves as walking naturally, e.g., in a straight line, through a large virtual space. The paper classifies RDW concepts into:

  1. Redirection Manipulations: Techniques to adjust movement (e.g., gains, resets, virtual space manipulation).
  2. Redirection Controllers: Algorithms that decide when and how to apply these manipulations (e.g., generalized, predictive, and recent AI-based methods).

Finally, it summarizes evaluation metrics and discusses future challenges.



2. Executive Summary

2.1. Background & Motivation

  • The Problem: In Head-Mounted Display (HMD) based Virtual Reality, Locomotion (moving around) is crucial. Techniques like teleportation or "walking-in-place" exist, but Real Walking (physically walking to move the avatar) is superior for presence (feeling like you are there) and spatial cognition. However, a major physical limitation exists: the Virtual Space (e.g., a massive fantasy world) is often much larger than the user's Physical Space (e.g., a $3m \times 3m$ living room). A 1:1 mapping of movement means the user will quickly hit a physical wall.
  • The Gap: While Redirected Walking (RDW) has been proposed to solve this by "tricking" the user into walking in circles physically while walking straight virtually, there is a need for a systematic classification of the rapidly evolving methods, especially new Deep Learning (DL) and Artificial Potential Field (APF) approaches.
  • Innovation: This paper provides an updated taxonomy that integrates traditional methods with cutting-edge advances like multi-user redirection and reinforcement learning controllers, which older surveys may lack.

2.2. Main Contributions / Findings

  1. Systematic Classification: The authors rigorously classify RDW into Manipulations (the specific "tricks" used, like rotating the world slightly) and Controllers (the "brains" deciding which trick to use).

  2. Detailed Mechanics: It provides detailed mathematical definitions for various redirection gains (translation, rotation, curvature, bending, gradient, vertical).

  3. Modern Methods Review: Unlike older reviews, this paper details Novel Redirection Controllers, specifically those based on Artificial Potential Fields (APF) and Deep Learning (DL) (e.g., LSTM, Reinforcement Learning), highlighting their ability to reduce physical collisions (resets) compared to traditional methods.

  4. Metric Standardization: It summarizes the key quantitative metrics (e.g., mean number of wall contacts, sickness scores) used to evaluate these systems, aiding future researchers in benchmarking.


3. Prerequisite Knowledge & Related Work

3.1. Foundational Concepts

  • Virtual Reality (VR) & HMD: VR creates a simulated environment. An HMD (Head-Mounted Display) is the headset (like Oculus Quest or HTC Vive) that tracks head movements and displays the virtual world, isolating the user from the real world.
  • Locomotion: The method used to move through the virtual environment.
    • Real Walking: The user physically walks, and their avatar moves accordingly.
    • Teleportation: Pointing at a location and instantly appearing there.
  • Visual Dominance (Visual Capture): The biological phenomenon where, if the sense of vision conflicts with the vestibular system (inner ear balance) or proprioception (body position sense), the brain tends to believe the visual input. RDW exploits this by creating slight mismatches that the brain ignores.
  • Degrees of Freedom (DoF): The number of ways an object can move. In VR, we often track 6-DoF (X, Y, Z position + Yaw, Pitch, Roll rotation).
  • Saccadic Suppression: The phenomenon where the brain temporarily blocks visual processing during rapid eye movements (saccades). This blindness lasts only milliseconds but can be used to hide scene changes.

3.2. Previous Works

  • Razzaque et al. (2001): The pioneers who first proposed Redirected Walking. They introduced the concept of rotating the virtual scene slightly as the user walks, causing the user to unconsciously rotate their body to compensate, thus walking in a curve physically while thinking they are walking straight.
  • Steinicke et al. (2010): Conducted foundational psychophysical experiments to determine Detection Thresholds, the limits of how much you can trick a user before they notice. For example, they quantified how much physical rotation can be scaled (e.g., turning $90^\circ$ physically results in $100^\circ$ virtually).
  • Hodgson et al.: Developed Generalized Redirection Controllers (like Steer-to-Center), which are algorithms that continuously steer the user toward a safe point (like the center of the room) without knowing the user's future path.

3.3. Technological Evolution

  1. Scripted RDW: Early days. The virtual path was fixed (pre-determined). The system knew exactly where the user would go and planned redirection accordingly.
  2. Generalized RDW: Reactive systems. The user can walk anywhere. The system uses heuristics (rules) to steer the user away from walls or toward the center.
  3. Predictive RDW: The system tries to guess the user's future path (using gaze, head direction) to plan better redirection.
  4. Modern/Novel RDW: Uses complex algorithms like Artificial Potential Fields (treating walls as repulsive forces) or Deep Learning (training AI to predict paths and steer optimally) to handle complex, dynamic, and even multi-user scenarios.

3.4. Differentiation Analysis

This paper differentiates itself by not just explaining what RDW is, but how the modern controllers (APF, DL) function mathematically and structurally compared to the heuristic-based traditional methods. It also includes a dedicated section on "Vertical" and "Gradient" gains, which are often overlooked in standard horizontal-only RDW reviews.


4. Methodology

This section deconstructs the technical solutions presented in the review, organized by the taxonomy of Manipulations (the actions taken) and Controllers (the decision logic).

4.1. Principles of Redirected Walking

The core principle relies on imperceptible discrepancies. The system injects a "gain" $g$. If the user moves physically by an amount $M_{phy}$, the virtual avatar moves by $M_{vir} = g \cdot M_{phy}$.

  • If $g \approx 1$, movement is natural (1:1).

  • If $g \neq 1$ (but close to 1), the user compensates for the difference unconsciously.

  • Goal: Keep the user inside the physical boundaries ($Bounds_{phy}$) while they traverse a larger virtual path ($Path_{vir}$).

    The following figure (Figure 1 from the original paper) illustrates this process: The user walks straight in VR (a), but the system injects a rotation (b), causing the user to physically walk in a curve (c) to stay within the room.

    Figure 1 (from the original paper): (a) the user's path in the virtual space; (b) the relationship between the user's current and next heading in the virtual environment; (c) the corresponding curved path in the physical space.
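
To make this mapping concrete, here is a minimal per-frame sketch (not taken from the paper) of how a redirection system might map tracked physical motion to virtual motion using translation, rotation, and curvature gains; the function and variable names are illustrative assumptions.

```python
import numpy as np

def apply_redirection(prev_phys_pos, curr_phys_pos, delta_phys_yaw,
                      virt_pos, virt_yaw, g_t=1.0, g_r=1.0, g_c=0.0):
    """Map one frame of physical motion to virtual motion using translation
    (g_t), rotation (g_r), and curvature (g_c = 1/r) gains.
    Illustrative sketch only; not the paper's exact update rule."""
    step = np.linalg.norm(np.asarray(curr_phys_pos) - np.asarray(prev_phys_pos))
    # Rotation gain scales head/body turns; curvature gain injects extra
    # rotation proportional to the distance walked this frame.
    virt_yaw += g_r * delta_phys_yaw + g_c * step
    # Translation gain scales the walked distance along the virtual heading.
    virt_pos = np.asarray(virt_pos, dtype=float) + g_t * step * np.array(
        [np.cos(virt_yaw), np.sin(virt_yaw)])
    return virt_pos, virt_yaw

# With g_t = g_r = 1 and g_c = 0, the mapping reduces to a direct 1:1 walk.
```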

4.2. Redirection Manipulations (The "Tricks")

These are the fundamental building blocks of RDW. They are categorized into Subtle Perception Manipulations (Gains), Overt Manipulations (Resets), and Virtual Space Manipulations.

4.2.1. Subtle Perception Manipulations (Redirection Gains)

These modify the mapping between physical and virtual motion. The following figure (Figure 2 from the original paper) visualizes these gains.

Figure 2 (from the original paper): six panels relating virtual and physical motion, contrasting virtual distance ($T_{vir}$) with physical distance ($T_{phy}$), virtual paths with physical paths, and virtual height ($H_{vir}$) with physical height ($H_{phy}$), with arrows and labels marking each motion mode and parameter.

A. Translation Gain ($g_T$)

This scales the user's walking speed.

  • Concept: If $g_T > 1$, one physical step covers more virtual ground. If $g_T < 1$, the user moves slower in VR, compressing a large virtual distance into a short physical one.
  • Formula: $g_{T} = T_{vir} / T_{phy}$
  • Symbol Explanation:
    • $g_{T}$: The translation gain.
    • $T_{vir}$: The distance the avatar moves in the virtual space.
    • $T_{phy}$: The distance the user actually moves in the physical space.

B. Rotation Gain ($g_R$)

This scales the user's physical rotation (turning head or body).

  • Concept: Used to make the user turn more or less physically than they perceive virtually.
  • Formula: $g_{R} = R_{vir} / R_{phy}$
  • Symbol Explanation:
    • $g_{R}$: The rotation gain.
    • $R_{vir}$: The rotation angle in virtual space.
    • $R_{phy}$: The rotation angle in physical space.

C. Curvature Gain ($g_C$)

This is the most critical gain for steering.

  • Concept: It injects a rotation while the user is walking straight. To keep walking straight in VR, the user must physically turn in the opposite direction, resulting in a curved physical path.
  • Formula: $g_{C} = 1 / r$
  • Symbol Explanation:
    • $g_{C}$: The curvature gain.
    • $r$: The radius of the circular path the user walks along in the physical space.
    • Note: A larger $g_C$ means a tighter curve (smaller radius), which is harder to apply imperceptibly.

D. Bending Gain ($g_B$)

An extension of curvature gain for when the virtual path is also curved.

  • Concept: It maps a curved virtual path to a straight or differently curved physical path.
  • Formula: $g_{B} = g_{C} \cdot r_{vir} = r_{vir} / r_{phy}$
  • Symbol Explanation:
    • $g_{B}$: The bending gain.
    • $g_{C}$: The standard curvature gain applied.
    • $r_{vir}$: The radius of the curved path in virtual space.
    • $r_{phy}$: The radius of the curved path in physical space.

E. Gradient Gain ($g_G$)

  • Concept: Simulates walking up or down a slope virtually while walking on a flat physical floor.
  • Formula: $g_{G} = T_{hit} / T_{phy}$
  • Symbol Explanation:
    • $T_{hit}$: The height (elevation change) of the virtual slope.
    • $T_{phy}$: The horizontal distance walked in physical space.
    • Logic: The avatar moves $g_T \cdot T_{phy}$ horizontally and $g_G \cdot T_{phy}$ vertically.

F. Vertical Gain ($g_V$)

  • Concept: Used for vertical motions like jumping, crouching, or stretching. It scales the height of the movement.
  • Formula: $g_{V} = H_{vir} / H_{phy}$
  • Symbol Explanation:
    • $H_{vir}$: Vertical distance moved in virtual space.
    • $H_{phy}$: Vertical distance moved in physical space.
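
Since each gain above is a simple ratio of virtual to physical motion, they can be collected into small helper functions. This is a minimal sketch with variable names of my own choosing, not code from the paper.

```python
def translation_gain(t_vir, t_phy):
    """g_T = T_vir / T_phy: scales walked distance."""
    return t_vir / t_phy

def rotation_gain(r_vir, r_phy):
    """g_R = R_vir / R_phy: scales turning angle."""
    return r_vir / r_phy

def curvature_gain(r_phy):
    """g_C = 1 / r: inverse radius of the physical arc the user walks."""
    return 1.0 / r_phy

def bending_gain(r_vir, r_phy):
    """g_B = r_vir / r_phy: maps a curved virtual path to a curved physical path."""
    return r_vir / r_phy

def gradient_gain(t_hit, t_phy):
    """g_G = T_hit / T_phy: virtual elevation gained per unit of physical walking."""
    return t_hit / t_phy

def vertical_gain(h_vir, h_phy):
    """g_V = H_vir / H_phy: scales vertical motion (jumping, crouching)."""
    return h_vir / h_phy

# Example: a 22 m physical radius corresponds to g_C ≈ 0.045 m^-1,
# the curvature detection threshold reported by Steinicke et al. [24].
print(round(curvature_gain(22.0), 3))  # 0.045
```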

4.2.2. Overt Perception Manipulations (Resets)

When subtle gains fail (e.g., the user is about to hit a wall), the system must stop the user. This is a Reset. The strategy is to pause the virtual experience and force the user to reorient physically without moving virtually.

The following figure (Figure 3 from the original paper) illustrates three common reset techniques:

Figure 3 (from the original paper): the three reset techniques, (a) Freeze-Backup, (b) Freeze-Turn, and (c) 2:1-Turn, each reorienting the user in the physical space while the virtual viewpoint is frozen or compensated, so that free walking can continue within the confined room.

  1. Freeze-Backup: User stops, screen freezes (position only), user walks backward physically.
  2. Freeze-Turn: User stops, screen freezes (position and rotation), user turns $180^\circ$ physically.
  3. 2:1-Turn: User rotates physically $180^\circ$, but the virtual camera rotates $360^\circ$ (gain of 2). The user ends up facing the safe direction physically but the same direction virtually.
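
As an illustration of the 2:1-Turn, the following sketch freezes virtual translation and doubles physical rotation until the user has turned $180^\circ$ physically; the class structure and angle bookkeeping are assumptions, not the paper's implementation.

```python
import math

class TwoToOneTurnReset:
    """Minimal sketch of a 2:1-Turn reset: while active, a rotation gain of 2
    is applied and virtual translation is held, so a 180° physical turn maps
    to a 360° virtual turn. Angles are in radians; details are illustrative."""

    def __init__(self):
        self.turned = 0.0      # accumulated physical rotation during the reset
        self.active = False

    def start(self):
        self.turned = 0.0
        self.active = True

    def update(self, delta_phys_yaw, virt_yaw):
        """Advance the reset by one frame of physical rotation."""
        if not self.active:
            return virt_yaw
        self.turned += abs(delta_phys_yaw)
        virt_yaw += 2.0 * delta_phys_yaw          # rotation gain g_R = 2
        if self.turned >= math.pi:                # 180° physical turn completed
            self.active = False
        return virt_yaw
```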

4.2.3. Virtual Space Manipulation

Instead of manipulating the user, these methods manipulate the world geometry.

  • Change Blindness: Moving a door when the user isn't looking so that walking through it leads them away from a physical wall.

  • Impossible Spaces: Overlapping virtual rooms. For example, two virtual rooms might occupy the same physical space. The layout changes dynamically as the user walks through a corridor (Figure 4).

    Figure 4 (from the original paper): example scenes (a)-(c) showing how the mapping between the user's virtual and physical positions is adjusted as they walk, enabling exploration of a larger virtual space.

4.3. Redirection Controller Methods (The "Brains")

The controller determines which gain to apply and how much.

4.3.1. Generalized Controllers

These do not know the user's destination. They use heuristics to steer the user to a "safe" target.

  • Steer-to-Center (S2C): Always steers the user toward the center of the physical room.

  • Steer-to-Orbit (S2O): Steers the user to walk along a circular path (orbit) around the center.

  • Steer-to-Multiple (S2M): Uses multiple safe targets.

    Figure 6 from the paper shows the flowchart for calculating the maximum rotation in these methods:

    Fig. 6. The flowchart of the generalized redirection controller method proposed by Hodgson et al. [46]: from the user's linear and angular velocities, the baseline, linear, and angular rotation rates are computed; their maximum is then scaled and smoothed to produce the current rotation rate as output.

Figure 7 illustrates the different steering targets (Center, Orbit, Multiple targets):

Figure 7 (from the original paper): the user's physical walking path versus the virtual path under different steering targets (center, orbit, and multiple targets) within a small physical space.
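
A minimal Steer-to-Center sketch, assuming a 2D tracked position and heading: the controller bends the user's physical path toward the room center by returning a signed curvature, capped at a commonly cited imperceptible bound. The sign convention and the cap are illustrative choices, not the published algorithm.

```python
import numpy as np

MAX_CURVATURE = 0.045       # m^-1, a typical imperceptible curvature bound [24]

def steer_to_center_curvature(phys_pos, phys_heading, room_center=(0.0, 0.0)):
    """Return a signed curvature (1/r) that bends the user's physical path
    toward the room center, in the spirit of Steer-to-Center."""
    to_center = np.asarray(room_center, dtype=float) - np.asarray(phys_pos, dtype=float)
    heading = np.array([np.cos(phys_heading), np.sin(phys_heading)])
    # Signed angle between the current heading and the direction to the target.
    cross = heading[0] * to_center[1] - heading[1] * to_center[0]
    dot = float(np.dot(heading, to_center))
    angle_to_target = np.arctan2(cross, dot)
    # Steer harder the more the user is heading away from the target, capped.
    return float(np.clip(angle_to_target, -1.0, 1.0) * MAX_CURVATURE)
```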

4.3.2. Predictive Controllers

These use user data (gaze, head orientation) to predict the future path (Short-term or Long-term).

  • Short-term: Predicts the next few seconds using kinematic models.
  • Long-term: Guesses the user's destination (e.g., a door in the virtual room) and plans a path that aligns with the physical constraints.
  • FORCE / MPCRed: These are advanced planning algorithms that treat redirection as an optimization problem, minimizing a cost function (e.g., risk of hitting a wall).
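
The planning idea behind FORCE/MPCRed can be sketched as a small receding-horizon search: roll a simple constant-speed motion model forward under each candidate curvature and keep the one with the lowest predicted boundary-violation cost. The motion model, cost function, and horizon length here are assumptions for illustration, not the published algorithms.

```python
import numpy as np

def predicted_cost(phys_pos, phys_heading, curvature, room_half=3.0,
                   speed=1.0, dt=0.5, horizon=6):
    """Roll a constant-speed motion model forward under a candidate curvature
    and penalize proximity to a square room boundary (|x|, |y| <= room_half)."""
    pos = np.array(phys_pos, dtype=float)
    yaw = phys_heading
    cost = 0.0
    for _ in range(horizon):
        yaw += curvature * speed * dt                      # injected physical turn
        pos += speed * dt * np.array([np.cos(yaw), np.sin(yaw)])
        margin = room_half - np.max(np.abs(pos))           # distance to nearest wall
        cost += 1.0 / max(margin, 1e-3)                    # grows sharply near walls
    return cost

def plan_curvature(phys_pos, phys_heading, candidates=(-0.045, 0.0, 0.045)):
    """Pick the candidate curvature with the lowest predicted cost."""
    return min(candidates, key=lambda c: predicted_cost(phys_pos, phys_heading, c))
```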

4.3.3. Novel Controller: Artificial Potential Fields (APF)

Borrowed from robotics, this method treats the user as a robot, the goal as an attractor, and walls as repulsors.

  • Principle: The user is pushed away from walls by a "force."

  • Methodology Flow:

    1. Define an Attractive Potential $U_{attractive}$ (pulls the user toward a target, if one exists).
    2. Define an Avoidance Potential $U_{avoidance}$ (pushes the user away from obstacles/walls).
    3. Sum them to obtain the total potential function $U(x)$.
    4. The gradient of this function determines the steering direction.
  • Formulas: The attractive function (distance to the goal): $U_{attractive}(x) = \frac{1}{2} \| x - x_{goal} \|$

    • $x$: The user's current physical position.

    • $x_{goal}$: The target position in physical space.

      The avoidance function (inverse distance to obstacles): $U_{avoidance}(x) = \sum_{ob \in O} \frac{1}{\| x - x_{ob} \|}$

    • $O$: The set of all obstacles/boundaries.

    • $x_{ob}$: The nearest point on the obstacle $ob$.

    • Logic: As $\| x - x_{ob} \|$ gets smaller (closer to a wall), the value explodes, creating a high "potential" that repels the user.

      The total potential function (Eq. 3 in the paper): $U(x) = \frac{1}{2} \| x - x_{goal} \| + \sum_{ob \in O} \frac{1}{\| x - x_{ob} \|}$
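
A numerical sketch of this APF formulation, representing obstacles as sample points and following the negative gradient of $U(x)$ by finite differences to obtain a steering direction; published APF controllers use more elaborate obstacle and boundary models.

```python
import numpy as np

def total_potential(x, x_goal, obstacle_points):
    """U(x) = 1/2 * ||x - x_goal|| + sum_ob 1 / ||x - x_ob|| (Eq. 3 in the paper)."""
    x = np.asarray(x, dtype=float)
    attract = 0.5 * np.linalg.norm(x - np.asarray(x_goal, dtype=float))
    avoid = sum(1.0 / max(np.linalg.norm(x - np.asarray(ob, dtype=float)), 1e-6)
                for ob in obstacle_points)
    return attract + avoid

def steering_direction(x, x_goal, obstacle_points, eps=1e-4):
    """Approximate -grad U(x) by central finite differences; the user is steered
    along this direction (a simplification of published APF controllers)."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros(2)
    for i in range(2):
        step = np.zeros(2)
        step[i] = eps
        grad[i] = (total_potential(x + step, x_goal, obstacle_points)
                   - total_potential(x - step, x_goal, obstacle_points)) / (2 * eps)
    direction = -grad
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else direction
```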

4.3.4. Novel Controller: Deep Learning (DL)

Uses neural networks to learn optimal steering policies.

  • LSTM (Long Short-Term Memory): Used for Path Prediction. It takes a sequence of past positions/orientations and predicts where the user will be in the future (e.g., 100 frames later).
  • Reinforcement Learning (Q-Learning): Used for Action Selection.
    • State: User's position, orientation, distance to walls.
    • Action: Which gain to apply (Translation, Rotation, Curvature) and how much.
    • Reward: Positive for walking distance without collision; negative for hitting a wall (reset).
    • Goal: The AI learns a policy that maximizes the distance walked between resets.
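
A tabular Q-learning sketch of this state/action/reward loop, with a toy action set of gain choices; the state encoding, action set, and reward magnitudes are illustrative assumptions rather than the setups used in the cited works.

```python
import random
from collections import defaultdict

ACTIONS = ["curve_left", "no_gain", "curve_right"]   # simplified gain choices

def q_learning_step(Q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One tabular Q-learning update; Q maps (state, action) -> value."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def choose_action(Q, state, epsilon=0.1):
    """Epsilon-greedy action selection over the gain choices."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward(distance_walked, reset_triggered):
    """Reward shaping described in the review: distance walked per step,
    with a large penalty when a reset (wall contact) is triggered."""
    return -10.0 if reset_triggered else distance_walked

Q = defaultdict(float)   # Q-table; states would encode position, heading, wall distances
```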

4.3.5. Multi-User Controllers

Managing multiple people in one physical room.

  • Subdivision: Split the room into static zones (Figure 8a). Safe but limits space.

  • Common Center: Both steered to the same center (Figure 8b). High collision risk.

  • Offset Centers: Each user has a different steering target (Figure 8c).

    Figure 8 (from the original paper): (a) User 1 and User 2 restricted to separate subspaces; (b) both users steered with respect to a common point in the physical space; (c) users steered toward mutually offset targets.
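
As a sketch of the offset-centers idea, each user's steering target can be displaced from the room center away from the other user, so that S2C-style steering keeps their physical paths apart; the offset rule below is my assumption, not a published controller.

```python
import numpy as np

def offset_steering_targets(pos_a, pos_b, room_center=(0.0, 0.0), offset=1.0):
    """Give each user a steering target displaced from the room center,
    away from the other user. The fixed offset distance is illustrative."""
    pos_a = np.asarray(pos_a, dtype=float)
    pos_b = np.asarray(pos_b, dtype=float)
    center = np.asarray(room_center, dtype=float)
    away = pos_a - pos_b                          # direction from user B to user A
    norm = np.linalg.norm(away)
    away = away / norm if norm > 0 else np.array([1.0, 0.0])
    target_a = center + offset * away
    target_b = center - offset * away
    return target_a, target_b
```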


5. Experimental Setup

Since this is a review paper, it summarizes the experimental setups used across the field rather than a single specific experiment.

5.1. Datasets and Scenarios

  • Live User Studies: Participants wear HMDs and walk in physical spaces (e.g., labs, gymnasiums). Example: Walking in a $6m \times 6m$ tracked area while exploring a virtual maze.
  • Simulations: Virtual agents (simulated users) walk through thousands of randomized paths to test algorithms without human fatigue.
  • Data Types: Position (x, y, z), Orientation (yaw, pitch, roll), Walking Velocity, Head Gaze.

5.2. Evaluation Metrics

The paper identifies standard metrics used to judge if an RDW method is "good."

5.2.1. Mean Number of Wall Contacts (Resets)

  • Conceptual Definition: Counts how often the redirection failed, forcing the user to stop and reset because they reached a physical boundary. Lower is better.
  • Mathematical Formula: $N_{resets} = \frac{\sum_{i=1}^{K} r_i}{K}$, where $r_i$ is the number of resets in trial $i$ and $K$ is the number of trials. (Note: this formula is implied rather than stated explicitly; the paper describes the metric textually in Section 5.1.)

5.2.2. Mean Rate of Redirection

  • Conceptual Definition: Measures how "intense" the manipulation was. High values mean the user was spun around rapidly, which might be noticeable or nauseating. Lower is usually better for subtlety.
  • Measurement: Average degrees of rotation injected per second.

5.2.3. Mean Physical Distance to Center

  • Conceptual Definition: Measures safety. If the user stays close to the center of the room, they are far from walls.
  • Mathematical Formula: $D_{center} = \frac{1}{T} \sum_{t=0}^{T} \| P_t - C_{room} \|$
    • $P_t$: The user's physical position at time $t$.
    • $C_{room}$: The center coordinate of the room.
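
Both trajectory-based metrics are straightforward to compute from logged data; a minimal sketch, assuming per-trial reset counts and a list of 2D physical positions.

```python
import numpy as np

def mean_resets(resets_per_trial):
    """Mean number of wall contacts (resets) across K trials; lower is better."""
    return sum(resets_per_trial) / len(resets_per_trial)

def mean_distance_to_center(positions, room_center=(0.0, 0.0)):
    """Average distance of logged physical positions P_t to the room center."""
    positions = np.asarray(positions, dtype=float)
    return float(np.mean(np.linalg.norm(positions - np.asarray(room_center), axis=1)))

# Hypothetical usage:
# mean_resets([3, 5, 2])                                  -> 3.33 resets per trial
# mean_distance_to_center([(0.5, 0.2), (1.0, -0.3)])      -> mean offset from center
```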

5.2.4. Unwanted Side Effects (SS, SLM, CL)

  • SS (Simulator Sickness): Measured via the SSQ (Simulator Sickness Questionnaire). Scores symptoms like nausea, dizziness.
  • SLM (Spatial Learning and Memory): Tests if RDW confuses the user's mental map. Measured by asking users to point to the start location after walking.
  • CL (Cognitive Load): Mental effort. Measured by dual-tasking (e.g., walking while counting backward).

5.3. Baselines

The standard baseline for any new RDW controller is usually:

  • Steer-to-Center (S2C): The most robust and simple generalized controller.

  • No Redirection (1:1 Walking): To measure the baseline sickness/presence.


6. Results & Analysis

6.1. Core Results Analysis

The review aggregates findings from multiple studies:

  1. Gain Thresholds (Human Perception Limits): Humans are surprisingly bad at detecting rotation discrepancies.

    • Translation: Users tolerate scaling of roughly $-14\%$ to $+26\%$ (from the Table 1 range 0.86 - 1.26).
    • Rotation: Users can be turned about $20\%$ more or less than they perceive (0.67 - 1.24).
    • Curvature: Users can be steered along a circle of radius $\approx 22\,m$ while thinking they are walking straight ($g_C \approx 0.045\,m^{-1}$).
    • Impact of Gender: Males are generally more sensitive to curvature gains (harder to trick) than females (Table 1 citations [40], [79]).
  2. Controller Performance:

    • S2C vs. S2O: S2O (Orbit) performs better in constrained virtual spaces (like corridors), while S2C (Center) is better for open roaming.
    • APF Methods: APF-based controllers outperform S2C in irregularly shaped physical rooms (e.g., L-shaped rooms) because they dynamically account for complex boundary geometry using the repulsive potential field $U_{avoidance}$.
    • Deep Learning: Reinforcement learning agents (e.g., Strauss et al. [108]) significantly increase the distance walked between resets compared to S2C, as they learn to "plan ahead" rather than just reacting.

6.2. Data Presentation (Detection Thresholds)

The following table summarizes the detection thresholds for different gains found in the literature. This data is critical for setting up any RDW system (Table 1 from original paper).

Note: In the original table, the Gain column has merged cells (e.g., "Translation" spans multiple rows); the gain label is repeated on each row below, and "-" marks an unspecified condition.

| Gain | Estimation Method | Condition | Detection Threshold | Source |
| --- | --- | --- | --- | --- |
| Translation | 2AFC-MCS | - | 0.86 - 1.26 | Steinicke et al. 2010 [24] |
| Translation | 2AFC-MA | - | 0.90 - 1.12 | Chen et al. 2019 [74] |
| Translation | 2AFC-MCS | rich cue, invisible feet | 0.86 - 1.26 | Kruse et al. 2018 [27] |
| Translation | 2AFC-MCS | rich cue, visible feet | 0.88 - 1.15 | Kruse et al. 2018 [27] |
| Translation | 2AFC-MCS | low cue, visible feet | 0.73 - 1.25 | Kruse et al. 2018 [27] |
| Translation | 2AFC-MCS | driving | 0.94 - 1.36 | Bruder et al. 2012 [76] |
| Translation | 2AFC-MCS | 360° VR-based robot | 0.94 - 1.10 | Zhang 2021 [77] |
| Translation | 2AFC-MCS | - | 0.85 - 1.45 | Steinicke et al. 2008 [75] |
| Translation | 2AFC-MCS | - | 0.87 - 1.29 | Bruder et al. 2012 [76] |
| Rotation | 2AFC-MCS | - | 0.67 - 1.24 | Steinicke et al. 2010 [24] |
| Rotation | 2AFC-MA | - | 0.85 - 1.11 | Chen et al. 2019 [74] |
| Rotation | 2AFC-MCS | body turning | 0.84 - 1.31 | Bruder et al. 2009 [78] |
| Rotation | 2AFC-MCS | low densities (4 objects) | 0.81 - 1.19 | Paludan et al. 2016 [28] |
| Rotation | 2AFC-MCS | high densities (16 objects) | 0.82 - 1.20 | Paludan et al. 2016 [28] |
| Rotation | 2AFC-MCS | 40° FOV | 0.81 - 1.47 | Williams et al. 2019 [79] |
| Rotation | 2AFC-MCS | 110° FOV | 0.67 - 1.61 | Williams et al. 2019 [79] |
| Rotation | 2AFC-MCS | 110° FOV, females | 0.72 - 1.90 | Williams et al. 2019 [79] |
| Rotation | 2AFC-MCS | 110° FOV, males | 0.62 - 1.40 | Williams et al. 2019 [79] |
| Rotation | 2AFC-MCS | audio | 0.88 - 1.20 | Serafin et al. 2013 [80] |
| Rotation | 2AFC-MCS | static audio | 0.80 - 1.11 | Nilsson et al. 2016 [34] |
| Rotation | 2AFC-MCS | moving audio | 0.79 - 1.08 | Nilsson et al. 2016 [34] |
| Rotation | 2AFC-MCS | - | 0.88 - 1.33 | Steinicke et al. 2008 [75] |
| Rotation | 2AFC-MCS | - | 0.68 - 1.26 | Bruder et al. 2012 [76] |
| Rotation | 2AFC-MCS | driving | 0.77 - 1.26 | Bruder et al. 2012 [76] |
| Rotation | 2AFC-MCS | 360° VR-based robot | 0.88 - 1.09 | Zhang 2021 [77] |
| Curvature | 2AFC-MCS | - | 0.045 m^-1 | Steinicke et al. 2010 [24] |
| Curvature | 2AFC-MA | - | 0.06 m^-1 | Chen et al. 2019 [74] |
| Curvature | 2AFC-MCS | velocity = 0.75 m/s | 0.095 m^-1 | Neth et al. 2012 [29] |
| Curvature | 2AFC-MCS | velocity = 1.00 m/s | 0.042 m^-1 | Neth et al. 2012 [29] |
| Curvature | 2AFC-MCS | velocity = 1.25 m/s | 0.037 m^-1 | Neth et al. 2012 [29] |
| Curvature | 2AFC-QUEST | female | 0.116 m^-1 | Nguyen et al. 2018 [40] |
| Curvature | 2AFC-QUEST | male | 0.093 m^-1 | Nguyen et al. 2018 [40] |
| Curvature | Maximum likelihood | - | 0.156 m^-1 | Grechkin et al. 2016 [65] |
| Curvature | 2AFC-MCS | audio | 0.036 m^-1 | Serafin et al. 2013 [80] |
| Curvature | 2AFC-MCS | audio, vision | 0.167 m^-1 | Meyer et al. 2016 [81] |

(Note: The table is condensed for clarity but retains key data points illustrating the variability in thresholds across conditions like gender, velocity, and sensory input.)
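
In practice, a controller typically clamps the gains it applies so they stay within such reported ranges; a minimal sketch using the Steinicke et al. [24] values quoted in the table (which study's bounds to adopt is an application-specific choice).

```python
DETECTION_THRESHOLDS = {          # from Table 1, Steinicke et al. 2010 [24]
    "translation": (0.86, 1.26),
    "rotation":    (0.67, 1.24),
}
MAX_CURVATURE = 0.045             # m^-1, same source

def clamp_gain(kind, requested):
    """Clamp a requested gain so it stays within the reported imperceptible range."""
    if kind == "curvature":
        return max(-MAX_CURVATURE, min(MAX_CURVATURE, requested))
    low, high = DETECTION_THRESHOLDS[kind]
    return max(low, min(high, requested))

# e.g. clamp_gain("rotation", 1.5) -> 1.24; clamp_gain("curvature", 0.1) -> 0.045
```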

6.3. Parameter Analysis

The review highlights that detection thresholds are not static.

  • Walking Speed: Faster walking ($1.25\,m/s$) lowers the curvature detection threshold ($0.037\,m^{-1}$) compared to slow walking ($0.75\,m/s$, $0.095\,m^{-1}$), meaning only a gentler curve can be applied imperceptibly at higher speeds, while slow walkers tolerate tighter curves.

  • Field of View (FOV): A wider FOV ($110^\circ$) generally increases sensitivity (harder to trick) compared to a narrow FOV ($40^\circ$).

  • Audio/Haptics: Adding spatial audio or passive haptics (touching a wall) can alter these thresholds, sometimes making users more sensitive to mismatches.


7. Conclusion & Reflections

7.1. Conclusion Summary

This paper provides a "state-of-the-art" map for Redirected Walking. It establishes that RDW is a mature field with defined manipulation mechanics (gains, resets) and standardized metrics. The shift from simple heuristic controllers (Steer-to-Center) to intelligent, data-driven controllers (APF, Deep Learning) marks the current frontier, offering safer and more seamless exploration of infinite virtual worlds within finite physical rooms.

7.2. Limitations & Future Work

  • Safety vs. Subtlety: No method currently guarantees zero collisions without using "Resets," which break immersion. Finding a method that is both fully subtle and fully safe is the "Holy Grail" that remains elusive.
  • Inconsistent Thresholds: As seen in Table 1, different studies report different detection thresholds. There is a lack of a unified standard for measuring these, making it hard to tune universal RDW systems.
  • Side Effects: We still don't fully understand the long-term effects of RDW on Simulator Sickness or Cognitive Load. Does constantly tricking the vestibular system cause fatigue?
  • Multi-User: Redirecting multiple people in one room is still in its infancy, with high collision risks.

7.3. Personal Insights & Critique

  • The APF Potential: The application of Artificial Potential Fields is particularly promising. By mathematically modeling the room boundaries as "repulsive forces," RDW becomes applicable to any room shape (like an L-shaped corridor in a home), not just square labs. This is crucial for consumer VR adoption.
  • The "Subtlety" Trap: The paper rightly discusses "Applicable Thresholds" vs. "Detection Thresholds." Academic research focuses on making RDW invisible (detection threshold). However, for gaming, users might accept noticeable redirection (applicable threshold) if it means they don't have to stop and reset. Future research should perhaps focus less on "invisibility" and more on "acceptability."
  • Complexity of Integration: While DL methods perform well, they require training data and significant compute power. The paper could have critiqued the real-time feasibility of running complex LSTM models on standalone headsets (like Quest 2/3) versus PC-VR.
