# Definition
$$\begin{aligned}
G_{t:t+n} :&= \sum\limits_{k=0}^{n-1}\gamma^{k} R_{t+k+1} + \gamma^{n}V(S_{t+n}) \\
&= R_{t+1} + \gamma R_{t+2} + \dots + \gamma^{n-1} R_{t+n} + \gamma^{n}V(S_{t+n})
\end{aligned}$$
n-step returns can be viewed as approximations of the full [[Return]], truncated after $n$ steps and corrected for the missing remaining terms by the [[State-Value Function|state-value]] estimate $V(S_{t+n})$.
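As a minimal sketch, the definition translates directly into code. The function below is hypothetical (the name `n_step_return` and its inputs, a list of sampled rewards and a scalar bootstrap value, are assumptions for illustration):

```python
def n_step_return(rewards, v_bootstrap, gamma):
    """Sketch of the n-step return G_{t:t+n}.

    rewards      -- observed rewards [R_{t+1}, ..., R_{t+n}]
    v_bootstrap  -- value estimate V(S_{t+n}) correcting for the truncation
    gamma        -- discount factor
    """
    g = 0.0
    # Discounted sum of the n observed rewards: sum_{k=0}^{n-1} gamma^k R_{t+k+1}
    for k, r in enumerate(rewards):
        g += gamma**k * r
    # Replace the truncated tail of the return with the bootstrapped estimate
    g += gamma ** len(rewards) * v_bootstrap
    return g
```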
# Examples
- $G_{t:t+1}$: (one-step) [[Temporal Difference Learning|TD]] target
- $G_{t:t+n}$: [[n-Step TD]] target
- $G_{t:\infty}$: [[Monte Carlo Method|MC]] target
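These special cases can be checked against the sketch above; the episode, rewards, and value estimates here are made up for illustration:

```python
# Hypothetical sampled rewards R_{t+1}, R_{t+2}, R_{t+3} from a finished episode
rewards = [1.0, 0.0, 2.0]
gamma = 0.9

# One-step TD target G_{t:t+1}: one reward plus bootstrap from V(S_{t+1})
td_target = n_step_return(rewards[:1], v_bootstrap=0.5, gamma=gamma)

# Two-step target G_{t:t+2}: two rewards plus bootstrap from V(S_{t+2})
two_step_target = n_step_return(rewards[:2], v_bootstrap=0.5, gamma=gamma)

# MC target: all rewards of the finished episode, no bootstrap term needed
mc_target = n_step_return(rewards, v_bootstrap=0.0, gamma=gamma)
```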