1. Introduction
Although Denoising Diffusion Probabilistic Models (DDPM) have shown great success in generative tasks, their sampling is slow because it requires many iterative steps (e.g., 1000), each involving a forward pass through the neural network.
To address this issue, researchers from Stanford University proposed Denoising Diffusion Implicit Models (DDIM), which accelerate sampling from a pre-trained DDPM without any additional training. DDIM achieve this by introducing a non-Markovian diffusion process that permits far fewer sampling steps while maintaining high-quality outputs.
2. Key Concepts
DDIM follow the same notation as DDPM, where $\mathrm x_t$ represents the noisy data at time step $t$ and $\epsilon_\theta(\mathrm x_t, t)$ is the neural network predicting the noise component. The key difference lies in the sampling process. Recall that the DDPM generative model is defined by marginalizing over the reverse trajectory:
$$p_\theta(\mathrm x_0) = \int p_\theta(\mathrm x_{0:T}) \, \mathrm d\mathrm x_{1:T} \tag{1}$$
where
$$p_\theta(\mathrm x_{0:T}) = p(\mathrm x_T) \prod_{t=1}^{T} p_\theta(\mathrm x_{t-1}|\mathrm x_t) \tag{2}$$
DDPM use a special property of the forward process:
$$q(\mathrm x_{t}|\mathrm x_0) = \mathcal N\left(\mathrm x_t; \sqrt{\bar \alpha_t}\, \mathrm x_0, (1-\bar \alpha_t) \mathbf I\right) \tag{3}$$
so that $\mathrm x_t$ can be reparameterized as:
$$\mathrm x_t = \sqrt{\bar \alpha_t}\, \mathrm x_0 + \sqrt{1-\bar \alpha_t}\, \epsilon, \quad \epsilon \sim \mathcal N(0, \mathbf I) \tag{4}$$
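To make Eq. (4) concrete, here is a minimal NumPy sketch of the one-shot noising step. The linear $\beta$ schedule, the 0-indexed timesteps, and the array shapes are illustrative assumptions rather than anything prescribed by DDPM or DDIM:

```python
import numpy as np

# Illustrative linear beta schedule (an assumption; any valid schedule works).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)      # \bar{\alpha}_t, 0-indexed here

def q_sample(x0, t, rng):
    """Draw x_t ~ q(x_t | x_0) in one shot via Eq. (4)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

# Example: noise a dummy 32x32 "image" to step t = 500.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((3, 32, 32))
xt, eps = q_sample(x0, t=500, rng=rng)
```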
Therefore, the posterior distribution (which the reverse model is trained to match) can be derived via Bayes' rule:
$$\begin{aligned}
q(\mathrm x_{t-1}|\mathrm x_t,\mathrm x_0) &= \frac{q(\mathrm x_t|\mathrm x_{t-1}, \mathrm x_0)\, q(\mathrm x_{t-1}|\mathrm x_0)}{q(\mathrm x_t|\mathrm x_0)} \\
&= \mathcal N\left(\mathrm x_{t-1}; \tilde \mu_t(\mathrm x_t, \mathrm x_0), \tilde \beta_t \mathbf I\right)
\end{aligned} \tag{5}$$
The derivation of $\tilde \mu_t(\mathrm x_t, \mathrm x_0)$ and $\tilde \beta_t$ relies on the Markovian assumption and the reparameterization of $\mathrm x_t$.
The key idea of DDIM is to generalize this posterior distribution to a non-Markovian form with arbitrary transition steps.
3. Forward Process
The forward process of DDPM is defined as:
$$q(\mathrm x_{1:T}|\mathrm x_0) = \prod_{t=1}^{T} q(\mathrm x_t|\mathrm x_{t-1}) \tag{6}$$
where
$$q(\mathrm x_t|\mathrm x_{t-1}) = \mathcal N\left(\mathrm x_t; \sqrt{1-\beta_t}\, \mathrm x_{t-1}, \ \beta_t \mathbf I\right) \tag{7}$$
The property we are going to use is that $\mathrm x_t$ can be sampled directly from $\mathrm x_0$:
$$q(\mathrm x_t|\mathrm x_0) = \mathcal N\left(\mathrm x_t; \sqrt{\bar \alpha_t}\, \mathrm x_0, (1-\bar \alpha_t) \mathbf I\right) \tag{8}$$
Consider an arbitrary pair of time steps $k, s \in \{0, 1, \ldots, T\}$ with $s \leq k - 1$. We want to find the distribution $q(\mathrm x_s|\mathrm x_k, \mathrm x_0)$ so that we can skip steps during sampling.
However, unlike the Markovian case, we cannot derive this distribution directly. Instead, we assume that it is Gaussian:
$$q(\mathrm x_s |\mathrm x_k, \mathrm x_0) = \mathcal N(\lambda\,\mathrm x_0 + m\,\mathrm x_k, \sigma^2 \mathbf I) \tag{9}$$
where $\lambda$, $m$, and $\sigma$ are coefficients to be determined. Substituting the reparameterization of $\mathrm x_k$ from Eq. (4) into this assumed form gives:
$$\begin{aligned}
\mathrm x_s &= \lambda \mathrm x_0 + m \mathrm x_k + \sigma \epsilon, \quad \epsilon \sim \mathcal N(0, \mathbf I) \\
&= \lambda \mathrm x_0 + m\left(\sqrt{\bar \alpha_k}\, \mathrm x_0 + \sqrt{1-\bar \alpha_k}\, \epsilon'\right) + \sigma \epsilon, \quad \epsilon' \sim \mathcal N(0, \mathbf I) \\
&= \left(\lambda + m \sqrt{\bar \alpha_k}\right) \mathrm x_0 + m \sqrt{1-\bar \alpha_k}\, \epsilon' + \sigma \epsilon\\
&\sim \mathcal N\left(\left(\lambda + m \sqrt{\bar \alpha_k}\right) \mathrm x_0,\ \left(m^2 (1-\bar \alpha_k) + \sigma^2\right) \mathbf I\right)
\end{aligned} \tag{10}$$
To ensure consistency with the forward process, we need to match the mean and variance with those of $q(\mathrm x_s|\mathrm x_0)$:
$$q(\mathrm x_s|\mathrm x_0) = \mathcal N\left(\sqrt{\bar \alpha_s}\, \mathrm x_0, (1-\bar \alpha_s) \mathbf I\right) \tag{11}$$
This gives us two equations:
$$\begin{aligned}
\lambda + m \sqrt{\bar \alpha_k} &= \sqrt{\bar \alpha_s} \\
m^2 (1-\bar \alpha_k) + \sigma^2 &= 1-\bar \alpha_s
\end{aligned} \tag{12}$$
We treat $\sigma$ as a hyperparameter to control the stochasticity of the sampling process. By solving the above equations, we obtain:
$$\begin{aligned}
m &= \sqrt{\frac{(1-\bar \alpha_s) - \sigma^2}{1-\bar \alpha_k}} \\
\lambda &= \sqrt{\bar \alpha_s} - m \sqrt{\bar \alpha_k}
\end{aligned} \tag{13}$$
With these coefficients, we can define the DDIM sampling step from $\mathrm x_k$ to $\mathrm x_s$ as:
$$\begin{aligned}
q(\mathrm x_s |\mathrm x_k, \mathrm x_0) &= \mathcal N\left(\mathrm x_s; \lambda \mathrm x_0 + m \mathrm x_k, \sigma^2 \mathbf I\right)\\
&= \mathcal N\left(\mathrm x_s; \sqrt{\bar \alpha_s}\, \mathrm x_0 + \sqrt{(1-\bar \alpha_s) - \sigma^2} \cdot \frac{\mathrm x_k - \sqrt{\bar \alpha_k}\, \mathrm x_0}{\sqrt{1-\bar \alpha_k}}, \sigma^2 \mathbf I\right)
\end{aligned} \tag{14}$$
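The following sketch numerically checks Eqs. (13)–(14): with $\lambda$ and $m$ chosen as above, sampling $\mathrm x_k \sim q(\mathrm x_k|\mathrm x_0)$ and then $\mathrm x_s \sim q(\mathrm x_s|\mathrm x_k, \mathrm x_0)$ reproduces the marginal $q(\mathrm x_s|\mathrm x_0)$ of Eq. (11). The linear $\beta$ schedule, the scalar toy data point, and the variable names `lam` and `m` are assumptions made only for this demo:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)                    # illustrative linear schedule
alpha_bars = np.cumprod(1.0 - betas)

k, s = 800, 600                                       # an arbitrary pair with s <= k - 1
sigma = 0.1                                           # stochasticity hyperparameter

# Coefficients from Eq. (13).
m = np.sqrt((1.0 - alpha_bars[s] - sigma**2) / (1.0 - alpha_bars[k]))
lam = np.sqrt(alpha_bars[s]) - m * np.sqrt(alpha_bars[k])

# Monte-Carlo check: x_k ~ q(x_k | x_0), then x_s ~ q(x_s | x_k, x_0) per Eq. (14)
# should have the marginal q(x_s | x_0) of Eq. (11).
x0 = 1.7                                              # a scalar toy "data point"
n = 200_000
xk = np.sqrt(alpha_bars[k]) * x0 + np.sqrt(1.0 - alpha_bars[k]) * rng.standard_normal(n)
xs = lam * x0 + m * xk + sigma * rng.standard_normal(n)

print(xs.mean(), np.sqrt(alpha_bars[s]) * x0)         # means should agree
print(xs.var(), 1.0 - alpha_bars[s])                  # variances should agree
```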
The magnitude of $\sigma$ controls how stochastic the forward process is; when $\sigma \to 0$, we reach an extreme case where, once we observe $\mathrm x_0$ and $\mathrm x_k$ for some $k$, the earlier state $\mathrm x_s$ becomes known and fixed.
DDIM thus replace the Markovian forward process with a non-Markovian one and assume a particular form for the transition distribution, which raises the question of whether a model trained under the DDPM forward process can still work well under this new one.
However, what DDPM truly rely on during training is the reparameterization $\mathrm x_t = \sqrt{\bar \alpha_t}\, \mathrm x_0 + \sqrt{1-\bar \alpha_t}\, \epsilon$, and this property is left unchanged by the DDIM forward process. The trained model can therefore still be used for DDIM sampling, as we explain in the next section.
4. Sampling Process
Since we have trained a DDPM model to predict $\epsilon_\theta(\mathrm x_t, t)$, we can use it to estimate $\mathrm x_{t-1}$ from $\mathrm x_t$:
$$\mathrm x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(\mathrm x_t - \frac{1-\alpha_t}{\sqrt{1-\bar \alpha_t}} \epsilon_\theta(\mathrm x_t, t)\right) + \sigma_t \epsilon, \quad \epsilon \sim \mathcal N(0, \mathbf I) \tag{15}$$
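As a point of comparison, here is a hedged sketch of one DDPM reverse step following Eq. (15). `epsilon_model` is a hypothetical stand-in for the trained noise predictor, the schedule is the same illustrative linear one, timesteps are 0-indexed, and the choice $\sigma_t^2 = \tilde\beta_t$ matches the variance discussed in Section 5:

```python
import numpy as np

def ddpm_step(xt, t, epsilon_model, betas, alphas, alpha_bars, rng):
    """One DDPM reverse step, Eq. (15), with sigma_t^2 = beta_tilde_t (0-indexed t)."""
    eps_theta = epsilon_model(xt, t)                   # placeholder for the trained network
    mean = (xt - (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t]) * eps_theta) / np.sqrt(alphas[t])
    if t == 0:
        return mean                                    # common convention: no noise at the final step
    sigma_t = np.sqrt((1.0 - alpha_bars[t - 1]) / (1.0 - alpha_bars[t]) * betas[t])
    return mean + sigma_t * rng.standard_normal(xt.shape)

# Usage with a dummy noise predictor standing in for a trained network:
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)
alphas, alpha_bars = 1.0 - betas, np.cumprod(1.0 - betas)
x_prev = ddpm_step(rng.standard_normal((3, 32, 32)), 999,
                   lambda x, t: np.zeros_like(x), betas, alphas, alpha_bars, rng)
```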
Essentially, what we have learned in DDPM is to predict $\mathrm x_{t-1}$ from $\mathrm x_t$:
$$p_\theta(\mathrm x_{t-1}|\mathrm x_t) = \mathcal N\left(\mathrm x_{t-1}; \tilde \mu_\theta(\mathrm x_t, t), \sigma_t^2 \mathbf I\right) \tag{16}$$
The goal of DDPM training is to minimize the difference between $p_\theta(\mathrm x_{t-1}|\mathrm x_t)$ and $q(\mathrm x_{t-1}|\mathrm x_t,\mathrm x_0)$. In doing so, DDPM reparameterize $\mathrm x_t$ in terms of $\mathrm x_0$ and $\epsilon$:
$$\mathrm x_t = \sqrt{\bar \alpha_t}\, \mathrm x_0 + \sqrt{1-\bar \alpha_t}\, \epsilon, \quad \epsilon \sim \mathcal N(0, \mathbf I) \tag{17}$$
Thus, in essence, DDPM learn the noise $\epsilon$, which lets us recover $\mathrm x_0$ from $\mathrm x_t$:
$$\mathrm x_0 = \frac{\mathrm x_t - \sqrt{1-\bar \alpha_t}\, \epsilon}{\sqrt{\bar \alpha_t}} \tag{18}$$
With the learned noise model, during DDIM sampling we can substitute the predicted noise $\epsilon_\theta(\mathrm x_k, k)$ for $\epsilon$ to obtain an estimate of $\mathrm x_0$:
$$\hat{\mathrm x}_0(\mathrm x_k, k) = \frac{\mathrm x_k - \sqrt{1-\bar \alpha_k}\, \epsilon_\theta(\mathrm x_k, k)}{\sqrt{\bar \alpha_k}} \tag{19}$$
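Eq. (19) translates directly into a small helper; again `epsilon_model` and `alpha_bars` are assumed placeholders for a trained network and a precomputed $\bar\alpha$ schedule:

```python
import numpy as np

def predict_x0(xk, k, epsilon_model, alpha_bars):
    """Estimate of x_0 from x_k and the predicted noise, Eq. (19)."""
    eps_theta = epsilon_model(xk, k)                   # placeholder for the trained network
    return (xk - np.sqrt(1.0 - alpha_bars[k]) * eps_theta) / np.sqrt(alpha_bars[k])
```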
Then we can plug this estimate into Eq. (14) to predict $\mathrm x_s$ from $\mathrm x_k$:
$$\begin{aligned}
\mathrm x_s &= \sqrt{\bar \alpha_s}\, \hat{\mathrm x}_0 + \sqrt{(1-\bar \alpha_s) - \sigma^2} \cdot \frac{\mathrm x_k - \sqrt{\bar \alpha_k}\, \hat{\mathrm x}_0}{\sqrt{1-\bar \alpha_k}} + \sigma \epsilon, \quad \epsilon \sim \mathcal N(0, \mathbf I)\\
&= \sqrt{\bar \alpha_s} \left(\frac{\mathrm x_k - \sqrt{1-\bar \alpha_k}\, \epsilon_\theta(\mathrm x_k, k)}{\sqrt{\bar \alpha_k}}\right) + \sqrt{(1-\bar \alpha_s) - \sigma^2} \cdot \frac{\mathrm x_k - \sqrt{\bar \alpha_k} \left(\frac{\mathrm x_k - \sqrt{1-\bar \alpha_k}\, \epsilon_\theta(\mathrm x_k, k)}{\sqrt{\bar \alpha_k}}\right)}{\sqrt{1-\bar \alpha_k}} + \sigma \epsilon\\
&= \sqrt{\bar \alpha_s} \left(\frac{\mathrm x_k - \sqrt{1-\bar \alpha_k}\, \epsilon_\theta(\mathrm x_k, k)}{\sqrt{\bar \alpha_k}}\right) + \sqrt{(1-\bar \alpha_s) - \sigma^2} \cdot \sqrt{1-\bar \alpha_k} \cdot \frac{\epsilon_\theta(\mathrm x_k, k)}{\sqrt{1-\bar \alpha_k}} + \sigma \epsilon\\
&= \sqrt{\bar \alpha_s} \left(\frac{\mathrm x_k - \sqrt{1-\bar \alpha_k}\, \epsilon_\theta(\mathrm x_k, k)}{\sqrt{\bar \alpha_k}}\right) + \sqrt{(1-\bar \alpha_s) - \sigma^2} \cdot \epsilon_\theta(\mathrm x_k, k) + \sigma \epsilon
\end{aligned} \tag{20}$$
By iteratively applying this process with a reduced number of steps, we can efficiently generate high-quality samples from the diffusion model without the need for additional training.
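Putting Eqs. (19), (20), and (22) together, here is a minimal sketch of a full DDIM sampling loop over a strided sub-sequence of timesteps. The evenly spaced schedule, the `epsilon_model` placeholder, and the generalization of $\beta_t$ to $1 - \bar\alpha_k / \bar\alpha_s$ for non-adjacent steps are assumptions of this sketch rather than formulas stated above:

```python
import numpy as np

def ddim_sample(epsilon_model, shape, alpha_bars, n_steps=50, eta=0.0, seed=0):
    """Iterate the DDIM update of Eq. (20) over a strided sub-sequence of timesteps."""
    rng = np.random.default_rng(seed)
    T = len(alpha_bars)
    timesteps = np.linspace(T - 1, 0, n_steps, dtype=int)   # e.g. 50 of 1000 steps, descending

    x = rng.standard_normal(shape)                           # x_T ~ N(0, I)
    for i in range(len(timesteps) - 1):
        k, s = timesteps[i], timesteps[i + 1]                # current step and the earlier target step
        abar_k, abar_s = alpha_bars[k], alpha_bars[s]

        eps_theta = epsilon_model(x, k)
        x0_hat = (x - np.sqrt(1.0 - abar_k) * eps_theta) / np.sqrt(abar_k)   # Eq. (19)

        # Eq. (22), with beta_t generalized to 1 - abar_k / abar_s for non-adjacent steps (assumption).
        sigma = eta * np.sqrt((1.0 - abar_s) / (1.0 - abar_k) * (1.0 - abar_k / abar_s))

        # Eq. (20): deterministic direction toward x_s plus optional noise.
        x = (np.sqrt(abar_s) * x0_hat
             + np.sqrt(1.0 - abar_s - sigma**2) * eps_theta
             + sigma * rng.standard_normal(shape))
    return x

# Usage with a dummy noise predictor standing in for a trained network:
alpha_bars = np.cumprod(1.0 - np.linspace(1e-4, 0.02, 1000))
sample = ddim_sample(lambda x, t: np.zeros_like(x), (3, 32, 32), alpha_bars, n_steps=50, eta=0.0)
```

With `eta=0.0` the update is deterministic; with `eta=1.0` and adjacent timesteps the generalized factor reduces to $\beta_t$, recovering the DDPM case discussed in the next section.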
5. The Choice of σ
When $\sigma = 0$, the sampling process becomes deterministic, meaning that for a given initial noise input, the output will always be the same. This is beneficial in scenarios where reproducibility is important, or when we want the generated samples to be consistent.
On the other hand, when $\sigma > 0$, the sampling process introduces stochasticity, allowing for more diverse outputs from the same initial noise input. This can be advantageous in creative applications where variability is desired, such as image generation and other generative tasks.
When $\sigma_t = \sqrt{\frac{1-\bar \alpha_{t-1}}{1-\bar \alpha_{t}}\beta_t}$, the DDIM sampling process is equivalent to that of DDPM.
To prove this, substitute this value of $\sigma$ into the DDIM sampling equation (taking $k = t$ and $s = t - 1$):
$$\begin{aligned}
\mathrm x_{t-1} &= \sqrt{\bar \alpha_{t-1}} \left(\frac{\mathrm x_t - \sqrt{1-\bar \alpha_t}\, \epsilon_\theta(\mathrm x_t, t)}{\sqrt{\bar \alpha_t}}\right) + \sqrt{(1-\bar \alpha_{t-1}) - \sigma^2} \cdot \epsilon_\theta(\mathrm x_t, t) + \sigma \epsilon \\
&= \frac{\sqrt{\bar \alpha_{t-1}}}{\sqrt{\bar \alpha_t}} \mathrm x_t - \frac{\sqrt{\bar \alpha_{t-1}} \sqrt{1-\bar \alpha_t}}{\sqrt{\bar \alpha_t}} \epsilon_\theta(\mathrm x_t, t) + \sqrt{(1-\bar \alpha_{t-1}) - \frac{1-\bar \alpha_{t-1}}{1-\bar \alpha_{t}}\beta_t} \cdot \epsilon_\theta(\mathrm x_t, t) + \sqrt{\frac{1-\bar \alpha_{t-1}}{1-\bar \alpha_{t}}\beta_t}\, \epsilon \\
&= \frac{1}{\sqrt{\alpha_t}}\left(\mathrm x_t - \frac{1-\alpha_t}{\sqrt{1-\bar \alpha_t}} \epsilon_\theta(\mathrm x_t, t)\right) + \sqrt{\frac{1-\bar \alpha_{t-1}}{1-\bar \alpha_{t}}\beta_t}\, \epsilon\\
&= \tilde \mu_\theta(\mathrm x_t, t) + \sigma_t \epsilon
\end{aligned} \tag{21}$$
where $\sigma_t = \sqrt{\frac{1-\bar \alpha_{t-1}}{1-\bar \alpha_{t}}\beta_t}$, i.e. $\sigma_t^2$ is exactly the posterior variance $\tilde \beta_t = \frac{1-\bar \alpha_{t-1}}{1-\bar \alpha_{t}}\beta_t$ used in DDPM.
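The equivalence in Eq. (21) is easy to sanity-check numerically. The sketch below compares the DDIM mean (with $\sigma_t^2 = \tilde\beta_t$) against the DDPM mean of Eq. (15) for one timestep, using a random vector in place of the network output; the schedule, the 0-indexed timestep, and the shapes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

T = 1000
betas = np.linspace(1e-4, 0.02, T)                    # illustrative schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

t = 700
xt = rng.standard_normal(5)
eps_theta = rng.standard_normal(5)                    # stands in for the network output

sigma = np.sqrt((1.0 - alpha_bars[t - 1]) / (1.0 - alpha_bars[t]) * betas[t])

# DDIM mean with s = t - 1 (first line of Eq. 21).
x0_hat = (xt - np.sqrt(1.0 - alpha_bars[t]) * eps_theta) / np.sqrt(alpha_bars[t])
ddim_mean = np.sqrt(alpha_bars[t - 1]) * x0_hat + np.sqrt(1.0 - alpha_bars[t - 1] - sigma**2) * eps_theta

# DDPM mean (Eq. 15).
ddpm_mean = (xt - (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t]) * eps_theta) / np.sqrt(alphas[t])

print(np.allclose(ddim_mean, ddpm_mean))              # True
```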
DDIM introduce a hyperparameter $\eta$ to control the level of stochasticity in the sampling process. The relationship between $\sigma$ and $\eta$ is defined as:
$$\sigma_t = \eta \sqrt{\frac{1-\bar \alpha_{t-1}}{1-\bar \alpha_{t}}\beta_t} \tag{22}$$
When $\eta = 0$, the sampling process is deterministic, while when $\eta = 1$, it becomes equivalent to the stochastic sampling of DDPM. By adjusting $\eta$, users can control the trade-off between sample diversity and fidelity according to their specific needs.
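A tiny helper for Eq. (22), using the same illustrative schedule as the earlier sketches, makes the two endpoints explicit:

```python
import numpy as np

def ddim_sigma(eta, t, alpha_bars, betas):
    """sigma_t from Eq. (22): eta interpolates between deterministic DDIM and stochastic DDPM."""
    return eta * np.sqrt((1.0 - alpha_bars[t - 1]) / (1.0 - alpha_bars[t]) * betas[t])

betas = np.linspace(1e-4, 0.02, 1000)                 # illustrative schedule
alpha_bars = np.cumprod(1.0 - betas)
print(ddim_sigma(0.0, 700, alpha_bars, betas))        # 0.0 -> deterministic DDIM update
print(ddim_sigma(1.0, 700, alpha_bars, betas))        # sqrt(beta_tilde_t) -> DDPM ancestral sampling
```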