Generative Models Under Geometric Constraints

Alex Rosen
Introduction
Diffusion models and DDPM [1] were originally inspired by random walks in non-equilibrium thermodynamics, where we define a process that gradually moves a training sample toward noise we call an equilibrium state [2]. This perspective unveiled a new framework for training a generative model: learn to reverse this process, moving from an equilibrium state to a high-concentration sample resembling our dataset.
It turns out that this point of view has many deep and intuitive properties, from the absence of a joint training objective to the reverse random walk encouraging a diverse set of samples. Still, one desirable property remains elusive: how do we bake constraints into the reverse process or the final sample? How might we sample a configuration of objects in a room with a guarantee that they don't overlap? My work attempts to pave a path forward for diffusion-style algorithms that facilitate sampling under geometric constraints.
Background on diffusion
DDPMs belong to a class of generative models whose sampling process involves moving backwards through a finite first-order Markov chain starting from an isotropic Gaussian noise equilibrium. Given the previous element in our chain (starting with a sample $x_0$ from our training set), we define the forward process for $t = 1, \dots, T$ as

$$q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\ \sqrt{1 - \beta_t}\, x_{t-1},\ \beta_t I\big),$$

which we can sample directly from $x_0$ as

$$x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I),$$

for $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$. This forward process is repeated for $T$ steps (usually 1000), and we choose the variance schedule $\beta_1, \dots, \beta_T$ so that the equilibrium $x_T$ is approximately standard normal.
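As a concrete sketch of this closed-form sample, the forward process takes a few lines of NumPy; the linear schedule and toy data below are illustrative assumptions, not a prescription:

```python
import numpy as np

def forward_sample(x0, t, betas, rng):
    """Sample x_t directly from x_0 via the closed form
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]          # \bar{alpha}_t
    eps = rng.standard_normal(x0.shape)        # eps ~ N(0, I)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Toy example: a linear beta schedule over T = 1000 steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)                    # stand-in "training sample"
x_mid = forward_sample(x0, 500, betas, rng)    # partially noised
x_end = forward_sample(x0, T - 1, betas, rng)  # approximately N(0, I)
```

With this schedule, $\bar{\alpha}_T$ is on the order of $10^{-5}$, so $x_T$ carries almost no information about $x_0$.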
At inference time, we start with a sample $x_T \sim \mathcal{N}(0, I)$ and seek to repeatedly sample $x_{t-1}$ given $x_t$ until we reach $x_0$. Using the reparameterization trick, we can write

$$x_0 = \frac{1}{\sqrt{\bar{\alpha}_t}}\big(x_t - \sqrt{1 - \bar{\alpha}_t}\, \epsilon\big)$$

for $\epsilon \sim \mathcal{N}(0, I)$, allowing us to show the mean of the reverse posterior

$$q(x_{t-1} \mid x_t, x_0)$$

is

$$\tilde{\mu}_t = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}}\, \epsilon\right).$$

So, we train a network $\epsilon_\theta(x_t, t)$ to predict this $\epsilon$ given $x_t$ and $t$, and sample $x_{t-1}$ as this mean with small additive Gaussian noise dependent on $t$.
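A minimal sketch of one such reverse step, assuming a stand-in `eps_model` in place of a trained network and the common choice $\sigma_t^2 = \beta_t$ for the additive noise:

```python
import numpy as np

def reverse_step(x_t, t, betas, eps_model, rng):
    """One DDPM ancestral step: form the posterior mean from the
    predicted noise, then add Gaussian noise scaled by sigma_t."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    eps_hat = eps_model(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mean                    # no noise on the final step
    sigma = np.sqrt(betas[t])          # one common choice of sigma_t
    return mean + sigma * rng.standard_normal(x_t.shape)

# Placeholder "network" that predicts zero noise everywhere.
eps_model = lambda x, t: np.zeros_like(x)
betas = np.linspace(1e-4, 0.02, 1000)
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
x_prev = reverse_step(x, 999, betas, eps_model, rng)
x_final = reverse_step(x, 0, betas, eps_model, rng)
```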
With the success of this method came many subsequent alterations, such as speeding up the reverse process [3], reasoning about a more suitable variance schedule [4], or performing the diffusion process in a latent space [5]. Relevant follow-up work also includes cold diffusion [6], which generalizes the idea of diffusion to non-Gaussian or even deterministic equilibria. A much simpler algorithm, cold diffusion attempts to predict back $x_0$, learning a restoration operator $R_\theta$ given some degradation $D$:

$$R_\theta(x_t, t) \approx x_0, \qquad x_t = D(x_0, t).$$

To sample an $x_0$, we start with $x_T$, predict $\hat{x}_0 = R_\theta(x_T, T)$, run this prediction through our forward process to produce $x_{T-1} = D(\hat{x}_0, T - 1)$, and repeat until timestep zero.
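The cold-diffusion loop above can be sketched directly; the blend-toward-zero degradation and its exact inverse below are toy stand-ins for a learned restoration model:

```python
import numpy as np

def cold_sample(x_T, T, restore, degrade):
    """Cold-diffusion sampling: predict x_0 with the restoration
    model, re-degrade it to the next timestep, and repeat."""
    x = x_T
    for t in range(T, 0, -1):
        x0_hat = restore(x, t)       # predict x_0 from the current x_t
        x = degrade(x0_hat, t - 1)   # push the prediction to timestep t-1
    return x

# Toy operators: degradation blends toward zero, and the "perfect"
# restorer simply inverts that blend.
degrade = lambda x0, t: (1.0 - t / 10.0) * x0
restore = lambda x, t: x / (1.0 - t / 10.0)
x0 = np.array([1.0, -2.0, 0.5])
x_T = degrade(x0, 9)
out = cold_sample(x_T, 9, restore, degrade)   # recovers x0
```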
Incorporating constraints into the reverse process and the final generation is typically problem-specific, resulting in a variety of different methods, often approximations. My algorithm is an attempt at solving problems with geometric constraints in greater generality.
Proposed Method
The formulations of diffusion in the previous section rely on an intermediate prediction of $x_0$, whether explicit with cold diffusion or implicit through the mean of a posterior with DDPM. In order to account for constraints during sampling or constraints in the final output, we need some process involving these constraints during training, which we can introduce by having a direct prediction back one timestep, as opposed to an intermediate prediction of, or sampling process through, $x_0$.
My method begins with removing the Markov chain from the diffusion process and replacing it with linear interpolation between equilibrium noise and training examples. At training time, we sample a timestep $t$ uniformly from $\{1, \dots, T\}$, a training example $x_0$, and an equilibrium sample $x_T$ from any deterministic or non-deterministic distribution $p_{\text{eq}}$. Lying on the line segment between $x_0$ and $x_T$, we define

$$x_t = \Big(1 - \frac{t}{T}\Big) x_0 + \frac{t}{T}\, x_T,$$

which approaches $x_0$ as $t \to 0$ and $x_T$ as $t \to T$. To devise a sampling algorithm that obtains $x_{t-1}$ given $x_t$, we can compute $x_t$ and $x_{t-1}$ using the same source of randomness $x_T$, and directly predict $x_{t-1}$ from $x_t$ without conditioning on $t$.
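A sketch of how such training pairs are formed under this interpolation; taking $p_{\text{eq}}$ to be a standard normal here is purely for illustration:

```python
import numpy as np

def training_pair(x0, x_eq, t, T):
    """Build (x_t, x_{t-1}) on the segment between a training example
    x0 and an equilibrium sample x_eq, sharing the same randomness."""
    x_t = (1.0 - t / T) * x0 + (t / T) * x_eq
    x_prev = (1.0 - (t - 1) / T) * x0 + ((t - 1) / T) * x_eq
    return x_t, x_prev

rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)      # stand-in training example
x_eq = rng.standard_normal(4)    # equilibrium draw, here N(0, I)
x_t, x_prev = training_pair(x0, x_eq, 500, 1000)
# A model would be trained to map x_t directly to x_prev.
```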
Now that we have a method to directly predict $x_{t-1}$, we can rephrase this prediction to incorporate constraints; instead of a model $f_\theta$ learning to output $x_{t-1}$ directly, suppose we learn an initial velocity vector $v_\theta(x_t)$ to apply to points in space. Then, we can feed these velocity vectors and initial positions to a simulation, run this simulation, and record the final positions as our prediction for $x_{t-1}$. That is, for a differentiable physics engine $S$ taking initial positions and velocities to final positions, a model $v_\theta$, and some loss $L$ between $x_{t-1}$ and our prediction, we substitute the gradient descent step on

$$L\big(f_\theta(x_t),\ x_{t-1}\big)$$

for a step on

$$L\big(S(x_t, v_\theta(x_t)),\ x_{t-1}\big).$$
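To make the substitution concrete, here is a toy example built from invented stand-ins: a "physics engine" that is just free motion for a fixed time, a trivial constant-velocity "model" $v_\theta(x) = \theta$, and a squared-error loss, with the gradient backpropagated through the engine by hand:

```python
import numpy as np

# Toy differentiable "physics engine": free motion for a fixed time dt.
dt = 0.1
simulate = lambda x, v: x + dt * v     # S(x_t, v) -> final positions

def loss_and_grad(theta, x_t, x_prev):
    """L(S(x_t, v_theta(x_t)), x_{t-1}) and dL/dtheta for the trivial
    model v_theta(x) = theta, differentiating through the engine."""
    pred = simulate(x_t, theta)
    diff = pred - x_prev
    loss = 0.5 * np.sum(diff ** 2)
    grad = dt * diff                   # chain rule: dL/dpred * dS/dv
    return loss, grad

x_t = np.array([0.0, 1.0])
x_prev = np.array([0.2, 0.8])
theta = np.zeros(2)
for _ in range(200):                   # plain gradient descent
    loss, grad = loss_and_grad(theta, x_t, x_prev)
    theta -= 10.0 * grad
# The learned velocity now carries x_t onto x_{t-1} through the engine.
```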
This introduction of $S$ allows the reverse process to move through space under constraints, such as object-to-object collisions, object-to-scene collisions, deformations, a background vector field, etc.
At inference time, this algorithm remains similar to DDPM and cold diffusion: we sample $x_T \sim p_{\text{eq}}$ and repeatedly compute $x_{t-1} = S(x_t, v_\theta(x_t))$ down to timestep zero.
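The inference loop can be sketched as follows; the free-motion engine and the origin-seeking velocity "model" are placeholder assumptions standing in for the trained components:

```python
import numpy as np

def sample(T, x_T, v_model, simulate):
    """Reverse process: predict velocities at each step and let the
    (constraint-aware) engine produce the next positions."""
    x = x_T
    for t in range(T, 0, -1):
        x = simulate(x, v_model(x, t))   # x_{t-1} = S(x_t, v_theta(x_t))
    return x

# Stand-ins: free motion, and a "model" that always points to the origin.
simulate = lambda x, v: x + 0.1 * v
v_model = lambda x, t: -x
x_T = np.array([1.0, -2.0, 4.0])
x_out = sample(10, x_T, v_model, simulate)   # contracts toward the origin
```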
Applications
Current focus: The applications I aim to target in a first publication are:
- 3D point cloud generation with spheres instead of points. The goal is to match the generation quality and diversity of current methods (for points only) while guaranteeing that spheres don't overlap. This application would make my algorithm easy to understand, and it would be a straightforward way to demonstrate that it holds up against DDPM.
- Layout planning—like the previous application but with more complicated shapes and geometry.
- Solving jigsaw or Voronoi diagram related puzzles.
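To illustrate the kind of non-overlap guarantee the sphere application targets, here is a hypothetical iterative pairwise-separation routine (not my physics engine, just a simple projection scheme for equal-radius spheres):

```python
import numpy as np

def separate_spheres(centers, radius, iters=100):
    """Push apart any pair of overlapping equal-radius spheres along
    their separating axis until no pair overlaps (or iters runs out)."""
    c = centers.copy()
    n = len(c)
    for _ in range(iters):
        moved = False
        for i in range(n):
            for j in range(i + 1, n):
                d = c[j] - c[i]
                dist = np.linalg.norm(d)
                if dist < 2 * radius:
                    push = (2 * radius - dist) / 2   # split overlap evenly
                    axis = d / dist if dist > 0 else np.array([1.0, 0.0, 0.0])
                    c[i] -= push * axis
                    c[j] += push * axis
                    moved = True
        if not moved:
            break
    return c

rng = np.random.default_rng(0)
centers = rng.standard_normal((6, 3)) * 0.1   # tightly packed start
out = separate_spheres(centers, radius=0.2)   # approximately non-overlapping
```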
Denoising in the real world: If performing inference in the real world rather than in simulation, the velocity vectors would be applied to the objects at positions $x_t$, and $x_{t-1}$ would be measured from the scene after some fixed period of time. The differentiable physics engine may also be used to introduce small sources of randomness during training-time simulations, to ensure final positions are not fully determined by the initial positions and velocities and our learned velocities are robust against discrepancies between simulation and the real world.
Image generation under geometric constraints: I've thought a lot about using this method to gain greater control over what diversity refers to in the context of image generation.
Figure 1 shows Figure 3 mapped to a normalized HSV color space, and Figure 2 shows the result of an iterative separation algorithm that transforms the points so that nearly all of them are 0.1 apart. Figure 4 shows the separated points of Figure 2 mapped back to the pixel domain. Obviously, Figure 4 looks terrible, but what's interesting is that the image now appears diverse in color: notice the changes in the blues of the sky and water.

With the right choice of the amount of separation and a large enough dataset, perhaps the algorithm I propose could go beyond current notions of diversity for generative models (outputs that look different or seem to encompass an entire distribution) and introduce notions of diversity in terms of the color spectrum across an image, or ratios of frequency components (using geometry in the Fourier domain to produce samples ranging from mostly low-frequency to mostly high-frequency content). This is something I would come back to after a first publication.