The long-established default in the modeling space grew out of the transition from pushing pixels to manipulating points in space. Those points were originally captured with a lightpen scanning physical models, or written by hand in code as point clouds. Over time, the ambiguity of bare points in space led to the dominance of the triangle, or surface, representation. Today, we explore whether this is still the best format for our tools to work in. What are the advantages and challenges of moving to volumetric tools?
Note: This is a large topic, so there will be a second part next week.
Key Takeaways
- Triangles have superior mathematical properties for rendering efficiency.
- Volumes are better sources of truth and easier to sample.
- All volumes have a surface, but not all surfaces have a valid volume.
- GPU advancements now enable direct work with volumetric data in creative workflows.
- Future 3D modeling likely shifts to volume-first tools, with triangles remaining in final rendering.
- Hybrid approaches will bridge the gap between volumetric and surface representations.
The Fundamentals: Surface vs. Volume Representations
Comparing Cubes and Spheres: A Tale of Two Geometries
Before we delve into why triangles have become the dominant format, we need to explore the more fundamental problem space of surface vs volume representation.
In this ideal use case, a cube can easily be defined with 12 triangles with no loss of data. A sphere, however, cannot be represented exactly with purely planar triangles, so we either accept some degree of data loss or move to more implicit surface representations. The scaling behaviour also differs: volume data grows cubically in the worst case, while surface data commonly grows as a squared progression, though its worst case is unbounded. In practical terms that worst case is generally avoidable for surfaces, so volumes end up as the more predictable, though heavier-weight, data format.
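To make that scaling argument concrete, here is a rough back-of-the-envelope sketch in Python. The subdivision scheme and grid resolutions are illustrative assumptions, not measurements from any particular tool.

```python
# Back-of-the-envelope scaling: surface (triangles) vs volume (dense voxels).
# Illustrative only; real meshes and grids carry far more metadata than this.

def sphere_triangle_count(subdivisions: int) -> int:
    # An icosphere starts at 20 triangles and quadruples each subdivision,
    # so surface data grows roughly quadratically with edge resolution.
    return 20 * 4 ** subdivisions

def dense_voxel_count(resolution: int) -> int:
    # A dense grid covering the sphere's bounding box grows cubically.
    return resolution ** 3

for step in range(1, 6):
    tris = sphere_triangle_count(step)
    # Roughly match linear resolution: each subdivision doubles edge density.
    res = 2 ** step * 8
    voxels = dense_voxel_count(res)
    print(f"subdiv {step}: ~{tris:>10,} triangles vs {voxels:>14,} dense voxels ({res}^3)")
```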
The Spectrum of 3D Data Formats
As you can see, the space of surfaces is DOMINATED by triangles, but it's important to realize the breadth of the ecosystem out there. Here is a brief overview of most of the formats that come to mind:
The Reign of Triangles: Understanding the Status Quo
Mathematical Superiority of Triangles
The mathematical superiority of triangles is hard to argue against:
- Given any three points that are not collinear (in a line), you get a valid triangle
- Barycentric coordinates make interpolation easy and fast
- Given a fixed 2D perspective, triangles are easy to sort and organize
- Line-to-triangle intersection is fast, robust, and predictable
The first two are strong arguments which always hold, but interestingly, the third point is only relevant when moving data from 3D to a framebuffer. The last one has wide application to creative tooling, as we often want to project onto or select a surface (sketched below).
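Here is a minimal sketch of the first and last points in Python with NumPy: barycentric interpolation of per-vertex values and a standard Möller–Trumbore ray-triangle test. The triangle and ray values are made up for the example.

```python
import numpy as np

def barycentric_interpolate(p, a, b, c, va, vb, vc):
    """Interpolate per-vertex values va/vb/vc at point p inside triangle abc."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    return u * va + v * vb + w * vc

def ray_triangle(origin, direction, a, b, c, eps=1e-8):
    """Moller-Trumbore: returns distance t along the ray, or None on a miss."""
    e1, e2 = b - a, c - a
    pvec = np.cross(direction, e2)
    det = e1 @ pvec
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    tvec = origin - a
    u = (tvec @ pvec) * inv
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, e1)
    v = (direction @ qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ qvec) * inv
    return t if t > eps else None

# Made-up triangle and ray for illustration.
a, b, c = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
t = ray_triangle(np.array([0.2, 0.2, 1.0]), np.array([0., 0., -1.]), a, b, c)
print("hit distance:", t)
print("interpolated:", barycentric_interpolate(np.array([0.2, 0.2, 0.0]), a, b, c, 1.0, 2.0, 3.0))
```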
Historical Context: The Triangle Limit
Another significant reason for the dominance of triangles, which we touched on in the intro, is the data compression factor. Any artist of a certain age will be strongly aware of the triangle limit. Working through any abstraction over the real, final data costs artistic control, which was very obvious when our limits were so low.
It is my firm belief that when talking about graphics engines and final render pipelines, triangles will remain a dominant format long into the future. Even with the increased usage of ray tracing, any format will always need a good, robust option for exporting to triangles.
The Volume Advantage: Challenging the Surface Paradigm
The False Equivalence: Surfaces vs. Volumes
For any volume there exists a valid surface,
but not every surface represents a valid volume
This is the key argument for why our toolchains should maintain volumetrics for as long as possible. Even real-time graphics engines often need volumetrics for lighting, physics, or other calculations.
Conversion Challenges: From Surface to Volume and Back
Due to the dominance of surface representations in our tool chain, the methods for generating volumes from surfaces are very mature and well-established. However, they often rely on assumptions and are fragile in the face of non-watertight meshes and other open cases which break the conversion, and the algorithms are often very expensive for high-quality volumes.
The conversion of volumes to surfaces is less well represented. The most well-understood method, marching cubes, was locked behind a software patent for a key period of graphics development, stunting the growth of these methods. It is no longer patented, and superior methods like dual contouring now exist.
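As a sketch of the volume-to-surface direction, here is a minimal example using scikit-image's marching_cubes as one readily available implementation; the sphere SDF and grid resolution are arbitrary choices for illustration.

```python
# Sample a sphere SDF on a dense grid and extract a triangle mesh with
# marching cubes. Resolution and radius are arbitrary for the sketch.
import numpy as np
from skimage.measure import marching_cubes

res = 64
coords = np.linspace(-1.0, 1.0, res)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.6   # signed distance to a sphere

# Extract the zero level set (the surface) as vertices and triangle indices.
verts, faces, normals, _ = marching_cubes(sdf, level=0.0, spacing=(2 / res,) * 3)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```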
The Complexity Spectrum: From Explicit to Implicit Representations
Explicit Representations: Triangles and Voxels
Triangles and dense voxel grids sit at the most explicit end of the spectrum. There is a one-to-one mapping of data, which is very predictable to process. As covered in earlier articles, though, our bottleneck on modern hardware is typically related to I/O, so even when a format takes a little more time to process, the tradeoff is worth it in many cases.
Complex Explicit Representations: Textures and UV Mapping
Moving along the spectrum, we find more complex formats, such as textured surfaces using displacement maps and other UV-mapped 2D data. These are still explicit representations, but they require even more data lookups as well as some calculation. UV mapping also adds a fragility of its own, which is beyond the scope of this article to discuss.
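To show what those extra lookups and calculations look like, here is a small hypothetical sketch: bilinearly sampling a displacement map at a UV coordinate and pushing a vertex along its normal. The map, vertex, and scale are invented for the example.

```python
import numpy as np

def sample_bilinear(texture, u, v):
    """Bilinear lookup of a 2D texture at normalised UV coordinates."""
    h, w = texture.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = texture[y0, x0] * (1 - fx) + texture[y0, x1] * fx
    bot = texture[y1, x0] * (1 - fx) + texture[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def displace(position, normal, uv, displacement_map, scale=0.1):
    """Push a vertex along its normal by the displacement sampled at its UV."""
    return position + normal * scale * sample_bilinear(displacement_map, *uv)

# Made-up data: a noisy 8x8 displacement map and a single vertex.
disp = np.random.default_rng(0).random((8, 8))
print(displace(np.array([0., 0., 0.]), np.array([0., 0., 1.]), (0.3, 0.7), disp))
```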
Implicit Representations: NURBS and Signed Distance Fields
Another approach trades additional data lookups for calculation: methods like NURBS and 2D signed distance fields. NURBS (Non-Uniform Rational B-Splines) are mathematical surfaces with a high degree of precision and flexibility. They are defined by control points, weights, and knot vectors, allowing for smooth, easily manipulable surfaces. Car manufacturers and anyone looking for precision surfaces really like these models, but they tend to be very computationally expensive.
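A minimal sketch of the 2D signed distance field idea: no geometry is stored at all, and the shape is simply the zero level set of a function we evaluate on demand. The circle and box functions below are the standard textbook forms, with made-up query points.

```python
import math

def sd_circle(x, y, cx, cy, r):
    """Signed distance from point (x, y) to a circle: negative inside."""
    return math.hypot(x - cx, y - cy) - r

def sd_box(x, y, cx, cy, hx, hy):
    """Signed distance to an axis-aligned box with half-extents (hx, hy)."""
    dx, dy = abs(x - cx) - hx, abs(y - cy) - hy
    outside = math.hypot(max(dx, 0.0), max(dy, 0.0))
    inside = min(max(dx, dy), 0.0)
    return outside + inside

# No stored geometry: the shape is the zero level set of a function we evaluate.
print(sd_circle(0.5, 0.0, 0.0, 0.0, 1.0))    # -0.5 -> inside the circle
print(sd_box(2.0, 0.0, 0.0, 0.0, 1.0, 1.0))  #  1.0 -> one unit outside the box
```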
Game Engine Approaches: BSP Trees and CSG
Game engines have traditionally preferred BSP Trees and CSG, typically building BSP from CSG. Constructive Solid Geometry (CSG) combines simple shapes to create complex 3D models, while Binary Space Partitioning (BSP) trees efficiently subdivide space for rendering and collision detection. In Quake, CSG was used to design levels, and BSP trees generated from these designs enabled real-time rendering of complex 3D environments on mid-1990s hardware by quickly determining visible polygons from any viewpoint.
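As a sketch of the CSG booleans, here is the signed-distance formulation of union, intersection, and subtraction. Note this is not how Quake's brush-and-BSP pipeline was implemented; it is just a compact way to show the same boolean logic, with made-up shapes standing in for level brushes.

```python
import math

# CSG booleans expressed on signed distances: union, intersection, subtraction.
def union(d1, d2):        return min(d1, d2)
def intersection(d1, d2): return max(d1, d2)
def subtraction(d1, d2):  return max(d1, -d2)   # d1 minus d2

def sd_sphere(p, center, radius):
    return math.dist(p, center) - radius

def room_with_doorway(p):
    # Hypothetical "level brush": a hollow spherical shell with a doorway carved out.
    shell = subtraction(sd_sphere(p, (0, 0, 0), 4.0),
                        sd_sphere(p, (0, 0, 0), 3.5))
    doorway = sd_sphere(p, (4.0, 0, 0), 1.0)
    return subtraction(shell, doorway)

print(room_with_doorway((0.0, 3.75, 0.0)))  # negative: inside the wall shell
print(room_with_doorway((4.0, 0.0, 0.0)))   # positive: carved away by the doorway
```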
Advanced Volumetric Representations: Sparse Voxel Grids
While CSG works well for coarse level design, it does not scale well to artistic shape and form. Another approach, taken by photogrammetry and simulation formats like OpenVDB, is the sparse voxel grid: volumetric data is represented efficiently by storing only the relevant, non-empty voxels in a hierarchical data structure.
Sparse voxel grids or level sets divide space into a grid but only allocate memory and compute resources for areas containing actual data or near the surface of objects. This allows for highly detailed and complex shapes to be represented without the memory overhead of storing empty space, making it possible to handle much higher resolution volumes than traditional dense grids.
Adaptive grids are also really interesting: by storing data at each graph point you can reduce the node count, and you can adaptively sample the grid for the desired level of detail. This maps well to certain GPU texture optimisation modes. Typically, though, a blend of approaches is used, with a low-level lookup beneath a high-level grid, and finally a scene tree for the sparsest, highest level of the data.
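Here is a toy sketch of the sparse idea, nowhere near OpenVDB's real multi-level tree but enough to show the principle: fixed-size dense blocks are allocated lazily, only where voxels are actually written.

```python
import numpy as np

BLOCK = 8  # voxels per block edge; real systems (e.g. OpenVDB) use a deeper tree

class SparseGrid:
    """Toy sparse voxel grid: dense 8^3 blocks allocated only where written."""

    def __init__(self, background=0.0):
        self.blocks = {}          # (bx, by, bz) -> dense numpy block
        self.background = background

    def _split(self, x, y, z):
        return (x // BLOCK, y // BLOCK, z // BLOCK), (x % BLOCK, y % BLOCK, z % BLOCK)

    def set(self, x, y, z, value):
        key, local = self._split(x, y, z)
        block = self.blocks.setdefault(
            key, np.full((BLOCK,) * 3, self.background, dtype=np.float32))
        block[local] = value

    def get(self, x, y, z):
        key, local = self._split(x, y, z)
        block = self.blocks.get(key)
        return self.background if block is None else float(block[local])

grid = SparseGrid()
grid.set(1000, 2000, 3000, 1.0)   # one far-away voxel costs one 8^3 block, not a dense grid
print(grid.get(1000, 2000, 3000), grid.get(0, 0, 0), len(grid.blocks))
```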
Beyond Binary: Occupancy and Density in 3D Representation
Occupancy: The Traditional Approach
The traditional triangle surface or voxel grid works on the concept of one-bit occupancy. There is either a surface or volume there, or there is not.
Density: Adding a New Dimension to 3D Data
Density doesn't really exist in most surface representations, though some volumetric capture data from medical imaging and volume simulation software creates it. Density is great for VFX use cases like clouds and fire simulation. It also has pretty awesome physics and sculpting properties. For physics simulation, light transport, and even gameplay, this is an interesting property. In the case of tools, you can squish and pull while maintaining volume, like real materials would.
This dynamic manipulation property is one I really wanted to explore in the future, though I am now unlikely to dedicate the time.
Material Properties: Transparency and Surface Interactions
There is a concept of material transparency. In the world of surfaces, we can use those additional data channels to lay on alpha or transparency, but it is fundamentally not a concept which maps well to our real-world understanding of light transport and physics. In most cases when we need to calculate surface interactions, we fake it or we infer a volume to calculate thickness, sometimes assuming a simple in-and-out model based on surface facing. In the world of volumes, we just say this volume or sub-volume has an index of refraction of X.
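As a sketch of why this is so natural in a volume, here is a minimal Beer-Lambert transmittance calculation: march along a ray, accumulate optical depth through a density field, and exponentiate. The density function and absorption coefficient are invented for illustration.

```python
import math

def density(x, y, z):
    # Made-up smooth density blob centred at the origin.
    return max(0.0, 1.0 - (x * x + y * y + z * z))

def transmittance(origin, direction, length, absorption=2.0, steps=128):
    """Beer-Lambert: T = exp(-absorption * integral of density along the ray)."""
    dt = length / steps
    optical_depth = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        p = tuple(o + d * t for o, d in zip(origin, direction))
        optical_depth += density(*p) * dt
    return math.exp(-absorption * optical_depth)

# A ray passing straight through the blob vs one grazing its edge.
print(transmittance((-2, 0, 0), (1, 0, 0), 4.0))
print(transmittance((-2, 0.9, 0), (1, 0, 0), 4.0))
```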
Performance and Practicality: Navigating the Tradeoffs
The Cache Compromise: Balancing Speed and Memory
When working with volumetric data, caching strategies become crucial for maintaining performance. Both volume-to-surface and surface-to-volume conversions can benefit from caching, reducing uncertainties and improving overall performance. However, caching comes with its own set of challenges:
- Cached data is heavy to move, potentially causing I/O bottlenecks.
- Cached data often needs reconstruction, which can be computationally expensive.
- Caches consume valuable video RAM, a limited resource on many systems.
Surface Regeneration: The Volumetric Editing Challenge
One of the key challenges when working with volumetric data is the need to regenerate surface representations when editing volumes. This process can be computationally intensive and may introduce latency in interactive editing scenarios. Several strategies can be employed to mitigate this issue, including incremental updates and multi-resolution extraction; a sketch of the incremental idea follows.
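This is a hedged sketch of the incremental approach, not how any particular tool (Dreams included) actually implements it: divide the volume into chunks, let each brush stroke mark the chunks it touches as dirty, and only re-extract the surface for those.

```python
CHUNK = 16  # voxels per chunk edge; the value is arbitrary for this sketch

class ChunkedVolume:
    """Track which chunks a brush stroke touched so only those get remeshed."""

    def __init__(self):
        self.dirty = set()

    def apply_brush(self, center, radius):
        # Mark every chunk overlapping the brush's bounding box as dirty.
        lo = [int((c - radius) // CHUNK) for c in center]
        hi = [int((c + radius) // CHUNK) for c in center]
        for cx in range(lo[0], hi[0] + 1):
            for cy in range(lo[1], hi[1] + 1):
                for cz in range(lo[2], hi[2] + 1):
                    self.dirty.add((cx, cy, cz))

    def remesh_dirty(self):
        # Placeholder for per-chunk surface extraction (e.g. marching cubes).
        remeshed = len(self.dirty)
        self.dirty.clear()
        return remeshed

vol = ChunkedVolume()
vol.apply_brush(center=(100, 40, 40), radius=10)
print("chunks to remesh:", vol.remesh_dirty())   # a handful, not the whole volume
```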
In Dreams we had a fast-path surface generation for responsive sculpting, then a more correct slow pass which ran in the background as the stroke completed. Most of the time, a user is idle in creative applications. That sounds strange, but even in a flow state, from a computer's point of view the time between brush strokes is massive.
An alternative approach is to avoid caching entirely and ray cast directly against the volume. I think graphics hardware is fast heading in that direction for realtime modelling applications, though when not working directly with surfaces, a caching approach is still likely to be superior in render times.
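A minimal sketch of that cache-free path, using sphere tracing against a signed distance function: each step advances by the distance the SDF guarantees is empty, so we reach the surface without ever building triangles. The scene is a single made-up sphere.

```python
import math

def scene_sdf(x, y, z):
    # Made-up scene: a single sphere of radius 1 at the origin.
    return math.sqrt(x * x + y * y + z * z) - 1.0

def sphere_trace(origin, direction, max_dist=20.0, eps=1e-4, max_steps=128):
    """March along the ray, stepping by the SDF value, until we hit or give up."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + d * t for o, d in zip(origin, direction))
        d = scene_sdf(*p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d                # the SDF guarantees this much empty space ahead
        if t > max_dist:
            break
    return None               # miss

print(sphere_trace((0, 0, -5), (0, 0, 1)))   # ~4.0: hits the front of the sphere
print(sphere_trace((0, 3, -5), (0, 0, 1)))   # None: ray passes above it
```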
GPU-Centric Workflows: New Horizons in 3D Processing
Recent advancements in GPU technology and memory management have opened up new possibilities for volumetric workflows:
Direct Memory Access on GPUs: Modern GPUs can load memory directly from the hard drive into GPU memory using Direct Storage APIs. This capability alleviates many issues related to data transfer and management.
Keeping calculations in video RAM: By performing most calculations directly in GPU memory, we can avoid expensive host-to-device memory transfers (a small sketch of this pattern follows below).
GPU-based volumetric operations: Implementing volume editing and surface generation algorithms directly on the GPU can significantly improve performance for interactive workflows.
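Here is a small sketch of that keep-it-on-the-device pattern, using PyTorch purely as a convenient stand-in for a real GPU volume pipeline; the resolution and brush shape are arbitrary.

```python
# Sketch of the "stay in video RAM" pattern using PyTorch tensors as a stand-in
# for a real GPU volume pipeline. Falls back to CPU if no GPU is available.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Allocate the volume directly on the device and keep it there.
res = 128
volume = torch.zeros((res, res, res), device=device)

# "Brush" edit, computed entirely on the device: add density inside a sphere.
coords = torch.linspace(-1.0, 1.0, res, device=device)
x, y, z = torch.meshgrid(coords, coords, coords, indexing="ij")
dist = torch.sqrt((x - 0.2) ** 2 + y ** 2 + z ** 2)
volume += torch.clamp(0.3 - dist, min=0.0)

# Only a small summary crosses back to the CPU, not the whole volume.
print("occupied voxels:", int((volume > 0).sum().item()))
```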
However, working entirely on the GPU comes with its own set of challenges:
- Limited memory: GPU memory is typically more expensive and limited in size compared to system memory.
- Complexity: All operations must be written in GPU code, which can increase development complexity.
- CPU-GPU synchronization: Managing data coherence between CPU and GPU can introduce additional complexity.
Looking Forward: Emerging Solutions and Future Directions
Lightweight Volumetric Formats: Addressing the Data Challenge
To address the challenges of storing and transferring volumetric data, several lightweight formats have emerged:
- Implicit Signed Distance Functions (SDFs): These provide a compact representation of complex volumetric data.
- Constructive Solid Geometry (CSG): Allows for efficient representation of certain types of geometry through boolean operations.
These formats offer reduced storage requirements compared to explicit voxel grids and can be more efficient for certain types of geometric operations and queries. However, they may require more computation to evaluate compared to explicit representations, and certain operations might be more complex or time-consuming. In a future article I do want to talk about some of the data advantages of implicit SDFs over CSG, though again this article is already long.
Hybrid Approaches: Combining Surface and Volumetric Strengths
As we look to the future, it's likely that hybrid approaches, leveraging the strengths of both surface and volumetric representations, will play an increasingly important role in creative tools. These hybrid methods could potentially offer the best of both worlds: the editing flexibility of volumetric data with the rendering efficiency of surface representations.
Conclusion: Shaping the Future of 3D Creative Tools
The choice between surface and volumetric representations in creative tools is not a simple one. Each approach offers distinct advantages and challenges, and the optimal choice often depends on the specific requirements of the application at hand.
Surface representations, particularly triangles, have long dominated the field due to their mathematical properties and rendering efficiency. However, volumetric approaches offer advantages in terms of representing complex geometries, handling multi-scale detail, and providing a more intuitive editing experience in some scenarios.
However, due to the imbalance of that false equivalence, volumes are fundamentally a better source of truth than surfaces. Computers are now fast enough to work directly with volumes, which opens up new workflows.
As hardware capabilities continue to evolve, particularly in the realm of GPU computing and memory management, we may see a shift towards more volumetric workflows. The ability to work directly with volumetric data on the GPU, coupled with advanced caching strategies and lightweight volumetric formats, could potentially overcome many of the traditional barriers to volumetric adoption.
Ultimately, the ongoing evolution of 3D representation methods continues to shape the field of creative tools and computer graphics. As we push the boundaries of what's possible in 3D modeling and rendering, we can expect to see continued innovation in data structures, algorithms, and workflows that bridge the gap between surface and volumetric approaches.
The future of 3D modeling for realtime applications is complex, but I believe it is time for a fundamental shift to a volume-first creative tools workflow. Dreams, Modeller, and their cohorts are the first in a wave of change to come, though triangles will still often be the last geometry before rasterisation.