The Creativity-Determinism Paradox in Procedural Generation

This post is the second in our series on procedural generation in game design. The first post covered how game developers need to make judgement calls, using lessons from Failbetter Games’ Sunless Sea and Sunless Skies as examples. In this post, we elaborate on some of those takeaways using another example.

In 2016, indie studio Hello Games launched their game “No Man’s Sky.” Media coverage was tremendous. Sony Interactive Entertainment helped promote and publish the game, and expectations were high. The promise was that the player could travel a universe consisting of no less than 18 quintillion planets (that’s 18 zeros!), all of them procedurally generated in real time, only when the player visits them. Those who remembered the classic 1984 science-fiction exploration game “Elite” or one of its successors were thrilled by the thought of playing a game that was even vaster and more immersive, and all of this with up-to-date graphics and gameplay.

When the first version of the game came out, excitement soon turned to disappointment, although some praised the game for its technical achievements. Yes, you could travel to a virtually unlimited number of planets with plenty of flora and fauna. But there was a problem: players felt that these planets looked similar and that the gameplay was repetitive. One user wrote on a No Man’s Sky discussion board on July 31, 2018: “I have seen quite a few planets now in the starting galaxy Euclid and they really get boring after a while because the procedural generation is quite the same all the time” (link).

A screenshot from No Man’s Sky (Hello Games) taken in 2017

What Went Wrong?

Before I continue, I would like to mention that the game has evolved through multiple updates: a multiplayer component was added, the graphics were improved (including the actual algorithms for procedural generation), the user interface was changed, a VR option was added, and more. These updates led the game to receive much more favorable reviews. But let’s now look at what happened with the original release.

While the original release had a number of issues (including that announced features were missing), the uniformity of the game content and the repetitive experience seem to have been a core problem. In this blog I thus want to look at one of the key challenges involved in using procedural tools (or, what I broadly understand to be “autonomous design tools”): the tension between the danger of producing boring, repetitive design on the one hand, and the promise of generating potentially creative content at unprecedented scale and speed on the other.

On the one hand, these tools are deterministic systems: they are algorithms whose output depends entirely on the input they receive. Although No Man’s Sky’s game space (those quintillions of planets) is created at run-time, it is still deterministic. This determinism even holds for approaches that have the word “random” in them (like pseudo-random number generation, which, not coincidentally, also has the word “pseudo” in it). On the other hand, some of these systems produce output based on a variety of input variables that cannot possibly be anticipated by human designers; their outcomes are so complex that humans may indeed perceive them as “creative.” Moreover, they do so at unprecedented scale and speed. No human could handcraft 18 quintillion planets!
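To make this determinism concrete, here is a minimal sketch in Python of how a seeded generator can recreate the same planet on every visit without storing any planet data. The mixing constants and attribute names are invented for illustration; this is the general idea, not No Man’s Sky’s actual algorithm.

```python
import random

def planet_attributes(galaxy_seed: int, x: int, y: int, z: int) -> dict:
    """Deterministically derive a planet's attributes from the galaxy seed
    and the planet's coordinates. Nothing is stored: revisiting the same
    coordinates regenerates the identical planet."""
    # Mix the global seed with the coordinates into one per-planet seed
    # (a simple spatial hash; the multipliers are arbitrary large constants).
    planet_seed = ((galaxy_seed * 73856093) ^ (x * 19349663)
                   ^ (y * 83492791) ^ (z * 2971215073))
    rng = random.Random(planet_seed)  # pseudo-random, hence reproducible
    return {
        "radius_km": rng.randint(2_000, 12_000),
        "terrain": rng.choice(["desert", "ocean", "jungle", "frozen", "volcanic"]),
        "has_fauna": rng.random() < 0.4,
    }

# Visiting the same coordinates twice yields exactly the same planet:
assert planet_attributes(42, 3, 1, 7) == planet_attributes(42, 3, 1, 7)
```

The output looks varied, but it is a pure function of the seed and the coordinates, which is exactly why quintillions of planets can exist without any of them ever being stored.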

So, how do developers handle this tension? Through our research we have identified a number of strategies designers use to navigate it. One category of such strategies is what we call “architectural structuring.”

Architectural Structuring

The key insight is that procedural tools are always used in relation to fundamental, broad architectural choices made by designers and game architects. Good games are not only creative; other aspects matter too, such as gameplay stability (e.g., you don’t want a game to be an entirely different game every time you restart it) and overall playability. Here are some key design strategies:

  • Structuring for coherence: While players want variety in content, they still don’t want widely differing experiences each time they play. Structuring for coherence thus means defining rules that create a sense of consistency. The key outcome of this design strategy is to combine content variety with consistent user experiences. For instance, in the game “Sunless Sea,” worlds are generated in a way that allows players to become experienced at the game over time. This would not be possible without coherence from one playthrough to the next.
  • Structuring for procedural completion: In many games, there is a manually crafted basis, or structure, for the game world, and procedural generation helps complete the design (which can effectively mean that most of the actual content is procedurally generated). The key outcome of this design strategy is a much greater magnitude of work and improved efficiency for the creative talent involved. In Star Citizen, for instance, broad areas (up to the level of planets) are defined manually and then filled procedurally.
  • Structuring for navigating granularity: Designers may manually design different elements of the game environment at different levels of abstraction; tools then fill in the gaps. The key outcome, in line with that of “structuring for procedural completion,” is to scale the magnitude of work and offer greater efficiency to creative talent. Here, however, the focus is also on adding flexibility in designing both general (more abstract) and more specific game elements. In Star Citizen, for instance, the map architecture (i.e., the planet architecture) involves different levels of granularity.
  • Structuring for novelty: This design strategy involves arranging modular operators and interfaces in such a way that the tool maintains clear limits (we want output that follows certain rules) but still allows for variety in user experience. The key outcome is thus more novelty in procedurally generated content. In Sunless Skies (the successor to Sunless Sea), for instance, the map architecture was adjusted based on experiences from Sunless Sea, using a structure of overlapping regions to foster a richer, more varied user experience.
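As a toy illustration of “structuring for procedural completion” and “navigating granularity,” a designer might handcraft the broad strokes and let the tool fill in the details. The region names, biomes, and point-of-interest lists below are invented for the sketch, not taken from any actual game:

```python
import random

# Handcrafted, coarse level of granularity: designers fix regions and biomes.
HANDCRAFTED_REGIONS = [
    {"name": "Home Sector", "biome": "temperate"},
    {"name": "Outer Rim", "biome": "barren"},
]

# Pools the tool may draw from at the finer level of granularity.
POI_BY_BIOME = {
    "temperate": ["settlement", "forest ruin", "trading post"],
    "barren": ["crashed ship", "mining rig", "crater"],
}

def complete_region(region: dict, seed: int, n_pois: int = 3) -> dict:
    """Procedurally complete a handcrafted region with points of interest."""
    rng = random.Random(f"{seed}:{region['name']}")  # stable per-region seed
    completed = dict(region)
    completed["pois"] = [rng.choice(POI_BY_BIOME[region["biome"]])
                         for _ in range(n_pois)]
    return completed

world = [complete_region(r, seed=7) for r in HANDCRAFTED_REGIONS]
```

The handcrafted skeleton guarantees that every playthrough contains the same regions in the same roles; only the fine-grained content varies with the seed, which is the coherence-plus-variety combination the strategies above describe.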

Procedural content generation is already helping generate open worlds at unprecedented scale and with lots of variety. However, this approach to design is not only a technical solution: understanding how designers and their specific design choices interact with these systems is fundamental. Following from this thought, another fundamental practice is “injecting variety,” when designers take whatever an autonomous design tool has generated and then do their magic. But I will attend to this in another blog.

Further Reading:

Seidel, S., Berente, N., & Gibbs, J. (2019). Designing with Autonomous Tools: Video Games, Procedural Generation, and Creativity. Proceedings of the Fortieth International Conference on Information Systems, Munich, Germany.
https://aisel.aisnet.org/icis2019/future_of_work/future_work/14/

Procedural Generation is an Art

As we pointed out in a previous post, beautiful, creative game design has become easier due to software, hardware, and workflow advancements. Is procedural generation one of the approaches that can help? Or does it get in the way of beautiful, creative work?

Procedural generation is an approach to creating video game content autonomously. No human creates the content that players see; it is generated by algorithms that follow deterministic procedures. The promise of procedural generation is that large volumes of game content, particularly scenery and landscape, can be generated easily, with few human designers.

The downside is that procedurally generated game content risks being boring, vanilla, and uninteresting. Or, at the other extreme, it can produce so much variety that the result is incoherent and the game has no continuity in the player experience. Because of this, procedural generation sometimes gets a bad rap. But it is still essential for designers who want to scale content.

So how can game developers use procedural generation to produce interesting content while maintaining consistency in the game? Just because it is autonomous does not mean it is automatically good. Coming up with good procedurally generated content is an art. It requires navigating a delicate balance. Take Sunless Sea, for example…

Sunless Sea is a clever game from Failbetter Games that leverages procedural generation heavily. The player explores a sea in the dystopian world of “Fallen London” with a steamship. In developing the game over time, the team played with a variety of approaches to enhance its creative feel and to keep it from being boring.

Liam Welton, one of the developers, wrote a chapter titled “Aesthetics in Procedural Generation” in the book Procedural Generation in Game Design that describes their experience. What is super-interesting about this chapter is that it shows how procedural generation is by no means automatic. There is a continuous tradeoff between continuity and novelty. Developers need to actively design and manage the parameters as “design rules” for the game, and this involves continuous iteration.

According to Welton, “procedural generation is a team member that can follow hard-and-fast rules to the letter, but can fall short when it comes to making judgements in less strictly defined areas.” As with any team member, the team is both enabled and constrained by the particulars of the procedural engine.

In the book chapter, Welton describes how they needed to provide both a sense of continuity and novelty for repeated play. They initially chose a grid-and-tile approach in which certain tiles of content were fixed and others were procedurally generated. Their main activity involved identifying the right resolution for the grid and placing procedurally generated tiles within five regions to maintain some continuity. After much work, they describe how they arrived at the final grid.
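A stripped-down sketch of a grid-and-tile layout of this kind might look as follows. The grid resolution, tile names, and region bands here are invented for illustration, not Failbetter’s actual values:

```python
import random

GRID_W, GRID_H = 10, 10  # made-up resolution
NUM_REGIONS = 5          # five bands, west to east

# Fixed tiles pinned to specific cells provide continuity across playthroughs.
FIXED_TILES = {(0, 0): "home_port", (9, 9): "far_port"}

# Each region draws its procedural tiles from its own themed pool.
TILE_POOLS = {r: [f"region{r}_tile{i}" for i in range(8)]
              for r in range(NUM_REGIONS)}

def region_of(x: int) -> int:
    """Map a column to one of the regions (bands of equal width)."""
    return min(x * NUM_REGIONS // GRID_W, NUM_REGIONS - 1)

def generate_map(seed: int) -> dict:
    """Lay out a grid: fixed tiles for continuity, seeded picks for novelty."""
    rng = random.Random(seed)
    grid = {}
    for y in range(GRID_H):
        for x in range(GRID_W):
            if (x, y) in FIXED_TILES:
                grid[(x, y)] = FIXED_TILES[(x, y)]                   # continuity
            else:
                grid[(x, y)] = rng.choice(TILE_POOLS[region_of(x)])  # novelty
    return grid
```

The judgement calls Welton describes live in exactly these knobs: how fine the grid is, which cells are pinned, and how the regional pools are themed.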

What is fascinating about this book chapter is the complex and iterative way they arrived at the final solution; it is clear that the solution could have been otherwise. There is no “right” way to parameterize procedural generation in game design. It is full of judgement calls and preferences, an art form in its own right. Procedural generation does not kill creativity; it simply changes the way designers execute their work. It is another tool, a teammate.

After Sunless Sea, the developers created Sunless Skies. Learning from their experience with the grid and tile, they blogged about their new approach. To improve gameplay they chose an entirely different way of laying out the world, more of a hub-and-spoke model:

This hub-and-spoke model was a new way to balance the fixed predictability of stable ports with a more emergent gameplay experience.
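Sketched in the same spirit (the hub names, distances, and counts below are hypothetical, not Failbetter’s actual layout), a hub-and-spoke generator pins the hubs in place and scatters the spokes procedurally:

```python
import math
import random

def generate_sky(seed: int, hubs: list, spokes_per_hub: int = 3) -> dict:
    """Place hub ports at fixed, evenly spaced positions (predictability),
    then scatter procedurally placed spoke locations around each hub
    (emergence)."""
    rng = random.Random(seed)
    layout = {}
    for i, hub in enumerate(hubs):
        # Hubs sit on a fixed circle, independent of the seed.
        angle = 2 * math.pi * i / len(hubs)
        hub_pos = (round(100 * math.cos(angle)), round(100 * math.sin(angle)))
        # Spokes are jittered around their hub, varying with the seed.
        spokes = [(hub_pos[0] + rng.randint(-30, 30),
                   hub_pos[1] + rng.randint(-30, 30))
                  for _ in range(spokes_per_hub)]
        layout[hub] = {"pos": hub_pos, "spokes": spokes}
    return layout

sky = generate_sky(11, ["port_a", "port_b", "port_c"])
```

Because the hub positions never depend on the seed, players can always rely on where the stable ports are, while everything hanging off them stays fresh.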

The Failbetter Games experience with configuring the procedural engines for Sunless Sea and Sunless Skies is a fascinating story of a team learning to design with their procedural teammate over time.

For more information, see the book chapter:

Welton, L. (2017). “Aesthetics in Procedural Generation.” In T. Short & T. Adams (Eds.), Procedural Generation in Game Design. CRC Press.

And the blog:

https://www.failbettergames.com/sunless-skies-pre-production-talkin-bout-proc-generation/

Digital Art Renaissance

by Gregory Yepes

This past decade has seen the birth of several key technologies and applications that have empowered digital artists to author work at an unprecedented level of realism.  Effects that once required deep knowledge across teams of specialists can now be created by generalists who handle all aspects of their creations.  Innovations like physically-based shading, GPU advancements, and workflow enhancements all played a part.  In this article I’d like to highlight, and celebrate, how we have arrived where we are today.

Specialists -> Generalists

Before 2010, high-end digital imagery required a complex pipeline, usually involving modelers, texture artists or surfacers, look development artists, lighters, and compositors.  While these roles still exist today, we are seeing more artists wield tools that allow them to be more self-sufficient.  A clear example of such work can be seen here: ZBrushCentral Gallery

Not to downplay the amazing craft these artists bring to the table, but I’d like to highlight a few developments that have removed some of the hurdles that previously stood between inception and final image. One of these developments is physically-based shading.

SIGGRAPH 2012: Physically-Based Shading at Disney

A lot of us saw Wreck-It Ralph in theaters, but not everyone realizes what that movie signified for digital imagery.  Up to that point, there was a wide array of specialized algorithms to describe a material’s appearance within a rendering simulation.  What the team at Disney working on Wreck-It Ralph accomplished, and shared with the community at SIGGRAPH 2012, was to take a set of common materials and create a unified way to represent them while allowing a high degree of artistic control over their appearance.  This effort also moved us closer to using real-world lighting properties to describe how these materials are illuminated.  Things that were previously arduous, such as chipped paint on a metal surface, became much more accessible to artists without requiring a high degree of specialized knowledge.
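To give a flavor of what a “unified way to represent materials” means in practice, here is a toy sketch of the now-common metallic workflow that grew out of this era (an illustration of the general idea, not Disney’s actual shader code): the renderer derives a surface’s specular reflectance at normal incidence (F0) from just a base color and a “metallic” parameter, with dielectrics getting a small, colorless reflectance of about 4%.

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b."""
    return a + (b - a) * t

def reflectance_f0(base_color: tuple, metallic: float) -> tuple:
    """Specular reflectance at normal incidence in the metallic workflow:
    dielectrics reflect a small, colorless ~4%; metals reflect with their
    own base color. One parameterization covers both kinds of material."""
    DIELECTRIC_F0 = 0.04
    return tuple(lerp(DIELECTRIC_F0, c, metallic) for c in base_color)

# Chipped paint on metal: the same material model handles both layers,
# driven by a single metallic mask instead of two specialized shaders.
paint = reflectance_f0((0.8, 0.1, 0.1), metallic=0.0)    # dull dielectric paint
steel = reflectance_f0((0.56, 0.57, 0.58), metallic=1.0)  # reflective metal
```

Because one small set of intuitive parameters covers paint, metal, and everything in between, a generalist can author materials that previously demanded a shading specialist.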

Several technology companies embraced these advancements, and for the first time we saw offline renderers such as Arnold and V-Ray start to converge with real-time solutions such as Unreal and Unity. On the asset-authoring side, newcomers such as Allegorithmic (now Adobe), with Substance Painter and Substance Designer, embraced these concepts.  So film, games, and tools along the pipeline converged on a solution that allows artists to achieve a higher level of realism while reducing complexity. This next level of functionality, however, demands incredibly high graphics processing performance.

GPU Advancements

While there were advancements on the software side, there was also increased activity on the hardware side, particularly with graphics processors.  The hardware required to create high-end digital imagery was not cheap, due to the enormous amount of computation involved.  As graphics processor manufacturers such as NVIDIA and AMD grew in prominence, more powerful and affordable graphics cards became available for creating high-end imagery.  This obviously meant a great deal for real-time game engines that rely heavily on GPU processing, such as Unreal and Unity, but we also started to see more GPU renderers enter the arena, such as Redshift, Octane, Corona, and many more. Software and hardware improvements alone, however, would not have brought about the sweeping changes we see if artists had not changed the way they worked.

Workflow Improvements

While there was strong potential to evolve thanks to advances in physically-based shading and GPUs, entrenched ways of working were not easy to change overnight.  No single solution can take complete credit for changing the way artists work, but a few standouts are worth noting:

ZBrush + KeyShot: one of the digital sculpting tools favored by artists partners with an advanced renderer, allowing sculptors to preview their work in breathtaking fidelity.

(Image by Marco Plouffe)

Allegorithmic’s Substance Painter + Real-time + Iray: having embraced physically-based shading very early on, Substance Painter allows texture artists to preview their work under lighting conditions resembling real-world scenarios using HDR captures.

(Image by Jonathan Benainous)

Marmoset Toolbag: while still representing a somewhat stand-alone stage of the pipeline, Marmoset Toolbag is a convenient way to assign materials and textures to a model in order to preview an asset with a real-time rendering solution.

(Image by Marmoset Toolbag)

Unity / Unreal: as image fidelity in real-time game engines improved, we also gained a more unified environment in which artists could author their creations.
YouTube: Overgrown Ruins by Maverick

These examples show the potential for a single artist to work in a more self-sufficient capacity, taking an idea from inception all the way to a polished visualization of the final product.

Gallery

So here we are!  There is no better way to illustrate where we are today than to share one of my favorite art galleries:
ARTSTATION.COM

Leave your comments.