Levels drive the core interaction in most free-to-play puzzle games. The quality of this content and the pacing of level features, rewards, and difficulty have an enormous impact on both retention and monetization. While the overall visual presentation, simplicity of design, story, setting, a deep and resonant meta game, and the quality of retention features all have their impact – for most players, the levels are the key driver of whether they stay and play or go away.
When creating and releasing a regular stream of high-performing free-to-play puzzle levels, four best practices are crucial:
- Building a comprehensive level creation cycle (Part 1)
- Meeting all technical requirements for a good starting situation (Part 2, coming soon)
- Setting reliable level design goals (Part 2, coming soon)
- Being aware of what makes a good level (Part 3, coming soon)
This first article details the first practice listed above. The rest will be covered in subsequent posts.
A Strong Level Creation Cycle in 6 Phases
Phase 1: Level Concepting
The specific way you come up with your level concepts, and how you prepare them for the actual level creation in a level editor (in-engine), can vary widely based on your team's preferences. Whatever your process, the better you prepare the whole level concept, the faster you will realize your idea – without losing time on open design questions or on experimenting with individual level elements and placements under an unclear concept.
I am somewhat traditional in that I prepare scribbles in a notebook including the whole level layout, obstacle placements, info on main and side goals, available moves and other variables so that the whole vision is clear, can quickly be built in the editor and is playable as early as possible. Sometimes, if you start from scratch with an empty game-board in the level editor, valuable work time is lost trying out different layouts, mechanics and obstacle combinations without knowing if everything fits together. Having a clear concept in mind (and on paper) speeds up this process substantially.
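As a concrete illustration, a notebook scribble can translate into a small machine-readable concept spec that captures layout, obstacles, goals, and moves before anyone opens the editor. The format below is entirely hypothetical – field names, obstacle types, and the validation rule are invented for the example, not taken from any particular engine:

```python
# A hypothetical, minimal level-concept spec for a match-3 style puzzle.
# All field names and values are illustrative, not from a real editor format.
level_concept = {
    "id": 42,
    "board": {"rows": 9, "cols": 9, "blocked_cells": [(0, 0), (0, 8)]},
    "main_goal": {"type": "collect", "piece": "blue_gem", "count": 30},
    "side_goals": [{"type": "clear", "obstacle": "ice", "count": 12}],
    "obstacles": {"ice": [(4, 4), (4, 5)], "crate": [(8, 0)]},
    "moves": 24,
}

def concept_is_buildable(concept):
    """Sanity-check a concept before investing editor time in it."""
    board = concept["board"]
    in_bounds = lambda rc: 0 <= rc[0] < board["rows"] and 0 <= rc[1] < board["cols"]
    # Every referenced cell (blocked or holding an obstacle) must sit on the board.
    cells = list(board["blocked_cells"])
    for placements in concept["obstacles"].values():
        cells.extend(placements)
    return concept["moves"] > 0 and all(in_bounds(c) for c in cells)
```

Even a trivial check like this catches the "does everything fit together?" question on paper, before any editor time is spent.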
Phase 2: Level Creation, Internal Testing And Feedback Loops
Level creation is an effort-intensive and challenging process that varies from game to game. Each game will have different objectives for level sets, and different conditions, components, and levers impacting difficulty, pacing, or completion time. What is common is that, no matter what level concepts you create, you will need talented and experienced level designers to sit down and do this level creation work – taking those concepts and using them as guides to make playable content in-game. It's reasonably common for a level-based free-to-play game to have hundreds, or even thousands, of levels, and each of those levels typically requires several hours of building, testing, and iteration before it is good enough for your final product. This means hundreds, or even thousands, of person-hours go into level creation for a typical level-based game.
Imagining the task in this light, you can see how well-developed and easy-to-use level design tools, and experienced level designers with strong ability to quickly create and refine this content can pay substantial dividends in terms of the quality-for-cost equation of this part of the game design and production process.
After the technical process of level creation, a useful way to gauge a level's quality and difficulty is to follow up with internal testing and feedback loops. This, of course, requires involving teammates or additional level designers to play your levels. Ask them for feedback on your design and balancing so you can improve the whole user experience. Collect the feedback in simple sheets, with columns per level for different people's feedback, comments, and testing results. These internal feedback loops are low-hanging fruit that, if acted upon, will improve a level's performance, identify potential issues, and better prepare it for actual user tests and live distribution.
Click HERE to view an example feedback spreadsheet
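If you prefer to keep such a sheet alongside your build tools, the same data can live in a tiny script. The row format below simply mirrors hypothetical sheet columns (level, tester, result, comment) and is not from any specific pipeline:

```python
from collections import defaultdict

# Hypothetical internal-test log: one row per play-through, mirroring the
# columns of a simple feedback sheet. Names and comments are made up.
feedback_rows = [
    {"level": 12, "tester": "anna", "won": True,  "comment": "fun, a bit easy"},
    {"level": 12, "tester": "ben",  "won": False, "comment": "ice blocks unclear"},
    {"level": 12, "tester": "cleo", "won": True,  "comment": ""},
]

def summarize(rows):
    """Collapse raw feedback rows into per-level win rates and comment lists."""
    by_level = defaultdict(lambda: {"plays": 0, "wins": 0, "comments": []})
    for r in rows:
        stats = by_level[r["level"]]
        stats["plays"] += 1
        stats["wins"] += r["won"]
        if r["comment"]:
            stats["comments"].append(r["comment"])
    return {lvl: {**s, "win_rate": s["wins"] / s["plays"]}
            for lvl, s in by_level.items()}
```

The point is less the tooling than the habit: every internal play-through becomes a row, and per-level win rates and recurring comments fall out automatically.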
Phase 2B: Usability Testing And Fine Tuning
One of the most valuable activities you can do is to perform usability tests with real players from your defined target audience. Preferably this happens before you begin balancing difficulty, allowing you to improve a level's feel by adapting layout and mechanics based on actual player input. Feedback from your teammates is very useful, but it never replaces real player input. Your audience behaves very differently – in play time, steps taken, and perceived excitement and complexity – than the gaming professionals who make up the bulk of your teammates. Most importantly, players don't know the game by heart and aren't as skilled as a designer who has been working on it for quite some time.
It is always exciting and rewarding to see how players react to, interact with, and finally judge your creations. This gives you a fresh view on your design and lets you fine-tune your level for your actual audience.
If you don’t have the chance to do these tests on-site, there are online tools and platforms, such as PlaytestCloud, that remotely recruit players and deliver recorded playtest sessions for review.
Phase 3: Balancing
Balancing your levels to conform to the desired overall difficulty curve and preparing them for live consumption takes the most time, but this can be mitigated with good balancing tools and workflows. Where possible, we recommend using AI/Machine Learning to auto-play levels and understand win/loss metrics before you have a large established player-base. It will never replace testing a level on your own or having actual players test your level, but it is definitely a way to get trustworthy data on your level difficulty (e.g. win rate, objective completion rate). Where it is not possible to use an AI tool, the more traditional way to do this is to team up with your available teammates and collect their data on play-throughs, and then balance based on those results before taking your levels live and analyzing early player data (for example, from soft launch player behavioral data).
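For intuition on when auto-play data becomes trustworthy, one standard statistical tool is a confidence interval on the observed win rate. The sketch below (a Wilson score interval, independent of any specific auto-play system – the run counts are illustrative) shows how many bot play-throughs narrow the estimate:

```python
import math

def win_rate_interval(wins, runs, z=1.96):
    """Wilson score interval for an observed win rate.

    Given `wins` out of `runs` auto-play attempts at a level, return the
    (low, high) bounds of the ~95% confidence interval. The narrower the
    interval, the more trustworthy the bot-derived difficulty estimate.
    """
    p = wins / runs
    denom = 1 + z * z / runs
    centre = (p + z * z / (2 * runs)) / denom
    half = z * math.sqrt(p * (1 - p) / runs + z * z / (4 * runs ** 2)) / denom
    return centre - half, centre + half
```

For example, 300 wins in 1,000 bot runs pins the true win rate to roughly 27–33%, which is usually precise enough to place a level on a difficulty curve before any live players have touched it.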
An important side-note: do not attempt to adapt your auto-play AI to live dynamic difficulty balancing! It is tempting to use this AI for dynamic difficulty adjustment to reduce user churn. However, developers I have worked with who have tried this have failed to produce desirable results. This is because the AI works against the human-designed difficulty/progression curves, upsetting the careful balance of friction vs. progression that leads to optimized retention and monetization in a free-to-play puzzle game. It is theoretically possible to develop an AI that takes in a variety of behavioral triggers to “tune” the perfect balance of friction vs. results on a player-by-player basis; however, this is a much more complicated endeavor than it appears on the surface, with a high risk of failure and a much higher cost than a good level design team and a simpler heuristic churn-prevention algorithm.
Phase 3B: A/B Testing And First Rebalancing
Testing your level balancing in A/B tests with a low percentage of players is key to deeply understanding the nature and impact of balancing changes across your full progression. Even a test group of only a few hundred players, compared against the whole player base, will allow you to confidently compare level design KPIs (Key Performance Indicators) like win rate, moves left when a level is won, or spend per level (e.g. extra moves bought, booster usage), in order to determine whether your changes are likely to positively impact player behavior.
This lets you rebalance and optimize in much shorter timeframes, and be confident in the expected results of a change before pushing it live for 100% of your player base.
A/B testing can be enabled fairly easily using off-the-shelf analytics packages like DeltaDNA or Game Analytics; however, be aware of the GIGO ("garbage in, garbage out") rule. While A/B tests are technically easy to set up, you want to be very careful in the design of your tests and the instrumentation of your data, to ensure you are accurately measuring what you think you are measuring, and doing so in a way that will generate statistically significant results. Qualified input from a data analyst or data scientist with experience in A/B testing is highly recommended to ensure good results.
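As a sketch of what that statistical care looks like in practice, a standard two-proportion z-test can check whether an observed win-rate difference between two variants is likely real or just noise. The sample sizes and the 1.96 threshold (the usual 5% significance level) below are illustrative:

```python
import math

def two_proportion_z(wins_a, n_a, wins_b, n_b):
    """Two-sided z-test on the difference between two win rates.

    Compares variant A (wins_a out of n_a players) against variant B.
    Returns the z statistic; |z| > 1.96 corresponds to statistical
    significance at the conventional 5% level.
    """
    p_a, p_b = wins_a / n_a, wins_b / n_b
    pooled = (wins_a + wins_b) / (n_a + n_b)          # pooled win rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

With 1,000 players per variant, a 30% vs. 36% win rate clears the significance bar, while 30% vs. 31% does not – which is exactly why eyeballing small KPI differences without a test is risky.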
Phase 4: Live Phase and Rebalancing Loop
After a level or level pack goes live, there should be a constant rebalancing loop running regularly. This allows for fine-tuning of level difficulty and performance against your difficulty curve and KPI goals, and keeps the system in tune as new content is added. Meaningful gains in retention and monetization come through changes in level difficulty progression, rewards, more exciting level variations, and the pace and timing of new feature utilization – so don't underestimate the impact of ongoing attention to your level designs, even after they go live.
Also, managing a live game, in most cases, requires constantly extending it with new content. Adding this new content may change the game's overall progression needs, including the optimal timing of difficulty pinch-points relative to the end of content. In most cases, this will necessitate regular rebalancing to reduce the difficulty of the early content (including the First Time User Experience), pushing players deeper into the game so they can see its best and most interesting parts.
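One simple, designer-controlled (deliberately not AI-driven) way to run such a rebalancing loop is a threshold heuristic like the sketch below. The target win rate, tolerance band, and one-move step size are illustrative assumptions, not tuned values:

```python
def suggest_moves_adjustment(live_win_rate, target_win_rate,
                             current_moves, tolerance=0.05):
    """Heuristic rebalancing pass for one level.

    If the live win rate drifts outside a tolerance band around the
    designer's target, nudge the move budget by one in the corrective
    direction; otherwise leave the level alone. A human still reviews
    every suggestion before it ships.
    """
    if live_win_rate < target_win_rate - tolerance:
        return current_moves + 1   # level too hard: grant an extra move
    if live_win_rate > target_win_rate + tolerance:
        return current_moves - 1   # level too easy: remove a move
    return current_moves           # within band: no change needed
```

Run over every live level on a regular cadence, even something this crude surfaces the early-game levels whose difficulty needs softening as new content pushes the pinch-points deeper.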