1) Introduction

During this three-month internship (June–August 2022), I worked on a Unity prototype named Gauntlet 109 at Pickwitt. The goal wasn’t to “finish a game.” The goal was to deliver a playable prototype strong enough to support a fundraising pitch.

Technically, the hardest part wasn’t one specific feature. The real challenge was inheriting an existing prototype with almost no documentation, being unable to contact the previous developer, and still needing to ship fast. That forced me to work on two fronts at once: rebuilding a clean base and producing major systems quickly—especially an in-game level editor usable with a controller and a custom UI navigation system for gamepad.

In this post, I explain everything I worked on technically, in detail: how I handled the restart decision, how I rebuilt gameplay foundations, how I designed a shareable external level format (.glvl), how I built an in-game editor (a "game inside the game"), and how I implemented a gamepad-first UI workflow that supports dynamic UI generation.

Early prototype vs restarted clean prototype

2) Background / Context

Studio setup and constraints

Pickwitt was essentially a solo studio led by a creative director, supported by a team of interns. The internship was fully remote, using Discord/WhatsApp for coordination. We had at least one weekly meeting to show progress to the creative director, plus quick daily check-ins to detect tasks that were slipping.

A key constraint: there was no lead developer to guide implementation decisions. That meant I had to do a lot of research and validation myself to solve problems, while still delivering usable features on time.

Team and Stack:

  • Team: creative director + 5 interns (producer, tech GD, 2 artists, programmer)
  • Engine: Unity (stable version)
  • Objective: playable prototype for investors
  • Control focus: controller-first
  • Key feature: in-game level editor

Why it mattered (for me, technically)

Remote production + inherited codebase + missing documentation is the kind of setup that can kill velocity. It forces a brutal question early: do we continue on a fragile base, or do we restart to regain speed and control?

3) Main Content

3.1 The First Technical Reality: Inheriting a Prototype Without Documentation

At the start, I did what I had to do to understand intent: I read and re-read the game design documentation until I had a clear picture of what the creative director wanted. That took around three days and helped me surface questions and unclear areas.

Then I attacked the codebase. That's where the real problems started:

  • There was little or no documentation in the project.
  • The previous developer was unreachable.
  • Navigating the code was slow.
  • Adding features was slow, because I couldn't confidently extend systems without risking breakage.

For roughly two weeks, I tried to build on top of the existing code anyway. The conclusion was clear: we were not producing fast enough. Either the code didn't match a long-term vision and blocked complex features, or the architecture was so limiting that I needed to rewrite big parts just to implement anything.

3.2 Refactor vs Restart: Why I Pushed for a Restart

Restarting a project is common in games, but it’s not automatically the best option. It can delay production and cut features. I had to treat it as a serious engineering decision, not an emotional reaction.

So I did something concrete: I built a fictitious planning scenario where we restart and still try to hit the remaining roadmap. On paper, it looked unrealistic: reproduce six months of development in two weeks, then still deliver planned features. But the alternative was worse: if we didn’t restart, production time would double or triple, and we’d ship a weaker prototype with fewer systems.

3.3 Rebuilding a Clean Base in Two Weeks (Functional Over Polished)

Once we restarted, the objective was not polish. The objective was a clean foundation that lets me add features rapidly. In those two weeks, I rebuilt a solid base for development:

  • I reprogrammed the controller logic.
  • I rebuilt the base character and core interactions.
  • I implemented systems to allow playing multiple maps in a row (a session flow / map chaining foundation).

This gave me a minimal but functional gameplay loop. It wasn’t “fun” yet, but it was stable, extensible, and fast to iterate on. That mattered more than visuals at that stage.

3.4 Delivering Features Under Pressure

After rebuilding the base, I focused on delivering the planned work fast: completing the mechanics for the playable character Sphinx, implementing enemy AI, adding collectibles, and the level editor system.

By the end of the first month, I had completed the first playable character, Sphinx, and implemented two enemies (including their AI), while the art team produced environment pieces and animations.

3.5 In-Game Level Editor (Controller-First): A “Game Inside the Game”

This was one of the biggest technical systems I produced. The constraint was strict: The editor must run inside the game, not in Unity editor tools, and it must be usable with a controller.

From a dev perspective, this is expensive because I’m basically building tooling at runtime. Building it inside the game multiplies time and complexity, because I must recreate selection tools, placement tools, property editing, UI navigation, saving/loading, and testing workflows.

3.5.1 Why I chose gamepad-only for v1

For the first version, I chose to handle only controller input. Supporting the mouse would have extended the scope too much. However, I designed the editor so that mouse controls could later be "plugged" into the same system.

3.5.2 External file format: .glvl (Gauntlet Level)

Levels do not save as Unity scene files or prefabs. I built the system so each level is saved externally as its own .glvl file.

The key technical detail: to make levels stable across versions, I ensured the file stores no direct asset references. Instead, the file stores tags. The runtime reads the file and generates the level by interpreting tags. This avoids the classic “asset reference breaks across builds” problem and makes files portable.
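To make the tag idea concrete, here is a minimal sketch of what a tag-driven tile record and its text serialization could look like. All names here (`TileRecord`, `GlvlFormat`, the line layout) are illustrative assumptions, not the project's actual format; the point is that the file only ever stores tag strings and key/value properties, and a runtime registry maps tags to prefabs at load time.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch of a tag-based level record (not the real .glvl spec).
public class TileRecord
{
    public int X;
    public int Y;
    public string Tag;  // e.g. "wall", "monster_generator" -- never an asset reference
    public Dictionary<string, string> Props = new Dictionary<string, string>();
}

public static class GlvlFormat
{
    // One line per tile: "x;y;tag;key=value,key=value"
    public static string SaveLine(TileRecord t) =>
        $"{t.X};{t.Y};{t.Tag};{string.Join(",", t.Props.Select(p => $"{p.Key}={p.Value}"))}";

    public static TileRecord LoadLine(string line)
    {
        var parts = line.Split(';');
        var tile = new TileRecord { X = int.Parse(parts[0]), Y = int.Parse(parts[1]), Tag = parts[2] };
        if (parts.Length > 3 && parts[3].Length > 0)
            foreach (var kv in parts[3].Split(','))
            {
                var pair = kv.Split('=');
                tile.Props[pair[0]] = pair[1];
            }
        return tile;
    }
}
```

Because the loader resolves `Tag` against a registry the game owns, renaming or moving assets in the Unity project can never break a saved level file.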

3.5.3 Editor startup: 50×50 grid and external walls

When the editor starts, I create an empty world grid of 50×50 tiles. I auto-place and configure outer walls so the game understands they’re external boundaries.
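The startup step can be sketched as follows. This is a simplified model (the type names are assumptions), but it shows the core logic: every border cell of the 50×50 grid gets a wall pre-placed and flagged as external, so the game can distinguish level boundaries from player-placed walls.

```csharp
// Illustrative sketch of editor startup: a 50x50 grid with external border walls.
public class EditorGrid
{
    public const int Size = 50;

    // null = empty cell; otherwise the tag of the placed element.
    public string[,] Tags = new string[Size, Size];
    public bool[,] External = new bool[Size, Size];

    public EditorGrid()
    {
        for (int x = 0; x < Size; x++)
            for (int y = 0; y < Size; y++)
                if (x == 0 || y == 0 || x == Size - 1 || y == Size - 1)
                {
                    Tags[x, y] = "wall";
                    External[x, y] = true;  // the game treats these as level boundaries
                }
    }
}
```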

Empty editor grid with external walls

3.5.4 Object inspection and editable properties via joystick

The important part: each element has properties that I can change directly with the joystick. Example: walls have properties like destroyable, pushable, external. I also track edge/corner information used to compute correct sprite display.

3.5.5 Controller placement workflow

  • Right/Left bumpers: cycle placeable items
  • Action button (A/X): place the selected element

After placing an object, I run calculations to adjust visuals (notably for walls) and show the object properties.
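One common way to do that wall visual adjustment is a neighbor bitmask (classic autotiling). The sketch below is my assumption of how such a calculation could look, not the project's exact code: each placed wall computes a 4-bit mask from its up/right/down/left neighbors, and the mask indexes into 16 sprite variants (straight, corner, T-junction, cross, and so on).

```csharp
// Illustrative autotile sketch: pick a wall sprite from a 4-bit neighbor mask.
public static class WallAutotile
{
    // Returns bits: 1 = up, 2 = right, 4 = down, 8 = left neighbor is a wall.
    public static int NeighborMask(bool[,] isWall, int x, int y)
    {
        int w = isWall.GetLength(0), h = isWall.GetLength(1);
        bool At(int nx, int ny) => nx >= 0 && ny >= 0 && nx < w && ny < h && isWall[nx, ny];

        int mask = 0;
        if (At(x, y + 1)) mask |= 1;
        if (At(x + 1, y)) mask |= 2;
        if (At(x, y - 1)) mask |= 4;
        if (At(x - 1, y)) mask |= 8;
        return mask;  // 16 possible values -> 16 sprite variants
    }
}
```

Running this for the placed tile and its four neighbors after every placement keeps all wall sprites consistent.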

3.5.6 Example object: monster generator

A monster generator object has an HP value (editable numeric value via joystick) and a Spawn tag (selectable from a list via joystick). The spawn list is authored by the dev team, which keeps allowed types controlled and predictable.
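The spawn-tag selection can be sketched as a simple wrapping cycle over a whitelist. This is an illustrative model (the class name and API are assumptions): the joystick moves an index through the dev-authored list, so the editor can never produce a spawn type the game doesn't know.

```csharp
// Illustrative sketch: joystick-driven selection over a dev-authored tag whitelist.
public class SpawnTagSelector
{
    private readonly string[] _allowed;  // authored by the dev team
    private int _index;

    public SpawnTagSelector(string[] allowed) => _allowed = allowed;

    public string Current => _allowed[_index];

    // direction: +1 for joystick right, -1 for left; wraps at both ends.
    public void Cycle(int direction) =>
        _index = ((_index + direction) % _allowed.Length + _allowed.Length) % _allowed.Length;
}
```

The same wrap-around pattern works for the numeric HP value, with clamping instead of wrapping if the design calls for a bounded range.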

3.5.7 Editor menu: save/load/test + metadata

I added a menu that supports saving, loading, testing the current level, and configuring metadata. In the load menu, I implemented procedural generation of buttons that are still fully usable with a controller.

3.6 Custom Gamepad UI Navigation System (No Plugins)

Unity has UI navigation and plugins, but I built my own. I needed a system that works reliably with a controller, supports dynamic UI generation, and supports multiple menus without loading prefabs sequentially.

3.6.1 Workflow

I place and arrange UI as usual in Unity. Then I use a drag-and-drop workflow in the inspector to register UI elements into my navigation layer.

3.6.2 Menu Navigator: tagging elements and defining directional links

I created a concept of a Menu Navigator that groups clickable/selectable UI elements. For each element, I define what happens when the user presses Up/Down/Left/Right. This makes the UI navigation deterministic and controller-friendly.
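The core of such a navigator can be sketched as explicit per-direction links between elements. The names below (`NavElement`, `NavDir`) are assumptions for illustration; the important property is that every directional press resolves to exactly one target, and a direction with no link simply keeps the current selection.

```csharp
using System.Collections.Generic;

// Illustrative sketch of a Menu Navigator node with explicit directional links.
public enum NavDir { Up, Down, Left, Right }

public class NavElement
{
    public string Name;
    private readonly Dictionary<NavDir, NavElement> _links = new Dictionary<NavDir, NavElement>();

    public NavElement(string name) => Name = name;

    public void Link(NavDir dir, NavElement target) => _links[dir] = target;

    // Returns the neighbor in that direction, or itself if no link is defined
    // (pressing a direction with no link keeps the current selection).
    public NavElement Navigate(NavDir dir) =>
        _links.TryGetValue(dir, out var next) ? next : this;
}
```

Because links are authored explicitly (via the drag-and-drop inspector workflow), there is no spatial guessing at runtime, which is what makes the navigation deterministic.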

3.6.3 Dynamic UI generation

I built the system so it can generate UI elements at runtime (example: load menu listing levels), and still remain controller-navigable.
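As a sketch of the dynamic case, the load menu can generate one entry per saved level and chain them vertically with wrap-around, so a stick or d-pad can scroll whatever list exists at runtime. All names here are illustrative assumptions, not the project's actual API.

```csharp
using System.Collections.Generic;

// Illustrative sketch: generate controller-navigable entries for saved levels.
public class LoadMenuBuilder
{
    public class Entry
    {
        public string LevelName;
        public Entry Up, Down;
    }

    public static List<Entry> Build(IEnumerable<string> levelFiles)
    {
        var entries = new List<Entry>();
        foreach (var file in levelFiles)
            entries.Add(new Entry { LevelName = file });

        // Chain entries vertically; last wraps to first and vice versa.
        for (int i = 0; i < entries.Count; i++)
        {
            entries[i].Down = entries[(i + 1) % entries.Count];
            entries[i].Up = entries[(i - 1 + entries.Count) % entries.Count];
        }
        return entries;
    }
}
```

In Unity, each generated entry would also instantiate a button prefab and register it with the navigation layer, but the linking logic is the part that keeps dynamic content controller-friendly.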

3.7 The Final Rush: Why Visual Integration Proved the Tech Was Right

Near the end, the project shifted into a "make it pitchable" phase: menu visuals improved, lighting was added, and final assets were integrated. What mattered technically is that because I built the UI system modularly, integrating new art assets didn't require rewriting the structure.


Common Pitfalls & Misconceptions

  • Inherited Prototypes: Thinking continuing an inherited prototype is always cheaper than restarting.
  • In-game Tools: Underestimating how hard an in-game editor is. It’s a full toolchain inside runtime.
  • Input: Assuming Unity’s default UI navigation is enough for controller + dynamic content.
  • Data: Saving levels using engine-linked references breaks portability. Tags are safer.

4) Takeaways / Opinion

5 Key Takeaways:

1. Restarting a prototype can be the correct decision when the inherited architecture blocks production.
2. A clean base (controller + character + map flow) is more valuable than half-working features.
3. An in-game level editor is expensive because it recreates engine tooling in runtime.
4. Saving levels as external tag-driven files (.glvl) is the right move for sharing and version stability.
5. Custom controller UI navigation is justified when UI is dynamic and controller-first.