
2026-04-01 · 8 min read

I Rebuilt an Old Flash Avatar Maker for the Modern Web

A while ago, I came across a Japanese retro avatar maker with a style I immediately liked. The lines were simple, the colors were restrained, and the whole thing had that unmistakable early-internet charm.

The problem was obvious: it was built in Flash. Today, the only practical way to run it in a browser is through a compatibility layer like Ruffle.

Technically, that works. In practice, it did not feel great. The page would stutter, my laptop fan would spin up, and a couple of times the browser window crashed entirely. What should have been a small, relaxing toy ended up feeling more like a stress test.

So I started wondering: if the core of the tool is just layered assets, option switching, and live preview, why not rebuild it with modern frontend tools?

That eventually became Square Face Generator. It keeps the retro spirit of the original, but the underlying implementation is now fully modern web tech, so it feels much smoother to use.

This is a short write-up of how I built it.


1. Why Rebuild It at All?

The reason was simple: I wanted it to work properly in today’s browsers.

The original version had a few big limitations:

  • It depended on Flash-era assets and runtime logic
  • It could only run through a compatibility layer
  • That compatibility layer had to simulate an old environment
  • The result was much heavier than a native browser implementation

An avatar maker like this is not conceptually complicated. It is mostly layered image assets, feature switching, color changes, and real-time preview. Modern Canvas is a natural fit for that kind of interaction. There is no real need to carry an old runtime along with it.

So my goal was not to make a shallow clone that only looked similar. I wanted to preserve the original feel and interaction style, while moving the whole experience into a browser-native implementation.


2. The Overall Approach

The work broke down into two main parts.

First, I Took the Flash File Apart

I used FFDEC to decompile the original Flash file. That gave me two important things: the asset files, and the ActionScript logic behind the editor.

This step mattered a lot. If I had only copied what I could see from screenshots, I might have reproduced the surface, but I would have missed the structure underneath: how the parts were organized, how the layers were ordered, and how the editor decided what to draw.

Once the file was unpacked, the model became clearer. It was a classic layered asset editor. Bangs, side hair, eyes, mouth, hats, glasses, clothes, and other parts were drawn on top of each other in a fixed order.

The next challenge was turning that Flash-era asset model into a frontend architecture that would be easy to maintain, easy to extend, and fast enough to render in the browser.

Then I Rebuilt the Runtime with Modern Frontend Tools

I did not try to translate the old code line by line. That would have been the wrong abstraction.

Instead, I took a more practical route:

  • Keep the original visual style and asset system
  • Build a new avatar editor framework from scratch
  • Gradually plug the extracted assets into the new framework
  • Let the modern runtime carry the old content

The stack is intentionally simple: Next.js, React, TypeScript, and Canvas 2D. The main goal was to keep the rendering path light and close to what browsers already do well.


3. Turning an Avatar Maker into a Frontend Model

Once I started building, it became clear that the hardest part was not the UI but the rendering model.

I ended up splitting the system into three layers.

Layer One: Asset Discovery and Configuration

The first question was not “how do I draw this?” but “how does the system know which assets exist?”

I built an asset discovery mechanism that scans the asset folders and reads metadata from file names. For example, a suffix like _x-35_y25_s0.5 tells the renderer to apply an offset and scale when drawing that asset.

This has a few advantages:

  • I do not need to hard-code a huge table of coordinates
  • New assets can be added by following the naming convention
  • Configuration and rendering logic stay separate

I spent quite a bit of time polishing this part. The key was to keep alignment information close to the assets themselves, while also making the preload flow predictable.
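In spirit, the discovery step is little more than grouping a directory listing by part prefix. Here is a minimal sketch of that idea; the function and convention names are hypothetical, not the actual implementation:

```typescript
// Hypothetical sketch of asset discovery: take the file listing
// produced by scanning the asset folders and group it into a
// registry keyed by part type. Alignment suffixes in the names
// (like "_x-35_y25_s0.5") are parsed separately by the renderer.
function groupAssetsByPart(files: string[]): Map<string, string[]> {
  const registry = new Map<string, string[]>();
  for (const file of files) {
    if (!file.endsWith(".png")) continue;   // skip non-image files
    const part = file.split("_")[0];        // e.g. "hat", "glasses"
    const entries = registry.get(part) ?? [];
    entries.push(file);
    registry.set(part, entries);
  }
  return registry;
}
```

With this shape, adding a new hat means dropping a correctly named file into the folder; no coordinate table needs editing.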

Layer Two: State

An avatar is really just a state snapshot.

Which eyes are selected, which mouth is selected, whether a hat is enabled, what colors are being used, and whether a part has been manually moved or scaled — all of that can live in one state object.

Every editor action updates that state. The renderer watches the state and redraws the Canvas when it changes.

That makes several things simpler:

  • The boundary between UI and rendering is clearer
  • Random generation, editing, and reset behavior are easier to implement
  • Future features like shareable configurations or history recovery would be easier to add
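As a rough sketch of what such a snapshot and an edit action could look like (the field names here are illustrative, not the real ones):

```typescript
// Hypothetical shape of the editor state: one object captures
// every choice the user has made.
interface AvatarState {
  eyes: string;                       // selected asset id per part
  mouth: string;
  hat: string | null;                 // null = part disabled
  colors: { skin: string; hair: string };
  // Manual moves/scales, keyed by part name.
  overrides: Record<string, { dx: number; dy: number; scale: number }>;
}

// Every editor action produces a new snapshot; the renderer
// watches for a new object and redraws the Canvas.
function updatePart(state: AvatarState, part: "eyes" | "mouth", id: string): AvatarState {
  const next = { ...state };          // shallow copy: new snapshot per edit
  next[part] = id;
  return next;
}
```

Because each edit is a fresh object, features like reset, random generation, or history become a matter of swapping snapshots rather than mutating the canvas directly.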

Layer Three: Drawing

Everything eventually flows into a Canvas drawing function.

That function walks through the parts in a fixed order. For each part, it:

  • finds the asset selected by the current state
  • calculates its position from default config and filename metadata
  • applies scale, anchor, and offset rules
  • handles mirroring or tinting when needed
  • draws the result to the Canvas

This layer is not the flashiest part of the project, but it is the most important one.

Once the number of assets grows, small inconsistencies start to matter:

  • Different assets have different dimensions
  • Their visual centers are not always the same
  • Some parts need to align to the center of the face
  • Some assets should keep their original colors, while others should follow skin or part colors
  • Some hats are split into front and back layers

If all of that were handled with hard-coded coordinates, the project would become painful to maintain very quickly. Instead, most of the behavior is now expressed through a combination of config, metadata, and drawing rules.
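Reduced to just the position math, that walk looks something like the sketch below. The layer names and rule shape are illustrative, and the actual pixel pushing is left out: in the browser, each computed rectangle would become a `ctx.drawImage` call on the Canvas 2D context.

```typescript
interface DrawRule { dx: number; dy: number; scale: number; w: number; h: number }
interface DrawOp { x: number; y: number; w: number; h: number }

// Fixed layer order: back layers first, front layers last.
// (Note the hat split into back and front halves.)
const LAYER_ORDER = ["hatBack", "sideHair", "face", "eyes", "mouth", "bangs", "glasses", "hatFront"];

// Compute where each selected asset lands on a square canvas:
// center on the face, then apply per-asset offset and scale.
function layoutAvatar(
  selected: Record<string, DrawRule | undefined>,
  canvasSize: number,
): { part: string; op: DrawOp }[] {
  const ops: { part: string; op: DrawOp }[] = [];
  for (const part of LAYER_ORDER) {
    const rule = selected[part];
    if (!rule) continue;                         // part disabled or absent
    const w = rule.w * rule.scale;
    const h = rule.h * rule.scale;
    const x = (canvasSize - w) / 2 + rule.dx;    // center, then offset
    const y = (canvasSize - h) / 2 + rule.dy;
    ops.push({ part, op: { x, y, w, h } });
  }
  return ops;
}
```

Keeping this as a pure function over state and config is what makes the inconsistencies listed above manageable: each quirk becomes a rule, not a scattered special case.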


4. The Most Time-Consuming Part Was Asset Alignment

If you have worked on this kind of remake before, you probably know the feeling: the slowest part is often not writing code. It is aligning the assets.

Assets extracted from a Flash file do not magically line up in a new frontend environment. In the original system, alignment depended on its own coordinate system, anchor rules, and runtime behavior. Once the assets were pulled out on their own, all the hidden assumptions disappeared. Hats drifted, glasses were the wrong size, and facial features did not stack cleanly.

The solution I settled on was simple but effective: encode offset and scale data directly in the file names.

For example:

  • hat_30_x50_y-50.png
  • glasses_21_x-35_y25_s0.5.png

At runtime, the system parses those suffixes and feeds them into the drawing calculation. This turned asset tuning from “change code and test again” into “rename the file and let the system pick it up.”
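The parsing itself is small. This is a sketch of the idea, not the actual code; the regexes and defaults here are illustrative:

```typescript
// Parse offset and scale out of a filename suffix like
// "_x-35_y25_s0.5". Missing fields fall back to sensible defaults.
interface Placement { dx: number; dy: number; scale: number }

function parsePlacement(filename: string): Placement {
  const x = /_x(-?\d+(?:\.\d+)?)/.exec(filename);
  const y = /_y(-?\d+(?:\.\d+)?)/.exec(filename);
  const s = /_s(\d+(?:\.\d+)?)/.exec(filename);
  return {
    dx: x ? Number(x[1]) : 0,      // default: no offset
    dy: y ? Number(y[1]) : 0,
    scale: s ? Number(s[1]) : 1,   // default: original size
  };
}
```

So the glasses file above would yield an offset of (-35, 25) at half scale, and a file with no suffix at all draws at its default position and size.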

I also went through several rounds of rendering consistency fixes. The oval avatar version needed extra attention, especially around background color and pattern layering, so that manual selection, random generation, and reset behavior all produced consistent results.


5. A Small Optimization That Made a Big Difference

Another important part of the experience was initial load speed.

Avatar generators have a built-in tension: users should be able to start immediately, but the app also has a lot of image assets behind it.

Loading everything up front makes the first screen slower. Loading nothing in advance makes part switching feel flickery. I ended up using a two-stage loading strategy:

  • Stage one: load only the assets needed for the default avatar, so the page becomes usable quickly
  • Stage two: load the remaining assets quietly in the background

From the user’s point of view, the page opens quickly, and switching parts still feels smooth.
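The shape of that strategy can be sketched in a few lines. This is a simplified version, with the image loader injected as a parameter so the same logic works with browser `Image` objects, `fetch`, or a test double:

```typescript
// Two-stage loading sketch (names are illustrative):
// stage one blocks on the default avatar's assets only;
// stage two warms the rest without blocking the UI.
async function loadInStages(
  defaultAssets: string[],
  allAssets: string[],
  loadImage: (src: string) => Promise<void>,
): Promise<void> {
  // Stage one: the page becomes usable as soon as this resolves.
  await Promise.all(defaultAssets.map(loadImage));

  // Stage two: fire-and-forget background warming.
  const rest = allAssets.filter((a) => !defaultAssets.includes(a));
  void Promise.all(rest.map(loadImage)).catch(() => {
    // Background failures are non-fatal; assets can load on demand later.
  });
}
```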

This is not a clever trick, but it is a useful one. Users do not care how many files are being loaded behind the scenes. They care whether the tool is ready when they want to use it.

It is also one of the advantages of rebuilding the project for the modern web. You can design loading around the user’s path through the app instead of being constrained by the old runtime.


6. Why the Experience Feels So Much Better

When I say the experience feels dramatically better, I do not mean that as a formal benchmark. I mean it in the everyday sense of using the tool.

The improvement comes from a few places:

  • There is no Flash compatibility layer in the way
  • The rendering path is shorter and more direct
  • Asset loading can be prioritized around what the user needs first
  • Configuration-driven logic is easier to maintain than scattered hard-coded values
  • Modern browsers are simply good at this kind of work

With the old version, I had to wonder whether the browser would freeze. With the rebuilt version, it opens quickly, edits smoothly, and exports without drama.

For an avatar maker, being light and fast matters more than adding a pile of complicated features.


7. Where AI Helped

I did use AI while building this project, but not as a one-prompt website generator.

It was more of an accelerator.

I first mapped out the original structure myself and separated the problems: which ones were asset problems, which ones were rendering problems, and which ones were interaction problems. Then I used AI to help scaffold parts of the editor, organize TypeScript types, and fill in some repetitive implementation work. After that, I still had to bring in the extracted assets, tune the alignment, and make the final decisions by hand.

The hard part was never “write a React page.” The hard part was understanding the old system, rebuilding its asset model, and dealing with the small rendering details that decide whether the final result feels right.

AI saved me time on the mechanical parts, but the direction, architecture, tuning, and trade-offs still had to come from me.


8. What I Took Away from the Project

At first, I thought of this as a nostalgic remake or a small technical experiment.

After finishing it, I felt differently. A lot of old products are not outdated because their ideas are bad. They are outdated because they are trapped inside old technology stacks.

They may still have strong concepts, mature interaction patterns, and distinctive visual styles. Some of them still have a place in people’s memory. What aged out was often just the technical shell around them.

Rebuilding that kind of experience with modern frontend tools can be genuinely worthwhile. It is a way to move a piece of old web culture into the present without losing what made it interesting.

That is more than a visual refresh. It is an engineering rebuild.


9. The Finished Version

The final version is called Square Face Generator.

It keeps the classic Japanese retro avatar style, but the whole editor now runs natively in modern browsers. No Flash, no compatibility layer, real-time editing, and PNG export.

You can try it here: squareface.me

If you have worked on Flash migrations, asset remakes, Canvas editors, or old-resource reconstruction, I would be happy to compare notes.