
prj#03

Algorithmic Dream

09/2025

The Goal

 

If dreams explore possibilities, why not play with an algorithm to try building images?


This project sets out to generate images by combining and evolving simple shapes—much like dreams combine and evolve ideas.


Long before Gen AI was a thing, Genetic Algorithms (GAs) were already around in the 1970s.


By evolving these primitive shapes with a Genetic Algorithm, the project aims to create complex and intriguing images inspired by reference works.

Building the GA

The engine of this project is split into two main parts. One handles the orchestration — starting from an initial population, iterating across generations, scoring candidates, and keeping track of the best results. The other runs the actual Genetic Algorithm logic.
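The orchestration loop can be sketched in miniature. This is a toy, self-contained stand-in, not the project's actual code: a "genome" here is just a list of floats and the "reference" a target vector, but the shape of the loop — initial population, per-generation scoring, elites carried forward, mutated children filling the rest — is the same.

```python
import random

# Toy sketch of the orchestration loop. `TARGET` stands in for the
# reference image; a genome is a list of floats instead of shapes.
TARGET = [0.2, 0.5, 0.8]

def random_genome():
    return [random.random() for _ in TARGET]

def score(genome):
    # Lower is better: mean squared error against the target.
    return sum((g - t) ** 2 for g, t in zip(genome, TARGET)) / len(TARGET)

def next_generation(ranked, elite=2, size=20):
    # Keep the elites; fill the rest with mutated copies of elites.
    children = ranked[:elite]
    while len(children) < size:
        parent = random.choice(ranked[:elite])
        children.append([g + random.gauss(0, 0.1) for g in parent])
    return children

def evolve(generations=200, size=20):
    population = [random_genome() for _ in range(size)]
    for _ in range(generations):
        population.sort(key=score)
        population = next_generation(population, size=size)
    return min(population, key=score)

best = evolve()
```

Because the elites survive unchanged each round, the best score can only improve or stay put across generations.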

The scoring function relies mostly on pixel color comparison between the generated image and the reference. There were experiments with mixing in CLIP features too, but in the end raw pixel checks turned out to be more reliable. Each genome stores its own cached results — features, baked image, score — so the process doesn’t repeat the heavy lifting unnecessarily.
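The caching idea can be sketched like this — an illustrative assumption, not the project's exact classes. `bake` here is a deliberately silly placeholder renderer; the point is that the baked image and the score are computed once per genome and then reused.

```python
import numpy as np

# Sketch of per-genome caching for the pixel-colour score.
# `bake` is a placeholder renderer, purely illustrative.
class Genome:
    def __init__(self, shapes):
        self.shapes = shapes
        self._image = None   # cached baked image
        self._score = None   # cached score against the reference

    def bake(self, size=(8, 8)):
        if self._image is None:
            # Placeholder: blend each shape colour into the canvas.
            canvas = np.zeros((*size, 3), dtype=np.float32)
            for colour in self.shapes:
                canvas[:] = (canvas + colour) / 2.0
            self._image = canvas
        return self._image

    def score(self, reference):
        if self._score is None:
            diff = self.bake(reference.shape[:2]) - reference
            self._score = float(np.mean(diff ** 2))  # pixel-wise MSE
        return self._score
```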

The population evolves using a combination of strategies:

- Elites: the best matches move forward directly.

- Mutation: genomes get tweaked by either adding new shapes or perturbing existing ones.

- Tournament selection: parents are picked through a competitive mini-game, then combined through one of five crossover strategies: half-and-half, alternating merge, random slice, weighted random, or top-shapes mix.

- Mutate again: just in case evolution feels lazy.

With this setup, each generation gets a fair mix of survival, competition, and chaos.
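Tournament selection and a few of the crossover strategies above can be sketched on plain Python lists (treating a genome as a list of shapes). The exact rules here — tournament size, split points — are assumptions for illustration; only three of the five crossovers are shown.

```python
import random

# Sketch of tournament selection plus three of the five crossover
# strategies: half-and-half, alternating merge, and random slice.
def tournament(population, scores, k=3):
    # Pick k random contenders; the lowest-scoring one wins.
    contenders = random.sample(range(len(population)), k)
    return population[min(contenders, key=lambda i: scores[i])]

def half_and_half(a, b):
    # First half of parent a, second half of parent b.
    return a[:len(a) // 2] + b[len(b) // 2:]

def alternating_merge(a, b):
    # Interleave shapes from both parents.
    out = []
    for x, y in zip(a, b):
        out.extend([x, y])
    return out

def random_slice(a, b):
    # Cut both parents at one random point and splice.
    cut = random.randint(0, min(len(a), len(b)))
    return a[:cut] + b[cut:]
```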

Below is the output after 3k generations. Reminder: this all started with a single-color background image.

gen_1840.png

 

Experimental Sankey schematic view (Mermaid)

sankey-mermaid.jpg

Learning by Failing

I started by asking if CLIP could guide the process, comparing generated images to a reference. It works for recognizing concepts, but not for pixel-level details.
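The failure mode is easy to see in miniature. CLIP guidance boils down to cosine similarity between embeddings; two images of "a red circle" embed almost identically even when their pixels differ a lot, so the score plateaus. The vectors below are made up for illustration — no CLIP model is actually called.

```python
import numpy as np

# Cosine similarity between (assumed precomputed) image embeddings --
# the core of a CLIP-guided score. Conceptually close images score
# near 1.0 regardless of pixel-level differences.
def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```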

 

Next, I tried BLIP-2, hoping its captions could steer the GA. It could spot a “red circle on a white background,” but not whether my GA’s blob was truly red.

 

SAM (Segment Anything Model) was next: segment the reference and rebuild it shape by shape. The rebuilds, however, ignored visual similarity to the reference.
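One way to seed the GA from segmentation output is to collapse each mask into a circle gene — centroid plus the radius of an equal-area circle. This is a hypothetical sketch: SAM itself is not called here, and the mask is assumed to arrive as a boolean NumPy array.

```python
import numpy as np

# Collapse a boolean segmentation mask into a circle gene:
# (centre x, centre y, radius of a circle with the same area).
def mask_to_circle(mask):
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    radius = np.sqrt(mask.sum() / np.pi)  # equal-area radius
    return float(cx), float(cy), float(radius)
```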

 

Parallel runs and experimenting with brush shapes helped slightly — circles worked best — but the GA still struggled.
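A circle brush itself is simple enough to sketch: paint a semi-transparent disc onto an RGB canvas with plain alpha blending. The blending rule and default alpha below are illustrative assumptions, not the project's renderer.

```python
import numpy as np

# Paint a semi-transparent disc onto a float RGB canvas.
def paint_circle(canvas, cx, cy, radius, colour, alpha=0.5):
    h, w, _ = canvas.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    # Standard alpha blend inside the disc; untouched outside.
    canvas[mask] = (1 - alpha) * canvas[mask] + alpha * np.asarray(colour)
    return canvas
```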

Below is the evolution of scores, with artifacts from trying to learn from shapes extracted by SAM (which honestly did not help, although there is a possible fix for this).

inverted-peak2.jpg

The challenge was learning to draw details on images.

 

I tried zooming in and running the GA on sections of the images, as shown below with Meninas, Margarita, and Flowers details.

These are three different results using zoomed reference images, each run for approximately 2k generations. With more generations there is still room to improve the rough details in all three.
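The zoom-in trick amounts to running the GA against a crop of the reference rather than the whole image. A minimal sketch, with coordinates chosen purely for illustration:

```python
import numpy as np

# Crop a square region of interest from the reference image; the GA
# then runs against the crop alone, so shapes spend their budget on
# local detail instead of the whole canvas.
def crop_region(image, top, left, size):
    return image[top:top + size, left:left + size].copy()
```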

meninas_zoom_in.jpg

 

The Final Show

This gallery shows several runs of the Genetic Algorithm attempting to match well-known reference images.

 

The images are familiar enough that no introduction is needed — the GA did the talking.

(Note: the gallery of videos was created procedurally; more about this in future projects.)

Reminder: Genetic Algorithms (GAs) have been around since the 1970s, long before Gen AI existed.
