Guide

Note from the author

Basically, I use math and some arbitrary binning to automatically generate color palettes for uploaded images. Everything is done in the browser, so no files are uploaded or stored anywhere. The math can be a bit janky at times, so I also included the ability to manually edit and fine-tune your own colors. The generated color palettes are really just a starting point. For the super nerds out there, below is the general programmatic approach I used to do the approximation.


Simple Summary

The website shrinks your image, groups similar colors together, and throws away tiny one-off color noise. That leaves a shortlist of the main colors actually doing visual work in the image.

From there, it tries to guess which color is probably the background, scores the rest by how common, vivid, and distinct they are, and assigns them jobs like base, secondary, dark neutral, or accent.

If the image does not have enough clearly different colors, the website fills in the gaps with the next-best options or close tonal variants.

Controls

Color bands

This sets how many horizontal bands appear in the final square. Higher values ask the palette selector to find more distinct roles or generate additional fallback bands if the source image is too simple.

In practice:

  • `3` produces a tighter, simpler hierarchy.
  • `5` or `6` pushes the system to preserve more nuance and accent colors.

Sample detail

This controls how large the image is when the website samples it for color extraction. The image is downscaled to this size on its longest edge before clustering.

Higher sample detail means:

  • more pixels are analyzed
  • smaller color differences are preserved
  • the result is a bit slower but more precise

Lower sample detail means:

  • fewer pixels are analyzed
  • the palette is smoother and more simplified
  • analysis is faster

Key Terms

Prominence

How much of the image a cluster occupies. A color that covers more pixels has higher prominence.

prominence = cluster.count / totalPixels

Saturation

How colorful versus gray a color is. Low saturation means muted or neutral. High saturation means vivid.

saturation = delta / maxChannel
where delta = maxChannel - minChannel

Brightness

How light a color is. Higher brightness means the color is closer to white.

brightness = max(r/255, g/255, b/255)

Chroma

In this implementation, `chroma` is treated as the same value as saturation and is used as the vividness term in scoring.

chroma = saturation

Distance from background

How different a color is from the detected background color, normalized by the largest possible RGB distance (sqrt(3) × 255 ≈ 441.67). Larger distance means stronger contrast from the backdrop.

distanceFromBackground =
  colorDistance(cluster, background) / 441.6729559300637

Neutral bias

A helper term that prefers subdued colors for background and dark-neutral roles.

neutralBias = 1 - chroma

Detail / edge energy

A lightweight high-frequency measure based on local brightness changes. Detailed subject regions score higher, while smooth background regions score lower.

detail = normalized local luminance difference energy

Algorithm

  1. Resize for analysis. The image is scaled so its longest edge matches the chosen sample detail, but never below 32 pixels on either side, then drawn to an off-screen canvas with smoothing enabled.
  2. Collect pixels and local detail. Fully or mostly transparent pixels are skipped, the remaining RGB channels are quantized into 8-value steps, and each pixel gets a normalized detail score from local brightness differences to the right, down, and diagonal neighbors.
  3. Cluster colors. A k-means-style pass starts from evenly sampled seed colors, uses a cluster count of roughly three times the requested band count (minimum eight), and runs 10 assignment-and-average iterations in RGB space.
  4. Merge near-duplicates and remove tiny groups. After clustering, any two clusters closer than 26 units in RGB distance are merged, color metrics are recomputed, and clusters smaller than the minimum share threshold are discarded.
  5. Detect background. The website ranks clusters by a background score that favors colors that are large, neutral, moderately bright, and low-detail; if background removal is enabled, that top candidate is only removed when it also passes extra prominence, saturation, brightness, and detail checks.
  6. Score remaining colors. Each surviving cluster gets prominence, chroma, distance-from-background, and one combined score built from those three signals.
  7. Assign palette roles. The website fills roles in order, using different scoring rules for base/background, secondary, dark neutral, primary accent, and secondary accent so each slot prefers a different kind of color.
  8. Fallback if needed. If there still are not enough bands, the website first takes the strongest unused clusters, then creates derived lighter or darker variants from already selected colors until the requested count is reached.
  9. Normalize weights and render. Each selected role gets a default weight, those weights are normalized to add up to 100%, sorted largest to smallest, converted to hex colors, and then drawn as stacked horizontal bands.
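The clustering in step 3 can be sketched in JavaScript. This is a minimal illustration, not the site's actual source: `clusterColors` is a hypothetical name, pixels are assumed to be plain `{r, g, b}` objects, and seeding, ties, and empty clusters are handled in the simplest possible way.

```javascript
// Minimal k-means-style clustering over RGB pixels (step 3).
// Seeds are sampled evenly across the pixel list; 10 fixed iterations.
function clusterColors(pixels, requestedBands) {
  const k = Math.max(8, requestedBands * 3); // ~3x bands, minimum 8 clusters
  const step = Math.max(1, Math.floor(pixels.length / k));
  let centers = Array.from({ length: k }, (_, i) => ({
    ...pixels[Math.min(i * step, pixels.length - 1)],
  }));

  for (let iter = 0; iter < 10; iter++) {
    // Accumulate each pixel into its nearest center.
    const sums = centers.map(() => ({ r: 0, g: 0, b: 0, count: 0 }));
    for (const p of pixels) {
      let best = 0;
      let bestDist = Infinity;
      for (let c = 0; c < centers.length; c++) {
        const d =
          (p.r - centers[c].r) ** 2 +
          (p.g - centers[c].g) ** 2 +
          (p.b - centers[c].b) ** 2;
        if (d < bestDist) { bestDist = d; best = c; }
      }
      const s = sums[best];
      s.r += p.r; s.g += p.g; s.b += p.b; s.count += 1;
    }
    // Move each center to the average of its assigned pixels;
    // empty clusters stay where they are.
    centers = sums.map((s, i) =>
      s.count > 0
        ? { r: s.r / s.count, g: s.g / s.count, b: s.b / s.count }
        : centers[i]
    );
  }
  return centers;
}
```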

Exact Formulas

Below are the main equations used in the implementation.

1. Analysis resize

scale = min(1, sampleDetail / max(imageWidth, imageHeight))
analysisWidth = max(32, round(imageWidth * scale))
analysisHeight = max(32, round(imageHeight * scale))

This resizes the image to a smaller analysis canvas so the site can work faster while still keeping the overall color structure.
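As a minimal sketch (the hypothetical `analysisSize` helper below is for illustration; the actual canvas drawing step is omitted), formula 1 in JavaScript:

```javascript
// Compute the analysis-canvas size from the "Sample detail" setting.
function analysisSize(imageWidth, imageHeight, sampleDetail) {
  const scale = Math.min(1, sampleDetail / Math.max(imageWidth, imageHeight));
  return {
    width: Math.max(32, Math.round(imageWidth * scale)),   // floor at 32 px
    height: Math.max(32, Math.round(imageHeight * scale)), // floor at 32 px
  };
}
```

For example, a 4000×2000 image at sample detail 200 shrinks to 200×100, while a 40×20 image is floored to 40×32.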

2. Cluster filtering threshold

quantizedChannel = clamp(round(channel / 8) * 8, 0, 255)

minimumClusterShare = max(0.0025, 0.018 / requestedBands)
keep cluster if cluster.count > totalPixels * minimumClusterShare

This block simplifies raw pixel colors slightly and removes tiny clusters that are too small to matter in the final palette.
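The same two rules as a JavaScript sketch (function names are illustrative, not from the site's source):

```javascript
// Snap one RGB channel (0-255) to the nearest multiple of 8.
function quantizeChannel(channel) {
  return Math.min(255, Math.max(0, Math.round(channel / 8) * 8));
}

// Keep a cluster only if it covers more than the minimum share of pixels.
function keepCluster(cluster, totalPixels, requestedBands) {
  const minimumClusterShare = Math.max(0.0025, 0.018 / requestedBands);
  return cluster.count > totalPixels * minimumClusterShare;
}
```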

3. Saturation and brightness

r' = r / 255
g' = g / 255
b' = b / 255

maxChannel = max(r', g', b')
minChannel = min(r', g', b')
delta = maxChannel - minChannel

saturation = (maxChannel === 0) ? 0 : delta / maxChannel
brightness = maxChannel

This converts RGB into easier signals for palette ranking: how vivid a color is and how light it is.
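Formula 3 as a small JavaScript sketch (the `colorStats` name is illustrative), taking 0-255 channel values:

```javascript
// Saturation and brightness in the HSV sense, from 0-255 RGB channels.
function colorStats(r, g, b) {
  const maxChannel = Math.max(r, g, b) / 255;
  const minChannel = Math.min(r, g, b) / 255;
  const delta = maxChannel - minChannel;
  return {
    saturation: maxChannel === 0 ? 0 : delta / maxChannel, // 0 for pure black
    brightness: maxChannel,
  };
}
```

Pure red gives saturation 1 and brightness 1; any gray gives saturation 0.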

4. RGB distance

colorDistance(a, b) =
sqrt((a.r - b.r)^2 + (a.g - b.g)^2 + (a.b - b.b)^2)

This measures how different two colors are in RGB space, which the site uses for contrast and duplicate merging.
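In JavaScript this is a one-liner; the normalization constant used elsewhere is simply the distance between pure black and pure white:

```javascript
// Euclidean distance between two {r, g, b} colors in RGB space.
function colorDistance(a, b) {
  return Math.sqrt((a.r - b.r) ** 2 + (a.g - b.g) ** 2 + (a.b - b.b) ** 2);
}
// colorDistance(black, white) = sqrt(3) * 255 = 441.6729559300637,
// the maximum possible value.
```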

5. Background score

luminance = r * 0.299 + g * 0.587 + b * 0.114

detailEnergy =
  abs(luminance(x, y) - luminance(x + 1, y)) * 0.45
  + abs(luminance(x, y) - luminance(x, y + 1)) * 0.45
  + abs(luminance(x, y) - luminance(x + 1, y + 1)) * 0.10

cluster.detail = average(detailEnergy for pixels assigned to the cluster), with each pixel's detailEnergy normalized to a 0-1 range first

backgroundScore =
  count * 1.2
  + (1 - saturation) * 120
  - abs(brightness - 0.78) * 40
  - detail * 110

This estimates which cluster is most likely to be the background by favoring large, smooth, neutral colors and penalizing detailed ones.

The cluster with the highest `backgroundScore` is treated as the detected background candidate.
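A minimal sketch of that selection (hypothetical helper names; each cluster is assumed to already carry `count`, `saturation`, `brightness` in 0-1, and an averaged `detail` value):

```javascript
// Score how background-like a cluster is: big, neutral, moderately
// bright, and smooth scores high; detailed regions are penalized.
function backgroundScore(cluster) {
  return cluster.count * 1.2
    + (1 - cluster.saturation) * 120
    - Math.abs(cluster.brightness - 0.78) * 40
    - cluster.detail * 110;
}

// The detected background candidate is simply the top-scoring cluster.
function detectBackground(clusters) {
  return clusters.reduce((best, c) =>
    backgroundScore(c) > backgroundScore(best) ? c : best);
}
```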

6. Background removal rule

prominence = background.count / totalPixels
detailThreshold = max(0.16, average(cluster.detail) * 0.9)

ignore background if:
  prominence > 0.22
  and background.saturation < 0.24
  and background.brightness > 0.55
  and background.detail < detailThreshold

This is the final gate for background removal, so a color is ignored only if it is large, subdued, bright enough, and not too detailed.
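The gate translates directly to a boolean check (sketch only; `shouldIgnoreBackground` is a hypothetical name):

```javascript
// Decide whether the detected background cluster should be excluded
// from the palette. averageDetail is the mean detail over all clusters.
function shouldIgnoreBackground(background, totalPixels, averageDetail) {
  const prominence = background.count / totalPixels;
  const detailThreshold = Math.max(0.16, averageDetail * 0.9);
  return prominence > 0.22
    && background.saturation < 0.24
    && background.brightness > 0.55
    && background.detail < detailThreshold;
}
```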

7. Cluster scoring

prominence = cluster.count / totalPixels
chroma = saturation
distanceFromBackground = colorDistance(cluster, background) / 441.6729559300637

score =
  prominence * 0.45
  + chroma * 0.30
  + distanceFromBackground * 0.25

This creates one overall importance score for each cluster before the website starts assigning palette roles.
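Putting formula 7 together in JavaScript (sketch with a hypothetical `clusterScore` helper; clusters are `{r, g, b, count, saturation}` objects):

```javascript
// Largest possible RGB distance: sqrt(3) * 255.
const MAX_RGB_DISTANCE = 441.6729559300637;

// Combined importance score: prominence, vividness, and background contrast.
function clusterScore(cluster, background, totalPixels) {
  const prominence = cluster.count / totalPixels;
  const chroma = cluster.saturation; // chroma is just saturation here
  const dr = cluster.r - background.r;
  const dg = cluster.g - background.g;
  const db = cluster.b - background.b;
  const distanceFromBackground =
    Math.sqrt(dr * dr + dg * dg + db * db) / MAX_RGB_DISTANCE;
  return prominence * 0.45 + chroma * 0.30 + distanceFromBackground * 0.25;
}
```

For instance, a gray cluster covering half the pixels of a white-background image scores 0.5 × 0.45 + 0 + 1 × 0.25 ≈ 0.475 when its color is pure black.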

8. Role scores

neutralBias = 1 - chroma
brightnessGap = abs(cluster.brightness - background.brightness)

base/background =
  prominence * 0.70 + neutralBias * 0.20 + brightness * 0.10

secondary =
  prominence * 0.45 + distanceFromBackground * 0.25
  + neutralBias * 0.20 + (1 - brightnessGap) * 0.10

dark neutral =
  (1 - brightness) * 0.45 + neutralBias * 0.30
  + prominence * 0.20 + distanceFromBackground * 0.05

primary accent =
  score * 0.40 + chroma * 0.35 + distanceFromBackground * 0.25

secondary accent =
  score * 0.35 + chroma * 0.30
  + distanceFromBackground * 0.25 + prominence * 0.10

This role logic decides which cluster behaves like the base, secondary, dark neutral, or accent colors in the final hierarchy.
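The five rules can be collected into one sketch (illustrative only; each cluster is assumed to carry precomputed `prominence`, `chroma`, `brightness`, `distanceFromBackground`, and overall `score` fields):

```javascript
// Score one cluster against every palette role (formula 8).
function roleScores(c, background) {
  const neutralBias = 1 - c.chroma;
  const brightnessGap = Math.abs(c.brightness - background.brightness);
  return {
    base: c.prominence * 0.70 + neutralBias * 0.20 + c.brightness * 0.10,
    secondary: c.prominence * 0.45 + c.distanceFromBackground * 0.25
      + neutralBias * 0.20 + (1 - brightnessGap) * 0.10,
    darkNeutral: (1 - c.brightness) * 0.45 + neutralBias * 0.30
      + c.prominence * 0.20 + c.distanceFromBackground * 0.05,
    primaryAccent: c.score * 0.40 + c.chroma * 0.35
      + c.distanceFromBackground * 0.25,
    secondaryAccent: c.score * 0.35 + c.chroma * 0.30
      + c.distanceFromBackground * 0.25 + c.prominence * 0.10,
  };
}
```

Each role would then be filled by the unused cluster with the highest score for that role.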

9. Default role weights

base/background = 0.42
secondary = 0.24
dark neutral = 0.16
primary accent = 0.10
secondary accent = 0.08
support accent = 0.06

These are the default visual proportions the website assigns to each palette role before any manual slider edits happen. They are not required to sum to 1 on their own, because the normalization step in formula 10 rescales whichever roles end up being used.

10. Normalize band weights

normalizedWeight_i = weight_i / sum(all weights)

This converts all band weights into percentages of a whole so the final square always adds up correctly.

After normalization, bands are sorted from largest weight to smallest weight.

11. Rendered band heights

bandHeight_i = round(canvasHeight * normalizedWeight_i)
lastBandHeight = canvasHeight - sum(previous band heights)

This turns normalized weights into actual rendered band heights on the canvas from top to bottom.
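Formulas 10 and 11 together, as a sketch (the `renderBandHeights` name is hypothetical):

```javascript
// Normalize role weights, sort largest-first, and convert to pixel heights.
function renderBandHeights(weights, canvasHeight) {
  const total = weights.reduce((a, w) => a + w, 0);
  const normalized = weights.map(w => w / total).sort((a, b) => b - a);
  const heights = normalized.map(w => Math.round(canvasHeight * w));
  // Give any rounding remainder to the last band so heights sum exactly.
  const used = heights.slice(0, -1).reduce((a, h) => a + h, 0);
  heights[heights.length - 1] = canvasHeight - used;
  return heights;
}
```

With the default weights and a 100-pixel canvas this yields bands of 42, 24, 16, 10, and 8 pixels.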

12. Manual slider redistribution

minimumWeight = 0.01
requestedWeight = clamp(requestedWeight, minimumWeight, 1 - minimumWeight * (n - 1))
delta = requestedWeight - currentWeight

if delta > 0:
  subtract delta evenly from all other bands, but never below minimumWeight

if delta < 0:
  add |delta| evenly across all other bands

normalize all weights again afterwards

This explains how moving one band slider redistributes weight across the rest while keeping the full palette at 100%.
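The redistribution steps above can be sketched as one function (illustrative; `setBandWeight` is a hypothetical name, and the per-band floor is applied in the simplest way before renormalizing):

```javascript
// Set one band's weight; pull (or push) the difference evenly across
// the other bands, floor them at minimumWeight, then renormalize.
function setBandWeight(weights, index, requestedWeight) {
  const n = weights.length;
  const minimumWeight = 0.01;
  const clamped = Math.min(
    Math.max(requestedWeight, minimumWeight),
    1 - minimumWeight * (n - 1)
  );
  const delta = clamped - weights[index];
  const next = weights.slice();
  next[index] = clamped;
  for (let i = 0; i < n; i++) {
    if (i === index) continue;
    // delta > 0 subtracts from the others; delta < 0 adds to them.
    next[i] = Math.max(minimumWeight, next[i] - delta / (n - 1));
  }
  const total = next.reduce((a, w) => a + w, 0);
  return next.map(w => w / total); // keep the palette at 100%
}
```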