If what you need is the average color of a rectangular area instead of the color of a single pixel, check out this other question:
👉 JavaScript - Get the average color from a certain area of an image
In any case, both things are done in a very similar way:
🔍 Getting the color/value of a single pixel from an image or canvas
To get the color of a single pixel, you would first draw that image onto a canvas, which you have already done:
const image = document.getElementById('image');
const canvas = document.createElement('canvas');
const context = canvas.getContext('2d');
const width = image.width;
const height = image.height;
canvas.width = width;
canvas.height = height;
context.drawImage(image, 0, 0, width, height);
Then you can get the values of a single pixel like this:
const data = context.getImageData(X, Y, 1, 1).data;
// RED = data[0]
// GREEN = data[1]
// BLUE = data[2]
// ALPHA = data[3]
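If you need to do this for more than one image, a small wrapper like the hypothetical createPixelSampler below (not part of the original code) can create the canvas once and reuse it for every sample:

function createPixelSampler(image) {
  const canvas = document.createElement('canvas');
  const context = canvas.getContext('2d');
  canvas.width = image.width;
  canvas.height = image.height;
  context.drawImage(image, 0, 0, image.width, image.height);
  // Returns the [R, G, B, A] components of the pixel at (x, y):
  return (x, y) => context.getImageData(x, y, 1, 1).data;
}

// Usage (assuming the #image element has already loaded):
// const samplePixel = createPixelSampler(document.getElementById('image'));
// const [r, g, b, a] = samplePixel(10, 20);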
🚀 Speeding things up by getting all the ImageData at once
You need to use the same CanvasRenderingContext2D.getImageData() to get the values of the whole image, which you can do by changing its third and fourth arguments. The signature of that function is:
ImageData ctx.getImageData(sx, sy, sw, sh);
sx: The x-coordinate of the top-left corner of the rectangle from which the ImageData will be extracted.
sy: The y-coordinate of the top-left corner of the rectangle from which the ImageData will be extracted.
sw: The width of the rectangle from which the ImageData will be extracted.
sh: The height of the rectangle from which the ImageData will be extracted.
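For example, a hypothetical call that grabs the data for the whole canvas in one go (third and fourth arguments set to the full width and height) would look like this:

const imageData = context.getImageData(0, 0, canvas.width, canvas.height);

// imageData.data holds width * height * 4 values (R, G, B and A per pixel):
console.log(imageData.data.length === canvas.width * canvas.height * 4); // true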
As you can see, it returns an ImageData object, whatever that is. The important part here is that this object has a .data property that contains all our pixel values.
Note, however, that this .data property is a one-dimensional Uint8ClampedArray, which means that the components of all the pixels are flattened out, so you will get something that looks like this:
Imagine you have a 2x2 image like this:
RED PIXEL | GREEN PIXEL
BLUE PIXEL | TRANSPARENT PIXEL
Then, you will get their values like this:
[ 255, 0, 0, 255, 0, 255, 0, 255, 0, 0, 255, 255, 0, 0, 0, 0 ]
| RED PIXEL | GREEN PIXEL | BLUE PIXEL | TRANSPARENT PIXEL |
| 1ST PIXEL | 2ND PIXEL | 3RD PIXEL | 4TH PIXEL |
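If you want to verify that layout yourself, here is a minimal sketch (assuming a browser environment) that paints that exact 2x2 image onto a tiny canvas and logs its data:

const tiny = document.createElement('canvas');
tiny.width = tiny.height = 2;
const tinyContext = tiny.getContext('2d');
tinyContext.fillStyle = 'red';                 // rgb(255, 0, 0)
tinyContext.fillRect(0, 0, 1, 1);              // top-left pixel
tinyContext.fillStyle = 'lime';                // rgb(0, 255, 0)
tinyContext.fillRect(1, 0, 1, 1);              // top-right pixel
tinyContext.fillStyle = 'blue';                // rgb(0, 0, 255)
tinyContext.fillRect(0, 1, 1, 1);              // bottom-left pixel
// The bottom-right pixel is left untouched, so it stays transparent black.
console.log(tinyContext.getImageData(0, 0, 2, 2).data);
// Uint8ClampedArray(16) [255, 0, 0, 255, 0, 255, 0, 255, 0, 0, 255, 255, 0, 0, 0, 0]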
As calling getImageData is a slow operation, you should call it just once to get the data for the whole image (sw = image width, sh = image height).
Then, following the example above, if you wanted to access the components of the TRANSPARENT PIXEL, that is, the one at position x = 1, y = 1 of this imaginary image, you would find the index i of its first component inside ImageData's data property like this:
const i = (y * imageData.width + x) * 4;
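Built on that formula, a small hypothetical helper could return the four components of any pixel from a full-image ImageData:

function getPixel(imageData, x, y) {
  const i = (y * imageData.width + x) * 4;
  // Returns a Uint8ClampedArray with the [R, G, B, A] components of that pixel:
  return imageData.data.slice(i, i + 4);
}

// In the 2x2 example above, getPixel(imageData, 1, 1) would return [0, 0, 0, 0].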
✨ Let's see it in action
const solidColor = document.getElementById('solidColor');
const alphaColor = document.getElementById('alphaColor');
const solidWeighted = document.getElementById('solidWeighted');
const solidColorCode = document.getElementById('solidColorCode');
const alphaColorCode = document.getElementById('alphaColorCode');
const solidWeightedCode = document.getElementById('solidWeightedCode');
const brush = document.getElementById('brush');
const image = document.getElementById('image');
const canvas = document.createElement('canvas');
const context = canvas.getContext('2d');
const width = image.width;
const height = image.height;
const BRUSH_SIZE = brush.offsetWidth;
const BRUSH_CENTER = BRUSH_SIZE / 2;
const MIN_X = image.offsetLeft + 4;
const MAX_X = MIN_X + width - 1;
const MIN_Y = image.offsetTop + 4;
const MAX_Y = MIN_Y + height - 1;
canvas.width = width;
canvas.height = height;
context.drawImage(image, 0, 0, width, height);
const imageDataData = context.getImageData(0, 0, width, height).data;
function sampleColor(clientX, clientY) {
if (clientX < MIN_X || clientX > MAX_X || clientY < MIN_Y || clientY > MAX_Y) {
requestAnimationFrame(() => {
brush.style.transform = `translate(${ clientX }px, ${ clientY }px)`;
solidColorCode.innerText = solidColor.style.background = 'rgb(0, 0, 0)';
alphaColorCode.innerText = alphaColor.style.background = 'rgba(0, 0, 0, 0.00)';
solidWeightedCode.innerText = solidWeighted.style.background = 'rgb(0, 0, 0)';
});
return;
}
const imageX = clientX - MIN_X;
const imageY = clientY - MIN_Y;
const i = (imageY * width + imageX) * 4;
// A single pixel (R, G, B, A) will take 4 positions in the array:
const R = imageDataData[i];
const G = imageDataData[i + 1];
const B = imageDataData[i + 2];
const A = imageDataData[i + 3] / 255;
const iA = 1 - A;
// Alpha-weighted color:
const wR = (R * A + 255 * iA) | 0;
const wG = (G * A + 255 * iA) | 0;
const wB = (B * A + 255 * iA) | 0;
// Update UI:
requestAnimationFrame(() => {
brush.style.transform = `translate(${ clientX }px, ${ clientY }px)`;
solidColorCode.innerText = solidColor.style.background
= `rgb(${ R }, ${ G }, ${ B })`;
alphaColorCode.innerText = alphaColor.style.background
= `rgba(${ R }, ${ G }, ${ B }, ${ A.toFixed(2) })`;
solidWeightedCode.innerText = solidWeighted.style.background
= `rgb(${ wR }, ${ wG }, ${ wB })`;
});
}
document.onmousemove = (e) => sampleColor(e.clientX, e.clientY);
sampleColor(MIN_X, MIN_Y);
body {
margin: 0;
height: 100vh;
display: flex;
flex-direction: row;
align-items: center;
justify-content: center;
cursor: none;
font-family: monospace;
overflow: hidden;
}
#image {
border: 4px solid white;
border-radius: 2px;
box-shadow: 0 0 32px 0 rgba(0, 0, 0, .25);
width: 150px;
box-sizing: border-box;
}
#brush {
position: absolute;
top: 0;
left: 0;
pointer-events: none;
width: 1px;
height: 1px;
mix-blend-mode: exclusion;
border-radius: 100%;
}
#brush::before,
#brush::after {
content: '';
position: absolute;
background: magenta;
}
#brush::before {
top: -16px;
left: 0;
height: 33px;
width: 100%;
}
#brush::after {
left: -16px;
top: 0;
width: 33px;
height: 100%;
}
#samples {
position: relative;
list-style: none;
padding: 0;
width: 250px;
}
#samples::before {
content: '';
position: absolute;
top: 0;
left: 27px;
width: 2px;
height: 100%;
background: black;
border-radius: 1px;
}
#samples > li {
position: relative;
display: flex;
flex-direction: column;
justify-content: center;
padding-left: 56px;
}
#samples > li + li {
margin-top: 8px;
}
.sample {
position: absolute;
top: 50%;
left: 16px;
transform: translate(0, -50%);
display: block;
width: 24px;
height: 24px;
border-radius: 100%;
box-shadow: 0 0 16px 4px rgba(0, 0, 0, .25);
margin-right: 8px;
}
.sampleLabel {
font-weight: bold;
margin-bottom: 8px;
}
.sampleCode {
}
<img id="image" src="data:image/gif;base64,R0lGODlhSwBLAPEAACMfIO0cJAAAAAAAACH/C0ltYWdlTWFnaWNrDWdhbW1hPTAuNDU0NTUAIf4jUmVzaXplZCBvbiBodHRwczovL2V6Z2lmLmNvbS9yZXNpemUAIfkEBQAAAgAsAAAAAEsASwAAAv+Uj6mb4A+QY7TaKxvch+MPKpC0eeUUptdomOzJqnLUvnFcl7J6Pzn9I+l2IdfII8DZiCnYsYdK4qRTptAZwQKRVK71CusOgx2nFRrlhMu+33o2NEalC6S9zQvfi3Mlnm9WxeQ396F2+HcQsMjYGEBRVbhy5yOp6OgIeVIHpEnZyYCZ6cklKBJX+Kgg2riqKoayOWl2+VrLmtDqBptIOjZ6K4qAeSrL8PcmHExsgMs2dpyIxPpKvdhM/YxaTMW2PGr9GP76BN3VHTMurh7eoU14jsc+P845Vn6OTb/P/I68iYOfwGv+JOmRNHBfsV5ujA1LqM4eKDoNvXyDqItTxYX/DC9irKBlIhkKGPtFw1JDiMeS7CqWqySPZcKGHH/JHGgIpb6bCl1O0LmT57yCOqoI5UcU0YKjPXmFjMm0ZQ4NIVdGBdZRi9WrjLxJNMY1Yr4dYeuNxWApl1ALHb+KDHrTV1owlriedJgSr4Cybu/9dFiWYAagsqAGVkkzaZTAuqD9ywKWMUG9dCO3u2zWpVzIhpW122utZlrHnTN+Bq2Mqrlnqh8CQ+0Mrq3Kc++q7eo6dlB3rLuh3abPVbbbI2mxBdhWdsZhid8cr0oy9F08q0k5FXSadiyL1mF5z51a8VsQOp3/LlodkBfzmzWf2bOrtfzr48k/1hupDaLa9rUbO+zlwndfaOCURAXRNaCBqBT2BncJakWfTzSYkmCEFr60RX0V8sKaHOltCBJ1tAAFYhHaVVbig3jxp0IBADs=" >
<div id="brush"></div>
<ul id="samples">
<li>
<span class="sample" id="solidColor"></span>
<div class="sampleLabel">solidColor</div>
<div class="sampleCode" id="solidColorCode">rgb(0, 0, 0)</div>
</li>
<li>
<span class="sample" id="alphaColor"></span>
<div class="sampleLabel">alphaColor</div>
<div class="sampleCode" id="alphaColorCode">rgba(0, 0, 0, 0.00)</div>
</li>
<li>
<span class="sample" id="solidWeighted"></span>
<div class="sampleLabel">solidWeighted (with white)</div>
<div class="sampleCode" id="solidWeightedCode">rgb(0, 0, 0)</div>
</li>
</ul>
⚠️ Note that I'm using a small data URI to avoid the Cross-Origin issues I would get if I included an external image, or an answer larger than allowed if I tried to use a longer data URI.
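If you do need to sample an external image, a common approach (a sketch, assuming the image host sends the proper CORS headers) is to request it with the crossOrigin attribute; otherwise the canvas gets tainted and getImageData() throws a SecurityError:

const externalImage = new Image();
externalImage.crossOrigin = 'anonymous'; // requires Access-Control-Allow-Origin on the server
externalImage.onload = () => {
  const externalCanvas = document.createElement('canvas');
  externalCanvas.width = externalImage.width;
  externalCanvas.height = externalImage.height;
  const externalContext = externalCanvas.getContext('2d');
  externalContext.drawImage(externalImage, 0, 0);
  console.log(externalContext.getImageData(0, 0, 1, 1).data);
};
externalImage.src = 'https://example.com/image.png'; // hypothetical URL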
🕵️ Those colors look weird, don't they?
If you move the cursor around the border of the asterisk shape, you will sometimes see that solidColor is red, even though the pixel you are sampling looks white. That's because even though the R component of that pixel may be high, the alpha channel is low, so the color is actually an almost-transparent shade of red, but solidColor ignores that.
On the other hand, alphaColor looks pink. Well, that's not actually true; it just looks pink because we are now using the alpha channel, which makes it semi-transparent and lets us see the background of the page through it, which in this case is white.
🎨 Alpha-weighted color
So, what can we do to fix this? Well, it turns out we just need to use the alpha channel and its inverse as weights to calculate the components of our new sample, in this case merging it with white, as that's the color we are using as the background.
This means that if a pixel is R, G, B, A, where A is in the interval [0, 1], we calculate the inverse of the alpha channel, iA, and the components of the weighted sample like this:
const iA = 1 - A;
const wR = (R * A + 255 * iA) | 0;
const wG = (G * A + 255 * iA) | 0;
const wB = (B * A + 255 * iA) | 0;
Note how the more transparent a pixel is (A closer to 0), the lighter its color will look.
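The same idea generalises to any background color, not just white. Here is a hypothetical helper (not part of the snippet above) that blends a pixel with an arbitrary background:

function alphaWeighted(R, G, B, A, bgR = 255, bgG = 255, bgB = 255) {
  const iA = 1 - A; // A is expected to be in the interval [0, 1]
  return [
    (R * A + bgR * iA) | 0,
    (G * A + bgG * iA) | 0,
    (B * A + bgB * iA) | 0,
  ];
}

// A 50%-transparent red over a white background looks like a light pink:
console.log(alphaWeighted(255, 0, 0, 0.5)); // [255, 127, 127]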