
Rembg is a tool to remove image backgrounds. It can be used as a CLI, Python library, HTTP server, or Docker container.


Sponsors

  • Unsplash
  • PhotoRoom Remove Background API (https://photoroom.com/api): fast and accurate background remover API

If this project has helped you, please consider making a donation.

Requirements

python: >=3.11, <3.14

Installation

Choose one of the following backends based on your hardware:

CPU support

pip install "rembg[cpu]" # for library
pip install "rembg[cpu,cli]" # for library + cli

GPU support (NVIDIA/CUDA)

First, check if your system supports onnxruntime-gpu by visiting onnxruntime.ai and reviewing the installation matrix.


If your system is compatible, run:

pip install "rembg[gpu]" # for library
pip install "rembg[gpu,cli]" # for library + cli

Note: NVIDIA GPUs may require onnxruntime-gpu, CUDA, and cudnn-devel. See #668 for details. If rembg[gpu] doesn't work and you can't install CUDA or cudnn-devel, use rembg[cpu] with onnxruntime instead.

GPU support (AMD/ROCm)

ROCm support requires the onnxruntime-rocm package. Install it by following AMD's documentation.

Once onnxruntime-rocm is installed and working, install rembg with ROCm support:

pip install "rembg[rocm]" # for library
pip install "rembg[rocm,cli]" # for library + cli

Usage as a CLI

After installation, you can use rembg by typing rembg in your terminal.

The rembg command has 4 subcommands, one for each input type:

  • i - single files
  • p - folders (batch processing)
  • s - HTTP server
  • b - RGB24 pixel binary stream

You can get help about the main command using:

rembg --help

You can also get help for any subcommand:

rembg <COMMAND> --help

rembg i

Used for processing single files.

Remove background from a remote image:

curl -s http://input.png | rembg i > output.png

Remove background from a local file:

rembg i path/to/input.png path/to/output.png

Specify a model:

rembg i -m u2netp path/to/input.png path/to/output.png

Return only the mask:

rembg i -om path/to/input.png path/to/output.png

Apply alpha matting:

rembg i -a path/to/input.png path/to/output.png

Pass extra parameters (SAM example):

rembg i -m sam -x '{ "sam_prompt": [{"type": "point", "data": [724, 740], "label": 1}] }' examples/plants-1.jpg examples/plants-1.out.png

Pass extra parameters (custom model):

rembg i -m u2net_custom -x '{"model_path": "~/.u2net/u2net.onnx"}' path/to/input.png path/to/output.png
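The value passed to -x is ordinary JSON. As a sketch using only the Python standard library, the SAM prompt from the example above can be built programmatically before shelling out (the coordinates are the ones from the example):

```python
import json

# A single foreground point prompt for SAM, matching the structure shown
# above for the `-x` option: a "sam_prompt" list of point entries, each
# with pixel coordinates ("data") and a label (1 = foreground).
extra = {
    "sam_prompt": [
        {"type": "point", "data": [724, 740], "label": 1},
    ]
}

# Serialize to the JSON string that would follow `-x` on the command line.
payload = json.dumps(extra)
print(payload)
```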

rembg p

Used for batch processing entire folders.

Process all images in a folder:

rembg p path/to/input path/to/output

Watch mode (process new/changed files automatically):

rembg p -w path/to/input path/to/output

rembg s

Used to start an HTTP server.

rembg s --host 0.0.0.0 --port 7000 --log_level info

For complete API documentation, visit: http://localhost:7000/api

Remove background from an image URL:

curl -s "http://localhost:7000/api/remove?url=http://input.png" -o output.png

Remove background from an uploaded image:

curl -s -F file=@/path/to/input.jpg "http://localhost:7000/api/remove" -o output.png
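The URL variant is a plain GET with the source image address percent-encoded into the query string. A minimal sketch assembling the request URL from the first curl example with the standard library (host and port assume a server started as shown above):

```python
from urllib.parse import urlencode

# Endpoint of a local `rembg s` server (assumed from the example above).
base = "http://localhost:7000/api/remove"

# The source image URL goes in the `url` query parameter, percent-encoded.
query = urlencode({"url": "http://input.png"})
request_url = f"{base}?{query}"
print(request_url)  # http://localhost:7000/api/remove?url=http%3A%2F%2Finput.png
```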

rembg b

Process a sequence of RGB24 images from stdin. This is intended to be used with programs like FFmpeg that output RGB24 pixel data to stdout.

rembg b <width> <height> -o <output_specifier>

Arguments:

  • width: Width of the input image(s)
  • height: Height of the input image(s)
  • output_specifier: Printf-style specifier for output filenames (e.g., output-%03u.png produces output-000.png, output-001.png, etc.). Omit to write to stdout.

Example with FFmpeg:

ffmpeg -i input.mp4 -ss 10 -an -f rawvideo -pix_fmt rgb24 pipe:1 | rembg b 1280 720 -o folder/output-%03u.png

Note: The width and height must match FFmpeg's output dimensions. The flags -an -f rawvideo -pix_fmt rgb24 pipe:1 are required for FFmpeg compatibility.
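Because rawvideo carries no framing metadata, rembg b can only split the byte stream into frames by arithmetic. A short sketch of that arithmetic and of the printf-style naming:

```python
# RGB24 means 3 bytes per pixel with no padding or headers, so one frame
# in the stream is exactly width * height * 3 bytes. This is why the
# width/height arguments must match FFmpeg's output dimensions exactly.
width, height = 1280, 720
frame_bytes = width * height * 3
print(frame_bytes)  # 2764800

# The printf-style output specifier is applied per frame index:
names = ["output-%03u.png" % i for i in range(3)]
print(names)  # ['output-000.png', 'output-001.png', 'output-002.png']
```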

Usage as a Library

Input and output as bytes:

from rembg import remove

with open('input.png', 'rb') as i:
    with open('output.png', 'wb') as o:
        input = i.read()
        output = remove(input)
        o.write(output)

Input and output as a PIL image:

from rembg import remove
from PIL import Image

input = Image.open('input.png')
output = remove(input)
output.save('output.png')

Input and output as a NumPy array:

from rembg import remove
import cv2

input = cv2.imread('input.png')
output = remove(input)
cv2.imwrite('output.png', output)

Force output as bytes:

from rembg import remove

with open('input.png', 'rb') as i:
    with open('output.png', 'wb') as o:
        input = i.read()
        output = remove(input, force_return_bytes=True)
        o.write(output)

Batch processing with session reuse (recommended for performance):

from pathlib import Path
from rembg import remove, new_session

session = new_session()

for file in Path('path/to/folder').glob('*.png'):
    input_path = str(file)
    output_path = str(file.parent / (file.stem + ".out.png"))

    with open(input_path, 'rb') as i:
        with open(output_path, 'wb') as o:
            input = i.read()
            output = remove(input, session=session)
            o.write(output)

For more examples, see the examples page.

Usage with Docker

CPU Only

Replace the rembg command with docker run danielgatis/rembg:

docker run -v .:/data danielgatis/rembg i /data/input.png /data/output.png

NVIDIA CUDA GPU Acceleration

Requirements: Your host must have the NVIDIA Container Toolkit installed.

CUDA acceleration requires cudnn-devel, so you need to build the Docker image yourself. See #668 for details.

Build the image:

docker build -t rembg-nvidia-cuda-cudnn-gpu -f Dockerfile_nvidia_cuda_cudnn_gpu .

Note: This image requires ~11GB of disk space (CPU version is ~1.6GB). Models are not included.

Run the container:

sudo docker run --rm -it --gpus all -v /dev/dri:/dev/dri -v $PWD:/data rembg-nvidia-cuda-cudnn-gpu i -m birefnet-general /data/input.png /data/output.png

Tips:

  • You can create your own NVIDIA CUDA image and install rembg[gpu,cli] in it.
  • Use -v /path/to/models/:/root/.u2net to store model files outside the container, avoiding re-downloads.

Models

All models are automatically downloaded and saved to ~/.u2net/ on first use.

Available Models

  • u2net (download, source): A pre-trained model for general use cases.
  • u2netp (download, source): A lightweight version of the u2net model.
  • u2net_human_seg (download, source): A pre-trained model for human segmentation.
  • u2net_cloth_seg (download, source): A pre-trained model for clothes parsing from human portraits. Clothes are parsed into three categories: upper body, lower body, and full body.
  • silueta (download, source): Same as u2net, but with the size reduced to 43 MB.
  • isnet-general-use (download, source): A new pre-trained model for general use cases.
  • isnet-anime (download, source): A high-accuracy segmentation model for anime characters.
  • sam (download encoder, download decoder, source): A pre-trained model for any use case.
  • birefnet-general (download, source): A pre-trained model for general use cases.
  • birefnet-general-lite (download, source): A light pre-trained model for general use cases.
  • birefnet-portrait (download, source): A pre-trained model for human portraits.
  • birefnet-dis (download, source): A pre-trained model for dichotomous image segmentation (DIS).
  • birefnet-hrsod (download, source): A pre-trained model for high-resolution salient object detection (HRSOD).
  • birefnet-cod (download, source): A pre-trained model for concealed object detection (COD).
  • birefnet-massive (download, source): A pre-trained model trained on a massive dataset.
  • bria-rmbg (download, source): A state-of-the-art background removal model by BRIA AI.

Environment Variables

  • U2NET_HOME: Path to the directory where models are stored. Defaults to $XDG_DATA_HOME/.u2net (or ~/.u2net if XDG_DATA_HOME is not set).
  • XDG_DATA_HOME: Base data directory used when U2NET_HOME is not set. Defaults to ~.
  • MODEL_CHECKSUM_DISABLED: When set (e.g. MODEL_CHECKSUM_DISABLED=1), disables hash verification for downloaded models. This is useful if you want to use your own custom/converted model files without rembg re-downloading the originals.
  • OMP_NUM_THREADS: Sets the number of threads used by ONNX Runtime for inference.
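The lookup order for the model directory can be sketched as follows. This is an illustrative re-implementation of the documented defaults, not rembg's actual code, and model_dir is a hypothetical helper:

```python
import os
from pathlib import Path

def model_dir(env):
    """Resolve the model directory from an environment mapping."""
    # U2NET_HOME, when set, wins outright.
    if "U2NET_HOME" in env:
        return Path(env["U2NET_HOME"])
    # Otherwise fall back to $XDG_DATA_HOME/.u2net, where XDG_DATA_HOME
    # itself defaults to the home directory (yielding ~/.u2net).
    base = env.get("XDG_DATA_HOME", os.path.expanduser("~"))
    return Path(base) / ".u2net"

print(model_dir({"U2NET_HOME": "/models"}))   # /models
print(model_dir({"XDG_DATA_HOME": "/data"}))  # /data/.u2net
```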

Using custom model files

If you need to use a modified version of a model (e.g. converted to a different ONNX IR version for compatibility with an older CUDA toolkit), you can prevent rembg from overwriting it:

  1. Set MODEL_CHECKSUM_DISABLED=1
  2. Place your custom .onnx file in the models directory (~/.u2net/ by default) with the expected filename (e.g. u2net.onnx)
  3. Rembg will detect that the file exists and use it without re-downloading the original

FAQ

When will this library support Python version 3.xx?

This library depends on onnxruntime. Python version support is determined by onnxruntime's compatibility.

Support

If you find this project useful, consider buying me a coffee (or a beer):

Buy Me A Coffee

Star History

Star History Chart

License

Copyright (c) 2020-present Daniel Gatis

Licensed under the MIT License.
