API Reference Overview

WatermarkLab provides a comprehensive set of modules and functions for the development, evaluation, and comparison of image watermarking techniques. This API reference details the core components of the library, including evaluation frameworks, watermark models, attack methods, evaluation metrics, and data handling utilities.

Modules

  • laboratories: Core evaluation frameworks that orchestrate testing pipelines for both PGW and IGW models.
  • watermarks: Implementations of watermarking algorithms, covering both PGW and IGW approaches.
  • attackers: A comprehensive collection of image attacks for testing watermark robustness.
  • metrics: Evaluation metrics for measuring visual quality and watermark robustness.
  • datasets: Data loading utilities and dataset classes for handling various image data sources.
  • steganography: Steganographic techniques for hiding information in digital images.
  • draw: Utilities for visualizing watermarking results and generating plots.
  • tools: Utility functions and helper classes for watermarking research.
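
For orientation, these modules surface in the examples throughout this reference via the following imports (all drawn from the code below):

import watermarklab as wl                                             # top-level evaluate() and the draw plotting module
from watermarklab.utils.data import DataLoader                        # batched data loading
from watermarklab.datasets import MS_COCO_2017_VAL_IMAGES             # image dataset for PGW benchmarks
from watermarklab.datasets import MS_COCO_2017_VAL_PROMPTS            # prompt dataset for IGW benchmarks
from watermarklab.attackers.attackerloader import AttackersWithFactorsModel  # default attack suite
from watermarklab.watermarks.PGWs import StegaStamp                   # a built-in PGW model
from watermarklab.watermarks.IGWs import TreeRing                     # a built-in IGW model
from watermarklab.utils.basemodel import BaseWatermarkModel, BaseTestAttackModel  # extension base classes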

Introduction

WatermarkLab is a comprehensive Python toolkit for the development, evaluation, and comparison of robust image watermarking techniques. It supports both Post-Generation Watermarking (PGW) and In-Generation Watermarking (IGW) approaches, providing a unified framework for benchmarking watermarking algorithms under various attack scenarios.

Key Features

  • Unified evaluation pipeline for both PGW and IGW models
  • Extensive collection of image attacks for robustness testing
  • Comprehensive set of evaluation metrics
  • Modular design for easy extension
  • Structured result reporting with visualizations

Installation

Installation Command

pip install watermarklab

Download Resources

huggingface-cli download chenoly/watermarklab
huggingface-cli download stabilityai/stable-diffusion-2-1-base
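
To verify the installation, a minimal import check (wl.evaluate is the entry point used throughout this reference):

python -c "import watermarklab as wl; print(wl.evaluate)"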

Core Concepts

Watermark Model Types

PGW (Post-Generation Watermarking)

The watermark is embedded into pre-existing images after they have been created.

IGW (In-Generation Watermarking)

The watermark is embedded during the image generation process (e.g., in diffusion models).
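
To make the distinction concrete, here is a toy NumPy contrast, illustrative only and not the WatermarkLab API: a PGW embedder rewrites pixels of an image that already exists, while an IGW embedder feeds the payload into the randomness that produces the image in the first place.

import numpy as np

def pgw_embed(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Post-generation: overwrite least significant bits of an existing image."""
    stego = cover.copy()
    stego.flat[:bits.size] = (stego.flat[:bits.size] & 0xFE) | bits
    return stego

def igw_generate(bits: np.ndarray, size: int = 64) -> np.ndarray:
    """In-generation: the payload shapes the randomness that produces the image."""
    seed = int(np.packbits(bits)[0])  # toy stand-in for steering a generator's initial noise
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, (size, size), dtype=np.uint8)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
bits = rng.integers(0, 2, 32, dtype=np.uint8)
stego = pgw_embed(cover, bits)   # PGW: the image existed first
image = igw_generate(bits)       # IGW: image and watermark are created together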

Evaluation Pipeline

The library provides a standardized evaluation process consisting of the following steps (a toy end-to-end sketch follows the list):

  1. Embedding watermark into cover images/prompts
  2. Applying various attacks (noise, compression, geometric transformations)
  3. Extracting watermark from attacked images
  4. Computing evaluation metrics
  5. Generating comprehensive result reports
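
wl.evaluate automates this loop. As a concrete illustration, here is a self-contained NumPy toy of our own (a naive additive scheme, not an algorithm shipped with WatermarkLab) that walks the same five steps end to end:

import numpy as np

rng = np.random.default_rng(0)
alpha = 4.0  # embedding strength

# 1. Embed: each payload bit shifts one row of the cover by +alpha or -alpha
cover = rng.normal(128.0, 20.0, size=(64, 64))
bits = rng.integers(0, 2, size=64)
signs = np.where(bits == 1, 1.0, -1.0).repeat(64).reshape(64, 64)
stego = cover + alpha * signs

# 2. Attack: additive Gaussian noise stands in for the attack suite
attacked = stego + rng.normal(0.0, 3.0, size=stego.shape)

# 3. Extract: recover each bit from the mean shift of its row (non-blind: uses the cover)
decoded = ((attacked - cover).mean(axis=1) > 0).astype(int)

# 4. Metrics: bit accuracy for robustness, PSNR for visual quality
bit_acc = (decoded == bits).mean()
psnr = 10 * np.log10(255.0 ** 2 / np.mean((stego - cover) ** 2))

# 5. Report
print(f"bit accuracy: {bit_acc:.3f}, PSNR: {psnr:.2f} dB")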

Benchmarking IGWs

Here's a quick example of how to use WatermarkLab to benchmark built-in IGW models:

import warnings

import watermarklab as wl
from watermarklab.utils.data import DataLoader
from watermarklab.datasets import MS_COCO_2017_VAL_PROMPTS
from watermarklab.attackers.attackerloader import AttackersWithFactorsModel
from watermarklab.watermarks.IGWs import StableSignature, GaussianShading, TreeRing


warnings.filterwarnings('ignore')
default_attackers = AttackersWithFactorsModel()  # bundles the default attackers with their strength factors

# GaussianShading: 256-bit payload
gaussianshading = GaussianShading(local_files_only=True)
mscoco_256 = MS_COCO_2017_VAL_PROMPTS(bit_len=256)
wl.evaluate("save_results/IGWs/", gaussianshading, default_attackers, DataLoader(mscoco_256, batch_size=128), noise_save=True)

# TreeRing: detection-oriented scheme, evaluated with bit_len=1
treering = TreeRing(local_files_only=True)
mscoco_1 = MS_COCO_2017_VAL_PROMPTS(bit_len=1)
wl.evaluate("save_results/IGWs/", treering, default_attackers, DataLoader(mscoco_1, batch_size=32), noise_save=True)

# StableSignature: 48-bit payload
stablesignature = StableSignature(local_files_only=True)
mscoco_48 = MS_COCO_2017_VAL_PROMPTS(bit_len=48)
wl.evaluate("save_results/IGWs/", stablesignature, default_attackers, DataLoader(mscoco_48, batch_size=32), need_cover=True, noise_save=True)

Benchmarking PGWs

Here's a quick example of how to use WatermarkLab to benchmark built-in PGW models:

import warnings

import watermarklab as wl
from watermarklab.utils.data import DataLoader
from watermarklab.datasets import MS_COCO_2017_VAL_IMAGES
from watermarklab.attackers.attackerloader import AttackersWithFactorsModel
from watermarklab.watermarks.PGWs import rivaGAN, dctDwtSvd, TrustMark, StegaStamp, InvisMark, VINE

warnings.filterwarnings('ignore')


default_attackers = AttackersWithFactorsModel()

# MS COCO val images at 256x256 with a 32-bit payload
mscoco2017_256_32 = MS_COCO_2017_VAL_IMAGES(im_size=256, bit_len=32)
dataloader_256_32 = DataLoader(mscoco2017_256_32, batch_size=32)

rivagan = rivaGAN(bits_len=32, img_size=256)
wl.evaluate("save_results/PGWs", rivagan, default_attackers, dataloader_256_32, noise_save=True)

dctdwtsvd = dctDwtSvd(bits_len=32, img_size=256)
wl.evaluate("save_results/PGWs", dctdwtsvd, default_attackers, dataloader_256_32, noise_save=True)

# MS COCO val images at 256x256 with a 100-bit payload
mscoco2017_256_100 = MS_COCO_2017_VAL_IMAGES(im_size=256, bit_len=100)
dataloader_256_100 = DataLoader(mscoco2017_256_100, batch_size=32)
invismark = InvisMark(bits_len=100, img_size=256)
wl.evaluate("save_results/PGWs", invismark, default_attackers, dataloader_256_100, noise_save=True)

trustmark = TrustMark(bits_len=100, img_size=256)
wl.evaluate("save_results/PGWs", trustmark, default_attackers, dataloader_256_100, noise_save=True)

# MS COCO val images at 400x400 with a 100-bit payload (StegaStamp runs at 400x400 here)
mscoco2017_400_100 = MS_COCO_2017_VAL_IMAGES(im_size=400, bit_len=100)
dataloader_400_100 = DataLoader(mscoco2017_400_100, batch_size=16)
stegastamp = StegaStamp(bits_len=100, img_size=400)
wl.evaluate("save_results/PGWs", stegastamp, default_attackers, dataloader_400_100, noise_save=True)


# VINE reuses the 256x256 / 100-bit dataloader created above
vine = VINE(bits_len=100, img_size=256)
wl.evaluate("save_results/PGWs", vine, default_attackers, dataloader_256_100, noise_save=True)

How to Evaluate Your Watermark Model?

Here's a quick example of how to use WatermarkLab for evaluating your custom watermark model:

from numpy import ndarray
import watermarklab as wl
from typing import List, Any
from watermarklab.utils.data import DataLoader
from watermarklab.datasets import MS_COCO_2017_VAL_IMAGES
from watermarklab.utils.basemodel import BaseWatermarkModel, Result
from watermarklab.attackers.attackerloader import AttackersWithFactorsModel


class YourWatermark(BaseWatermarkModel):
    """Skeleton for a custom watermark model. Implement `embed`, `extract`, `recover`."""
    def __init__(self, bits_len: int, img_size: int, modelname: str):
        super().__init__(bits_len, img_size, modelname)

    def embed(self, cover_list: List[Any], secrets: List[Any]) -> Result:
        """Embed the watermark into the cover data."""
        pass  # Replace with your embedding logic

    def extract(self, stego_list: List[ndarray]) -> Result:
        """Extract the watermark from the stego data."""
        pass  # Replace with your extraction logic

    def recover(self, stego_list: List[ndarray]) -> Result:
        """Recover the original cover data (for reversible watermarking)."""
        pass  # Replace with your recovery logic, or leave unimplemented if not applicable


# --- Set Up Evaluation ---
default_attackers = AttackersWithFactorsModel() # Default set of attackers

# Instantiate your model (replace `your_payload` and `your_img_size` with your settings)
your_payload = 100
your_img_size = 400
your_model = YourWatermark(bits_len=your_payload, img_size=your_img_size, modelname="YourModelName")

# Load dataset: MS COCO 2017 validation images
mscoco_400_100 = MS_COCO_2017_VAL_IMAGES(im_size=your_img_size, bit_len=your_payload, image_num=500)
# Create DataLoader for batched processing
dataloader_400_100 = DataLoader(mscoco_400_100, batch_size=64)


# --- Run Evaluation ---
# Evaluate your model
wl.evaluate("save_results/PGWs", stegastamp, default_attackers, dataloader_400_100)

# --- Visualize Results (Optional) ---
# Plot robustness results
wl.draw.plot_model_robustness_under_single_attack([f"save_results/PGWs/{your_model.modelname}"], "save_draw")

How to Visualize Results?

Here's a quick example of how to use WatermarkLab's draw module to visualize saved benchmark results:

import watermarklab as wl
from watermarklab.attackers.attackerloader import AttackersWithFactorsModel

results = ['saved_all_json/result_dctDwtSvd.json',
           'saved_all_json/result_rivaGAN.json',
           'saved_all_json/result_TrustMark-Q.json',
           'saved_all_json/result_InvisMark.json',
           'saved_all_json/result_StegaStamp.json',
           'saved_all_json/result_VINE-R.json',
           'saved_all_json/result_GaussianShading.json',
           'saved_all_json/result_StableSignature.json',
           'saved_all_json/result_TreeRing-Ring.json']

default_attackers = AttackersWithFactorsModel()
wl.draw.plot_model_robustness_under_single_attack(results, "draw_ALL/MR_SA")
wl.draw.plot_model_robustness_under_all_attack(results, "draw_ALL/MR_AA")
wl.draw.plot_model_robustness_scores_under_all_attacks(results, "draw_ALL/MRS_AA")
wl.draw.plot_model_robustness_ranking_under_single_attack(results, "draw_ALL/MRK_SA")
wl.draw.test_compute_overall_robustness_scores(results)
wl.draw.plot_model_robustness_ranking_by_attacker_group(results, "draw_ALL/MRK_GA", default_attackers.attacker_groups)
wl.draw.plot_model_overall_robustness_ranking(results, "draw_ALL/MRK_MO")
wl.draw.plot_all_attack_ranking(results, "draw_ALL/ARK")
wl.draw.plot_attack_effectiveness_at_tpr_levels(results, "draw_ALL/ARK_TPR", tpr_levels=[0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1])
wl.draw.plot_visual_quality(results, "draw_ALL/VQ", show_dataset_name=False)
wl.draw.plot_stego_visualization(results, "draw_ALL/MVC")
wl.draw.plot_attack_visualization(results, "draw_ALL/MVC")

How to Evaluate Your Attacker?

Here's a quick example of how to use WatermarkLab for evaluating your custom attacker:

from typing import List
from numpy import ndarray
import watermarklab as wl
from watermarklab.utils.data import DataLoader
from watermarklab.watermarks.PGWs import StegaStamp  # Example watermark model
from watermarklab.datasets import MS_COCO_2017_VAL_IMAGES  # Example dataset
from watermarklab.utils.basemodel import BaseTestAttackModel, AttackerWithFactors
from watermarklab.attackers.attackerloader import AttackersWithFactorsModel


# --- Define Your Custom Attacker ---
class YourAttacker(BaseTestAttackModel):
    """
    Example class for a user-defined attacker.
    You need to inherit from BaseTestAttackModel and implement the attack method.
    """

    def __init__(self, noisename: str, factor_inversely_related: bool):
        """
        Initializes the custom attacker.

        Args:
            noisename (str): Name of the attacker, used for identification and display.
            factor_inversely_related (bool): 
                Indicates if the attack factor is inversely related to the attack strength.
                E.g., JPEG quality factor: higher value means less distortion.
        """
        # Call the parent class constructor
        super().__init__(noisename, factor_inversely_related)
        # You can add custom attributes here

    def attack(self, stego_img: List[ndarray], cover_img: List[ndarray], factor: float) -> List[ndarray]:
        """
        Applies the attack to the watermarked (stego) images.

        Args:
            stego_img (List[ndarray]): List of watermarked images (Stego images).
            cover_img (List[ndarray]): List of original cover images (may be used as reference).
            factor (float): Factor controlling the intensity of the attack.

        Returns:
            List[ndarray]: List of images after the attack has been applied.
        """
        # --- Implement your attack logic here ---
        return stego_img  # Placeholder, replace with actual attack logic
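
# --- Example: A Concrete Toy Attacker (hypothetical, not shipped with WatermarkLab) ---
# Additive Gaussian noise where `factor` is the noise standard deviation:
# larger factors mean stronger attacks, so factor_inversely_related is False.
import numpy as np

class GaussianNoiseAttacker(BaseTestAttackModel):
    def __init__(self):
        super().__init__("GaussianNoise", False)  # factor grows with attack strength

    def attack(self, stego_img: List[ndarray], cover_img: List[ndarray], factor: float) -> List[ndarray]:
        attacked = []
        for img in stego_img:
            noise = np.random.normal(0.0, factor, size=img.shape)
            attacked.append(np.clip(img.astype(np.float64) + noise, 0, 255).astype(img.dtype))
        return attacked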


# --- Set Up Evaluation ---
# 1. Configure watermark model parameters
your_payload = 100  # Watermark bit length
your_img_size = 400  # Image size (assumed square)

# 2. Instantiate the watermark model (using StegaStamp as an example)
stegastamp = StegaStamp(bits_len=your_payload, img_size=your_img_size)

# 3. Configure your custom attacker
your_attacker_factors = [1, 2, 3, 4, 5, 6, 7]  # Define attack strength factors to test
yourattacker = YourAttacker(
    noisename="YourModelName",  # Your attacker's name
    factor_inversely_related=False  # Set based on your attack's characteristics
)

# 4. Wrap your attacker in AttackerWithFactors for the evaluation framework
#    This specifies the attacker's name, factors, and symbol for display/plots
your_custom_attacker_config = AttackerWithFactors(
    attacker=yourattacker,
    attackername="YourAttackerName",  # Name for display in results/plots
    factors=your_attacker_factors,  # List of attack strength factors
    factorsymbol="YourFactorLatexSymbol"  # LaTeX symbol for the factor (e.g., r'$\sigma$')
)

# 5. Create AttackersWithFactorsModel instance, passing your custom attacker
#    This ensures the evaluation uses only your defined attacker
default_attackers = AttackersWithFactorsModel(
    default_attackers=[your_custom_attacker_config]
)

# 6. Load the dataset
#    Using MS COCO 2017 validation set images
mscoco_400_100 = MS_COCO_2017_VAL_IMAGES(
    im_size=your_img_size,  # Image size
    bit_len=your_payload,  # Watermark bit length
    image_num=500  # Number of images to use
)

# 7. Create DataLoader for batch processing
dataloader_400_100 = DataLoader(mscoco_400_100, batch_size=64)

# --- Run Evaluation ---
# Use watermarklab's evaluate function to test the watermark model
# against your custom attacker.
# Parameters:
# - save_path: Directory to save evaluation results.
# - watermark_model: Instance of the watermark model to evaluate.
# - noise_models: Instance containing attack configurations (your custom attacker).
# - dataloader: DataLoader providing data.
wl.evaluate(
    "save_results/PGWs",  # Path to save results
    stegastamp,  # Watermark model instance
    default_attackers,  # Attack model instance (your custom attacker)
    dataloader_400_100  # DataLoader instance
)

# --- Visualize Results (Optional) ---
# Use watermarklab's plotting tool to generate an attack effectiveness ranking plot.
# Parameters:
# - result_paths: List of paths containing evaluation results.
# - save_path: Directory to save the generated plots.
wl.draw.plot_attack_effectiveness_ranking(
    [f"save_results/PGWs/{stegastamp.modelname}"],  # Path to evaluation results
    "save_draw"  # Path to save plots
)