torchgeo.models¶
Change Star¶
- class torchgeo.models.ChangeStar(dense_feature_extractor, seg_classifier, changemixin, inference_mode='t1t2')[source]¶
Bases:
Module
The base class of the network architecture of ChangeStar.
ChangeStar is composed of any segmentation model and a ChangeMixin module. This model is mainly used for binary/multi-class change detection under bitemporal supervision and single-temporal supervision. It reuses the segmentation architecture, which makes it easy to integrate advanced dense prediction (e.g., semantic segmentation) network architectures into change detection.
For multi-class change detection, semantic change prediction can be inferred from the binary change prediction of the ChangeMixin module and the two semantic predictions of the segmentation model.
If you use this model in your research, please cite the following paper:
- __init__(dense_feature_extractor, seg_classifier, changemixin, inference_mode='t1t2')[source]¶
Initializes a new ChangeStar model.
- Parameters:
dense_feature_extractor (Module) – module for dense feature extraction, typically a semantic segmentation model without its semantic segmentation head.
seg_classifier (Module) – semantic segmentation head, typically a convolutional layer followed by an upsampling layer.
changemixin (ChangeMixin) – a torchgeo.models.ChangeMixin module
inference_mode (str) – name of the inference mode: 't1t2' | 't2t1' | 'mean'. 't1t2': concatenate bitemporal features in the order t1 -> t2; 't2t1': concatenate bitemporal features in the order t2 -> t1; 'mean': the weighted mean of the outputs of 't1t2' and 't2t1'
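As a rough sketch of how these pieces fit together, the example below wires a toy dense feature extractor and segmentation head (both hypothetical placeholders, not part of torchgeo) into ChangeStar. The only constraint illustrated here is that ChangeMixin sees the bitemporal features concatenated, so its in_channels is twice the extractor's feature width; exact output keys and shapes depend on the torchgeo version.

```python
import torch
from torch import nn
from torchgeo.models import ChangeMixin, ChangeStar

# Hypothetical stand-ins for a real dense prediction backbone and head;
# any modules with matching feature widths should work.
extractor = nn.Sequential(
    nn.Conv2d(3, 128, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)
classifier = nn.Conv2d(128, 2, kernel_size=1)

model = ChangeStar(
    dense_feature_extractor=extractor,
    seg_classifier=classifier,
    # bitemporal features are concatenated, hence 2 * 128 input channels
    changemixin=ChangeMixin(in_channels=2 * 128, inner_channels=16, num_convs=4, scale_factor=1.0),
    inference_mode='t1t2',
)

x = torch.randn(2, 2, 3, 64, 64)  # [batch, time=2, channels, height, width]
model.eval()
with torch.no_grad():
    out = model(x)  # dict of segmentation and change outputs
```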
- class torchgeo.models.ChangeStarFarSeg(backbone='resnet50', classes=1, backbone_pretrained=True)[source]¶
Bases:
ChangeStar
The network architecture of ChangeStar(FarSeg).
ChangeStar(FarSeg) is composed of a FarSeg model and a ChangeMixin module.
If you use this model in your research, please cite the following paper:
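A minimal usage sketch, assuming the bitemporal pair is stacked along a time dimension (the convention used by ChangeStar):

```python
import torch
from torchgeo.models import ChangeStarFarSeg

model = ChangeStarFarSeg(backbone='resnet50', classes=2, backbone_pretrained=False)
x = torch.randn(1, 2, 3, 128, 128)  # [batch, time=2, RGB channels, height, width]
model.eval()
with torch.no_grad():
    out = model(x)
```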
- class torchgeo.models.ChangeMixin(in_channels=256, inner_channels=16, num_convs=4, scale_factor=4.0)[source]¶
Bases:
Module
This module enables any segmentation model to detect binary change.
The common usage is to attach this module to a segmentation model without its classification head.
If you use this model in your research, please cite the following paper:
- __init__(in_channels=256, inner_channels=16, num_convs=4, scale_factor=4.0)[source]¶
Initializes a new ChangeMixin module.
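A rough sketch of using the module on its own; it assumes ChangeMixin receives bitemporal feature maps stacked along a time dimension and returns change logits for both temporal orderings (check the current API before relying on exact shapes):

```python
import torch
from torchgeo.models import ChangeMixin

# The default in_channels=256 corresponds to two concatenated 128-channel feature maps.
mixin = ChangeMixin(in_channels=256, inner_channels=16, num_convs=4, scale_factor=4.0)
features = torch.randn(2, 2, 128, 32, 32)  # [batch, time=2, feature channels, h, w]
change_logits = mixin(features)
```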
FarSeg¶
- class torchgeo.models.FarSeg(backbone='resnet50', classes=16, backbone_pretrained=True)[source]¶
Bases:
Module
Foreground-Aware Relation Network (FarSeg).
This model can be used for binary- or multi-class object segmentation, such as building, road, ship, and airplane segmentation. It can also be extended into a change detection model. It features a foreground-scene relation module that models the relation between scene embedding, object context, and object features, thus improving the discrimination of object feature representations.
If you use this model in your research, please cite the following paper:
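A minimal sketch of standalone use, assuming 3-channel input matching the ResNet backbone:

```python
import torch
from torchgeo.models import FarSeg

model = FarSeg(backbone='resnet50', classes=16, backbone_pretrained=False)
x = torch.randn(1, 3, 256, 256)  # [batch, channels, height, width]
with torch.no_grad():
    logits = model(x)  # per-pixel class logits
```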
Fully-convolutional Network¶
FC Siamese Networks¶
- class torchgeo.models.FCSiamConc(encoder_name='resnet34', encoder_depth=5, encoder_weights='imagenet', decoder_use_batchnorm=True, decoder_channels=(256, 128, 64, 32, 16), decoder_attention_type=None, in_channels=3, classes=1, activation=None)[source]¶
Bases:
SegmentationModel
Fully-convolutional Siamese Concatenation (FC-Siam-conc).
If you use this model in your research, please cite the following paper:
- __init__(encoder_name='resnet34', encoder_depth=5, encoder_weights='imagenet', decoder_use_batchnorm=True, decoder_channels=(256, 128, 64, 32, 16), decoder_attention_type=None, in_channels=3, classes=1, activation=None)[source]¶
Initialize a new FCSiamConc model.
- Parameters:
encoder_name (str) – Name of the classification model that will be used as an encoder (a.k.a. backbone) to extract features of different spatial resolutions
encoder_depth (int) – Number of stages used in the encoder, in range [3, 5]. Each stage generates features two times smaller in spatial dimensions than the previous one (e.g., for depth 0 we will have features with shapes [(N, C, H, W)]; for depth 1, [(N, C, H, W), (N, C, H // 2, W // 2)]; and so on). Default is 5
encoder_weights (Optional[str]) – One of None (random initialization), “imagenet” (pre-training on ImageNet), or other pretrained weights (see the table of available weights for each encoder_name)
decoder_channels (Sequence[int]) – List of integers specifying the in_channels parameter for convolutions used in the decoder. The length of the list should be the same as encoder_depth
decoder_use_batchnorm (bool) – If True, a BatchNorm2d layer is used between Conv2D and Activation layers. If “inplace”, InplaceABN is used, which decreases memory consumption. Available options are True, False, “inplace”
decoder_attention_type (Optional[str]) – Attention module used in the decoder of the model. Available options are None and scse (SCSE paper: https://arxiv.org/abs/1808.08127)
in_channels (int) – Number of input channels for the model; default is 3 (RGB images)
classes (int) – Number of classes for the output mask (equivalently, the number of channels of the output mask)
activation (Optional[Union[str, Callable[[Tensor], Tensor]]]) – An activation function to apply after the final convolution layer. Available options are “sigmoid”, “softmax”, “logsoftmax”, “tanh”, “identity”, a callable, and None. Default is None
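A minimal sketch, assuming the bitemporal pair is stacked along a time dimension and the spatial size is divisible by 2**encoder_depth:

```python
import torch
from torchgeo.models import FCSiamConc

model = FCSiamConc(encoder_name='resnet34', encoder_weights=None, in_channels=3, classes=1)
x = torch.randn(1, 2, 3, 256, 256)  # [batch, time=2, channels, height, width]
with torch.no_grad():
    change_logits = model(x)
```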
- class torchgeo.models.FCSiamDiff(*args, **kwargs)[source]¶
Bases:
Unet
Fully-convolutional Siamese Difference (FC-Siam-diff).
If you use this model in your research, please cite the following paper:
- __init__(*args, **kwargs)[source]¶
Initialize a new FCSiamDiff model.
- Parameters:
encoder_name – Name of the classification model that will be used as an encoder (a.k.a. backbone) to extract features of different spatial resolutions
encoder_depth – Number of stages used in the encoder, in range [3, 5]. Each stage generates features two times smaller in spatial dimensions than the previous one (e.g., for depth 0 we will have features with shapes [(N, C, H, W)]; for depth 1, [(N, C, H, W), (N, C, H // 2, W // 2)]; and so on). Default is 5
encoder_weights – One of None (random initialization), “imagenet” (pre-training on ImageNet), or other pretrained weights (see the table of available weights for each encoder_name)
decoder_channels – List of integers specifying the in_channels parameter for convolutions used in the decoder. The length of the list should be the same as encoder_depth
decoder_use_batchnorm – If True, a BatchNorm2d layer is used between Conv2D and Activation layers. If “inplace”, InplaceABN is used, which decreases memory consumption. Available options are True, False, “inplace”
decoder_attention_type – Attention module used in the decoder of the model. Available options are None and scse (SCSE paper: https://arxiv.org/abs/1808.08127)
in_channels – Number of input channels for the model; default is 3 (RGB images)
classes – Number of classes for the output mask (equivalently, the number of channels of the output mask)
activation – An activation function to apply after the final convolution layer. Available options are “sigmoid”, “softmax”, “logsoftmax”, “tanh”, “identity”, a callable, and None. Default is None
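FCSiamDiff follows the same bitemporal input convention as FCSiamConc; a minimal sketch:

```python
import torch
from torchgeo.models import FCSiamDiff

model = FCSiamDiff(encoder_name='resnet34', encoder_weights=None, in_channels=3, classes=1)
x = torch.randn(1, 2, 3, 256, 256)  # [batch, time=2, channels, height, width]
with torch.no_grad():
    change_logits = model(x)
```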
RCF Extractor¶
- class torchgeo.models.RCF(in_channels=4, features=16, kernel_size=3, bias=-1.0, seed=None, mode='gaussian', dataset=None)[source]¶
Bases:
Module
This model extracts random convolutional features (RCFs) from its input.
RCFs are used in the Multi-task Observation using Satellite Imagery & Kitchen Sinks (MOSAIKS) method proposed in “A generalizable and accessible approach to machine learning with global satellite imagery”.
This class can operate in two modes, “gaussian” and “empirical”. In “gaussian” mode, the filters will be sampled from a Gaussian distribution, while in “empirical” mode, the filters will be sampled from a dataset.
If you use this model in your research, please cite the following paper:
Note
This Module is not trainable. It is only used as a feature extractor.
- __init__(in_channels=4, features=16, kernel_size=3, bias=-1.0, seed=None, mode='gaussian', dataset=None)[source]¶
Initializes the RCF model.
This is a static model that serves to extract fixed-length feature vectors from input patches.
New in version 0.2: The seed parameter.
New in version 0.5: The mode and dataset parameters.
- Parameters:
in_channels (int) – number of input channels
features (int) – number of features to compute, must be divisible by 2
kernel_size (int) – size of the kernel used to compute the RCFs
bias (float) – bias of the convolutional layer
seed (Optional[int]) – random seed used to initialize the convolutional layer
mode (str) – “empirical” or “gaussian”
dataset (Optional[NonGeoDataset]) – a NonGeoDataset to sample from when mode is “empirical”
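A minimal sketch of feature extraction in "gaussian" mode (no dataset required); each input patch is mapped to a fixed-length vector of size features:

```python
import torch
from torchgeo.models import RCF

rcf = RCF(in_channels=4, features=16, kernel_size=3, seed=0, mode='gaussian')
patches = torch.randn(8, 4, 64, 64)  # a batch of 4-band patches
with torch.no_grad():
    feats = rcf(patches)  # expected shape: [8, 16]
```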
ResNet¶
- torchgeo.models.resnet18(weights=None, *args, **kwargs)[source]¶
ResNet-18 model.
If you use this model in your research, please cite the following paper:
New in version 0.4.
- Parameters:
weights (Optional[ResNet18_Weights]) – Pre-trained model weights to use.
*args (Any) – Additional arguments to pass to timm.create_model()
**kwargs (Any) – Additional keyword arguments to pass to timm.create_model()
- Returns:
A ResNet-18 model.
- Return type:
ResNet
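For example, to build a ResNet-18 with one of the pretrained weights listed in the tables below (weights are downloaded on first use):

```python
from torchgeo.models import ResNet18_Weights, resnet18

model = resnet18(weights=ResNet18_Weights.SENTINEL2_ALL_MOCO)  # 13-band Sentinel-2 MoCo weights
```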
- torchgeo.models.resnet50(weights=None, *args, **kwargs)[source]¶
ResNet-50 model.
If you use this model in your research, please cite the following paper:
Changed in version 0.4: Switched to multi-weight support API.
- Parameters:
weights (Optional[ResNet50_Weights]) – Pre-trained model weights to use.
*args (Any) – Additional arguments to pass to timm.create_model()
**kwargs (Any) – Additional keyword arguments to pass to timm.create_model()
- Returns:
A ResNet-50 model.
- Return type:
ResNet
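Usage mirrors resnet18(), e.g.:

```python
from torchgeo.models import ResNet50_Weights, resnet50

model = resnet50(weights=ResNet50_Weights.SENTINEL2_ALL_MOCO)
```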
Swin Transformer¶
- torchgeo.models.swin_v2_b(weights=None, *args, **kwargs)[source]¶
Swin Transformer v2 base model.
If you use this model in your research, please cite the following paper:
New in version 0.6.
- Parameters:
weights (Optional[Swin_V2_B_Weights]) – Pre-trained model weights to use.
*args (Any) – Additional arguments to pass to torchvision.models.swin_transformer.SwinTransformer
**kwargs (Any) – Additional keyword arguments to pass to torchvision.models.swin_transformer.SwinTransformer
- Returns:
A Swin Transformer Base model.
- Return type:
SwinTransformer
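For example, with the Satlas NAIP RGB weights listed in the tables below:

```python
from torchgeo.models import Swin_V2_B_Weights, swin_v2_b

model = swin_v2_b(weights=Swin_V2_B_Weights.NAIP_RGB_SATLAS)
```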
- class torchgeo.models.Swin_V2_B_Weights(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases:
WeightsEnum
Swin Transformer v2 Base weights.
For the torchvision swin_v2_b implementation.
New in version 0.6.
Vision Transformer¶
- torchgeo.models.vit_small_patch16_224(weights=None, *args, **kwargs)[source]¶
Vision Transformer (ViT) small patch size 16 model.
If you use this model in your research, please cite the following paper:
New in version 0.4.
- Parameters:
weights (Optional[ViTSmall16_Weights]) – Pre-trained model weights to use.
*args (Any) – Additional arguments to pass to timm.create_model()
**kwargs (Any) – Additional keyword arguments to pass to timm.create_model()
- Returns:
A ViT small 16 model.
- Return type:
VisionTransformer
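For example:

```python
from torchgeo.models import ViTSmall16_Weights, vit_small_patch16_224

model = vit_small_patch16_224(weights=ViTSmall16_Weights.SENTINEL2_ALL_DINO)
```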
Utility Functions¶
- torchgeo.models.get_model(name, *args, **kwargs)[source]¶
Get an instantiated model from its name.
New in version 0.4.
- torchgeo.models.get_model_weights(name)[source]¶
Get the weights enum class associated with a given model.
New in version 0.4.
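A short sketch combining the two helpers (model names are lowercase builder names such as 'resnet18'):

```python
from torchgeo.models import get_model, get_model_weights

weights_enum = get_model_weights('resnet18')  # e.g. the ResNet18_Weights enum
model = get_model('resnet18', weights=weights_enum.SENTINEL2_ALL_MOCO)
```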
Pretrained Weights¶
NAIP¶
| Weight | Channels | Source | Citation | License |
| --- | --- | --- | --- | --- |
| Swin_V2_B_Weights.NAIP_RGB_SATLAS | 3 | | | Apache-2.0 |
Landsat¶
| Weight | Landsat | Channels | Source | Citation | License | NLCD (Acc) | NLCD (mIoU) | CDL (Acc) | CDL (mIoU) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet18_Weights.LANDSAT_TM_TOA_MOCO | 4–5 | 5 | | | CC0-1.0 | 67.65 | 51.11 | 68.70 | 52.32 |
| ResNet18_Weights.LANDSAT_TM_TOA_SIMCLR | 4–5 | 5 | | | CC0-1.0 | 60.86 | 43.74 | 61.94 | 44.86 |
| ResNet50_Weights.LANDSAT_TM_TOA_MOCO | 4–5 | 5 | | | CC0-1.0 | 68.75 | 53.28 | 69.45 | 53.20 |
| ResNet50_Weights.LANDSAT_TM_TOA_SIMCLR | 4–5 | 5 | | | CC0-1.0 | 62.05 | 44.98 | 62.80 | 45.77 |
| ViTSmall16_Weights.LANDSAT_TM_TOA_MOCO | 4–5 | 5 | | | CC0-1.0 | 67.17 | 50.57 | 67.60 | 51.07 |
| ViTSmall16_Weights.LANDSAT_TM_TOA_SIMCLR | 4–5 | 5 | | | CC0-1.0 | 66.82 | 50.17 | 66.92 | 50.28 |
| ResNet18_Weights.LANDSAT_ETM_TOA_MOCO | 7 | 9 | | | CC0-1.0 | 65.22 | 48.39 | 62.84 | 45.81 |
| ResNet18_Weights.LANDSAT_ETM_TOA_SIMCLR | 7 | 9 | | | CC0-1.0 | 58.76 | 41.60 | 56.47 | 39.34 |
| ResNet50_Weights.LANDSAT_ETM_TOA_MOCO | 7 | 9 | | | CC0-1.0 | 66.60 | 49.92 | 64.12 | 47.19 |
| ResNet50_Weights.LANDSAT_ETM_TOA_SIMCLR | 7 | 9 | | | CC0-1.0 | 57.17 | 40.02 | 54.95 | 37.88 |
| ViTSmall16_Weights.LANDSAT_ETM_TOA_MOCO | 7 | 9 | | | CC0-1.0 | 63.75 | 46.79 | 60.88 | 43.70 |
| ViTSmall16_Weights.LANDSAT_ETM_TOA_SIMCLR | 7 | 9 | | | CC0-1.0 | 63.33 | 46.34 | 59.06 | 41.91 |
| ResNet18_Weights.LANDSAT_ETM_SR_MOCO | 7 | 6 | | | CC0-1.0 | 64.18 | 47.25 | 67.30 | 50.71 |
| ResNet18_Weights.LANDSAT_ETM_SR_SIMCLR | 7 | 6 | | | CC0-1.0 | 57.26 | 40.11 | 54.42 | 37.48 |
| ResNet50_Weights.LANDSAT_ETM_SR_MOCO | 7 | 6 | | | CC0-1.0 | 64.37 | 47.46 | 62.35 | 45.30 |
| ResNet50_Weights.LANDSAT_ETM_SR_SIMCLR | 7 | 6 | | | CC0-1.0 | 57.79 | 40.64 | 55.69 | 38.59 |
| ViTSmall16_Weights.LANDSAT_ETM_SR_MOCO | 7 | 6 | | | CC0-1.0 | 64.09 | 47.21 | 52.37 | 35.48 |
| ViTSmall16_Weights.LANDSAT_ETM_SR_SIMCLR | 7 | 6 | | | CC0-1.0 | 63.99 | 47.05 | 53.17 | 36.21 |
| ResNet18_Weights.LANDSAT_OLI_TIRS_TOA_MOCO | 8–9 | 11 | | | CC0-1.0 | 67.82 | 51.30 | 65.74 | 48.96 |
| ResNet18_Weights.LANDSAT_OLI_TIRS_TOA_SIMCLR | 8–9 | 11 | | | CC0-1.0 | 62.14 | 45.08 | 60.01 | 42.86 |
| ResNet50_Weights.LANDSAT_OLI_TIRS_TOA_MOCO | 8–9 | 11 | | | CC0-1.0 | 69.17 | 52.87 | 67.29 | 50.70 |
| ResNet50_Weights.LANDSAT_OLI_TIRS_TOA_SIMCLR | 8–9 | 11 | | | CC0-1.0 | 64.66 | 47.78 | 62.08 | 45.01 |
| ViTSmall16_Weights.LANDSAT_OLI_TIRS_TOA_MOCO | 8–9 | 11 | | | CC0-1.0 | 67.11 | 50.49 | 64.62 | 47.73 |
| ViTSmall16_Weights.LANDSAT_OLI_TIRS_TOA_SIMCLR | 8–9 | 11 | | | CC0-1.0 | 66.12 | 49.39 | 63.88 | 46.94 |
| ResNet18_Weights.LANDSAT_OLI_SR_MOCO | 8–9 | 7 | | | CC0-1.0 | 67.01 | 50.39 | 68.05 | 51.57 |
| ResNet18_Weights.LANDSAT_OLI_SR_SIMCLR | 8–9 | 7 | | | CC0-1.0 | 59.93 | 42.79 | 57.44 | 40.30 |
| ResNet50_Weights.LANDSAT_OLI_SR_MOCO | 8–9 | 7 | | | CC0-1.0 | 67.44 | 50.88 | 65.96 | 49.21 |
| ResNet50_Weights.LANDSAT_OLI_SR_SIMCLR | 8–9 | 7 | | | CC0-1.0 | 63.65 | 46.68 | 60.01 | 43.17 |
| ViTSmall16_Weights.LANDSAT_OLI_SR_MOCO | 8–9 | 7 | | | CC0-1.0 | 66.81 | 50.16 | 64.17 | 47.24 |
| ViTSmall16_Weights.LANDSAT_OLI_SR_SIMCLR | 8–9 | 7 | | | CC0-1.0 | 65.04 | 48.20 | 62.61 | 45.46 |
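To use a row from the table above, pass the corresponding enum member to the matching model builder, e.g.:

```python
from torchgeo.models import ResNet18_Weights, resnet18

# ResNet-18 pretrained with MoCo on 9-band Landsat 7 ETM+ TOA imagery
model = resnet18(weights=ResNet18_Weights.LANDSAT_ETM_TOA_MOCO)
```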
Sentinel-1¶
| Weight | Channels | Source | Citation | License |
| --- | --- | --- | --- | --- |
| ResNet50_Weights.SENTINEL1_ALL_MOCO | 2 | | | CC-BY-4.0 |
Sentinel-2¶
| Weight | Channels | Source | Citation | License | BigEarthNet | EuroSAT | So2Sat | OSCD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet18_Weights.SENTINEL2_ALL_MOCO | 13 | | | CC-BY-4.0 | | | | |
| ResNet18_Weights.SENTINEL2_RGB_MOCO | 3 | | | CC-BY-4.0 | | | | |
| ResNet18_Weights.SENTINEL2_RGB_SECO | 3 | | | Apache-2.0 | 87.27 | 93.14 | 46.94 | |
| ResNet50_Weights.SENTINEL2_ALL_DINO | 13 | | | CC-BY-4.0 | 90.7 | 99.1 | 63.6 | |
| ResNet50_Weights.SENTINEL2_ALL_MOCO | 13 | | | CC-BY-4.0 | 91.8 | 99.1 | 60.9 | |
| ResNet50_Weights.SENTINEL2_RGB_MOCO | 3 | | | CC-BY-4.0 | | | | |
| ResNet50_Weights.SENTINEL2_RGB_SECO | 3 | | | Apache-2.0 | 87.81 | | | |
| ViTSmall16_Weights.SENTINEL2_ALL_DINO | 13 | | | CC-BY-4.0 | 90.5 | 99.0 | 62.2 | |
| ViTSmall16_Weights.SENTINEL2_ALL_MOCO | 13 | | | CC-BY-4.0 | 89.9 | 98.6 | 61.6 | |
| Swin_V2_B_Weights.SENTINEL2_RGB_SATLAS | 3 | | | Apache-2.0 | | | | |
Other Data Sources¶
| Weight | Channels | Source | Citation | License |
| --- | --- | --- | --- | --- |
| ResNet50_Weights.FMOW_RGB_GASSL | 3 | | | |