
# Model Card

This is a template for Hugging Face model cards tailored to geospatial foundation models. Copy the YAML front matter below into your model's README.md on the Hugging Face Hub and replace the placeholders in {braces} with your own information.

## Template

```yaml
---
# === Basic Information (Required) ===
language:
- {lang_0}
- {lang_1}

license: {license}
license_name: {license_name}
license_link: {license_link}

library_name: {library_name}
provider: {provider}
funder: {funder}

tags:
- {tag_0}
- {tag_1}
- {tag_2}

# === Embedding Properties (Required) ===
embedding_spatial_types:
- {embedding_spatial_type_0}
- {embedding_spatial_type_1}

embedding_temporal_type:
- {embedding_temporal_type_0}
- {embedding_temporal_type_1}

embedding_spatial_context: {embedding_spatial_context}
embedding_temporal_context: {embedding_temporal_context}
embedding_dimension: {embedding_dimension}

# === Model Details (Optional) ===
description: {description}
compression: {compression}
intention: {intention}
cautions: {cautions}
precomputed_embeddings: {precomputed_embeddings}
publication_link: {publication_link}
model_architecture: {model_architecture}

# === Pretraining (Optional) ===
pretraining:
  data_types:
  - {data_type_0}
  - {data_type_1}
  product_names:
  - {product_name_0}
  - {product_name_1}
  training_strategy: {training_strategy}
  training_resource: {training_resource}
  spatial_extent: {spatial_extent}
  temporal_extent: {temporal_extent}
  patch_size: {patch_size}
  temporal_context: {temporal_context}
  batch_size: {batch_size}

# === Inference (Optional) ===
inference:
  data_types:
  - {data_type_0}
  - {data_type_1}
  product_names:
  - {product_name_0}
  - {product_name_1}
  patch_size: {patch_size}
  temporal_context: {temporal_context}

# === Standard HF Fields (Optional) ===
datasets:
- {dataset_0}

metrics:
- {metric_0}

base_model: {base_model}

# === Evaluation Results (Optional) ===
model-index:
- name: {model_id}
  results:
  - task:
      type: {task_type}
      name: {task_name}
    dataset:
      type: {dataset_type}
      name: {dataset_name}
      config: {dataset_config}
      split: {dataset_split}
      revision: {dataset_revision}
      args:
        {arg_0}: {value_0}
    metrics:
      - type: {metric_type}
        value: {metric_value}
        name: {metric_name}
        config: {metric_config}
        args:
          {arg_0}: {value_0}
        verifyToken: {verify_token}
    source:
      name: {source_name}
      url: {source_url}
---
```
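For orientation, here is the same front matter with only the required fields filled in for a hypothetical Sentinel-2 embedding model. All values below (model provider, tags, dimension) are illustrative, not taken from a real card:

```yaml
---
license: apache-2.0
provider: Example Lab            # hypothetical provider
tags:
- Geospatial Foundation Model
- multispectral

embedding_spatial_types:
- patch
embedding_temporal_type:
- single-date
embedding_spatial_context: spatial context determined by embedding spatial type
embedding_temporal_context: temporal context determined by embedding temporal type
embedding_dimension: 768
---
```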

## Field Reference

### Basic Information

| Field | Required | Description | Example |
|---|---|---|---|
| language | No | Language codes | fr, en |
| license | Yes | License identifier from HF licenses | apache-2.0 |
| license_name | No | Custom license ID (if license = other) | my-license-1.0 |
| license_link | No | Path or URL to license file (if license = other) | LICENSE.md |
| library_name | No | Library from HF model libraries | keras |
| provider | Yes | Organization or individual that developed the model | NASA |
| funder | No | Funding institutions | NSF, ESA |
| tags | No | Searchable tags | SSL, Geospatial Foundation Model, multispectral |

### Embedding Properties

| Field | Required | Description | Acceptable Values |
|---|---|---|---|
| embedding_spatial_types | Yes | Spatial type of embeddings | pixel, patch, scene |
| embedding_temporal_type | Yes | Temporal type of embeddings | single-date, multi-date |
| embedding_spatial_context | Yes | Spatial context scope | spatial context determined by embedding spatial type, spatial context beyond embedding spatial type |
| embedding_temporal_context | Yes | Temporal context scope | temporal context determined by embedding temporal type, temporal context beyond embedding temporal type |
| embedding_dimension | Yes | Embedding vector size (integer) | 768 |

### Model Details

| Field | Required | Description | Example |
|---|---|---|---|
| description | Yes | Free-text explanation of the model | |
| compression | No | Description of storage compression used | |
| intention | No | Intended use case and how training data was sampled | land cover, oceans, urban |
| cautions | No | Constraints or cautions for users | model not trained on snow, loses accuracy with high cloud coverage |
| precomputed_embeddings | No | Link to precomputed embeddings, or no | yes: https://... or no |
| publication_link | No | URL to related publication | |
| model_architecture | No | Description of model architecture | ViT-L/14 |

### Pretraining (Optional)

| Field | Description | Acceptable Values / Example |
|---|---|---|
| data_types | Types of data used for training | RGB, multispectral, hyperspectral, SAR, LiDAR, DEM, climate data, text, semantic data |
| product_names | Data products used | sentinel-2-l2a |
| training_strategy | Training approach | Contrastive, MIM, Barlow Twins |
| training_resource | Training resource requirements (energy, GPU, etc.) | |
| spatial_extent | Bounding box(es) in EPSG 4326 | |
| temporal_extent | Date range | 01-01-2020 to 31-12-2023 |
| patch_size | Patch size (integer) | 224 |
| temporal_context | Temporal context for training | single-date, multi-date |
| batch_size | Batch size (integer) | 32 |
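As a concrete sketch, a pretraining block for a model trained with masked image modeling on Sentinel-2 might look like the following. The resource figures and extents are illustrative placeholders, not measurements from a real training run:

```yaml
pretraining:
  data_types:
  - multispectral
  product_names:
  - sentinel-2-l2a
  training_strategy: MIM
  training_resource: 8x A100 80GB          # illustrative
  spatial_extent: -180,-90,180,90          # global bounding box, EPSG 4326
  temporal_extent: 01-01-2020 to 31-12-2023
  patch_size: 224
  temporal_context: single-date
  batch_size: 32
```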

### Inference (Optional)

| Field | Description | Acceptable Values / Example |
|---|---|---|
| data_types | Types of data supported for inference | RGB, multispectral, hyperspectral, SAR, LiDAR, DEM, climate data, text, semantic data |
| product_names | Data products supported | sentinel-2-l2a |
| patch_size | Patch size (integer) | 224 |
| temporal_context | Temporal context for inference | single-date, multi-date |

### Evaluation Results (Optional)

Use model-index to encode evaluation results on downstream tasks.

| Field | Required | Description | Example |
|---|---|---|---|
| model-index.name | Yes | Model identifier | |
| task.type | Yes | Task type | crop field segmentation |
| task.name | No | Task name | Field Segmentation |
| dataset.type | Yes | Dataset type | Field boundary labels |
| dataset.name | Yes | Dataset name | PASTIS |
| dataset.config | No | Dataset subset for load_dataset() | |
| dataset.split | No | Dataset split | test |
| dataset.revision | No | Dataset revision hash | |
| metrics.type | Yes | Metric ID from HF metrics | wer |
| metrics.value | Yes | Metric value | 20.90 |
| metrics.name | No | Metric display name | Test WER |
| source.name | No | Source of evaluation results | PANGAEA |
| source.url | If source provided | Link to source | https://arxiv.org/... |
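Putting these fields together, a model-index entry reporting a segmentation result on PASTIS might look like the sketch below. The model name and metric value are illustrative, not real results; the dataset and source names reuse the examples from the table above:

```yaml
model-index:
- name: example-gfm-vit-l        # hypothetical model ID
  results:
  - task:
      type: crop field segmentation
      name: Field Segmentation
    dataset:
      type: field-boundary-labels
      name: PASTIS
      split: test
    metrics:
      - type: miou
        value: 65.2              # illustrative value
        name: Test mIoU
    source:
      name: PANGAEA
      url: https://arxiv.org/...
```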