Batch Statistical API

The Batch statistical API is in beta release. It might misbehave for large requests. We may change the interface, although no major changes are expected. If you have suggestions for improvements or any feedback, please share your thoughts on our forum.
The Batch statistical API is only available to enterprise users. If you don't have an enterprise account and would like to try it out, contact us for a custom offer.

The Batch statistical API enables you to request statistics similarly to the Statistical API, but for multiple polygons at once and/or for longer aggregation periods. A typical use case would be calculating statistics for all parcels in a country.

Similarly to the Batch processing API, this is an asynchronous REST service. This means that data will not be returned immediately in the response of the request, but will instead be delivered to your object storage, which must be specified in the request (e.g. an S3 bucket, see AWS bucket access below).

You can find more details about the API in the API Reference or in the examples of the workflow.


Workflow

The Batch statistical API workflow is very similar to the Batch processing API workflow, but some parts of it are not yet supported. Currently supported are:

  • the user actions START and CANCEL,
  • the request statuses CREATED, PROCESSING, DONE, and FAILED.

We are working on supporting the missing user action ANALYSE and the statuses ANALYSING, ANALYSIS_DONE, and PARTIAL.
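As a sketch of how a client can handle the lifecycle above, the loop below polls a request until it reaches a terminal status. The helper and its names are illustrative, not part of the API; fetching the actual status (an authenticated GET of the request resource) is left to the caller.

```python
import time

# DONE and FAILED are the terminal statuses listed above; the other
# supported statuses (CREATED, PROCESSING) mean "keep waiting".
TERMINAL_STATUSES = {"DONE", "FAILED"}

def wait_for_request(get_status, poll_interval=30):
    """Poll a batch statistical request until it reaches a terminal status.

    `get_status` is any callable returning the current status string,
    e.g. a function that GETs the request resource and reads `status`.
    """
    while True:
        status = get_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_interval)
```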

Input polygons as GeoPackage file

The Batch statistical API accepts a GeoPackage file containing features (polygons) as an input. The GeoPackage must be stored in your object storage (e.g. an AWS S3 bucket), and Sentinel Hub must be able to read from the storage (find more details about this in the bucket access section below). In a batch statistical request, the input GeoPackage is specified by setting the path to the .gpkg file in the input.features.s3 parameter.

All features (polygons) in an input GeoPackage must be in the same CRS, and the CRS must be supported by Sentinel Hub. An example of a GeoPackage file can be downloaded here.
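Because a GeoPackage is an SQLite database, you can sanity-check the CRS of your input layers before uploading with nothing but the standard library. The sketch below reads the standard gpkg_contents table (defined by the GeoPackage specification); the layer names and CRS values are whatever your file contains.

```python
import sqlite3

def geopackage_layer_crs(path):
    """Return {layer_name: srs_id} for every feature layer in a GeoPackage.

    The gpkg_contents table lists each layer with its spatial reference
    system id, which lets you verify that all feature layers use a single
    (Sentinel Hub-supported) CRS before uploading the file.
    """
    con = sqlite3.connect(path)
    try:
        rows = con.execute(
            "SELECT table_name, srs_id FROM gpkg_contents "
            "WHERE data_type = 'features'"
        ).fetchall()
    finally:
        con.close()
    return dict(rows)
```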

Evalscript and Batch statistical API

The same specifics described for evalscripts and the Statistical API also apply to the Batch statistical API.

Evalscripts smaller than 32 KB in size can be provided directly in a batch statistical request under the evalscript parameter. If your evalscript exceeds this limit, you can store it in your S3 bucket and provide a reference to it in a batch statistical request under the evalscriptReference parameter.
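A small illustrative helper for this choice — the 32 KB limit and the parameter names come from the text above, but the helper itself is not part of the API:

```python
EVALSCRIPT_INLINE_LIMIT = 32 * 1024  # bytes

def evalscript_fields(evalscript, s3_reference):
    """Return the request fields that carry the evalscript.

    If the script fits under the inline limit, it is sent directly under
    `evalscript`; otherwise a reference to a copy stored on S3 is sent
    under `evalscriptReference` instead.
    """
    if len(evalscript.encode("utf-8")) < EVALSCRIPT_INLINE_LIMIT:
        return {"evalscript": evalscript}
    return {"evalscriptReference": s3_reference}
```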

Processing results

Outputs of a batch statistical request are JSON files stored in your object storage. Each .json file contains the requested statistics for one feature (polygon) from the provided GeoPackage. You can connect the statistics in a JSON file with the corresponding feature (polygon) in the GeoPackage based on:

  • the id of a feature in the GeoPackage, which is used as the name of the JSON file (e.g. 1.json, 2.json) and is available in the JSON file as the id property, OR
  • a custom column identifier of type string, which can be added to the GeoPackage; its value will be available in the JSON file as the identifier property.

The outputs will be stored in the bucket and folder specified by the output.s3.path parameter of the batch statistical request, in a sub-folder named after the ID of your request (e.g. s3://<my-bucket>/<my-folder>/db7de265-dfd4-4dc0-bc82-74866078a5ce).
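Once you have downloaded the output folder from your bucket, matching statistics back to features can look like the following sketch. It assumes the files follow the naming described above, and it keys each result by the identifier property when present, falling back to id:

```python
import json
from pathlib import Path

def load_results(folder):
    """Map each feature to its statistics from a downloaded output folder.

    File names follow the `<feature id>.json` pattern described above;
    the `identifier` property is present only when the GeoPackage had
    the optional custom column.
    """
    results = {}
    for path in Path(folder).glob("*.json"):
        with open(path) as f:
            stats = json.load(f)
        key = stats.get("identifier", stats.get("id", path.stem))
        results[key] = stats
    return results
```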

Batch Statistical deployment

Batch deployment    | Batch URL end-point
AWS EU (Frankfurt)  |

AWS bucket access

As noted above, the Batch statistical API uses AWS S3 to:

  • read the GeoPackage file with input features (polygons) from an S3 bucket,
  • read the evalscript from an S3 bucket (optional, because an evalscript can also be provided directly in a request),
  • write the results of processing to an S3 bucket.

You can use one bucket for all three purposes, or a different bucket for each.

Bucket regions

The buckets to which the results of batch statistical processing are written must be in the same region as the Batch statistical API deployment. The only available region at the moment is eu-central-1 (Frankfurt).

Access your bucket using accessKey and secretAccessKey

In order to let Sentinel Hub access the bucket, you need to provide an accessKey and secretAccessKey pair in your batch statistical request:


s3 = {
    "url": "s3://<your-bucket>/<path>",
    "accessKey": "<your-bucket-access-key>",
    "secretAccessKey": "<your-bucket-access-key-secret>"
}

The above JSON for accessing the S3 bucket can be used in:

  • input.features.s3 to specify the bucket where the GeoPackage file is available,
  • (optional) evalscriptReference.s3 to specify the bucket where the evalscript .js file is available,
  • output.s3 to specify the bucket where the results will be stored.
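The three placements can be sketched as plain dictionaries — an illustration only; the parameter paths are those listed above, and every other request field (data collection, time range, aggregation) is omitted:

```python
def s3_block(url, access_key, secret):
    # The S3 access object shown in the snippet above.
    return {"url": url, "accessKey": access_key, "secretAccessKey": secret}

def attach_s3_blocks(request, features_s3, output_s3, evalscript_s3=None):
    """Place S3 access blocks at the three parameter paths listed above:
    input.features.s3, evalscriptReference.s3 (optional), and output.s3.
    All other fields of the batch statistical request are left untouched.
    """
    request.setdefault("input", {}).setdefault("features", {})["s3"] = features_s3
    if evalscript_s3 is not None:  # optional: the evalscript may be inline instead
        request["evalscriptReference"] = {"s3": evalscript_s3}
    request.setdefault("output", {})["s3"] = output_s3
    return request
```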

To learn how to configure an access key and access key secret on AWS S3, check this link, specifically the Programmatic access section. Note that the IAM user to which your access key and secret are linked must have permissions to read and/or write to the corresponding S3 bucket.
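As a rough sketch of such permissions, an IAM policy along these lines grants read and write access to a single bucket used for all three purposes. The bucket name is a placeholder, and your setup may require additional permissions (for example KMS actions for encrypted buckets):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::<my-bucket>/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<my-bucket>"
    }
  ]
}
```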


Example of a Batch Statistical Workflow