Planet’s Data Leveraged to Understand Agricultural Land and Quickly-changing Environments

We are excited to share the power of Planet satellite imagery with Amazon SageMaker's geospatial capabilities. Our partnership has the potential to revolutionize agriculture and sustainability insights by combining geospatial data with machine learning and cloud architecture.

In this blog, the application of a crop segmentation model on top of Planet's high-frequency satellite data helps identify and track crop and non-crop areas over time. The partnership between Planet and Amazon SageMaker enables geospatial data accessibility, providing a flexible platform that supports diverse applications.

The benefits discussed in the blog include:

- Informed Decisions: By gaining a better spatial understanding of croplands, farmers can optimize resource allocation, fostering sustainable practices.
- Climate Resilience: Satellite imagery aids in identifying climate-stressed areas, facilitating adaptive strategies to combat climate change impacts.
- Rapid Response: Swift crop segmentation assists in quickly identifying damaged areas after a disaster, streamlining relief efforts.

Amazon SageMaker's geospatial capabilities empower data scientists and machine learning engineers to build, train, and deploy models using geospatial data. The Planet-AWS partnership gives customers access to Planet satellite data through SageMaker, enabling seamless integration and faster results.

Dive deeper into the AWS technical blog cross-posted below:

###

Build a crop segmentation machine learning model with Planet data and Amazon SageMaker geospatial capabilities

by Lydia Lihui Zhang, Shital Dhakal, Mansi Shah, Janosch Woschitz, and Xiong Zhou 

This guest post is co-written by Lydia Lihui Zhang, Business Development Specialist, and Mansi Shah, Software Engineer/Data Scientist, at Planet Labs. The analysis that inspired this post was originally written by Jennifer Reiber Kyle.

Amazon SageMaker geospatial capabilities combined with Planet satellite data can be used for crop segmentation, and there are numerous applications and potential benefits of this analysis to the fields of agriculture and sustainability. In late 2023, Planet announced a partnership with AWS to make its geospatial data available through Amazon SageMaker.

Crop segmentation is the process of splitting up a satellite image into regions of pixels, or segments, that have similar crop characteristics. In this post, we illustrate how to use a segmentation machine learning (ML) model to identify crop and non-crop regions in an image.

Identifying crop regions is a core step towards gaining agricultural insights, and the combination of rich geospatial data and ML can lead to insights that drive decisions and actions. For example:

- Making data-driven farming decisions – By gaining a better spatial understanding of the crops, farmers and other agricultural stakeholders can optimize the use of resources, from water to fertilizer to other chemicals, across the season. This sets the foundation for reducing waste, improving sustainable farming practices wherever possible, and increasing productivity while minimizing environmental impact.
- Identifying climate-related stresses and trends – As climate change continues to affect global temperature and rainfall patterns, crop segmentation can be used to identify areas that are vulnerable to climate-related stress for climate adaptation strategies. For example, satellite imagery archives can be used to track changes in a crop growing region over time. These could be physical changes in the size and distribution of croplands. They could also be changes in soil moisture, soil temperature, and biomass, derived from the different spectral indices of satellite data, for deeper crop health analysis.
- Assessing and mitigating damage – Finally, crop segmentation can be used to quickly and accurately identify areas of crop damage in the event of a natural disaster, which can help prioritize relief efforts. For example, after a flood, high-cadence satellite imagery can be used to identify areas where crops have been submerged or destroyed, allowing relief organizations to assist affected farmers more quickly.

In this analysis, we use a K-nearest neighbors (KNN) model to conduct crop segmentation, and we compare these results with ground truth imagery on an agricultural region. Our results reveal that the classification from the KNN model is more accurately representative of the state of the current crop field in 2017 than the ground truth classification data from 2015. These results are a testament to the power of Planet's high-cadence geospatial imagery. Agricultural fields change often, sometimes multiple times a season, and having high-frequency satellite imagery available to observe and analyze this land can provide immense value to our understanding of agricultural land and quickly-changing environments.

Planet and AWS’s partnership on geospatial ML

SageMaker geospatial capabilities empower data scientists and ML engineers to build, train, and deploy models using geospatial data. SageMaker geospatial capabilities allow you to efficiently transform or enrich large-scale geospatial datasets, accelerate model building with pre-trained ML models, and explore model predictions and geospatial data on an interactive map using 3D-accelerated graphics and built-in visualization tools. With SageMaker geospatial capabilities, you can process large datasets of satellite imagery and other geospatial data to create accurate ML models for various applications, including crop segmentation, which we discuss in this post.

Planet Labs PBC is a leading Earth-imaging company that uses its large fleet of satellites to capture imagery of the Earth's surface on a daily basis. Planet's data is therefore a valuable resource for geospatial ML. Its high-resolution satellite imagery can be used to identify various crop characteristics and their health over time, anywhere on Earth.

The partnership between Planet and SageMaker enables customers to easily access and analyze Planet's high-frequency satellite data using AWS's powerful ML tools. Data scientists can bring their own data or conveniently find and subscribe to Planet's data without switching environments.

Crop segmentation in an Amazon SageMaker Studio notebook with a geospatial image

In this example geospatial ML workflow, we look at how to bring Planet's data, along with the ground truth data source, into SageMaker, and how to train, infer, and deploy a crop segmentation model with a KNN classifier. Finally, we assess the accuracy of our results and compare this to our ground truth classification.

The KNN classifier is trained in an Amazon SageMaker Studio notebook with a geospatial image, which provides a flexible and extensible notebook kernel for working with geospatial data.

The Amazon SageMaker Studio notebook with geospatial image comes pre-installed with commonly used geospatial libraries such as GDAL, Fiona, GeoPandas, Shapely, and Rasterio, which allow the visualization and processing of geospatial data directly within a Python notebook environment. Common ML libraries such as OpenCV and scikit-learn are also used to perform crop segmentation using KNN classification, and these are also installed in the geospatial kernel.
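
As a quick sanity check of the environment, you can import those libraries inside the notebook. The following is a minimal sketch assuming the geospatial kernel described above:

import geopandas as gpd
import rasterio
import shapely
import fiona
import cv2  # OpenCV
import sklearn

# Confirm the pre-installed geospatial and ML libraries are importable and print their versions.
for name, mod in [("geopandas", gpd), ("rasterio", rasterio), ("shapely", shapely),
                  ("fiona", fiona), ("opencv", cv2), ("scikit-learn", sklearn)]:
    print(f"{name}: {mod.__version__}")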

Data selection

The agricultural field we zoom into is located in the typically sunny Sacramento County in California.

Why Sacramento? The area and time selection for this type of problem is primarily defined by the availability of ground truth data, and such data, in the form of crop type and boundary data, is not easy to come by. The 2015 Sacramento County Land Use DWR Survey dataset is a publicly available dataset covering Sacramento County in that year and provides hand-adjusted boundaries.

The primary satellite imagery we use is Planet's 4-band PSScene Product, which contains the Blue, Green, Red, and Near-IR bands and is radiometrically corrected to at-sensor radiance. The coefficients for correcting to at-sensor reflectance are provided in the scene metadata, which further improves the consistency between images taken at different times.
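
As a brief illustration of how those coefficients can be applied, the following sketch parses the per-band reflectance coefficients from a PSScene metadata XML file and scales the radiance bands to at-sensor reflectance. The file names are placeholders, and the metadata element names follow Planet's published examples, so verify them against your product version.

from xml.dom import minidom
import rasterio

# Parse per-band reflectance coefficients from the scene metadata (element names assumed).
meta = minidom.parse("scene_metadata.xml")  # placeholder path
coeffs = {}
for node in meta.getElementsByTagName("ps:bandSpecificMetadata"):
    band = int(node.getElementsByTagName("ps:bandNumber")[0].firstChild.data)
    coeffs[band] = float(node.getElementsByTagName("ps:reflectanceCoefficient")[0].firstChild.data)

# Scale each radiance band (1=Blue, 2=Green, 3=Red, 4=NIR) to at-sensor reflectance.
with rasterio.open("scene.tif") as src:  # placeholder path
    reflectance = [src.read(b, masked=True) * coeffs[b] for b in range(1, 5)]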

Planet's Dove satellites that produced this imagery were launched February 14, 2017 (news release), and therefore they didn't image Sacramento County back in 2015. However, they have been taking daily imagery of the area since the launch. In this example, we accept the imperfect 2-year gap between the ground truth data and the satellite imagery. However, Landsat 8 lower-resolution imagery could have been used as a bridge between 2015 and 2017.

Access Planet data

To help users get accurate and actionable data faster, Planet has also developed the Planet Software Development Kit (SDK) for Python. This is a powerful tool for data scientists and developers who want to work with satellite imagery and other geospatial data. With this SDK, you can search and access Planet's vast collection of high-resolution satellite imagery, as well as data from other sources like OpenStreetMap. The SDK provides a Python client to Planet's APIs, as well as a no-code command line interface (CLI) solution, making it easy to incorporate satellite imagery and geospatial data into Python workflows. This example uses the Python client to identify and download imagery needed for the analysis.

You can install the Planet Python client in the SageMaker Studio notebook with geospatial image using a simple command:

%pip install planet

You can use the client to query relevant satellite imagery and retrieve a list of available results based on the area of interest, time range, and other search criteria. In the following example, we start by asking how many PlanetScope scenes (Planet's daily imagery) cover the same area of interest (AOI) that we defined earlier through the ground truth data in Sacramento, given a certain time range between June 1 and October 1, 2017, as well as a desired maximum cloud coverage of 10%:

# create a request using the SDK from the search specifications of the data
from datetime import datetime
from planet import Session, data_filter

item_type = ['PSScene']

geom_filter_train = data_filter.geometry_filter(aoi_train)
date_range_filter = data_filter.date_range_filter("acquired", gt=datetime(month=6, day=1, year=2017), lt=datetime(month=10, day=1, year=2017))
cloud_cover_filter = data_filter.range_filter('cloud_cover', lt=0.10)

combined_filter_train = data_filter.and_filter([geom_filter_train, date_range_filter, cloud_cover_filter])

# Run a quick search for our TRAIN data
async with Session() as sess:
    cl = sess.client('data')
    results = cl.search(name='temp_search_train', search_filter=combined_filter_train, item_types=item_type)
    train_result_list = [i async for i in results]

print("Number of train scene results: ", len(train_result_list))

The returned results show the number of matching scenes overlapping with our area of interest. Each result also contains the scene's metadata, its image ID, and a preview image reference.

After a specific scene has been selected, with specification of the scene ID, item type, and product bundle (reference documentation), you can use the following code to download the image and its metadata:

from planet import Session, order_request, reporting

train_scene_id = '20170601_180425_0f35'
item_type = 'PSScene'
bundle_type = 'analytic_sr_udm2'

# define the order request
products = [order_request.product([train_scene_id], bundle_type, item_type)]
request = order_request.build_request('train_dataset', products=products)

# download the training data (download_directory is a local path defined elsewhere)
async with Session() as sess:
    cl = sess.client('orders')
    # use "reporting" to manage polling for order status
    with reporting.StateBar(state='creating') as bar:
        # perform the order with the prior created order request
        order = await cl.create_order(request)
        bar.update(state='created', order_id=order['id'])

        # wait via polling until the order is processed
        await cl.wait(order['id'], callback=bar.update_state)

    # download the actual asset
    await cl.download_order(order_id=order['id'], directory=download_directory, progress_bar=True, overwrite=True)

This code downloads the corresponding satellite image to the Amazon Elastic File System (Amazon EFS) volume for SageMaker Studio.

Model training

After the data has been downloaded with the Planet Python client, the segmentation model can be trained. In this example, a combination of KNN classification and image segmentation techniques is used to identify crop areas and create georeferenced GeoJSON features.

The Planet data is loaded and preprocessed using the built-in geospatial libraries and tools in SageMaker to prepare it for training the KNN classifier. The ground truth data for training is the Sacramento County Land Use DWR Survey dataset from 2015, and the Planet data from 2017 is used for testing the model.
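
For context, this loading step might look like the following minimal sketch, which reads the downloaded scene with Rasterio and the survey data with GeoPandas, then reprojects the ground truth to the scene's coordinate reference system. The file names are placeholders rather than the exact code used in the notebook.

import geopandas as gpd
import rasterio

# Load the 4-band Planet scene downloaded earlier (placeholder file name).
with rasterio.open(train_scene_filename) as src:
    bands = src.read(masked=True)        # shape: (4, rows, cols)
    scene_crs = src.crs
    scene_transform = src.transform

# Load the 2015 DWR land use survey and reproject it to the scene's CRS (placeholder file name).
ground_truth = gpd.read_file(train_ground_truth_filename)
ground_truth = ground_truth.to_crs(scene_crs)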

Convert ground truth features to contours

To train the KNN classifier, the class of each pixel as either crop or non-crop needs to be identified. The class is determined by whether the pixel is associated with a crop feature in the ground truth data or not. To make this determination, the ground truth data is first converted into OpenCV contours, which are then used to separate crop from non-crop pixels. The pixel values and their classification are then used to train the KNN classifier.

To convert the ground truth features to contours, the features must first be projected to the coordinate reference system of the image. Then, the features are transformed into image space, and finally converted into contours. To ensure the accuracy of the contours, they are visualized overlaid on the input image, as shown in the following example.
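
A minimal sketch of that transformation is shown below, reusing the ground_truth and scene_transform placeholders from the loading sketch above. It assumes simple polygon geometries and skips the multipolygon handling a production workflow would need.

import numpy as np
import rasterio.transform

# Convert each projected polygon's exterior ring into pixel (col, row) coordinates,
# producing OpenCV-style contours (int32 arrays of shape (N, 1, 2)).
def features_to_contours(features_gdf, transform):
    contours = []
    for geom in features_gdf.geometry:
        xs, ys = geom.exterior.coords.xy
        rows, cols = rasterio.transform.rowcol(transform, xs, ys)
        pts = np.array(list(zip(cols, rows)), dtype=np.int32).reshape(-1, 1, 2)
        contours.append(pts)
    return contours

crop_contours = features_to_contours(ground_truth, scene_transform)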

To train the KNN classifier, crop and non-crop pixels are separated using the crop feature contours as a mask.

The input to the KNN classifier consists of two datasets: X, a 2D array that provides the features to be classified on; and y, a 1D array that provides the classes (example). Here, a single classified band is created from the non-crop and crop datasets, where the band's values indicate the pixel class. The band and the underlying image pixel band values are then converted to the X and y inputs for the classifier's fit function.
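
A sketch of that conversion, using the same placeholder names as the sketches above (bands, crop_contours), might look like the following; the helper functions referenced in the notebook code below (create_contour_classified_band, to_X, to_y) presumably encapsulate similar logic.

import cv2
import numpy as np

# Rasterize the crop contours into a single classified band: 1 = crop, 0 = non-crop.
rows, cols = bands[0].shape
class_band = np.zeros((rows, cols), dtype=np.uint8)
cv2.drawContours(class_band, crop_contours, -1, color=1, thickness=-1)  # filled polygons

# Stack the four reflectance bands into X (n_pixels, 4) and flatten the classes into y (n_pixels,).
X = np.stack([b.filled(0).ravel() for b in bands], axis=1)
y = class_band.ravel()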

Train the classifier on crop and non-crop pixels

The KNN classification is performed with the scikit-learn KNeighborsClassifier. The number of neighbors, a parameter greatly affecting the estimator's performance, is tuned using cross-validation (see KNN cross-validation). The classifier is then trained using the prepared datasets and the tuned number-of-neighbors parameter. See the following code; a sketch of the cross-validation step follows it:

from sklearn import neighbors

def fit_classifier(pl_filename, ground_truth_filename, metadata_filename, n_neighbors):
    weights = 'uniform'
    clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
    train_class_band = create_contour_classified_band(pl_filename, ground_truth_filename)
    X = to_X(load_refl_bands(pl_filename, metadata_filename))
    y = to_y(train_class_band)
    clf.fit(X, y)
    return clf

clf = fit_classifier(train_scene_filename,
                     train_ground_truth_filename,
                     train_metadata_filename,
                     n_neighbors)
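
The cross-validation used to pick n_neighbors is not shown in the excerpt above. A minimal sketch of one way to do it with scikit-learn's cross_val_score follows, reusing the X and y arrays from the earlier sketch; the candidate values and the 3-fold setting are assumptions for illustration.

from sklearn import neighbors
from sklearn.model_selection import cross_val_score

# Score a few candidate neighbor counts and keep the best one.
# (In practice you would likely subsample the pixels before cross-validating.)
candidate_neighbors = [3, 5, 9, 15]  # assumed search range
scores = {}
for k in candidate_neighbors:
    clf_k = neighbors.KNeighborsClassifier(k, weights='uniform')
    scores[k] = cross_val_score(clf_k, X, y, cv=3).mean()

n_neighbors = max(scores, key=scores.get)
print(scores, "-> selected n_neighbors:", n_neighbors)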

To assess the classifier's performance on its input data, the pixel class is predicted using the pixel band values. The classifier's performance is mainly based on the accuracy of the training data and the clear separation of the pixel classes based on the input data (pixel band values). The classifier's parameters, such as the number of neighbors and the distance weighting function, can be adjusted to compensate for any inaccuracies in the latter. See the following code:

def predict(pl_filename, metadata_filename, clf):
    bands = load_refl_bands(pl_filename, metadata_filename)
    X = to_X(bands)
    y = clf.predict(X)
    return classified_band_from_y(bands[0].mask, y)

train_predicted_class_band = predict(train_scene_filename, train_metadata_filename, clf)

Evaluate model predictions

The trained KNN classifier is used to predict crop regions in the test data. This test data consists of regions that were not exposed to the model during training. In other words, the model has no knowledge of the area prior to its analysis, and therefore this data can be used to objectively evaluate the model's performance. We start by visually inspecting several regions, beginning with a region that is relatively noisier.
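
Applying the trained classifier to the held-out scene reuses the predict helper shown earlier; the test file names below are assumed to be defined the same way as the training ones.

# Predict crop/non-crop classes for the test scene.
test_predicted_class_band = predict(test_scene_filename, test_metadata_filename, clf)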

The visual inspection reveals that the predicted classes are mostly consistent with the ground truth classes. There are a few areas of deviation, which we investigate further.

Upon further investigation, we discovered that some of the noise in this region was due to the ground truth data lacking the detail that is present in the classified image (top right compared to top left and bottom left). A particularly interesting finding is that the classifier identifies trees along the river as non-crop, whereas the ground truth data mistakenly identifies them as crop. This difference between the two segmentations may be due to the trees shading the region over the crops.

Following this, we examine another region that was classified differently between the two methods. These highlighted areas were previously marked as non-crop areas in the ground truth data in 2015 (top right) but changed and show clearly as cropland in 2017 in the PlanetScope scenes (top left and bottom left). They were also classified largely as cropland by the classifier (bottom right).

Again, we see the KNN classifier presents a more granular result than the ground truth class, and it also successfully captures the change occurring in the cropland. This example also speaks to the value of daily refreshed satellite data, because the world often changes much faster than annual reports, and a combined method with ML like this can help us pick up the changes as they happen. Being able to monitor and discover such changes through satellite data, especially in evolving agricultural fields, provides helpful insights for farmers to optimize their work and for any agricultural stakeholder in the value chain to get a better pulse of the season.

Model evaluation

The visual comparison of the images of the predicted classes to the ground truth classes can be subjective and cannot be generalized for assessing the accuracy of the classification results. To obtain a quantitative assessment, we obtain classification metrics by using scikit-learn's classification_report function:

from sklearn.metrics import classification_report

# train dataset
print(classification_report(to_y(create_contour_classified_band(train_scene_filename,
                                                                 train_ground_truth_filename)),
                            to_y(train_predicted_class_band),
                            target_names=['crop', 'non-crop']))

              precision    recall  f1-score   support

        crop       0.89      0.86      0.87   2641818
    non-crop       0.83      0.86      0.84   2093907

    accuracy                           0.86   4735725
   macro avg       0.86      0.86      0.86   4735725
weighted avg       0.86      0.86      0.86   4735725

# test dataset
print(classification_report(to_y(create_contour_classified_band(test_scene_filename,
                                                                 test_ground_truth_filename)),
                            to_y(test_predicted_class_band),
                            target_names=['crop', 'non-crop']))

              precision    recall  f1-score   support

        crop       0.94      0.73      0.82   1959630
    non-crop       0.32      0.74      0.44    330938

    accuracy                           0.73   2290568
   macro avg       0.63      0.74      0.63   2290568
weighted avg       0.85      0.73      0.77   2290568

The pixel classification is used to create a segmentation mask of crop regions, making both precision and recall important metrics, and the F1 score an overall measure of prediction accuracy. Our results give us metrics for both crop and non-crop regions in the train and test datasets. However, to keep things simple, let's take a closer look at these metrics in the context of the crop regions in the test dataset.

Precision is a measure of how accurate our model's positive predictions are. In this case, a precision of 0.94 for crop regions indicates that our model is very successful at correctly identifying areas that are indeed crop regions, where false positives (actual non-crop areas incorrectly identified as crop areas) are minimized. Recall, on the other hand, measures the completeness of positive predictions. In other words, recall measures the proportion of actual positives that were identified correctly. In our case, a recall value of 0.73 for crop regions means that 73% of all true crop region pixels are correctly identified, minimizing the number of false negatives.

Ideally, high values of both precision and recall are preferred, although this largely depends on the application of the case study. For example, if we were examining these results for farmers looking to identify crop areas for agriculture, we would want to give preference to a higher recall than precision, in order to minimize the number of false negatives (areas identified as non-crop areas that are actually crop areas) and make the most use of the land. The F1 score serves as an overall accuracy metric combining both precision and recall, and measures the balance between the two metrics. A high F1 score, such as ours for crop regions (0.82), indicates a balance between precision and recall and a high overall classification accuracy. Although the F1 score drops between the train and test datasets, this is expected because the classifier was trained on the train dataset. An overall weighted average F1 score of 0.77 is promising and sufficient enough to try segmentation schemes on the classified data.
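
For reference, these metrics follow directly from the counts of true positives (TP), false positives (FP), and false negatives (FN) for the crop class:

precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 * precision * recall / (precision + recall)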

Create a segmentation mask from the classifier

The creation of a segmentation mask using the predictions from the KNN classifier on the test dataset involves cleaning up the predicted output to avoid small segments caused by image noise. To remove speckle noise, we use the OpenCV median blur filter. This filter preserves road delineations between crops better than the morphological open operation.

To apply binary segmentation to the denoised output, we first need to convert the classified raster data to vector features using the OpenCV findContours function.

Finally, the actual segmented crop regions can be computed using the segmented crop outlines, as in the sketch that follows.
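
The following is a minimal sketch of that cleanup and vectorization flow, using the placeholder names from the earlier sketches; the kernel size and contour-handling details are assumptions for illustration rather than the exact parameters from the notebook.

import cv2
import numpy as np

# Start from the predicted test classes (1 = crop, 0 = non-crop) as an 8-bit raster.
predicted = np.asarray(test_predicted_class_band, dtype=np.uint8)

# 1. Remove speckle noise with a median blur (kernel size is an assumed value).
denoised = cv2.medianBlur(predicted, 5)

# 2. Vectorize the crop regions into outlines with findContours.
contours, _ = cv2.findContours(denoised, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# 3. Compute the segmented crop regions: per-segment pixel area and a filled crop mask.
segment_areas = [cv2.contourArea(c) for c in contours]
crop_mask = np.zeros_like(denoised)
cv2.drawContours(crop_mask, contours, -1, color=1, thickness=-1)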

The segmented crop regions produced from the KNN classifier allow for precise identification of crop areas in the test dataset. These segmented regions can be used for various purposes, such as field boundary identification, crop monitoring, yield estimation, and resource allocation. The achieved F1 score of 0.77 is good and provides evidence that the KNN classifier is an effective tool for crop segmentation in remote sensing images. These results can be used to further improve and refine crop segmentation methods, potentially leading to increased accuracy and efficiency in crop analysis.

Conclusion

This post demonstrated how you can use the combination of Planet's high-cadence, high-resolution satellite imagery and SageMaker geospatial capabilities to perform crop segmentation analysis, unlocking valuable insights that can improve agricultural efficiency, environmental sustainability, and food security. Accurately identifying crop regions enables further analysis of crop growth and productivity, monitoring of land use changes, and detection of potential food security risks.

Moreover, the combination of Planet data and SageMaker offers a wide range of use cases beyond crop segmentation. The insights can enable data-driven decisions on crop management, resource allocation, and policy planning in agriculture alone. With different data and ML models, the combined offering could also expand into other industries and use cases towards digital transformation, sustainability transformation, and security.

To get started with SageMaker geospatial capabilities, see Get started with Amazon SageMaker geospatial capabilities.

To learn more about Planet's imagery specifications and developer reference materials, visit Planet Developer's Center. For documentation on Planet's SDK for Python, see Planet SDK for Python. For more information about Planet, including its current data products and upcoming product releases, visit https://www.planet.com/.

Planet Labs PBC Forward-Looking Statements

Except for the historical information contained herein, the matters set forth in this blog post are forward-looking statements within the meaning of the "safe harbor" provisions of the Private Securities Litigation Reform Act of 1995, including, but not limited to, Planet Labs PBC's ability to capture market opportunity and realize any of the potential benefits from current or future product enhancements, new products, or strategic partnerships and customer collaborations. Forward-looking statements are based on Planet Labs PBC's management's beliefs, as well as assumptions made by, and information currently available to, them. Because such statements are based on expectations as to future events and results and are not statements of fact, actual results may differ materially from those projected. Factors which may cause actual results to differ materially from current expectations include, but are not limited to, the risk factors and other disclosures about Planet Labs PBC and its business included in Planet Labs PBC's periodic reports, proxy statements, and other disclosure materials filed from time to time with the Securities and Exchange Commission (SEC), which are available online at www.sec.gov, and on Planet Labs PBC's website at www.planet.com. All forward-looking statements reflect Planet Labs PBC's beliefs and assumptions only as of the date such statements are made. Planet Labs PBC undertakes no obligation to update forward-looking statements to reflect future events or circumstances.

https://www.planet.com/pulse/planets-data-leveraged-to-understand-agricultural-land-and-quickly-changing-environments/
