From f13b4ba2ffe2811ef617c78daa9ab313f45634b2 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Tue, 17 Jun 2025 23:24:53 +0530 Subject: [PATCH 01/80] revamp account management --- .../docs/aws/services/account-management.md | 32 +++++++++---------- 1 file changed, 15 insertions(+), 17 deletions(-) diff --git a/src/content/docs/aws/services/account-management.md b/src/content/docs/aws/services/account-management.md index a92cbc8f..3d96d9a2 100644 --- a/src/content/docs/aws/services/account-management.md +++ b/src/content/docs/aws/services/account-management.md @@ -1,24 +1,23 @@ --- title: "Account Management" -linkTitle: "Account Management" description: Get started with AWS Account Management on LocalStack tags: ["Ultimate"] --- ## Introduction -The Account service provides APIs to manage your AWS account. +Account service provides APIs to manage your AWS account. You can use the Account APIs to retrieve information about your account, manage your contact information and alternate contacts. Additionally, you can use the Account APIs to enable or disable a region for your account, and delete alternate contacts in your account. LocalStack allows you to use the Account API to retrieve information about your account. -The supported APIs are available on our [API coverage page]({{< ref "coverage_account" >}}), which provides information on the extent of Account's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Account's integration with LocalStack. -{{< callout >}} -LocalStack's Account provider is mock-only and does not support any real AWS account. +:::note +LocalStack's Account provider is mock-only and does not support connecting to any real AWS account. The Account APIs are only intended to demonstrate how you can use and mock the AWS Account APIs in your local environment. 
It's important to note that LocalStack doesn't offer a programmatic interface to manage your AWS or your LocalStack account. -{{< /callout >}} +::: ## Getting started @@ -32,8 +31,8 @@ We will demonstrate how to put contact information, fetch account details, and a You can use the [`PutContactInformation`](https://docs.aws.amazon.com/accounts/latest/reference/API_PutContactInformation.html) API to add or update the contact information for your AWS account. Run the following command to add contact information to your account: -{{< command >}} -$ awslocal account put-contact-information \ +```bash +awslocal account put-contact-information \ --contact-information '{ "FullName": "Jane Doe", "PhoneNumber": "+XXXXXXXXX", @@ -43,16 +42,16 @@ $ awslocal account put-contact-information \ "CountryCode": "US", "StateOrRegion": "WA" }' -{{< /command >}} +``` ### Fetch account details You can use the [`GetContactInformation`](https://docs.aws.amazon.com/accounts/latest/reference/API_GetContactInformation.html) API to retrieve the contact information for your AWS account. Run the following command to fetch the contact information for your account: -{{< command >}} -$ awslocal account get-contact-information -{{< /command >}} +```bash +awslocal account get-contact-information +``` The command will return the contact information for your account: @@ -75,22 +74,21 @@ The command will return the contact information for your account: You can attach an alternate contact using [`PutAlternateContact`](https://docs.aws.amazon.com/accounts/latest/reference/API_PutAlternateContact.html) API. 
Run the following command to attach an alternate contact to your account: -{{< command >}} -$ awslocal account put-alternate-contact \ +```bash +awslocal account put-alternate-contact \ --alternate-contact-type "BILLING" \ --email-address "bill@ing.com" \ --name "Bill Ing" \ --phone-number "+1 555-555-5555" \ --title "Billing" -{{< /command >}} +``` ## Resource Browser The LocalStack Web Application provides a Resource Browser for managing contact information & alternate accounts for the Account service. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the Resources section, and then clicking on **Account** under the **Management & Governance** section. -Account Resource Browser -

+![Account Resource Browser](/images/aws/account-resource-browser.png) The Resource Browser allows you to perform the following actions: From 247c9a55aa594b5ffd63c140d592a30fb71fde02 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Tue, 17 Jun 2025 23:40:02 +0530 Subject: [PATCH 02/80] revamp acm --- src/content/docs/aws/services/acm.md | 30 +++++++++++++--------------- 1 file changed, 14 insertions(+), 16 deletions(-) diff --git a/src/content/docs/aws/services/acm.md b/src/content/docs/aws/services/acm.md index bc4c6fb4..524dd9a4 100644 --- a/src/content/docs/aws/services/acm.md +++ b/src/content/docs/aws/services/acm.md @@ -1,6 +1,5 @@ --- title: "Certificate Manager (ACM)" -linkTitle: "Certificate Manager (ACM)" description: Get started with AWS Certificate Manager (ACM) on LocalStack tags: ["Free"] --- @@ -14,7 +13,7 @@ ACM supports securing multiple domain names and subdomains and can create wildca You can also use ACM to import certificates from third-party certificate authorities or to generate private certificates for internal use. LocalStack allows you to use the ACM APIs to create, list, and delete certificates. -The supported APIs are available on our [API coverage page]({{< ref "coverage_acm" >}}), which provides information on the extent of ACM's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of ACM's integration with LocalStack. ## Getting started @@ -26,13 +25,13 @@ Start your LocalStack container using your preferred method, then use the [Reque Specify the domain name you want to request the certificate for, and any additional options you need. 
Here's an example command: -{{< command >}} -$ awslocal acm request-certificate \ +```bash +awslocal acm request-certificate \ --domain-name www.example.com \ --validation-method DNS \ --idempotency-token 1234 \ --options CertificateTransparencyLoggingPreference=DISABLED -{{< /command >}} +``` This command will return the Amazon Resource Name (ARN) of the new certificate, which you can use in other ACM commands. @@ -48,9 +47,9 @@ Use the [`ListCertificates` API](https://docs.aws.amazon.com/acm/latest/APIRefer This command returns a list of the ARNs of all the certificates that have been requested or imported into ACM. Here's an example command: -{{< command >}} -$ awslocal acm list-certificates --max-items 10 -{{< /command >}} +```bash +awslocal acm list-certificates --max-items 10 +``` ### Describe the certificate @@ -58,26 +57,25 @@ Use the [`DescribeCertificate` API](https://docs.aws.amazon.com/acm/latest/APIRe Provide the ARN of the certificate you want to view, and this command will return information about the certificate's status, domain name, and other attributes. Here's an example command: -{{< command >}} -$ awslocal acm describe-certificate --certificate-arn arn:aws:acm::account:certificate/ -{{< /command >}} +```bash +awslocal acm describe-certificate --certificate-arn arn:aws:acm::account:certificate/ +``` ### Delete the certificate Finally you can use the [`DeleteCertificate` API](https://docs.aws.amazon.com/acm/latest/APIReference/API_DeleteCertificate.html) to delete a certificate from ACM, by passing the ARN of the certificate you want to delete. Here's an example command: -{{< command >}} -$ awslocal acm delete-certificate --certificate-arn arn:aws:acm::account:certificate/ -{{< /command >}} +```bash +awslocal acm delete-certificate --certificate-arn arn:aws:acm::account:certificate/ +``` ## Resource Browser The LocalStack Web Application provides a Resource Browser for managing ACM Certificates. 
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Certificate Manager** under the **Security Identity Compliance** section. -ACM Resource Browser -

+![ACM Resource Browser](/images/aws/acm-resource-browser.png) The Resource Browser allows you to perform the following actions: From cf348651ba932fa33781f0539c46db1c9c897090 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Tue, 17 Jun 2025 23:41:40 +0530 Subject: [PATCH 03/80] revamp amplify --- src/content/docs/aws/services/amplify.md | 28 +++++++++++------------- 1 file changed, 13 insertions(+), 15 deletions(-) diff --git a/src/content/docs/aws/services/amplify.md b/src/content/docs/aws/services/amplify.md index 2b829550..974d9a17 100644 --- a/src/content/docs/aws/services/amplify.md +++ b/src/content/docs/aws/services/amplify.md @@ -1,6 +1,5 @@ --- title: "Amplify" -linkTitle: "Amplify" description: Get started with Amplify on LocalStack tags: ["Ultimate"] persistence: supported @@ -12,12 +11,12 @@ Amplify is a JavaScript-based development framework with libraries, UI component With Amplify, developers can build and host static websites, single-page applications, and full-stack serverless web applications using an abstraction layer over popular AWS services like DynamoDB, Cognito, AppSync, Lambda, S3, and more. LocalStack allows you to use the Amplify APIs to build and test their Amplify applications locally. -The supported APIs are available on our [API coverage page]({{< ref "coverage_amplify" >}}), which provides information on the extent of Amplify's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Amplify's integration with LocalStack. -{{< callout "note" >}} +:::note The `amplifylocal` CLI and the Amplify JS library have been deprecated and are no longer supported. We recommend using the Amplify CLI with the Amplify LocalStack Plugin instead. 
-{{< /callout >}} +::: ## Amplify LocalStack Plugin @@ -28,10 +27,10 @@ It achieves this by redirecting any requests to AWS to a LocalStack container ru To install the Amplify LocalStack Plugin, install the [amplify-localstack](https://www.npmjs.com/package/amplify-localstack) package from the npm registry and add the plugin to your Amplify setup: -{{< command >}} -$ npm install -g amplify-localstack -$ amplify plugin add amplify-localstack -{{< /command >}} +```bash +npm install -g amplify-localstack +amplify plugin add amplify-localstack +``` ### Configuration @@ -53,19 +52,18 @@ The console will prompt you to select whether to deploy to LocalStack or AWS. You can also add the parameter `--use-localstack true` to your commands to avoid being prompted and automatically use LocalStack. Here is an example: -{{< command >}} -$ amplify init --use-localstack true -$ amplify add api -$ amplify push --use-localstack true -{{< /command >}} +```bash +amplify init --use-localstack true +amplify add api +amplify push --use-localstack true +``` ## Resource Browser The LocalStack Web Application provides a Resource Browser for managing Amplify applications. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Amplify** under the **Front-end Web & Mobile** section. -Amplify Resource Browser -

+![Amplify Resource Browser](/images/aws/amplify-resource-browser.png)

The Resource Browser allows you to perform the following actions:

From fe6da994a0065ff6835c154ff7c942386cf014c6 Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Tue, 17 Jun 2025 23:48:15 +0530
Subject: [PATCH 04/80] get msaf done

---
 src/content/docs/aws/services/apacheflink.md | 118 +++++++++++--------
 1 file changed, 68 insertions(+), 50 deletions(-)

diff --git a/src/content/docs/aws/services/apacheflink.md b/src/content/docs/aws/services/apacheflink.md
index a6c00da7..2188bdeb 100644
--- a/src/content/docs/aws/services/apacheflink.md
+++ b/src/content/docs/aws/services/apacheflink.md
@@ -1,30 +1,29 @@
---
title: "Managed Service for Apache Flink"
-linkTitle: "Managed Service for Apache Flink"
description: >
  Get started with Managed Service for Apache Flink on LocalStack
tags: ["Ultimate"]
---

-{{< callout >}}
+:::note
This service was formerly known as 'Kinesis Data Analytics for Apache Flink'.
-{{< /callout >}}
+:::

## Introduction

[Apache Flink](https://flink.apache.org/) is a framework for building applications that process and analyze streaming data.
[Managed Service for Apache Flink (MSF)](https://docs.aws.amazon.com/managed-flink/latest/java/what-is.html) is an AWS service that provides the underlying infrastructure and a hosted Apache Flink cluster that can run Apache Flink applications.

-LocalStack lets you to run Flink applications locally and implements several [AWS-compatible API operations]({{< ref "coverage_kinesisanalyticsv2" >}}).
+LocalStack lets you run Flink applications locally and implements several [AWS-compatible API operations]().

A separate Apache Flink cluster is started in [application mode](https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/overview/#application-mode) for every Managed Flink application created.
Flink cluster deployment on LocalStack consists of two separate containers for [JobManager](https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/concepts/flink-architecture/#jobmanager) and [TaskManager](https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/concepts/flink-architecture/#taskmanagers).

-{{< callout "note" >}}
+:::note
The emulated MSF provider was introduced and made the default in LocalStack v4.1.
If you wish to use the older mock provider, you can set `PROVIDER_OVERRIDE_KINESISANALYTICSV2=legacy`.
-{{< /callout >}}
+:::

## Getting Started

Start the LocalStack container using your preferred method.

Begin by cloning the AWS sample repository.
We will use the [S3 Sink](https://github.com/localstack-samples/amazon-managed-service-for-apache-flink-examples/tree/main/java/S3Sink) application in this example.

-{{< command >}}
-$ git clone https://github.com/localstack-samples/amazon-managed-service-for-apache-flink-examples.git
-$ cd java/S3Sink
-{{< /command >}}
+```bash
+git clone https://github.com/localstack-samples/amazon-managed-service-for-apache-flink-examples.git
+cd java/S3Sink
+```

Next, use [Maven](https://maven.apache.org/) to compile and package the Flink application into a jar.

-{{< command >}}
-$ mvn package
-{{< /command >}}
+```bash
+mvn package
+```

The compiled Flink application jar will be placed at `./target/flink-kds-s3.jar`.

MSF requires that all application code resides in S3.
Create an S3 bucket and upload the compiled Flink application jar.
-{{< command >}} -$ awslocal s3api create-bucket --bucket flink-bucket -$ awslocal s3api put-object --bucket flink-bucket --key job.jar --body ./target/flink-kds-s3.jar -{{< /command >}} +```bash +awslocal s3api create-bucket --bucket flink-bucket +awslocal s3api put-object --bucket flink-bucket --key job.jar --body ./target/flink-kds-s3.jar +``` ### Output Sink @@ -68,9 +67,9 @@ As mentioned earlier, this Flink application writes the output to an S3 bucket. Create the S3 bucket that will serve as the sink. -{{< command >}} -$ awslocal s3api create-bucket --bucket sink-bucket -{{< /command >}} +```bash +awslocal s3api create-bucket --bucket sink-bucket +``` ### Permissions @@ -93,9 +92,9 @@ Create an IAM role for the running MSF application to assume. } ``` -{{< command >}} -$ awslocal iam create-role --role-name msaf-role --assume-role-policy-document file://role.json -{{< /command >}} +```bash +awslocal iam create-role --role-name msaf-role --assume-role-policy-document file://role.json +``` Next create add a permissions policy to this role that permits read and write access to S3. @@ -113,9 +112,9 @@ Next create add a permissions policy to this role that permits read and write ac } ``` -{{< command >}} -$ awslocal iam put-role-policy --role-name msaf-role --policy-name msaf-policy --policy-document file://policy.json -{{< /command >}} +```bash +awslocal iam put-role-policy --role-name msaf-role --policy-name msaf-policy --policy-document file://policy.json +``` Now, when the running MSF application assumes this role, it will have the necessary permissions to write to the S3 sink. @@ -123,8 +122,8 @@ Now, when the running MSF application assumes this role, it will have the necess With all prerequisite resources in place, the Flink application can now be created and started. 
-{{< command >}} -$ awslocal kinesisanalyticsv2 create-application \ +```bash +awslocal kinesisanalyticsv2 create-application \ --application-name msaf-app \ --runtime-environment FLINK-1_20 \ --application-mode STREAMING \ @@ -146,15 +145,15 @@ $ awslocal kinesisanalyticsv2 create-application \ } }' -$ awslocal kinesisanalyticsv2 start-application --application-name msaf-app -{{< /command >}} +awslocal kinesisanalyticsv2 start-application --application-name msaf-app +``` Once the Flink cluster is up and running, the application will stream the results to the sink S3 bucket. You can verify this with: -{{< command >}} -$ awslocal s3api list-objects --bucket sink-bucket -{{< /command >}} +```bash +awslocal s3api list-objects --bucket sink-bucket +``` ## CloudWatch Logging @@ -170,10 +169,15 @@ There are following prerequisites for CloudWatch Logs integration: To add a logging option: -{{< command >}} -$ awslocal kinesisanalyticsv2 add-application-cloud-watch-logging-option \ +```bash +awslocal kinesisanalyticsv2 add-application-cloud-watch-logging-option \ --application-name msaf-app \ --cloud-watch-logging-option '{"LogStreamARN": "arn:aws:logs:us-east-1:000000000000:log-group:msaf-log-group:log-stream:msaf-log-stream"}' +``` + +The response will be similar to: + +```json { "ApplicationARN": "arn:aws:kinesisanalytics:us-east-1:000000000000:application/msaf-app", "ApplicationVersionId": 2, @@ -184,34 +188,39 @@ $ awslocal kinesisanalyticsv2 add-application-cloud-watch-logging-option \ } ] } -{{< /command >}} +``` -{{< callout >}} +:::note Enabling CloudWatch Logs integration has a significant performance hit. 
-{{< /callout >}} +::: Configured logging options can be retrieved using [DescribeApplication](https://docs.aws.amazon.com/managed-flink/latest/apiv2/API_DescribeApplication.html): -{{< command >}} -$ awslocal kinesisanalyticsv2 describe-application --application-name msaf-app | jq .ApplicationDetail.CloudWatchLoggingOptionDescriptions +```bash +awslocal kinesisanalyticsv2 describe-application --application-name msaf-app | jq .ApplicationDetail.CloudWatchLoggingOptionDescriptions +``` + +The response will be similar to: + +```json [ { "CloudWatchLoggingOptionId": "1.1", "LogStreamARN": "arn:aws:logs:us-east-1:000000000000:log-group:msaf-log-group:log-stream:msaf-log-stream" } ] -{{< /command >}} +``` Log events can be retrieved from CloudWatch Logs using the appropriate operation. To retrieve all events: -{{< command >}} -$ awslocal logs get-log-events --log-group-name msaf-log-group --log-stream-name msaf-log-stream -{{< /command >}} +```bash +awslocal logs get-log-events --log-group-name msaf-log-group --log-stream-name msaf-log-stream +``` -{{< callout >}} +:::note Logs events are reported to CloudWatch every 10 seconds. -{{< /callout >}} +::: LocalStack reports both Flink application and Flink framework logs to CloudWatch. However, certain extended information such as stack traces may be missing. @@ -222,13 +231,18 @@ You may obtain this information by execing into the Flink Docker container creat You can manage [resource tags](https://docs.aws.amazon.com/managed-flink/latest/java/how-tagging.html) using [TagResource](https://docs.aws.amazon.com/managed-flink/latest/apiv2/API_TagResource.html), [UntagResource](https://docs.aws.amazon.com/managed-flink/latest/apiv2/API_UntagResource.html) and [ListTagsForResource](https://docs.aws.amazon.com/managed-flink/latest/apiv2/API_ListTagsForResource.html). 
Tags can also be specified when creating the Flink application using the [CreateApplication](https://docs.aws.amazon.com/managed-flink/latest/apiv2/API_CreateApplication.html) operation. -{{< command >}} -$ awslocal kinesisanalyticsv2 tag-resource \ +```bash +awslocal kinesisanalyticsv2 tag-resource \ --resource-arn arn:aws:kinesisanalytics:us-east-1:000000000000:application/msaf-app \ --tags Key=country,Value=SE -$ awslocal kinesisanalyticsv2 list-tags-for-resource \ +awslocal kinesisanalyticsv2 list-tags-for-resource \ --resource-arn arn:aws:kinesisanalytics:us-east-1:000000000000:application/msaf-app +``` + +The response will be similar to: + +```json { "Tags": [ { @@ -237,11 +251,15 @@ $ awslocal kinesisanalyticsv2 list-tags-for-resource \ } ] } +``` + +You can also untag the resource: +```bash $ awslocal kinesisanalyticsv2 untag-resource \ --resource-arn arn:aws:kinesisanalytics:us-east-1:000000000000:application/msaf-app \ --tag-keys country -{{< /command >}} +``` ## Supported Flink Versions From 654b0ace894bd19b439f23b5d76213af54a6ad9f Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Tue, 17 Jun 2025 23:52:24 +0530 Subject: [PATCH 05/80] get api gateway done --- src/content/docs/aws/services/api-gateway.md | 131 +++++++++++-------- 1 file changed, 74 insertions(+), 57 deletions(-) diff --git a/src/content/docs/aws/services/api-gateway.md b/src/content/docs/aws/services/api-gateway.md index c7508523..d9c27762 100644 --- a/src/content/docs/aws/services/api-gateway.md +++ b/src/content/docs/aws/services/api-gateway.md @@ -1,6 +1,5 @@ --- title: "API Gateway" -linkTitle: "API Gateway" description: Get started with API Gateway on LocalStack tags: ["Free", "Base"] persistence: supported @@ -15,7 +14,7 @@ API Gateway supports standard HTTP methods such as `GET`, `POST`, `PUT`, `PATCH` LocalStack supports API Gateway V1 (REST API) in the Free plan, and API Gateway V2 (HTTP, Management and WebSocket API) in the Base plan. 
LocalStack allows you to use the API Gateway APIs to create, deploy, and manage APIs on your local machine to invoke those exposed API endpoints. -The supported APIs are available on the API coverage page for [API Gateway V1]({{< ref "coverage_apigateway" >}}) & [API Gateway V2]({{< ref "coverage_apigatewayv2" >}}), which provides information on the extent of API Gateway's integration with LocalStack. +The supported APIs are available on the API coverage page for [API Gateway V1]() & [API Gateway V2](), which provides information on the extent of API Gateway's integration with LocalStack. ## Getting started @@ -50,16 +49,16 @@ The above code defines a function named `apiHandler` that returns a response wit Zip the file and upload it to LocalStack using the `awslocal` CLI. Run the following command: -{{< command >}} -$ zip function.zip lambda.js -$ awslocal lambda create-function \ +```bash +zip function.zip lambda.js +awslocal lambda create-function \ --function-name apigw-lambda \ --runtime nodejs16.x \ --handler lambda.apiHandler \ --memory-size 128 \ --zip-file fileb://function.zip \ --role arn:aws:iam::111111111111:role/apigw -{{< /command >}} +``` This creates a new Lambda function named `apigw-lambda` with the code you specified. @@ -68,9 +67,9 @@ This creates a new Lambda function named `apigw-lambda` with the code you specif We will use the API Gateway's [`CreateRestApi`](https://docs.aws.amazon.com/apigateway/latest/api/API_CreateRestApi.html) API to create a new REST API. Here's an example command: -{{< command >}} -$ awslocal apigateway create-rest-api --name 'API Gateway Lambda integration' -{{< /command >}} +```bash +awslocal apigateway create-rest-api --name 'API Gateway Lambda integration' +``` This creates a new REST API named `API Gateway Lambda integration`. The above command returns the following response: @@ -97,9 +96,9 @@ You'll need this ID for the next step. 
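If you are scripting these steps, the API ID can be captured from the response rather than copied by hand. A minimal sketch using a canned response for illustration (the `mnje8ydnwi` value is hypothetical; in a live session you would instead assign `response=$(awslocal apigateway create-rest-api --name 'API Gateway Lambda integration')`):

```shell
# Canned CreateRestApi response for illustration; with LocalStack running you
# would capture the real response from `awslocal apigateway create-rest-api`.
response='{"id": "mnje8ydnwi", "name": "API Gateway Lambda integration"}'

# Pull out the "id" field; `echo "$response" | jq -r .id` works equally well.
REST_API_ID=$(printf '%s' "$response" | python3 -c 'import sys, json; print(json.load(sys.stdin)["id"])')
echo "$REST_API_ID"
```

Subsequent commands can then reference `$REST_API_ID` wherever `<rest-api-id>` appears.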
Use the REST API ID generated in the previous step to fetch the resources for the API, using the [`GetResources`](https://docs.aws.amazon.com/apigateway/latest/api/API_GetResources.html) API: -{{< command >}} -$ awslocal apigateway get-resources --rest-api-id -{{< /command >}} +```bash +awslocal apigateway get-resources --rest-api-id +``` The above command returns the following response: @@ -122,12 +121,12 @@ You'll need this ID for the next step. Create a new resource for the API using the [`CreateResource`](https://docs.aws.amazon.com/apigateway/latest/api/API_CreateResource.html) API. Use the ID of the resource returned in the previous step as the parent ID: -{{< command >}} -$ awslocal apigateway create-resource \ +```bash +awslocal apigateway create-resource \ --rest-api-id \ --parent-id \ --path-part "{somethingId}" -{{< /command >}} +``` The above command returns the following response: @@ -148,14 +147,14 @@ You'll need this Resource ID for the next step. Add a `GET` method to the resource using the [`PutMethod`](https://docs.aws.amazon.com/apigateway/latest/api/API_PutMethod.html) API. Use the ID of the resource returned in the previous step as the Resource ID: -{{< command >}} +```bash awslocal apigateway put-method \ --rest-api-id \ --resource-id \ --http-method GET \ --request-parameters "method.request.path.somethingId=true" \ --authorization-type "NONE" -{{< /command >}} +``` The above command returns the following response: @@ -172,8 +171,8 @@ The above command returns the following response: Now, create a new integration for the method using the [`PutIntegration`](https://docs.aws.amazon.com/apigateway/latest/api/API_PutIntegration.html) API. 
-{{< command >}} -$ awslocal apigateway put-integration \ +```bash +awslocal apigateway put-integration \ --rest-api-id \ --resource-id \ --http-method GET \ @@ -181,7 +180,7 @@ $ awslocal apigateway put-integration \ --integration-http-method POST \ --uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:000000000000:function:apigw-lambda/invocations \ --passthrough-behavior WHEN_NO_MATCH -{{< /command >}} +``` The above command integrates the `GET` method with the Lambda function created in the first step. We can now proceed with the deployment before invoking the API. @@ -190,37 +189,46 @@ We can now proceed with the deployment before invoking the API. Create a new deployment for the API using the [`CreateDeployment`](https://docs.aws.amazon.com/apigateway/latest/api/API_CreateDeployment.html) API: -{{< command >}} -$ awslocal apigateway create-deployment \ +```bash +awslocal apigateway create-deployment \ --rest-api-id \ --stage-name dev -{{< /command >}} +``` Your API is now ready to be invoked. 
You can use [curl](https://curl.se/) or any HTTP REST client to invoke the API endpoint: -{{< command >}} -$ curl -X GET http://.execute-api.localhost.localstack.cloud:4566/dev/test +```bash +curl -X GET http://.execute-api.localhost.localstack.cloud:4566/dev/test +``` + +The response would be: +```json {"message":"Hello World"} -{{< /command >}} +``` + +You can also use our [alternative URL format](#alternative-url-format) in case of DNS issues: + +```bash +curl -X GET http://localhost:4566/_aws/execute-api//dev/test +``` -You can also use our [alternative URL format]({{< ref "#alternative-url-format" >}}) in case of DNS issues: -{{< command >}} -$ curl -X GET http://localhost:4566/_aws/execute-api//dev/test +The response would be: +```json {"message":"Hello World"} -{{< /command >}} +``` ## New API Gateway implementation -{{< callout >}} +:::note The new API Gateway implementation for both v1 (REST API) and v2 (HTTP API), introduced in [LocalStack 3.8.0](https://blog.localstack.cloud/localstack-release-v-3-8-0/#new-api-gateway-provider), is now the default in 4.0. If you were using the `PROVIDER_OVERRIDE_APIGATEWAY=next_gen` flag, please remove it as it is no longer required. The legacy provider (`PROVIDER_OVERRIDE_APIGATEWAY=legacy`) is temporarily available but deprecated and will be removed in the next major release. We strongly recommend migrating to the new implementation. -{{< /callout >}} +::: We're entirely reworked how REST and HTTP APIs are invoked, to closely match the behavior on AWS. This new implementation has improved parity on several key areas: @@ -320,15 +328,13 @@ http://localhost:4566/_aws/execute-api/0v1p6q6/local/my/path1 This format is sometimes used in case of local DNS issues. -{{< callout >}} - +:::note If you are using LocalStack 4.0, the following `_user_request_` format is deprecated, and you should use the format above. 
```shell http://localhost:4566/restapis///_user_request_/ ``` - -{{< / callout >}} +::: ### WebSocket APIs (Pro) @@ -350,8 +356,13 @@ functions: Upon deployment of the Serverless project, LocalStack creates a new API Gateway V2 endpoint. To retrieve the list of APIs and verify the WebSocket endpoint, you can use the `awslocal` CLI: -{{< command >}} -$ awslocal apigatewayv2 get-apis +```bash +awslocal apigatewayv2 get-apis +``` + +The response would be: + +```json { "Items": [{ "ApiEndpoint": "ws://localhost:4510", @@ -359,7 +370,7 @@ $ awslocal apigatewayv2 get-apis ... }] } -{{< / command >}} +``` In the above example, the WebSocket endpoint is `ws://localhost:4510`. Assuming your Serverless project contains a simple Lambda `handler.js` like this: @@ -375,12 +386,12 @@ You can send a message to the WebSocket at `ws://localhost:4510` and the same me To push data from a backend service to the WebSocket connection, you can use the [Amazon API Gateway Management API](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/apigatewaymanagementapi/index.html). In LocalStack, use the following CLI command (replace `` with your WebSocket connection ID): -{{< command >}} -$ awslocal apigatewaymanagementapi \ +```bash +awslocal apigatewaymanagementapi \ post-to-connection \ --connection-id '' \ --data '{"msg": "Hi"}' -{{< / command >}} +``` ## Custom IDs for API Gateway resources via tags @@ -390,18 +401,23 @@ This can be useful to ensure a static endpoint URL for your API, simplifying tes To assign a custom ID to an API Gateway REST API, use the `create-rest-api` command with the `tags={"_custom_id_":"myid123"}` parameter. The following example assigns the custom ID `"myid123"` to the API: -{{< command >}} -$ awslocal apigateway create-rest-api --name my-api --tags '{"_custom_id_":"myid123"}' +```bash +awslocal apigateway create-rest-api --name my-api --tags '{"_custom_id_":"myid123"}' +``` + +The response would be: + +```json { "id": "myid123", .... 
} -{{< / command >}} +``` You can also configure the protocol type, the possible values being `HTTP` and `WEBSOCKET`: -{{< command >}} -$ awslocal apigatewayv2 create-api \ +```bash +awslocal apigatewayv2 create-api \ --name=my-api \ --protocol-type=HTTP --tags="_custom_id_=my-api" { @@ -413,12 +429,12 @@ $ awslocal apigatewayv2 create-api \ "_custom_id_": "my-api" } } -{{< / command >}} +``` -{{< callout >}} +:::note Setting the API Gateway ID via `_custom_id_` works only on the creation of the resource, but not on update in LocalStack. Ensure that you set the `_custom_id_` tag on creation of the resource. -{{< /callout >}} +::: ## Custom Domain Names with API Gateway (Pro) @@ -430,14 +446,15 @@ Assuming your custom domain is set up as `test.example.com` to point to your RES You should include the `Host` header with the custom domain name in your request, so you don't need to set up any custom DNS to resolve to LocalStack. -{{< command >}} -$ curl -H 'Host: test.example.com' http://localhost:4566/base-path -{{< / command >}} +```bash +curl -H 'Host: test.example.com' http://localhost:4566/base-path +``` The request above will be equivalent to the following request: -{{< command >}} -$ curl http://.execute-api.localhost.localstack.cloud:4566/dev/ -{{< / command >}} + +```bash +curl http://.execute-api.localhost.localstack.cloud:4566/dev/ +``` ## API Gateway Resource Browser @@ -447,7 +464,7 @@ You can access the Resource Browser by opening the LocalStack Web Application in The Resource Browser displays [API Gateway V1](https://app.localstack.cloud/resources/gateway/v1) and [API Gateway V2](https://app.localstack.cloud/resources/gateway/v2) resources. You can click on individual resources to view their details. 
-API Gateway Resource Browser +![API Gateway Resource Browser](/images/aws/api-gateway-resource-browser.png) The Resource Browser allows you to perform the following actions: From d2bc3d95e4884e5cee840ac6e30a929802314674 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Tue, 17 Jun 2025 23:53:48 +0530 Subject: [PATCH 06/80] get auto scaling done --- .../docs/aws/services/app-auto-scaling.md | 45 +++++++++---------- 1 file changed, 21 insertions(+), 24 deletions(-) diff --git a/src/content/docs/aws/services/app-auto-scaling.md b/src/content/docs/aws/services/app-auto-scaling.md index 0a358453..d8d9ae5a 100644 --- a/src/content/docs/aws/services/app-auto-scaling.md +++ b/src/content/docs/aws/services/app-auto-scaling.md @@ -1,6 +1,5 @@ --- title: "Application Auto Scaling" -linkTitle: "Application Auto Scaling" description: Get started with Application Auto Scaling on LocalStack tags: ["Base"] persistence: supported @@ -14,7 +13,7 @@ With Application Auto Scaling, you can configure automatic scaling for services Auto scaling uses CloudWatch under the hood to configure scalable targets which a service namespace, resource ID, and scalable dimension can uniquely identify. LocalStack allows you to use the Application Auto Scaling APIs in your local environment to scale different resources based on scaling policies and scheduled scaling. -The supported APIs are available on our [API coverage page]({{< ref "coverage_application-autoscaling" >}}), which provides information on the extent of Application Auto Scaling's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Application Auto Scaling's integration with LocalStack. 
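The (service namespace, resource ID, scalable dimension) triple described above is what uniquely identifies a scalable target. A toy Python illustration of that keying — this is not LocalStack's implementation, and the registry and helper names are made up:

```python
# Toy illustration: a scalable target is uniquely keyed by the triple
# (service namespace, resource ID, scalable dimension).
targets = {}

def register_scalable_target(namespace, resource_id, dimension, min_capacity, max_capacity):
    key = (namespace, resource_id, dimension)
    # Registering the same triple again overwrites the entry, mirroring
    # the upsert behavior of RegisterScalableTarget.
    targets[key] = {"MinCapacity": min_capacity, "MaxCapacity": max_capacity}
    return key

key = register_scalable_target(
    "lambda", "function:autoscaling-example:BLUE",
    "lambda:function:ProvisionedConcurrency", 0, 5,
)
register_scalable_target(
    "lambda", "function:autoscaling-example:BLUE",
    "lambda:function:ProvisionedConcurrency", 1, 5,
)
print(targets[key])  # {'MinCapacity': 1, 'MaxCapacity': 5}
```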
## Getting Started @@ -39,16 +38,15 @@ exports.handler = async (event, context) => { Run the following command to create a new Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html) API: -{{< command >}} -$ zip function.zip index.js - -$ awslocal lambda create-function \ +```bash +zip function.zip index.js +awslocal lambda create-function \ --function-name autoscaling-example \ --runtime nodejs18.x \ --zip-file fileb://function.zip \ --handler index.handler \ --role arn:aws:iam::000000000000:role/cool-stacklifter -{{< /command >}} +``` ### Create a version and alias for your Lambda function @@ -56,14 +54,14 @@ Next, you can create a version for your Lambda function and publish an alias. We will use the [`PublishVersion`](https://docs.aws.amazon.com/cli/latest/reference/lambda/publish-version.html) and [`CreateAlias`](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-alias.html) APIs for this. Run the following commands: -{{< command >}} -$ awslocal lambda publish-version --function-name autoscaling-example -$ awslocal lambda create-alias \ +```bash +awslocal lambda publish-version --function-name autoscaling-example +awslocal lambda create-alias \ --function-name autoscaling-example \ --description "alias for blue version of function" \ --function-version 1 \ --name BLUE -{{< /command >}} +``` ### Register the Lambda function as a scalable target @@ -72,20 +70,20 @@ We will specify the `--service-namespace` as `lambda`, `--scalable-dimension` as Run the following command to register the scalable target: -{{< command >}} -$ awslocal application-autoscaling register-scalable-target \ +```bash +awslocal application-autoscaling register-scalable-target \ --service-namespace lambda \ --scalable-dimension lambda:function:ProvisionedConcurrency \ --resource-id function:autoscaling-example:BLUE \ --min-capacity 0 --max-capacity 0 -{{< /command >}} +``` ### Setting up a scheduled action You can 
create a scheduled action that scales out by specifying the `--schedule` parameter with a recurring schedule specified as a cron expression.
Run the following command to create a scheduled action using the [`PutScheduledAction`](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scheduled-action.html) API:

-{{< command >}}
+```bash
 awslocal application-autoscaling put-scheduled-action \
     --service-namespace lambda \
     --scalable-dimension lambda:function:ProvisionedConcurrency \
@@ -93,14 +91,14 @@ awslocal application-autoscaling put-scheduled-action \
     --scheduled-action-name lambda-action \
     --schedule "cron(*/2 * * * ? *)" \
     --scalable-target-action MinCapacity=1,MaxCapacity=5
-{{< /command >}}
+```

 You can confirm that the scheduled action exists using the [`DescribeScheduledActions`](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/describe-scheduled-actions.html) API:

-{{< command >}}
-$ awslocal application-autoscaling describe-scheduled-actions \
+```bash
+awslocal application-autoscaling describe-scheduled-actions \
     --service-namespace lambda
-{{< /command >}}
+```

 ### Setting up a target tracking scaling policy

@@ -110,22 +108,21 @@ When metrics lack data due to minimal application load, Application Auto Scaling

 Run the following command to create a target-tracking scaling policy:

-{{< command >}}
-$ awslocal application-autoscaling put-scaling-policy \
+```bash
+awslocal application-autoscaling put-scaling-policy \
     --service-namespace lambda \
     --scalable-dimension lambda:function:ProvisionedConcurrency \
     --resource-id function:autoscaling-example:BLUE \
     --policy-name scaling-policy --policy-type TargetTrackingScaling \
     --target-tracking-scaling-policy-configuration '{ "TargetValue": 50.0, "PredefinedMetricSpecification": { "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization" }}'
-{{< /command >}}
+```

## Resource Browser

The LocalStack Web Application provides a Resource Browser for managing Application Auto Scaling resources.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Application Auto Scaling** under the **App Integration** section. -Application Auto Scaling Resource Browser -

+![Application Auto Scaling Resource Browser](/images/aws/application-auto-scaling-resource-browser.png) The Resource Browser allows you to perform the following actions: From 8bf4bf8271d408893e7897c9c4bdcfca6afaa74b Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 00:11:09 +0530 Subject: [PATCH 07/80] file renames --- .../{api-gateway.md => apigateway.md} | 0 ...{app-auto-scaling.md => appautoscaling.md} | 0 .../services/{app-config.md => appconfig.md} | 42 +++++++++---------- .../aws/services/{app-sync.md => appsync.md} | 0 4 files changed, 20 insertions(+), 22 deletions(-) rename src/content/docs/aws/services/{api-gateway.md => apigateway.md} (100%) rename src/content/docs/aws/services/{app-auto-scaling.md => appautoscaling.md} (100%) rename src/content/docs/aws/services/{app-config.md => appconfig.md} (90%) rename src/content/docs/aws/services/{app-sync.md => appsync.md} (100%) diff --git a/src/content/docs/aws/services/api-gateway.md b/src/content/docs/aws/services/apigateway.md similarity index 100% rename from src/content/docs/aws/services/api-gateway.md rename to src/content/docs/aws/services/apigateway.md diff --git a/src/content/docs/aws/services/app-auto-scaling.md b/src/content/docs/aws/services/appautoscaling.md similarity index 100% rename from src/content/docs/aws/services/app-auto-scaling.md rename to src/content/docs/aws/services/appautoscaling.md diff --git a/src/content/docs/aws/services/app-config.md b/src/content/docs/aws/services/appconfig.md similarity index 90% rename from src/content/docs/aws/services/app-config.md rename to src/content/docs/aws/services/appconfig.md index 1917038e..32825d12 100644 --- a/src/content/docs/aws/services/app-config.md +++ b/src/content/docs/aws/services/appconfig.md @@ -1,6 +1,5 @@ --- title: "AppConfig" -linkTitle: "AppConfig" description: Get started with AppConfig on LocalStack tags: ["Base"] --- @@ -10,7 +9,7 @@ AppConfig offers centralized management of configuration data and the ability 
to It allows you to avoid deploying the service repeatedly for smaller changes, enables controlled deployments to applications and includes built-in validation checks & monitoring. LocalStack allows you to use the AppConfig APIs in your local environment to define configurations for different environments and deploy them to your applications as needed. -The supported APIs are available on our [API coverage page]({{< ref "coverage_appconfig" >}}), which provides information on the extent of AppConfig's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of AppConfig's integration with LocalStack. ## Getting started @@ -25,11 +24,11 @@ You can create an AppConfig application using the [`CreateApplication`](https:// The application is a folder/directory that contains the configuration data for your specific application. The following command creates an application named `my-app`: -{{< command >}} -$ awslocal appconfig create-application \ +```bash +awslocal appconfig create-application \ --name my-app \ --description "My application" -{{< /command >}} +``` The following output would be retrieved: @@ -45,12 +44,12 @@ You can now create an AppConfig environment for your application using the [`Cre An environment consists of the deployment group of your AppConfig applications. The following command creates an environment named `my-app-env`: -{{< command >}} -$ awslocal appconfig create-environment \ +```bash +awslocal appconfig create-environment \ --application-id 400c285 \ --name my-app-env \ --description "My application environment" -{{< /command >}} +``` Replace the `application-id` with the ID of the application you created in the previous step. The following output would be retrieved: @@ -71,13 +70,13 @@ You can create an AppConfig configuration profile using the [`CreateConfiguratio A configuration profile contains for the configurations of your AppConfig applications. 
The following command creates a configuration profile named `my-app-config`: -{{< command >}} -$ awslocal appconfig create-configuration-profile \ +```bash +awslocal appconfig create-configuration-profile \ --application-id 400c285 \ --name my-app-config \ --location-uri hosted \ --type AWS.AppConfig.FeatureFlags -{{< /command >}} +``` The following output would be retrieved: @@ -108,14 +107,14 @@ Create a file named `feature-flag-config.json` with the following content: You can now use the [`CreateHostedConfigurationVersion`](https://docs.aws.amazon.com/appconfig/latest/APIReference/API_CreateHostedConfigurationVersion.html) API to save your feature flag configuration data to AppConfig. The following command creates a hosted configuration version for the configuration profile you created in the previous step: -{{< command >}} -$ awslocal appconfig create-hosted-configuration-version \ +```bash +awslocal appconfig create-hosted-configuration-version \ --application-id 400c285 \ --configuration-profile-id 7d748f9 \ --content-type "application/json" \ --content file://feature-flag-config.json \ configuration-data.json -{{< /command >}} +``` The following output would be retrieved: @@ -134,13 +133,13 @@ You can now create an AppConfig deployment strategy using the [`CreateDeployment A deployment strategy defines important criteria for rolling out your configuration to the target environment. 
The following command creates a deployment strategy named `my-app-deployment-strategy`: -{{< command >}} -$ awslocal appconfig create-deployment-strategy \ +```bash +awslocal appconfig create-deployment-strategy \ --name my-app-deployment-strategy \ --description "My application deployment strategy" \ --deployment-duration-in-minutes 10 \ --growth-factor 1.0 -{{< /command >}} +``` The following output would be retrieved: @@ -157,15 +156,15 @@ The following output would be retrieved: You can now use the [`StartDeployment`](https://docs.aws.amazon.com/appconfig/latest/APIReference/API_StartDeployment.html) API to deploy the configuration. The following command deploys the configuration to the environment you created in the previous step: -{{< command >}} -$ awslocal appconfig start-deployment \ +```bash +awslocal appconfig start-deployment \ --application-id 400c285 \ --environment-id 3695ea3 \ --deployment-strategy-id f2f2225 \ --configuration-profile-id 7d748f9 \ --configuration-version 1 \ --description "My application deployment" -{{< /command >}} +``` The following output would be retrieved: @@ -202,8 +201,7 @@ The following output would be retrieved: The LocalStack Web Application provides a Resource Browser for managing AppConfig applications. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **AppConfig** under the **Developer Tools** section. -AppConfig Resource Browser -

+![AppConfig Resource Browser](/images/aws/appconfig-resource-browser.png) The Resource Browser allows you to perform the following actions: diff --git a/src/content/docs/aws/services/app-sync.md b/src/content/docs/aws/services/appsync.md similarity index 100% rename from src/content/docs/aws/services/app-sync.md rename to src/content/docs/aws/services/appsync.md From 40d2d9828263963578b642fa0950025fe46d6e3a Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 00:12:34 +0530 Subject: [PATCH 08/80] revamp appsync --- src/content/docs/aws/services/appsync.md | 68 ++++++++++++------------ 1 file changed, 35 insertions(+), 33 deletions(-) diff --git a/src/content/docs/aws/services/appsync.md b/src/content/docs/aws/services/appsync.md index 6aaf0e1b..de4066c3 100644 --- a/src/content/docs/aws/services/appsync.md +++ b/src/content/docs/aws/services/appsync.md @@ -1,6 +1,5 @@ --- title: "AppSync" -linkTitle: "AppSync" description: Get started with AppSync on LocalStack tags: ["Ultimate"] --- @@ -11,7 +10,7 @@ AppSync is a managed service provided by Amazon Web Services (AWS) that enables AppSync allows you to define your data models and business logic using a declarative approach, and connect to various data sources, including other AWS services, relational databases, and custom data sources. LocalStack allows you to use the AppSync APIs in your local environment to connect your applications and services to data and events. -The supported APIs are available on our [API coverage page]({{< ref "coverage_appsync" >}}), which provides information on the extent of AppSync's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of AppSync's integration with LocalStack. 
## Getting started @@ -25,20 +24,20 @@ We will demonstrate how to create an AppSync API with a DynamoDB data source usi You can create a DynamoDB table using the [`CreateTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html) API. Execute the following command to create a table named `DynamoDBNotesTable` with a primary key named `NoteId`: -{{< command >}} -$ awslocal dynamodb create-table \ +```bash +awslocal dynamodb create-table \ --table-name DynamoDBNotesTable \ --attribute-definitions AttributeName=NoteId,AttributeType=S \ --key-schema AttributeName=NoteId,KeyType=HASH \ --billing-mode PAY_PER_REQUEST -{{< /command >}} +``` After the table is created, you can use the [`ListTables`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ListTables.html) API. Run the following command to list all tables in your running LocalStack container: -{{< command >}} -$ awslocal dynamodb list-tables -{{< /command >}} +```bash +awslocal dynamodb list-tables +``` The following output would be retrieved: @@ -55,11 +54,11 @@ The following output would be retrieved: You can create a GraphQL API using the [`CreateGraphqlApi`](https://docs.aws.amazon.com/appsync/latest/APIReference/API_CreateGraphqlApi.html) API. Execute the following command to create a GraphQL API named `NotesApi`: -{{< command >}} -$ awslocal appsync create-graphql-api \ +```bash +awslocal appsync create-graphql-api \ --name NotesApi \ --authentication-type API_KEY -{{< /command >}} +``` The following output would be retrieved: @@ -83,10 +82,10 @@ The following output would be retrieved: You can now create an API key for your GraphQL API using the [`CreateApiKey`](https://docs.aws.amazon.com/appsync/latest/APIReference/API_CreateApiKey.html) API. 
Execute the following command to create an API key for your GraphQL API: -{{< command >}} -$ awslocal appsync create-api-key \ +```bash +awslocal appsync create-api-key \ --api-id 014d18d0c2b149ee8b66f39173 -{{< /command >}} +``` The following output would be retrieved: @@ -130,11 +129,11 @@ type Schema { You can start the schema creation process using the [`StartSchemaCreation`](https://docs.aws.amazon.com/appsync/latest/APIReference/API_StartSchemaCreation.html) API. Execute the following command to start the schema creation process: -{{< command >}} -$ awslocal appsync start-schema-creation \ +```bash +awslocal appsync start-schema-creation \ --api-id 014d18d0c2b149ee8b66f39173 \ --definition file://schema.graphql -{{< /command >}} +``` The following output would be retrieved: @@ -149,13 +148,13 @@ The following output would be retrieved: You can create a data source using the [`CreateDataSource`](https://docs.aws.amazon.com/appsync/latest/APIReference/API_CreateDataSource.html) API. Execute the following command to create a data source named `DynamoDBNotesTable`: -{{< command >}} -$ awslocal appsync create-data-source \ +```bash +awslocal appsync create-data-source \ --name AppSyncDB \ --api-id 014d18d0c2b149ee8b66f39173 \ --type AMAZON_DYNAMODB \ --dynamodb-config tableName=DynamoDBNotesTable,awsRegion=us-east-1 -{{< /command >}} + ``` The following output would be retrieved: @@ -179,27 +178,27 @@ You can create a resolver using the [`CreateResolver`](https://github.com/locals You can create a custom `request-mapping-template.vtl` and `response-mapping-template.vtl` file to use as a mapping template to use for requests and responses respectively. 
Execute the following command to create a VTL resolver attached to the `PaginatedNotes.notes` field:

-{{< command >}}
-$ awslocal appsync create-resolver \
+```bash
+awslocal appsync create-resolver \
     --api-id 014d18d0c2b149ee8b66f39173 \
     --type Query \
     --field PaginatedNotes.notes \
     --data-source-name AppSyncDB \
     --request-mapping-template file://request-mapping-template.vtl \
     --response-mapping-template file://response-mapping-template.vtl
-{{< /command >}}
+```

## Custom GraphQL API IDs

You can assign a pre-defined ID when creating a GraphQL API by using the special tag `_custom_id_`.
For example, the following command will create a GraphQL API with the ID `faceb00c`:

-{{< command >}}
-$ awslocal appsync create-graphql-api \
+```bash
+awslocal appsync create-graphql-api \
     --name my-api \
     --authentication-type API_KEY \
     --tags _custom_id_=faceb00c
-{{< /command >}}
+```

The following output would be retrieved:

@@ -261,7 +260,7 @@ See the AWS documentation for [`evaluate-mapping-template`](https://awscli.amazo

### VTL resolver templates

-{{< command >}}
+```bash
 awslocal appsync evaluate-mapping-template \
     --template '$ctx.result' \
     --context '{"result":"ok"}'
+```
+
+The following output would be retrieved:
+
+```json
 {
     "evaluationResult": "ok",
     "logs": []
 }
-
-{{< / command >}}
+```

### JavaScript resolvers

-{{< command >}}
+```bash
 awslocal appsync evaluate-code \
     --runtime name=APPSYNC_JS,runtimeVersion=1.0.0 \
     --function request \
     --code 'export function request(ctx) { return ctx.result; }; export function response(ctx) {};' \
     --context '{"result": "ok"}'
-
+```
+
+The following output would be retrieved:
+
+```json
 {
     "evaluationResult": "ok",
     "logs": []
 }
-
-{{< / command >}}
+```

## Resource Browser

The LocalStack Web Application provides a Resource Browser for managing AppSync APIs, Data Sources, Schema, Query, Types, Resolvers, Functions and API keys.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **AppSync** under the **App Integration** section. -AppSync Resource Browser +![AppSync Resource Browser](/images/aws/appsync-resource-browser.png) The Resource Browser allows you to perform the following actions: From 49d8defac859d3756a2c6c7dd52a82f81ab010e5 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 00:16:28 +0530 Subject: [PATCH 09/80] revamp athena docs --- .../aws/services/{athena.md => athena.mdx} | 122 ++++++++++-------- 1 file changed, 67 insertions(+), 55 deletions(-) rename src/content/docs/aws/services/{athena.md => athena.mdx} (84%) diff --git a/src/content/docs/aws/services/athena.md b/src/content/docs/aws/services/athena.mdx similarity index 84% rename from src/content/docs/aws/services/athena.md rename to src/content/docs/aws/services/athena.mdx index 0480a8cb..011f694e 100644 --- a/src/content/docs/aws/services/athena.md +++ b/src/content/docs/aws/services/athena.mdx @@ -1,10 +1,11 @@ --- title: "Athena" -linkTitle: "Athena" description: Get started with Athena on LocalStack tags: ["Ultimate"] --- +import { Tabs, TabItem } from '@astrojs/starlight/components'; + ## Introduction Athena is an interactive query service provided by Amazon Web Services (AWS) that enables you to analyze data stored in S3 using standard SQL queries. @@ -12,7 +13,7 @@ Athena allows users to create ad-hoc queries to perform data analysis, filter, a It supports various file formats, such as JSON, Parquet, and CSV, making it compatible with a wide range of data sources. LocalStack allows you to configure the Athena APIs with a Hive metastore that can connect to the S3 API and query your data directly in your local environment. -The supported APIs are available on our [API coverage page]({{< ref "coverage_athena" >}}), which provides information on the extent of Athena's integration with LocalStack. 
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Athena's integration with LocalStack. ## Getting started @@ -21,44 +22,44 @@ This guide is designed for users new to Athena and assumes basic knowledge of th Start your LocalStack container using your preferred method. We will demonstrate how to create an Athena table and run a query against it in addition to reading the results with the AWS CLI. -{{< callout >}} +:::note To utilize the Athena API, LocalStack will download additional dependencies. This involves getting a Docker image of around 1.5GB, containing Presto, Hive, and other tools. These components are retrieved automatically when you initiate the service. To ensure a smooth initial setup, ensure you're connected to a stable internet connection while fetching these components for the first time. -{{< /callout >}} +::: ### Create an S3 bucket You can create an S3 bucket using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command. 
Run the following command to create a bucket named `athena-bucket`:

-{{< command >}}
-$ awslocal s3 mb s3://athena-bucket
-{{< / command >}}
+```bash
+awslocal s3 mb s3://athena-bucket
+```

You can create some sample data using the following commands:

-{{< command >}}
-$ echo "Name,Service" > data.csv
-$ echo "LocalStack,Athena" >> data.csv
-{{< / command >}}
+```bash
+echo "Name,Service" > data.csv
+echo "LocalStack,Athena" >> data.csv
+```

You can upload the data to your bucket using the [`cp`](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html) command:

-{{< command >}}
-$ awslocal s3 cp data.csv s3://athena-bucket/data/
-{{< / command >}}
+```bash
+awslocal s3 cp data.csv s3://athena-bucket/data/
+```

### Create an Athena table

You can create an Athena table by running a `CREATE EXTERNAL TABLE` DDL statement via the [`StartQueryExecution`](https://docs.aws.amazon.com/athena/latest/APIReference/API_StartQueryExecution.html) API.
Run the following command to create a table named `tbl01`:

-{{< command >}}
-$ awslocal athena start-query-execution \
+```bash
+awslocal athena start-query-execution \
     --query-string "create external table tbl01 (name STRING, surname STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION 's3://athena-bucket/data/';" --result-configuration "OutputLocation=s3://athena-bucket/output/"
-{{< / command >}}
+```

The following output would be retrieved:

```bash
{
    "QueryExecutionId": "593acab7"
}
```

You can retrieve information about the query execution using the [`GetQueryExecution`](https://docs.aws.amazon.com/athena/latest/APIReference/API_GetQueryExecution.html) API.
Run the following command:

-{{< command >}}
-$ awslocal athena get-query-execution --query-execution-id 593acab7
-{{< / command >}}
+```bash
+awslocal athena get-query-execution --query-execution-id 593acab7
+```

Replace `593acab7` with the `QueryExecutionId` returned by the [`StartQueryExecution`](https://docs.aws.amazon.com/athena/latest/APIReference/API_StartQueryExecution.html) API.
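When scripting against these APIs, the `GetQueryExecution` response is typically polled until `QueryExecution.Status.State` reaches a terminal state. A small sketch of that check — the response payload below is illustrative, but it follows the documented response shape:

```python
# Sketch: decide whether an Athena query has finished, given a
# GetQueryExecution-style response. The payload below is illustrative.
TERMINAL_STATES = {"SUCCEEDED", "FAILED", "CANCELLED"}

def query_state(response: dict) -> str:
    return response["QueryExecution"]["Status"]["State"]

response = {
    "QueryExecution": {
        "QueryExecutionId": "593acab7",
        "Status": {"State": "SUCCEEDED"},
    }
}

state = query_state(response)
print(state, state in TERMINAL_STATES)  # SUCCEEDED True
```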
@@ -82,27 +83,27 @@ Replace `593acab7` with the `QueryExecutionId` returned by the [`StartQueryExecu You can get the output of the query using the [`GetQueryResults`](https://docs.aws.amazon.com/athena/latest/APIReference/API_GetQueryResults.html) API. Run the following command: -{{< command >}} -$ awslocal athena get-query-results --query-execution-id 593acab7 -{{< / command >}} +```bash +awslocal athena get-query-results --query-execution-id 593acab7 +``` You can now read the data from the `tbl01` table and retrieve the data from S3 that was mentioned in your table creation statement. Run the following command: -{{< command >}} -$ awslocal athena start-query-execution \ +```bash +awslocal athena start-query-execution \ --query-string "select * from tbl01;" --result-configuration "OutputLocation=s3://athena-bucket/output/" -{{< / command >}} +``` You can retrieve the execution details similarly using the [`GetQueryExecution`](https://docs.aws.amazon.com/athena/latest/APIReference/API_GetQueryExecution.html) API using the `QueryExecutionId` returned by the previous step. You can copy the `ResultConfiguration` from the output and use it to retrieve the results of the query. Run the following command: -{{< command >}} -$ awslocal cp s3://athena-bucket/output/593acab7.csv . -$ cat 593acab7.csv -{{< / command >}} +```bash +awslocal cp s3://athena-bucket/output/593acab7.csv . +cat 593acab7.csv +``` Replace `593acab7.csv` with the path to the file that was present in the `ResultConfiguration` of the previous step. You can also use the [`GetQueryResults`](https://docs.aws.amazon.com/athena/latest/APIReference/API_GetQueryResults.html) API to retrieve the results of the query. @@ -117,34 +118,37 @@ The Delta Lake files used in this sample are available in a public S3 bucket und For your convenience, we have prepared the test files in a downloadable ZIP file [here](https://localstack-assets.s3.amazonaws.com/aws-sample-athena-delta-lake.zip). 
We start by downloading and extracting this ZIP file: -{{< command >}} -$ mkdir /tmp/delta-lake-sample; cd /tmp/delta-lake-sample -$ wget https://localstack-assets.s3.amazonaws.com/aws-sample-athena-delta-lake.zip -$ unzip aws-sample-athena-delta-lake.zip; rm aws-sample-athena-delta-lake.zip -{{< / command >}} +```bash +mkdir /tmp/delta-lake-sample; cd /tmp/delta-lake-sample +wget https://localstack-assets.s3.amazonaws.com/aws-sample-athena-delta-lake.zip +unzip aws-sample-athena-delta-lake.zip; rm aws-sample-athena-delta-lake.zip +``` We can then create an S3 bucket in LocalStack using the [`awslocal`](https://github.com/localstack/awscli-local) command line, and upload the files to the bucket: -{{< command >}} -$ awslocal s3 mb s3://test -$ awslocal s3 sync /tmp/delta-lake-sample s3://test -{{< / command >}} + +```bash +awslocal s3 mb s3://test +awslocal s3 sync /tmp/delta-lake-sample s3://test +``` Next, we create the table definitions in Athena: -{{< command >}} -$ awslocal athena start-query-execution \ + +```bash +awslocal athena start-query-execution \ --query-string "CREATE EXTERNAL TABLE test (product_id string, product_name string, \ price bigint, currency string, category string, updated_at double) \ LOCATION 's3://test/' TBLPROPERTIES ('table_type'='DELTA')" -{{< / command >}} +``` Please note that this query may take some time to finish executing. You can observe the output in the LocalStack container (ideally with `DEBUG=1` enabled) to follow the steps of the query execution. 
Finally, we can now run a `SELECT` query to extract data from the Delta Lake table we've just created: -{{< command >}} -$ queryId=$(awslocal athena start-query-execution --query-string "SELECT * from deltalake.default.test" | jq -r .QueryExecutionId) -$ awslocal athena get-query-results --query-execution-id $queryId -{{< / command >}} + +```bash +queryId=$(awslocal athena start-query-execution --query-string "SELECT * from deltalake.default.test" | jq -r .QueryExecutionId) +awslocal athena get-query-results --query-execution-id $queryId +``` The query should yield a result similar to the output below: @@ -175,9 +179,9 @@ The query should yield a result similar to the output below: ... ``` -{{< callout >}} +:::note The `SELECT` statement above currently requires us to prefix the database/table name with `deltalake.` - this will be further improved in a future iteration, for better parity with AWS. -{{< /callout >}} +::: ## Iceberg Tables @@ -210,8 +214,10 @@ s3://mybucket/prefix/temp/ You can configure the Athena service in LocalStack with various clients, such as [PyAthena](https://github.com/laughingman7743/PyAthena/), [awswrangler](https://github.com/aws/aws-sdk-pandas), among others! 
Here are small snippets to get you started: -{{< tabpane lang="python" >}} -{{< tab header="PyAthena" lang="python" >}} + + + +```python from pyathena import connect conn = connect( @@ -223,8 +229,13 @@ cursor = conn.cursor() cursor.execute("SELECT 1,2,3 AS test") print(cursor.fetchall()) -{{< /tab >}} -{{< tab header="awswrangler" lang="python" >}} +``` + + + + + +```python import awswrangler as wr import pandas as pd @@ -238,15 +249,16 @@ wr.config.s3_endpoint_url = ENDPOINT wr.catalog.create_database(DATABASE) df = wr.athena.read_sql_query("SELECT 1 AS col1, 2 AS col2, 3 AS col3", database=DATABASE) print(df) -{{< /tab >}} -{{< /tabpane >}} +``` + + ## Resource Browser The LocalStack Web Application provides a Resource Browser for Athena query execution, writing SQL queries, and visualizing query results. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Athena** under the **Analytics** section. 
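Whichever client you use, raw `GetQueryResults` payloads share the same `ResultSet.Rows[].Data[].VarCharValue` shape, with the first row holding the column headers. A sketch that flattens such a payload into dictionaries — the sample payload is illustrative and mirrors the CSV uploaded earlier in this guide:

```python
# Sketch: flatten a GetQueryResults-style ResultSet into dictionaries.
# Row 0 is the header row; each cell is a {"VarCharValue": ...} object.

def rows_to_dicts(result_set: dict) -> list:
    rows = result_set["Rows"]
    header = [col.get("VarCharValue") for col in rows[0]["Data"]]
    return [
        dict(zip(header, (col.get("VarCharValue") for col in row["Data"])))
        for row in rows[1:]
    ]

result_set = {
    "Rows": [
        {"Data": [{"VarCharValue": "Name"}, {"VarCharValue": "Service"}]},
        {"Data": [{"VarCharValue": "LocalStack"}, {"VarCharValue": "Athena"}]},
    ]
}

print(rows_to_dicts(result_set))
# [{'Name': 'LocalStack', 'Service': 'Athena'}]
```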
-Athena Resource Browser +![Athena Resource Browser](/images/aws/athena-resource-browser.png) The Resource Browser allows you to perform the following actions: From c6f64c2e907cfd5c960eeb36adeedde3fe28d470 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 00:16:55 +0530 Subject: [PATCH 10/80] rename account --- .../docs/aws/services/{account-management.md => account.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename src/content/docs/aws/services/{account-management.md => account.md} (100%) diff --git a/src/content/docs/aws/services/account-management.md b/src/content/docs/aws/services/account.md similarity index 100% rename from src/content/docs/aws/services/account-management.md rename to src/content/docs/aws/services/account.md From ef083ba759d100fed7c8f56932e80e218da1a11a Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 00:17:42 +0530 Subject: [PATCH 11/80] get backup done --- src/content/docs/aws/services/backup.md | 23 +++++++++++------------ 1 file changed, 11 insertions(+), 12 deletions(-) diff --git a/src/content/docs/aws/services/backup.md b/src/content/docs/aws/services/backup.md index e90413b0..d8e9d572 100644 --- a/src/content/docs/aws/services/backup.md +++ b/src/content/docs/aws/services/backup.md @@ -1,6 +1,5 @@ --- title: "Backup" -linkTitle: "Backup" description: Get started with Backup on LocalStack tags: ["Ultimate"] persistence: supported @@ -14,7 +13,7 @@ Backup supports a wide range of AWS resources, including Elastic Block Store (EB Backup enables you to set backup retention policies, allowing you to specify how long you want to retain your backup copies. LocalStack allows you to use the Backup APIs in your local environment to manage backup plans, create scheduled or on-demand backups of certain resource types. -The supported APIs are available on our [API coverage page]({{< ref "coverage_backup" >}}), which provides information on the extent of Backup's integration with LocalStack. 
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Backup's integration with LocalStack. ## Getting started @@ -28,10 +27,10 @@ We will demonstrate how to create a backup job and specify a set of resources to You can create a backup vault which acts as a logical container where backups are stored using the [`CreateBackupVault`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateBackupVault.html) API. Run the following command to create a backup vault named `my-vault`: -{{< command >}} -$ awslocal backup create-backup-vault \ +```bash +awslocal backup create-backup-vault \ --backup-vault-name primary -{{< / command >}} +``` The following output would be retrieved: @@ -73,10 +72,10 @@ You can specify the backup plan in a `backup-plan.json` file: You can use the [`CreateBackupPlan`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateBackupPlan.html) API to create a backup plan. Run the following command to create a backup plan: -{{< command >}} -$ awslocal backup create-backup-plan \ +```bash +awslocal backup create-backup-plan \ --backup-plan file://backup-plan.json -{{< / command >}} +``` The following output would be retrieved: @@ -111,11 +110,11 @@ You can specify the backup selection in a `backup-selection.json` file: You can use the [`CreateBackupSelection`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateBackupSelection.html) API to create a backup selection. Run the following command to create a backup selection: -{{< command >}} -$ awslocal backup create-backup-selection \ +```bash +awslocal backup create-backup-selection \ --backup-plan-id 9337aba3 \ --backup-selection file://backup-plan-resources.json -{{< / command >}} +``` Replace the `--backup-plan-id` value with the `BackupPlanId` value from the output of the previous command. 
The following output would be retrieved: @@ -133,7 +132,7 @@ The following output would be retrieved: The LocalStack Web Application provides a Resource Browser for managing backup plans and vaults. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Backup** under the **Storage** section. -Backup Resource Browser +![Backup Resource Browser](/images/aws/backup-resource-browser.png) The Resource Browser allows you to perform the following actions: From a3fb8e5e2420bdcaf6f09a4a63bf04c2e05c0c7f Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 11:34:14 +0530 Subject: [PATCH 12/80] revamp autoscaling --- .../{auto-scaling.md => autoscaling.md} | 43 +++++++++---------- 1 file changed, 21 insertions(+), 22 deletions(-) rename src/content/docs/aws/services/{auto-scaling.md => autoscaling.md} (84%) diff --git a/src/content/docs/aws/services/auto-scaling.md b/src/content/docs/aws/services/autoscaling.md similarity index 84% rename from src/content/docs/aws/services/auto-scaling.md rename to src/content/docs/aws/services/autoscaling.md index ba7d4514..d525b78e 100644 --- a/src/content/docs/aws/services/auto-scaling.md +++ b/src/content/docs/aws/services/autoscaling.md @@ -1,7 +1,6 @@ --- title: "Auto Scaling" -linkTitle: "Auto Scaling" -description: Get started with Auto Scaling" on LocalStack +description: Get started with Auto Scaling on LocalStack tags: ["Base"] --- @@ -11,7 +10,7 @@ Auto Scaling helps you maintain application availability and allows you to autom You can use Auto Scaling to ensure that you are running your desired number of instances. LocalStack allows you to use the Auto Scaling APIs locally to create and manage Auto Scaling groups locally. -The supported APIs are available on our [API coverage page]({{< ref "coverage_autoscaling" >}}), which provides information on the extent of Auto Scaling's integration with LocalStack. 
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Auto Scaling's integration with LocalStack. ## Getting started @@ -25,12 +24,12 @@ We will demonstrate how you can create a launch template, an Auto Scaling group, You can create a launch template that defines the launch configuration for the instances in the Auto Scaling group using the [`CreateLaunchTemplate`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateLaunchTemplate.html) API. Run the following command to create a launch template: -{{< command >}} -$ awslocal ec2 create-launch-template \ +```bash +awslocal ec2 create-launch-template \ --launch-template-name my-template-for-auto-scaling \ --version-description version1 \ --launch-template-data '{"ImageId":"ami-ff0fea8310f3","InstanceType":"t2.micro"}' -{{< /command >}} +``` The following output is displayed: @@ -53,30 +52,30 @@ The following output is displayed: Before creating an Auto Scaling group, you need to fetch the subnet ID. Run the following command to describe the subnets: -{{< command >}} -$ awslocal ec2 describe-subnets --output text --query Subnets[0].SubnetId -{{< /command >}} +```bash +awslocal ec2 describe-subnets --output text --query Subnets[0].SubnetId +``` Copy the subnet ID from the output and use it to create the Auto Scaling group. 
Run the following command to create an Auto Scaling group using the [`CreateAutoScalingGroup`](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_CreateAutoScalingGroup.html) API: -{{< command >}} -$ awslocal autoscaling create-auto-scaling-group \ +```bash +awslocal autoscaling create-auto-scaling-group \ --auto-scaling-group-name my-asg \ --launch-template LaunchTemplateId=lt-5ccdf1a84f178ba44 \ --min-size 1 \ --max-size 5 \ --vpc-zone-identifier 'subnet-d4d16268' -{{< /command >}} +``` ### Describe the Auto Scaling group You can describe the Auto Scaling group using the [`DescribeAutoScalingGroups`](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_DescribeAutoScalingGroups.html) API. Run the following command to describe the Auto Scaling group: -{{< command >}} -$ awslocal autoscaling describe-auto-scaling-groups -{{< /command >}} +```bash +awslocal autoscaling describe-auto-scaling-groups +``` The following output is displayed: @@ -119,23 +118,23 @@ You can attach an instance to the Auto Scaling group using the [`AttachInstances Before that, create an EC2 instance using the [`RunInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html) API. Run the following command to create an EC2 instance locally: -{{< command >}} -$ awslocal ec2 run-instances \ +```bash +awslocal ec2 run-instances \ --image-id ami-ff0fea8310f3 --count 1 -{{< /command >}} +``` Fetch the instance ID from the output and use it to attach the instance to the Auto Scaling group. Run the following command to attach the instance to the Auto Scaling group: -{{< command >}} -$ awslocal autoscaling attach-instances \ +```bash +awslocal autoscaling attach-instances \ --instance-ids i-0d678c4ecf6018dde \ --auto-scaling-group-name my-asg -{{< /command >}} +``` Replace `i-0d678c4ecf6018dde` with the instance ID that you fetched from the output. 
## Current Limitations -LocalStack does not support the `docker`/`libvirt` [VM manager for EC2]({{< ref "/user-guide/aws/ec2/#vm-managers" >}}). +LocalStack does not support the `docker`/`libvirt` [VM manager for EC2](/aws/services/ec2/#vm-managers). It only works with the `mock` VM manager. From 640d82f91ab6e9cd7775d653abe0e8559d401861 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 11:35:41 +0530 Subject: [PATCH 13/80] revamp batch --- src/content/docs/aws/services/batch.md | 51 +++++++++++++------------- 1 file changed, 25 insertions(+), 26 deletions(-) diff --git a/src/content/docs/aws/services/batch.md b/src/content/docs/aws/services/batch.md index 72a123f7..72523607 100644 --- a/src/content/docs/aws/services/batch.md +++ b/src/content/docs/aws/services/batch.md @@ -1,6 +1,5 @@ --- title: Batch -linkTitle: Batch description: Get started with Batch on LocalStack tags: ["Ultimate"] --- @@ -11,7 +10,7 @@ Batch is a cloud-based service provided by Amazon Web Services (AWS) that simpli Batch allows you to efficiently process large volumes of data and run batch jobs without the need to manage and provision underlying compute resources. LocalStack allows you to use the Batch APIs to automate and scale computational tasks in your local environment while handling batch workloads. -The supported APIs are available on our [API Coverage Page]({{< ref "coverage_batch" >}}), which provides information on the extent of Batch integration with LocalStack. +The supported APIs are available on our [API Coverage Page](), which provides information on the extent of Batch integration with LocalStack. ## Getting started @@ -30,15 +29,15 @@ We will demonstrate how you create and run a Batch job by following these steps: You can create a role using the [`CreateRole`](https://docs.aws.amazon.com/cli/latest/reference/iam/create-role.html) API. For LocalStack, the service role simply needs to exist. 
-However, when [enforcing IAM policies]({{< ref "user-guide/aws/iam#enforcing-iam-policies" >}}), it is necessary that the policy is valid. +However, when [enforcing IAM policies](/aws/services/iam/#enforcing-iam-policies), it is necessary that the policy is valid. Run the following command to create a role with an empty policy document: -{{< command >}} -$ awslocal iam create-role \ +```bash +awslocal iam create-role \ --role-name myrole \ --assume-role-policy-document "{}" -{{< / command >}} +``` You should see the following output: @@ -60,12 +59,12 @@ You should see the following output: You can use the [`CreateComputeEnvironment`](https://docs.aws.amazon.com/cli/latest/reference/batch/create-compute-environment.html) API to create a compute environment. Run the following command using the role ARN above (`arn:aws:iam::000000000000:role/myrole`), to create the compute environment: -{{< command >}} -$ awslocal batch create-compute-environment \ +```bash +awslocal batch create-compute-environment \ --compute-environment-name myenv \ --type UNMANAGED \ --service-role -{{< / command >}} +``` You should see the following output: @@ -76,19 +75,19 @@ You should see the following output: } ``` -{{< callout >}} +:::note While an unmanaged compute environment has been specified, there is no need to provision any compute resources for this setup to function. Your tasks will run independently in new Docker containers, alongside the LocalStack container. -{{< /callout >}} +::: ### Create a job queue You can fetch the ARN using the [`DescribeComputeEnvironments`](https://docs.aws.amazon.com/cli/latest/reference/batch/describe-compute-environments.html) API. 
Run the following command to fetch the ARN of the compute environment: -{{< command >}} -$ awslocal batch describe-compute-environments --compute-environments myenv -{{< / command >}} +```bash +awslocal batch describe-compute-environments --compute-environments myenv +``` You should see the following output: @@ -111,13 +110,13 @@ You should see the following output: You can use the ARN to create the job queue using [`CreateJobQueue`](https://docs.aws.amazon.com/cli/latest/reference/batch/create-job-queue.html) API. Run the following command to create the job queue: -{{< command >}} -$ awslocal batch create-job-queue \ +```bash +awslocal batch create-job-queue \ --job-queue-name myqueue \ --priority 1 \ --compute-environment-order order=0,computeEnvironment=arn:aws:batch:us-east-1:000000000000:compute-environment/myenv \ --state ENABLED -{{< / command >}} +``` You should see the following output: @@ -136,12 +135,12 @@ It's important to note that you can override this command when submitting the jo Run the following command to create the job definition using the [`RegisterJobDefinition`](https://docs.aws.amazon.com/cli/latest/reference/batch/register-job-definition.html) API: -{{< command >}} -$ awslocal batch register-job-definition \ +```bash +awslocal batch register-job-definition \ --job-definition-name myjobdefn \ --type container \ --container-properties '{"image":"busybox","vcpus":1,"memory":128,"command":["sleep","30"]}' -{{< / command >}} +``` You should see the following output: @@ -156,13 +155,13 @@ You should see the following output: If you want to pass arguments to the command as [parameters](https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html#parameters), you can use the `Ref::` declaration to set placeholders for parameter substitution. This allows the dynamic passing of values at runtime for specific job definitions. 
-{{< command >}} -$ awslocal batch register-job-definition \ +```bash +awslocal batch register-job-definition \ --job-definition-name myjobdefn \ --type container \ --parameters '{"time":"10"}' \ --container-properties '{"image":"busybox","vcpus":1,"memory":128,"command":["sleep","Ref::time"]}' -{{< / command >}} +``` ### Submit a job to the job queue @@ -172,13 +171,13 @@ This command simulates work being done in the container. Run the following command to submit a job to the job queue using the [`SubmitJob`](https://docs.aws.amazon.com/cli/latest/reference/batch/submit-job.html) API: -{{< command >}} -$ awslocal batch submit-job \ +```bash +awslocal batch submit-job \ --job-name myjob \ --job-queue myqueue \ --job-definition myjobdefn \ --container-overrides '{"command":["sh", "-c", "sleep 5; pwd"]}' -{{< / command >}} +``` You should see the following output: From a5eba1f696d883dd68bf0f67b716c4c30f27ad04 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 11:38:08 +0530 Subject: [PATCH 14/80] revamp bedrock --- src/content/docs/aws/services/bedrock.md | 80 ++++++++++++------------ 1 file changed, 40 insertions(+), 40 deletions(-) diff --git a/src/content/docs/aws/services/bedrock.md b/src/content/docs/aws/services/bedrock.md index fa1d2892..a14939df 100644 --- a/src/content/docs/aws/services/bedrock.md +++ b/src/content/docs/aws/services/bedrock.md @@ -1,15 +1,15 @@ --- title: "Bedrock" -linkTitle: "Bedrock" -description: Use foundation models running on your device with LocalStack! +description: Get started with Bedrock on LocalStack tags: ["Ultimate"] --- ## Introduction Bedrock is a fully managed service provided by Amazon Web Services (AWS) that makes foundation models from various LLM providers accessible via an API. + LocalStack allows you to use the Bedrock APIs to test and develop AI-powered applications in your local environment. 
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_bedrock" >}}), which provides information on the extent of Bedrock's integration with LocalStack. +The supported APIs are available on our [API Coverage Page](), which provides information on the extent of Bedrock's integration with LocalStack. ## Getting started @@ -37,16 +37,17 @@ This way you avoid long wait times when switching between models on demand with You can view all available foundation models using the [`ListFoundationModels`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html) API. This will show you which models are available on AWS Bedrock. -{{< callout "note">}} + +:::note The actual model that will be used for emulation will differ from the ones defined in this list. You can define the used model with `DEFAULT_BEDROCK_MODEL` -{{< / callout >}} +::: Run the following command: -{{< command >}} -$ awslocal bedrock list-foundation-models -{{< / command >}} +```bash +awslocal bedrock list-foundation-models +``` ### Invoke a model @@ -56,15 +57,15 @@ However, the actual model will be defined by the `DEFAULT_BEDROCK_MODEL` environ Run the following command: -{{< command >}} -$ awslocal bedrock-runtime invoke-model \ +```bash +awslocal bedrock-runtime invoke-model \ --model-id "meta.llama3-8b-instruct-v1:0" \ --body '{ "prompt": "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\nSay Hello!\n<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>", "max_gen_len": 2, "temperature": 0.9 }' --cli-binary-format raw-in-base64-out outfile.txt -{{< / command >}} +``` The output will be available in the `outfile.txt`. @@ -75,8 +76,8 @@ You can specify both system prompts and user messages. 
Run the following command: -{{< command >}} -$ awslocal bedrock-runtime converse \ +```bash +awslocal bedrock-runtime converse \ --model-id "meta.llama3-8b-instruct-v1:0" \ --messages '[{ "role": "user", @@ -87,47 +88,46 @@ $ awslocal bedrock-runtime converse \ --system '[{ "text": "You'\''re a chatbot that can only say '\''Hello!'\''" }]' -{{< / command >}} +``` ### Model Invocation Batch Processing Bedrock offers the feature to handle large batches of model invocation requests defined in S3 buckets using the [`CreateModelInvocationJob`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelInvocationJob.html) API. -First, you need to create a `JSONL` file that contains all your prompts: +First, you need to create a `JSONL` file named `batch_input.jsonl` that contains all your prompts: -{{< command >}} -$ cat batch_input.jsonl +```json {"prompt": "Tell me a quick fact about Vienna.", "max_tokens": 50, "temperature": 0.5} {"prompt": "Tell me a quick fact about Zurich.", "max_tokens": 50, "temperature": 0.5} {"prompt": "Tell me a quick fact about Las Vegas.", "max_tokens": 50, "temperature": 0.5} -{{< / command >}} +``` Then, you need to define buckets for the input as well as the output and upload the file in the input bucket: -{{< command >}} -$ awslocal s3 mb s3://in-bucket -make_bucket: in-bucket - -$ awslocal s3 cp batch_input.jsonl s3://in-bucket -upload: ./batch_input.jsonl to s3://in-bucket/batch_input.jsonl - -$ awslocal s3 mb s3://out-bucket -make_bucket: out-bucket -{{< / command >}} +```bash +awslocal s3 mb s3://in-bucket +awslocal s3 cp batch_input.jsonl s3://in-bucket +awslocal s3 mb s3://out-bucket +``` Afterwards you can run the invocation job like this: -{{< command >}} -$ awslocal bedrock create-model-invocation-job \ +```bash +awslocal bedrock create-model-invocation-job \ --job-name "my-batch-job" \ --model-id "mistral.mistral-small-2402-v1:0" \ --role-arn "arn:aws:iam::123456789012:role/MyBatchInferenceRole" \ 
--input-data-config '{"s3InputDataConfig": {"s3Uri": "s3://in-bucket"}}' \ --output-data-config '{"s3OutputDataConfig": {"s3Uri": "s3://out-bucket"}}' +``` + +The output will be: + +```json { "jobArn": "arn:aws:bedrock:us-east-1:000000000000:model-invocation-job/12345678" } -{{< / command >}} +``` The results will be at the S3 URL `s3://out-bucket/12345678/batch_input.jsonl.out` @@ -140,15 +140,15 @@ LocalStack will pull the model from Ollama and use it for emulation. For example, to use the Mistral model, set the environment variable while starting LocalStack: -{{< command >}} -$ DEFAULT_BEDROCK_MODEL=mistral localstack start -{{< / command >}} +```bash +DEFAULT_BEDROCK_MODEL=mistral localstack start +``` You can also define models directly in the request, by setting the `model-id` parameter to `ollama.`. For example, if you want to access `deepseek-r1`, you can do it like this: -{{< command >}} -$ awslocal bedrock-runtime converse \ +```bash +awslocal bedrock-runtime converse \ --model-id "ollama.deepseek-r1" \ --messages '[{ "role": "user", @@ -156,7 +156,7 @@ $ awslocal bedrock-runtime converse \ "text": "Say Hello!" }] }]' -{{< / command >}} +``` ## Troubleshooting @@ -164,9 +164,9 @@ Users of Docker Desktop on macOS or Windows might run into the issue of Bedrock A common reason for that is insufficient storage or memory space in the Docker Desktop VM. To resolve this issue you can increase those amounts directly in Docker Desktop or clean up unused artifacts with the Docker CLI like this -{{< command >}} -$ docker system prune -{{< / command >}} +```bash +docker system prune +``` You could also try to use a model with lower requirements. To achieve that you can search for models in the [Ollama Models library](https://ollama.com/search) with a low parameter count or smaller size. 
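If you script the model-invocation batch workflow shown earlier, a single malformed line in the input file can fail the whole job. A small pre-upload check can catch this early; the sketch below is not part of any Bedrock SDK and simply assumes the key names (`prompt`, `max_tokens`, `temperature`) used in the example file:

```python
import json

def validate_batch_input(path):
    """Return a list of problems found in a JSONL batch-input file."""
    errors = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # ignore blank lines
            try:
                record = json.loads(line)
            except json.JSONDecodeError as exc:
                errors.append(f"line {lineno}: invalid JSON ({exc.msg})")
                continue
            if "prompt" not in record:
                errors.append(f"line {lineno}: missing 'prompt' key")
    return errors
```

Running this against `batch_input.jsonl` before the `awslocal s3 cp` upload surfaces broken lines with their line numbers instead of a failed invocation job.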
From 39fdb51413e13f81fdcfab73af4250d2084c84ae Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 11:43:21 +0530 Subject: [PATCH 15/80] revamp cfn docs --- .../{cloudformation.md => cloudformation.mdx} | 62 ++++++++++--------- 1 file changed, 33 insertions(+), 29 deletions(-) rename src/content/docs/aws/services/{cloudformation.md => cloudformation.mdx} (96%) diff --git a/src/content/docs/aws/services/cloudformation.md b/src/content/docs/aws/services/cloudformation.mdx similarity index 96% rename from src/content/docs/aws/services/cloudformation.md rename to src/content/docs/aws/services/cloudformation.mdx index 96a8cad1..3c7e6751 100644 --- a/src/content/docs/aws/services/cloudformation.md +++ b/src/content/docs/aws/services/cloudformation.mdx @@ -1,11 +1,12 @@ --- title: "CloudFormation" -linkTitle: "CloudFormation" description: Get started with Cloudformation on LocalStack persistence: supported with limitations tags: ["Free"] --- +import { Tabs, TabItem } from '@astrojs/starlight/components'; + ## Introduction CloudFormation is a service provided by Amazon Web Services (AWS) that allows you to define and provision infrastructure as code. @@ -14,7 +15,7 @@ With CloudFormation, you can use JSON or YAML templates to define your desired i You can specify resources, their configurations, dependencies, and relationships in these templates. LocalStack supports CloudFormation, allowing you to use the CloudFormation APIs in your local environment to declaratively define your architecture on the AWS, including resources such as S3 Buckets, Lambda Functions, and much more. -The [API coverage page]({{< ref "coverage_cloudformation" >}}) and [feature coverage](#feature-coverage) provides information on the extent of CloudFormation's integration with LocalStack. +The [API coverage page]() and [feature coverage](#feature-coverage) provides information on the extent of CloudFormation's integration with LocalStack. 
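Because CloudFormation templates are plain JSON or YAML documents, you can also generate them programmatically instead of writing them by hand. The following sketch uses only Python's standard library; the logical ID and bucket name are illustrative:

```python
import json

def s3_bucket_template(logical_id, bucket_name):
    """Build a minimal CloudFormation template containing a single S3 bucket."""
    return {
        "Resources": {
            logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        }
    }

# Serialize the template so it can be written to a file and deployed
template_json = json.dumps(
    s3_bucket_template("LocalBucket", "cfn-quickstart-bucket"), indent=2
)
print(template_json)
```

The printed document can be saved to a file and passed to `awslocal cloudformation deploy --template-file` like any hand-written template.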
## Getting started @@ -29,15 +30,18 @@ CloudFormation stack is a collection of AWS resources that you can create, updat Stacks are defined using JSON or YAML templates. Use the following code snippet and save the content in either `cfn-quickstart-stack.yaml` or `cfn-quickstart-stack.json`, depending on your preferred format. -{{< tabpane >}} -{{< tab header="YAML" lang="yaml" >}} + + +```yaml Resources: LocalBucket: Type: AWS::S3::Bucket Properties: BucketName: cfn-quickstart-bucket -{{< /tab >}} -{{< tab header="JSON" lang="json" >}} +``` + + +```json { "Resources": { "LocalBucket": { @@ -48,8 +52,9 @@ Resources: } } } -{{< /tab >}} -{{< /tabpane >}} +``` + + ### Deploy the CloudFormation Stack @@ -57,36 +62,35 @@ You can deploy the CloudFormation stack using the AWS CLI with the [`deploy`](ht The `deploy` command creates and updates CloudFormation stacks. Run the following command to deploy the stack: -{{< command >}} -$ awslocal cloudformation deploy \ +```bash +awslocal cloudformation deploy \ --stack-name cfn-quickstart-stack \ --template-file "./cfn-quickstart-stack.yaml" -{{< / command >}} +``` You can verify that the stack was created successfully by listing the S3 buckets in your LocalStack container using the [`ListBucket` API](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-buckets.html). Run the following command to list the buckets: -{{< command >}} -$ awslocal s3api list-buckets -{{< / command >}} +```bash +awslocal s3api list-buckets +``` ### Delete the CloudFormation Stack You can delete the CloudFormation stack using the [`delete-stack`](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/delete-stack.html) command. 
Run the following command to delete the stack along with all the resources created by the stack: -{{< command >}} -$ awslocal cloudformation delete-stack \ +```bash +awslocal cloudformation delete-stack \ --stack-name cfn-quickstart-stack -{{< / command >}} +``` ## Local User-Interface You can also utilize LocalStack's local CloudFormation user-interface to deploy and manage your CloudFormation stacks using public templates. You can access the user-interface at [`localhost:4566/_localstack/cloudformation/deploy`](http://localhost:4566/_localstack/cloudformation/deploy). -Local CloudFormation UI in LocalStack -

+![Local CloudFormation UI in LocalStack](/images/aws/localstack-cloudformation-local-user-interface.png) You can utilize the CloudFormation user interface to provide an existing CloudFormation template URL, input the necessary parameters, and initiate the deployment directly from your browser. Let's proceed with an example template to deploy a CloudFormation stack. @@ -101,7 +105,7 @@ Upon submission, the stack deployment will be triggered, and a result message wi The LocalStack Web Application provides a Resource Browser for managing CloudFormation stacks to manage your AWS resources locally. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **CloudFormation** under the **Management/Governance** section. -CloudFormation Resource Browser +![CloudFormation Resource Browser](/images/aws/cloudformation-resource-browser.png) The Resource Browser allows you to perform the following actions: @@ -115,16 +119,16 @@ The Resource Browser allows you to perform the following actions: The following code snippets and sample applications provide practical examples of how to use CloudFormation in LocalStack for various use cases: - [Serverless Container-based APIs with Amazon ECS & API Gateway](https://github.com/localstack/serverless-api-ecs-apigateway-sample) -- [Deploying containers on ECS clusters using ECR and Fargate]({{< ref "/tutorials/ecs-ecr-container-app" >}}) +- [Deploying containers on ECS clusters using ECR and Fargate]() - [Messaging Processing application with SQS, DynamoDB, and Fargate](https://github.com/localstack/sqs-fargate-ddb-cdk-go) ## Feature coverage -{{< callout "tip" >}} +:::note We are continually enhancing our CloudFormation feature coverage by consistently introducing new resource types. Your feature requests assist us in determining the priority of resource additions. 
Feel free to contribute by [creating a new GitHub issue](https://github.com/localstack/localstack/issues/new?assignees=&labels=feature-request&template=feature-request.yml&title=feature+request%3A+%3Ctitle%3E). -{{< /callout >}} +::: ### Features @@ -145,17 +149,17 @@ Feel free to contribute by [creating a new GitHub issue](https://github.com/loca | StackSets | Partial | | Intrinsic Functions | Partial | -{{< callout >}} +:::note Currently, support for `UPDATE` operations on resources is limited. Prefer stack re-creation over stack update at this time. -{{< /callout >}} +::: -{{< callout >}} +:::note Currently, support for `NoEcho` parameters is limited. Parameters will be masked only in the `Parameters` section of responses to `DescribeStacks` and `DescribeChangeSets` requests. This might expose sensitive information. Please exercise caution when using parameters with `NoEcho`. -{{< /callout >}} +::: ### Intrinsic Functions @@ -179,9 +183,9 @@ Please exercise caution when using parameters with `NoEcho`. ### Resources -{{< callout >}} +:::note When utilizing the Community image, any resources within the stack that are not supported will be disregarded and won't be deployed. 
-{{< /callout >}} +::: #### Community image From 585ed68fe91ee94235a3303662a7f150130f4ead Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 11:48:34 +0530 Subject: [PATCH 16/80] revamp cloudfront --- src/content/docs/aws/services/cloudfront.md | 47 ++++++++++----------- 1 file changed, 23 insertions(+), 24 deletions(-) diff --git a/src/content/docs/aws/services/cloudfront.md b/src/content/docs/aws/services/cloudfront.md index 74adac0f..2ac02e1e 100644 --- a/src/content/docs/aws/services/cloudfront.md +++ b/src/content/docs/aws/services/cloudfront.md @@ -1,6 +1,5 @@ --- title: "CloudFront" -linkTitle: "CloudFront" description: Get started with CloudFront on LocalStack tags: ["Base"] persistence: supported @@ -13,7 +12,7 @@ CloudFront distributes its web content, videos, applications, and APIs with low CloudFront APIs allow you to configure distributions, customize cache behavior, secure content with access controls, and monitor the CDN's performance through real-time metrics. LocalStack allows you to use the CloudFront APIs in your local environment to create local CloudFront distributions to transparently access your applications and file artifacts. -The supported APIs are available on our [API coverage page]({{< ref "coverage_cloudfront" >}}), which provides information on the extent of CloudFront's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of CloudFront's integration with LocalStack. 
## Getting started @@ -25,46 +24,48 @@ We will demonstrate how you can create an S3 bucket, put a text file named `hell To get started, create an S3 bucket using the `mb` command: -{{< command >}} -$ awslocal s3 mb s3://abc123 -{{< / command >}} +```bash +awslocal s3 mb s3://abc123 +``` You can now go ahead, create a new text file named `hello.txt` and upload it to the bucket: -{{< command >}} -$ echo 'Hello World' > /tmp/hello.txt -$ awslocal s3 cp /tmp/hello.txt s3://abc123/hello.txt --acl public-read -{{< / command >}} +```bash +echo 'Hello World' > /tmp/hello.txt +awslocal s3 cp /tmp/hello.txt s3://abc123/hello.txt --acl public-read +``` After uploading the file to S3, you can create a CloudFront distribution using the [`CreateDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateDistribution.html) API call. Run the following command to create a distribution with the default settings: -{{< command >}} -$ domain=$(awslocal cloudfront create-distribution \ +```bash +domain=$(awslocal cloudfront create-distribution \ --origin-domain-name abc123.s3.amazonaws.com | jq -r '.Distribution.DomainName') -$ curl -k https://$domain/hello.txt -{{< / command >}} +curl -k https://$domain/hello.txt +``` -{{< callout "tip" >}} +:::note If you wish to use CloudFront on system host, ensure your local DNS setup is correctly configured. -Refer to the section on [System DNS configuration]({{< ref "dns-server#system-dns-configuration" >}}) for details. -{{< /callout >}} +Refer to the section on [System DNS configuration](/aws/tooling/dns-server#system-dns-configuration) for details. +::: In the example provided above, be aware that the final command (`curl https://$domain/hello.txt`) might encounter a temporary failure accompanied by a warning message `Could not resolve host`. 
+ This can occur because different operating systems adopt diverse DNS caching strategies, causing a delay in the availability of the CloudFront distribution's DNS name (e.g., `abc123.cloudfront.net`) within the system. Typically, after a few retries, the command should succeed. + It's worth noting that similar behavior can be observed in the actual AWS environment, where CloudFront DNS names may take up to 10-15 minutes to propagate across the network. ## Lambda@Edge -{{< callout "note">}} +:::note We’re introducing an early, incomplete, and experimental feature that emulates AWS CloudFront Lambda@Edge, starting with version 4.3.0. It enables running Lambda functions at simulated edge locations. This allows you to locally test and develop request/response modifications, security enhancements and more. This feature is still under development, and functionality is limited. -{{< /callout >}} +::: You can enable this feature by setting `CLOUDFRONT_LAMBDA_EDGE=1` in your LocalStack configuration. @@ -77,7 +78,7 @@ You can enable this feature by setting `CLOUDFRONT_LAMBDA_EDGE=1` in your LocalS ### Current limitations -- The [`UpdateDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateDistribution.html), [`DeleteDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_DeleteDistribution.html), and [`Persistence Restore`]({{< ref "persistence" >}}) features are not yet supported for Lambda@Edge. +- The [`UpdateDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateDistribution.html), [`DeleteDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_DeleteDistribution.html), and [Persistence Restore](/aws/capabilities/state-management/persistence) features are not yet supported for Lambda@Edge. - The `origin-request` and `origin-response` event types currently trigger for each request because caching is not implemented in CloudFront. 
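A Lambda@Edge function is an ordinary Lambda handler that receives the standard CloudFront event record. As a minimal sketch of a `viewer-request` function — the header name is illustrative and not required by LocalStack — a Python handler that injects a header looks like this:

```python
def handler(event, context):
    """Minimal viewer-request handler: inject a custom header and pass the request on."""
    # CloudFront delivers the request under Records[0].cf.request
    request = event["Records"][0]["cf"]["request"]
    # Header map keys are lower-cased; each value is a list of {key, value} dicts
    request["headers"]["x-edge-processed"] = [
        {"key": "X-Edge-Processed", "value": "true"}
    ]
    # Returning the request object tells CloudFront to continue processing it
    return request
```

Returning the (possibly modified) request object lets CloudFront continue processing; returning a response object instead would short-circuit the request at the edge.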
## Using custom URLs

@@ -92,18 +93,16 @@ The format of this structure is similar to the one used in [AWS CloudFront optio
 
 In the given example, two domains are specified as `Aliases` for a distribution.
 Please note that a complete configuration would entail additional values relevant to the distribution, which have been omitted here for brevity.
 
-{{< command >}}
+```bash
 --distribution-config {..."Aliases": {"Quantity": 2, "Items": ["custom.domain.one", "customDomain.two"]}...}
-{{< / command >}}
+```
 
 ## Resource Browser
 
 The LocalStack Web Application provides a Resource Browser for CloudFront, which allows you to view and manage your CloudFront distributions.
 You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **CloudFront** under the **Analytics** section.
 
-CloudFront Resource Browser
-
-
+![CloudFront Resource Browser](/images/aws/cloudfront-resource-browser.png) The Resource Browser allows you to perform the following actions: From 676d0b3050409a80e6a0f408aa6125d84c04d6e0 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 11:49:32 +0530 Subject: [PATCH 17/80] revamp cloudtrail --- src/content/docs/aws/services/cloudtrail.md | 53 ++++++++++----------- 1 file changed, 25 insertions(+), 28 deletions(-) diff --git a/src/content/docs/aws/services/cloudtrail.md b/src/content/docs/aws/services/cloudtrail.md index 820ae216..b12c6328 100644 --- a/src/content/docs/aws/services/cloudtrail.md +++ b/src/content/docs/aws/services/cloudtrail.md @@ -1,6 +1,5 @@ --- title: "CloudTrail" -linkTitle: "CloudTrail" description: Get started with CloudTrail on LocalStack tags: ["Ultimate"] persistence: supported @@ -12,7 +11,7 @@ CloudTrail is a service provided by Amazon Web Services (AWS) that enables you t It records API calls and actions made on your AWS resources, offering an audit trail that helps you understand changes, diagnose issues, and maintain compliance. LocalStack allows you to use the CloudTrail APIs in your local environment to create and manage Event history and trails. -The supported APIs are available on our [API coverage page]({{< ref "coverage_cloudtrail" >}}), which provides information on the extent of CloudTrail's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of CloudTrail's integration with LocalStack. ## Getting started @@ -26,9 +25,9 @@ We will demonstrate how to enable S3 object logging to CloudTrail using AWS CLI. Before you create a trail, you need to create an S3 bucket where CloudTrail can deliver the log data. 
You can use the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command to create a bucket: -{{< command >}} -$ awslocal s3 mb s3://my-bucket -{{< /command >}} +```bash +awslocal s3 mb s3://my-bucket +``` ### Create a trail @@ -36,11 +35,11 @@ You can create a trail which would allow the delivery of events to the S3 bucket You can use the [`CreateTrail`](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_CreateTrail.html) API to create a trail. Run the following command to create a trail: -{{< command >}} -$ awslocal cloudtrail create-trail \ +```bash +awslocal cloudtrail create-trail \ --name MyTrail \ --s3-bucket-name my-bucket -{{< /command >}} +``` ### Enable logging and configure event selectors @@ -48,28 +47,28 @@ You can now enable logging for your trail. You can use the [`StartLogging`](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_StartLogging.html) API to enable logging for your trail. Run the following command to enable logging: -{{< command >}} -$ awslocal cloudtrail start-logging --name MyTrail -{{< /command >}} +```bash +awslocal cloudtrail start-logging --name MyTrail +``` You can further configure event selectors for the trail. In this example, we will configure the trail to log all S3 object level events. You can use the [`PutEventSelectors`](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_PutEventSelectors.html) API to configure event selectors for your trail. 
Run the following command to configure event selectors: -{{< command >}} -$ awslocal cloudtrail put-event-selectors \ +```bash +awslocal cloudtrail put-event-selectors \ --trail-name MyTrail \ --event-selectors '[{"ReadWriteType": "All", "IncludeManagementEvents":true, "DataResources": [{"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::my-bucket/"]}]}]' -{{< /command >}} +``` You can verify if your configuration is correct by using the [`GetEventSelectors`](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_GetEventSelectors.html) API. Run the following command to verify your configuration: -{{< command >}} -$ awslocal cloudtrail get-event-selectors \ +```bash +awslocal cloudtrail get-event-selectors \ --trail-name MyTrail -{{< /command >}} +``` The following output would be retrieved: @@ -98,21 +97,21 @@ The following output would be retrieved: You can now test the configuration by creating an object in the S3 bucket. You can use the [`cp`](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html) command to copy an object in the S3 bucket: -{{< command >}} -$ echo "hello world" > /tmp/hello-world -$ awslocal s3 cp /tmp/hello-world s3://my-bucket/hello-world -$ awslocal s3 ls s3://my-bucket -{{< /command >}} +```bash +echo "hello world" > /tmp/hello-world +awslocal s3 cp /tmp/hello-world s3://my-bucket/hello-world +awslocal s3 ls s3://my-bucket +``` You can verify that the object was created in the S3 bucket. You can also verify that the object level event was logged by CloudTrail using the [`LookupEvents`](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_LookupEvents.html) API. 
Run the following command to verify the event: -{{< command >}} -$ awslocal cloudtrail lookup-events \ +```bash +awslocal cloudtrail lookup-events \ --lookup-attributes AttributeKey=EventName,AttributeValue=PutObject \ --max-results 1 -{{< /command >}} +``` The following output would be retrieved: @@ -133,9 +132,7 @@ The following output would be retrieved: The LocalStack Web Application provides a Resource Browser for managing CloudTrail's Event History & Trails. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **CloudTrail** under the **Management/Governance** section. -CloudTrail Resource Browser -
-
+![CloudTrail Resource Browser](/images/aws/cloudtrail-resource-browser.png) The Resource Browser allows you to perform the following actions: From 23313c004b743b08265d5e9d66276ee77671ec41 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 11:54:43 +0530 Subject: [PATCH 18/80] revamp cloudwatch --- src/content/docs/aws/services/cloudwatch.md | 92 ++++++++++----------- 1 file changed, 44 insertions(+), 48 deletions(-) diff --git a/src/content/docs/aws/services/cloudwatch.md b/src/content/docs/aws/services/cloudwatch.md index 32e14d3b..a524e8d9 100644 --- a/src/content/docs/aws/services/cloudwatch.md +++ b/src/content/docs/aws/services/cloudwatch.md @@ -1,32 +1,18 @@ --- title: "CloudWatch" -linkTitle: "CloudWatch" description: Get started with AWS CloudWatch on LocalStack persistence: supported tags: ["Free"] --- +## Introduction + CloudWatch is a comprehensive monitoring and observability service that Amazon Web Services (AWS) provides. It allows you to collect and track metrics, collect and monitor log files, and set alarms. CloudWatch provides valuable insights into your AWS resources, applications, and services, enabling you to troubleshoot issues, optimize performance, and make informed decisions. LocalStack allows you to use CloudWatch APIs on your local machine to create and manage CloudWatch resources, such as custom metrics, alarms, and log groups, for local development and testing purposes. -The supported APIs are available on our [API coverage page]({{< ref "coverage_cloudwatch" >}}), which provides information on the extent of CloudWatch's integration with LocalStack. - -{{< callout >}} -We have introduced an all-new LocalStack-native [CloudWatch provider]({{< ref "/user-guide/aws/cloudwatch" >}}) and recently made this one the default. - -With the new provider we have migrated from storing data in Python objects within the Moto backend to a more robust system. 
- -Now, metrics are efficiently stored in SQLite, and alarm resources are managed using LocalStack stores. - -- Various enhancements have been made to attain greater feature parity with AWS. -- The provider is engineered to ensure thread safety, facilitating smooth concurrent operations. -- There’s a significant improvement in the integrity and durability of data. -- The new provider allows for more efficient data retrieval. - -Currently, it is still possible to switch back to the old provider using `PROVIDER_OVERRIDE_CLOUDWATCH=v1` in your LocalStack configuration. -{{< /callout >}} +The supported APIs are available on our [API coverage page](), which provides information on the extent of CloudWatch's integration with LocalStack. ## Getting started @@ -38,13 +24,13 @@ You can get the name for your Lambda Functions using the [`ListFunctions`](https Fetch the Log Groups using the [`DescribeLogGroups`](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DescribeLogGroups.html) API. Run the following command to get the Log Group name: -{{< command >}} -$ awslocal logs describe-log-groups -{{< / command >}} +```bash +awslocal logs describe-log-groups +``` The output should look similar to the following: -```sh +```bash { "logGroups": [ { @@ -68,14 +54,14 @@ The output should look similar to the following: Get the log streams for the Log Group using the [`DescribeLogStreams`](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DescribeLogStreams.html) API. 
Run the following command to get the Log Stream name: -{{< command >}} -$ awslocal logs describe-log-streams \ +```bash +awslocal logs describe-log-streams \ --log-group-name /aws/lambda/serverless-local-hello -{{< / command >}} +``` The output should look similar to the following: -```sh +```bash { "logStreams": [ { @@ -95,14 +81,14 @@ The output should look similar to the following: You can now fetch the log events using the [`GetLogEvents`](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetLogEvents.html) API. Run the following command to get the logs: -{{< command >}} -$ awslocal logs get-log-events \ +```bash +awslocal logs get-log-events \ --log-group-name '/aws/lambda/serverless-local-hello' --log-stream-name '2023/05/02/[$LATEST]853a59d0767cfaf10d6b29a6790d8b03' -{{< / command >}} +``` The output should look similar to the following: -```sh +```bash { "events": [ { @@ -126,9 +112,9 @@ The output should look similar to the following: } ``` -{{< callout "tip" >}} +:::note You can use [filters](https://docs.aws.amazon.com/cli/latest/reference/logs/filter-log-events.html) or [queries](https://docs.aws.amazon.com/cli/latest/reference/logs/get-query-results.html) with a licensed LocalStack edition to refine your results. 
-{{< /callout >}} +::: ## Metric Alarms @@ -145,8 +131,8 @@ With metric alarms, you can create customized thresholds and define actions base To get started with creating an alarm in LocalStack using the `awslocal` integration, use the following command: -{{< command >}} -$ awslocal cloudwatch put-metric-alarm \ +```bash +awslocal cloudwatch put-metric-alarm \ --alarm-name my-alarm \ --metric-name Orders \ --namespace test \ @@ -156,21 +142,21 @@ $ awslocal cloudwatch put-metric-alarm \ --period 30 \ --statistic Minimum \ --treat-missing notBreaching -{{< / command >}} +``` To monitor the status of the alarm, open a separate terminal and execute the following command: -{{< command >}} -$ watch "awslocal cloudwatch describe-alarms --alarm-names my-alarm | jq '.MetricAlarms[0].StateValue'" -{{< / command >}} +```bash +watch "awslocal cloudwatch describe-alarms --alarm-names my-alarm | jq '.MetricAlarms[0].StateValue'" +``` Afterward, you can add some data that will cause a breach and set the `metric-alarm` state to **ALARM** using the following command: -{{< command >}} -$ awslocal cloudwatch put-metric-data \ +```bash +awslocal cloudwatch put-metric-data \ --namespace test \ --metric-data '[{"MetricName": "Orders", "Value": -1}]' -{{< / command >}} +``` Within a few seconds, the alarm state should change to **ALARM**, and eventually, it will go back to **OK** as we configured it to treat missing data points as `not breaching`. This allows you to observe how the alarm behaves in response to the provided data. @@ -184,8 +170,8 @@ Currently, only SNS Topics are supported as the target for these actions, and it Here's an example demonstrating how to set up an alarm that sends a message to the specified topic when entering the **ALARM** state. Make sure to replace `` with the valid ARN of an existing SNS topic. 
-{{< command >}}
-$ awslocal cloudwatch put-metric-alarm \
+```bash
+awslocal cloudwatch put-metric-alarm \
   --alarm-name my-alarm \
   --metric-name Orders \
   --namespace test \
   --threshold 50 \
   --comparison-operator GreaterThanThreshold \
   --evaluation-periods 1 \
   --period 300 \
   --statistic Maximum \
   --treat-missing notBreaching \
   --alarm-actions 
-{{< / command >}}
+```
 
 By executing this command, you'll create an alarm named `my-alarm` that monitors the `Orders` metric in the `test` namespace.
 If the metric value exceeds the threshold of 50 (using the `GreaterThanThreshold` operator) during a single evaluation period of 300 seconds, the alarm will trigger the specified action on the provided SNS topic.
 
-{{< callout "warning" >}}
-Please be aware of the following known limitations in LocalStack:
-
-- Anomaly detection and extended statistics are not supported.
-- The `unit` values specified in the alarm are ignored.
-- Composite alarms are not evaluated.
-- Metric streams are not supported.
-{{< /callout >}}
+## Current Limitations
+
+The following CloudWatch Metrics features are not supported:
+
+- Anomaly detection
+- Metric streams
+- Extended statistics
+
+In addition, the `unit` values specified in the alarm are ignored, and Composite alarms are not evaluated.
 
-## Supported Service Integrations with CloudWatch Metrics
+## Supported Service Integrations
 
 LocalStack supports the following AWS services for integration with CloudWatch metrics:
 
@@ -223,9 +219,9 @@ You can access the Resource Browser by opening the LocalStack Web Application in
 
 The Resource Browser allows you to perform the following actions:
 
-CloudWatch Logs Resource Browser
+![CloudWatch Logs Resource Browser](/images/aws/cloudwatch-log-groups-resource-browser.png)
 
-CloudWatch Metrics Resource Browser
+![CloudWatch Metrics Resource Browser](/images/aws/cloudwatch-metrics-resource-browser.png)
 
 - **Create Log Group**: Create a new log group by specifying the `Log Group Name`, `KMS Key ID`, and `Tags`.
- **Put metric**: Create a new metric by specifying the `Namespace` and `Metric Data`.

From 7a75d4c1d602574154251a8700c417d3f5b1fbd8 Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 11:58:22 +0530
Subject: [PATCH 19/80] revamp cloudwatch logs

---
 .../docs/aws/services/cloudwatchlogs.md       | 99 ++++++++++---------
 1 file changed, 54 insertions(+), 45 deletions(-)

diff --git a/src/content/docs/aws/services/cloudwatchlogs.md b/src/content/docs/aws/services/cloudwatchlogs.md
index bc3c23c2..687058ff 100644
--- a/src/content/docs/aws/services/cloudwatchlogs.md
+++ b/src/content/docs/aws/services/cloudwatchlogs.md
@@ -1,11 +1,12 @@
 ---
 title: "CloudWatch Logs"
-linkTitle: "CloudWatch Logs"
 description: Get started with AWS CloudWatch Logs on LocalStack
 tags: ["Free"]
 persistence: supported
 ---
 
+## Introduction
+
 [CloudWatch Logs](https://docs.aws.amazon.com/cloudwatch/index.html) allows you to store and retrieve logs.
 While some services automatically create and write logs (e.g. Lambda), logs can also be added manually.
 CloudWatch Logs is available in the Community version.
@@ -23,37 +24,40 @@ In the following we setup a little example on how to use subscription filters wi
 
 First, we set up the required resources: a Kinesis stream, a log group, and a log stream.
 Then we can configure the subscription filter.
-{{< command >}}
-$ awslocal kinesis create-stream --stream-name "logtest" --shard-count 1
-$ kinesis_arn=$(awslocal kinesis describe-stream --stream-name "logtest" | jq -r .StreamDescription.StreamARN)
-$ awslocal logs create-log-group --log-group-name test
+```bash
+awslocal kinesis create-stream --stream-name "logtest" --shard-count 1
+kinesis_arn=$(awslocal kinesis describe-stream --stream-name "logtest" | jq -r .StreamDescription.StreamARN)
+
+awslocal logs create-log-group --log-group-name test
 
-$ awslocal logs create-log-stream \
-    --log-group-name test \
-    --log-stream-name test
+awslocal logs create-log-stream \
+  --log-group-name test \
+  --log-stream-name test
 
-$ awslocal logs put-subscription-filter \
+awslocal logs put-subscription-filter \
   --log-group-name "test" \
   --filter-name "kinesis_test" \
   --filter-pattern "" \
   --destination-arn $kinesis_arn \
   --role-arn "arn:aws:iam::000000000000:role/kinesis_role"
-{{< / command >}}
+```
+
+Next, we can add a log event that will be forwarded to Kinesis.
 
-Next, we can add a log event, that will be forwarded to kinesis.
-{{< command >}}
-$ timestamp=$(($(date +'%s * 1000 + %-N / 1000000')))
-$ awslocal logs put-log-events --log-group-name test --log-stream-name test --log-events "[{\"timestamp\": ${timestamp} , \"message\": \"hello from cloudwatch\"}]"
-{{< / command >}}
+```bash
+timestamp=$(($(date +'%s * 1000 + %-N / 1000000')))
+awslocal logs put-log-events --log-group-name test --log-stream-name test --log-events "[{\"timestamp\": ${timestamp} , \"message\": \"hello from cloudwatch\"}]"
+```
 
 Now we can retrieve the data.
 In our example, there will only be one record.
The data record is base64 encoded and compressed in gzip format: -{{< command >}} -$ shard_iterator=$(awslocal kinesis get-shard-iterator --stream-name logtest --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON | jq -r .ShardIterator) -$ record=$(awslocal kinesis get-records --limit 10 --shard-iterator $shard_iterator | jq -r '.Records[0].Data') -$ echo $record | base64 -d | zcat -{{< / command >}} + +```bash +shard_iterator=$(awslocal kinesis get-shard-iterator --stream-name logtest --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON | jq -r .ShardIterator) +record=$(awslocal kinesis get-records --limit 10 --shard-iterator $shard_iterator | jq -r '.Records[0].Data') +echo $record | base64 -d | zcat +``` ## Filter Pattern (Pro only) @@ -66,39 +70,42 @@ LocalStack currently supports simple json-property filter. Metric filters can be used to automatically create CloudWatch metrics. In the following example we are interested in logs that include a key-value pair `"foo": "bar"` and create a metric filter. 
-{{< command >}} -$ awslocal logs create-log-group --log-group-name test-filter -$ awslocal logs create-log-stream \ - --log-group-name test-filter \ - --log-stream-name test-filter-stream +```bash +awslocal logs create-log-group --log-group-name test-filter + +awslocal logs create-log-stream \ +--log-group-name test-filter \ +--log-stream-name test-filter-stream -$ awslocal logs put-metric-filter \ +awslocal logs put-metric-filter \ --log-group-name test-filter \ --filter-name my-filter \ --filter-pattern "{$.foo = \"bar\"}" \ --metric-transformations \ metricName=MyMetric,metricNamespace=MyNamespace,metricValue=1,defaultValue=0 -{{< / command >}} +``` Next, we can insert some values: -{{< command >}} -$ timestamp=$(($(date +'%s * 1000 + %-N / 1000000'))) -$ awslocal logs put-log-events --log-group-name test-filter \ + +```bash +timestamp=$(($(date +'%s * 1000 + %-N / 1000000'))) +awslocal logs put-log-events --log-group-name test-filter \ --log-stream-name test-filter-stream \ --log-events \ timestamp=$timestamp,message='"{\"foo\":\"bar\", \"hello\": \"world\"}"' \ timestamp=$timestamp,message="my test event" \ timestamp=$timestamp,message='"{\"foo\":\"nomatch\"}"' -{{< / command >}} +``` Now we can check that the metric was indeed created: -{{< command >}} + +```bash end=$(date +%s) awslocal cloudwatch get-metric-statistics --namespace MyNamespace \ --metric-name MyMetric --statistics Sum --period 3600 \ --start-time 1659621274 --end-time $end -{{< / command >}} +``` ### Filter Log Events @@ -108,9 +115,10 @@ Similarly, you can use filter-pattern to filter logs with different kinds of pat For purely JSON structured log messages, you can use JSON filter patterns to traverse the JSON object. 
Enclose your pattern in curly braces, like this:
 
-{{< command >}}
-$ awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "{$.foo = \"bar\"}"
-{{< / command >}}
+
+```bash
+awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "{$.foo = \"bar\"}"
+```
 
 This returns all events whose top level "foo" key has the "bar" value.
 
 #### Regex Filter Pattern
 
 You can use a simplified regex syntax for regular expression matching.
 Enclose your pattern in percentage signs like this:
-{{< command >}}
-$ awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "\%[fF]oo\%"
-{{< / command >}}
+
+```bash
+awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "%[fF]oo%"
+```
+
 This returns all events containing "Foo" or "foo".
 For a complete set of the supported syntax, check [the official AWS documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html#regex-expressions)
 
 #### Unstructured Filter Pattern
 
 If not specified otherwise in the pattern, we look for a match in the whole event message:
-{{< command >}}
-$ awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "foo"
-{{< / command >}}
+
+```bash
+awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "foo"
+```
 
 ## Resource Browser
 
 The LocalStack Web Application provides a Resource Browser for exploring CloudWatch Logs.
 You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **CloudWatch Logs** under the **Management/Governance** section.
 
-CloudWatch Logs Resource Browser
-
-
+![CloudWatch Logs Resource Browser](/images/aws/logs-resource-browser.png) The Resource Browser allows you to perform the following actions: From 6d809a49bfd3626eff1446e654ff30a41e4f61e3 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 12:09:53 +0530 Subject: [PATCH 20/80] revamp codebuild --- .../services/{codebuild.md => codebuild.mdx} | 113 +++++++++--------- 1 file changed, 55 insertions(+), 58 deletions(-) rename src/content/docs/aws/services/{codebuild.md => codebuild.mdx} (85%) diff --git a/src/content/docs/aws/services/codebuild.md b/src/content/docs/aws/services/codebuild.mdx similarity index 85% rename from src/content/docs/aws/services/codebuild.md rename to src/content/docs/aws/services/codebuild.mdx index da93ed4e..dbad75e5 100644 --- a/src/content/docs/aws/services/codebuild.md +++ b/src/content/docs/aws/services/codebuild.mdx @@ -1,18 +1,18 @@ --- title: CodeBuild -linkTitle: CodeBuild -description: > - Get started with CodeBuild on LocalStack +description: Get started with CodeBuild on LocalStack tags: ["Base"] --- +import { FileTree } from '@astrojs/starlight/components'; + ## Introduction AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. It is part of the [AWS Developer Tools suite](https://aws.amazon.com/products/developer-tools/) and integrates with other AWS services to provide an end-to-end development pipeline. LocalStack supports the emulation of most of the CodeBuild operations. -The supported operations are listed on the [API coverage page]({{< ref "coverage_codebuild" >}}). +The supported operations are listed on the [API coverage page](). AWS CodeBuild emulation is powered by the [AWS CodeBuild agent](https://docs.aws.amazon.com/codebuild/latest/userguide/use-codebuild-agent.html). 
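Builds in CodeBuild are described by a buildspec, which groups shell commands into ordered phases (`install`, `pre_build`, `build`, `post_build`). The following sketch is only a rough mental model of that ordering — it is not the actual CodeBuild agent implementation, and the buildspec content shown is hypothetical:

```python
# Illustrative model of buildspec phase ordering -- not the real CodeBuild agent.
BUILDSPEC = {
    "phases": {
        "install": {"commands": ["echo installing dependencies"]},
        "build": {"commands": ["mvn install"]},
        "post_build": {"commands": ["echo copying artifacts"]},
    },
    "artifacts": {"files": ["target/*.jar"]},
}

# The phases a buildspec may define run in a fixed order.
PHASE_ORDER = ("install", "pre_build", "build", "post_build")


def collect_commands(buildspec):
    """Return the commands in the order the phases would execute them."""
    commands = []
    for phase in PHASE_ORDER:
        commands.extend(buildspec["phases"].get(phase, {}).get("commands", []))
    return commands


print(collect_commands(BUILDSPEC))
```

Phases that are absent from the buildspec (here, `pre_build`) are simply skipped.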
@@ -28,17 +28,17 @@ In the first step, we have to create the project that we want to build with AWS In an empty directory, we need to re-create the following structure: -```bash -root-directory-name -├── pom.xml -└── src - ├── main - │   └── java - │   └── MessageUtil.java - └── test - └── java - └── TestMessageUtil.java -``` + +- root-directory-name + - pom.xml + - src + - main + - java + - MessageUtil.java + - test + - java + - TestMessageUtil.java + Let us walk through these files. `MessageUtil.java` contains the entire logic of this small application. @@ -175,30 +175,23 @@ Now we have to create two S3 buckets: Create the buckets with the following commands: -{{< command >}} -$ awslocal s3 mb s3://codebuild-demo-input - -make_bucket: codebuild-demo-input - -{{< /command >}} - -{{< command >}} -$ awslocal s3 mb s3://codebuild-demo-output - -make_bucket: codebuild-demo-output -{{< /command >}} +```bash +awslocal s3 mb s3://codebuild-demo-input +awslocal s3 mb s3://codebuild-demo-output +``` Finally, zip the content of the source code directory and upload it to the created source bucket. 
With a UNIX system, you can simply use the `zip` utility:
 
-{{< command >}}
-$ zip -r MessageUtil.zip 
-{{< /command >}}
+
+```bash
+zip -r MessageUtil.zip 
+```
 
 Then, upload `MessageUtil.zip` to the `codebuild-demo-input` bucket with the following command:
 
-{{< command >}}
-$ awslocal s3 cp MessageUtil.zip s3://codebuild-demo-input
-{{< /command >}}
+```bash
+awslocal s3 cp MessageUtil.zip s3://codebuild-demo-input
+```
 
 ### Configuring IAM
 
@@ -221,9 +214,10 @@ Create a `create-role.json` file with following content:
 ```
 
 Then, run the following command to create the necessary IAM role:
-{{< command >}}
-$ awslocal iam create-role --role-name CodeBuildServiceRole --assume-role-policy-document file://create-role.json
-{{< /command >}}
+
+```bash
+awslocal iam create-role --role-name CodeBuildServiceRole --assume-role-policy-document file://create-role.json
+```
 
 From the command's response, keep note of the role ARN: it will be needed to create the CodeBuild project later on.
 
@@ -285,9 +279,12 @@ Create a `put-role-policy.json` file with the following content:
 
 Finally, assign the policy to the role with the following command:
 
-{{< command >}}
-$ awslocal put-role-policy --role-name CodeBuildServiceRole --policy-name CodeBuildServiceRolePolicy --policy-document file://put-role-policy.json
-{{< /command >}}
+```bash
+awslocal iam put-role-policy \
+  --role-name CodeBuildServiceRole \
+  --policy-name CodeBuildServiceRolePolicy \
+  --policy-document file://put-role-policy.json
+```
 
 ### Create the build project
 
 We now need to create a build project, containing all the information about how to execute a build.
 You can use the CLI to generate the skeleton of the `CreateBuild` request, which you can later modify.
 Save the output of the following command to a file named `create-project.json`.
-{{< command >}}
-$ awslocal codebuild create-project --generate-cli-skeleton
-{{< /command >}}
+```bash
+awslocal codebuild create-project --generate-cli-skeleton
+```
 
 From the generated file, change the source and the artifact location to match the S3 bucket names you just created.
 Similarly, fill in the ARN of the CodeBuild service role.
@@ -325,24 +322,24 @@ Now create the project with the following command:
 
-{{< command >}}
-$ awslocal codebuild create-project --cli-input-json file://create-project.json
-{{< /command >}}
+```bash
+awslocal codebuild create-project --cli-input-json file://create-project.json
+```
 
 You have now created a CodeBuild project called `codebuild-demo-project` that uses the S3 buckets you just created as source and artifact.
 
-{{< callout >}}
+:::note
 LocalStack does not allow you to customize the build environment.
 Depending on the host architecture, the build will be executed in an Amazon Linux container, version `3.0.x` or `5.0.x`, for the ARM and x86 architectures respectively.
-{{< /callout >}}
+:::
 
 ### Run the build
 
 In this final step, you can now execute your build with the following command:
 
-{{< command >}}
-$ awslocal codebuild start-build --project-name codebuild-demo-project
-{{< /command >}}
+```bash
+awslocal codebuild start-build --project-name codebuild-demo-project
+```
 
 Make note of the `id` information given in the output, since it can be used to query the status of the build.
 If you inspect the running containers (e.g., with the `docker ps -a` command), you will notice a container with the `localstack-codebuild` prefix (followed by the build ID), which CodeBuild started to execute the build.
This container will be responsible for starting a Docker Compose stack that executes the actual build.
 
 As mentioned, you can inspect the status of the build with the following command:
 
-{{< command >}}
-$ awslocal codebuild batch-get-builds --ids 
-{{< /command >}}
+```bash
+awslocal codebuild batch-get-builds --ids 
+```
 
 The command returns a list of builds.
 A build has a `buildStatus` attribute that will be set to `SUCCEEDED` if the build correctly terminates.
 
-{{< callout >}}
+:::note
 Each build goes through different phases, each of them having a start and end time, as well as a status.
 LocalStack does not provide such granular information.
 Currently, it reports only the final status of the build.
-{{< /callout >}}
+:::
 
 Once the build is completed, you can verify that the JAR artifact has been uploaded to the correct S3 bucket with the following command:
 
-{{< command >}}
-$ awslocal s3 ls://codebuild-demo-output
-{{< /command >}}
+```bash
+awslocal s3 ls s3://codebuild-demo-output
+```
 
 ## Limitations

From 22bdc668b37f91749d5a915f65163748713009c4 Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 12:13:05 +0530
Subject: [PATCH 21/80] revamp codecommit

---
 src/content/docs/aws/services/codecommit.md | 29 ++++++++++-----------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/src/content/docs/aws/services/codecommit.md b/src/content/docs/aws/services/codecommit.md
index caeb6672..b5fd0a8c 100644
--- a/src/content/docs/aws/services/codecommit.md
+++ b/src/content/docs/aws/services/codecommit.md
@@ -1,15 +1,14 @@
 ---
 title: "CodeCommit"
-linkTitle: "CodeCommit"
 description: Get started with CodeCommit on LocalStack
 tags: ["Base"]
 persistence: supported
 ---
 
-{{< callout "note" >}}
+:::danger
 AWS has discontinued new feature development for CodeCommit effective [25 July 2024](https://aws.amazon.com/blogs/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider/).
 However, LocalStack will continue making parity improvements.
-{{< /callout >}}
+:::
 
 ## Introduction
 
@@ -19,7 +18,7 @@ You can also use standard Git commands or CodeCommit APIs (using AWS CLI or SDKs
 CodeCommit also uses identity-based policies, which can be attached to IAM users, groups, and roles, ensuring secure and granular access control.
 
 LocalStack allows you to use the CodeCommit APIs in your local environment to create new repositories, push your commits, and manage the repositories.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_codecommit" >}}), which provides information on the extent of CodeCommit's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of CodeCommit's integration with LocalStack.
 
 ## Getting started
 
@@ -35,12 +34,12 @@ You need to specify the repository name, repository description, and tags.
 
 Run the following command to create a new repository named `localstack-repo`:
 
-{{< command >}}
-$ awslocal codecommit create-repository \
-    --repository-name localstack-repo \
-    --repository-description "A demo repository to showcase LocalStack's CodeCommit" \
-    --tags Team=LocalStack
-{{< /command >}}
+```bash
+awslocal codecommit create-repository \
+    --repository-name localstack-repo \
+    --repository-description "A demo repository to showcase LocalStack's CodeCommit" \
+    --tags Team=LocalStack
+```
 
 If successful, the command will return the following output:
 
@@ -67,9 +66,9 @@ The repository URL is the `cloneUrlHttp` value returned by the `CreateRepository
 
 Run the following command to clone the repository to a local directory named `localstack-repo`:
 
-{{< command >}}
-$ git clone git://localhost:4510/localstack-repo
-{{< /command >}}
+```bash
+git clone git://localhost:4510/localstack-repo
+```
 
 You will notice that the repository is empty.
 This is because we have not pushed any commits to the repository yet.
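The clone URL is a plain `git://` URL, so its parts can be inspected with standard tooling. A small sketch using Python's standard library (the port shown is taken from the example above and may differ on your machine):

```python
from urllib.parse import urlparse

# Clone URL as returned in the example above; the port may vary per run.
url = urlparse("git://localhost:4510/localstack-repo")

print(url.scheme)                    # git
print(f"{url.hostname}:{url.port}")  # localhost:4510
print(url.path.lstrip("/"))          # localstack-repo
```

The same breakdown applies to any repository you create: only the path component (the repository name) changes.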
@@ -83,11 +82,11 @@ Then, you can use [`git commit`](https://git-scm.com/docs/git-commit) to commit
 
 Run the following command to push the file to the repository:
 
-{{< command >}}
-$ git add README.md
-$ git commit -m "Add README.md"
-$ git push
-{{< /command >}}
+```bash
+git add README.md
+git commit -m "Add README.md"
+git push
+```
 
 If successful, this command returns output similar to the following:
 
@@ -102,7 +101,7 @@ To git://localhost:4510/localstack-repo
 
 The LocalStack Web Application provides a Resource Browser for managing CodeCommit repositories.
 You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **CodeCommit** under the **Developer Tools** section.
 
-CodeCommit Resource Browser
+![CodeCommit Resource Browser](/images/aws/codecommit-resource-browser.png)
 
 The Resource Browser allows you to perform the following actions:
 
From 3ade9ca08d0f4be920bf203a9ae9747ece029f64 Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 12:21:12 +0530
Subject: [PATCH 22/80] revamp codedeploy

---
 src/content/docs/aws/services/codedeploy.md | 213 +++++++++++++-------
 1 file changed, 138 insertions(+), 75 deletions(-)

diff --git a/src/content/docs/aws/services/codedeploy.md b/src/content/docs/aws/services/codedeploy.md
index aa26ce03..4c4474aa 100644
--- a/src/content/docs/aws/services/codedeploy.md
+++ b/src/content/docs/aws/services/codedeploy.md
@@ -1,8 +1,6 @@
 ---
 title: CodeDeploy
-linkTitle: CodeDeploy
-description: >
-  Get started with CodeDeploy on LocalStack
+description: Get started with CodeDeploy on LocalStack
 tags: ["Ultimate"]
 ---
 
@@ -13,7 +11,7 @@ On AWS, it supports deployments to Amazon EC2 instances, on-premises instances,
 Furthermore, based on the target it is also possible to use an in-place deployment or a blue/green deployment.
 
 LocalStack supports mocking of CodeDeploy API operations.
-The supported operations are listed on the [API coverage page]({{< ref "coverage_codedeploy" >}}).
+The supported operations are listed on the [API coverage page]().
 
 ## Getting Started
 
@@ -28,20 +26,27 @@ Start LocalStack using your preferred method.
 
 An application is a CodeDeploy construct that uniquely identifies your targeted application.
 Create an application with the [CreateApplication](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_CreateApplication.html) operation:
 
-{{< command >}}
-$ awslocal deploy create-application --application-name hello --compute-platform Server
-
+```bash
+awslocal deploy create-application --application-name hello --compute-platform Server
+```
+
+The output will be similar to the following:
+
+```json
 {
     "applicationId": "063714b6-f438-4b90-bacb-ce04af7f5e83"
 }
-
-{{< /command >}}
+```
 
 Make note of the application name, which can be used with other operations such as [GetApplication](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_GetApplication.html), [UpdateApplication](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_UpdateApplication.html) and [DeleteApplication](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_DeleteApplication.html).
 
-{{< command >}}
-$ awslocal deploy get-application --application-name hello
-
+```bash
+awslocal deploy get-application --application-name hello
+```
+
+The output will be similar to the following:
+
+```json
 {
     "application": {
         "applicationId": "063714b6-f438-4b90-bacb-ce04af7f5e83",
@@ -50,21 +55,23 @@ $ awslocal deploy get-application --application-name hello
         "computePlatform": "Server"
     }
 }
-
-{{< /command >}}
+```
 
 You can list all applications using [ListApplications](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ListApplications.html).
-{{< command >}} -$ awslocal deploy list-applications - +```bash +awslocal deploy list-applications +``` + +The output will be similar to the following: + +```json { "applications": [ "hello" ] } - -{{< /command >}} +``` ### Deployment configuration @@ -72,35 +79,45 @@ A deployment configuration consists of rules for deployment along with success a Create a deployment configuration using [CreateDeploymentConfig](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_CreateDeploymentConfig.html): -{{< command >}} -$ awslocal deploy create-deployment-config --deployment-config-name hello-conf \ +```bash +awslocal deploy create-deployment-config --deployment-config-name hello-conf \ --compute-platform Server \ --minimum-healthy-hosts '{"type": "HOST_COUNT", "value": 1}' - +``` + +The output will be similar to the following: + +```json { "deploymentConfigId": "0327ce0a-4637-4884-8899-49af7b9423b6" } - -{{< /command >}} +``` [ListDeploymentConfigs](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ListDeploymentConfigs.html) can be used to list all available configs: -{{< command >}} -$ awslocal deploy list-deployment-configs - +```bash +awslocal deploy list-deployment-configs +``` + +The output will be similar to the following: + +```json { "deploymentConfigsList": [ "hello-conf" ] } - -{{< /command >}} +``` Use [GetDeploymentConfig](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_GetDeploymentConfig.html) and [DeleteDeploymentConfig](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_DeleteDeploymentConfig.html) to manage deployment configurations. 
-{{< command >}} -$ awslocal deploy get-deployment-config --deployment-config-name hello-conf - +```bash +awslocal deploy get-deployment-config --deployment-config-name hello-conf +``` + +The output will be similar to the following: + +```json { "deploymentConfigInfo": { "deploymentConfigId": "0327ce0a-4637-4884-8899-49af7b9423b6", @@ -113,40 +130,61 @@ $ awslocal deploy get-deployment-config --deployment-config-name hello-conf "computePlatform": "Server" } } - -{{< /command >}} +``` ### Deployment groups -Deployment groups can be managed with [CreateDeploymentGroup](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_CreateDeploymentGroup.html), [ListDeploymentGroups](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ListDeploymentGroups.html), [UpdateDeploymentGroup](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_UpdateDeploymentGroup.html), [GetDeploymentGroup](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_GetDeploymentGroup.html) and [DeleteDeploymentGroup](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_DeleteDeploymentGroup.html). 
+Deployment groups can be managed with: + +- [CreateDeploymentGroup](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_CreateDeploymentGroup.html) +- [ListDeploymentGroups](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ListDeploymentGroups.html) +- [UpdateDeploymentGroup](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_UpdateDeploymentGroup.html) +- [GetDeploymentGroup](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_GetDeploymentGroup.html) +- [DeleteDeploymentGroup](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_DeleteDeploymentGroup.html) + +Create a deployment group with [CreateDeploymentGroup](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_CreateDeploymentGroup.html): -{{< command >}} -$ awslocal deploy create-deployment-group \ +```bash +awslocal deploy create-deployment-group \ --application-name hello \ --service-role-arn arn:aws:iam::000000000000:role/role \ --deployment-group-name hello-group - +``` + +The output will be similar to the following: + +```json { "deploymentGroupId": "09506586-9ba9-4005-a1be-840407abb39d" } - -{{< /command >}} +``` + +List all deployment groups for an application with [ListDeploymentGroups](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ListDeploymentGroups.html): + +```bash +awslocal deploy list-deployment-groups --application-name hello +``` -{{< command >}} -$ awslocal deploy list-deployment-groups --application-name hello - +The output will be similar to the following: + +```json { "deploymentGroups": [ "hello-group" ] } - -{{< /command >}} +``` + +Get a deployment group with [GetDeploymentGroup](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_GetDeploymentGroup.html): -{{< command >}} -$ awslocal deploy get-deployment-group --application-name hello \ +```bash +awslocal deploy get-deployment-group --application-name hello \ --deployment-group-name hello-group - +``` + +The output will be similar to 
the following: + +```json { "deploymentGroupInfo": { "applicationName": "hello", @@ -165,39 +203,58 @@ $ awslocal deploy get-deployment-group --application-name hello \ "terminationHookEnabled": false } } - -{{< /command >}} +``` ### Deployments -Operations related to deployment management are: [CreateDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_CreateDeployment.html), [GetDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_GetDeployment.html), [ListDeployments](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ListDeployments.html). +Operations related to deployment management are: + +- [CreateDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_CreateDeployment.html) +- [GetDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_GetDeployment.html) +- [ListDeployments](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ListDeployments.html) -{{< command >}} -$ awslocal deploy create-deployment \ +Create a deployment with [CreateDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_CreateDeployment.html): + +```bash +awslocal deploy create-deployment \ --application-name hello \ --deployment-group-name hello-group \ --revision '{"revisionType": "S3", "s3Location": {"bucket": "placeholder", "key": "placeholder", "bundleType": "tar"}}' - +``` + +The output will be similar to the following: + +```json { "deploymentId": "d-TU3TNCSTO" } - -{{< /command >}} +``` + +List all deployments for an application with [ListDeployments](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ListDeployments.html): -{{< command >}} -$ awslocal deploy list-deployments - +```bash +awslocal deploy list-deployments +``` + +The output will be similar to the following: + +```json { "deployments": [ "d-TU3TNCSTO" ] } - -{{< /command >}} +``` + +Get a deployment with 
[GetDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_GetDeployment.html):
+
+```bash
+awslocal deploy get-deployment --deployment-id d-TU3TNCSTO
+```
+
+The output will be similar to the following:
 
-{{< command >}}
-$ awslocal deploy get-deployment --deployment-id d-TU3TNCSTO
-
+```json
 {
     "deploymentInfo": {
         "applicationName": "hello",
@@ -227,24 +284,30 @@ $ awslocal deploy get-deployment --deployment-id d-TU3TNCSTO
         "computePlatform": "Server"
     }
 }
-
-{{< /command >}}
+```
+
+Furthermore, [ContinueDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ContinueDeployment.html) and [StopDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_StopDeployment.html) can be used to control the deployment flows:
+
+Continue a deployment with [ContinueDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ContinueDeployment.html):
+
+```bash
+awslocal deploy continue-deployment --deployment-id d-TU3TNCSTO
+```
+
+Stop a deployment with [StopDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_StopDeployment.html):
 
-Furthermore, [ContinueDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_StopDeployment.html) and [StopDeployment](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_StopDeployment.html) can be used to control the deployment flows.
+```bash
+awslocal deploy stop-deployment --deployment-id d-TU3TNCSTO
+```
 
-{{< command >}}
-$ awslocal deploy continue-deployment --deployment-id d-TU3TNCSTO
-{{< /command >}}
+The output will be similar to the following:
 
-{{< command >}}
-$ awslocal deploy stop-deployment --deployment-id d-TU3TNCSTO
-
+```json
 {
     "status": "Succeeded",
     "statusMessage": "Mock deployment stopped"
 }
-
-{{< /command >}}
+```
 
 ## Limitations
 
From ab4efa947d8809e65f55daf7c33fc54a429ceb99 Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 12:27:20 +0530
Subject: [PATCH 23/80] revamp config docs

---
 src/content/docs/aws/services/config.md | 37 ++++++++++++-------------
 1 file changed, 18 insertions(+), 19 deletions(-)

diff --git a/src/content/docs/aws/services/config.md b/src/content/docs/aws/services/config.md
index 9c28d46d..77d3006b 100644
--- a/src/content/docs/aws/services/config.md
+++ b/src/content/docs/aws/services/config.md
@@ -1,6 +1,5 @@
 ---
 title: "Config"
-linkTitle: "Config"
 description: Get started with Config on LocalStack
 persistence: supported
 tags: ["Free"]
 ---
 
@@ -13,7 +12,7 @@ Config provides a comprehensive view of the resource configuration across your A
 Config continuously records configuration changes and allows you to retain a historical record of these changes.
 
 LocalStack allows you to use the Config APIs in your local environment to assess resource configurations and notify you of any non-compliant items to mitigate potential security risks.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_config" >}}), which provides information on the extent of Config's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Config's integration with LocalStack.
 ## Getting started
 
@@ -28,20 +27,20 @@ The S3 bucket will be used to receive a configuration snapshot on request and co
 The SNS topic will be used to notify you when a configuration snapshot is available.
 You can create a new S3 bucket and SNS topic using the AWS CLI:
 
-{{< command >}}
-$ awslocal s3 mb s3://config-test
-$ awslocal sns create-topic --name config-test-topic
-{{< /command >}}
+```bash
+awslocal s3 mb s3://config-test
+awslocal sns create-topic --name config-test-topic
+```
 
 ### Create a new configuration recorder
 
 You can now create a new configuration recorder to record configuration changes for specified resource types, using the [`PutConfigurationRecorder`](https://docs.aws.amazon.com/config/latest/APIReference/API_PutConfigurationRecorder.html) API.
 Run the following command to create a new configuration recorder:
 
-{{< command >}}
-$ awslocal configservice put-configuration-recorder \
+```bash
+awslocal configservice put-configuration-recorder \
     --configuration-recorder name=default,roleARN=arn:aws:iam::000000000000:role/config-role
-{{< /command >}}
+```
 
 We have specified the `roleARN` parameter to grant the configuration recorder the necessary permissions to access the S3 bucket and SNS topic.
 In LocalStack, IAM roles are not enforced, so you can specify any role ARN you like.
 
@@ -68,8 +67,8 @@ You can inline the JSON into the `awslocal` command.
Run the following command to create the delivery channel: -{{< command >}} -$ awslocal configservice put-delivery-channel \ +```bash +awslocal configservice put-delivery-channel \ --delivery-channel '{ "name": "default", "s3BucketName": "config-test", @@ -78,7 +77,7 @@ $ awslocal configservice put-delivery-channel \ "deliveryFrequency": "Twelve_Hours" } }' -{{< /command >}} +``` ### Start the configuration recorder @@ -86,17 +85,17 @@ You can now start recording configurations of the local AWS resources you have s You can use the [`StartConfigurationRecorder`](https://docs.aws.amazon.com/config/latest/APIReference/API_StartConfigurationRecorder.html) API to start the configuration recorder. Run the following command to start the configuration recorder: -{{< command >}} -$ awslocal configservice start-configuration-recorder \ +```bash +awslocal configservice start-configuration-recorder \ --configuration-recorder-name default -{{< /command >}} +``` You can list the delivery channels and configuration recorders using the [`DescribeDeliveryChannels`](https://docs.aws.amazon.com/config/latest/APIReference/API_DescribeDeliveryChannels.html) and [`DescribeConfigurationRecorderStatus`](https://docs.aws.amazon.com/config/latest/APIReference/API_DescribeConfigurationRecorderStatus.html) APIs respectively. 
-{{< command >}}
-$ awslocal configservice describe-delivery-channels
-$ awslocal configservice describe-configuration-recorder-status
-{{< /command >}}
+```bash
+awslocal configservice describe-delivery-channels
+awslocal configservice describe-configuration-recorder-status
+```
 
 ## Current Limitations
 
From 2305debc9138fc0ce0d921685d39500e07156806 Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 12:30:37 +0530
Subject: [PATCH 24/80] revamp cost explorer

---
 .../aws/services/{cost-explorer.md => ce.md}  | 45 +++++++++----------
 1 file changed, 21 insertions(+), 24 deletions(-)
 rename src/content/docs/aws/services/{cost-explorer.md => ce.md} (86%)

diff --git a/src/content/docs/aws/services/cost-explorer.md b/src/content/docs/aws/services/ce.md
similarity index 86%
rename from src/content/docs/aws/services/cost-explorer.md
rename to src/content/docs/aws/services/ce.md
index 00f5cf75..d4e4d5f1 100644
--- a/src/content/docs/aws/services/cost-explorer.md
+++ b/src/content/docs/aws/services/ce.md
@@ -1,8 +1,6 @@
 ---
 title: "Cost Explorer"
-linkTitle: "Cost Explorer"
-description: >
-  Get started with Cost Explorer on LocalStack
+description: Get started with Cost Explorer on LocalStack
 tags: ["Ultimate"]
 ---
 
@@ -13,7 +11,7 @@ Cost Explorer offers options to filter and group data by dimensions such as serv
 With Cost Explorer, you can forecast costs, track budget progress, and set up alerts to receive notifications when spending exceeds predefined thresholds.
 
 LocalStack allows you to use the Cost Explorer APIs in your local environment to create and manage cost category definitions, cost anomaly monitors, and subscriptions.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_ce" >}}), which provides information on the extent of Cost Explorer's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Cost Explorer's integration with LocalStack.
## Getting started @@ -27,10 +25,10 @@ We will demonstrate how to mock the Cost Explorer APIs with the AWS CLI. You can create a Cost Category definition using the [`CreateCostCategoryDefinition`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_CreateCostCategoryDefinition.html)) API. The following example creates a Cost Category definition using an empty rule condition of type "REGULAR": -{{< command >}} -$ awslocal ce create-cost-category-definition --name test \ +```bash +awslocal ce create-cost-category-definition --name test \ --rule-version "CostCategoryExpression.v1" --rules '[{"Value": "test", "Rule": {}, "Type": "REGULAR"}]' -{{< /command >}} +``` The following output would be retrieved: @@ -43,10 +41,10 @@ The following output would be retrieved: You can describe the Cost Category definition using the [`DescribeCostCategoryDefinition`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_DescribeCostCategoryDefinition.html) API. Run the following command: -{{< command >}} -$ awslocal ce describe-cost-category-definition \ +```bash +awslocal ce describe-cost-category-definition \ --cost-category-arn arn:aws:ce::000000000000:costcategory/test -{{< /command >}} +``` The following output would be retrieved: @@ -72,8 +70,8 @@ The following output would be retrieved: You can add an alert subscription to a cost anomaly detection monitor to define subscribers using the [`CreateAnomalySubscription`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_CreateAnomalySubscription.html) API. 
The following example creates a cost anomaly subscription: -{{< command >}} -$ awslocal ce create-anomaly-subscription --anomaly-subscription '{ +```bash +awslocal ce create-anomaly-subscription --anomaly-subscription '{ "AccountId": "12345", "SubscriptionName": "sub1", "Frequency": "DAILY", @@ -81,7 +79,7 @@ $ awslocal ce create-anomaly-subscription --anomaly-subscription '{ "Subscribers": [], "Threshold": 111 }' -{{< /command >}} +``` The following output would be retrieved: @@ -94,9 +92,9 @@ The following output would be retrieved: You can retrieve the cost anomaly subscriptions using the [`GetAnomalySubscriptions`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_GetAnomalySubscriptions.html) API. Run the following command: -{{< command >}} -$ awslocal ce get-anomaly-subscriptions -{{< /command >}} +```bash +awslocal ce get-anomaly-subscriptions +``` The following output would be retrieved: @@ -121,12 +119,12 @@ The following output would be retrieved: You can create a new cost anomaly detection subscription with the requested type and monitor specification using the [`CreateAnomalyMonitor`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_CreateAnomalyMonitor.html) API. The following example creates a cost anomaly monitor: -{{< command >}} -$ awslocal ce create-anomaly-monitor --anomaly-monitor '{ +```bash +awslocal ce create-anomaly-monitor --anomaly-monitor '{ "MonitorName": "mon5463", "MonitorType": "DIMENSIONAL" }' -{{< /command >}} +``` The following output would be retrieved: @@ -139,9 +137,9 @@ The following output would be retrieved: You can retrieve the cost anomaly monitors using the [`GetAnomalyMonitors`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_GetAnomalyMonitors.html) API. 
Run the following command: -{{< command >}} -$ awslocal ce get-anomaly-monitors -{{< /command >}} +```bash +awslocal ce get-anomaly-monitors +``` The following output would be retrieved: @@ -162,8 +160,7 @@ The following output would be retrieved: The LocalStack Web Application provides a Resource Browser for managing cost category definitions for the Cost Explorer service. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the Resources section, and then clicking on **Cost Explorer** under the **Cloud Financial Management** section. -Cost Explorer Resource Browser -

+![Cost Explorer Resource Browser](/images/aws/cost-explorer-resource-browser.png)
 
 The Resource Browser allows you to perform the following actions:
 
From ef2ee3a78a653ce8d59b6fc34038bb60f0117249 Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 13:10:39 +0530
Subject: [PATCH 25/80] revamp codepipeline

---
 src/content/docs/aws/services/codepipeline.md | 169 ++++++++++--------
 1 file changed, 98 insertions(+), 71 deletions(-)

diff --git a/src/content/docs/aws/services/codepipeline.md b/src/content/docs/aws/services/codepipeline.md
index cd21125f..8c031af3 100644
--- a/src/content/docs/aws/services/codepipeline.md
+++ b/src/content/docs/aws/services/codepipeline.md
@@ -1,8 +1,6 @@
 ---
 title: CodePipeline
-linkTitle: CodePipeline
-description: >
-  Get started with CodePipeline on LocalStack
+description: Get started with CodePipeline on LocalStack
 tags: ["Ultimate"]
 ---
 
@@ -13,7 +11,7 @@ CodePipeline can be used to create automated pipelines that handle the build, te
 LocalStack comes with a bespoke execution engine that can be used to create, manage, and execute pipelines.
 It supports a variety of actions that integrate with S3, CodeBuild, CodeConnections, and more.
 
-The available operations can be found on the [API coverage]({{< ref "coverage_codepipeline" >}}) page.
+The available operations can be found on the [API coverage]() page.
 
 ## Getting started
 
@@ -26,38 +24,32 @@ Start LocalStack using your preferred method.
 
 Begin by creating the S3 buckets that will serve as the source and target.
 
-{{< command >}}
-$ awslocal s3 mb s3://source-bucket
-$ awslocal s3 mb s3://target-bucket
-{{< / command >}}
+```bash
+awslocal s3 mb s3://source-bucket
+awslocal s3 mb s3://target-bucket
+```
 
 It is important to note that CodePipeline requires source S3 buckets to have versioning enabled.
 This can be done using the S3 [`PutBucketVersioning`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html) operation.
-{{< command >}} -$ awslocal s3api put-bucket-versioning \ +```bash +awslocal s3api put-bucket-versioning \ --bucket source-bucket \ --versioning-configuration Status=Enabled -{{< /command >}} +``` Now create a placeholder file that will flow through the pipeline and upload it to the source bucket. -{{< command >}} -$ echo "Hello LocalStack!" > file -{{< /command >}} - -{{< command >}} -$ awslocal s3 cp file s3://source-bucket - -upload: ./file to s3://source-bucket/file - -{{< /command >}} +```bash +echo "Hello LocalStack!" > file +awslocal s3 cp file s3://source-bucket +``` Pipelines also require an artifact store, which is also an S3 bucket that is used as intermediate storage. -{{< command >}} -$ awslocal s3 mb s3://artifact-store-bucket -{{< / command >}} +```bash +awslocal s3 mb s3://artifact-store-bucket +``` ### Configure IAM @@ -83,12 +75,11 @@ Create the role and make note of the role ARN: } ``` -{{< command >}} -$ awslocal iam create-role --role-name role --assume-role-policy-document file://role.json | jq .Role.Arn - -"arn:aws:iam::000000000000:role/role" - -{{< /command >}} +Create the role with the following command: + +```bash +awslocal iam create-role --role-name role --assume-role-policy-document file://role.json | jq .Role.Arn +``` Now add a permissions policy to this role that permits read and write access to S3. @@ -111,9 +102,9 @@ Now add a permissions policy to this role that permits read and write access to The permissions in the above example policy are relatively broad. You might want to use a more focused policy for better security on production systems. -{{< command >}} -$ awslocal iam put-role-policy --role-name role --policy-name policy --policy-document file://policy.json -{{< /command >}} +```bash +awslocal iam put-role-policy --role-name role --policy-name policy --policy-document file://policy.json +``` ### Create pipeline @@ -199,9 +190,9 @@ These correspond to the resources we created earlier. 
Create the pipeline using the following command: -{{< command >}} -$ awslocal codepipeline create-pipeline --pipeline file://./declaration.json -{{< /command >}} +```bash +awslocal codepipeline create-pipeline --pipeline file://./declaration.json +``` ### Verify pipeline execution @@ -210,9 +201,13 @@ A 'pipeline execution' is an instance of a pipeline in a running or finished sta The [`CreatePipeline`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_CreatePipeline.html) operation we ran earlier started a pipeline execution. This can be confirmed using: -{{< command >}} -$ awslocal codepipeline list-pipeline-executions --pipeline-name pipeline - +```bash +awslocal codepipeline list-pipeline-executions --pipeline-name pipeline +``` + +The output will be similar to the following: + +```json { "pipelineExecutionSummaries": [ { @@ -227,8 +222,7 @@ $ awslocal codepipeline list-pipeline-executions --pipeline-name pipeline } ] } - -{{< /command >}} +``` Note the `trigger.triggerType` field specifies what initiated the pipeline execution. Currently in LocalStack, only two triggers are implemented: `CreatePipeline` and `StartPipelineExecution`. @@ -236,30 +230,34 @@ Currently in LocalStack, only two triggers are implemented: `CreatePipeline` and The above pipeline execution was successful. This means that we can retrieve the `output-file` object from the `target-bucket` S3 bucket. -{{< command >}} -$ awslocal s3 cp s3://target-bucket/output-file output-file - -download: s3://target-bucket/output-file to ./output-file - -{{< /command >}} +```bash +awslocal s3 cp s3://target-bucket/output-file output-file +``` To verify that it is the same file as the original input: -{{< command >}} -$ cat output-file - +```bash +cat output-file +``` + +The output will be: + +```text Hello LocalStack! 
-
-{{< /command >}}
+```
 
 ### Examine action executions
 
 Using the [`ListActionExecutions`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ListActionExecutions.html), detailed information about each action execution such as inputs and outputs can be retrieved.
 This is useful when debugging the pipeline.
 
-{{< command >}}
-$ awslocal codepipeline list-action-executions --pipeline-name pipeline
-
+```bash
+awslocal codepipeline list-action-executions --pipeline-name pipeline
+```
+
+The output will be similar to the following:
+
+```json
 {
     "actionExecutionDetails": [
         {
@@ -315,27 +313,31 @@ $ awslocal codepipeline list-action-executions --pipeline-name pipeline
         "stageName": "stage1",
         "actionName": "action1",
         ...
-
-{{< /command >}}
+```
 
-{{< callout >}}
+:::note
 LocalStack does not use the same logic to generate external execution IDs as AWS so there may be minor discrepancies.
 The same is true for status and error messages produced by actions.
-{{< /callout >}}
+:::
 
 ## Pipelines
 
-The operations [CreatePipeline](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_CreatePipeline.html), [GetPipeline](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_GetPipeline.html), [UpdatePipeline](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_UpdatePipeline.html), [ListPipelines](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ListPipelines.html), [DeletePipeline](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_DeletePipeline.html) are used to manage pipeline declarations.
+The operations [`CreatePipeline`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_CreatePipeline.html), [`GetPipeline`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_GetPipeline.html), [`UpdatePipeline`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_UpdatePipeline.html), [`ListPipelines`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ListPipelines.html), [`DeletePipeline`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_DeletePipeline.html) are used to manage pipeline declarations. LocalStack supports emulation for V1 pipelines. V2 pipelines are only created as mocks. -{{< callout "tip" >}} +:::note Emulation for V2 pipelines is not supported. Make sure that the pipeline type is explicitly set in the declaration. -{{< /callout >}} +::: -Pipeline executions can be managed with [`StartPipelineExecution`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_StartPipelineExecution.html), [`GetPipelineExecution`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_GetPipelineExecution.html), [`ListPipelineExecutions`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ListPipelineExecutions.html) and [`StopPipelineExecutions`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_StopPipelineExecution.html). 
+Pipeline executions can be managed with:
+
+- [`StartPipelineExecution`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_StartPipelineExecution.html)
+- [`GetPipelineExecution`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_GetPipelineExecution.html)
+- [`ListPipelineExecutions`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ListPipelineExecutions.html)
+- [`StopPipelineExecution`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_StopPipelineExecution.html)
 
 When stopping pipeline executions with `StopPipelineExecution`, the stop and abandon method is not supported.
 Setting the `abandon` flag will have no impact.
@@ -345,15 +347,26 @@ Action executions can be inspected using the [`ListActionExecutions`](https://do
 
 ### Tagging pipelines
 
-Pipelines resources can be [tagged](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-tag.html) using the [`TagResource`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_TagResource.html), [`UntagResource`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_UntagResource.html) and [`ListTagsForResource`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ListTagsForResource.html) operations.
+Pipeline resources can be [tagged](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-tag.html) using the following operations:
+
+- [`TagResource`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_TagResource.html)
+- [`UntagResource`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_UntagResource.html)
+- [`ListTagsForResource`](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ListTagsForResource.html)
+
+Tag the pipeline with the following command:
 
-{{< command >}}
-$ awslocal codepipeline tag-resource \
+```bash
+awslocal codepipeline tag-resource \
   --resource-arn arn:aws:codepipeline:eu-central-1:000000000000:pipeline \
   --tags key=purpose,value=tutorial
 
-$ awslocal codepipeline list-tags-for-resource \
+awslocal codepipeline list-tags-for-resource \
   --resource-arn arn:aws:codepipeline:eu-central-1:000000000000:pipeline
+```
+
+The output will be similar to the following:
+
+```json
 {
     "tags": [
         {
@@ -362,11 +375,15 @@ $ awslocal codepipeline list-tags-for-resource \
             "key": "purpose",
             "value": "tutorial"
         }
     ]
 }
+```
 
-$ awslocal codepipeline untag-resource \
+Untag the pipeline with the following command:
+
+```bash
+awslocal codepipeline untag-resource \
   --resource-arn arn:aws:codepipeline:eu-central-1:000000000000:pipeline \
   --tag-keys purpose
-{{< /command >}}
+```
 
 ## Variables
 
@@ -375,9 +392,9 @@ CodePipeline on LocalStack supports [variables](https://docs.aws.amazon.com/code
 
 Actions produce output variables which can be referenced in the configuration of subsequent actions.
-Make note that only when the action defines a namespace, its output variables are availabe to downstream actions.
+Note that an action's output variables are available to downstream actions only when the action defines a namespace.
 
-{{< callout "tip" >}}
+:::note
 If an action does not use a namespace, its output variables are not available to downstream actions.
-{{< /callout >}}
+:::
 
 CodePipeline's variable placeholder syntax is as follows:
 
@@ -395,6 +412,11 @@ The supported actions in LocalStack CodePipeline are listed below.
 Using an unsupported action will make the pipeline fail.
 If you would like support for more actions, please [raise a feature request](https://github.com/localstack/localstack/issues/new/choose).
 
+### CloudFormation Deploy
+
+The [CloudFormation Deploy](https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-CloudFormation.html) action executes a CloudFormation stack.
+It supports the following modes: `CREATE_UPDATE`, `CHANGE_SET_REPLACE`, `CHANGE_SET_EXECUTE`.
+
 ### CodeBuild Source and Test
 
 The [CodeBuild Source and Test](https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-CodeBuild.html) action can be used to start a CodeBuild container and run the given buildspec.
@@ -421,6 +443,10 @@ It will only update the running ECS service with a new task definition and wait
 
 The [ECS Deploy](https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-ECS.html) action creates a revision of a task definition based on an already deployed ECS service.
 
+### Lambda Invoke
+
+The [Lambda Invoke](https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-Lambda.html) action is used to execute a Lambda function in a pipeline.
+
 ### Manual Approval
 
 The Manual Approval action can be included in the pipeline declaration but it will only function as a no-op.
@@ -438,6 +464,7 @@ The [S3 Source](https://docs.aws.amazon.com/codepipeline/latest/userguide/action
 
 - Emulation for [V2 pipeline types](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipeline-types-planning.html) is not supported.
   They will be created as mocks only.
 - [Rollbacks and stage retries](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-stages.html) are not available.
+- [Custom actions](https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-create-custom-action.html) and associated operations (AcknowledgeJob, GetJobDetails, PollForJobs, etc.) are not supported.
- [Triggers](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-triggers.html) are not implemented. Pipelines are executed only when [CreatePipeline](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_CreatePipeline.html) and [StartPipelineExecution](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_StartPipelineExecution.html) are invoked. - [Execution mode behaviours](https://docs.aws.amazon.com/codepipeline/latest/userguide/concepts-how-it-works.html#concepts-how-it-works-executions) are not implemented. From 745358e24389616dc7c979edcbd1f99c5a265e36 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 13:15:49 +0530 Subject: [PATCH 26/80] revamp cognito docs --- src/content/docs/aws/services/cognito.md | 129 ++++++++++++++--------- 1 file changed, 79 insertions(+), 50 deletions(-) diff --git a/src/content/docs/aws/services/cognito.md b/src/content/docs/aws/services/cognito.md index af0d15a9..908de365 100644 --- a/src/content/docs/aws/services/cognito.md +++ b/src/content/docs/aws/services/cognito.md @@ -1,6 +1,5 @@ --- title: "Cognito" -linkTitle: "Cognito" description: Get started with Cognito on LocalStack tags: ["Base"] persistence: supported @@ -13,7 +12,7 @@ Cognito enables developers to add user sign-up, sign-in, and access control func Cognito supports various authentication methods, including social identity providers, SAML-based identity providers, and custom authentication flows. LocalStack allows you to use the Cognito APIs in your local environment to manage authentication and access control for your local application and resources. -The supported APIs are available on our [Cognito Identity coverage page]({{< ref "coverage_cognito-identity" >}}) and [Cognito User Pools coverage page]({{< ref "coverage_cognito-idp" >}}), which provides information on the extent of Cognito's integration with LocalStack. 
+The supported APIs are available on our [Cognito Identity coverage page]() and [Cognito User Pools coverage page](), which provides information on the extent of Cognito's integration with LocalStack. ## Getting started @@ -27,9 +26,9 @@ We will demonstrate how you can create a Cognito user pool and client, and then To create a user pool, you can use the [`CreateUserPool`](https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_CreateUserPool.html) API call. The following command creates a user pool named `test`: -{{< command >}} -$ awslocal cognito-idp create-user-pool --pool-name test -{{< /command >}} +```bash +awslocal cognito-idp create-user-pool --pool-name test +``` You can see an output similar to the following: @@ -66,15 +65,15 @@ You can see an output similar to the following: You will need the user pool's `id` for further operations. Save it in a `pool_id` variable: -{{< command >}} -$ pool_id= -{{< /command >}} +```bash +pool_id= +``` Alternatively, you can use JSON processor like [`jq`](https://stedolan.github.io/jq/) to extract the essential information right from the outset when creating a pool. -{{< command >}} -$ pool_id=$(awslocal cognito-idp create-user-pool --pool-name test | jq -rc ".UserPool.Id") -{{< /command >}} +```bash +pool_id=$(awslocal cognito-idp create-user-pool --pool-name test | jq -rc ".UserPool.Id") +``` ### Adding a Client @@ -83,9 +82,9 @@ You will require the ID of the newly created client for the subsequent steps. You can use the [`CreateUserPoolClient`](https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_CreateUserPoolClient.html) for both client creation and extraction of the corresponding ID. 
Run the following command: -{{< command >}} -$ client_id=$(awslocal cognito-idp create-user-pool-client --user-pool-id $pool_id --client-name test-client | jq -rc ".UserPoolClient.ClientId") -{{< /command >}} +```bash +client_id=$(awslocal cognito-idp create-user-pool-client --user-pool-id $pool_id --client-name test-client | jq -rc ".UserPoolClient.ClientId") +``` ### Using Predefined IDs for Pool Creation @@ -95,39 +94,49 @@ Please note that a valid custom id must be in the format `_}} -$ awslocal cognito-idp create-user-pool --pool-name p1 --user-pool-tags "_custom_id_=us-east-1_myid123" +```bash +awslocal cognito-idp create-user-pool --pool-name p1 --user-pool-tags "_custom_id_=us-east-1_myid123" +``` + +The output will be: + +```json { "UserPool": { "Id": "myid123", "Name": "p1", ... -{{< /command >}} +``` You also have the possibility to create a Cognito user pool client with a predefined ID by specifying a `ClientName` with the specific format: `_custom_id_:`. -{{< command >}} -$ awslocal cognito-idp create-user-pool-client --user-pool-id us-east-1_myid123 --client-name _custom_id_:myclient123 +```bash +awslocal cognito-idp create-user-pool-client --user-pool-id us-east-1_myid123 --client-name _custom_id_:myclient123 +``` + +The output will be: + +```json { "UserPoolClient": { "UserPoolId": "us-east-1_myid123", "ClientName": "_custom_id_:myclient123", "ClientId": "myclient123", ... -{{< /command >}} +``` ### Signing up and confirming a user You can now use the [`SignUp`](https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_SignUp.html) API to sign up a user. Run the following command: -{{< command >}} -$ awslocal cognito-idp sign-up \ +```bash +awslocal cognito-idp sign-up \ --client-id $client_id \ --username example_user \ --password 12345678Aa! 
\ --user-attributes Name=email,Value= -{{< /command >}} +``` You can see an output similar to the following: @@ -140,7 +149,7 @@ You can see an output similar to the following: Once the user is successfully created, a confirmation code will be generated. This code can be found in the LocalStack container logs (as shown below). -Additionally, if you have [SMTP configured]({{< ref "configuration#emails" >}}), the confirmation code can be optionally sent via email for enhanced convenience and user experience. +Additionally, if you have [SMTP configured](/aws/capabilities/config/configuration/#emails), the confirmation code can be optionally sent via email for enhanced convenience and user experience. ```bash INFO:localstack_ext.services.cognito.cognito_idp_api: Confirmation code for Cognito user example_user: 125796 @@ -150,18 +159,23 @@ DEBUG:localstack_ext.bootstrap.email_utils: Sending confirmation code via email You can confirm the user with the activation code, using the [`ConfirmSignUp`](https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_ConfirmSignUp.html) API. Execute the following command: -{{< command >}} -$ awslocal cognito-idp confirm-sign-up \ +```bash +awslocal cognito-idp confirm-sign-up \ --client-id $client_id \ --username example_user \ --confirmation-code -{{< /command >}} +``` Since the above command does not provide a direct response, we need to verify the success of the request by checking the pool. 
Run the following command to use the [`ListUsers`](https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_ListUsers.html) API to list the users in the pool: -{{< command "hl_lines=21" >}} -$ awslocal cognito-idp list-users --user-pool-id $pool_id +```bash +awslocal cognito-idp list-users --user-pool-id $pool_id +``` + +The output will be similar to the following: + +```json { "Users": [ { @@ -185,7 +199,7 @@ $ awslocal cognito-idp list-users --user-pool-id $pool_id } ] } -{{< /command >}} +``` ## JWT Token Issuer and JSON Web Key Sets (JWKS) endpoints @@ -204,17 +218,27 @@ https://cognito-idp.localhost.localstack.cloud/ To access the JSON Web Key Sets (JWKS) configuration for each user pool, you can use the standardized well-known URL below: -{{< command >}} -$ curl 'http://localhost:4566//.well-known/jwks.json' +```bash +curl 'http://localhost:4566//.well-known/jwks.json' +``` + +The output will be similar to the following: + +```json {"keys": [{"kty": "RSA", "alg": "RS256", "use": "sig", "kid": "test-key", "n": "k6lrbEH..."]} -{{}} +``` Moreover, you can retrieve the global region-specific public keys for Cognito Identity Pools using the following endpoint: -{{< command >}} -$ curl http://localhost:4566/.well-known/jwks_uri +```bash +curl http://localhost:4566/.well-known/jwks_uri +``` + +The output will be similar to the following: + +```bash {"keys": [{"kty": "RSA", "alg": "RS512", "use": "sig", "kid": "ap-northeast-11", "n": "AI7mc1assO5..."]} -{{}} +``` ## Cognito Lambda Triggers @@ -278,23 +302,23 @@ exports.handler = async (event) => { Enter the following commands to create the Lambda function: -{{< command >}} -$ zip function.zip index.js -$ awslocal lambda create-function \ +```bash +zip function.zip index.js +awslocal lambda create-function \ --function-name migrate_users \ --runtime nodejs18.x \ --zip-file fileb://function.zip \ --handler index.handler \ --role arn:aws:iam::000000000000:role/lambda-role -{{}} +``` Subsequently, 
you can define the corresponding `--lambda-config` when creating the user pool to link it with the Lambda function: -{{< command >}} -$ awslocal cognito-idp create-user-pool \ +```bash +awslocal cognito-idp create-user-pool \ --pool-name test2 \ --lambda-config '{"UserMigration":"arn:aws:lambda:us-east-1:000000000000:function:migrate_users"}' -{{}} +``` Upon successful authentication of a non-registered user, Cognito will automatically trigger the migration Lambda function, allowing the user to be added to the pool after migration. @@ -310,7 +334,7 @@ Replace `` with the ID of your existing user pool client (for example The login form should look similar to the screenshot below: -{{< figure src="cognitoLogin.png" width="320" >}} +![Cognito Login Form](/images/aws/cognito-login-form.png) Upon successful login, the page will automatically redirect to the designated ``, with an appended path parameter `?code=`. For instance, the redirect URL might look like `http://example.com?code=test123`. @@ -320,13 +344,18 @@ To obtain a token, you need to submit the received code using `grant_type=author Note that the value of the `redirect_uri` parameter in your token request must match the value provided during the login process. Ensuring this match is crucial for the proper functioning of the authentication flow. 
-```sh -% curl \ +```bash +curl \ --data-urlencode 'grant_type=authorization_code' \ --data-urlencode 'redirect_uri=http://example.com' \ --data-urlencode "client_id=${client_id}" \ --data-urlencode 'code=test123' \ 'http://localhost:4566/_aws/cognito-idp/oauth2/token' +``` + +The output will be similar to the following: + +```json {"access_token": "eyJ0eXAi…lKaHx44Q", "expires_in": 86400, "token_type": "Bearer", "refresh_token": "e3f08304", "id_token": "eyJ0eXAi…ADTXv5mA"} ``` @@ -338,13 +367,13 @@ The client credentials grant allows for scope-based authorization from a non-int Your app can directly request client credentials from the token endpoint to receive an access token. To request the token from the LocalStack URL, use the following endpoint: `://cognito-idp.localhost.localstack.cloud:4566/_aws/cognito-idp/oauth2/token`. -For additional information on our endpoints, refer to our [Internal Endpoints]({{< ref "/references/internal-endpoints" >}}) documentation. +For additional information on our endpoints, refer to our [Internal Endpoints]() documentation. If there are multiple user pools, LocalStack identifies the appropriate one by examining the `clientid` of the request. To get started, follow the example below: -```sh +```bash #Create client user pool with a client. export client_id=$(awslocal cognito-idp create-user-pool-client --user-pool-id $pool_id --client-name test-client --generate-secret | jq -rc ".UserPoolClient.ClientId") @@ -449,7 +478,7 @@ Authentication: AWS4-HMAC-SHA256 Credential=test-1234567/20190821/us-east-1/cogn The LocalStack Web Application provides a Resource Browser for managing Cognito User Pools, and more. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Cognito** under the **Security Identity Compliance** section. 
-Cognito Resource Browser +![Cognito Resource Browser](/images/aws/cognito-resource-browser.png) The Resource Browser allows you to perform the following actions: @@ -472,4 +501,4 @@ The following code snippets and sample applications provide practical examples o By default, LocalStack's Cognito does not send actual email messages. However, if you wish to enable this feature, you will need to provide an email address and configure the corresponding SMTP settings. -The instructions on configuring the connection parameters of your SMTP server can be found in the [Configuration]({{< ref "configuration#emails" >}}) guide to allow your local Cognito environment to send email notifications. +The instructions on configuring the connection parameters of your SMTP server can be found in the [Configuration](/aws/capabilities/config/configuration/#emails) guide to allow your local Cognito environment to send email notifications. From 66e9f91d4f42f90a01dd833c5b5694edcb432f70 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 13:23:14 +0530 Subject: [PATCH 27/80] revamp dms docs --- src/content/docs/aws/services/dms.md | 18 ++++++++---------- 1 file changed, 8 insertions(+), 10 deletions(-) diff --git a/src/content/docs/aws/services/dms.md b/src/content/docs/aws/services/dms.md index b2e26fe9..ef9e8398 100644 --- a/src/content/docs/aws/services/dms.md +++ b/src/content/docs/aws/services/dms.md @@ -1,6 +1,5 @@ --- title: "Database Migration Service (DMS)" -linkTitle: "Database Migration Service (DMS)" description: Get started with Database Migration Service (DMS) on LocalStack tags: ["Ultimate"] --- @@ -11,12 +10,12 @@ AWS Database Migration Service provides migration solution from databases, data The migration can be homogeneous (source and target have the same type), but often times is heterogeneous as it supports migration from various sources to various targets (self-hosted and AWS services). LocalStack only supports selected use cases for DMS at the moment. 
-The supported APIs are available on our [API coverage page]({{< ref "coverage_dms" >}}), which provides information on the extent of DMS integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of DMS integration with LocalStack. -{{< callout "note">}} +:::note DMS is in a preview state, supporting only [selected use cases](#supported-use-cases). You need to set the env `ENABLE_DMS=1` in order to activate it. -{{< /callout >}} +::: ## Getting started @@ -29,13 +28,13 @@ You can run a DMS sample showcasing MariaDB source and Kinesis target from our [ To follow the sample, simply clone the repository: -```sh +```bash git clone https://github.com/localstack-samples/sample-dms-kinesis-rds-mariadb.git ``` Next, start LocalStack (there is a docker-compose included, setting the `ENABLE_DMS=1` flag): -```sh +```bash export LOCALSTACK_AUTH_TOKEN= # this must be a enterprise license token docker-compose up ``` @@ -53,7 +52,7 @@ make run You will then see some log output, indicating the status of the ongoing replication: -```sh +```bash ************ STARTING FULL LOAD FLOW ************ @@ -147,8 +146,7 @@ The LocalStack Web Application provides a Resource Browser for managing: You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Database Migration Service** under the **Migration and transfer** section. -DMS Resource Browser -

+![DMS Resource Browser](/images/aws/dms-resource-browser.png) The Resource Browser supports CRD (Create, Read, Delete) operations on DMS resources. @@ -176,7 +174,7 @@ The Resource Browser supports CRD (Create, Read, Delete) operations on DMS resou For RDS MariaDB and RDS MySQL it is not yet possible to set custom db-parameters. In order to make those databases work with `cdc` migration for DMS, some default db-parameters are changed upon start if the `ENABLE_DMS=1` flag is set: -```sh +```bash binlog_checksum=NONE binlog_row_image=FULL binlog_format=ROW From 8e4ff826e7f51278f3886bb4b6f7d0969d3bc3df Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 13:26:37 +0530 Subject: [PATCH 28/80] revamp docdb --- src/content/docs/aws/services/docdb.md | 148 ++++++++++++------------- 1 file changed, 74 insertions(+), 74 deletions(-) diff --git a/src/content/docs/aws/services/docdb.md b/src/content/docs/aws/services/docdb.md index 7f89ade5..eee03d43 100644 --- a/src/content/docs/aws/services/docdb.md +++ b/src/content/docs/aws/services/docdb.md @@ -1,6 +1,5 @@ --- title: "DocumentDB (DocDB)" -linkTitle: "DocumentDB (DocDB)" tags: ["Ultimate"] description: Get started with AWS DocumentDB on LocalStack --- @@ -11,17 +10,21 @@ DocumentDB is a fully managed, non-relational database service that supports Mon DocumentDB is compatible with MongoDB, meaning you can use the same MongoDB drivers, applications, and tools to run, manage, and scale workloads on DocumentDB without having to worry about managing the underlying infrastructure. LocalStack allows you to use the DocumentDB APIs to create and manage DocumentDB clusters and instances. -The supported APIs are available on our [API coverage page]({{< ref "coverage_docdb" >}}), which provides information on the extent of DocumentDB's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of DocumentDB's integration with LocalStack. 
## Getting started To create a new DocumentDB cluster we use the `create-db-cluster` command as follows: -{{< command >}} -$ awslocal docdb create-db-cluster --db-cluster-identifier test-docdb-cluster --engine docdb -{{< /command >}} +```bash +awslocal docdb create-db-cluster \ + --db-cluster-identifier test-docdb-cluster \ + --engine docdb +``` + +The output will be similar to the following: -```yaml +```bash { "DBCluster": { "DBClusterIdentifier": "test-docdb-cluster", @@ -63,12 +66,17 @@ created. As we did not specify a `MasterUsername` or `MasterUserPassword` for the creation of the database, the mongo-db will not set any credentials when starting the docker container. To create a new database, we can use the `create-db-instance` command, like in this example: -{{< command >}} -$ awslocal docdb create-db-instance --db-instance-identifier test-company \ ---db-instance-class db.r5.large --engine docdb --db-cluster-identifier test-docdb-cluster -{{< /command >}} +```bash +awslocal docdb create-db-instance \ + --db-instance-identifier test-company \ + --db-instance-class db.r5.large \ + --engine docdb \ + --db-cluster-identifier test-docdb-cluster +``` -```yaml +The output will be similar to the following: + +```bash { "DBInstance": { "DBInstanceIdentifier": "test-docdb-instance", @@ -114,11 +122,15 @@ Some noticeable fields: . To obtain detailed information about the cluster, we use the `describe-db-cluster` command: -{{< command >}} -$ awslocal docdb describe-db-clusters --db-cluster-identifier test-docdb-cluster -{{< /command >}} -```yaml +```bash +awslocal docdb describe-db-clusters \ + --db-cluster-identifier test-docdb-cluster +``` + +The output will be similar to the following: + +```bash { "DBClusters": [ { @@ -158,22 +170,19 @@ Interacting with the databases is done using `mongosh`, which is an official com It is designed to provide a modern and enhanced user experience for interacting with MongoDB databases. 
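+The host and port used with `mongosh` below can also be derived programmatically from the `DescribeDBClusters` response shown above. The following Node.js sketch is illustrative — only the `Endpoint` and `Port` fields are assumed from the response, and the endpoint value here is a placeholder:

```javascript
// Build a MongoDB connection string from a DescribeDBClusters response.
// The response excerpt below is illustrative; LocalStack returns the actual
// host and the cluster port (39045 in the example above).
const response = {
  DBClusters: [
    {
      DBClusterIdentifier: "test-docdb-cluster",
      Endpoint: "localhost",
      Port: 39045,
    },
  ],
};

const { Endpoint, Port } = response.DBClusters[0];
const uri = `mongodb://${Endpoint}:${Port}/test-company`;
console.log(uri); // mongodb://localhost:39045/test-company
```

The same string can then be handed to `mongosh` or to a MongoDB driver.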
-{{< command >}} +```bash +mongosh mongodb://localhost:39045 +``` -$ mongosh mongodb://localhost:39045 +The output will be similar to the following: + +```bash Current Mongosh Log ID: 64a70b795697bcd4865e1b9a Connecting to: mongodb://localhost: 39045/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1 Using MongoDB: 6.0.7 Using Mongosh: 1.10.1 - -For mongosh info see: https://docs.mongodb.com/mongodb-shell/ - ------- - -test> - -{{< /command >}} +``` This command will default to accessing the `test` database that was created with the cluster. Notice the port, `39045`, @@ -181,27 +190,15 @@ which is the cluster port that appears in the aforementioned description. To work with a specific database, the command is: -{{< command >}} -$ mongosh mongodb://localhost:39045/test-company -Current Mongosh Log ID: 64a71916fae7fdeeb8b43a73 -Connecting to: mongodb://localhost: -39045/test-company?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1 -Using MongoDB: 6.0.7 -Using Mongosh: 1.10.1 - -For mongosh info see: https://docs.mongodb.com/mongodb-shell/ - ------- -test-company> - -{{< /command >}} +```bash +mongosh mongodb://localhost:39045/test-company +``` From here on we can manipulate collections using [the JavaScript methods provided](https://www.mongodb.com/docs/manual/reference/method/) by `mongosh`: -{{< command >}} - +```bash test-company> db.createCollection("employees") { ok: 1 } test-company> db.createCollection("customers") @@ -210,20 +207,19 @@ test-company> show collections customers employees test-company> exit - -{{< /command >}} +``` For more information on how to use MongoDB with `mongosh` please refer to the [MongoDB documentation](https://www.mongodb.com/docs/). ### Connect to DocumentDB using Node.js Lambda -{{< callout >}} +:::note You need to set `DOCDB_PROXY_CONTAINER=1` when starting LocalStack to be able to use the returned `Endpoint`, which will be correctly resolved automatically. 
The flag `DOCDB_PROXY_CONTAINER=1` changes the default behavior and the container will be started as proxied container. -Meaning a port from the [pre-defined port]({{< ref "/references/external-ports" >}}) range will be chosen, and when using lambda, you can use `localhost.localstack.cloud` to connect to the instance. -{{< /callout >}} +Meaning a port from the [pre-defined port]() range will be chosen, and when using lambda, you can use `localhost.localstack.cloud` to connect to the instance. +::: In this sample we will use a Node.js lambda function to connect to a DocumentDB. For the mongo-db connection we will use the `mongodb` lib. @@ -235,12 +231,14 @@ We included a snippet at the very end. #### Create the DocDB Cluster with a username and password We assume you have a `MasterUsername` and `MasterUserPassword` set for DocDB e.g: -{{< command >}} -$ awslocal docdb create-db-cluster --db-cluster-identifier test-docdb \ + +```bash +awslocal docdb create-db-cluster \ + --db-cluster-identifier test-docdb \ --engine docdb \ --master-user-password S3cretPwd! \ --master-username someuser -{{< /command >}} +``` #### Prepare the lambda function @@ -248,16 +246,16 @@ First, we create the zip required for the lambda function with the mongodb depen You will need [`npm`](https://docs.npmjs.com/) in order to install the dependencies. In your terminal run: -{{< command >}} -$ mkdir resources -$ cd resources -$ mkdir node_modules -$ npm install mongodb@6.3.0 -{{< /command >}} +```bash +mkdir resources +cd resources +mkdir node_modules +npm install mongodb@6.3.0 +``` Next, copy the following code into a new file named `index.js` in the `resources` folder: -{{< command >}} +```javascript const AWS = require('aws-sdk'); const RDS = AWS.RDS; const { MongoClient } = require('mongodb'); @@ -305,35 +303,40 @@ exports.handler = async (event) => { }; } }; -{{< /command >}} +``` Now, you can zip the entire. 
Make sure you are inside `resources` directory and run: -{{< command >}} -$ zip -r function.zip . -{{< /command >}} + +```bash +zip -r function.zip . +``` Finally, we can create the `lambda` function using `awslocal`: -{{< command >}} -$ awslocal lambda create-function \ + +```bash +awslocal lambda create-function \ --function-name MyNodeLambda \ --runtime nodejs16.x \ --role arn:aws:iam::000000000000:role/lambda-role \ --handler index.handler \ --zip-file fileb://function.zip \ --environment Variables="{DOCDB_CLUSTER_ID=test-docdb,DOCDB_SECRET=S3cretPwd!}" -{{< /command >}} +``` You can invoke the lambda by calling: -{{< command >}} -$ awslocal lambda invoke --function-name MyNodeLambda outfile -{{< /command >}} + +```bash +awslocal lambda invoke \ + --function-name MyNodeLambda \ + outfile +``` The `outfile` contains the returned value, e.g.: -```yaml +```json {"statusCode":200,"body":"{\"_id\":\"6560a21ca7771a02ef128c72\",\"key\":\"value\"}"} -```` +``` #### Use Secret To Connect to DocDB @@ -343,7 +346,7 @@ Secrets follow a [well-defined pattern](https://docs.aws.amazon.com/secretsmanag For the lambda function, you can pass the secret arn as `SECRET_NAME`. In the lambda, you can then retrieve the secret details like this: -{{< command >}} +```javascript const AWS = require('aws-sdk'); const { MongoClient } = require('mongodb'); @@ -390,17 +393,14 @@ exports.handler = async (event) => { }; } }; - -{{< /command >}} +``` ## Resource Browser The LocalStack Web Application provides a Resource Browser for managing DocumentDB instances and clusters. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **DocumentDB** under the **Database** section. -DocumentDB Resource Browser -
-
+![DocumentDB Resource Browser](/images/aws/docdb-resource-browser.png) The Resource Browser allows you to perform the following actions: From 52b93bbb693311b860c14d64e7845abe6211e36f Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 13:29:27 +0530 Subject: [PATCH 29/80] revamp ddb docs --- src/content/docs/aws/services/dynamodb.md | 46 ++++++++++++----------- 1 file changed, 25 insertions(+), 21 deletions(-) diff --git a/src/content/docs/aws/services/dynamodb.md b/src/content/docs/aws/services/dynamodb.md index 82e8be99..aaf5b691 100644 --- a/src/content/docs/aws/services/dynamodb.md +++ b/src/content/docs/aws/services/dynamodb.md @@ -1,6 +1,5 @@ --- title: DynamoDB -linkTitle: DynamoDB description: Get started with DynamoDB on LocalStack persistence: supported tags: ["Free"] @@ -11,7 +10,7 @@ It offers a flexible and highly scalable way to store and retrieve data, making DynamoDB provides a fast and scalable key-value datastore with support for replication, automatic scaling, data encryption at rest, and on-demand backup, among other capabilities. LocalStack allows you to use the DynamoDB APIs in your local environment to manage key-value and document data models. -The supported APIs are available on our [API coverage page]({{< ref "coverage_dynamodb" >}}), which provides information on the extent of DynamoDB's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of DynamoDB's integration with LocalStack. DynamoDB emulation is powered by [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). @@ -27,14 +26,14 @@ We will demonstrate how to create DynamoDB table, along with its replicas, and p You can create a DynamoDB table using the [`CreateTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html) API. 
Execute the following command to create a table named `global01` with a primary key `id`: -{{< command >}} -$ awslocal dynamodb create-table \ +```bash +awslocal dynamodb create-table \ --table-name global01 \ --key-schema AttributeName=id,KeyType=HASH \ --attribute-definitions AttributeName=id,AttributeType=S \ --billing-mode PAY_PER_REQUEST \ --region ap-south-1 -{{< /command >}} +``` The following output would be retrieved: @@ -70,12 +69,12 @@ The following output would be retrieved: You can create replicas of a DynamoDB table using the [`UpdateTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) API. Execute the following command to create replicas in `ap-south-1` and `us-west-1` regions: -{{< command >}} -$ awslocal dynamodb update-table \ +```bash +awslocal dynamodb update-table \ --table-name global01 \ --replica-updates '[{"Create": {"RegionName": "eu-central-1"}}, {"Create": {"RegionName": "us-west-1"}}]' \ --region ap-south-1 -{{< /command >}} +``` The following output would be retrieved: @@ -107,10 +106,10 @@ You can now operate on the table in the replicated regions as well. You can use the [`ListTables`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ListTables.html) API to list the tables in the replicated regions. Run the following command to list the tables in the `eu-central-1` region: -{{< command >}} -$ awslocal dynamodb list-tables \ +```bash +awslocal dynamodb list-tables \ --region eu-central-1 -{{< /command >}} +``` The following output would be retrieved: @@ -127,22 +126,22 @@ The following output would be retrieved: You can insert an item into a DynamoDB table using the [`PutItem`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) API. 
Execute the following command to insert an item into the `global01` table: -{{< command >}} -$ awslocal dynamodb put-item \ +```bash +awslocal dynamodb put-item \ --table-name global01 \ --item '{"id":{"S":"foo"}}' \ --region eu-central-1 -{{< /command >}} +``` You can now query the number of items in the table using the [`DescribeTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html) API. Run the following command to query the number of items in the `global01` table from a different region: -{{< command >}} -$ awslocal dynamodb describe-table \ +```bash +awslocal dynamodb describe-table \ --table-name global01 \ --query 'Table.ItemCount' \ --region ap-south-1 -{{< /command >}} +``` The following output would be retrieved: @@ -150,11 +149,11 @@ The following output would be retrieved: 1 ``` -{{< callout >}} +:::note You can run DynamoDB in memory, which can greatly improve the performance of your database operations. However, this also means that the data will not be possible to persist on disk and will be lost even though persistence is enabled in LocalStack. To enable this feature, you need to set the environment variable `DYNAMODB_IN_MEMORY=1` while starting LocalStack. -{{< /callout >}} +::: ### Time To Live @@ -167,8 +166,13 @@ In addition, to programmatically trigger the worker at convenience, we provide t The response returns the number of deleted items: -```console +```bash curl -X DELETE localhost:4566/_aws/dynamodb/expired +``` + +The output will be: + +```json {"ExpiredItems": 3} ``` @@ -177,7 +181,7 @@ curl -X DELETE localhost:4566/_aws/dynamodb/expired The LocalStack Web Application provides a Resource Browser for managing DynamoDB tables and items. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **DynamoDB** under the **Database** section. 
-DynamoDB Resource Browser +![DynamoDB Resource Browser](/images/aws/dynamodb-resource-browser.png) The Resource Browser allows you to perform the following actions: From a3555f45e7faf0965ed31d8283b6790391dcea94 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 13:32:43 +0530 Subject: [PATCH 30/80] revamp ddb streams --- .../docs/aws/services/dynamodbstreams.md | 47 ++++++++++--------- 1 file changed, 24 insertions(+), 23 deletions(-) diff --git a/src/content/docs/aws/services/dynamodbstreams.md b/src/content/docs/aws/services/dynamodbstreams.md index 89bd856c..81924e64 100644 --- a/src/content/docs/aws/services/dynamodbstreams.md +++ b/src/content/docs/aws/services/dynamodbstreams.md @@ -11,7 +11,7 @@ The stream records are written to a DynamoDB stream, which is an ordered flow of DynamoDB Streams records data in near-real time, enabling you to develop workflows that process these streams and respond based on their contents. LocalStack supports DynamoDB Streams, allowing you to create and manage streams in a local environment. -The supported APIs are available on our [DynamoDB Streams coverage page]({{< ref "coverage_dynamodbstreams" >}}), which provides information on the extent of DynamoDB Streams integration with LocalStack. +The supported APIs are available on our [DynamoDB Streams coverage page](), which provides information on the extent of DynamoDB Streams integration with LocalStack. ## Getting started @@ -30,14 +30,14 @@ We will demonstrate the following process using LocalStack: You can create a DynamoDB table named `BarkTable` using the [`CreateTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html) API. 
Run the following command to create the table: -{{< command >}} -$ awslocal dynamodb create-table \ +```bash +awslocal dynamodb create-table \ --table-name BarkTable \ --attribute-definitions AttributeName=Username,AttributeType=S AttributeName=Timestamp,AttributeType=S \ --key-schema AttributeName=Username,KeyType=HASH AttributeName=Timestamp,KeyType=RANGE \ --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \ --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES -{{< /command >}} +``` The `BarkTable` has a stream enabled which you can trigger by associating a Lambda function with the stream. You can notice that in the `LatestStreamArn` field of the response: @@ -79,9 +79,9 @@ exports.handler = (event, context, callback) => { You can now create a Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html) API. Run the following command to create the Lambda function: -{{< command >}} -$ zip index.zip index.js -$ awslocal lambda create-function \ +```bash +zip index.zip index.js +awslocal lambda create-function \ --function-name publishNewBark \ --zip-file fileb://index.zip \ --role roleARN \ @@ -89,7 +89,7 @@ $ awslocal lambda create-function \ --timeout 50 \ --runtime nodejs16.x \ --role arn:aws:iam::000000000000:role/lambda-role -{{< /command >}} +``` ### Invoke the Lambda function @@ -138,12 +138,12 @@ Create a new file named `payload.json` with the following content: Run the following command to invoke the Lambda function: -{{< command >}} -$ awslocal lambda invoke \ +```bash +awslocal lambda invoke \ --function-name publishNewBark \ --payload file://payload.json \ --cli-binary-format raw-in-base64-out output.txt -{{< /command >}} +``` In the `output.txt` file, you should see the following output: @@ -157,20 +157,20 @@ To add the DynamoDB stream as an event source for the Lambda function, you need You can get the stream ARN using the 
[`DescribeTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html) API. Run the following command to get the stream ARN: -{{< command >}} +```bash awslocal dynamodb describe-table --table-name BarkTable -{{< /command >}} +``` You can now create an event source mapping using the [`CreateEventSourceMapping`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateEventSourceMapping.html) API. Run the following command to create the event source mapping: -{{< command >}} +```bash awslocal lambda create-event-source-mapping \ --function-name publishNewBark \ --event-source arn:aws:dynamodb:us-east-1:000000000000:table/BarkTable/stream/2024-07-12T06:18:37.101 \ --batch-size 1 \ --starting-position TRIM_HORIZON -{{< /command >}} +``` Make sure to replace the `event-source` value with the stream ARN you obtained from the previous command. You should see the following output: @@ -189,11 +189,11 @@ You should see the following output: You can now test the event source mapping by adding an item to the `BarkTable` table using the [`PutItem`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) API. Run the following command to add an item to the table: -{{< command >}} -$ awslocal dynamodb put-item \ +```bash +awslocal dynamodb put-item \ --table-name BarkTable \ --item Username={S="Jane Doe"},Timestamp={S="2016-11-18:14:32:17"},Message={S="Testing...1...2...3"} -{{< /command >}} +``` You can find Lambda function being triggered in the LocalStack logs. @@ -202,9 +202,9 @@ You can find Lambda function being triggered in the LocalStack logs. You can list the streams using the [`ListStreams`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ListStreams.html) API. 
Run the following command to list the streams: -{{< command >}} +```bash awslocal dynamodbstreams list-streams -{{< /command >}} +``` The following output shows the list of streams: @@ -223,8 +223,9 @@ The following output shows the list of streams: You can also describe the stream using the [`DescribeStream`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeStream.html) API. Run the following command to describe the stream: -{{< command >}} -$ awslocal dynamodbstreams describe-stream --stream-arn arn:aws:dynamodb:us-east-1:000000000000:table/BarkTable/stream/2024-07-12T06:18:37.101 -{{< /command >}} +```bash +awslocal dynamodbstreams describe-stream \ + --stream-arn arn:aws:dynamodb:us-east-1:000000000000:table/BarkTable/stream/2024-07-12T06:18:37.101 +``` Replace the `stream-arn` value with the stream ARN you obtained from the previous command. From 5190ecce1060fd261e516194269ad3fc91a7967b Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 16:34:58 +0530 Subject: [PATCH 31/80] revamp ec2 --- .../docs/aws/services/{ec2.md => ec2.mdx} | 366 +++++++++--------- 1 file changed, 190 insertions(+), 176 deletions(-) rename src/content/docs/aws/services/{ec2.md => ec2.mdx} (82%) diff --git a/src/content/docs/aws/services/ec2.md b/src/content/docs/aws/services/ec2.mdx similarity index 82% rename from src/content/docs/aws/services/ec2.md rename to src/content/docs/aws/services/ec2.mdx index fc77a441..082e5878 100644 --- a/src/content/docs/aws/services/ec2.md +++ b/src/content/docs/aws/services/ec2.mdx @@ -1,18 +1,19 @@ --- title: "Elastic Compute Cloud (EC2)" -linkTitle: "Elastic Compute Cloud (EC2)" tags: ["Free"] description: Get started with Amazon Elastic Compute Cloud (EC2) on LocalStack persistence: supported with limitations --- +import { Tabs, TabItem } from '@astrojs/starlight/components'; + ## Introduction Elastic Compute Cloud (EC2) is a core service within Amazon Web Services (AWS) that provides scalable and flexible 
virtual computing resources. EC2 enables users to launch and manage virtual machines, referred to as instances. LocalStack allows you to use the EC2 APIs in your local environment to create and manage EC2 instances and related resources such as VPCs, EBS volumes, etc. -The list of supported APIs can be found on the [API coverage page]({{< ref "coverage_ec2" >}}). +The list of supported APIs can be found on the [API coverage page](). ## Getting started @@ -29,61 +30,51 @@ Key pairs are SSH public key/private key combinations that are used to log in to To create a key pair, you can use the [`CreateKeyPair`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateKeyPair.html) API. Run the following command to create the key pair and pipe the output to a file named `key.pem`: -{{< command >}} -$ awslocal ec2 create-key-pair \ +```bash +awslocal ec2 create-key-pair \ --key-name my-key \ --query 'KeyMaterial' \ --output text | tee key.pem -{{< /command >}} +``` You may need to assign necessary permissions to the key files for security reasons. 
This can be done using the following commands:

<Tabs>
<TabItem label="Linux">

```bash
chmod 400 key.pem
```

</TabItem>
<TabItem label="Windows (Powershell)">

```powershell
$acl = Get-Acl -Path "key.pem"
$fileSystemAccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule("$env:username", "Read", "Allow")
$acl.SetAccessRule($fileSystemAccessRule)
$acl.SetAccessRuleProtection($true, $false)
Set-Acl -Path "key.pem" -AclObject $acl
```

</TabItem>
<TabItem label="Windows (Command Prompt)">

```bat
icacls.exe key.pem /reset
icacls.exe key.pem /grant:r "$($env:username):(r)"
icacls.exe key.pem /inheritance:r
```

</TabItem>
</Tabs>

If you already have an SSH public key that you wish to use, such as the one located in your home directory at `~/.ssh/id_rsa.pub`, you can import it instead:

```bash
awslocal ec2 import-key-pair --key-name my-key --public-key-material file://~/.ssh/id_rsa.pub
```

If you only have the SSH private key, a public key can be generated using the following command, and then imported:

```bash
ssh-keygen -y -f id_rsa > id_rsa.pub
```

### Add rules to your security group

Currently, LocalStack only supports the `default` security group.
You can add rules to the security group using the [`AuthorizeSecurityGroupIngress`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AuthorizeSecurityGroupIngress.html) API.
Run the following command to add a rule to allow inbound traffic on port 8000: -{{< command >}} -$ awslocal ec2 authorize-security-group-ingress \ +```bash +awslocal ec2 authorize-security-group-ingress \ --group-id default \ --protocol tcp \ --port 8000 \ --cidr 0.0.0.0/0 -{{< /command >}} +``` The above command will enable rules in the security group to allow incoming traffic from your local machine on port 8000 of an emulated EC2 instance. @@ -106,9 +97,9 @@ The above command will enable rules in the security group to allow incoming traf You can fetch the Security Group ID using the [`DescribeSecurityGroups`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html) API. Run the following command to fetch the Security Group ID: -{{< command >}} -$ awslocal ec2 describe-security-groups -{{< /command >}} +```bash +awslocal ec2 describe-security-groups +``` You should see the following output: @@ -140,24 +131,24 @@ python3 -m http.server 8000 You can now run an EC2 instance using the [`RunInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html) API. Run the following command to run an EC2 instance by adding the appropriate Security Group ID that we fetched in the previous step: -{{< command >}} -$ awslocal ec2 run-instances \ +```bash +awslocal ec2 run-instances \ --image-id ami-df5de72bdb3b \ --count 1 \ --instance-type t3.nano \ --key-name my-key \ --security-group-ids '' \ --user-data file://./user_script.sh -{{< /command >}} +``` ### Test the Python web server You can now open the LocalStack logs to find the IP address of the locally emulated EC2 instance. Run the following command to open the LocalStack logs: -{{< command >}} -$ localstack logs -{{< /command >}} +```bash +localstack logs +``` You should see the following output: @@ -169,11 +160,11 @@ You should see the following output: You can now use the IP address to test the Python Web Server. 
Run the following command to test the Python Web Server: -{{< command >}} -$ curl 172.17.0.4:8000 +```bash +curl 172.17.0.4:8000 # Or, you can run -$ curl 127.0.0.1:29043 -{{< /command >}} +curl 127.0.0.1:29043 +``` You should see the following output: @@ -186,10 +177,10 @@ You should see the following output: ... ``` -{{< callout "note" >}} +:::note Similar to the setup in production AWS, the user data content is stored at `/var/lib/cloud/instances//` within the instance. Any execution of this data is recorded in the `/var/log/cloud-init-output.log` file. -{{< /callout >}} +::: ### Connecting via SSH @@ -198,20 +189,20 @@ You can also set up an SSH connection to the locally emulated EC2 instance using This section assumes that you have created or imported an SSH key pair named `my-key`. When running the EC2 instance, make sure to pass the `--key-name` parameter to the command: -{{< command >}} -$ awslocal ec2 run-instances --key-name my-key ... -{{< /command >}} +```bash +awslocal ec2 run-instances --key-name my-key ... +``` Once the instance is up and running, we can use the `ssh` command to set up an SSH connection. Assuming the instance is available under `127.0.0.1:12862` (as per the LocalStack log output), use this command: -{{< command >}} -$ ssh -p 12862 -i key.pem root@127.0.0.1 -{{< /command >}} +```bash +ssh -p 12862 -i key.pem root@127.0.0.1 +``` -{{< callout "tip" >}} +:::note If the `ssh` command throws an error like "Identity file not accessible" or "bad permissions", make sure that the key file has a restrictive `0400` permission as illustrated above. -{{< /callout >}} +::: ## VM Managers @@ -219,7 +210,7 @@ LocalStack EC2 supports multiple methods to simulate the EC2 service. All tiers support the mock/CRUD capability. For advanced setups, LocalStack Pro comes with emulation capability for certain resource types so that they behave more closely like AWS. 
-The underlying method for this can be controlled using the [`EC2_VM_MANAGER`]({{< ref "configuration#ec2" >}}) configuration option. +The underlying method for this can be controlled using the [`EC2_VM_MANAGER`](/aws/capabilities/config/configuration#ec2) configuration option. You may choose between plain mocked resources, containerized or virtualized. ## Mock VM Manager @@ -228,7 +219,7 @@ With the Mock VM manager, all resources are stored as in-memory representation. This only offers the CRUD capability. This is the default VM manager in LocalStack Community edition. -To use this VM manager in LocalStack Pro, set [`EC2_VM_MANAGER`]({{< ref "configuration#ec2" >}}) to `mock`. +To use this VM manager in LocalStack Pro, set [`EC2_VM_MANAGER`](/aws/capabilities/config/configuration#ec2) to `mock`. This serves as the fallback manager if an operation is not implemented in other VM managers. @@ -238,7 +229,7 @@ LocalStack Pro supports the Docker VM manager which uses the [Docker Engine](htt This VM manager requires the Docker socket from the host machine to be mounted inside the LocalStack container at `/var/run/docker.sock`. This is the default VM manager in LocalStack Pro. -You may set [`EC2_VM_MANAGER`]({{< ref "configuration#ec2" >}}) to `docker` to explicitly use this VM manager. +You may set [`EC2_VM_MANAGER`](/aws/capabilities/config/configuration#ec2) to `docker` to explicitly use this VM manager. All launched EC2 instances have the Docker socket mounted inside them at `/var/run/docker.sock` to make Docker-in-Docker usecases possible. @@ -255,9 +246,9 @@ These can be used to launch EC2 instances which are in fact Docker containers. 
You can mark any Docker base image as AMI using the below command: -{{< command >}} -$ docker tag ubuntu:focal localstack-ec2/ubuntu-focal-ami:ami-000001 -{{< /command >}} +```bash +docker tag ubuntu:focal localstack-ec2/ubuntu-focal-ami:ami-000001 +``` The above example will make LocalStack treat the `ubuntu:focal` Docker image as an AMI with name `ubuntu-focal-ami` and ID `ami-000001`. @@ -265,22 +256,23 @@ At startup, LocalStack downloads the following AMIs that can be used to launch D - Ubuntu 22.04 `ami-df5de72bdb3b` - Amazon Linux 2023 `ami-024f768332f0` -{{< callout "note" >}} +:::note The auto download of Docker images for default AMIs can be disabled using the `EC2_DOWNLOAD_DEFAULT_IMAGES=0` configuration variable. -{{< /callout >}} +::: All LocalStack-managed Docker AMIs bear the resource tag `ec2_vm_manager:docker`. These can be listed using: -{{< command >}} -$ awslocal ec2 describe-images --filters Name=tag:ec2_vm_manager,Values=docker -{{< /command >}} +```bash +awslocal ec2 describe-images \ + --filters Name=tag:ec2_vm_manager,Values=docker +``` -{{< callout "note" >}} +:::note If an AMI does not have the `ec2_vm_manager:docker` tag, it means that it is mocked. Attempting to launch Dockerized instances using these AMIs will result in an `InvalidAMIID.NotFound` error. See [Mock VM manager](#mock-vm-manager). -{{< /callout >}} +::: AWS does not provide an API to download AMIs which prevents the use of real AWS AMIs on LocalStack. However, in certain cases it may be possible to tweak your workflow to make it work with Localstack. @@ -303,11 +295,11 @@ The execution log is generated at `/var/log/cloud-init-output.log` in the contai ### Networking -{{< callout "note" >}} +:::note Network access from host to EC2 instance containers is not possible on macOS. This is because Docker Desktop on macOS does not expose the bridge network to the host system. See [Docker Desktop Known Limitations](https://docs.docker.com/desktop/networking/#known-limitations). 
-{{< /callout >}} +::: Network addresses for Dockerized instances are allocated by the Docker daemon and can be obtained from the `PublicIpAddress` attribute. These addresses are also printed in the logs while the instance is being initialized. @@ -321,23 +313,22 @@ If not found, it installs and starts the [Dropbear](https://github.com/mkj/dropb To be able to access the instance at additional ports from the host system, you can modify the default security group and include the required ingress ports. -{{< callout "note" >}} +:::note Security group ingress rules are applied only during the creation of the Dockerized instance. Modifying a security group will not open any ports for a running instance. -{{< /callout >}} +::: The system supports up to 32 ingress ports. This constraint is in place to prevent exhausting free ports on the host. -{{< command >}} -$ awslocal ec2 authorize-security-group-ingress \ +```bash +awslocal ec2 authorize-security-group-ingress \ --group-id default \ --protocol tcp \ --port 8080 -{{< /command >}} -{{< command >}} -$ awslocal ec2 describe-security-groups --group-names default -{{< /command >}} + +awslocal ec2 describe-security-groups --group-names default +``` The port mapping details are provided in the logs when the instance starts up. @@ -350,14 +341,15 @@ The port mapping details are provided in the logs when the instance starts up. A common use case is to attach an EBS block device to an EC2 instance, which can then be used to create a custom filesystem for additional storage. This section illustrates how this functionality can be achieved with EC2 Docker instances in LocalStack. -{{< callout "note" >}} +:::note This feature is disabled by default. -Please set the [`EC2_MOUNT_BLOCK_DEVICES`]({{< ref "configuration#ec2" >}}) configuration option to enable it. -{{< /callout >}} +Please set the [`EC2_MOUNT_BLOCK_DEVICES`](/aws/capabilities/config/configuration#ec2) configuration option to enable it. 
+::: First, we create a user data script `init.sh` which creates an ext3 file system on the block device `/ebs-dev/sda1` and mounts it under `/ebs-mounted`: -{{< command >}} -$ cat > init.sh < init.sh <}} +``` We can then start an EC2 instance, specifying a block device mapping under the device name `/ebs-dev/sda1`, and pointing to our `init.sh` user data script: -{{< command >}} + +```bash $ awslocal ec2 run-instances --image-id ami-ff0fea8310f3 --count 1 --instance-type t3.nano \ --block-device-mapping '{"DeviceName":"/ebs-dev/sda1","Ebs":{"VolumeSize":10}}' \ --user-data file://init.sh -{{< /command >}} +``` Please note that, whereas real AWS uses GiB for volume sizes, LocalStack uses MiB as the unit for `VolumeSize` in the command above (to avoid creating huge files locally). -Also, by default block device images are limited to 1 GiB in size, but this can be customized by setting the [`EC2_EBS_MAX_VOLUME_SIZE`]({{< ref "configuration#ec2" >}}) config variable (defaults to `1000`). +Also, by default block device images are limited to 1 GiB in size, but this can be customized by setting the [`EC2_EBS_MAX_VOLUME_SIZE`](/aws/capabilities/config/configuration#ec2) config variable (defaults to `1000`). Once the instance is successfully started and initialized, we can first determine the container ID via `docker ps`, and then list the contents of the mounted filesystem `/ebs-mounted`, which should contain our test file named `my-test-file`: -{{< command >}} -$ docker ps + +```bash +docker ps +``` + +The output will be: + +```bash CONTAINER ID IMAGE PORTS NAMES 5c60cf72d84a ...:ami-ff0fea8310f3 19419->22/tcp localstack-ec2... 
-$ docker exec 5c60cf72d84a ls /ebs-mounted +``` + +You can then list the contents of the mounted filesystem `/ebs-mounted`, which should contain our test file named `my-test-file`: + +```bash +docker exec 5c60cf72d84a ls /ebs-mounted +``` + +The output will be: + +```bash my-test-file -{{< /command >}} +``` ### Instance Metadata Service @@ -397,19 +406,19 @@ If the `X-aws-ec2-metadata-token` header is present, LocalStack will use IMDSv2, To create an IMDSv2 token, run the following inside the EC2 container: -{{< command >}} -$ curl -X PUT "http://169.254.169.254/latest/api/token" -H "x-aws-ec2-metadata-token-ttl-seconds: 300" -{{< /command >}} +```bash +curl -X PUT "http://169.254.169.254/latest/api/token" -H "x-aws-ec2-metadata-token-ttl-seconds: 300" +``` The token can be used in subsequent requests like so: -{{< command >}} -$ curl -H "x-aws-ec2-metadata-token: " -v http://169.254.169.254/latest/meta-data/ -{{< /command >}} +```bash +curl -H "x-aws-ec2-metadata-token: " -v http://169.254.169.254/latest/meta-data/ +``` -{{< callout "note" >}} +:::note IMDS IPv6 endpoint is currently not supported. -{{< /callout >}} +::: #### Metadata Categories @@ -429,7 +438,7 @@ If you would like support for more metadata categories, please make a feature re ### Configuration -You can use the [`EC2_DOCKER_FLAGS`]({{< ref "configuration#ec2" >}}) LocalStack configuration variable to pass supplementary flags to Docker during the initiation of containerized instances. +You can use the [`EC2_DOCKER_FLAGS`](/aws/capabilities/config/configuration#ec2) LocalStack configuration variable to pass supplementary flags to Docker during the initiation of containerized instances. This allows for fine-tuned behaviours, for example, running containers in privileged mode using `--privileged` or specifying an alternate CPU platform with `--platform`. Keep in mind that this will apply to all instances that are launched in the LocalStack session. 
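For example, a sketch of how this could look in a Docker Compose setup — the service definition here is illustrative and the flag values are examples only; adjust both for your own environment:

```yaml
services:
  localstack:
    image: localstack/localstack-pro
    environment:
      - LOCALSTACK_AUTH_TOKEN=${LOCALSTACK_AUTH_TOKEN:?}
      # Example flags: run instance containers privileged, on an explicit platform
      - EC2_DOCKER_FLAGS=--privileged --platform linux/amd64
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "4566:4566"
```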
@@ -450,11 +459,11 @@ Any operation not listed below will use the mock VM manager. ## Libvirt VM Manager -{{< callout "note" >}} +:::note The Libvirt VM manager is under active development. It is currently offered as a preview and will be part of the Ultimate plan upon release. If a functionality you desire is missing, please create a feature request on the [GitHub issue tracker](https://github.com/localstack/localstack/issues/new/choose). -{{< /callout >}} +::: The Libvirt VM manager uses the [Libvirt](https://libvirt.org/index.html) API to create fully virtualized EC2 resources. This lets you create EC2 setups which closely resemble AWS EC2. @@ -463,42 +472,48 @@ Currently LocalStack Pro supports the KVM-accelerated QEMU hypervisor on Linux h Installation steps for QEMU/KVM will vary based on the Linux distribution on the host machine. On Debian/Ubuntu-based distributions, you can run: -{{< command >}} -$ sudo apt install -y qemu-kvm libvirt-daemon-system -{{< /command >}} +```bash +sudo apt install -y qemu-kvm libvirt-daemon-system +``` To check CPU support for virtualization, run: -{{< command >}} -$ kvm-ok + +``` +kvm-ok +``` + +The output will be: + +```bash INFO: /dev/kvm exists KVM acceleration can be used -{{< /command >}} +``` -{{< callout "tip" >}} +:::note You may also need to enable virtualization support at hardware level. This is often labelled as 'Virtualization Technology', 'VT-d' or 'VT-x' in UEFI/BIOS setups. -{{< /callout >}} +::: If the Docker host and Libvirt host is the same, the Libvirt socket on the host must be mounted inside the LocalStack container. This can be done by including the volume mounts when the LocalStack container is started. 
-If you are using the [Docker Compose template]({{< ref "getting-started/installation#docker-compose" >}}), include the following line in `services.localstack.volumes` list: +If you are using the [Docker Compose template](/aws/getting-started/installation#docker-compose), include the following line in `services.localstack.volumes` list: ```text "/var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock" ``` -If you are using [Docker CLI]({{< ref "getting-started/installation#docker" >}}), include the following parameter in `docker run`: +If you are using [Docker CLI](/aws/getting-started/installation#docker), include the following parameter in `docker run`: ```text -v /var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock ``` -If you are using a remote Libvirt hypervisor, you can set the [`EC2_HYPERVISOR_URI`]({{< ref "configuration#ec2" >}}) config option with a connection URI. +If you are using a remote Libvirt hypervisor, you can set the [`EC2_HYPERVISOR_URI`](/aws/capabilities/config/configuration#ec2) config option with a connection URI. -{{< callout "tip" >}} +:::note If you encounter an error like `failed to connect to the hypervisor: Permission denied`, you may need to perform additional setup on the hypervisor host. Please refer to [Libvirt Wiki](https://wiki.libvirt.org/Failed_to_connect_to_the_hypervisor.html#permission-denied) for more details. -{{< /callout >}} +::: The Libvirt VM manager currently does not have full support for persistence. Underlying virtual machines and volumes are not persisted, only their mock representations are. @@ -508,67 +523,65 @@ Underlying virtual machines and volumes are not persisted, only their mock repre All qcow2 images with cloud-init support can be used as AMIs. You can find the download links for images of popular OSs below. -{{< tabpane text=true >}} - -{{% tab "Ubuntu" %}} + + Canonical provides official Ubuntu images at [cloud-images.ubuntu.com](https://cloud-images.ubuntu.com/). 
Please use the images in qcow2 format ending in `.img`. -{{% /tab %}} + + +Debian provides cloud images for direct download at [cdimage.debian.org/cdimage/cloud](http://cdimage.debian.org/cdimage/cloud/). +Please use the `genericcloud` image in qcow2 format. + -{{< tab "Debian" >}} -

Debian provides cloud images for direct download at [cdimage.debian.org/cdimage/cloud](http://cdimage.debian.org/cdimage/cloud/).

Please use the `genericcloud` image in qcow2 format.

{{< /tab >}}

{{< tab "Fedora" >}}

The Fedora project maintains the official cloud images at [fedoraproject.org/cloud/download](https://fedoraproject.org/cloud/download).

Please use the qcow2 images.

-{{< /tab >}} - -{{% tab "Microsoft Windows" %}} +
+ An evaluation version of Windows Server 2012 R2 is provided by [Cloudbase Solutions](https://cloudbase.it/windows-cloud-images/). -{{% /tab %}} + -{{< /tabpane >}} +
LocalStack does not come preloaded with any AMIs. Compatible qcow2 images must be placed at the default Libvirt storage pool at `/var/lib/libvirt/images` on the host machine. Images must be named with the prefix `ami-` followed by at least 8 hexadecimal characters without an extension, e.g. `ami-1234abcd`. + You may need run the following command to make sure the image is registered with Libvirt: -{{< command >}} -$ virsh pool-refresh default - +```bash +virsh pool-refresh default +``` + +The output will be: + +```bash Pool default refreshed - -{{< /command >}} -{{< command >}} -$ virsh vol-list --pool default - +``` + +You can then list the images with: + +```bash +virsh vol-list --pool default +``` + +The output will be: + +```bash Name Path -------------------------------------------------------------------------------------------------------- ami-1234abcd /var/lib/libvirt/images/ami-1234abcd - -{{< /command >}} +``` Only the images that follow the above naming scheme will be recognised by LocalStack as AMIs suitable for launching virtualized instances. These AMIs will also have the resource tag `ec2_vm_manager:libvirt`. -{{< command >}} -$ awslocal ec2 describe-images --filters Name=tag:ec2_vm_manager,Values=libvirt -{{< /command >}} +```bash +awslocal ec2 describe-images --filters Name=tag:ec2_vm_manager,Values=libvirt +``` ### Instances @@ -582,25 +595,28 @@ If a key pair is provided, it will added as an authorised SSH key for this user. LocalStack shuts down all virtual machines when it terminates. The Libvirt domains and volumes are left defined and can be used for debugging, etc. -{{< callout "tip" >}} +:::note Use [Virtual Machine Manager](https://virt-manager.org/) or [virsh](https://www.libvirt.org/manpages/virsh.html) to manage the virtual machines outside of LocalStack. -{{< /callout >}} +::: The Libvirt VM manager supports basic shell scripts for user data. This can be passed to the `UserData` parameter of the `RunInstances` operation. 
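Putting the pieces together, an instance with a basic user data script could be launched like this — a minimal sketch that assumes the `ami-1234abcd` image registered earlier and the `my-key` key pair; the script contents are illustrative:

```bash
# Illustrative user data script, executed by cloud-init on first boot
cat > userdata.sh <<'EOF'
#!/bin/bash
apt-get update -y && apt-get install -y nginx
EOF

awslocal ec2 run-instances \
    --image-id ami-1234abcd \
    --instance-type t3.nano \
    --key-name my-key \
    --user-data file://userdata.sh
```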
To connect to the graphical display of the instance, first obtain the VNC address using: -{{< command >}} -$ virsh vncdisplay +```bash +virsh vncdisplay +``` + +The output will be: + +```bash 127.0.0.1:0 -{{< /command >}} +``` You can then use a compatible VNC client (e.g. [TigerVNC](https://tigervnc.org/)) to connect and interact with the virtual machine. -

-Tiger VNC -

+![Tiger VNC](/images/aws/tiger-vnc.png) ### Networking @@ -620,15 +636,15 @@ Use the following configuration at `/etc/docker/daemon.json` on the host machine Then restart the Docker daemon: -{{< command >}} -$ sudo systemctl restart docker -{{< /command >}} +```bash +sudo systemctl restart docker +``` You can now start the LocalStack container, obtain its IP address and use it from the virtualized instance. -{{< command >}} -$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' localstack_main -{{< /command >}} +```bash +docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' localstack_main +``` ### Elastic Block Stores @@ -661,9 +677,7 @@ Any operation not listed below will use the mock VM manager. The LocalStack Web Application provides a Resource Browser for managing EC2 instances. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **EC2** under the **Compute** section. -

-EC2 Resource Browser -

+![EC2 Resource Browser](/images/aws/ec2-resource-browser.png)

The Resource Browser allows you to perform the following actions:
- **Create Instance**: Create a new EC2 instance by clicking the **Launch Instance** button and specifying the AMI ID, instance type, and other parameters.

From 4999a52f6243614718b63c1adfa897fa99afecff Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 16:36:37 +0530
Subject: [PATCH 32/80] revamp ecr

---
 src/content/docs/aws/services/ecr.md | 43 ++++++++++++++--------------
 1 file changed, 21 insertions(+), 22 deletions(-)

diff --git a/src/content/docs/aws/services/ecr.md b/src/content/docs/aws/services/ecr.md
index f9b7da45..194b823c 100644
--- a/src/content/docs/aws/services/ecr.md
+++ b/src/content/docs/aws/services/ecr.md
@@ -1,6 +1,5 @@
---
title: "Elastic Container Registry (ECR)"
-linkTitle: "Elastic Container Registry (ECR)"
description: Get started with Elastic Container Registry (ECR) on LocalStack
tags: ["Base"]
persistence: supported
@@ -13,7 +12,7 @@ ECR enables you to store, manage, and deploy Docker container images to build, s
ECR integrates with other AWS services, such as Lambda, ECS, and EKS.

LocalStack allows you to use the ECR APIs in your local environment to build & push Docker images to a local ECR registry.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_ecr" >}}), which provides information on the extent of ECR's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of ECR's integration with LocalStack.

## Getting started

@@ -53,15 +52,15 @@ CMD /root/run_apache.sh

You can now build the Docker image from the `Dockerfile` using the `docker` CLI:

-{{< command >}}
-$ docker build -t localstack-ecr-image .
-{{< / command >}}
+```bash
+docker build -t localstack-ecr-image .
+``` You can run the following command to verify that the image was built successfully: -{{< command >}} -$ docker images -{{< / command >}} +```bash +docker images +``` You will see output similar to the following: @@ -77,15 +76,15 @@ To push the Docker image to ECR, you first need to create a repository. You can create an ECR repository using the [`CreateRepository`](https://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_CreateRepository.html) API. Run the following command to create a repository named `localstack-ecr-repository`: -{{< command >}} -$ awslocal ecr create-repository \ +```bash +awslocal ecr create-repository \ --repository-name localstack-ecr-repository \ --image-scanning-configuration scanOnPush=true -{{< / command >}} +``` You will see an output similar to the following: -```sh +```bash { "repository": { "repositoryArn": "arn:aws:ecr:us-east-1:000000000000:repository/localstack-ecr-repository", @@ -111,22 +110,22 @@ You will need the `repositoryUri` value to push the Docker image to the reposito To push the Docker image to the repository, you first need to tag the image with the `repositoryUri`. Run the following command to tag the image: -{{< command >}} -$ docker tag localstack-ecr-image 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/localstack-ecr-repository -{{< / command >}} +```bash +docker tag localstack-ecr-image 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/localstack-ecr-repository +``` You can now push the image to the repository using the `docker` CLI: -{{< command >}} -$ docker push 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/localstack-ecr-repository -{{< / command >}} +```bash +docker push 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/localstack-ecr-repository +``` The image will take a few seconds to push to the repository. 
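The registry host used in the `docker tag` and `docker push` commands above is predictable: it embeds the account ID and region, and resolves to LocalStack's default edge port. As a sketch (assuming the default port `4566` and the values from this guide), the URI can be derived without parsing the API response:

```bash
ACCOUNT_ID="000000000000"
REGION="us-east-1"
REPO_NAME="localstack-ecr-repository"

# LocalStack's ECR registry host mirrors the AWS naming scheme,
# but points at the LocalStack container on the default edge port 4566
REPO_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.localhost.localstack.cloud:4566/${REPO_NAME}"
echo "${REPO_URI}"
```

This matches the registry host used for tagging above, so scripts can compute image tags directly from the account ID, region, and repository name.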
You can run the following command to verify that the image was pushed successfully: -{{< command >}} -$ awslocal ecr list-images --repository-name localstack-ecr-repository -{{< / command >}} +```bash +awslocal ecr list-images --repository-name localstack-ecr-repository +``` You will see an output similar to the following: @@ -146,7 +145,7 @@ You will see an output similar to the following: The LocalStack Web Application provides a Resource Browser for managing ECR repositories and images. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **ECR** under the **Compute** section. -ECR Resource Browser +![ECR Resource Browser](/images/aws/ecr-resource-browser.png) The Resource Browser allows you to perform the following actions: From acbee58dde0ef714a005731d9d236edc7a793a10 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 16:40:39 +0530 Subject: [PATCH 33/80] revamp ecs --- src/content/docs/aws/services/ecs.md | 73 ++++++++++++++++------------ 1 file changed, 41 insertions(+), 32 deletions(-) diff --git a/src/content/docs/aws/services/ecs.md b/src/content/docs/aws/services/ecs.md index b6f80bcc..0e0e29f2 100644 --- a/src/content/docs/aws/services/ecs.md +++ b/src/content/docs/aws/services/ecs.md @@ -1,6 +1,5 @@ --- title: "Elastic Container Service (ECS)" -linkTitle: "Elastic Container Service (ECS)" tags: ["Base"] description: Get started with Elastic Container Service (ECS) on LocalStack persistence: supported @@ -13,7 +12,7 @@ It allows you to run, stop, and manage Docker containers on a cluster. ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure. LocalStack allows you to use the ECS APIs in your local environment to create & manage ECS clusters, tasks, and services. 
-The supported APIs are available on our [API coverage page]({{< ref "coverage_ecs" >}}), which provides information on the extent of ECS's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of ECS's integration with LocalStack. ## Getting Started @@ -24,16 +23,20 @@ We will demonstrate how to create an ECS service using the AWS CLI ### Create a cluster -{{< callout >}} +:::note By default, the **ECS Fargate** launch type is assumed, i.e., the local Docker engine is used for deployment of applications, and there is no need to create and manage EC2 virtual machines to run the containers. -{{< /callout >}} +::: ECS tasks and services run on a cluster. Execute the following command to create an ECS cluster named `mycluster`: -{{< command >}} -$ awslocal ecs create-cluster --cluster-name mycluster - +```bash +awslocal ecs create-cluster --cluster-name mycluster +``` + +The output will be: + +```json { "cluster": { "clusterArn": "arn:aws:ecs:us-east-1:000000000000:cluster/mycluster", @@ -51,8 +54,7 @@ $ awslocal ecs create-cluster --cluster-name mycluster ] } } - -{{< / command >}} +``` ### Create a task definition @@ -90,9 +92,13 @@ To create a task definition that runs an `ubuntu` container forever (by running and then run the following command: -{{< command >}} -$ awslocal ecs register-task-definition --cli-input-json file://task_definition.json - +```bash +awslocal ecs register-task-definition --cli-input-json file://task_definition.json +``` + +The output will be: + +```json { "taskDefinition": { "taskDefinitionArn": "arn:aws:ecs:us-east-1:000000000000:task-definition/myfamily:1", @@ -136,8 +142,7 @@ $ awslocal ecs register-task-definition --cli-input-json file://task_definition. 
"registeredAt": 1713364207.068659 } } - -{{< / command >}} +``` Task definitions are immutable, and are identified by their `family` field, and calling `register-task-definition` again with the same `family` value creates a new _version_ of a task definition. @@ -149,9 +154,13 @@ Finally we launch an ECS service using the task definition above. This will create a number of containers in replica mode meaning they are distributed over the nodes of the cluster, or in the case of Fargate, over availability zones within the region of the cluster. To create a service, execute the following command: -{{< command >}} -$ awslocal ecs create-service --service-name myservice --cluster mycluster --task-definition myfamily --desired-count 1 - +```bash +awslocal ecs create-service --service-name myservice --cluster mycluster --task-definition myfamily --desired-count 1 +``` + +The output will be: + +```json { "service": { "serviceArn": "arn:aws:ecs:us-east-1:000000000000:service/mycluster/myservice", @@ -196,8 +205,7 @@ $ awslocal ecs create-service --service-name myservice --cluster mycluster --tas "createdBy": "arn:aws:iam::000000000000:user/test" } } - -{{< / command >}} +``` You should see a new docker container has been created, using the `ubuntu:latest` image, and running the infinite loop command: @@ -212,9 +220,13 @@ CONTAINER ID IMAGE COMMAND CREATED To access the generated logs from the container, run the following command: -{{< command >}} +```bash awslocal logs filter-log-events --log-group-name myloggroup --query 'events[].message' - +``` + +The output will be: + +```json $ awslocal logs filter-log-events --log-group-name myloggroup | head -n 20 { "events": [ @@ -236,10 +248,9 @@ $ awslocal logs filter-log-events --log-group-name myloggroup | head -n 20 "logStreamName": "myprefix/ls-ecs-mycluster-75f0515e-0364-4ee5-9828-19026140c91a-0-a1afaa9d/75f0515e-0364-4ee5-9828-19026140c91a", "timestamp": 1713364216505, "message": "running", - -{{< / command >}} +``` -See our 
[CloudWatch Logs user guide]({{< ref "user-guide/aws/logs" >}}) for more details. +See our [CloudWatch Logs user guide](/aws/services/cloudwatchlogs) for more details. ## LocalStack ECS behavior @@ -250,7 +261,7 @@ If your ECS containers depend on LocalStack services, your ECS task network shou If you are running LocalStack through a `docker run` command, do not forget to enable the communication from the container to the Docker Engine API. You can provide the access by adding the following option `-v /var/run/docker.sock:/var/run/docker.sock`. -For more information regarding the configuration of LocalStack, please check the [LocalStack configuration]({{< ref "configuration" >}}) section. +For more information regarding the configuration of LocalStack, please check the [LocalStack configuration](/aws/capabilities/config/configuration) section. ## Remote debugging @@ -261,7 +272,7 @@ Or if you are working with a single container, you can set `ECS_DOCKER_FLAGS="-p ## Mounting local directories for ECS tasks In some cases, it can be useful to mount code from the host filesystem into the ECS container. -For example, to enable a quick debugging loop where you can test changes without having to build and redeploy the task's Docker image each time - similar to the [Lambda Hot Reloading]({{< ref "hot-reloading" >}}) feature in LocalStack. +For example, to enable a quick debugging loop where you can test changes without having to build and redeploy the task's Docker image each time - similar to the [Lambda Hot Reloading](/aws/services/lambda#hot-reloading) feature in LocalStack. In order to leverage code mounting, we can use the ECS bind mounts feature, which is covered in the [AWS Bind mounts documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bind-mounts.html). 
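Concretely, a bind mount is declared in the task definition through a `volumes` entry with a `host.sourcePath`, plus a matching `mountPoints` entry in the container definition. A sketch — the paths and the container name are hypothetical placeholders to adapt to your project, loosely based on the `myfamily` definition used earlier:

```json
{
  "family": "myfamily",
  "containerDefinitions": [
    {
      "name": "mycontainer",
      "image": "ubuntu:latest",
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "local-code",
          "containerPath": "/app"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "local-code",
      "host": {
        "sourcePath": "/path/on/host/app"
      }
    }
  ]
}
```

With such a definition, edits under `/path/on/host/app` become visible inside the running container at `/app` without rebuilding or redeploying the image.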

@@ -336,14 +347,14 @@ services:
 - ~/.docker/config.json:/config.json:ro
```

-Alternatively, you can download the image from the private registry before using it or employ an [Initialization Hook]({{< ref "/references/init-hooks" >}}) to install the Docker client and use these credentials to download the image.
+Alternatively, you can download the image from the private registry before using it or employ an [Initialization Hook](/aws/capabilities/config/initialization-hooks) to install the Docker client and use these credentials to download the image.

## FireLens for ECS Tasks

-{{< callout >}}
+:::note
FireLens emulation is currently available as part of the **LocalStack Enterprise** plan.
If you'd like to try it out, please [contact us](https://www.localstack.cloud/demo) to request access.
-{{< /callout >}}
+:::

LocalStack's ECS emulation supports custom log routing via FireLens.
FireLens allows the ECS service to manage the configuration of the logging driver of application containers, and to create the proper configuration for the `fluentbit`/`fluentd` logging layer.

Additionally, you cannot use ECS on Kubernetes with FireLens.

## Resource Browser

The LocalStack Web Application provides a Resource Browser for managing ECS clusters & task definitions.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **ECS** under the **Compute** section.

-ECS Resource Browser
-
-
+![ECS Resource Browser](/images/aws/ecs-resource-browser.png) The Resource Browser allows you to perform the following actions: From 9df084372d4fde6119e0815ddddd382650149c56 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 16:41:23 +0530 Subject: [PATCH 34/80] revamp efs --- src/content/docs/aws/services/efs.md | 33 ++++++++++++++-------------- 1 file changed, 16 insertions(+), 17 deletions(-) diff --git a/src/content/docs/aws/services/efs.md b/src/content/docs/aws/services/efs.md index 69571e11..3cea3797 100644 --- a/src/content/docs/aws/services/efs.md +++ b/src/content/docs/aws/services/efs.md @@ -1,6 +1,5 @@ --- title: "Elastic File System (EFS)" -linkTitle: "Elastic File System (EFS)" description: Get started with Elastic File System (EFS) on LocalStack tags: ["Ultimate"] --- @@ -12,7 +11,7 @@ EFS offers scalable and shared file storage that can be accessed by multiple EC2 EFS utilizes the Network File System protocol to allow it to be used as a data source for various applications and workloads. LocalStack allows you to use the EFS APIs in your local environment to create local file systems, lifecycle configurations, and file system policies. -The supported APIs are available on our [API coverage page]({{< ref "coverage_efs" >}}), which provides information on the extent of EFS's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of EFS's integration with LocalStack. ## Getting started @@ -26,13 +25,13 @@ We will demonstrate how to create a file system, apply an IAM resource-based pol To create a new, empty file system you can use the [`CreateFileSystem`](https://docs.aws.amazon.com/goto/WebAPI/elasticfilesystem-2015-02-01/CreateFileSystem) API. 
Run the following command to create a new file system:

-{{< command >}}
-$ awslocal efs create-file-system \
+```bash
+awslocal efs create-file-system \
     --performance-mode generalPurpose \
     --throughput-mode bursting \
     --encrypted \
     --tags Key=Name,Value=my-file-system
-{{< /command >}}
+```

The following output would be retrieved:

@@ -58,9 +57,9 @@ The following output would be retrieved:
You can also describe the locally available file systems using the [`DescribeFileSystems`](https://docs.aws.amazon.com/efs/latest/ug/API_DescribeFileSystems.html) API.
Run the following command to describe the local file systems available:

-{{< command >}}
-$ awslocal efs describe-file-systems
-{{< /command >}}
+```bash
+awslocal efs describe-file-systems
+```

You can alternatively pass the `--file-system-id` parameter to the `describe-file-systems` command to retrieve information about a specific file system in AWS CLI.

@@ -69,19 +68,19 @@ You can alternatively pass the `--file-system-id` parameter to the `describe-fil

You can apply an EFS `FileSystemPolicy` to an EFS file system using the [`PutFileSystemPolicy`](https://docs.aws.amazon.com/efs/latest/ug/API_PutFileSystemPolicy.html) API.
Run the following command to apply a policy to the file system created in the previous step:

-{{< command >}}
-$ awslocal efs put-file-system-policy \
+```bash
+awslocal efs put-file-system-policy \
     --file-system-id <file-system-id> \
     --policy "{\"Version\":\"2012-10-17\",\"Id\":\"ExamplePolicy01\",\"Statement\":[{\"Sid\":\"ExampleStatement01\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"*\"},\"Action\":[\"elasticfilesystem:ClientMount\",\"elasticfilesystem:ClientWrite\"],\"Resource\":\"arn:aws:elasticfilesystem:us-east-1:000000000000:file-system/fs-34feac549e66b814\"}]}"
-{{< /command >}}
+```

You can list the file system policies using the [`DescribeFileSystemPolicy`](https://docs.aws.amazon.com/efs/latest/ug/API_DescribeFileSystemPolicy.html) API.
Run the following command to list the file system policies:

-{{< command >}}
-$ awslocal efs describe-file-system-policy \
+```bash
+awslocal efs describe-file-system-policy \
     --file-system-id <file-system-id>
-{{< /command >}}
+```

Replace `<file-system-id>` with the ID of the file system you want to list the policies for.
The output will return the `FileSystemPolicy` for the specified EFS file system.

@@ -91,11 +90,11 @@ The output will return the `FileSystemPolicy` for the specified EFS file system.

You can create a lifecycle configuration for an EFS file system using the [`PutLifecycleConfiguration`](https://docs.aws.amazon.com/efs/latest/ug/API_PutLifecycleConfiguration.html) API.
Run the following command to create a lifecycle configuration for the file system created in the previous step:

-{{< command >}}
-$ awslocal efs put-lifecycle-configuration \
+```bash
+awslocal efs put-lifecycle-configuration \
     --file-system-id <file-system-id> \
     --lifecycle-policies "{\"TransitionToIA\":\"AFTER_30_DAYS\"}"
-{{< /command >}}
+```

The following output would be retrieved:

From a7e5a17a6b3be8db829ab9844c6ad70eca7f64bb Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 16:53:42 +0530
Subject: [PATCH 35/80] revamp eks

---
 src/content/docs/aws/services/eks.md | 238 +++++++++++++++------------
 1 file changed, 134 insertions(+), 104 deletions(-)

diff --git a/src/content/docs/aws/services/eks.md b/src/content/docs/aws/services/eks.md
index 53518df5..b19f50ce 100644
--- a/src/content/docs/aws/services/eks.md
+++ b/src/content/docs/aws/services/eks.md
@@ -1,6 +1,5 @@
---
title: "Elastic Kubernetes Service (EKS)"
-linkTitle: "Elastic Kubernetes Service (EKS)"
description: Get started with Elastic Kubernetes Service (EKS) on LocalStack
tags: ["Ultimate"]
persistence: supported with limitations
@@ -12,7 +11,7 @@ Elastic Kubernetes Service (EKS) is a managed Kubernetes service that makes it e
Kubernetes is an open-source system for automating containerized applications' deployment, scaling, and management.
LocalStack allows you to use the EKS APIs in your local environment to spin up embedded Kubernetes clusters in your local Docker engine or use an existing Kubernetes installation you can access from your local machine (defined in `$HOME/.kube/config`). -The supported APIs are available on our [API coverage page]({{< ref "coverage_eks" >}}), which provides information on the extent of EKS's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of EKS's integration with LocalStack. ## Getting started @@ -31,12 +30,12 @@ In most cases, the installation is automatic, eliminating the need for any manua You can create a new cluster using the [`CreateCluster`](https://docs.aws.amazon.com/eks/latest/APIReference/API_CreateCluster.html) API. Run the following command: -{{< command >}} -$ awslocal eks create-cluster \ +```bash +awslocal eks create-cluster \ --name cluster1 \ --role-arn "arn:aws:iam::000000000000:role/eks-role" \ --resources-vpc-config "{}" -{{}} +``` You can see an output similar to the following: @@ -59,30 +58,37 @@ You can see an output similar to the following: } ``` -{{< callout >}} +:::note When setting up a local EKS cluster, if you encounter a `"status": "FAILED"` in the command output and see `Unable to start EKS cluster` in LocalStack logs, remove or rename the `~/.kube/config` file on your machine and retry. The CLI mounts this file automatically for CLI versions before `3.7`, leading EKS to assume you intend to use the specified cluster, a feature that has specific requirements. -{{< /callout >}} +::: You can use the `docker` CLI to check that some containers have been created: -{{< command >}} -$ docker ps - +```bash +docker ps +``` + +The output will be: + +```bash CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ... 
b335f7f089e4   rancher/k3d-proxy:5.0.1-rc.1     "/bin/sh -c nginx-pr…"   1 minute ago   Up 1 minute   0.0.0.0:8081->80/tcp, 0.0.0.0:44959->6443/tcp   k3d-cluster1-serverlb
f05770ec8523   rancher/k3s:v1.21.5-k3s2         "/bin/k3s server --t…"   1 minute ago   Up 1 minute   ...
```

After successfully creating and initializing the cluster, we can easily find the server endpoint, using the [`DescribeCluster`](https://docs.aws.amazon.com/eks/latest/APIReference/API_DescribeCluster.html) API.
Run the following command:

```bash
awslocal eks describe-cluster --name cluster1
```

The output will be:

```json
{
    "cluster": {
        "name": "cluster1",
        "arn": "arn:aws:eks:us-east-1:000000000000:cluster/cluster1",
        "createdAt": "2022-04-13T17:12:39.738000+02:00",
        "roleArn": "arn:aws:iam::000000000000:role/eks-role",
        "resourcesVpcConfig": {
            "securityGroupIds": [],
            "endpointPublicAccess": false,
            "publicAccessCidrs": []
        },
        "identity": {
            "oidc": {
                "issuer": "https://localhost.localstack.cloud/eks-oidc"
            }
        },
        "status": "ACTIVE",
        "endpoint": "https://localhost.localstack.cloud:4511",
        "certificateAuthority": {
            "data": ""
        },
        "clientRequestToken": "d188f578-b353-416b-b309-5d8c76ecc4e2"
    }
}
```

### Utilizing ECR Images within EKS

You can now use ECR (Elastic Container Registry) images within your EKS environment.

#### Initial configuration

To modify the return value of resource URIs for most services, including ECR, you can utilize the `LOCALSTACK_HOST` variable in the [configuration](/aws/capabilities/config/configuration).
By default, ECR returns a `repositoryUri` starting with `localhost.localstack.cloud`, such as: `localhost.localstack.cloud:<port>/<repository-name>`.

:::note
In this section, we assume that `localhost.localstack.cloud` resolves in your environment, and LocalStack is connected to a non-default bridge network.
For more information, refer to the article about [DNS rebind protection](/aws/tooling/dns-server#dns-rebind-protection).
If the domain `localhost.localstack.cloud` does not resolve on your host, you can still proceed by setting `LOCALSTACK_HOST=localhost` (not recommended).
LocalStack will take care of the DNS resolution of `localhost.localstack.cloud` within ECR itself, allowing you to use the `localhost:<port>/<repository-name>` URI for tagging and pushing the image on your host.
:::

Once you have configured this correctly, you can seamlessly use your ECR image within EKS as expected.

For the purpose of this guide, we will retag the `nginx` image to be pushed to a local ECR repository.

You can create a new ECR repository using the [`CreateRepository`](https://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_CreateRepository.html) API.
Run the following command:

```bash
awslocal ecr create-repository --repository-name "fancier-nginx"
```

The output will be:

```json
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-east-1:000000000000:repository/fancier-nginx",
        "registryId": "000000000000",
        "repositoryName": "fancier-nginx",
        "repositoryUri": "000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx",
        "createdAt": "2022-04-13T14:22:46+02:00",
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": false
        },
        "encryptionConfiguration": {
            "encryptionType": "AES256"
        }
    }
}
```

You can now pull the `nginx` image from Docker Hub using the `docker` CLI:

```bash
docker pull nginx
```

You can further tag the image to be pushed to ECR:

```bash
docker tag nginx 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx
```

Finally, you can push the image to local ECR:

```bash
docker push 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx
```

Now, let us set up the EKS cluster using the image pushed to local ECR.
Next, we can configure `kubectl` to use the EKS cluster, using the [`UpdateKubeconfig`](https://docs.aws.amazon.com/eks/latest/APIReference/API_UpdateClusterConfig.html) API. Run the following command: -{{< command >}} -$ awslocal eks update-kubeconfig --name cluster1 && \ +```bash +awslocal eks update-kubeconfig --name cluster1 && \ kubectl config use-context arn:aws:eks:us-east-1:000000000000:cluster/cluster1 - +``` + +The output will be: + +```bash ... Added new context arn:aws:eks:us-east-1:000000000000:cluster/cluster1 to /home/localstack/.kube/config Switched to context "arn:aws:eks:us-east-1:000000000000:cluster/cluster1". ... - -{{< / command >}} +``` You can now go ahead and add a deployment configuration for the `fancier-nginx` image. -{{< command >}} -$ cat <}} +``` You can now describe the pod to see if the image was pulled successfully: -{{< command >}} -$ kubectl describe pod fancier-nginx -{{< / command >}} +```bash +kubectl describe pod fancier-nginx +``` In the events, we can see that the pull from ECR was successful: @@ -230,9 +241,9 @@ In the events, we can see that the pull from ECR was successful: Normal Pulled 10s kubelet Successfully pulled image "000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx:latest" in 2.412775896s ``` -{{< callout "tip" >}} -Public Docker images from `registry.k8s.io` can be pulled without additional configuration from EKS nodes, but if you pull images from any other locations that resolve to S3 you can configure `DNS_NAME_PATTERNS_TO_RESOLVE_UPSTREAM=\.s3.*\.amazonaws\.com` in your [configuration]({{< ref "configuration" >}}). -{{< /callout >}} +:::note +Public Docker images from `registry.k8s.io` can be pulled without additional configuration from EKS nodes, but if you pull images from any other locations that resolve to S3 you can configure `DNS_NAME_PATTERNS_TO_RESOLVE_UPSTREAM=\.s3.*\.amazonaws\.com` in your [configuration](/aws/capabilities/config/configuration). 
+::: ### Configuring an Ingress for your services @@ -240,8 +251,8 @@ To make an EKS service externally accessible, it is necessary to create an Ingre For our sample deployment, we can create an `nginx` Kubernetes service by applying the following configuration: -{{< command >}} -$ cat <}} +``` Use the following ingress configuration to expose the `nginx` service on path `/test123`: -{{< command >}} -$ cat <}} +``` You will be able to send a request to `nginx` via the load balancer port `8081` from the host: -{{< command >}} -$ curl http://localhost:8081/test123 - +```bash +curl http://localhost:8081/test123 +``` + +The output will be: + +```bash ...
nginx/1.21.6
... -
-{{< / command >}} +``` -{{< callout "tip" >}} +:::note You can customize the Load Balancer port by configuring `EKS_LOADBALANCER_PORT` in your environment. -{{< /callout >}} +::: ### Enabling HTTPS with local SSL/TLS certificate for the Ingress @@ -325,10 +339,10 @@ Once you have deployed your service using the mentioned ingress configuration, i Remember that the ingress controller does not support HTTP/HTTPS multiplexing within the same Ingress. Consequently, if you want your service to be accessible via HTTP and HTTPS, you must create two separate Ingress definitions — one Ingress for HTTP and another for HTTPS. -{{< callout >}} +:::note The `ls-secret-tls` secret is created in the `default` namespace. If your ingress and services are residing in a custom namespace, it is essential to copy the secret to that custom namespace to make use of it. -{{< /callout >}} +::: ## Use an existing Kubernetes installation @@ -343,25 +357,29 @@ volumes: When using the LocalStack CLI, please configure the `DOCKER_FLAGS` to mount the kubeconfig into the container: -{{< command >}} -$ DOCKER_FLAGS="-v ${HOME}/.kube/config:/root/.kube/config" localstack start -{{}} +```bash +DOCKER_FLAGS="-v ${HOME}/.kube/config:/root/.kube/config" localstack start +``` -{{< callout >}} +:::note Using an existing Kubernetes installation is currently only possible when the authentication with the cluster uses X509 client certificates: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#x509-client-certificates -{{< /callout >}} +::: In recent versions of Docker, you can enable Kubernetes as an embedded service running inside Docker. The picture below illustrates the Kubernetes settings in Docker for macOS (similar configurations apply for Linux/Windows). By default, the Kubernetes API is assumed to run on the local TCP port `6443`. 
-Kubernetes in Docker +![Kubernetes in Docker](/images/aws/kubernetes.png) You can create an EKS Cluster configuration using the following command: -{{< command >}} -$ awslocal eks create-cluster --name cluster1 --role-arn arn:aws:iam::000000000000:role/eks-role --resources-vpc-config '{}' - +```bash +awslocal eks create-cluster --name cluster1 --role-arn arn:aws:iam::000000000000:role/eks-role --resources-vpc-config '{}' +``` + +The output will be: + +```json { "cluster": { "name": "cluster1", @@ -372,21 +390,23 @@ $ awslocal eks create-cluster --name cluster1 --role-arn arn:aws:iam::0000000000 ... } } - -{{}} +``` And check that it was created with: -{{< command >}} -$ awslocal eks list-clusters - +```bash +awslocal eks list-clusters +``` + +The output will be: + +```json { "clusters": [ "cluster1" ] } - -{{< / command >}} +``` To interact with your Kubernetes cluster, configure your Kubernetes client (such as `kubectl` or other SDKs) to point to the `endpoint` provided in the `create-cluster` output mentioned earlier. However, depending on whether you're calling the Kubernetes API from your local machine or from within a Lambda function, you might need to use different endpoint URLs. 
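As a sketch, the simplest route from your local machine is to let the AWS CLI write a kubeconfig entry for the cluster and then talk to it with `kubectl` — this assumes a running LocalStack and the `cluster1` created above:

```bash
# Writes/updates the context for cluster1 in ~/.kube/config
awslocal eks update-kubeconfig --name cluster1

# kubectl now talks to the endpoint reported by create-cluster
kubectl get nodes
```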
@@ -403,12 +423,12 @@ If you need to customize the port or expose the load balancer on multiple ports, For instance, if you want to expose the load balancer on ports 8085 and 8086, you can use the following tag definition when creating the cluster: -{{< command >}} -$ awslocal eks create-cluster \ +```bash +awslocal eks create-cluster \ --name cluster1 \ --role-arn arn:aws:iam::000000000000:role/eks-role \ --resources-vpc-config '{}' --tags '{"_lb_ports_":"8085,8086"}' -{{< /command >}} +``` ## Routing Traffic to Services on Different Endpoints @@ -419,8 +439,8 @@ In such cases, path-based routing may not be ideal if you need the services to b To address this requirement, we recommend utilizing host-based routing rules, as demonstrated in the example below: -{{< command >}} -$ cat <}} +``` The example defines routing rules for two local endpoints - the first rule points to a service `service-1` accessible under `/v1`, and the second rule points to a service `service-2` accessible under the same path `/v1`. @@ -461,16 +481,25 @@ Similarly, the second rule points to a service named `service-2`, also accessibl This approach enables us to access the two distinct services using the same path and port number, but with different host names. This host-based routing mechanism ensures that each service is uniquely identified based on its designated host name, allowing for a uniform and organized way of accessing multiple services within the EKS cluster. -{{< command >}} -$ curl http://eks-service-1.localhost.localstack.cloud:8081/v1 - +```bash +curl http://eks-service-1.localhost.localstack.cloud:8081/v1 +``` + +The output will be: + +```bash ... [output of service 1] - -$ curl http://eks-service-2.localhost.localstack.cloud:8081/v1 - +``` + +```bash +curl http://eks-service-2.localhost.localstack.cloud:8081/v1 +``` + +The output will be: + +```bash ... 
[output of service 2]
-
-{{< /command >}}
+```

It is important to note that the host names `eks-service-1.localhost.localstack.cloud` and `eks-service-2.localhost.localstack.cloud` both resolve to `127.0.0.1` (localhost).
Consequently, you can utilize them to communicate with your service endpoints and distinguish between different services within the Kubernetes load balancer.
@@ -489,13 +518,17 @@ If you have specific directories that you want to mount from your local developm
When creating your cluster, include the special tag `_volume_mount_`, which allows you to define the desired volume mounting configuration from your local development machine to the cluster nodes.

-{{< command >}}
-$ awslocal eks create-cluster \
+```bash
+awslocal eks create-cluster \
  --name cluster1 \
  --role-arn arn:aws:iam::000000000000:role/eks-role \
  --resources-vpc-config '{}' \
  --tags '{"_volume_mount_":"/path/on/host:/path/on/node"}'
-
+```
+
+The output will be:
+
+```json
{
    "cluster": {
        "name": "cluster1",
@@ -509,13 +542,12 @@ $ awslocal eks create-cluster \
        ...
    }
}
-
-{{< / command >}}
+```

-{{< callout >}}
+:::note
Note that the tag was previously referred to as `__k3d_volume_mount__`, but it has now been renamed to `_volume_mount_`.
As a result, the tag name `__k3d_volume_mount__` is considered deprecated and will be removed in an upcoming release.
-{{< /callout >}}
+:::

After creating your cluster with the `_volume_mount_` tag, you can create your pods with volume mounts as usual.
The configuration for the volume mounts can be set up like this:

@@ -572,9 +604,7 @@ Users can specify the desired version when creating an EKS cluster in LocalStack

The LocalStack Web Application provides a Resource Browser for managing EKS clusters.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **EKS** under the **Compute** section.

-EKS Resource Browser
-
-
+![EKS Resource Browser](/images/aws/eks-resource-browser.png) The Resource Browser allows you to perform the following actions: From 22c4664426ee7bbe86b9e272e482d590538df976 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 16:57:37 +0530 Subject: [PATCH 36/80] revamp elasticache --- src/content/docs/aws/services/elasticache.md | 68 +++++++++++--------- 1 file changed, 36 insertions(+), 32 deletions(-) diff --git a/src/content/docs/aws/services/elasticache.md b/src/content/docs/aws/services/elasticache.md index 0904ea10..2cda83d3 100644 --- a/src/content/docs/aws/services/elasticache.md +++ b/src/content/docs/aws/services/elasticache.md @@ -1,6 +1,5 @@ --- title: "ElastiCache" -linkTitle: "ElastiCache" tags: ["Base"] description: Get started with AWS ElastiCache on LocalStack persistence: supported @@ -15,7 +14,7 @@ It supports popular open-source caching engines like Redis and Memcached (LocalS providing a means to efficiently store and retrieve frequently accessed data with minimal latency. LocalStack supports ElastiCache via the Pro offering, allowing you to use the ElastiCache APIs in your local environment. -The supported APIs are available on our [API Coverage Page]({{< ref "references/coverage/coverage_elasticache" >}}), +The supported APIs are available on our [API Coverage Page](), which provides information on the extent of ElastiCache integration with LocalStack. ## Getting started @@ -26,82 +25,87 @@ This guide is designed for users new to ElastiCache and assumes basic knowledge After starting LocalStack Pro, you can create a cluster with the following command. -{{< command >}} -$ awslocal elasticache create-cache-cluster \ +```bash +awslocal elasticache create-cache-cluster \ --cache-cluster-id my-redis-cluster \ --cache-node-type cache.t2.micro \ --engine redis \ --num-cache-nodes 1 -{{< /command>}} +``` Wait for it to be available, then you can use the cluster endpoint for Redis operations. 
-{{< command >}}
-$ awslocal elasticache describe-cache-clusters --show-cache-node-info --query "CacheClusters[0].CacheNodes[0].Endpoint"
+```bash
+awslocal elasticache describe-cache-clusters --show-cache-node-info --query "CacheClusters[0].CacheNodes[0].Endpoint"
+```
+
+The output will be:
+
+```json
{
    "Address": "localhost.localstack.cloud",
    "Port": 4510
}
-{{< /command >}}
+```

-The cache cluster uses a random port of the [external service port range]({{< ref "external-ports" >}}).
+The cache cluster uses a random port from the [external service port range]().
Use this port number to connect to the Redis instance like so:

-{{< command >}}
-$ redis-cli -p 4510 ping
+```bash
+redis-cli -p 4510 ping
PONG
-$ redis-cli -p 4510 set foo bar
+redis-cli -p 4510 set foo bar
OK
-$ redis-cli -p 4510 get foo
+redis-cli -p 4510 get foo
"bar"
-{{< / command >}}
+```

### Replication groups in non-cluster mode

-{{< command >}}
-$ awslocal elasticache create-replication-group \
+```bash
+awslocal elasticache create-replication-group \
  --replication-group-id my-redis-replication-group \
  --replication-group-description 'my replication group' \
  --engine redis \
  --cache-node-type cache.t2.micro \
  --num-cache-clusters 3
-{{< /command >}}
+```

Wait for it to be available.
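If you want to script that wait step, a simple polling loop does the job. The sketch below is illustrative only: the `describe` callable stands in for an `awslocal elasticache describe-replication-groups` invocation and is stubbed here with canned responses.

```python
import time

def wait_until_available(describe, timeout=300, interval=5):
    """Poll the describe callable until the replication group reports 'available'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = describe()["ReplicationGroups"][0]["Status"]
        if status == "available":
            return status
        time.sleep(interval)
    raise TimeoutError("replication group did not become available in time")

# Canned responses standing in for successive API calls.
responses = iter([
    {"ReplicationGroups": [{"Status": "creating"}]},
    {"ReplicationGroups": [{"Status": "available"}]},
])
print(wait_until_available(lambda: next(responses), interval=0))
```

In a real script you would pass a callable that shells out to `awslocal` (or uses an SDK client) instead of the stub.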
You should see one node group when running the following command:

-{{< command >}}
-$ awslocal elasticache describe-replication-groups --replication-group-id my-redis-replication-group
-{{< /command >}}
+```bash
+awslocal elasticache describe-replication-groups --replication-group-id my-redis-replication-group
+```

To retrieve the primary endpoint:

-{{< command >}}
-$ awslocal elasticache describe-replication-groups --replication-group-id my-redis-replication-group \
+```bash
+awslocal elasticache describe-replication-groups --replication-group-id my-redis-replication-group \
  --query "ReplicationGroups[0].NodeGroups[0].PrimaryEndpoint"
-{{< /command >}}
+```

### Replication groups in cluster mode

The cluster mode is enabled by using `--num-node-groups` and `--replicas-per-node-group`:

-{{< command >}}
-$ awslocal elasticache create-replication-group \
+```bash
+awslocal elasticache create-replication-group \
  --engine redis \
  --replication-group-id my-clustered-redis-replication-group \
  --replication-group-description 'my clustered replication group' \
  --cache-node-type cache.t2.micro \
  --num-node-groups 2 \
  --replicas-per-node-group 2
-{{< /command >}}
+```

Note that the group nodes do not have a primary endpoint.
Instead, they have a `ConfigurationEndpoint`, which you can connect to using `redis-cli -c`, where `-c` is for cluster mode.
-{{< command >}} -$ awslocal elasticache describe-replication-groups --replication-group-id my-clustered-redis-replication-group \ +```bash +awslocal elasticache describe-replication-groups --replication-group-id my-clustered-redis-replication-group \ --query "ReplicationGroups[0].ConfigurationEndpoint" -{{< /command >}} +``` ## Container mode @@ -119,11 +123,11 @@ You can access the Resource Browser by opening the LocalStack Web Application in In the ElastiCache resource browser you can: * List and remove existing cache clusters - {{< img src="elasticache-resource-browser-list.png" alt="Create a ElastiCache cluster in the resource browser" >}} + ![List existing cache clusters](/images/aws/elasticache-resource-browser-list.png) * View details of cache clusters - {{< img src="elasticache-resource-browser-show.png" alt="Create a ElastiCache cluster in the resource browser" >}} + ![View details of cache clusters](/images/aws/elasticache-resource-browser-show.png) * Create new cache clusters - {{< img src="elasticache-resource-browser-create.png" alt="Create a ElastiCache cluster in the resource browser" >}} + ![Create a ElastiCache cluster in the resource browser](/images/aws/elasticache-resource-browser-create.png) ## Current Limitations From 6ee29f272ad9f92d7748ed78078062f2c2ac94db Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 16:59:12 +0530 Subject: [PATCH 37/80] revamp eb --- .../docs/aws/services/elasticbeanstalk.md | 42 +++++++++---------- 1 file changed, 20 insertions(+), 22 deletions(-) diff --git a/src/content/docs/aws/services/elasticbeanstalk.md b/src/content/docs/aws/services/elasticbeanstalk.md index 95950ae4..937a7ff8 100644 --- a/src/content/docs/aws/services/elasticbeanstalk.md +++ b/src/content/docs/aws/services/elasticbeanstalk.md @@ -1,8 +1,6 @@ --- title: "Elastic Beanstalk" -linkTitle: "Elastic Beanstalk" -description: > - Get started with Elastic Beanstalk (EB) on LocalStack +description: Get started with Elastic Beanstalk 
(EB) on LocalStack tags: ["Ultimate"] --- @@ -13,7 +11,7 @@ Elastic Beanstalk orchestrates various AWS services, including EC2, S3, SNS, and Elastic Beanstalk also supports various application environments, such as Java, .NET, Node.js, PHP, Python, Ruby, Go, and Docker. LocalStack allows you to use the Elastic Beanstalk APIs in your local environment to create and manage applications, environments and versions. -The supported APIs are available on our [API coverage page]({{< ref "coverage_elasticbeanstalk" >}}), which provides information on the extent of Elastic Beanstalk's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Elastic Beanstalk's integration with LocalStack. ## Getting started @@ -27,10 +25,10 @@ We will demonstrate how to create an Elastic Beanstalk application and environme To create an Elastic Beanstalk application, you can use the [`CreateApplication`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_CreateApplication.html) API. Run the following command to create an application named `my-app`: -{{< command >}} -$ awslocal elasticbeanstalk create-application \ +```bash +awslocal elasticbeanstalk create-application \ --application-name my-app -{{< /command >}} +``` The following output would be retrieved: @@ -47,21 +45,21 @@ The following output would be retrieved: You can also use the [`DescribeApplications`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_DescribeApplications.html) API to retrieve information about your application. 
Run the following command to retrieve information about the `my-app` application we created earlier:

-{{< command >}}
-$ awslocal elasticbeanstalk describe-applications \
+```bash
+awslocal elasticbeanstalk describe-applications \
  --application-names my-app
-{{< /command >}}
+```

### Create an environment

To create an Elastic Beanstalk environment, you can use the [`CreateEnvironment`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_CreateEnvironment.html) API.
Run the following command to create an environment named `my-environment`:

-{{< command >}}
-$ awslocal elasticbeanstalk create-environment \
+```bash
+awslocal elasticbeanstalk create-environment \
  --application-name my-app \
  --environment-name my-environment
-{{< /command >}}
+```

The following output would be retrieved:

@@ -78,21 +76,21 @@ The following output would be retrieved:

 You can also use the [`DescribeEnvironments`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_DescribeEnvironments.html) API to retrieve information about your environment.
 Run the following command to retrieve information about the `my-environment` environment we created earlier:

-{{< command >}}
-$ awslocal elasticbeanstalk describe-environments \
+```bash
+awslocal elasticbeanstalk describe-environments \
  --environment-names my-environment
-{{< /command >}}
+```

### Create an application version

To create an Elastic Beanstalk application version, you can use the [`CreateApplicationVersion`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_CreateApplicationVersion.html) API.
Run the following command to create an application version named `v1`:

-{{< command >}}
-$ awslocal elasticbeanstalk create-application-version \
+```bash
+awslocal elasticbeanstalk create-application-version \
  --application-name my-app \
  --version-label v1
-{{< /command >}}
+```

The following output would be retrieved:

@@ -110,10 +108,10 @@ The following output would be retrieved:

 You can also use the [`DescribeApplicationVersions`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_DescribeApplicationVersions.html) API to retrieve information about your application version.
 Run the following command to retrieve information about the `v1` application version we created earlier:

-{{< command >}}
-$ awslocal elasticbeanstalk describe-application-versions \
+```bash
+awslocal elasticbeanstalk describe-application-versions \
  --application-name my-app
-{{< /command >}}
+```

## Current Limitations

From f9e8aebf7963802211780672b51926beabf5217b Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 17:02:02 +0530
Subject: [PATCH 38/80] revamp elastictranscoder

---
 .../docs/aws/services/elastictranscoder.md    | 29 +++++++++----------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/src/content/docs/aws/services/elastictranscoder.md b/src/content/docs/aws/services/elastictranscoder.md
index 4094f454..af826998 100644
--- a/src/content/docs/aws/services/elastictranscoder.md
+++ b/src/content/docs/aws/services/elastictranscoder.md
@@ -1,6 +1,5 @@
 ---
 title: "Elastic Transcoder"
-linkTitle: "Elastic Transcoder"
 description: Get started with Elastic Transcoder on LocalStack
 tags: ["Base"]
 ---
@@ -12,7 +11,7 @@ Elastic Transcoder manages the underlying resources, ensuring high availability
 It also supports a wide range of input and output formats, enabling users to efficiently process and deliver video content at scale.
 
 LocalStack allows you to mock the Elastic Transcoder APIs in your local environment.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_elastictranscoder" >}}), which provides information on the extent of Elastic Transcoder's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Elastic Transcoder's integration with LocalStack.

## Getting started

@@ -26,23 +25,23 @@ We will demonstrate how to create an Elastic Transcoder pipeline, read the pipel

You can create S3 buckets using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command.
Execute the following command to create two buckets named `elasticbucket` and `outputbucket`:

-{{< command >}}
-$ awslocal s3 mb s3://elasticbucket
-$ awslocal s3 mb s3://outputbucket
-{{< /command >}}
+```bash
+awslocal s3 mb s3://elasticbucket
+awslocal s3 mb s3://outputbucket
+```

### Create an Elastic Transcoder pipeline

You can create an Elastic Transcoder pipeline using the [`CreatePipeline`](https://docs.aws.amazon.com/elastictranscoder/latest/developerguide/create-pipeline.html) API.
Execute the following command to create a pipeline named `Default`:

-{{< command >}}
-$ awslocal elastictranscoder create-pipeline \
+```bash
+awslocal elastictranscoder create-pipeline \
  --name Default \
  --input-bucket elasticbucket \
  --output-bucket outputbucket \
  --role arn:aws:iam::000000000000:role/Elastic_Transcoder_Default_Role
-{{< /command >}}
+```

The following output would be retrieved:

@@ -80,9 +79,9 @@ The following output would be retrieved:

 You can list all pipelines using the [`ListPipelines`](https://docs.aws.amazon.com/elastictranscoder/latest/developerguide/list-pipelines.html) API.
Execute the following command to list all pipelines:

-{{< command >}}
-$ awslocal elastictranscoder list-pipelines
-{{< /command >}}
+```bash
+awslocal elastictranscoder list-pipelines
+```

The following output would be retrieved:

@@ -121,9 +120,9 @@ The following output would be retrieved:

 You can read a pipeline using the [`ReadPipeline`](https://docs.aws.amazon.com/elastictranscoder/latest/developerguide/read-pipeline.html) API.
 Execute the following command to read the pipeline with the ID `0998507242379-vltecz`:

-{{< command >}}
-$ awslocal elastictranscoder read-pipeline --id 0998507242379-vltecz
-{{< /command >}}
+```bash
+awslocal elastictranscoder read-pipeline --id 0998507242379-vltecz
+```

The following output would be retrieved:

From 0efbe7e50cb5c8b8fbc4077a6cf0b3b05c9b2de6 Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 17:04:27 +0530
Subject: [PATCH 39/80] revamp elb

---
 src/content/docs/aws/services/elb.md | 61 ++++++++++++++--------------
 1 file changed, 30 insertions(+), 31 deletions(-)

diff --git a/src/content/docs/aws/services/elb.md b/src/content/docs/aws/services/elb.md
index ba65a550..e17eda9a 100644
--- a/src/content/docs/aws/services/elb.md
+++ b/src/content/docs/aws/services/elb.md
@@ -1,6 +1,5 @@
 ---
 title: "Elastic Load Balancing (ELB)"
-linkTitle: "Elastic Load Balancing (ELB)"
 description: Get started with Elastic Load Balancing (ELB) on LocalStack
 tags: ["Base"]
 ---
@@ -12,7 +11,7 @@ It also monitors the health of its registered targets and ensures that it routes
 You can check [the official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) to understand the basic terms and concepts used in the ELB.
 
 LocalStack allows you to use the Elastic Load Balancing APIs in your local environment to create, edit, and view load balancers, target groups, listeners, and rules.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_elbv2" >}}), which provides information on the extent of ELB's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of ELB's integration with LocalStack.

## Getting started

@@ -25,90 +24,90 @@ We will demonstrate how to create an Application Load Balancer, along with its t

Launch an HTTP server which will serve as the target for our load balancer.

-{{< command >}}
-$ docker run --rm -itd -p 5678:80 ealen/echo-server
-{{< /command >}}
+```bash
+docker run --rm -itd -p 5678:80 ealen/echo-server
+```

### Create a load balancer

To specify the subnet and VPC in which the load balancer will be created, you can use the [`DescribeSubnets`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSubnets.html) API to retrieve the subnet ID and VPC ID.
In this example, we will use the subnet and VPC in the `us-east-1f` availability zone.

-{{< command >}}
-$ subnet_info=$(awslocal ec2 describe-subnets --filters Name=availability-zone,Values=us-east-1f \
+```bash
+subnet_info=$(awslocal ec2 describe-subnets --filters Name=availability-zone,Values=us-east-1f \
  | jq -r '.Subnets[] | select(.AvailabilityZone == "us-east-1f") | {SubnetId: .SubnetId, VpcId: .VpcId}')

-$ subnet_id=$(echo $subnet_info | jq -r '.SubnetId')
+subnet_id=$(echo $subnet_info | jq -r '.SubnetId')

-$ vpc_id=$(echo $subnet_info | jq -r '.VpcId')
-{{< /command >}}
+vpc_id=$(echo $subnet_info | jq -r '.VpcId')
+```

To create a load balancer, you can use the [`CreateLoadBalancer`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateLoadBalancer.html) API.
The following command creates an Application Load Balancer named `example-lb`: -{{< command >}} -$ loadBalancer=$(awslocal elbv2 create-load-balancer --name example-lb \ +```bash +loadBalancer=$(awslocal elbv2 create-load-balancer --name example-lb \ --subnets $subnet_id | jq -r '.LoadBalancers[]|.LoadBalancerArn') -{{< /command >}} +``` ### Create a target group To create a target group, you can use the [`CreateTargetGroup`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateTargetGroup.html) API. The following command creates a target group named `example-target-group`: -{{< command >}} -$ targetGroup=$(awslocal elbv2 create-target-group --name example-target-group \ +```bash +targetGroup=$(awslocal elbv2 create-target-group --name example-target-group \ --protocol HTTP --target-type ip --port 80 --vpc-id $vpc_id \ | jq -r '.TargetGroups[].TargetGroupArn') -{{< /command >}} +``` ### Register a target To register a target, you can use the [`RegisterTargets`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_RegisterTargets.html) API. The following command registers the target with the target group created in the previous step: -{{< command >}} -$ awslocal elbv2 register-targets --targets Id=127.0.0.1,Port=5678,AvailabilityZone=all \ +```bash +awslocal elbv2 register-targets --targets Id=127.0.0.1,Port=5678,AvailabilityZone=all \ --target-group-arn $targetGroup -{{< /command >}} +``` -{{< callout >}} +:::note Note that in some cases the `targets` parameter `Id` can be the `Gateway` address of the docker container. You can find the gateway address by running `docker inspect `. -{{< /callout >}} +::: ### Create a listener and a rule We create a listener for the load balancer using the [`CreateListener`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateListener.html) API. 
The following command creates a listener for the load balancer created in the previous step:

-{{< command >}}
-$ listenerArn=$(awslocal elbv2 create-listener \
+```bash
+listenerArn=$(awslocal elbv2 create-listener \
  --protocol HTTP \
  --port 80 \
  --default-actions '{"Type":"forward","TargetGroupArn":"'$targetGroup'","ForwardConfig":{"TargetGroups":[{"TargetGroupArn":"'$targetGroup'","Weight":11}]}}' \
  --load-balancer-arn $loadBalancer | jq -r '.Listeners[]|.ListenerArn')
-{{< /command >}}
+```

To create a rule for the listener, you can use the [`CreateRule`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateRule.html) API.
The following command creates a rule for the listener created above:

-{{< command >}}
-$ listenerRule=$(awslocal elbv2 create-rule \
+```bash
+listenerRule=$(awslocal elbv2 create-rule \
  --conditions Field=path-pattern,Values=/ \
  --priority 1 \
 --actions '{"Type":"forward","TargetGroupArn":"'$targetGroup'","ForwardConfig":{"TargetGroups":[{"TargetGroupArn":"'$targetGroup'","Weight":11}]}}' \
  --listener-arn $listenerArn \
  | jq -r '.Rules[].RuleArn')
-{{< /command >}}
+```

### Send a request to the load balancer

Finally, you can issue an HTTP request to the `DNSName` returned by the `CreateLoadBalancer` operation, on the `Port` configured in the `CreateListener` call, with the following command:

-{{< command >}}
-$ curl example-lb.elb.localhost.localstack.cloud:4566
-{{< /command >}}
+```bash
+curl example-lb.elb.localhost.localstack.cloud:4566
+```

The following output will be retrieved:

@@ -175,7 +174,7 @@ http(s)://localhost.localstack.cloud:4566/_aws/elb/example-lb/test/path

 The following code snippets and sample applications provide practical examples of how to use ELB in LocalStack for various use cases:

-- [Setting up Elastic Load Balancing (ELB) Application Load Balancers using LocalStack, deployed via the Serverless framework]({{< ref "/tutorials/elb-load-balancing" >}})
+- [Setting up Elastic Load Balancing (ELB)
Application Load Balancers using LocalStack, deployed via the Serverless framework]() ## Current Limitations From df750de5da4c7108756db48a2bdd8673421072e8 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 17:05:11 +0530 Subject: [PATCH 40/80] revamp mediaconvert --- .../aws/services/elementalmediaconvert.md | 31 +++++++++---------- 1 file changed, 15 insertions(+), 16 deletions(-) diff --git a/src/content/docs/aws/services/elementalmediaconvert.md b/src/content/docs/aws/services/elementalmediaconvert.md index 662330d7..99116804 100644 --- a/src/content/docs/aws/services/elementalmediaconvert.md +++ b/src/content/docs/aws/services/elementalmediaconvert.md @@ -1,6 +1,5 @@ --- title: "Elemental MediaConvert" -linkTitle: "Elemental MediaConvert" description: Get started with Elemental MediaConvert on LocalStack tags: ["Ultimate"] --- @@ -11,11 +10,11 @@ Elemental MediaConvert is a file-based video transcoding service with broadcast- It enables you to easily create high-quality video streams for broadcast and multiscreen delivery. LocalStack allows you to mock the MediaConvert APIs in your local environment. -The supported APIs are available on our [API coverage page]({{< ref "coverage_mediaconvert" >}}), which provides information on the extent of MediaConvert's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of MediaConvert's integration with LocalStack. -{{< callout "note">}} +:::note Elemental MediaConvert is in a preview state. -{{< /callout >}} +::: ## Getting started @@ -98,9 +97,9 @@ Create a new file named `job.json` on your local directory: You can create a MediaConvert job using the [`CreateJob`](https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/CreateJob) API. 
Execute the following command to create a job using a `job.json` file:

-{{< command >}}
-$ awslocal mediaconvert create-job --cli-input-json file://job.json
-{{< /command >}}
+```bash
+awslocal mediaconvert create-job --cli-input-json file://job.json
+```

The following output would be retrieved:

@@ -148,20 +147,20 @@ The following output would be retrieved:

 You can list all MediaConvert jobs using the [`ListJobs`](https://docs.aws.amazon.com/mediaconvert/latest/apireference/jobs.html#jobsget) API.
 Execute the following command to list all jobs:

-{{< command >}}
-$ awslocal mediaconvert list-jobs
-{{< /command >}}
+```bash
+awslocal mediaconvert list-jobs
+```

### Create a queue

You can create a MediaConvert queue using the [`CreateQueue`](https://docs.aws.amazon.com/mediaconvert/latest/apireference/queues.html#queuespost) API.
Execute the following command to create a queue named `MyQueue`:

-{{< command >}}
-$ awslocal mediaconvert create-queue \
+```bash
+awslocal mediaconvert create-queue \
    --name MyQueue \
    --description "High priority queue for video encoding"
-{{< /command >}}
+```

The following output would be retrieved:

@@ -187,9 +186,9 @@ The following output would be retrieved:

 You can list all MediaConvert queues using the [`ListQueues`](https://docs.aws.amazon.com/mediaconvert/latest/apireference/queues.html#queuesget) API.
Execute the following command to list all queues:

-{{< command >}}
-$ awslocal mediaconvert list-queues
-{{< /command >}}
+```bash
+awslocal mediaconvert list-queues
+```

## Current Limitations

From 4701b2c865c0e69a073e3431e8a4121d3c5bb494 Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 17:07:18 +0530
Subject: [PATCH 41/80] revamp emr

---
 src/content/docs/aws/services/emr.md | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/src/content/docs/aws/services/emr.md b/src/content/docs/aws/services/emr.md
index 1553dd4e..4b1dd54a 100644
--- a/src/content/docs/aws/services/emr.md
+++ b/src/content/docs/aws/services/emr.md
@@ -1,9 +1,7 @@
 ---
 title: "Elastic MapReduce (EMR)"
-linkTitle: "Elastic MapReduce (EMR)"
 tags: ["Ultimate"]
-description: >
-  Get started with Elastic MapReduce (EMR) on LocalStack
+description: Get started with Elastic MapReduce (EMR) on LocalStack
 ---

## Introduction

@@ -16,13 +14,13 @@ LocalStack supports EMR and allows developers to run data analytics workloads lo

 EMR utilizes various tools in the [Hadoop](https://hadoop.apache.org/) and [Spark](https://spark.apache.org) ecosystem, and your EMR instance is automatically configured to connect seamlessly to LocalStack's S3 API.
 LocalStack also supports EMR Serverless, allowing you to create applications and job runs that execute your Spark/PySpark jobs locally.
-The supported APIs are available on our [API coverage page]({{ ref "coverage_emr" >}}), which provides information on the extent of EMR's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of EMR's integration with LocalStack.

-{{< callout >}}
+:::note
To utilize the EMR API, certain additional dependencies need to be downloaded from the network (including Hadoop, Hive, Spark, etc).
These dependencies are fetched automatically during service startup, hence it is important to ensure a reliable internet connection when retrieving the dependencies for the first time. -Alternatively, you can use one of our `*-bigdata` Docker image tags which already ship with the required libraries baked in and may provide better stability (see [here]({{< ref "/user-guide/ci/#ci-images" >}}) for more details). -{{< /callout >}} +Alternatively, you can use one of our `*-bigdata` Docker image tags which already ship with the required libraries baked in and may provide better stability (see [here]() for more details). +::: ## Getting started @@ -32,14 +30,15 @@ Start your LocalStack container using your preferred method. We will create a virtual EMR cluster using the AWS CLI. To create an EMR cluster, run the following command: -{{< command >}} -$ awslocal emr create-cluster \ +```bash +awslocal emr create-cluster \ --release-label emr-5.9.0 \ --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.large InstanceGroupType=CORE,InstanceCount=1,InstanceType=m4.large -{{< / command >}} +``` + You will see a response similar to the following: -```sh +```bash { "ClusterId": "j-A2KF3EKLAOWRI" } From 1bb5cf11d560fc8da7e047c289d14ebcec30dc84 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 17:18:54 +0530 Subject: [PATCH 42/80] revamp es --- src/content/docs/aws/services/es.md | 185 +++++++++++++++++----------- 1 file changed, 110 insertions(+), 75 deletions(-) diff --git a/src/content/docs/aws/services/es.md b/src/content/docs/aws/services/es.md index 1d0d903b..d397f668 100644 --- a/src/content/docs/aws/services/es.md +++ b/src/content/docs/aws/services/es.md @@ -1,25 +1,30 @@ --- title: "Elasticsearch Service" -linkTitle: "Elasticsearch Service" -description: > - Get started with Amazon Elasticsearch Service (ES) on LocalStack +description: Get started with Amazon Elasticsearch Service (ES) on LocalStack tags: ["Free"] --- +## 
Introduction
+
The Elasticsearch Service in LocalStack lets you create one or more single-node Elasticsearch/OpenSearch clusters
that behave like the [Amazon Elasticsearch Service](https://aws.amazon.com/opensearch-service/the-elk-stack/what-is-elasticsearch/).
This service is, like its AWS counterpart, heavily linked with the [OpenSearch Service](../opensearch).
Any cluster created with the Elasticsearch Service will show up in the OpenSearch Service and vice versa.

## Creating an Elasticsearch cluster

-You can go ahead and use [awslocal]({{< ref "aws-cli.md#localstack-aws-cli-awslocal" >}}) to create a new elasticsearch domain via the `aws es create-elasticsearch-domain` command.
+You can go ahead and use [`awslocal`](https://github.com/localstack/awscli-local) to create a new Elasticsearch domain via the `aws es create-elasticsearch-domain` command.

-{{< callout >}}
+:::note
Unless you use the Elasticsearch default version, the first time you create a cluster with a specific version, the Elasticsearch binary is downloaded, which may take a while.
-{{< /callout >}}
+:::
+
+```bash
+awslocal es create-elasticsearch-domain --domain-name my-domain
+```

-{{< command >}}
-$ awslocal es create-elasticsearch-domain --domain-name my-domain
+The following output would be retrieved:
+
+```json
{
    "DomainStatus": {
        "DomainId": "000000000000/my-domain",
@@ -49,11 +54,11 @@ $ awslocal es create-elasticsearch-domain --domain-name my-domain
        }
    }
}
-{{< / command >}}
+```

In the LocalStack log you will see something like the following, where you can see the cluster starting up in the background.
-```plaintext +```bash 2021-11-08T16:29:28:INFO:localstack.services.es.cluster: starting elasticsearch: /opt/code/localstack/localstack/localstack/infra/elasticsearch/bin/elasticsearch -E http.port=57705 -E http.publish_port=57705 -E transport.port=0 -E network.host=127.0.0.1 -E http.compression=false -E path.data="/var/lib/localstack/lib//elasticsearch/arn:aws:es:us-east-1:000000000000:domain/my-domain/data" -E path.repo="/var/lib/localstack/lib//elasticsearch/arn:aws:es:us-east-1:000000000000:domain/my-domain/backup" -E xpack.ml.enabled=false with env {'ES_JAVA_OPTS': '-Xms200m -Xmx600m', 'ES_TMPDIR': '/var/lib/localstack/lib//elasticsearch/arn:aws:es:us-east-1:000000000000:domain/my-domain/tmp'} 2021-11-08T16:29:28:INFO:localstack.services.es.cluster: registering an endpoint proxy for http://my-domain.us-east-1.es.localhost.localstack.cloud:4566 => http://127.0.0.1:57705 2021-11-08T16:29:30:INFO:localstack.services.es.cluster: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. @@ -68,10 +73,16 @@ In the LocalStack log you will see something like the following, where you can s and after some time, you should see that the `Processing` state of the domain is set to `false`: -{{< command >}} -$ awslocal es describe-elasticsearch-domain --domain-name my-domain | jq ".DomainStatus.Processing" +```bash +awslocal es describe-elasticsearch-domain --domain-name my-domain | jq ".DomainStatus.Processing" +``` + +The following output would be retrieved: + +```bash false -{{< / command >}} +``` + ## Interact with the cluster @@ -80,8 +91,13 @@ in this case `http://my-domain.us-east-1.es.localhost.localstack.cloud:4566`. 
For example:

-{{< command >}}
-$ curl http://my-domain.us-east-1.es.localhost.localstack.cloud:4566
+```bash
+curl http://my-domain.us-east-1.es.localhost.localstack.cloud:4566
+```
+
+The following output would be retrieved:
+
+```json
{
  "name" : "localstack",
  "cluster_name" : "elasticsearch",
@@ -99,12 +115,17 @@ $ curl http://my-domain.us-east-1.es.localhost.localstack.cloud:4566
  },
  "tagline" : "You Know, for Search"
}
-{{< / command >}}
+```

Or the health endpoint:

-{{< command >}}
-$ curl -s http://my-domain.us-east-1.es.localhost.localstack.cloud:4566/_cluster/health | jq .
+```bash
+curl -s http://my-domain.us-east-1.es.localhost.localstack.cloud:4566/_cluster/health | jq .
+```
+
+The following output would be retrieved:
+
+```json
{
  "cluster_name": "elasticsearch",
  "status": "green",
@@ -122,7 +143,7 @@ $ curl -s http://my-domain.us-east-1.es.localhost.localstack.cloud:4566/_cluster
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100
}
-{{< / command >}}
+```

## Advanced topics

@@ -134,7 +155,7 @@ There are three configurable strategies that govern how domain endpoints are cre
| - | - | - |
| `domain` | `<domain-name>.<region>.es.localhost.localstack.cloud:4566` | This is the default strategy that uses the `localhost.localstack.cloud` domain to route to your localhost |
| `path` | `localhost:4566/es/<region>/<domain-name>` | An alternative that can be useful if you cannot resolve LocalStack's localhost domain |
-| `port` | `localhost:<port>` | Exposes the cluster(s) directly with ports from the [external service port range]({{< ref "external-ports" >}})|
+| `port` | `localhost:<port>` | Exposes the cluster(s) directly with ports from the [external service port range]()|
| `off` | | *Deprecated*. This value now reverts to the `port` setting, using a port from the given range instead of `4571` |

Regardless of the service from which the clusters were created, the domain of the cluster always corresponds to the engine type (OpenSearch or Elasticsearch) of the cluster.
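To make the endpoint strategies above concrete, here is a small sketch of the URL each strategy would produce for a hypothetical domain named `my-domain` in `us-east-1`. The port `39623` used for the `port` strategy is a made-up placeholder, since LocalStack assigns that port dynamically from the external service port range:

```bash
# Hypothetical example values; the port for the `port` strategy is
# assigned dynamically by LocalStack from the external service port range.
DOMAIN_NAME="my-domain"
REGION="us-east-1"
EDGE_PORT="4566"
CLUSTER_PORT="39623"

# `domain` strategy (default): routed through localhost.localstack.cloud
echo "http://${DOMAIN_NAME}.${REGION}.es.localhost.localstack.cloud:${EDGE_PORT}"
# → http://my-domain.us-east-1.es.localhost.localstack.cloud:4566

# `path` strategy: everything goes through the edge port on localhost
echo "http://localhost:${EDGE_PORT}/es/${REGION}/${DOMAIN_NAME}"
# → http://localhost:4566/es/us-east-1/my-domain

# `port` strategy: the cluster is exposed directly on its own port
echo "http://localhost:${CLUSTER_PORT}"
# → http://localhost:39623
```

Whichever strategy is active, the same cluster is reachable behind the resulting URL, so the `curl` examples above work unchanged apart from the endpoint.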
@@ -146,17 +167,17 @@ LocalStack allows you to set arbitrary custom endpoints for your clusters in the
This can be used to override the behavior of the endpoint strategies described above.
You can also choose custom domains; however, it is important to include the edge port (`80`/`443`, or `4566` by default).

-{{< command >}}
-$ awslocal es create-elasticsearch-domain --domain-name my-domain \
+```bash
+awslocal es create-elasticsearch-domain --domain-name my-domain \
 --elasticsearch-version 7.10 \
 --domain-endpoint-options '{ "CustomEndpoint": "http://localhost:4566/my-custom-endpoint", "CustomEndpointEnabled": true }'
-{{< / command >}}
+```

Once the domain processing is complete, you can access the cluster:

-{{< command >}}
-$ curl http://localhost:4566/my-custom-endpoint/_cluster/health
-{{< / command >}}
+```bash
+curl http://localhost:4566/my-custom-endpoint/_cluster/health
+```

### Re-using a single cluster instance

@@ -244,64 +265,78 @@ volumes:
```

1. Run docker compose:
-{{< command >}}
-$ docker-compose up -d
-{{< /command >}}
+   ```bash
+   docker-compose up -d
+   ```
2.
Create the Elasticsearch domain: -{{< command >}} -$ awslocal es create-elasticsearch-domain \ - --domain-name mylogs-2 \ - --elasticsearch-version 7.10 \ - --elasticsearch-cluster-config '{ "InstanceType": "m3.xlarge.elasticsearch", "InstanceCount": 4, "DedicatedMasterEnabled": true, "ZoneAwarenessEnabled": true, "DedicatedMasterType": "m3.xlarge.elasticsearch", "DedicatedMasterCount": 3}' -{ - "DomainStatus": { - "DomainId": "000000000000/mylogs-2", - "DomainName": "mylogs-2", - "ARN": "arn:aws:es:us-east-1:000000000000:domain/mylogs-2", - "Created": true, - "Deleted": false, - "Endpoint": "mylogs-2.us-east-1.es.localhost.localstack.cloud:4566", - "Processing": true, - "ElasticsearchVersion": "7.10", - "ElasticsearchClusterConfig": { - "InstanceType": "m3.xlarge.elasticsearch", - "InstanceCount": 4, - "DedicatedMasterEnabled": true, - "ZoneAwarenessEnabled": true, - "DedicatedMasterType": "m3.xlarge.elasticsearch", - "DedicatedMasterCount": 3 - }, - "EBSOptions": { - "EBSEnabled": true, - "VolumeType": "gp2", - "VolumeSize": 10, - "Iops": 0 - }, - "CognitoOptions": { - "Enabled": false + ```bash + awslocal es create-elasticsearch-domain \ + --domain-name mylogs-2 \ + --elasticsearch-version 7.10 \ + --elasticsearch-cluster-config '{ "InstanceType": "m3.xlarge.elasticsearch", "InstanceCount": 4, "DedicatedMasterEnabled": true, "ZoneAwarenessEnabled": true, "DedicatedMasterType": "m3.xlarge.elasticsearch", "DedicatedMasterCount": 3}' + ``` + + The following output would be retrieved: + + ```json + { + "DomainStatus": { + "DomainId": "000000000000/mylogs-2", + "DomainName": "mylogs-2", + "ARN": "arn:aws:es:us-east-1:000000000000:domain/mylogs-2", + "Created": true, + "Deleted": false, + "Endpoint": "mylogs-2.us-east-1.es.localhost.localstack.cloud:4566", + "Processing": true, + "ElasticsearchVersion": "7.10", + "ElasticsearchClusterConfig": { + "InstanceType": "m3.xlarge.elasticsearch", + "InstanceCount": 4, + "DedicatedMasterEnabled": true, + 
"ZoneAwarenessEnabled": true, + "DedicatedMasterType": "m3.xlarge.elasticsearch", + "DedicatedMasterCount": 3 + }, + "EBSOptions": { + "EBSEnabled": true, + "VolumeType": "gp2", + "VolumeSize": 10, + "Iops": 0 + }, + "CognitoOptions": { + "Enabled": false + } } } -} -{{< /command >}} + ``` -3. If the `Processing` status is true, it means that the cluster is not yet healthy. - You can run `describe-elasticsearch-domain` to receive the status: -{{< command >}} -$ awslocal es describe-elasticsearch-domain --domain-name mylogs-2 -{{< /command >}} +3. If the `Processing` status is true, it means that the cluster is not yet healthy. You can run `describe-elasticsearch-domain` to receive the status: + ```bash + awslocal es describe-elasticsearch-domain --domain-name mylogs-2 + ``` 4. Check the cluster health endpoint and create indices: -{{< command >}} -$ curl mylogs-2.us-east-1.es.localhost.localstack.cloud:4566/_cluster/health -{"cluster_name":"es-docker-cluster","status":"green","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":0,"active_shards":0,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}[~] -{{< /command >}} + ```bash + curl mylogs-2.us-east-1.es.localhost.localstack.cloud:4566/_cluster/health + ``` + + The following output would be retrieved: + + ```bash + {"cluster_name":"es-docker-cluster","status":"green","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":0,"active_shards":0,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}[~] + ``` 5. 
Create an example index: -{{< command >}} -$ curl -X PUT mylogs-2.us-east-1.es.localhost.localstack.cloud:4566/my-index -{"acknowledged":true,"shards_acknowledged":true,"index":"my-index"} -{{< /command >}} + ```bash + curl -X PUT mylogs-2.us-east-1.es.localhost.localstack.cloud:4566/my-index + ``` + + The following output would be retrieved: + + ```bash + {"acknowledged":true,"shards_acknowledged":true,"index":"my-index"} + ``` ## Differences to AWS From 3de0c73d386fa4438791c63ec1e6ae541945aec5 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 17:21:43 +0530 Subject: [PATCH 43/80] revamp eventbridge --- src/content/docs/aws/services/events.md | 53 ++++++++++++------------- 1 file changed, 26 insertions(+), 27 deletions(-) diff --git a/src/content/docs/aws/services/events.md b/src/content/docs/aws/services/events.md index b3c943f5..53d50aab 100644 --- a/src/content/docs/aws/services/events.md +++ b/src/content/docs/aws/services/events.md @@ -1,6 +1,5 @@ --- title: "EventBridge" -linkTitle: "EventBridge" description: Get started with EventBridge on LocalStack persistence: supported with limitations tags: ["Free"] @@ -14,12 +13,12 @@ EventBridge rules are tied to an Event Bus to manage event-driven workflows. You can use either identity-based or resource-based policies to control access to EventBridge resources, where the former can be attached to IAM users, groups, and roles, and the latter can be attached to specific AWS resources. LocalStack allows you to use the EventBridge APIs in your local environment to create rules that route events to a target. -The supported APIs are available on our [API coverage page]({{< ref "coverage_events" >}}), which provides information on the extent of EventBridge's integration with LocalStack. -For information on EventBridge Pipes, please refer to the [EventBridge Pipes]({{< ref "user-guide/aws/pipes" >}}) section. 
+The supported APIs are available on our [API coverage page](), which provides information on the extent of EventBridge's integration with LocalStack.
+For information on EventBridge Pipes, please refer to the [EventBridge Pipes]() section.

-{{< callout >}}
+:::note
The native EventBridge provider, introduced in [LocalStack 3.5.0](https://discuss.localstack.cloud/t/localstack-release-v3-5-0/947), is now the default in 4.0.
The legacy provider can still be enabled using the `PROVIDER_OVERRIDE_EVENTS=v1` configuration, but it is deprecated and will be removed in the next major release.
We strongly recommend migrating to the new provider.
-{{< /callout >}}
+:::

## Getting Started

@@ -44,16 +43,16 @@ exports.handler = (event, context, callback) => {

Run the following command to create a new Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html) API:

-{{< command >}}
-$ zip function.zip index.js
+```bash
+zip function.zip index.js

-$ awslocal lambda create-function \
+awslocal lambda create-function \
 --function-name events-example \
 --runtime nodejs16.x \
 --zip-file fileb://function.zip \
 --handler index.handler \
 --role arn:aws:iam::000000000000:role/cool-stacklifter
-{{< /command >}}
+```

The output will contain the `FunctionArn`, which you will need to add the Lambda function as a target of the EventBridge rule.

@@ -61,25 +60,25 @@ The output will consist of the `FunctionArn`, which you will need to add the Lam

Run the following command to create a new EventBridge rule using the [`PutRule`](https://docs.aws.amazon.com/cli/latest/reference/events/put-rule.html) API:

-{{< command >}}
-$ awslocal events put-rule \
+```bash
+awslocal events put-rule \
 --name my-scheduled-rule \
 --schedule-expression 'rate(2 minutes)'
-{{< /command >}}
+```

In the above command, we have specified a schedule expression of `rate(2 minutes)`, which will run the rule every two minutes.
This means that the Lambda function will be invoked every two minutes.

Next, grant the EventBridge service principal (`events.amazonaws.com`) permission to invoke the function, using the [`AddPermission`](https://docs.aws.amazon.com/cli/latest/reference/lambda/add-permission.html) API:

-{{< command >}}
-$ awslocal lambda add-permission \
+```bash
+awslocal lambda add-permission \
 --function-name events-example \
 --statement-id my-scheduled-event \
 --action 'lambda:InvokeFunction' \
 --principal events.amazonaws.com \
 --source-arn arn:aws:events:us-east-1:000000000000:rule/my-scheduled-rule
-{{< /command >}}
+```

### Add the Lambda Function as a Target

@@ -96,11 +95,11 @@ Create a file named `targets.json` with the following content:

Finally, add the Lambda function as a target to the EventBridge rule using the [`PutTargets`](https://docs.aws.amazon.com/cli/latest/reference/events/put-targets.html) API:

-{{< command >}}
-$ awslocal events put-targets \
+```bash
+awslocal events put-targets \
 --rule my-scheduled-rule \
 --targets file://targets.json
-{{< /command >}}
+```

### Verify the Lambda invocation

However, wait at least 2 minutes after running the last command before checking
Run the following command to list the CloudWatch log groups:

-{{< command >}}
-$ awslocal logs describe-log-groups
-{{< /command >}}
+```bash
+awslocal logs describe-log-groups
+```

The output will contain the log group name, which you can use to list the log streams:

-{{< command >}}
-$ awslocal logs describe-log-streams \
+```bash
+awslocal logs describe-log-streams \
 --log-group-name /aws/lambda/events-example
-{{< /command >}}
+```

Alternatively, you can fetch LocalStack logs to verify the Lambda invocation:

-{{< command >}}
-$ localstack logs
+```bash
+localstack logs
...
2023-07-17T09:37:52.028 INFO --- [ asgi_gw_0] localstack.request.aws : AWS lambda.Invoke => 202 2023-07-17T09:37:52.106 INFO --- [ asgi_gw_0] localstack.request.http : POST /_localstack_lambda/97e08ac50c18930f131d9dd9744b8df4/invocations/ecb744d0-b3f2-400f-9e49-c85cf12b1e00/logs => 202 2023-07-17T09:37:52.114 INFO --- [ asgi_gw_0] localstack.request.http : POST /_localstack_lambda/97e08ac50c18930f131d9dd9744b8df4/invocations/ecb744d0-b3f2-400f-9e49-c85cf12b1e00/response => 202 ... -{{< /command >}} +``` ## Supported target types From a8926de30bf75b83021b5a2176768f548487ddf4 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 17:24:26 +0530 Subject: [PATCH 44/80] add eventbridge rb pic --- .../aws/eventbridge-resource-browser.png | Bin 0 -> 113336 bytes src/content/docs/aws/services/events.md | 2 ++ 2 files changed, 2 insertions(+) create mode 100644 public/images/aws/eventbridge-resource-browser.png diff --git a/public/images/aws/eventbridge-resource-browser.png b/public/images/aws/eventbridge-resource-browser.png new file mode 100644 index 0000000000000000000000000000000000000000..80e8373fda0f55e019b763554f6a23916ccd32bf GIT binary patch literal 113336 zcmeFZby!vFx;{K9k(Ndp0i}@+Ns$t1CLJQuG3jQ4Afa?O6Qn04os*On5NVK-?(X^x z);eeJwfA*>=dAtrclZaU7h^ca`#y2s_jA8|Qc;q{!6e57fj~I&a<9}tAPia%2s!WJ z1K?k@B)_Bs-;kWtWM6_x2Pro|AXRV7tXa~}Roy}a}7-fnIG^0kq+ z$1CI65Mp;wi03GE3dwBlMyarWqJ3A>xeUaQsp5xBlnFv(l0f=fKVA?sb%NBOy>%jg z`;tFypGo)@^l1(GKfCB{6$nL6W~7|{KfA0IL@@OC-sT@~-fDsLASx`+MD}kj!apwa zLt@B4`RAvm5JCUDoAh6|&PF4X;7*C4V*Z$%^1UaA*r|5-3$(;iU%2eEhmd|@PSLA@5jt~q}{WbZE{llU$4pREEW8G9=z zX4rXK?mr8q=YHRu{LXvB2ZBlZ}YS2)HpS_9*Ef!}J z!L%z?`dU-d=OxP_5A?^(j~EP38R^%pn5-@}gW_{dF=noQxt;CKHa?6D35t(rOrzQv z^FORyv68F1G(vAYWCFv9ic8p42fH4a_9tXf0Xg_V|GrE2O8zhHHtbQ|V<)U4{+;11 zoxCtK%ANj>wl-Y8BF!K#G;?~HLA}H+<7jP!hk$UkrN8c&&*Atd1y!ISGkrk5qZIzc z?uDCWr0hF-a^$m&_WL(sbX=N0Ch#tG%4GbaqEx;dGpNdunCs(NfsWBITc^ZQSu0)P 
zUk2%_aA)Sp_{tC20isPT4eb*%Sf>5YLymi3uYO5llFL0h@p@S+JwrYj%Bd294Fn29BXgwhcu)pb`k)$By!8_J3Q^W`L>%+_qq zsDy-wGN?|-4+_ z9*--t=ik^kxo8^GFL1=DtKmDbQmb(lf@2#jhU=)=bZ|CQ#RBUfM9*K}Baq3-PyZtV z`4GdA=_NN)>>MeU$ue6)ie;wf?L?!n)GhY=C=xp@Ozyt6YqFWl_cPSZsclby~uq7pJ5^jDbDT%aE@-ZZV;s%Pe4 z=r?)e@Z4&DW-%njr+ysx$t38Df9TRp8DirxlZoo(2X)InzUiM8+dkbf)-Il&T_}nD zdk*zZYY{0=2Y$pkDriDFcLSd752t1?EZak@_@s8ANI5kFn@h=t1{xq{4mx~efiDL` zi-atbyg7SI7ApCkP=VxrbwA|DIsj%zo+GX9-bc(otH8hisPY`S0i{4j zw0Clmnc^ma`^z2$xoFsLnAuPwBL{#_a*Ndwy#c@b+p;!4ICys13dYq2ns;LE+q;cv z(t%TyG8$1KoOn68xpZuYPYDQORR+h&<3(r3d~NQZk{a!Uc%p6V6qeTW93#BzhPRBK z=e05uzWEgW&At;aA+#cgPv2wmSQk}%t!0he6f^ugoKje8L{1lhDu<*{#gppVVCo-gK#X3vS`Wu%B_Pp-S)ff#FNw05^yb`N@Zz=!`FY73T^kzz*m+R z&w^~P9#Ilt%F3dH13S8=F$t`MY21ln&7v~lUYp)go=Lacz6}MIlR3RaLXCKo-St?B zV^uG5Unk0_sd2N^31+FMKHV?e&Pu?-Kz!hY8v4n&SY^IR{wXK-T9sJ1{mB=V>d((I zcE-?fr(UWuZml~)m?b{QN=o>@f(;}E>y9C~>L7UDJl0v5X3eL+LEr;+Ypf;${}M0U zqWwI5lNI3%phTJBSZCQD$*|bakA>mF1cPVMBeR_ge}1i~(4ZR(n{iv5Z{Q-IC__u0 zWoH*fxx$5gYcny|SfN)?P{?7(Z~gM-aTR3hQTXO%;YtqTbWvcxNE3|{XQ!xE6b2PX z-`Lo2zdKRgJfXUy>^#JIFguj_Iv;H6!RrT6@1t5K>^gw8K3G)<&;T?4AuI?3q}rY1f8&z$sE$~)y(qIs ziKPpXE#p~0bt0Q=OeOa@8oZfox_T~LYyDZ6&zo-KYdV4~WXCJ=%g_?laH%x?iTzdC zn4ZJ>xsJ2l1a>w2B2!~S?I9*^TVHtNnBd5>$QuDaq7WV5IpFrZ_CMWjYJ4VdhwziZ*y^VEhusMuyI>tqV7gjev;@`mh!o#aEX?<=-U{+0rv(s4(Zl-(#gSXlF<$FDr z*gF#_0~BLiF9QwkIRp{a2cKYN<6$_vyQKvUSjb|nIa|91`pl5}0vmp%@spodNJtt@ z*kp-txWq)k9qg<=7KlqD9-5tx^89ehu;2u;LRSc40Iyxo@Pgwb-@RMxm*e+Z!c$*>ZcmmE|M?erk z`rt;^)`SsHAOO8H2^53+b(O;)vdyQlXn~O89ulYDOAe&}_y;G%c-cBE9w z-BJ!RTB-vEwzOlj@Qy0rXYOA7?Isvj-&|K^E*84;csqLh z?VAH$8u2-avLfMDI>>L#IrY1OsIG?dWo5zAO5+sd_~8>qlpxY)vbG{RcH=PCAN2-K z$D7kih>UrV+E(64)!U-u`RY5tgwoldHS}9_oGlG46Am2&aU)(vt1KkKKQ_RpH26mn z=x1kU(fMs;?)p?c4p6RHO7qccQvt*R0@4Q)6fgBZt1A2JSnJ4g75NRX$+HBW0C)yd z%N+f9Ya_A&+Kl-`kb%-@P~Q*8pC47mwxVgaUbu9dW^8#aJk)d(kcECDlHZ>=Y=;ph zt5WtG`=_Ii<=)5-R3pc{p*h}x_-47C;le%D#!1MjKW*@=W?2W5Y-ZJ#BCC~HfuEn& z=T}RcHe}W2-4x_elAk&c$==aA;@wv_%>_?)qUIDK8ekb^_Nb|?Qwry{ 
zi!9jg@FJ2R_8!pBDZl=6k6Xv%&7ZgiFKXQ157F$Bp)p1I0t7xGk{{C-D5eNwJ zPRv0#!N;nvAxzg!=W!|i=D(bjfo|~{0loe%ecf!qXH!z5GNtazl2#Rl{j1prolxeq zyR)AzNyxW;F5o@YNgMME! zig!?4HB+nV3E9G0p!_1PF5bj7c#*=c&ls!I>@fKYn>9LWKl2xFD~xXlu7vwz)7s8B zk+Jgcs;PVOm+ooV4B^=cqA|UN0!#QE(uW^fC-L$na&d97N{x+9Y_0R~m(I<|mj*{? zdk>YBm1DbCK>x!cs~n>_Bjud3lP>@etqRAm+LE+pNA!!o7*KFk%U~FF#s1PR)|%jv zb22OFSd+trE>rW>4kurDr0-0gXDEW_#{649uXSuUoOz}~+*!<*nM{1}g5t(YG{}{m zeDfr{;c-A@%Pq=nNouC@kdiK_P&Qw4{bCZaAIJM3>hrBfH#?vwQnO_?CiKRDn|>Kb z=jukk8E`$h#36o|etAlElA&K)yDb)WW=Qn`lS9dqRZd<}YC|pet)&i$(cm?qgTtru z{Q*)H_$Z=~HMf^Hxk4YncAHa+Gb6q)Z*-f<4s19P<-5P??;k_4#Or6B@g`zI?D_#8 zT(f<0F?no#`2=In2aUC#`}{Wo!>azvBZ+HkA89x{dF;0>5D;Zb ze8OisK+0)tm3SpRuYoSY%S$BxNxr~vY3^5&hb?=Gz{nL+>;#n2xIQFAPQTf4rMXeJ z>kF@qjg7I1HHw~|p5O~1>m#+)k@eG^naKth3S?xC*1=K{a()}Xu5gOfD$IE+zUY*c zyl^hxTbpyY&G$c(q)VK}hve*Lo*sqtmseKTyI~vX>z^kxS)u>Lo*^WK`RN^e3JFyn zrBFOm1fe%N0_gItm9JOOLudB`io^dGElQ$|ijMv;I;xSApD%TLmp)TFW?{FR4Cbch z^|nLGSx?eCUyNjgUk)osUBRu$52y3Rze~=NH>yIF1IXQpn$5_?ZC3Ws^ zE?oJoCidI*Zm)3(u;Q52Wqdr0n!j=0-5K*c>BELlQH@uxi2>e~N*~31 zRpT)siTc2(TW=|-t?~@vysftl7t+Lw+VCb;#5YZF1$}Z_+xT+X!K(fmH7cQWD#!pmVyQzQBe*hKsx(-g zbQEsTL17YfqTau9TDdcr-YDpB>d#%EJB31ze%v=w-wq?hI~w`sLK-Kb+Gs~I3SD*k zML;8Qxta~pmA$j!3Z*o%4Vv5_>3Cv+oAqR;Vx~Bdp>v1ulSMxg`W;W+W%GmS{ zVvgs9jxH!X-O)XR?y@!h0cNBa$C~K%spGn#_kil+I<(G_QTyh(%|s~a|`o18P(SRK_u&qNc__AOK~1XJ>JJ9V9L;K3b?wG3H%FESL0s9i=Nm2BSx$hd{W& z{Q>}>CXeumvjrkjw-6BIo~ACl(m}Ylv#D~2faUvQg~YP9mXh1+BNIHl+p*)6=znlbcne_uLbzoQqIP7BqpifB zCxx%Lq-Lo$u*hMfkMDSI3@u*d*Qm8I{;(CMfMq>e)cbn0e*OgwiANdGFFbG!cnCKM zNyz62@W@Kf6|i5bMvt%o0x{%!NtmZ8fr*Lzy^@p^s7jyn{4@@uStN4VUUz({LbK_g zY9FtmWMgN?atRpl@JZNOx$rGI);K>Ck{M6Oc-Zh6wt5}jJr z!sqE?ng8;|#^c9zkBGwJDO`VVJZmgqX5)Nd&;h~H%2Gw7uuN(+r{ToJsC`{qz@QX& zLRykN4bE?5WjL<)zEN`C`GE`oCjcVz{r*kx)n{H6RUcDxF?|!vTBsPa^RSsZA{ylu z1%MuW85$jW6=oEGp>!CA0!NGLJ=)rV%>b|zRA#jCBqgNj4x^g-mx4YC}cH%r!h zZ2H(x+4;v;-{xMo)*ynxewt9R3(qxMO|siGrqM`zEloLO>6Hc@xyIDBvAA>4&+`Mq zuil+-k$}g#`&Raua{|O~$cxFG#auRd-@W5Tc{n!*Uf9{OweP>v|N1EQDI&cL5%XLj 
zlw@GI+2F*~$S(=)x{m|WkbF|GL@B+(EkrI<6w&W`x7+DC_!0|SY{M2ucZ zNy%cqnO)e~B63doraLDpaTqzqmHl)oEJ)%Jex8`Kx29qI@WpW{2^KB8QS9k_mRsO$7)G41 zujuV9VmJF4OdPo@`kphNr7^L(O2bHhKsa5=aNOyh;1=HiSP|YH^QK6>ty6UtqF3v} z`NgiQJ^%Bw;*#P=gicuH4hI%fIT`$XdMfq>DY#5rLXXAbmyz}i7#N?0er^*sh+=& zkISGvxwlvD^&5@FH}y}21qIs&2RY^^x~_Li>dS@*w$n~$ZTL>6lNOb&!GRTaTArQ{ z#I77T7`%iJa@(TTV$mbg+Xj8Q-)aq#+i`HnDJpl5{jhiUs$c3Y-7(f0)C_bOx0?ze zTtf!`=oxMn`Ary$?@F(R77oB-O~YR1LhWjYXB-CgrSNC+#5t*4t zccbb5+Y9it#rr)37x(mhz2giV#bzyJY=W37+ec+!V`$mWzDQxW^d4AUMF&H7Pf0xauLS#pI#B*qot{!(V00z0c%<{|#=$+BG!K3kO+-S+5_Flm*2mnzF~ z3DJj%uJljHWtGusS2dA;^85r;UGgy>mTbckD<<`)PP+Vsa?nd}+TR2O#RBhF6m!7ks{I}9aD_|RH`89$fOie| zu*M5B0cf(r8WN!1O5f-vgJH{A*YjI79gS+zRd%}b8c$B|dw7=LN4{LLww*BN$-*hl zv_syp1g=KiM-%249(Gec4*?)4eKS4gG+T@aR)VjQDmz;iwM!pXDwQ^VXrlu_j2dy~ zFMZ14CI_mM=F;5U{G}fUP7e|Ig>d3{R`bx;)WkoEC2bK~b*Qe(Ypvlu7CAXNS^7d0 z8VAl=BX7+Ra|*e+D6&Jef(-hBZ8>l?#d(&8kqw_l<1~n&F(i-3!T|iA=`@pPC4*_e zb6hD=+Bs)k{S>ra?e4-JZjL1qkAmTMuIh@&0`re?pv4nwSsy<;hNJnaMi~!akTQeetsS5mdv~TP9D#^$1Ypc6aUEHwz6t?RtBA;WqEzd{?9tc1Asf7!;}H z5No~EtK0>1* zT~uoYg*TMe=L&68wF=a1=D(UtF8@#sddei6or%6=XOB7Yxw*Mj21k9~Jkd#8PO&z% z#K4$e9%uNPbEV_bg6Vn42rUeGzwQZhv8gvxu$N;wif4(R&o*`UDAZ?=ra5L$&iXW1lY48^k_}@y>povO9yU!%Dy5|GHrX zh?ohw#rD5aGNu}2=?mkR< zq^8c>HS+ZEXq%gx%Q?U-s?>NU?y*adA^^v^a$FhZl~B8|xDuEBfiO`3gt?}#{DE6p zAhg$YengMzpf|#lR8%VEUxKIdOjt`N)(D`&_Gf@*Ao7rC)P)DH2)Z5$cuDd=YNs3d z*aKO~Cl>v9;0sP`FBO_(^xM?|vS0O16oZ3WBOI(;%@p2_`4ttt{Bc6 z4=?a5ygSXWtWVB}u=cQ*Qh>=^Ff;AMc~q43{^7B|RKl8Fo{hQ#LM*jnXj=gZGmi3u zf1Mo(x+tKfJT86itiniE?z}M%Fyyg0yYVWF&C$ZxEupV&d*TnV8w9ALt3L1Uu8)^k zN31YjKN*P${P>ZUg$0vtLdxOF^?`_JlSrsc>}p?0B$4**Gx3{B*)T&l5} zLgup`|5j~1VSl*H{7OdVw{DM14#TsKv1()%&pN3ZgiJWO=S+pwi4}2DG{5yY&vaJiAjKY#%?|$aUK-otlwgcA`|W@54rI=+xCl$j66+ox0z{{x~dgC zULU4f8%*y=<}t^o^d=T~^5jRRq!0FE5jpl}8`~u_HP$|T+S+e_WPYfp&i?w&|q6A%YyhzONM(JKLm!9q2_UC#0nCsi9J6DNRixV-7ZB zaK`t^W7gwVGs`O*pSr2H(zf_t^W_70;tBT?0s*^M1r94@?w;6Ru;|Thot7_J-m)Mx0V9Hx=fQ$9Z^(iiiT4OOckIrkeYDg%XC)$Acm@9rVLkt1N%McgLVP 
z+5V+FfCQVbF}@Njo~apnD*yyVQSZuM9cU`osOQhK&Z&6p9Cne>8~_B2pWzS*_p2A@ z1S;ZkhQ7)IqlUc`EB8q0%G#t6UZe&%^4$`_TN6_ZQgifk@R5v=Ug|Y=hw1%IlcXi! zv(hrJT5EM(5S&{SJnjxHVic7_+W=&Z6G3t^6bKEaMv*haq!k@&#EYO7&7wNPAHwN~ zW2T`H!CR(Kx`9!J_})&HA;jJHCA@E^@;&<%h+#W3Zf_5a6zos8b-;mZyR+Xh#CNv0 z7nYX7Z-n-bo357Q@S5tty$&A4s1M$rvfZpFM7^K#u& zF(x;Nwe_|8#hzLHSdvUaIKEj92I z5^vvT*7#Ip0`qzACIr{}^|!?KB4<$ z$F17*`r7L4%=kdP-MfBG+eZWhaKzn-mQkkfo%wNsvXW9)o?>db=@!4uObt$Xxr66k z;=Kf;KR$jLaCxG^ zRBGB!@-(n3OlVN8WIYtd%^KNTWsQ=tX{B-Aqr#Y5Jo60{!zCNy^Vg|{iuWJ!NXcqv zIfwS$@KPm$?B4|~V(wsgzPVagQHL?21S1c+LXm!0otGfz7twswxQr(!?#qkr;bxYS3M z{o@n$m+bn_@ph)G$1W92zjz$oeY^I+(qM#LRXeWOa9R8SHJ!H?P>SzYqDM8jZNc;O zX3Cv{4U?F!&IeNX+8`|h{zajoo<(1g<D%@r+%IMq}X<*M!N+XQ1iwPiJ|bA*XvY-?Z)e|nwr|K7gR1VH<}*S z;nWelVmaNs9k2qFC%Q~JsbG`4+wZg~X)q&iF$0K&6MEG0ujuFFqlK_1PoBKiOGKSy z@MavDGkyPS+^|fe7mG6-ihpx?!lmswrQf9{xe-ib_IA9ig@eI}iAe_oX!aQ$;i`b~8i>FX!)P`*@f%y?)rVE2;5R|FX%&_nWLu z#bIY7hWg#*ev`d3ord4BzAXg^4^mE873~&%Bi9dn&|X&>$e1etpPo z_e&v(_dpSG(?>1QcbO*$Q2ExM6>Q`Jb{}cXz02kX6+rSeCO>`WyV?UotLYeNHCv#f zV4?2uE)oHgclNj5p!SYEkN36!r66D;7l8U3j!|H=XdIn@)xR=6WrS~xC}oHglA6Uh z;Gjjt#Ki2cev{X(cMPwwO4ckkylE&{&_((rFTZ&$=yjTgnJQmdQ_}@$x%F2{7haxR zMMp$UfMTxOnc9R3C3sz?op&l}+u)Fv3zlx8fK^;tG*tc8km&5uW3Qim3GAV+wlkS< z2)M3J5J)d%NlC5!A@MtlLC2-V5ldL6GHlKhdzII)(u~T`>jsB`k+H}{P;fF^Dj2t5 zc?CmHk3t_0t7Ajm9O#RWZ7G@%MGLh?=J3K>%SZ{-dCz|7ryo{d zHhqkIF99^R&c48ZX>5OqF|_aZjV;>yi_-xRV-Z84+f6`}B_)x4G{lpf^*jkJp%+L4 z>oirJ&ma>dP0Lue$EDz*TM-7{U$$lK-rBHU!`Sa1S3{gRzBn^kYvA( zXVIyV^u^@&$d(RWCF6b{m9ks-P8yG|dZv7-dw(Tyo|b97V!ZZGN>g}Qt1%UAA5rxo zAZTHKF?-rETQ;&i71C6CYXH|^W|w2U<0q0xx!oA098s8UcA|*Jj|Q3wTB$yUrir{? 
zfo!B*60gG(;0=eJc!?wko+TTW8X|=4-Q0q!X#i5kP>5imLYnll<&&Plxq!?K(!mQ$$fm!F~9ehWoxox!K(wQ zg(k#=6rq`|v}`&RM68;n;}kYc zk#e+8S0|6f<-8jys|57#J}yN3-Bsi^62nDUn8!vgWoTY*{$k$>>$!EW+PBWxBPVC) zZ+=KO2b-*fZ_V??(bDQsvh8Id0z>= zQir7=ZG=^zuFS-DOSTdA-2q1fGH!R;IqxGJje-TZ<}*63WY_%wJh857Yr1LvFp3xC zfBFolcOidd8kpP`psb#38srH;KFkgN4)!MiReCTO-My$B;dl};P7b{ zLsr4uaHbKrxg}hb#Rn8)VS4S%E0#0cYDb|G*?cW}fFisw@eM zwVO3KZRkJ5MWhSUvuy`$Y61?vTf+u2preg+B@1q_vuS^oyK+nAXXjc9-dkK<1^PR; z%SfFAr^!`>!Sogcaq`rV8ur+eTLWohuZ1)l+@BzHdf;mW4J>Q9Rq~nzkaBk<)i{<} zPbxqGtF0qzep#0({y>(SLgyor=))>SwM-GP{ybsi3+7bG-Lhu#J6?_z<+A%l##;b6 zr(pNMphcX`p`YJG3Qa$~JlAL|HZuF7M0N9;mq0LfNpzzn_Qds>7s#?ZBAraw<|D}* zi+T-qp5hH^zysW#MHX;q3f}~t&wD)D*#_cZ$$;R6^MlxFPuXzt5516<+`OZ{=|ObV zw>j{K&O7r?CDEc$_qR;rz5g{u!HO>Ti`Olup81LB4r=8FX2opJuX zfV26|P|k?H@_Mx_`ImAHPho8Xsr(kuRo;Xj>NgLlJnFI!=AGYuR~3gcb@pEJn5s!Ie$Cg_%{*E3q`OGMb+L5=b~a zo@+Zf!{c#E|G!DsZ~Oq|dJ1W2bycr*^qyKSr>0STF83>hkxsd&Bg>g9rK_DK>`+pE zE`K7BeqY>%2}4?3u(ZkyrDydsxD#5d#;+w&@f{<3LTGHjo8(>#B9K7vd&l(7=DZ?0P2&)Xx(D~ zt&FwgGW8FM-}pU)dH-B6D=}!q<&2;%LQFJys26MB)}HPGnVNjK@jKAzLHG34kl68_ zxQdr@9`y^~U827D5xn=rGN3}$><8;JwRjCKJN|v~tZS2oQb;)^di-BB*<1ASj0})$=o07W^DConrM!q}q-02h3e5|T`iE$S%_L?K7D%Y&~6{I-|zsJMOY(VlX zvZ6Q6M(0;9Ah&lTKqGbedO zyALj4_Y+$H(q&b2b+Pu?y-D_r(up1&6?I`hnts~x-8gIFkPkG%$S}d6$%^NcS~!>e za+j6IsP?I}qWwor5DzTR6o!T0aa~-DU9G z3Y%S@SDx-^0gIo*hD4eS9cXbf12baK+D=vApSrp}5TIvRZZOR39VkJLpcX0sPxMkt z!1ZeR`EB(f_QeYeMHVA90drpHr+bpQuP+&`6oCOqoG~+VujiJ7y}il|B{6YHo>%PZ z?ru;w&0a)eccfJ3XX29t0V0V8YG8&AL!kcwWK9oeNS9 zhsMV>AqW5j4Wyi~ie0*6?#>}<$=w75!pQjuPPQfjg7eWuL`4pHZ_?_+E49pG?9(+` zW0-H2n@;x&&VNAsz3;A!>fA5pjv=_NEBn1U!xzG`ba!~z&tt5l{X_RjQw(EMJis%k+orW^^kGF3DMulZBg-V8W7<{T@9Dzz z##N9eHIanXY#tSWQb@O>_>X{$JM4YtD4H>n?;Ek_+an?dl!Yl)a(U?@BouBGyD9W$ zYDX%HJKsJo1M6RPbmPT_$@xcmwJN8e7;l-39hgq10=pKR$h)aXA`dODaDKd4Cpag^ zi;f2{roPFz){L-R`%Wc+iH+G#0crXq6&$fi)!})v`2~Rd1v=Re04xWWFaX5*h4pod zA8o(797~(eZiDMw*^flNR#(deJiyU-*jr|X2EWP5mc*rzVXVub!256c{fw!U6i zl7+VXU#I6p&t-suAEs&o`_1rnGu#9!^Lb==wI+q-?dF>FIOH|R@=p9N6P(JQxBK7< 
zFi#U@`XM2ymzL)pue9Auk}CRWuluSak#cO1a!A-yj4vv3Wni_#!^3d@XpIlitJ~Wj zXB*vG0ktZT(@5soT`5ZlLkSeopxx+7%*Mu6b>ee&gYA7fMJiKfBn+G+AQYRMvkJQ} zzo59jEbd{8XJvXvbaiH6}B2;U+sPs{$X6&P+|RFB?!l@5eS+ zNalN6r(0g5?xdDg?(RCijBQxwgrSu@_P##tu`Bs~wyc2r|hv9MCT~G{mllhS0 zwa5`Er8pFsiz|5~V?&c=gl@b<5AaBgN^bF&3|B}WQL1Mp&uH6KeNTUZ*RNIltx(;( z3e%{;HYQW=WV5+IueNl$Mv3Z0rmO8u;mx5-YnZm6HAuceIr;d>y{n6C>aSQ1fM(0G znK{2y`XezCeU4Gz2n{K2T&>GHM!xpuh6Ntwj;%FlfvERuT;Elc50u#yFPdDgw-*t z=P%~Npu=8@EQ8DW>^c<>$BMw}2A}l+vZ(X=$aQa+5DAI2dX`!o%9i68Qwpq@klP(RG+B160B}7=?sfDH#avc$cXBF&OBC{yk4az8bCqgwvvEf z7d?2zNh$8ow|*p*=4vyP_T)L|8W@Tq$DOBfWjT=*JJ-y4?!+PN`aSJ8XA4q}=Ly9| z;OFMsO|0V~!%rA0#7X?c^M2Xg5lfdRDj$P_o^Wu8b(i)zka4{eY;g6!K0Q61zCtth zUK^ykN8jLVVP7!`+t}-Ttf>u6!v<&OWA7&P2RP&f$Zn>6alQR?+#boq6t3@P=IEyd z31!3r0P91l2nYIK<#%Apks%>ppQU*i!FDEC?zmZ2HU_ zK$=7aGG#CCm>r|$7jWcwVVH=Cc|r_ud(!$x!IV-zw*>?n zcMlqD@>yG3+i17wDz?Q{|55(0Ek2*dXq0o^fmbI8Q zzdK-Ua2-fnWxuKe_92|?0%*v6*pGVS-UXoJ^z>V1C6rgxbcL)1t$BFp00!0=jT_6k z3cmK29n~(1-bXG?mz$_N^A|X@rnQ6YehjQpCq@lKAx>*U^b6M>s78pqyb~;J>{qW| z<<#I1?8ffIv__T^=ahfkS{4Duf~|p|j~_oiVdEttXY}9V*ry%a=tzujQL>Cbl>PZ+IvI zWqxRXB#2&XW9K5X3N3P8#t|?^=`mi+ydQ?WA$h)XY4_SFRv_&UmCw5YtFsV%cGmA zH_^)|*&~(c6O*&CDaNFs5^;YBXjhu*SsN)3FpbT0ljIfPODX@DWB2M6?SePRb#E#l zo>hyUm-o#~X|oTk{cPSx?@=&scM`rTSCr+XUFS3S86cFl!$M&38=$oh#JYNwJ$c3nKFVv{emi@UqE8vuS96`jS|6fV- z3$Z+XUjmbqGjsH>$Rt?&Krv}SvA0+Jx7SWt!t@I9$c9i4%H}J(AlD%1;Q>0CkmIWe z9)1xKN_O3v;g`>IU(W)lj5|T6E@p&u9uQ8?gDNl;*9Vo{#~U zIK^$cuw|KI+A#?Un|97kqg1dfi%Q(gsIEc7X=}Yx@>RO1UzOFERoLUJ3jU17REh+4 zJ!BdhNcLc4lRkGi1}!iZ%QU!Bk{jZ_`IYmjY$;xgo0Zmf@+SnB#1_H4kg3&lg|m6?v&Zj6hdH}}ym z9xY;qLg;a)vai+5WTlh!@jk>h!FHAQuM^~ZzQ@EYF#0(|q_*Jsf8|-yRga5mtNx!m znY<-uPkEn}tHgii+0ibV?0v4Pxc_&ap+%i;PZhB|-gt?tq!bbI=@Z!8{^D@utH+3^ zMxh#G_ha^c;8-E4Yy|bhob_x2WApVq>Hg_peG?lIU|a;u0B!RvfCFIyHeCQzx7>#5uWT_;l4UpaXNZ>i;?r@Sw2^% zOpLu%Y>F>muYEWd#qUbBE&CVO2EgsuKg2jnl~@!yS!3$8 z#vT2NYYs%f^l8w()4kmk8Un=#MId+Iw@kn}{znVsyUVbDbsrOf^eSqlKG?LTShV1h 
z&2Sj>n$E~vr$o0}wZ`T#LJZ*hEc(&am!Ir_iSYEh;V<7m(@tqUHU6SyvZ~-@YeT+Ha0phLLf-QM}VEf7j`q zJ`b>uATfHkJ!G{&)=M+q2paLMmjZ=_meZbAqXj`@MVjm24T99CimCL>%sP5@=U?gI zzvF^~(Sf0k#t#-Y5o|z^!2u-v+}c?k6gfb`=_FJZ3rrO-sWKJ2UN8_j_(li}zkV3T z+UnQZi5mKv#GIy7R?_2AiGVPIH&W`uCLog`!`a;yNW8MLUfw4wcRw0jSP}Lb+S-E9 zq=Rfj?Lvjb!6crmGjA$ZbyCA}<7fAle;oC)1Ww>2sVzWORL(<2%91@lPQv~wMMxWE z7R;)|atcpPPLf?eksF>^CLsYMsSi4-Uy^|FoNTmob;+O5XD_U-CM<~+33aWJ;VHL{ zI%Oc`1iNk5ncaul8R4Jhhno2QQz!V(Hga#GBCN5V=st9rgC3nEJ}4UUK0bJM4oKaO zVJ9t|c|{VccI@=@C=i6`V_^HoQlG+2Lstv5TBupH4qtk|d;&O{>A>%mU}D)pEb3jz zFhy7p5=zEc5Jw2;=V#)g=XTz!8oHXAro)YH5RFoe$Ay;6ET~}Mz(h3;!1^Oa#~%~G z`=)>Y{_Svc>GCl!(9#&kW4^DC06<->vNS#g0(@GMXi*HKJZ9U1#X6OSGa?T0mSb?O zvIi7;6vUg+&%-|ER@4F%$>Xl9C1GKwcMtv}sdTK=08252SB8<3^J8Ez{k|G4E|pkV zRn>R6az$|#DCSp~2Xqum&~Z75KLd5)TXhPAudcC??g=-tR;hmH!BRIqfujNY+VZkr zAlB_DR>8{3t08I$?&HVNg#%x*zc{cnpbihK)_DmWoi{WzER+CjVZkqL?fNL&@LLd0 zYSjR6P+VrHyAiOSk;}b#^Xje2qx*riLZZK;jnz0Prs<+Bn5GaESy~bO(dY7HD_GdY zs;#t%iUByYjp+1cANK!1@SKAC{Ufn-erbre(_@7w+iyK<2Segl)d!5NC(bRBLW zLJ=`K>a{_%f}>Yr=s)}7D?5Q9fA$@Z^4iLyWZUGB0s|JI?}%)dx>%q_ep`#<-JRwbQWG=2|(sv zIUg(b@Xi9JpiOb3dH92(6%y-X--R7WkS@yrk|p)(?>;;8Vr()&ZLFI2^L3%5gYZ<9pTIPkx{l|LV_`!TpX!BcD*#;81TZRV{4r^Jg(Ik6 z1Z|9ncLQgoj*;I^)i&DA3U8X_X_O6DdmP{`gA3$Y2v^46H#|>< zb6StFNg*+8wAfzZUY~C5P5LfmUP^CBD!!EPxfBT$n0J>_L`s60G5N(M-f~y#0HcNB zt=?zPq1W2%aS7FX(b=+Gp2ymvQc@-g=d5&eNQ;ZfZAn1RVCc#}1ay4^&!-6qreH9K z5bmc>^3(gkpFbGRQ?!1Qdie=16OTylcfY$uNdg4P2rBU`H!=C1p20z-bSa$xbAooy z%MCI2v-Dn1;_SnBb`hrUS!!HnPB#Lh0645NG(@UMF{)N%|CKe7J}d`|+5kA`o{!>B zYy8x}>NlU`}QkLvi@*obYA3;tumzyQo6X?zWb~b9e}rCJt1)HaT>SjJGsxev5G54*^); zTaY{l%WQ*q1Tf*Vl>MSt%W=MR_I;y&*Dd7zCE(R0F*!WK%CmC!Z>oMVXqK7?_G?SYcL5hbCAwdbL>A zEmNf5Pb+stz35tK0gJC>-pdib*_!bpt$U+M+Yh<`9L-J7AEr;r03(Md<496Gr;#Dr zHn2=#yR!|z3fOs$q?#JQ7~sA)J8s1WmAKu?X&T}ol@Q+naz2gtFY!g1CR^h>(^mk# zIPB^SO?>XQD?u64w8`ttj!pkK5pWQtqgc+IL(l=h7Ir05u?ND6}(1_nAYfTKMuACUA(X!Osn5C4E!ba|M>dqxURN7Ye0~aF6r(P5b5rc?gmK( 
z>Fy5cmXhuUDFNwjq&uWjI^WH`^UTaMGxz9x5&F?>bD3%Ai5F+8fElL&?R#{WM~OwBGJ=BSQpZ@Fq&S{C1? zvorC(=L_DQu0s@$B*^OcGN*03>%7(hb6@v?R;@y>Btv3gMfl7^mJaa-OpiukQ`tX$ zHGV>9&gA{93x@63%zD`w3H`PT33*D&W5p&iyh|0#ZWnH>5+=p#6Y?Wvj{ENn+B*U{ z?SGffqZDD%x9v|CZxtZCdxgtNu@Xr3g=>4Rz#IYdvhLWB76@@-p*eF|Lun6}oR>%H zJe)62DWgf1O}6U#k$iQVoi;X$l;Kbi)%#Ub^82TIUOHV~?r95^Y1awgT^_ztOTM`~ zHsQ26iT>V{FZXqI&J1a#FcxAkz9@6gYTlc=#o`SF#R8?k5Je$5ysA`&mX=mg{N8q^ zXK{z<1Tq0{NZXU%sN);NEMu5$|n6UWUN ze;^^RAFk%+Slp z-1rN@4T0s&4K9IlzJYd&DLVxtAvRt)l!?;17Wy=MvARSif6M7e^KntpmBV3Cf!kUy z?#gyk#2I2&Py{|2ri12^Yy{|E*Qji3oh$t%y2D&b?ai>h(^7^~kZb7)klPtcT5mRS z@{|ln#1_B9BpPgmKMT%@chqk)%%O`Q#&JeWG+|sjScjHJL8-M~!Yk9R@5n^NUhN81 z)^bkC7#d@iHL5o5eb4@|2q>{YYhUV>6>l%DROsdKAzp!K3a_yJd(@Q7#LlROqUQ-t;RiYn-LI4 z;V}i=UfwsGPQ)0PJ=e7-~zQK+nhpX)(uitqP4m+;dZ|7jSGOp9=lKX>ak2`zcWiEf zi;KJWI5o#+R%Lmig(V()2Z{KY7H&HH6>H!--$6pMT}JA}jLsb+%5k&fanX_7j#g(7 zQl-Zn`e8;!<22>Q<`zMs`xh{96#+~#MbPHimuDFBeSlORb$uC4bO!nM&z{5=7b4(E z;&NE0J_bm4yis1{WKe(r+#MQM0C==$XO>8441 zY`5)pYxhFIi!>G@Dm=UdK}JgRe0}1>6Qv*)8ZIRM$B!;< zZeKCGVooUtsN0*maw%pP=1T11fU89W6-#i{7g7tW8yE^cJ|v*GvuqAq=eVN?aM<%8 zDGwCik?VI75Y#;>bo(V;xUG(au4MG@*sgTs&(4Od#I#tyD&yXdlYw%{Z;&zch$>n` zvOk`2tjAKyu-Hy$0~90zbQQ)+^S`2*siorZPn1Hnz6Mxq9=|O*A=jq@aJG^@!NBqd zX@^F6@)g#^&p13hD|dHCLf+wnD@A881UFQrdBDahJ#BS!v(zfwnvd0_UY7E4Lteo6 zXz^yxq{xQ?q*Mw=plH1>M6x5U@d+612H|deOJ;Qj<^e4<$gxsQ^9qm3F zpUW6mnT-nUT}WZ*_tIC$jwPZIv&5SMh4#!Rtn!+#GLGgPFO(blJ*gYL$xP3#YL7mf zH^N@LN&Rc4FkCkZSiyY!Pxmr?&c7&5>;WOZPvwGM{XU}Sst7av;txfy=!hcxLq86> zuCA`V%Fls)E{1I@$v#JRkKKv9)vc|B{c`D5Ieq~^tMn`LP)kpna%Eb1!!hMBiKC^J zsH!@XckA>aSN6tM*)uXtSBHqx^INIR5leroG`kzMnHeV+H~&eEb@_(sVlT@@ni)h< zLnh?yg8pR1J9LaVmXFi&J2%%GuFH)PsqX;jvLaZdl((i`M^}8S)(voLG+f+a2&D|H zMiQ7IdV_(BrTEsHrEq<(l)xzdjIgilB_*iu0pI->>REFhrmf3W3yKO>278HmaT}Tg zXk?y6bQzvbm2MfA-P@VP2*OQfF=vz!Ag1Z6hCF<_RRJkWW1$Tcjcwi+yLWGy^pSh^ zE98DQ*c#Vu8!5s384i8wK3Z}IojCK*HEGQo5!+qr?MWHqo+5;5J~xL>z*9A6OYd9p 
z$?0oDMURB#j}Fga_9+2@pUeJe3uW68!Iep4J2)mnc||)|b|sKTc(?h0a4$*4J1)7# z*`m#`0WnJ*NO@9vygw%`it?>-HWZEHvTe}wI^+@T>_ZhgX{5X)xJpjV)_7ZT2xLz% zAGGe9zDX#awb+d{RX)M><3WP$Q6_Cv?wOOnwa{rAmG)ra0J5@gF_h)2hr%kyBQH^K zswM@$jd_r*znosy$GF8beh|k0m6YE`QBd zq1ZmT#;Sp(X7UM-Mz!p(-Y_7QGp#*~R!QHqX_wA$>$2 zW9yzBSAX(`m=Y+jT{r0tl{iK>gNyxr`?v$VN?kvf4J8Js7fz0^ygbIo1 z*i+Aa>b?+PsPo9ATKH4*GRu7%Vk&OcsheD8t+Xm?-=|~xmB+Bi#6|2f<=!EBE1)+Q zM6A4CE=OMpp-1ftoR3nHTA`MH;*i=gXhigS1PaUBC>dL6mCl3W9i{C)GAdcCU2hbN zhn`hP$e^*^S1Mh9@^ob-@wwP}I($Do9_M9R zptD8dvAwG?lxs@j8DBjmndDV)bI@A@o^#Uuc0q65HY{^7uyRW2wC1qP3{89r*iNx`qq;|zX zS4zB*uGNJ4aPOjzJ&C)Ok479!I!6V9srYlQ*EZUq4 z=|?oWmEw3(BMdcbmlCgu5fKLt59tS6&eqV?rfD@C-87Db1Yl31HljYI<6~aV0*#{E z?B=ayVPj6IK^@vd>FLsg`TG@R-LixxD#wfG2g<10AlSRW2m&s@7E&$^f!!Y0VLHat!lN=|+y&NwEUby>4BSTJsCw>t=-lfYourqD32dH(T;J1n~ z1L;?tW?mBag|lcXL*6|g>(MMWZgMtFtI~7(v2`=V1xW6kqu{RKaL_m@=XPQI@O5bR z2Tv(;ndLMByS;%;*kdTP)Gl*-Hzq{N)!FMSJRZR|4lAvUiAqcI77>TZC8dK(@Oab= zsglxIy=8C$9JsIQrw4!K0{mcE7|0sEf$bDW-qSUR;$hLOi^k*7>6>}%AHip<=e&H7 zr&Z~3rzfGvy;C%2q$uVXF%4rIrcOVJqi-*t?ImMeUeIS;0{{P1*|cQg8@p)ZZQgr| zrB8`5S4O**!@{F>POK(#fD~=JDBb6?XJWanlJ|NmN;v|QvIIG)R?+xL0sQK(q)Hi(Ck^98A{x z#%=4fYjad9GNrms!+;--@{2~_J*HC*)|wm_dU6#`e6`Blvb*HiftG#oEFWehTTK>7 z9~a|tjkthNpeaB>gf>OGQC`W(DSRzYwS7vvHPfrwfmc-{7kON~n^tx#ZvZ))tRQlt z+!jO)7gk#@#!UlxLm>5M{ zja^(lR-2O=?+*2X+M7+4dMco?pQ7bqZ|vrI~w{1Xn$FxwE1oO=w(hF-M`%w`wXFrlux~p}cfXldd!bS4K+=X>a`= z0B2kDh3o^wfR)Ksdm2E~GvB|<*k=o&(FO9r@i@wcb=}TC0J*If8!hrU$$KxIm>QvZ zJdjr0Sr=p1UosCv_yDjNDxAZ%tIPFvBZakJCbC*RUB*PPHiGaevNuT|^0e+1a0@VU z@{6TW|4WgUZXOhSWDbREY(RRiK-OuJS_sIHowbV8##oyB1knXc$$7%56j`ei03N9A6BbFxss`vYt^V26D-f-E!Mr)!UO;?Q6m>q;osYPu!a?uyij2 z9{8ePut+(V)Q;PB`KU?k6_sGrEl4B*SLu6%$njldV?p;BS~T$oP)FM8HB2<8M?*VQ zR}r-uObUq=T=}39EWbakG1+4==dnqpUyr0hwk(6o!D2tLp6L8T)9$Ug?)+9_%B)r> z+^7wi!_3JdBka*&d2-n9-&}AoQJ$nJVm0%nMZkW!?2Gv02sTe%P`%g7r?Fwy*VnHd zFZVFi^9ZIoE{Bqy=qv;kGiGRrVvhoJZ?SW8%dr6nElSczT)BPs{z9UNV}-8aw^!6& zUYv;QyVkUp`45a5!=f}i>iHf*T=Ku_tb0sn0|=dE2Q6&*?KCOnr_NA>Bsl+r%_Ka* 
zWE-am14R;*>2jSyjYiagCDgybnS@5rMMov$IE7l!b^QM0$a^9z%2jU@J$`IHoVl$* z&F2Xt3qLSI%jN5pK7I7)`uu7_(^({qd)Di&6?~oU$NWYayjG)W0Fpim1(A9R1<#sF*i&M|s5;q>(r%&kMvCLW zy|qbGPeY&wKc&e$RhUGGV1Hird^O2^>}=U^o+c~o#IYlup_KTn@b(TFuM@HXu+E-0 z%EhPo`5#@qNlHIGQSH<_>57OMf%GP1O47PZm0Q9;2pczN$n#Q}S>g4bV-5=r1`yXz z$nMx2@k0j}QL^vuW!+>HdsB4I_MYnc`F5q@GYVwh#q=2g*Zo!pO!uc@jR34|h&g@| zH|(;=`IV*EgO&y4k0Xj3C* z{!zCTtmS>nJ%1|?gUDho`l^tS7nCnqb#v0n53&BV*7tkJ=Pq2lsCx2fompXR0*GEl z-%PRj%Ed$9tGoB;uQ{ovpu*+wG^s$70XJ=qub&KGik+ORR!K@q4yT<5)wc^mpeK;+ z_IUQDHXZMt#YI^V7nnSjTdFhs@^oZO?%?j7haK!mk$cm21G_=hM~aH=})@ED^O|MaHk7>t$Bzt-r)UxTZ%?2u9=t1VXcY5&(u{ z?6dfyb(7P$3ds;?>E;&gEHAa%dLJ1REzCQ*T=Q80I19a^ri+8YqQFNbAmdi$s{QCb zd}jxZOylbqKT724M926n?ecDaAiO9<>v-bsu4}uW>+Q{zlr7Ej*6;Tao&Q8bZnIzeKX&m-*%{{uDmx+&naGsm|6tXTZe7*8sq zGpb>J1o`r(-qFX0?DYGJEu-Ko<^U^R$0oREfqofLStbbZT}lZS5rnjp%FgURCk*!t z$dJzMT$0)PiW6!cKuAB?!KR<6(-9>&l3#acMEuXw@mGsRb*cil$+j~ACaUbs&x)zg zocrUyszk z%l1cW_wQ-@a4_?}q+E<^CN$ zQJHKV)^X3!IH2oRpE|qz$E{5F2fyUEzcPcq zBFZvr&+sg7`-F?#b@^lSn?FDDKi6&53^2Pk50~;q0oGI7wuk&@M{Edi*Y2MaPA}Sj zZxrBI+WqSY{J~ZC`PUKnU1amiar@VQ`=kB({;&V`NAB*w{@Wifz5gfw?bs?y?Kdm{ zP+xzCH#0Ny{=q?ufB283PB z661G{L1{EmER@a3q+*|mqF;3T;-6cYmJPrA==>7P{_DSMD;c~)s>u?b&>%Ug|fh3#+O6U{6ZyM3^Z8;O*JeZyR-ycmsKiH ze`BcA6brI(qcg~!1Z~iQP_`U5E_e6H;Wn9=Og{Y`5Q%T7f{6DYV%Cs5zS|4EouYJf zJ;g`~X8>$HrAC5IGQ83Y^K@ryDn4({w!zaRSV8~L+$ZmTu&9-!U7{99>(;K&sg{xKBBM?n=*W9fY82 z`LZxtAD)BW%I&8(-W|0(8*+(uPo-O6_v$s8lqD6WY>!>}rRGe~v&^<)$ z8eXyRHmD(O+jqfvYfjpvV=fg?=K%jky(3ZBGqj-T(fNsszTdxJrsnr};u_9QEto62 z?u}naDW+BQ;^Uaeq+Yed{Lmg7DgtN=v36L33f*wwbei7!PM>yRqoJt6vOS;CN<5c4 z9xm=enR(~6)v;<>OJGuBc~DMA2ZZ%}#y*MbWBl$+u7U#hgmg|%d}6&P&?#Wd*Q|X8 zxM50zjDb6|RkpUH&IK-?DilO>2`*gAZHp_+u}^)&`$Vgvi+WQJaK|eUef>zblT!-l zSe;1#r%6ZFac$-Fkbv;!+^gWwVKs57E4XQClY-34ck!f{bEvQW*3s&Z=J@muuT=a@ zE@m;gY~Tz#B=DSHTr@jH8P~e3_6Li~pY|-|`B54SuhR_Eeh!-M&~4%o64C{pY%R`sHq>-ZB$6ixU@RHG)pp7iMg{|^)>kAZE*IEUI*|vjoKYz&SgC* z{vm*Zse@*sH8Y?Nx`POq)E{Z`I|ophxF#LZN&xth4w1LD0er0_^%MuS({{Xql{RIa 
z-nzD^p7xF#^0J^$))JQ~tLI)GC_mM&WjM65(sjiY zSpy_$)M}jcfUSA5(%ZdziCQENfO*NG%v#k@XatA;)Wea0;J$k>kRrD*b?2155@Yg) zU8gGfw9%hp)60%<=0PTp+o4v5Y7Du0c?*9p&2C z0Y^*-&cFtV$nZH@ScX=)#bR)KSb;}LrAI?9VBBRzvc+0VR2r>_NDH$|;3|C}2gSRK z-3fyFYAn;{QEPD`Mj21Nc6W(M07kS@AGFrfXn8-s+nBSz_~Z2ji|K6bP<$!yR&cPo zPV|c}9p<_Uyxg~eh?bRojmP)zj}k;(^>!&M6(jG(@WA$*Fz!t+34TCnVtNIgmojci zyw9?L*E;v=s;Q=1fO^8aqlU{OGIyQ})n+0ugA+mNNHRHG3L^XbBc1<6XXO{)-X`^P zIi6I;sUb#T1$F_W&mM5%g({TErZmfpW7PA}+TM~uEZp3x5=7V4TZra|((G?jIz5O3 z?Jni+roHgRTOP+GJiJXUu!0XNi4{+whcRmnxiTxJ;N6sxuzsqaoS%j`CK%WlfSwtCTwT?XUb@ufmbt`dQADyrVu!dsdpO7tm*SX@`z^*22K`GYMNLm_)B^ zbEstl6(cb%?1&7tsEbZB=+$`WFkdh>nKon;X{%LZRSo)L-G-f%c6=1BB_d}}d$Wtm zR`13{u0T1o)3UKo+f?K*`Ac=&V z8mKeh5HFkm=oQr|_a1Sl$?Ba zdL*skws34fyPT@fSOu&K7k>ZYXEDqX*LcZ1>(FvB6B#uC0#bky^00^_35`%14h%Nm znWO+(7^ya&-E}rie_z~rd5`+Ol;ly+sA(>uYbyqV+S6Hh*o)}ISbb3jJ!Oj)9`pu- zp69?T`5NWBp>(V0N|B`nF+Hi6%;)Zp1FrNw;RAwCpnqjvfBl(V>6wD-zYx7B_+U>! z7sP>_d0DHQK%-|J(5AjA>9dw9K!Dnf`VyCM5q$#N$x$j3jctDAl$$*85-Xb0JS~Tc zWpS;wDp~ek!`$3n<*4RosATpm!q+&m^pjkT{LLVY?#qa0%cBhmB52pc+$8Uqtft3r z5Z4b)Iww-ffHr7f%Hy#T=iC}6vqHV~I%tcP$QndGRy401#hay9U-SMv37`}ZSXqx( ztoOBtG41}?*0t^~Q2ah`nzg$^zgcrna1=c-CWE!CT1oe0o1Ls$m(v!l09a z$EB?TGG8lIrN%vZ-cV13QDFr`1&xa7dINr$G8m zaB<{tY=#zXRWn`W<&UAC@)eGVGOPdz{3P zi`7u^WaFltYGO49iw+tz-bPOgyN)kpX;L`u#`4shU@4Xlhbehq8CQ7YpF@F$g|Pyw z3hC0f%iUL$Fxxh|0;qyo7N)G;Ga*;LUacjYN&V{1;Iuw;19&mveYLj<)+KS63Hba6 z{;Fj5>(9Pt5Pbz$KiQlZv3{nc;64sACWK;hG8Ssh7I%Wk;k8au4qD;gSrfF)1`SU; zqM5@gY^Mz#3tqU!J`nN8!;m-i5j!~1X8YhcF5N#Akan2O<=+>I=)e!)k zcB+8Tc3J-XT($}ty2VX=vO!jx1o*mN&K{q=I2>I(y>z%Bkn5Ov$wjmB)Que)nOZ74myGTs7UbS7*XK2w=$W@ zrw@||78?4}^9&McPsN8dg>vl)3Qv?&KlR$o=JfDH8QK*Q%c=uX4NxeS9_KBrG>JB0 z)Wz*Y5x>G2*nNe?99hiVE=@B-l$coGh+uQa%VztJ^CSE7&d>62K{lY)eo?n$Mu#Ra zn_~77(AHb3YS}u{w8XR-MC68%WS*Ew5%k_TEq4)&w*JU)B1WCp_E3_f+#+?Np#g1) z5U=g(FB8oKYF5{G(YYMiy|oWIi=gPk^U#fcQwzn$0%VVN7Er>>1`#6Xy7e-TMNYH- zQJKpw5#<~q35mw$!2D|Gc0}j37WQ>W&-VDbN*R#;E%#iOR?B-%uh#^)2WHK0y3xy> 
zy+APoHG15&JLO(kNSs!;Io7#6nVXrP@@N*xDH|%Ev{#!m<>JZ`as`SR!$fR07uxAQ-7#&NxqCant7`3^Sw!YyguQ{KTam!;V6G8?+Q(UKx&GLibnms zyu4J44JAyvV|iwn413)O023?A$H(Vk(zIi*aL3GEZHLOiX{5>Y&smf^(ja8uJQu}G z#^tBz_Ph%CwrR3a16l5+tuEK7xjH{u);2y92EK}9-0(^Mm)Ars)dj3#WhR}x`B=+$ zfDdG1I+d-S8q^V4|JczKYmP61Om{bv%F-3cJ8TZlb}g)2U3rxGENNI*ZN0RV>X>(P zVbrnGa$0&D$)T8eX%#@7UK3qY?KhMN=Tk^@Go`fWSSdJ}WD;#m)v7VGS&{V^S&)rl z8%ecJJk9Qy4RDBd)D3Z{q`Xf^Fz?DDm5JlEj(^~N2sg{uDc5Q|L5I>9xoT|3xLE-*E~3LEkj7Mz<6V}3MSt1HL= zTHnlP6~bYkFRRs`x=g~yIq5E*Sd(LsT@vTj0j1_CO5=U$W&8GGDXH`CHk2G*83onr z;k2K5u+Xg?l6)Z$Yw>CzJpj{XM;#pw)B%zCO1DJ^XM>NgCpskz@)%9D(KZxb0;pe6 zMzAH+CyNpacx8iQj-s+Wc@~~uhz9Jq zr3}%`-T_-Om~#C&)R>XB(1aFHPU~-V&b{jO;5mh$j+m-BUN9|4bSpujqAwY-s% zPfyrmZ#cKGF3r2%2omow%}zoWNWrBJMPK^hS|=Gqqhw}Qh1IH21fjfC-+|4@GjM^( zCvvLw)^EHgBJV0Bw$#vd)?{&PEI@k(0@j9$W zJWVqxsh8|X6pC`)Tbhtgc+<(VTfa!E)KpHb-1Ee>rMk`g#LuH)P_nRL>l1o+%|sM1 zK)}NHC7h?9RD9-HGIULZ(}>Z>X?8tT+xsbiC&kBK_-j#@UkX5CM5OP-;vqB4sTSDqicfJ3i)sw&fO)x!L15J zQ`Jf{0%!A2s@W4jH(A!-j&&Kd?GoH}NCy&=O)h&)$36+99Zv(v)y{Ro*+!tUM7&}b z>M;>mRHX&Zb$?(;Ab;B~o)q7*%;~D`%1NF!at55;JitcLRqEiI_AYdtTFo5M_tb=X z?3N{wiPb1k%TBnTr)TIvX?bS0ZhYLN1FO20GVa0%Ry8p3!hbuC{y8N1Y<=x5$1h3G z9$o#u51Nicm97f)dmWv{5yQ3!`8w&6T9t7BX8&c3{VX{tY0A9pwN`c`Cv?|M z>P^8MA@}VpyOfuEUYXAl1~R%&04`b-j_lUn86=s`-fVpIG54c-wwA7B5Y^xpcP?j* zhz@&~7-7?bvGR2Q>W^FWUvKyb*x=&xIj6!9>g?KHlO}DUS6HOvgcz2Bc1gt5cAS2kqa(|YHbpH0!Onuo=39RWhFe*HFKQ90!a{{T+Bz5@4A|GmM?5 zn;ga5eqeh0Rrrnk-1RuEX33|1ILce=N#ovU!MfEGT>V{s>JK$okf@WUYrqL*HtimB z_9)Rk2f8C@s=(s10MUPx#s62LbN$xVg=NGnu}N3-g9YPi$%u-{hv-i$8ULH%LU4B3N9P zS1YF!Yn+x0MJazHK(CBwb2RRF)x8gl6HWtW-(uslx5wsTK{1Bl+&NT!rRUPfbcWft zWN(iYZH*XL$zM5{yWNW)E&A)skYTJF$oFNkhJc)R89B?Jb1@*WrRpAyqtBoTC zS`bf<#XbWbi@lK5?yrQRS;_>C`O)1vv+Hn6tMi#%)akQTK-_*Jqm~!j$#}=dI0$S0 z<;m`<=(oLfc$B{CA<;abq&Y2a7yx~<@Mj4k`82i-86*0uGz*3x@&EKu%;kZ(Y*?0a zuK(`9g-M$sbCqLvWe1#7-tlr@c5aRaCaLuUP4*-vXz00M&l4h%DVl+0c-(s}LyUydc?19^+p@*#<>{O}*)aYCoGgF#H zRrjnA+}0h1Eb3)ua>Oc`*=^(XYI$x-V2cK@DAh$1=f*#@R}RaRQ*nDz^)&cMd}?Y+ 
z8W31BXa@a}B%(R++z}8=2wPu%B$EoXxR+56j8^3oW4JJjeGLgRVT^(Kji!YnF3hzR z)--A;;L4jVt!Tnx5Qz;ZGfcI?z&q)gh5>T*;hrat^}zGhM2p;`!!5-8B&f;s^BEaM z(tGA(WcRAlJJm-uGrJtD*Jd)by-<3)4ur&X;N>m8VVZetv9}Hnsny?IVaQ8JN=hm* z?JNK(s%(&xAJ;6Js7#}Ay={%`iYsIcq?r1ql7V(ip978V7iye5l>)AD6!(4-D7OT8 zYAXT9y}S&$m<)YbTpnPIz&-uWef2&85FxB~>}BO2o*`qv764hAAM0^fUC-dAT^+eX z^f$m-U+p_~`M?$!05Jg%AoU>e${)2aO=cLyfq`~#2V!QJ4yiX)mXy=O5p9x~ODh`2 z#${UEmU$qZq#MI|%*PeVKeEEVJyZZS6)b7SxUx7RyNXk}IY!2XHa{Hi|H5X|9)5^7 z9Y|S73yTGNm}5C;AbmceU)!IAdN`&O>#^ZGMlsYlP2N_O^qKr1TRSsSy!p#q9Bbyw z$KqwXN!B2miV|?OC}d9%_)2r-+O-T4QLJhyvMcPcmQXlm1K#x5-|Ao?D&>;S4iCq$|gK+-|pu> zuT};XS~cIJLleC<2;iTdSdFv8PH^%up4BqpcWw6h=nH3Pp?M6l8h1^5sD7o{6}7r= zBh@m4dUKp}C081NZ!RC^o%HEr66{k&u&7JG+R!Uq$XF6rnG?bRHIQ%FRex>TWo^P8 zki|5B5}%S{AAxSo9m*jVm5j1Px4J-wxo_U^#~_;f^7OM_LNQJ=;zeG-FRoY1 zUG7&8z6(Tr>@V@jCjFjD(<>*I{xlFXl&0i2n!a$VEbuFf%FOhZ^tVVL&2Z4|V9BbH zKMb-(D<4z<8-LI!xMHA}40ZvN_yz~(=CJgWN2+kpnQG;t1YoLkeK)Q+29?7fY3*Cv zf}a*;qr&-YZG?wDRqMGOl7o{AAC^a1^#Xwk^|i`&m2%HsJ%TgwVXkj3c7*X&Qt(+N zQ|zk+C+N;sPG>F_s^t!0V_a%51uTR&ZQf6{?I5P7_4}jJ>fE<<>}kJhc!7Jr|3-G% zL$aXw$`>g#?p5g^gW^1RHFPV_j@$^6$i-Dky}7NT*e3y`R34|xL=<&uqHEYXAwaOY z-PL2il&h9_aYV3x_JJ%%!=VcYcg3rkrb`tuS=|ItSHGYW*PQzaaF7c02#kh5t$VVoc*C$~1;;dfc7TFY0`1_~7YHH{}V%cJrK zpLLq=p^zf75ttUagVrwWS3ohSRR8R?-?ja1gwOS5vqNQtc!5i`59=#qZ@n}rQbAT~ zu zF4aQ6cC-VGB|`d^uC;}*D;?ErJfI%Mw!*jf39M}%lj+~0J(Xj{f?cQJJqfkDEvB^D zPd0y4*4M@9D%jb*^+ua~8qU+pCChx)es4T{d_)5nIi)t!aZ-_*RaYE>{}e(mYn<{_ zCP;q*VPEy;^WLi9DMuRcb00`_71-r2Kj4yCK~(AkXbOu~Rnhya%z2>Mj$`4kMOHB) zpzIX>g(5WE0Z7V71cF9`{~K2I|NaM?SD+g(Wr|x|?<1TgUOu1BzlmOc?-gf1KYbhS zq<=J&zyGEGf`k8MQ2c5>{>8igG3xpk@A{`$z}Oqzj)U_rrZ9D zcin^IOBt{z-)}tkA?T;cC~3yHjYdp!tE-?oJJ{LTDGQSHDzV(=E3^uT0jdo_Z)RA7 zP$Fj{Yb&WfPPUz|7Y$r0*qRRiZ=Hz$xs|C{UO2Ey1|{7AltOGv9q$gtfVmi!ObCOIZsdCX+H*^HdY4 z>M@Xsy??Ll=2ms<%P*lKCuGNE$|vi_pRCRQjPduMK9l05q!>B z$Konm;m*>$2^e~uI%x4%3==Yub^N@3kPb2kZXIp~H`?ZL)9#Xq5s^e`)Rc8ZF;%t& z#rgge(-+u}2|eGi%-*}4ELu0Yj1>hy=CAT*dv%Q`4l@8DD}DS(bb$l}?5~LCI#~;y 
zmO>ppf+YTQ)c%(R`A*~uGMUt(?lO9MOY_!vQhItIM@G#>$G@rf0Mrm9Bqcjf1?R0! zm-xa3l^`q42sX=WOqT*;f0iz!TqO3Qk09Z~MF4cG95?|LP>^crfDl$(ov6!=fQU;m z`WRzDLzF`!JSlUA`OYZGrGePS;-8%yx)3kltWyl4yH1tUVlOr}pgKf9>>d%lp^cAcTUT=iFK5 z?_SAu^;~x!zX1c!WN(C^<8tt%7~dL48gRkBG&ir(g3V=1qaVj>^pj@`;MMu@AZn-5 zIx^;iERWBUiFLeamP7)SUH2{%|A>@%TShESsIMzq^EFUddV8Mhf-*Q3%@OqVQ4Sbn z32OEKVaBfd0=#+xKdNx!tyKC6pI5j@y`~47?#h-WvX1f9oV0e%dAgW4)nKY#NK7`1|)CR3r0ZB7`l~A>uM!mHyD5+iAY4>@T*M7W`fUB;cn3 zunjgoYQF1y6|5s$7z5hv2wCsl7R8};0&=qu7M}lQpFv* zLXzZoUxhzNbi? zOI)AilfNhVX~>-}<%Oyzg(p6xP`trCrB8HJVw#ig9R7hegx*I(_f0<6q2L~7L%-?+&!c$sA&KM+Nv0Dt# zaZ%)Z$@#AQk=6&qYN+xm4V>&S;j1&L4?p243nUGOv(i%!Za(Z@9lhkQLvB8CaVaX8 zcF-TA`z)iS)$V7WmZweUz|e1w8gCXk-rlv0eYOzKdPpilG5FAC@CMt+8~WV|kNlNa zYHDIEVFlX7-SChMZuv|qq1fK@^xdn7H5=Ojn@ZW*+S;;O(CDw1*RokY_}>#%WmK^iEt%vw{l$&XpJ>a1poVAQlUFI0HY z7e1Csr`EQLRwgwUpcy>!l8G~r;;Zb0ybIT|iZqZ?QZQe1_m8IhhY1GL^1=jiQgf~O>AD7mN;c`=|*ArCP+gGY;A3kZT$Muo#gXwWL>|gnbO;tzgE4v$Gu=duOTI~G-cuW6}x4g=90v1SxVr}CdOIh-( z>*8kLb2#(neFjM{x2b`j+w|Av{S^-Wq|V3v?2)yyyOT z>3lw8uD*@5+*LXkvRq-Y3HtpoxLw^Jf(K76tmvQQ6ND736!=MPZUysNlRowMwAGI~ zo|OK_mAOo}(Mo{aQ1WW8F4xkXG3#zhEdlBfy;k3A>c^aO%TLPmJZDAm^n?F-TZ?_) z#T#LRClb-sV+W^CK%<9~(l-jP1J8s^owV}d|NY@VJ+hyBiFD3|{AkE5I9^%U-4nr0 zyIzpB(SM<-F5#b_-skJ{xqmz0uFp`@Sw4b)ei-=1{4-}lwAv}7{`t$re8kS-;Ym^7 z`^5Y+w&k)J1D(&1%v9Ci+PmMrN#OEjvE>(}^1eSc4pv;;H1(}khJUuFz|i;tp7pr^ zGoqgwhZf;^y7Evy_Mg~_IKG#_+zC8?qVf|*5R5}c3@1CH&YtkMSN^+ez=;mbUDK^t zyq_9}I#`zcxxjxmLEzEJ_+aiXUF*O4caQ(?5X-Jy7PDZm)%mHdM1B4|opaIKjGtK* zvE&G_EUfOcKVebMUtSagbLS_){qL6kWg-4Q-O?Kz61P@fby@3D;($NW63?C#h`)IG Fe*jZGp*;Wq literal 0 HcmV?d00001 diff --git a/src/content/docs/aws/services/events.md b/src/content/docs/aws/services/events.md index 53d50aab..74a82d07 100644 --- a/src/content/docs/aws/services/events.md +++ b/src/content/docs/aws/services/events.md @@ -150,6 +150,8 @@ At this time LocalStack supports the following [target types](https://docs.aws.a The LocalStack Web Application provides a Resource Browser for managing EventBridge 
Buses. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **EventBridge** under the **App Integration** section. +![EventBridge Resource Browser](/images/aws/eventbridge-resource-browser.png) + The Resource Browser allows you to perform the following actions: - **View the Event Buses**: You can view the list of EventBridge Buses running locally, alongside their Amazon Resource Names (ARNs) and Policies. From 36cbea886cbce656488123ba1ea0b0699d9ac0da Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 17:49:10 +0530 Subject: [PATCH 45/80] revamp fis --- src/content/docs/aws/services/fis.md | 66 +++++++++++++++------------- 1 file changed, 35 insertions(+), 31 deletions(-) diff --git a/src/content/docs/aws/services/fis.md b/src/content/docs/aws/services/fis.md index 9f30687e..1e581b01 100644 --- a/src/content/docs/aws/services/fis.md +++ b/src/content/docs/aws/services/fis.md @@ -1,8 +1,6 @@ --- -title: "Fault Injection Service (FIS)" -linkTitle: "Fault Injection Service (FIS)" -description: > - Get started with Fault Injection Service (FIS) on LocalStack +title: Fault Injection Service (FIS) +description: Get started with Fault Injection Service (FIS) on LocalStack tags: ["Ultimate"] --- @@ -13,11 +11,11 @@ FIS simulates faults such as resource unavailability and service errors to asses The full list of such possible fault injections is available in the [AWS docs](https://docs.aws.amazon.com/fis/latest/userguide/fis-actions-reference.html). LocalStack allows you to use the FIS APIs in your local environment to introduce faults in other services, in order to check how your setup behaves when parts of it stop working locally. -The supported APIs are available on our [API coverage page]({{< ref "coverage_fis" >}}), which provides information on the extent of FIS API's integration with LocalStack. 
+The supported APIs are available on our [API coverage page](), which provides information on the extent of FIS API's integration with LocalStack. -{{< callout "tip" >}} -LocalStack also features its own powerful chaos engineering tool, [Chaos API]({{< ref "chaos-api" >}}). -{{< /callout >}} +:::note +LocalStack also features its own powerful chaos engineering tool, [Chaos API](/aws/capabilities/chaos-engineering/chaos-api). +::: ## Concepts @@ -30,10 +28,10 @@ FIS defines the following elements: Together this is termed as an Experiment. After the designated time, running experiments restore systems to their original state and cease introducing faults. -{{< callout "note" >}} +:::note FIS experiment emulation is part of LocalStack Enterprise. If you'd like to try it out, please [contact us](https://www.localstack.cloud/demo). -{{< /callout >}} +::: FIS actions can be categorized into two main types: @@ -89,9 +87,9 @@ Nonetheless, they are obligatory fields according to AWS specifications and must Run the following command to create an FIS experiment template using the configuration file we just created: -{{< command >}} -$ awslocal fis create-experiment-template --cli-input-json file://create-experiment.json -{{< /command >}} +```bash +awslocal fis create-experiment-template --cli-input-json file://create-experiment.json +``` The following output would be retrieved: @@ -132,24 +130,27 @@ The following output would be retrieved: You can list all the templates you have created using the [`ListExperimentTemplates`](https://docs.aws.amazon.com/fis/latest/APIReference/API_ListExperimentTemplates.html): -{{< command >}} -$ awslocal fis list-experiment-templates -{{< /command >}} +```bash +awslocal fis list-experiment-templates +``` ### Starting the experiment Now let us start an EC2 instance that will match the criteria we specified in the experiment template. 
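The experiment template ties targets to resources through their tags, which is why the instance we launch next carries the `foo=bar` tag. The selection semantics can be sketched in a few lines — a purely illustrative model of tag matching plus selection modes such as `ALL` and `COUNT(n)`, not LocalStack's implementation (`select_targets` and the sample fleet are hypothetical):

```python
def select_targets(instances, tag_filters, selection_mode="ALL"):
    """Illustrative tag-based target selection: keep instances whose tags
    contain every requested key/value pair, then apply the selection mode
    (ALL, COUNT(n), or PERCENT(n))."""
    matched = [inst for inst in instances
               if all(inst.get("tags", {}).get(k) == v
                      for k, v in tag_filters.items())]
    if selection_mode == "ALL":
        return matched
    kind, _, arg = selection_mode.partition("(")
    n = int(arg.rstrip(")"))
    if kind == "COUNT":
        return matched[:n]
    if kind == "PERCENT":
        return matched[:len(matched) * n // 100]
    raise ValueError(f"unsupported selection mode: {selection_mode}")


fleet = [
    {"id": "i-1", "tags": {"foo": "bar"}},
    {"id": "i-2", "tags": {"foo": "baz"}},
    {"id": "i-3", "tags": {"foo": "bar", "env": "dev"}},
]
# i-1 and i-3 both carry the tag foo=bar, so both match under ALL:
print([inst["id"] for inst in select_targets(fleet, {"foo": "bar"})])
```

An instance only needs to carry every tag named in the filter; extra tags (like `env=dev` above) do not exclude it.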
-{{< command >}} -$ awslocal ec2 run-instances --image-id ami-024f768332f0 --count 1 --tag-specifications '{"ResourceType": "instance", "Tags": [{"Key": "foo", "Value": "bar"}]}' -{{< /command >}} +```bash +awslocal ec2 run-instances \ + --image-id ami-024f768332f0 \ + --count 1 \ + --tag-specifications '{"ResourceType": "instance", "Tags": [{"Key": "foo", "Value": "bar"}]}' +``` You can start the experiment using the [`StartExperiment`](https://docs.aws.amazon.com/fis/latest/APIReference/API_StartExperiment.html). Run the following command and specify the ID of the experiment template you created earlier: -{{< command >}} -$ awslocal fis start-experiment --experiment-template-id ad16589a-4a91-4aee-88df-c33446605882 -{{< /command >}} +```bash +awslocal fis start-experiment --experiment-template-id ad16589a-4a91-4aee-88df-c33446605882 +``` The following output would be retrieved: @@ -194,25 +195,28 @@ The following output would be retrieved: You can use the [`ListExperiments`](https://docs.aws.amazon.com/fis/latest/APIReference/API_ListExperiments.html) to check the status of your experiment. Run the following command: -{{< command >}} -$ awslocal fis list-experiments -{{< /command >}} +```bash +awslocal fis list-experiments +``` You can fetch the details of your experiment using the [`GetExperiment`](https://docs.aws.amazon.com/fis/latest/APIReference/API_GetExperiment.html) API. Run the following command and specify the ID of the experiment you created earlier: -{{< command >}} -$ awslocal fis get-experiment --id efee7c02-8733-4d7c-9628-1b60bbec9759 -{{< /command >}} +```bash +awslocal fis get-experiment --id efee7c02-8733-4d7c-9628-1b60bbec9759 +``` ### Verifying the outcome You can now test that the experiment is working as expected by trying to obtain the state of the EC2 instance using [`DescribeInstanceStatus`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstanceStatus.html). 
Run the following command: -{{< command >}} -$ awslocal ec2 describe-instance-status --instance-ids i-3c40b52ab72f99c63 --output json --query InstanceStatuses[0].InstanceState -{{< /command >}} +```bash +awslocal ec2 describe-instance-status \ + --instance-ids i-3c40b52ab72f99c63 \ + --output json \ + --query InstanceStatuses[0].InstanceState +``` If everything happened as expected, the following output would be retrieved: From e1912023ce4d979ffd2e8569248bcfdfb753fe70 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 18:00:16 +0530 Subject: [PATCH 46/80] revamp glacier --- src/content/docs/aws/services/glacier.md | 94 ++++++++++++++---------- 1 file changed, 57 insertions(+), 37 deletions(-) diff --git a/src/content/docs/aws/services/glacier.md b/src/content/docs/aws/services/glacier.md index 836f7ff7..c9051546 100644 --- a/src/content/docs/aws/services/glacier.md +++ b/src/content/docs/aws/services/glacier.md @@ -1,6 +1,5 @@ --- -title: "Glacier" -linkTitle: "Glacier" +title: Glacier description: Get started with S3 Glacier on LocalStack tags: ["Ultimate"] persistence: supported @@ -16,7 +15,7 @@ Glacier uses Jobs to retrieve the data in an Archive or list the inventory of a LocalStack allows you to use the Glacier APIs in your local environment to manage Vaults and Archives. You can use the Glacier API to configure and set up vaults where you can store archives and manage them. -The supported APIs are available on our [API coverage page]({{< ref "coverage_glacier" >}}), which provides information on the extent of Glacier's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Glacier's integration with LocalStack. ## Getting started @@ -30,16 +29,16 @@ We will demonstrate how to create a vault, upload an archive, initiate a job to You can create a vault using the [`CreateVault`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-vault-put.html) API. 
Run the follow command to create a Glacier Vault named `sample-vault`. -{{< command >}} -$ awslocal glacier create-vault --vault-name sample-vault --account-id - -{{< /command >}} +```bash +awslocal glacier create-vault --vault-name sample-vault --account-id - +``` You can get the details from your vault using the [`DescribeVault`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-vault-get.html) API. Run the following command to describe your vault. -{{< command >}} -$ awslocal glacier describe-vault --vault-name sample-vault --account-id - -{{< /command >}} +```bash +awslocal glacier describe-vault --vault-name sample-vault --account-id - +``` On successful creation of the Glacier vault, you will see the following output: @@ -60,9 +59,9 @@ You can upload an archive or an individual file to a vault using the [`UploadArc Download a random image from the internet and save it as `image.jpg`. Run the following command to upload the file to your Glacier vault: -{{< command >}} -$ awslocal glacier upload-archive --vault-name sample-vault --account-id - --body image.jpg -{{< /command >}} +```bash +awslocal glacier upload-archive --vault-name sample-vault --account-id - --body image.jpg +``` On successful upload of the Glacier archive, you will see the following output: @@ -79,9 +78,13 @@ On successful upload of the Glacier archive, you will see the following output: You can initiate the retrieval of an archive from a vault using the [`InitiateJob`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-initiate-job-post.html) API. To download an archive, you will need to initiate an `archive-retrieval` job first to make the Archive available for download. 
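Glacier jobs are asynchronous: initiating a retrieval only starts the work, and the output can be fetched once the job reports completion. Client code therefore typically polls the job status in a loop. A minimal, generic polling sketch (the `get_status` callback is a hypothetical stand-in for a `DescribeJob`/`ListJobs` call, and `wait_for_job` is not an SDK method):

```python
import itertools
import time


def wait_for_job(get_status, delay=0.01, max_attempts=50):
    """Poll a Glacier-style asynchronous job until it reports completion.
    `get_status` returns a dict with a boolean `Completed` field."""
    for attempt in itertools.count(1):
        status = get_status()
        if status.get("Completed"):
            return status
        if attempt >= max_attempts:
            raise TimeoutError("job did not complete in time")
        time.sleep(delay)


# Simulate a job that completes on the third poll:
responses = iter([
    {"Completed": False, "StatusCode": "InProgress"},
    {"Completed": False, "StatusCode": "InProgress"},
    {"Completed": True, "StatusCode": "Succeeded"},
])
print(wait_for_job(lambda: next(responses))["StatusCode"])
```

With a real client, `get_status` would look the job up by its ID, and only a completed status would gate the later call to `GetJobOutput`.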
-{{< command >}} -$ awslocal glacier initiate-job --vault-name sample-vault --account-id - --job-parameters '{"Type":"archive-retrieval","ArchiveId":"d41d8cd98f00b204e9800998ecf8427e"}' -{{< /command >}} + +```bash +awslocal glacier initiate-job \ + --vault-name sample-vault \ + --account-id - \ + --job-parameters '{"Type":"archive-retrieval","ArchiveId":"d41d8cd98f00b204e9800998ecf8427e"}' +``` On successful execution of the job, you will see the following output: @@ -96,9 +99,9 @@ On successful execution of the job, you will see the following output: You can list the current and previous processes, called Jobs, to monitor the requests sent to the Glacier API using the [`ListJobs`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-jobs-get.html) API. -{{< command >}} -$ awslocal glacier list-jobs --vault-name sample-vault --account-id - -{{< /command >}} +```bash +awslocal glacier list-jobs --vault-name sample-vault --account-id - +``` On successful execution of the command, you will see the following output: @@ -130,22 +133,30 @@ The data download process can be verified through the previous `ListJobs` call t Once the `ArchiveRetrieval` Job is complete, the data can be downloaded. You can use the `JobId` of the Job to download your archive with the following command: -{{< command >}} -$ awslocal glacier get-job-output --vault-name sample-vault --account-id - --job-id 25CEOTJ7ZUR5Q7YY0B1O55AE4C3L1502EOHWMNY10IIYEBWEQB73D23S8BVYO9RTRTPLRK2LJLUCCRM52GDV87C9A4JW my-archive.jpg -{{< /command >}} +```bash +awslocal glacier get-job-output \ + --vault-name sample-vault \ + --account-id - \ + --job-id 25CEOTJ7ZUR5Q7YY0B1O55AE4C3L1502EOHWMNY10IIYEBWEQB73D23S8BVYO9RTRTPLRK2LJLUCCRM52GDV87C9A4JW \ + my-archive.jpg +``` -{{< callout >}} +:::danger Please not that currently, this operation is only mocked, and will create an empty file named `my-archive.jpg`, not containing the contents of your archive. 
-{{< /callout >}} +::: ### Retrieve the inventory information You can also initiate the retrieval of the inventory of a vault using the same [`InitiateJob`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-initiate-job-post.html) API. Initiate a job of the specified type to get the details of the individual inventory items inside a Vault using the `initiate-job` command: -{{< command >}} -$ awslocal glacier initiate-job --vault-name sample-vault --account-id - --job-parameters '{"Type":"inventory-retrieval","ArchiveId":"d41d8cd98f00b204e9800998ecf8427e"}' -{{< /command >}} + +```bash +awslocal glacier initiate-job \ + --vault-name sample-vault \ + --account-id - \ + --job-parameters '{"Type":"inventory-retrieval","ArchiveId":"d41d8cd98f00b204e9800998ecf8427e"}' +``` On successful execution of the command, you will see the following output: @@ -157,10 +168,14 @@ On successful execution of the command, you will see the following output: ``` In the same fashion as the archive retrieval, you can now download the result of the inventory retrieval job using `GetJobOutput` using the `JobId` from the result of the previous command: -{{< command >}} -$ awslocal glacier get-job-output \ - --vault-name sample-vault --account-id - --job-id P5972CSWFR803BHX48OD1A7JWNBFJUMYVWCMZWY55ZJPIJMG1XWFV9ISZPZH1X3LBF0UV3UG6ORETM0EHE5R86Z47B1F inventory.json -{{< /command >}} + +```bash +awslocal glacier get-job-output \ + --vault-name sample-vault \ + --account-id - \ + --job-id P5972CSWFR803BHX48OD1A7JWNBFJUMYVWCMZWY55ZJPIJMG1XWFV9ISZPZH1X3LBF0UV3UG6ORETM0EHE5R86Z47B1F \ + inventory.json +``` Inspecting the content of the `inventory.json` file, we can find an inventory of the vault: @@ -186,16 +201,21 @@ You can delete a Glacier archive using the [`DeleteArchive`](https://docs.aws.am Run the following command to delete the previously created archive: -{{< command >}} -$ awslocal glacier delete-archive \ - --vault-name sample-vault --account-id - --archive-id 
d41d8cd98f00b204e9800998ecf8427e -{{< /command >}} +```bash +awslocal glacier delete-archive \ + --vault-name sample-vault \ + --account-id - \ + --archive-id d41d8cd98f00b204e9800998ecf8427e +``` ### Delete a vault You can delete a Glacier vault with the [`DeleteVault`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-vault-delete.html) API. Run the following command to delete the vault: -{{< command >}} -$ awslocal glacier delete-vault --vault-name sample-vault --account-id - -{{< /command >}} + +```bash +awslocal glacier delete-vault \ + --vault-name sample-vault \ + --account-id - +``` From 912d749056299e03e796e3ed7f6c5a8bb597697e Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 18:06:47 +0530 Subject: [PATCH 47/80] revamp glue --- src/content/docs/aws/services/glue.md | 242 ++++++++++++++------------ 1 file changed, 134 insertions(+), 108 deletions(-) diff --git a/src/content/docs/aws/services/glue.md b/src/content/docs/aws/services/glue.md index 9b148398..f174f552 100644 --- a/src/content/docs/aws/services/glue.md +++ b/src/content/docs/aws/services/glue.md @@ -1,6 +1,5 @@ --- title: Glue -linkTitle: Glue description: Get started with Glue on LocalStack tags: ["Ultimate"] --- @@ -10,9 +9,9 @@ tags: ["Ultimate"] The Glue API in LocalStack Pro allows you to run ETL (Extract-Transform-Load) jobs locally, maintaining table metadata in the local Glue data catalog, and using the Spark ecosystem (PySpark/Scala) to run data processing workflows. LocalStack allows you to use the Glue APIs in your local environment. -The supported APIs are available on our [API coverage page](/references/coverage/coverage_glue/), which provides information on the extent of Glue's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Glue's integration with LocalStack. 
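Conceptually, the Glue data catalog is a simple hierarchy: databases contain tables, and each table carries metadata such as a storage location. A toy in-memory model of that shape (purely illustrative — not how LocalStack implements the catalog):

```python
from dataclasses import dataclass, field


@dataclass
class Catalog:
    """Toy two-level catalog: database name -> {table name -> metadata}."""
    databases: dict = field(default_factory=dict)

    def create_database(self, name):
        self.databases.setdefault(name, {})

    def create_table(self, database, table, **metadata):
        self.databases[database][table] = metadata

    def get_tables(self, database):
        return sorted(self.databases[database])


catalog = Catalog()
catalog.create_database("db1")
catalog.create_table("db1", "table1", location="s3://test/table1")
print(catalog.get_tables("db1"))
```

The `create-database`, `create-table`, and `get-tables` calls in the sections below manipulate exactly this kind of structure.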
-{{< callout >}} +:::note LocalStack now includes a container-based Glue Job executor, enabling Glue jobs to run within a Docker environment. Previously, LocalStack relied on a pre-packaged binary that included Spark and other required components. The new executor leverages the `aws-glue-libs` Docker image, provides better production parity, faster startup times, and more reliable execution. @@ -27,7 +26,7 @@ Key enhancements include: To use it, set `GLUE_JOB_EXECUTOR=docker` and `GLUE_JOB_EXECUTOR_PROVIDER=v2` in your LocalStack configuration. The new executor additionally deprecates older versions of Glue (`0.9`, `1.0`, `2.0`). -{{< /callout >}} +::: ## Getting started @@ -36,20 +35,20 @@ This guide is designed for users new to Glue and assumes basic knowledge of the Start your LocalStack container using your preferred method. We will demonstrate how to create databases and table metadata in Glue, run Glue ETL jobs, import databases from Athena, and run Glue Crawlers with the AWS CLI. -{{< callout >}} -In order to run Glue jobs, some additional dependencies have to be fetched from the network, including a Docker image of apprx. -1.5GB which includes Spark, Presto, Hive and other tools. +:::note +In order to run Glue jobs, some additional dependencies have to be fetched from the network, including a Docker image of approximately 1.5GB which includes Spark, Presto, Hive and other tools. These dependencies are automatically fetched when you start up the service, so please make sure you're on a decent internet connection when pulling the dependencies for the first time. 
-{{< /callout >}}
+:::

### Creating Databases and Table Metadata

The commands below illustrate the creation of some very basic entries (databases, tables) in the Glue data catalog:
-{{< command >}}
-$ awslocal glue create-database --database-input '{"Name":"db1"}'
-$ awslocal glue create-table --database db1 --table-input '{"Name":"table1"}'
-$ awslocal glue get-tables --database db1
-{{< /command >}}
+
+```bash
+awslocal glue create-database --database-input '{"Name":"db1"}'
+awslocal glue create-table --database db1 --table-input '{"Name":"table1"}'
+awslocal glue get-tables --database db1
+```

You should see the following output:

@@ -87,27 +86,32 @@ if __name__ == '__main__':
```

You can now copy the script to an S3 bucket:
-{{< command >}}
-$ awslocal s3 mb s3://glue-test
-$ awslocal s3 cp job.py s3://glue-test/job.py
-{{< / command >}}
+
+```bash
+awslocal s3 mb s3://glue-test
+awslocal s3 cp job.py s3://glue-test/job.py
+```

Next, you can create a job definition:

-{{< command >}}
-$ awslocal glue create-job --name job1 --role arn:aws:iam::000000000000:role/glue-role \
-    --command '{"Name": "pythonshell", "ScriptLocation": "s3://glue-test/job.py"}'
-{{< / command >}}
+```bash
+awslocal glue create-job \
+  --name job1 \
+  --role arn:aws:iam::000000000000:role/glue-role \
+  --command '{"Name": "pythonshell", "ScriptLocation": "s3://glue-test/job.py"}'
+```

You can finally start the job execution:
-{{< command >}}
-$ awslocal glue start-job-run --job-name job1
-{{< / command >}}
+```bash
+awslocal glue start-job-run --job-name job1
+```
+
The returned `JobRunId` can be used to query the status of the job execution, until it becomes `SUCCEEDED`:
-{{< command >}}
-$ awslocal glue get-job-run --job-name job1 --run-id <run-id>
-{{< / command >}}
+
+```bash
+awslocal glue get-job-run --job-name job1 --run-id <run-id>
+```

You should see the following output:

@@ -136,16 +140,17 @@ CREATE EXTERNAL TABLE db2.table2 (a1 Date, a2 STRING, a3 INT) LOCATION 's3://tes
```

Then this command 
will import these DB/table definitions into the Glue data catalog:
-{{< command >}}
-$ awslocal glue import-catalog-to-glue
-{{< /command >}}
+
+```bash
+awslocal glue import-catalog-to-glue
+```

Afterwards, the databases and tables will be available in Glue.
You can query the databases with the `get-databases` operation:

-{{< command >}}
-$ awslocal glue get-databases
-{{< /command >}}
+```bash
+awslocal glue get-databases
+```

You should see the following output:

@@ -166,9 +171,11 @@ You should see the following output:
```

And you can query the tables with the `get-tables` operation:
-{{< command >}}
-$ awslocal glue get-tables --database-name db2
-{{< / command >}}
+
+```bash
+awslocal glue get-tables --database-name db2
+```
+
You should see the following output:

```json

@@ -203,28 +210,33 @@ The example below illustrates crawling tables and partition metadata from S3 buc

You can first create an S3 bucket with a couple of items:

-{{< command >}}
-$ awslocal s3 mb s3://test
-$ printf "1, 2, 3, 4\n5, 6, 7, 8" > /tmp/file.csv
-$ awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Jan/day=1/file.csv
-$ awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Jan/day=2/file.csv
-$ awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Feb/day=1/file.csv
-$ awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Feb/day=2/file.csv
-{{< / command >}}
+```bash
+awslocal s3 mb s3://test
+printf "1, 2, 3, 4\n5, 6, 7, 8" > /tmp/file.csv
+awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Jan/day=1/file.csv
+awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Jan/day=2/file.csv
+awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Feb/day=1/file.csv
+awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Feb/day=2/file.csv
+```

You can then create and trigger the crawler:

-{{< command >}}
-$ awslocal glue create-database --database-input '{"Name":"db1"}'
-$ awslocal glue create-crawler --name c1 
--database-name db1 --role arn:aws:iam::000000000000:role/glue-role --targets '{"S3Targets": [{"Path": "s3://test/table1"}]}' -$ awslocal glue start-crawler --name c1 -{{< / command >}} +```bash +awslocal glue create-database --database-input '{"Name":"db1"}' +awslocal glue create-crawler \ + --name c1 \ + --database-name db1 \ + --role arn:aws:iam::000000000000:role/glue-role \ + --targets '{"S3Targets": [{"Path": "s3://test/table1"}]}' +awslocal glue start-crawler --name c1 +``` Finally, you can query the table metadata that has been created by the crawler: -{{< command >}} -$ awslocal glue get-tables --database-name db1 -{{< / command >}} +```bash +awslocal glue get-tables --database-name db1 +``` + You should see the following output: ```json @@ -237,9 +249,11 @@ You should see the following output: ``` You can also query the created table partitions: -{{< command >}} -$ awslocal glue get-partitions --database-name db1 --table-name table1 -{{< / command >}} + +```bash +awslocal glue get-partitions --database-name db1 --table-name table1 +``` + You should see the following output: ```json @@ -257,9 +271,16 @@ When using JDBC crawlers, you can point your crawler towards a Redshift database Below is a rough outline of the steps required to get the integration for the JDBC crawler working. 
You can first create the local Redshift cluster via:

-{{< command >}}
-$ awslocal redshift create-cluster --cluster-identifier c1 --node-type dc1.large --master-username test --master-user-password test --db-name db1
-{{< / command >}}
+
+```bash
+awslocal redshift create-cluster \
+  --cluster-identifier c1 \
+  --node-type dc1.large \
+  --master-username test \
+  --master-user-password test \
+  --db-name db1
+```
+
The output of this command contains the endpoint address of the created Redshift database:

```json

@@ -275,18 +296,23 @@ Then you can use any JDBC or Postgres client to create a table `mytable1` in the

Next, you're creating the Glue database, the JDBC connection, as well as the crawler:

-{{< command >}}
-$ awslocal glue create-database --database-input '{"Name":"gluedb1"}'
-$ awslocal glue create-connection --connection-input \
+```bash
+awslocal glue create-database --database-input '{"Name":"gluedb1"}'
+awslocal glue create-connection --connection-input \
'{"Name":"conn1","ConnectionType":"JDBC","ConnectionProperties":{"USERNAME":"test","PASSWORD":"test","JDBC_CONNECTION_URL":"jdbc:redshift://localhost.localstack.cloud:4510/db1"}}'
-$ awslocal glue create-crawler --name c1 --database-name gluedb1 --role arn:aws:iam::000000000000:role/glue-role --targets '{"JdbcTargets":[{"ConnectionName":"conn1","Path":"db1/%/mytable1"}]}'
-$ awslocal glue start-crawler --name c1
-{{< / command >}}
+awslocal glue create-crawler \
+  --name c1 \
+  --database-name gluedb1 \
+  --role arn:aws:iam::000000000000:role/glue-role \
+  --targets '{"JdbcTargets":[{"ConnectionName":"conn1","Path":"db1/%/mytable1"}]}'
+awslocal glue start-crawler --name c1
+```

Once the crawler has started, you have to wait until the `State` turns to `READY` when querying the current state:

-{{< command >}}
-$ awslocal glue get-crawler --name c1
-{{< /command >}}
+
+```bash
+awslocal glue get-crawler --name c1
+```

Once the crawler has finished running and is back in `READY` state, the Glue table 
within the `gluedb1` DB should have been populated and can be queried via the API. @@ -296,21 +322,27 @@ The Glue Schema Registry allows you to centrally discover, control, and evolve d With the Schema Registry, you can manage and enforce schemas and schema compatibilities in your streaming applications. It integrates nicely with [Managed Streaming for Kafka (MSK)](../managed-streaming-for-kafka). -{{< callout >}} +:::note Currently, LocalStack supports the AVRO dataformat for the Glue Schema Registry. Support for other dataformats will be added in the future. -{{< /callout >}} +::: You can create a schema registry with the following command: -{{< command >}} -$ awslocal glue create-registry --registry-name demo-registry -{{< /command >}} + +```bash +awslocal glue create-registry --registry-name demo-registry +``` You can create a schema in the newly created registry with the `create-schema` command: -{{< command >}} -$ awslocal glue create-schema --schema-name demo-schema --registry-id RegistryName=demo-registry --data-format AVRO --compatibility FORWARD \ - --schema-definition '{"type":"record","namespace":"Demo","name":"Person","fields":[{"name":"Name","type":"string"}]}' -{{< /command >}} + +```bash +awslocal glue create-schema --schema-name demo-schema \ + --registry-id RegistryName=demo-registry \ + --data-format AVRO \ + --compatibility FORWARD \ + --schema-definition '{"type":"record","namespace":"Demo","name":"Person","fields":[{"name":"Name","type":"string"}]}' +``` + You should see the following output: ```json @@ -331,10 +363,12 @@ You should see the following output: ``` Once the schema has been created, you can create a new version: -{{< command >}} -$ awslocal glue register-schema-version --schema-id SchemaName=demo-schema,RegistryName=demo-registry \ - --schema-definition '{"type":"record","namespace":"Demo","name":"Person","fields":[{"name":"Name","type":"string"}, {"name":"Address","type":"string"}]}' -{{< /command >}} + +```bash +awslocal glue 
register-schema-version \ + --schema-id SchemaName=demo-schema,RegistryName=demo-registry \ + --schema-definition '{"type":"record","namespace":"Demo","name":"Person","fields":[{"name":"Name","type":"string"}, {"name":"Address","type":"string"}]}' +``` You should see the following output: @@ -352,9 +386,9 @@ You can find a more advanced sample in our [localstack-pro-samples repository on LocalStack Glue supports [Delta Lake](https://delta.io), an open-source storage framework that extends Parquet data files with a file-based transaction log for ACID transactions and scalable metadata handling. -{{< callout >}} +:::note Please note that Delta Lake tables are only [supported for Glue versions `3.0` and `4.0`](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-delta-lake.html). -{{< /callout >}} +::: To illustrate this feature, we take a closer look at a Glue sample job that creates a Delta Lake table, puts some data into it, and then queries data from the table. @@ -390,18 +424,16 @@ print("SQL result:", result.toJSON().collect()) You can now run the following commands to create and start the Glue job: -{{< command >}} -$ awslocal s3 mb s3://test -$ awslocal s3 cp job.py s3://test/job.py -$ awslocal glue create-job --name job1 --role arn:aws:iam::000000000000:role/test \ - --glue-version 4.0 --command '{"Name": "pythonshell", "ScriptLocation": "s3://test/job.py"}' -$ awslocal glue start-job-run --job-name job1 - -{ - "JobRunId": "c9471f40" -} - -{{< / command >}} +```bash +awslocal s3 mb s3://test +awslocal s3 cp job.py s3://test/job.py +awslocal glue create-job --name job1 --role arn:aws:iam::000000000000:role/test \ + --glue-version 4.0 \ + --command '{"Name": "pythonshell", "ScriptLocation": "s3://test/job.py"}' +awslocal glue start-job-run --job-name job1 +``` + +Retrieve the job run ID from the output of the `start-job-run` command. 
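If you are scripting this flow, the run ID can be read straight from the JSON that `start-job-run` prints. The following is a minimal, illustrative stdlib sketch; the payload value is an assumed example, not real output:

```python
import json

# Example JSON as printed by `awslocal glue start-job-run` (assumed value).
payload = '{"JobRunId": "c9471f40"}'

run_id = json.loads(payload)["JobRunId"]

# The job's CloudWatch log stream under the `/aws-glue/jobs/logs-v2`
# log group is named after this run ID.
log_stream = run_id
print(run_id)
```

The same parsing works for `get-job-run` responses when polling for the `SUCCEEDED` state.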
The execution of the Glue job can take a few moments - once the job has finished executing, you should see a log line with the query results in the LocalStack container logs, similar to the output below:

```bash

SQL result: ['{"name":"test1","key":123}', '{"name":"test2","key":456}']
```

In order to see the logs above, make sure to enable `DEBUG=1` in the LocalStack container environment.
-Alternatively, you can also retrieve the job logs programmatically via the CloudWatch Logs API - for example, using the job run ID `c9471f40` from above:
-{{< command >}}
-$ awslocal logs get-log-events --log-group-name /aws-glue/jobs/logs-v2 --log-stream-name c9471f40
-
-{ "events": [ ... ] }
-
-{{< / command >}}
+Alternatively, you can also retrieve the job logs programmatically via the CloudWatch Logs API - for example, using the job run ID from the above command.
+
+```bash
+awslocal logs get-log-events \
+  --log-group-name /aws-glue/jobs/logs-v2 \
+  --log-stream-name <job-run-id>
+```

## Resource Browser

The LocalStack Web Application provides a Resource Browser for Glue.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Glue** under the **Analytics** section.
-Glue Resource Browser +![Glue Resource Browser](/images/aws/glue-resource-browser.png) The Resource Browser allows you to perform the following actions: @@ -438,12 +470,6 @@ The Resource Browser allows you to perform the following actions: ## Examples -The following Developer Hub applications are using Glue: -{{< applications service_filter="glu">}} - -The following tutorials are using Glue: -{{< tutorials "/tutorials/schema-evolution-glue-msk">}} - The following code snippets and sample applications provide practical examples of how to use Glue in LocalStack for various use cases: - [localstack-pro-samples/glue-etl-jobs](https://github.com/localstack/localstack-pro-samples/tree/master/glue-etl-jobs) From cb868afb9fef8274d060eb338e0a1ebb3e683ab0 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 22:38:32 +0530 Subject: [PATCH 48/80] revamp iam --- src/content/docs/aws/services/iam.md | 44 +++++++++++++++------------- 1 file changed, 24 insertions(+), 20 deletions(-) diff --git a/src/content/docs/aws/services/iam.md b/src/content/docs/aws/services/iam.md index a209b42f..88e29df5 100644 --- a/src/content/docs/aws/services/iam.md +++ b/src/content/docs/aws/services/iam.md @@ -1,6 +1,5 @@ --- title: "Identity and Access Management (IAM)" -linkTitle: "Identity and Access Management (IAM)" description: Get started with AWS Identity and Access Management (IAM) on LocalStack persistence: supported tags: ["Free"] @@ -13,8 +12,8 @@ IAM allows organizations to create and manage AWS users, groups, and roles, defi By centralizing access control, administrators can enforce the principle of least privilege, ensuring users have only the necessary permissions for their tasks. LocalStack allows you to use the IAM APIs in your local environment to create and manage users, groups, and roles, granting permissions that adhere to the principle of least privilege. 
-The supported APIs are available on our [API coverage page]({{< ref "references/coverage/coverage_iam" >}}), which provides information on the extent of IAM's integration with LocalStack. -The policy coverage is documented in the [IAM coverage documentation]({{< ref "iam-coverage" >}}). +The supported APIs are available on our [API coverage page](), which provides information on the extent of IAM's integration with LocalStack. +The policy coverage is documented in the [IAM coverage documentation](). ## Getting started @@ -26,9 +25,9 @@ We will demonstrate how you can create a new user named `test`, create an access By default, in the absence of custom credentials configuration, all requests to LocalStack run under the administrative root user. Run the following command to use the [`GetCallerIdentity`](https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html) API to confirm that the request is running under the root user: -{{< command >}} -$ awslocal sts get-caller-identity -{{< / command >}} +```bash +awslocal sts get-caller-identity +``` You can see an output similar to the following: @@ -43,16 +42,16 @@ You can see an output similar to the following: You can now create a new user named `test` using the [`CreateUser`](https://docs.aws.amazon.com/cli/latest/reference/iam/create-user.html) API. Run the following command: -{{< command >}} -$ awslocal iam create-user --user-name test -{{< / command >}} +```bash +awslocal iam create-user --user-name test +``` You can now create an access key pair for the user using the [`CreateAccessKey`](https://docs.aws.amazon.com/cli/latest/reference/iam/create-access-key.html) API. 
Run the following command: -{{< command >}} -$ awslocal iam create-access-key --user-name test -{{< / command >}} +```bash +awslocal iam create-access-key --user-name test +``` You can see an output similar to the following: @@ -72,15 +71,20 @@ You can see an output similar to the following: You can save the `AccessKeyId` and `SecretAccessKey` values, and export them in the environment to run commands under the `test` user. Run the following command: -{{< command >}} -$ export AWS_ACCESS_KEY_ID=LKIAQAAAAAAAGFWKCM5F AWS_SECRET_ACCESS_KEY=DUulXk2N2yD6rgoBBR9A/5iXa6dBcLyDknr925Q5 -$ awslocal sts get-caller-identity +```bash +export AWS_ACCESS_KEY_ID=LKIAQAAAAAAAGFWKCM5F AWS_SECRET_ACCESS_KEY=DUulXk2N2yD6rgoBBR9A/5iXa6dBcLyDknr925Q5 +awslocal sts get-caller-identity +``` + +You can see an output similar to the following: + +```bash { "UserId": "b2yxf5g824zklfx5ry8o", "Account": "000000000000", "Arn": "arn:aws:iam::000000000000:user/test" } -{{< / command >}} +``` You can see that the request is now running under the `test` user. @@ -89,7 +93,7 @@ You can see that the request is now running under the `test` user. The LocalStack Web Application provides a Resource Browser for managing IAM users, groups, and roles. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **IAM** under the **Security Identity Compliance** section. -IAM Resource Browser +![IAM Resource Browser](/images/aws/iam-resource-browser.png) The Resource Browser allows you to perform the following actions: @@ -103,11 +107,11 @@ The Resource Browser allows you to perform the following actions: LocalStack provides various tools to help you generate, test, and enforce IAM policies more efficiently. 
- **IAM Policy Stream**: IAM Policy Stream provides a real-time view of API calls and the corresponding IAM policies they generate, simplifying permission management and ensuring correct permissions are assigned. - Learn more in the [IAM Policy Stream documentation]({{< ref "user-guide/security-testing/iam-policy-stream" >}}). + Learn more in the [IAM Policy Stream documentation](/aws/capabilities/security-testing/iam-policy-stream). - **IAM Policy Enforcement**: This configuration enforces IAM policies when interacting with local cloud APIs, simulating a real AWS environment. - For additional information, refer to the [IAM Policy Enforcement documentation]({{< ref "iam-enforcement" >}}). + For additional information, refer to the [IAM Policy Enforcement documentation](/aws/capabilities/security-testing/iam-policy-enforcement). - **Explainable IAM**: Explainable IAM logs outputs related to failed policy evaluations directly to LocalStack logs, aiding in the identification of necessary policies for successful requests. - More details are available in the [Explainable IAM documentation]({{< ref "explainable-iam" >}}). + More details are available in the [Explainable IAM documentation](/aws/capabilities/security-testing/explainable-iam). 
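Each of the tools above ultimately comes down to evaluating policy statements against the action of an incoming API call. The sketch below is a greatly simplified, hypothetical illustration of that evaluation order (an explicit `Deny` always wins, otherwise an explicit `Allow` is required); it is not LocalStack's actual enforcement code:

```python
from fnmatch import fnmatch

def is_allowed(statements, action):
    """Simplified IAM-style evaluation: explicit Deny wins, default is deny."""
    allowed = False
    for stmt in statements:
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        if any(fnmatch(action, pattern) for pattern in actions):
            if stmt["Effect"] == "Deny":
                return False  # an explicit deny overrides any allow
            allowed = True
    return allowed

policy = [
    {"Effect": "Allow", "Action": "s3:*"},
    {"Effect": "Deny", "Action": "s3:DeleteBucket"},
]
print(is_allowed(policy, "s3:GetObject"))     # True: matches the s3:* allow
print(is_allowed(policy, "s3:DeleteBucket"))  # False: explicit deny wins
```

Real IAM evaluation also takes resources, principals, conditions, and multiple policy types into account, which the tools above handle for you.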
## Examples From 1dc96417cf2504a4583e28bccf0703a47eb4ff14 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 22:45:41 +0530 Subject: [PATCH 49/80] revamp identitystore --- .../docs/aws/services/identitystore.md | 42 +++++++++++-------- 1 file changed, 25 insertions(+), 17 deletions(-) diff --git a/src/content/docs/aws/services/identitystore.md b/src/content/docs/aws/services/identitystore.md index a5503799..781077d8 100644 --- a/src/content/docs/aws/services/identitystore.md +++ b/src/content/docs/aws/services/identitystore.md @@ -1,6 +1,5 @@ --- title: "Identity Store" -linkTitle: "Identity Store" description: Get started with Identity Store on LocalStack tags: ["Ultimate"] --- @@ -11,7 +10,7 @@ Identity Store is a managed service that enables the creation and management of Groups are used to manage access to AWS resources, and Identity Store provides a central location to create and manage groups across your AWS accounts. LocalStack allows you to use the Identity Store APIs to create and manage groups in your local environment. -The supported APIs are available on our [API Coverage Page]({{< ref "coverage_identitystore" >}}), which provides information on the extent of Identity Store integration with LocalStack. +The supported APIs are available on our [API Coverage Page](), which provides information on the extent of Identity Store integration with LocalStack. ## Getting started @@ -26,15 +25,18 @@ This guide will demonstrate how to create a group within Identity Store, list al You can create a new group in the Identity Store using the [`CreateGroup`](https://docs.aws.amazon.com/singlesignon/latest/IdentityStoreAPIReference/API_CreateGroup.html) API. 
Execute the following command to create a group with an identity store ID of `testls`: -{{< command >}} -$ awslocal identitystore create-group --identity-store-id testls - +```bash +awslocal identitystore create-group --identity-store-id testls +``` + +You can see an output similar to the following: + +```bash { "GroupId": "38cec731-de22-45bf-9af7-b74457bba884", "IdentityStoreId": "testls" } - -{{< / command >}} +``` Copy the `GroupId` value from the output, as it will be needed in subsequent steps. @@ -43,9 +45,13 @@ Copy the `GroupId` value from the output, as it will be needed in subsequent ste After creating groups, you might want to list all groups within the Identity Store to manage or review them. Run the following command to list all groups using the [`ListGroups`](https://docs.aws.amazon.com/singlesignon/latest/IdentityStoreAPIReference/API_ListGroups.html) API: -{{< command >}} -$ awslocal identitystore list-groups --identity-store-id testls - +```bash +awslocal identitystore list-groups --identity-store-id testls +``` + +You can see an output similar to the following: + +```bash { "Groups": [ { @@ -55,8 +61,7 @@ $ awslocal identitystore list-groups --identity-store-id testls } ] } - -{{< / command >}} +``` This command returns a list of all groups, including the group you created in the previous step. @@ -65,15 +70,18 @@ This command returns a list of all groups, including the group you created in th To view details about a specific group, use the [`DescribeGroup`](https://docs.aws.amazon.com/singlesignon/latest/IdentityStoreAPIReference/API_DescribeGroup.html) API. 
Run the following command to describe the group you created in the previous step:

-{{< command >}}
-$ awslocal describe-group --identity-store-id testls --group-id 38cec731-de22-45bf-9af7-b74457bba884
-
+```bash
+awslocal identitystore describe-group --identity-store-id testls --group-id 38cec731-de22-45bf-9af7-b74457bba884
+```
+
+You can see an output similar to the following:
+
+```bash
{
    "GroupId": "38cec731-de22-45bf-9af7-b74457bba884",
    "ExternalIds": [],
    "IdentityStoreId": "testls"
}
-
-{{< / command >}}
+```

This command provides detailed information about the specific group, including its ID and any external IDs associated with it.

From 131ce9b04d56733f87f17ce1b2fc3ad0635da7ce Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 22:46:32 +0530
Subject: [PATCH 50/80] revamp iot

---
 src/content/docs/aws/services/iot.md | 47 ++++++++++++++--------------
 1 file changed, 24 insertions(+), 23 deletions(-)

diff --git a/src/content/docs/aws/services/iot.md b/src/content/docs/aws/services/iot.md
index 49c1c789..a8f1169f 100644
--- a/src/content/docs/aws/services/iot.md
+++ b/src/content/docs/aws/services/iot.md
@@ -1,9 +1,7 @@
 ---
 title: "IoT"
-linkTitle: "IoT"
 tags: ["Base"]
-description: >
-  Get started with AWS IoT on LocalStack
+description: Get started with AWS IoT on LocalStack
 ---

## Introduction

AWS IoT provides cloud services to manage IoT devices and integrate them with other AWS services.
LocalStack supports IoT Core, IoT Data, and IoT Analytics.
Common operations for creating and updating things, groups, policies, certificates, and other entities are implemented with full CloudFormation support.

-The supported APIs are available on our [API coverage page]({{< ref "coverage_iot" >}}).
+The supported APIs are available on our [API coverage page]().
LocalStack ships a [Message Queuing Telemetry Transport (MQTT)](https://mqtt.org/) broker powered by [Eclipse Mosquitto](https://mosquitto.org/) which supports both pure MQTT and MQTT-over-WSS (WebSockets Secure) protocols. @@ -24,42 +22,45 @@ Start LocalStack using your preferred method. To retrieve the MQTT endpoint, use the [`DescribeEndpoint`](https://docs.aws.amazon.com/iot/latest/apireference/API_DescribeEndpoint.html) operation. -{{< command >}} -$ awslocal iot describe-endpoint - +```bash +awslocal iot describe-endpoint +``` + +You can see an output similar to the following: + +```bash { "endpointAddress": "000000000000.iot.eu-central-1.localhost.localstack.cloud:4510" } - -{{< / command >}} +``` -{{< callout "tip" >}} +:::note LocalStack lazy-loads services by default. The MQTT broker may not be automatically available on a fresh launch of LocalStack. You can make a `DescribeEndpoint` call to start the broker and identify the port. -{{< /callout >}} +::: This endpoint can then be used with any MQTT client to publish and subscribe to topics. In this example, we will use the [Hive MQTT CLI](https://hivemq.github.io/mqtt-cli/docs/installation/). Run the following command to subscribe to an MQTT topic. -{{< command >}} -$ mqtt subscribe \ +```bash +mqtt subscribe \ --host 000000000000.iot.eu-central-1.localhost.localstack.cloud \ --port 4510 \ --topic climate -{{< /command >}} +``` In a separate terminal session, publish a message to this topic. -{{< command >}} -$ mqtt publish \ +```bash +mqtt publish \ --host 000000000000.iot.eu-central-1.localhost.localstack.cloud \ --port 4510 \ --topic climate \ -m "temperature=30°C;humidity=60%" -{{< /command >}} +``` This message will be pushed to all subscribers of this topic, including the one in the first terminal session. @@ -68,10 +69,10 @@ This message will be pushed to all subscribers of this topic, including the one LocalStack IoT maintains its own root certificate authority which is regenerated at every run. 
The root CA certificate can be retrieved from . -{{< callout "tip" >}} +:::note AWS provides its root CA certificate at . [This section](https://docs.aws.amazon.com/iot/latest/developerguide/server-authentication.html#server-authentication-certs) contains information about CA certificates. -{{< /callout >}} +::: When connecting to the endpoints, you will need to provide this root CA certificate for authentication. This is illustrated below with Python [AWS IoT SDK](https://docs.aws.amazon.com/iot/latest/developerguide/iot-sdks.html), @@ -168,10 +169,10 @@ Currently the `principalIdentifier` and `sessionIdentifier` fields in event payl LocalStack can publish the [registry events](https://docs.aws.amazon.com/iot/latest/developerguide/registry-events.html), if [you enable it](https://docs.aws.amazon.com/iot/latest/developerguide/iot-events.html#iot-events-enable). -{{< command >}} -$ awslocal iot update-event-configurations \ - --event-configurations '{"THING":{"Enabled": true}}' -{{< / command >}} +```bash +awslocal iot update-event-configurations \ + --event-configurations '{"THING":{"Enabled": true}}' +``` You can then subscribe or use topic rules on the follow topics: From 08ddde552fce583f5c140358c0dea23873853cbd Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 22:47:09 +0530 Subject: [PATCH 51/80] revamp iotanalytics --- src/content/docs/aws/services/iotanalytics.md | 43 +++++++++---------- 1 file changed, 21 insertions(+), 22 deletions(-) diff --git a/src/content/docs/aws/services/iotanalytics.md b/src/content/docs/aws/services/iotanalytics.md index 51b77372..766428f5 100644 --- a/src/content/docs/aws/services/iotanalytics.md +++ b/src/content/docs/aws/services/iotanalytics.md @@ -1,14 +1,13 @@ --- title: "IoT Analytics" -linkTitle: "IoT Analytics" tags: ["Ultimate"] description: Get started with IoT Analytics on LocalStack --- -{{< callout "warning" >}} +:::danger IoT Analytics will be [retired on 15 December 
2025](https://docs.aws.amazon.com/iotanalytics/latest/userguide/iotanalytics-end-of-support.html). It will be removed from LocalStack soon after this date. -{{< /callout >}} +::: ## Introduction @@ -16,7 +15,7 @@ IoT Analytics is a managed service that enables you to collect, store, process, It provides a set of tools to build IoT applications without having to manage the underlying infrastructure. LocalStack allows you to use the IoT Analytics APIs to create and manage channels, data stores, and pipelines in your local environment. -The supported APIs are available on our [API Coverage Page]({{< ref "coverage_iotanalytics" >}}), which provides information on the extent of IoT Analytics integration with LocalStack. +The supported APIs are available on our [API Coverage Page](), which provides information on the extent of IoT Analytics integration with LocalStack. ## Getting started @@ -30,15 +29,15 @@ We will demonstrate how to create a channel, data store, and pipeline within IoT You can create a channel using the [`CreateChannel`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_CreateChannel.html) API. Run the following command to create a channel named `mychannel`: -{{< command >}} -$ awslocal iotanalytics create-channel --channel-name mychannel -{{< /command >}} +```bash +awslocal iotanalytics create-channel --channel-name mychannel +``` You can use the [`DescribeChannel`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_DescribeChannel.html) API to check the status of the channel: -{{< command >}} -$ awslocal iotanalytics describe-channel --channel-name mychannel -{{< /command >}} +```bash +awslocal iotanalytics describe-channel --channel-name mychannel +``` The following output is displayed: @@ -56,15 +55,15 @@ The following output is displayed: You can create a data store using the [`CreateDatastore`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_CreateDatastore.html) API. 
Run the following command to create a data store named `mydatastore`: -{{< command >}} -$ awslocal iotanalytics create-datastore --datastore-name mydatastore -{{< /command >}} +```bash +awslocal iotanalytics create-datastore --datastore-name mydatastore +``` You can use the [`DescribeDatastore`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_DescribeDatastore.html) API to check the status of the data store: -{{< command >}} -$ awslocal iotanalytics describe-datastore --datastore-name mydatastore -{{< /command >}} +```bash +awslocal iotanalytics describe-datastore --datastore-name mydatastore +``` The following output is displayed: @@ -82,9 +81,9 @@ The following output is displayed: You can create a pipeline using the [`CreatePipeline`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_CreatePipeline.html) API. Run the following command to create a pipeline named `mypipeline`: -{{< command >}} -$ awslocal iotanalytics create-pipeline --cli-input-json file://mypipeline.json -{{< /command >}} +```bash +awslocal iotanalytics create-pipeline --cli-input-json file://mypipeline.json +``` The `mypipeline.json` file contains the following content: @@ -111,9 +110,9 @@ The `mypipeline.json` file contains the following content: You can use the [`DescribePipeline`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_DescribePipeline.html) API to check the status of the pipeline: -{{< command >}} -$ awslocal iotanalytics describe-pipeline --pipeline-name mypipeline -{{< /command >}} +```bash +awslocal iotanalytics describe-pipeline --pipeline-name mypipeline +``` The following output is displayed: From dae069ab4ba6d73fc4350b441f41b27353613e69 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 22:48:40 +0530 Subject: [PATCH 52/80] revamp iotdata --- src/content/docs/aws/services/iotdata.md | 21 ++++++++++----------- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/src/content/docs/aws/services/iotdata.md 
b/src/content/docs/aws/services/iotdata.md index 175d8d04..4d547337 100644 --- a/src/content/docs/aws/services/iotdata.md +++ b/src/content/docs/aws/services/iotdata.md @@ -1,6 +1,5 @@ --- title: "IoT Data" -linkTitle: "IoT Data" tags: ["Ultimate"] description: Get started with IoT Data on LocalStack --- @@ -11,7 +10,7 @@ IoT Data provides secure, bi-directional communication between Internet-connecte It allows you to connect your devices to the cloud and interact with them using the AWS Management Console, AWS CLI, or AWS SDKs. LocalStack allows you to use the IoT Data APIs to update, get, and delete the shadow of a thing in your local environment. -The supported APIs are available on our [API Coverage Page]({{< ref "coverage_iot-data" >}}), which provides information on the extent of IoT Data integration with LocalStack. +The supported APIs are available on our [API Coverage Page](), which provides information on the extent of IoT Data integration with LocalStack. ## Getting started @@ -25,12 +24,12 @@ We will demonstrate how to create a thing, update its shadow, get its shadow, an You can update the shadow of a thing using the [`UpdateThingShadow`](https://docs.aws.amazon.com/iot/latest/apireference/API_UpdateThingShadow.html) API. Run the following command to update the shadow of a thing named `MyRPi`: -{{< command >}} -$ awslocal iot-data update-thing-shadow \ +```bash +awslocal iot-data update-thing-shadow \ --thing-name "MyRPi" \ --payload "{\"state\":{\"reported\":{\"moisture\":\"okay\"}}}" \ output.txt --cli-binary-format raw-in-base64-out -{{< /command >}} +``` The `output.txt` file contains the following output: @@ -58,11 +57,11 @@ The `output.txt` file contains the following output: You can get the shadow of a thing using the [`GetThingShadow`](https://docs.aws.amazon.com/iot/latest/apireference/API_GetThingShadow.html) API. 
Run the following command to get the shadow: -{{< command >}} -$ awslocal iot-data get-thing-shadow \ +```bash +awslocal iot-data get-thing-shadow \ --thing-name "MyRPi" \ output.txt -{{< /command >}} +``` The `output.txt` will contain the same output as the previous command. @@ -71,11 +70,11 @@ The `output.txt` will contain the same output as the previous command. You can delete the shadow of a thing using the [`DeleteThingShadow`](https://docs.aws.amazon.com/iot/latest/apireference/API_DeleteThingShadow.html) API. Run the following command to delete the shadow: -{{< command >}} -$ awslocal iot-data delete-thing-shadow \ +```bash +awslocal iot-data delete-thing-shadow \ --thing-name "MyRPi" \ output.txt -{{< /command >}} +``` The `output.txt` will contain the following output: From 6c9c8bf550775502f4426513926a725b027c45a4 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 22:49:12 +0530 Subject: [PATCH 53/80] revamp iotwireless --- src/content/docs/aws/services/iotwireless.md | 39 ++++++++++---------- 1 file changed, 19 insertions(+), 20 deletions(-) diff --git a/src/content/docs/aws/services/iotwireless.md b/src/content/docs/aws/services/iotwireless.md index ccae3523..36074e56 100644 --- a/src/content/docs/aws/services/iotwireless.md +++ b/src/content/docs/aws/services/iotwireless.md @@ -1,6 +1,5 @@ --- title: "IoT Wireless" -linkTitle: "IoT Wireless" description: Get started with IoT Wireless on LocalStack tags: ["Ultimate"] --- @@ -11,7 +10,7 @@ AWS IoT Wireless is a managed service that enables customers to connect and mana The service provides a set of APIs to manage wireless devices, gateways, and destinations. LocalStack allows you to use the IoT Wireless APIs in your local environment from creating wireless devices and gateways. -The supported APIs are available on our [API coverage page]({{< ref "coverage_iotwireless" >}}), which provides information on the extent of IoT Wireless's integration with LocalStack. 
+The supported APIs are available on our [API coverage page](), which provides information on the extent of IoT Wireless's integration with LocalStack. ## Getting started @@ -25,9 +24,9 @@ We will demonstrate how to use IoT Wireless to create wireless devices and gatew You can create a device profile using the [`CreateDeviceProfile`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_CreateDeviceProfile.html) API. Run the following command to create a device profile: -{{< command >}} -$ awslocal iotwireless create-device-profile -{{< / command >}} +```bash +awslocal iotwireless create-device-profile +``` The following output would be retrieved: @@ -40,9 +39,9 @@ The following output would be retrieved: You can list the device profiles using the [`ListDeviceProfiles`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_ListDeviceProfiles.html) API. Run the following command to list the device profiles: -{{< command >}} -$ awslocal iotwireless list-device-profiles -{{< / command >}} +```bash +awslocal iotwireless list-device-profiles +``` The following output would be retrieved: @@ -61,10 +60,10 @@ The following output would be retrieved: You can create a wireless device using the [`CreateWirelessDevice`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_CreateWirelessDevice.html) API. Run the following command to create a wireless device: -{{< command >}} -$ awslocal iotwireless create-wireless-device \ +```bash +awslocal iotwireless create-wireless-device \ --cli-input-json file://input.json -{{< / command >}} +``` The `input.json` file contains the following content: @@ -90,9 +89,9 @@ The `input.json` file contains the following content: You can list the wireless devices using the [`ListWirelessDevices`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_ListWirelessDevices.html) API.
Run the following command to list the wireless devices: -{{< command >}} -$ awslocal iotwireless list-wireless-devices -{{< / command >}} +```bash +awslocal iotwireless list-wireless-devices +``` The following output would be retrieved: @@ -117,12 +116,12 @@ The following output would be retrieved: You can create a wireless gateway using the [`CreateWirelessGateway`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_CreateWirelessGateway.html) API. Run the following command to create a wireless gateway: -{{< command >}} -$ awslocal iotwireless create-wireless-gateway \ +```bash +awslocal iotwireless create-wireless-gateway \ --lorawan GatewayEui="a1b2c3d4567890ab",RfRegion="US915" \ --name "myFirstLoRaWANGateway" \ --description "Using my first LoRaWAN gateway" -{{< / command >}} +``` The following output would be retrieved: @@ -135,9 +134,9 @@ The following output would be retrieved: You can list the wireless gateways using the [`ListWirelessGateways`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_ListWirelessGateways.html) API. 
Run the following command to list the wireless gateways: -{{< command >}} -$ awslocal iotwireless list-wireless-gateways -{{< / command >}} +```bash +awslocal iotwireless list-wireless-gateways +``` The following output would be retrieved: From 4937e673a65510b575978163aa26520bea46391f Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 22:50:51 +0530 Subject: [PATCH 54/80] revamp kinesis --- src/content/docs/aws/services/kinesis.md | 49 +++++++++++------------- 1 file changed, 23 insertions(+), 26 deletions(-) diff --git a/src/content/docs/aws/services/kinesis.md b/src/content/docs/aws/services/kinesis.md index d3725daf..ddb55012 100644 --- a/src/content/docs/aws/services/kinesis.md +++ b/src/content/docs/aws/services/kinesis.md @@ -1,6 +1,5 @@ --- title: "Kinesis Data Streams" -linkTitle: "Kinesis Data Streams" description: Get started with Kinesis Data Streams on LocalStack persistence: supported tags: ["Free"] @@ -12,7 +11,7 @@ Kinesis Data Streams is an AWS service for ingesting, buffering, and processing It is used for applications that require real-time processing and deriving insights from data streams such as logs, metrics, user interactions, and sensor readings. LocalStack allows you to use the Kinesis Data Streams APIs in your local environment from setting up data streams and configuring data processing to building real-time applications. -The supported APIs are available on our [API coverage page]({{< ref "coverage_kinesis" >}}). +The supported APIs are available on our [API coverage page](). Emulation for Kinesis is powered by [Kinesis Mock](https://github.com/etspaceman/kinesis-mock). @@ -42,15 +41,15 @@ export const handler = (event, context) => { You can create a Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html) API. 
Run the following command to create a Lambda function named `ProcessKinesisRecords`: -{{< command >}} -$ zip function.zip index.mjs -$ awslocal lambda create-function \ +```bash +zip function.zip index.mjs +awslocal lambda create-function \ --function-name ProcessKinesisRecords \ --zip-file fileb://function.zip \ --handler index.handler \ --runtime nodejs18.x \ --role arn:aws:iam::000000000000:role/lambda-kinesis-role -{{< / command >}} +``` The following output would be retrieved: @@ -96,30 +95,30 @@ The JSON contains a sample Kinesis event. You can use the [`Invoke`](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html) API to invoke the Lambda function with the Kinesis event as input. Execute the following command: -{{< command >}} -$ awslocal lambda invoke \ +```bash +awslocal lambda invoke \ --function-name ProcessKinesisRecords \ --payload file://input.txt outputfile.txt -{{< / command >}} +``` ### Create a Kinesis Stream You can create a Kinesis Stream using the [`CreateStream`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_CreateStream.html) API. Run the following command to create a Kinesis Stream named `lambda-stream`: -{{< command >}} -$ awslocal kinesis create-stream \ +```bash +awslocal kinesis create-stream \ --stream-name lambda-stream \ --shard-count 1 -{{< / command >}} +``` You can retrieve the Stream ARN using the [`DescribeStream`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_DescribeStream.html) API. Execute the following command: -{{< command >}} -$ awslocal kinesis describe-stream \ +```bash +awslocal kinesis describe-stream \ --stream-name lambda-stream -{{< / command >}} +``` The following output would be retrieved: @@ -149,25 +148,25 @@ You can save the `StreamARN` value for later use. You can add an Event Source to your Lambda function using the [`CreateEventSourceMapping`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateEventSourceMapping.html) API. 
Run the following command to add the Kinesis Stream as an Event Source to your Lambda function: -{{< command >}} -$ awslocal lambda create-event-source-mapping \ +```bash +awslocal lambda create-event-source-mapping \ --function-name ProcessKinesisRecords \ --event-source arn:aws:kinesis:us-east-1:000000000000:stream/lambda-stream \ --batch-size 100 \ --starting-position LATEST -{{< / command >}} +``` ### Test the Event Source mapping You can test the event source mapping by adding a record to the Kinesis Stream using the [`PutRecord`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html) API. Run the following command to add a record to the Kinesis Stream: -{{< command >}} -$ awslocal kinesis put-record \ +```bash +awslocal kinesis put-record \ --stream-name lambda-stream \ --partition-key 1 \ --data "Hello, this is a test." -{{< / command >}} +``` You can fetch the CloudWatch logs for your Lambda function reading records from the stream, using AWS CLI or LocalStack Resource Browser. @@ -183,19 +182,17 @@ Additionally, the following parameters can be tuned: Refer to our [Kinesis configuration documentation](https://docs.localstack.cloud/references/configuration/#kinesis) for more details on these parameters. -{{< callout "note" >}} +:::note `KINESIS_MOCK_MAXIMUM_HEAP_SIZE` and `KINESIS_MOCK_INITIAL_HEAP_SIZE` are only applicable when using the Scala engine. Future versions of LocalStack will likely default to using the `scala` engine over the less-performant `node` version currently in use. -{{< /callout >}} +::: ## Resource Browser The LocalStack Web Application provides a Resource Browser for managing Kinesis Streams & Kafka Clusters. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Kinesis** under the **Analytics** section. -Kinesis Resource Browser -
-
+![Kinesis Resource Browser](/images/aws/kinesis-resource-browser.png) The Resource Browser allows you to perform the following actions: From 6323d44f550775502f4426513926a725b027c45a4 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 22:51:19 +0530 Subject: [PATCH 55/80] revamp kinesis analytics --- .../docs/aws/services/kinesisanalytics.md | 34 +++++++++---------- 1 file changed, 16 insertions(+), 18 deletions(-) diff --git a/src/content/docs/aws/services/kinesisanalytics.md b/src/content/docs/aws/services/kinesisanalytics.md index 53e715ad..dd722d73 100644 --- a/src/content/docs/aws/services/kinesisanalytics.md +++ b/src/content/docs/aws/services/kinesisanalytics.md @@ -1,15 +1,13 @@ --- title: "Kinesis Data Analytics for SQL Applications" -linkTitle: "Kinesis Data Analytics for SQL Applications" -description: > - Get started with Kinesis Data Analytics for SQL Applications on LocalStack +description: Get started with Kinesis Data Analytics for SQL Applications on LocalStack tags: ["Ultimate"] --- -{{< callout "warning" >}} +:::danger Amazon Kinesis Data Analytics for SQL Applications will be [retired on 27 January 2026](https://docs.aws.amazon.com/kinesisanalytics/latest/dev/discontinuation.html). It will be removed from LocalStack soon after this date. -{{< /callout >}} +::: ## Introduction Kinesis Data Analytics for SQL Applications is a service offered by Amazon Web Services (AWS). It allows you to apply transformations, filtering, and enrichment to streaming data using standard SQL syntax. LocalStack allows you to use the Kinesis Data Analytics APIs in your local environment. -The supported APIs is available on our [API coverage page]({{< ref "coverage_kinesisanalytics" >}}). +The supported APIs are available on our [API coverage page](). ## Getting started @@ -30,10 +28,10 @@ We will demonstrate how to create a Kinesis Analytics application using AWS CLI.
You can create a Kinesis Analytics application using the [`CreateApplication`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_CreateApplication.html) API by running the following command: -{{< command >}} -$ awslocal kinesisanalytics create-application \ +```bash +awslocal kinesisanalytics create-application \ --application-name test-analytics-app -{{< /command >}} +``` The following output would be retrieved: @@ -51,10 +49,10 @@ The following output would be retrieved: You can describe the application using the [`DescribeApplication`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_DescribeApplication.html) API by running the following command: -{{< command >}} -$ awslocal kinesisanalytics describe-application \ +```bash +awslocal kinesisanalytics describe-application \ --application-name test-analytics-app -{{< /command >}} +``` The following output would be retrieved: @@ -78,18 +76,18 @@ The following output would be retrieved: Add tags to the application using the [`TagResource`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_TagResource.html) API by running the following command: -{{< command >}} -$ awslocal kinesisanalytics tag-resource \ +```bash +awslocal kinesisanalytics tag-resource \ --resource-arn arn:aws:kinesisanalytics:us-east-1:000000000000:application/test-analytics-app \ --tags Key=test,Value=test -{{< /command >}} +``` You can list the tags for the application using the [`ListTagsForResource`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_ListTagsForResource.html) API by running the following command: -{{< command >}} -$ awslocal kinesisanalytics list-tags-for-resource \ +```bash +awslocal kinesisanalytics list-tags-for-resource \ --resource-arn arn:aws:kinesisanalytics:us-east-1:000000000000:application/test-analytics-app -{{< /command >}} +``` The following output would be retrieved: From fd350e79fbbbb5a4ffed731f6ad5d2d9d885d1e9 Mon Sep 17 00:00:00 2001 
From: HarshCasper Date: Wed, 18 Jun 2025 22:52:30 +0530 Subject: [PATCH 56/80] revamp kms --- src/content/docs/aws/services/kms.md | 71 ++++++++++++++++------------ 1 file changed, 40 insertions(+), 31 deletions(-) diff --git a/src/content/docs/aws/services/kms.md b/src/content/docs/aws/services/kms.md index 47de0f4b..d2246a05 100644 --- a/src/content/docs/aws/services/kms.md +++ b/src/content/docs/aws/services/kms.md @@ -1,6 +1,5 @@ --- title: "Key Management Service (KMS)" -linkTitle: "Key Management Service (KMS)" description: Get started with Key Management Service (KMS) on LocalStack persistence: supported tags: ["Free"] @@ -14,7 +13,7 @@ KMS allows you to create, delete, list, and update aliases, friendly names for y You can check [the official AWS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html) to understand the basic terms and concepts used in the KMS. LocalStack allows you to use the KMS APIs in your local environment to create, edit, and view symmetric and asymmetric KMS keys, including HMAC keys. -The supported APIs are available on our [API coverage page]({{< ref "coverage_kms" >}}), which provides information on the extent of KMS's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of KMS's integration with LocalStack. ## Getting started @@ -28,24 +27,24 @@ We will demonstrate how to create a simple symmetric encryption key and use it t To generate a new key within the KMS, you can use the [`CreateKey`](https://docs.aws.amazon.com/kms/latest/APIReference/API_CreateKey.html) API. Execute the following command to create a new key: -{{< command >}} -$ awslocal kms create-key -{{}} +```bash +awslocal kms create-key +``` By default, this command generates a symmetric encryption key, eliminating the need for any additional arguments. You can take a look at the `KeyId` of the freshly generated key in the output, and save it for future use. 
In case the key ID is misplaced, it is possible to retrieve a comprehensive list of IDs and [Amazon Resource Names](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) (ARNs) for all available keys through the following command: -{{< command >}} -$ awslocal kms list-keys -{{}} +```bash +awslocal kms list-keys +``` Additionally, if needed, you can obtain extensive details about a specific key by providing its key ID or ARN using the subsequent command: -{{< command >}} -$ awslocal kms describe-key --key-id -{{}} +```bash +awslocal kms describe-key --key-id +``` ### Encrypt the data @@ -54,14 +53,14 @@ For instance, let's consider encrypting "_some important stuff_". To do so, you can use the [`Encrypt`](https://docs.aws.amazon.com/kms/latest/APIReference/API_Encrypt.html) API. Execute the following command to encrypt the data: -{{< command >}} -$ awslocal kms encrypt \ +```bash +awslocal kms encrypt \ --key-id 010a4301-4205-4df8-ae52-4c2895d47326 \ --plaintext "some important stuff" \ --output text \ --query CiphertextBlob \ | base64 --decode > my_encrypted_data -{{}} +``` You will notice that a new file named `my_encrypted_data` has been created in your current directory. This file contains the encrypted data, which can be decrypted using the same key. @@ -74,13 +73,13 @@ However, with asymmetric keys the `KEY_ID` has to be specified. Execute the following command to decrypt the data: -{{< command >}} -$ awslocal kms decrypt \ +```bash +awslocal kms decrypt \ --ciphertext-blob fileb://my_encrypted_data \ --output text \ --query Plaintext \ | base64 --decode -{{}} +``` Similar to the previous `Encrypt` operation, to retrieve the actual data, it's necessary to decode the Base64-encoded output. To achieve this, employ the `output` and `query` parameters along with the `base64` tool as before. @@ -95,9 +94,8 @@ some important stuff The LocalStack Web Application provides a Resource Browser for managing KMS keys. 
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **KMS** under the **Security Identity Compliance** section. -KMS Resource Browser -
-
+![KMS Resource Browser](/images/aws/kms-resource-browser.png) + The Resource Browser allows you to perform the following actions: - **Create Key**: Create a new KMS key by specifying the **Policy**, **Key Usage**, **Tags**, **Multi Region**, **Customer Master Key Spec**, and more. @@ -113,9 +111,9 @@ This can be useful to pre-seed a test environment and use a static `KeyId` for y Below is a simple example to create a key with a custom `KeyId` (note that the `KeyId` should have the format of a UUID): -{{< command >}} -$ awslocal kms create-key --tags '[{"TagKey":"_custom_id_","TagValue":"00000000-0000-0000-0000-000000000001"}]' -{{< / command >}} +```bash +awslocal kms create-key --tags '[{"TagKey":"_custom_id_","TagValue":"00000000-0000-0000-0000-000000000001"}]' +``` The following output will be displayed: @@ -135,21 +133,32 @@ This can be useful to pre-seed a development environment so values encrypted wit Here is an example of using custom key material with the value being base64 encoded: -{{< command >}} -$ echo 'dGhpc2lzYXNlY3VyZWtleQ==' | base64 -d - +```bash +echo 'dGhpc2lzYXNlY3VyZWtleQ==' | base64 -d +``` + +The following output will be displayed: + +```text thisisasecurekey - -$ awslocal kms create-key --tags '[{"TagKey":"_custom_key_material_","TagValue":"dGhpc2lzYXNlY3VyZWtleQ=="}]' - +``` + +You can create a key with custom key material using the following command: + +```bash +awslocal kms create-key --tags '[{"TagKey":"_custom_key_material_","TagValue":"dGhpc2lzYXNlY3VyZWtleQ=="}]' +``` + +The following output will be displayed: + +```json { "KeyMetadata": { "AWSAccountId": "000000000000", "KeyId": "00000000-0000-0000-0000-000000000001", .... 
} - -{{< / command >}} +``` ## Current Limitations From f1963ce9b2ea02e2259c8483d99c0cf0e8872c59 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 22:52:59 +0530 Subject: [PATCH 57/80] revamp lakeformation --- .../docs/aws/services/lakeformation.md | 29 +++++++++---------- 1 file changed, 14 insertions(+), 15 deletions(-) diff --git a/src/content/docs/aws/services/lakeformation.md b/src/content/docs/aws/services/lakeformation.md index feac678b..1bc004a4 100644 --- a/src/content/docs/aws/services/lakeformation.md +++ b/src/content/docs/aws/services/lakeformation.md @@ -1,6 +1,5 @@ --- title: "Lake Formation" -linkTitle: "Lake Formation" description: Get started with Lake Formation on LocalStack tags: ["Ultimate"] --- @@ -11,7 +10,7 @@ Lake Formation is a managed service that allows users to build, secure, and mana Lake Formation allows users to define and enforce fine-grained access controls, manage metadata, and discover and share data across multiple data sources. LocalStack allows you to use the Lake Formation APIs in your local environment to register resources, grant permissions, and list resources and permissions. -The supported APIs are available on our [API coverage page]({{< ref "coverage_lakeformation" >}}), which provides information on the extent of Lake Formation's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Lake Formation's integration with LocalStack. ## Getting started @@ -24,9 +23,9 @@ We will demonstrate how to register an S3 bucket as a resource in Lake Formation Create a new S3 bucket named `test-bucket` using the `mb` command: -{{< command >}} -$ awslocal s3 mb s3://test-bucket -{{}} +```bash +awslocal s3 mb s3://test-bucket +``` You can now register the S3 bucket as a resource in Lake Formation using the [`RegisterResource`](https://docs.aws.amazon.com/lake-formation/latest/dg/API_RegisterResource.html) API. 
Create a file named `input.json` with the following content: @@ -40,19 +39,19 @@ Create a file named `input.json` with the following content: Run the following command to register the resource: -{{< command >}} +```bash awslocal lakeformation register-resource \ --cli-input-json file://input.json -{{}} +``` ### List resources You can list the registered resources using the [`ListResources`](https://docs.aws.amazon.com/lake-formation/latest/dg/API_ListResources.html) API. Execute the following command to list the resources: -{{< command >}} +```bash awslocal lakeformation list-resources -{{}} +``` The following output is displayed: @@ -94,16 +93,16 @@ Create a file named `permissions.json` with the following content: Run the following command to grant permissions: -{{< command >}} -$ awslocal lakeformation grant-permissions \ +```bash +awslocal lakeformation grant-permissions \ --cli-input-json file://permissions.json -{{}} +``` ### List permissions You can list the permissions granted to a user or group using the [`ListPermissions`](https://docs.aws.amazon.com/lake-formation/latest/dg/API_ListPermissions.html) API.
Execute the following command to list the permissions: -{{< command >}} -$ awslocal lakeformation list-permissions -{{}} +```bash +awslocal lakeformation list-permissions +``` From 7ce429a0a4d13a811de6d8ff8b48a2ff00e0588e Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:14:43 +0530 Subject: [PATCH 58/80] revamp lambda --- .../aws/services/{lambda.md => lambda.mdx} | 482 +++++++++++++----- 1 file changed, 366 insertions(+), 116 deletions(-) rename src/content/docs/aws/services/{lambda.md => lambda.mdx} (60%) diff --git a/src/content/docs/aws/services/lambda.md b/src/content/docs/aws/services/lambda.mdx similarity index 60% rename from src/content/docs/aws/services/lambda.md rename to src/content/docs/aws/services/lambda.mdx index 05cf56ba..553c0bda 100644 --- a/src/content/docs/aws/services/lambda.md +++ b/src/content/docs/aws/services/lambda.mdx @@ -1,11 +1,12 @@ --- title: "Lambda" -linkTitle: "Lambda" description: Get started with Lambda on LocalStack tags: ["Free"] persistence: supported with limitations --- +import { Tabs, TabItem } from '@astrojs/starlight/components'; + ## Introduction AWS Lambda is a Serverless Function as a Service (FaaS) platform that lets you run code in your preferred programming language on the AWS ecosystem. @@ -13,7 +14,7 @@ AWS Lambda automatically scales your code to meet demand and handles server prov AWS Lambda allows you to break down your application into smaller, independent functions that integrate seamlessly with AWS services. LocalStack allows you to use the Lambda APIs to create, deploy, and test your Lambda functions. -The supported APIs are available on our [Lambda coverage page]({{< ref "coverage_lambda" >}}), which provides information on the extent of Lambda's integration with LocalStack. +The supported APIs are available on our [Lambda coverage page](), which provides information on the extent of Lambda's integration with LocalStack. 
## Getting started @@ -41,113 +42,123 @@ exports.handler = async (event) => { Enter the following command to create a new Lambda function: -{{< command >}} -$ zip function.zip index.js -$ awslocal lambda create-function \ +```bash +zip function.zip index.js +awslocal lambda create-function \ --function-name localstack-lambda-url-example \ --runtime nodejs18.x \ --zip-file fileb://function.zip \ --handler index.handler \ --role arn:aws:iam::000000000000:role/lambda-role -{{< / command >}} +``` -{{< callout "note">}} +:::note To create a predictable URL for the function, you can assign a custom ID by specifying the `_custom_id_` tag on the function itself. -{{< command >}} -$ awslocal lambda create-function \ +```bash +awslocal lambda create-function \ --function-name localstack-lambda-url-example \ --runtime nodejs18.x \ --zip-file fileb://function.zip \ --handler index.handler \ --role arn:aws:iam::000000000000:role/lambda-role \ --tags '{"_custom_id_":"my-custom-subdomain"}' -{{< / command >}} +``` You must specify the `_custom_id_` tag **before** creating a Function URL. After the URL configuration is set up, any modifications to the tag will not affect it. LocalStack supports assigning custom IDs to both the `$LATEST` version of the function or to an existing version alias. -{{< /callout >}} +::: -{{< callout >}} +:::note In the old Lambda provider, you could create a function with any arbitrary string as the role, such as `r1`. However, the new provider requires the role ARN to be in the format `arn:aws:iam::000000000000:role/lambda-role` and validates it using an appropriate regex. However, it currently does not check whether the role exists. -{{< /callout >}} +::: ### Invoke the Function To invoke the Lambda function, you can use the [`Invoke` API](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html). 
Run the following command to invoke the function: -{{< tabpane text=true persist=false >}} - {{% tab header="AWS CLI v1" lang="shell" %}} - {{< command >}} - $ awslocal lambda invoke --function-name localstack-lambda-url-example \ + + +```bash +awslocal lambda invoke --function-name localstack-lambda-url-example \ --payload '{"body": "{\"num1\": \"10\", \"num2\": \"10\"}" }' output.txt - {{< /command >}} - {{% /tab %}} - {{% tab header="AWS CLI v2" lang="shell" %}} - {{< command >}} - $ awslocal lambda invoke --function-name localstack-lambda-url-example \ +``` + +```bash +awslocal lambda invoke --function-name localstack-lambda-url-example \ --cli-binary-format raw-in-base64-out \ --payload '{"body": "{\"num1\": \"10\", \"num2\": \"10\"}" }' output.txt - {{< /command >}} - {{% /tab %}} -{{< /tabpane >}} + + ### Create a Function URL -{{< callout >}} +:::note [Response streaming](https://docs.aws.amazon.com/lambda/latest/dg/configuration-response-streaming.html) is currently not supported, so it will still return a synchronous/full response instead. -{{< /callout >}} +::: With the Function URL property, there is now a new way to call a Lambda Function via HTTP API call using the [`CreateFunctionURLConfig` API](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunctionUrlConfig.html). To create a URL for invoking the function, run the following command: -{{< command >}} -$ awslocal lambda create-function-url-config \ +```bash +awslocal lambda create-function-url-config \ --function-name localstack-lambda-url-example \ --auth-type NONE -{{< / command >}} +``` This will generate an HTTP URL that can be used to invoke the Lambda function. The URL will be in the format `http://<url-id>.lambda-url.us-east-1.localhost.localstack.cloud:4566`. -{{< callout "note">}} +:::note As previously mentioned, when a Lambda Function has a `_custom_id_` tag, LocalStack sets this tag's value as the subdomain in the Function's URL.
-{{< command >}} -$ awslocal lambda create-function-url-config \ +```bash +awslocal lambda create-function-url-config \ --function-name localstack-lambda-url-example \ --auth-type NONE +``` + +The following output would be retrieved: + +```json { "FunctionUrl": "http://my-custom-subdomain.lambda-url....", .... } -{{< / command >}} +``` In addition, if you pass an existing version alias as a `Qualifier` to the request, the created URL will combine the custom ID and the alias in the form `<custom-id>-<alias>`. -{{< command >}} -$ awslocal lambda create-function-url-config \ +```bash +awslocal lambda create-function-url-config \ --function-name localstack-lambda-url-example \ --auth-type NONE --qualifier test-alias +``` + +The following output would be retrieved: + +```json { "FunctionUrl": "http://my-custom-subdomain-test-alias.lambda-url....", .... } -{{< / command >}} -{{< /callout >}} +``` +::: ### Trigger the Lambda function URL You can now trigger the Lambda function by sending an HTTP POST request to the URL using [curl](https://curl.se/) or your REST HTTP client: -{{< command >}} -$ curl -X POST \ +```bash +curl -X POST \ 'http://<url-id>.lambda-url.us-east-1.localhost.localstack.cloud:4566/' \ -H 'Content-Type: application/json' \ -d '{"num1": "10", "num2": "10"}' -{{< / command >}} +``` The following output would be retrieved:
-{{< callout >}} +:::note Feature availability and coverage is categorized with the following system: - ⭐️ Only Available in LocalStack licensed editions - 🟢 Fully Implemented - 🟡 Partially Implemented - 🟠 Not Implemented - ➖ Not Applicable (Not Supported by AWS) -{{}} - -| | SQS Stream Kafka ⭐️ -|--------------------------------|-------------------------------------------------|:--------:|:----:|:---------:|:----------:|:----------:|:------------:| -| **Parameter** | **Description** | **Standard** | **FIFO** | **Kinesis** | **DynamoDB** | **Amazon MSK** | **Self-Managed** | -| BatchSize | Batching events by count. | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | -| *Not Configurable* | Batch when ≥ 6 MB limit. | 🟠 | 🟠 | 🟠 | 🟠 | 🟢 | 🟢 | -| MaximumBatchingWindowInSeconds | Batch by Time Window. | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | -| MaximumRetryAttempts | Discard after N retries. | ➖ | ➖ | 🟢 | 🟢 | ➖ | ➖ | -| MaximumRecordAgeInSeconds | Discard records older than time `t`. | ➖ | ➖ | 🟢 | 🟢 | ➖ | ➖ | -| Enabled | Enabling/Disabling. | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | -| FilterCriteria | Filter pattern evaluating. [^1] [^2] | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | -| FunctionResponseTypes | Enabling ReportBatchItemFailures. | 🟢 | 🟢 | 🟢 | 🟢 | ➖ | ➖ | -| BisectBatchOnFunctionError | Bisect a batch on error and retry. | ➖ | ➖ | 🟠 | 🟠 | ➖ | ➖ | -| ScalingConfig | The scaling configuration for the event source. | 🟠 | 🟠 | ➖ | ➖ | ➖ | ➖ | -| ParallelizationFactor | Parallel batch processing by shard. | ➖ | ➖ | 🟠 | 🟠 | ➖ | ➖ | -| DestinationConfig.OnFailure | SQS Failure Destination. | ➖ | ➖ | 🟢 | 🟢 | 🟠 | 🟠 | -| | SNS Failure Destination. | ➖ | ➖ | 🟢 | 🟢 | 🟠 | 🟠 | -| | S3 Failure Destination. | ➖ | ➖ | 🟢 | 🟢 | 🟠 | 🟠 | -| DestinationConfig.OnSuccess | Success Destinations. | ➖ | ➖ | ➖ | ➖ | ➖ | ➖ | -| MetricsConfig | CloudWatch metrics. | 🟠 | 🟠 | 🟠 | 🟠 | 🟠 | 🟠 | -| ProvisionedPollerConfig | Control throughput via min-max limits. | ➖ | ➖ | ➖ | ➖ | 🟠 | 🟠 | -| StartingPosition | Position to start reading from. 
| ➖ | ➖ | 🟢 | 🟢 | 🟢 | 🟢 | -| StartingPositionTimestamp | Timestamp to start reading from. | ➖ | ➖ | 🟢 | ➖ | 🟢 | 🟢 | -| TumblingWindowInSeconds | Duration (seconds) of a processing window. | ➖ | ➖ | 🟠 | 🟠 | ➖ | ➖ | -| Topics ⭐️ | Kafka topics to read from. | ➖ | ➖ | ➖ | ➖ | 🟢 | 🟢 | +::: + +import { Table, TableHeader, TableBody, TableHead, TableRow, TableCell } from '@/components/ui/table'; + + + + + Parameter + Description + SQS + Stream + Kafka ⭐️ + + + + + Standard + FIFO + Kinesis + DynamoDB + Amazon MSK + Self-Managed + + + + + BatchSize + Batching events by count. + 🟢 + 🟢 + 🟢 + 🟢 + 🟢 + 🟢 + + + Not Configurable + Batch when ≥ 6 MB limit. + 🟠 + 🟠 + 🟠 + 🟠 + 🟢 + 🟢 + + + MaximumBatchingWindowInSeconds + Batch by Time Window. + 🟢 + 🟢 + 🟢 + 🟢 + 🟢 + 🟢 + + + MaximumRetryAttempts + Discard after N retries. + + + 🟢 + 🟢 + + + + + MaximumRecordAgeInSeconds + Discard records older than time `t`. + + + 🟢 + 🟢 + + + + + Enabled + Enabling/Disabling. + 🟢 + 🟢 + 🟢 + 🟢 + 🟢 + 🟢 + + + FilterCriteria + Filter pattern evaluating. [^1] [^2] + 🟢 + 🟢 + 🟢 + 🟢 + 🟢 + 🟢 + + + FunctionResponseTypes + Enabling ReportBatchItemFailures. + 🟢 + 🟢 + 🟢 + 🟢 + + + + + BisectBatchOnFunctionError + Bisect a batch on error and retry. + + + 🟠 + 🟠 + + + + + ScalingConfig + The scaling configuration for the event source. + 🟠 + 🟠 + + + + + + + ParallelizationFactor + Parallel batch processing by shard. + + + 🟠 + 🟠 + + + + + DestinationConfig.OnFailure + SQS Failure Destination. + + + 🟢 + 🟢 + 🟠 + 🟠 + + + + SNS Failure Destination. + + + 🟢 + 🟢 + 🟠 + 🟠 + + + + S3 Failure Destination. + + + 🟢 + 🟢 + 🟠 + 🟠 + + + DestinationConfig.OnSuccess + Success Destinations. + + + + + + + + + MetricsConfig + CloudWatch metrics. + 🟠 + 🟠 + 🟠 + 🟠 + 🟠 + 🟠 + + + ProvisionedPollerConfig + Control throughput via min-max limits. + + + + + 🟠 + 🟠 + + + StartingPosition + Position to start reading from. + + + 🟢 + 🟢 + 🟢 + 🟢 + + + StartingPositionTimestamp + Timestamp to start reading from. 
+ + + 🟢 + + 🟢 + 🟢 + + + TumblingWindowInSeconds + Duration (seconds) of a processing window. + + + 🟠 + 🟠 + + + + + Topics ⭐️ + Kafka topics to read from. + + + + + 🟢 + 🟢 + + +
[^1]: Read more at [Control which events Lambda sends to your function](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html) [^2]: The available Metadata properties may not have full parity with AWS depending on the event source (read more at [Understanding event filtering basics](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-basics)). -Create a [GitHub issue](https://github.com/localstack/localstack/issues/new/choose) or reach out to [LocalStack support]({{< ref "/getting-started/help-and-support" >}}) if you experience any challenges. +Create a [GitHub issue](https://github.com/localstack/localstack/issues/new/choose) or reach out to [LocalStack support](/aws/getting-started/help-support) if you experience any challenges. ## Lambda Layers (Pro) @@ -224,22 +446,22 @@ The Community image also allows creating, updating, and deleting Lambda Layers, To create a Lambda Layer locally, you can use the [`PublishLayerVersion` API](https://docs.aws.amazon.com/lambda/latest/dg/API_PublishLayerVersion.html) in LocalStack. 
Here's a simple example using Python: -{{< command >}} -$ mkdir -p /tmp/python/ -$ echo 'def util():' > /tmp/python/testlayer.py -$ echo ' print("Output from Lambda layer util function")' >> /tmp/python/testlayer.py -$ (cd /tmp; zip -r testlayer.zip python) -$ LAYER_ARN=$(awslocal lambda publish-layer-version --layer-name layer1 --zip-file fileb:///tmp/testlayer.zip | jq -r .LayerVersionArn) -{{< / command >}} +```bash +mkdir -p /tmp/python/ +echo 'def util():' > /tmp/python/testlayer.py +echo ' print("Output from Lambda layer util function")' >> /tmp/python/testlayer.py +(cd /tmp; zip -r testlayer.zip python) +LAYER_ARN=$(awslocal lambda publish-layer-version --layer-name layer1 --zip-file fileb:///tmp/testlayer.zip | jq -r .LayerVersionArn) +``` Next, define a Lambda function that uses our layer: -{{< command >}} -$ echo 'def handler(*args, **kwargs):' > /tmp/testlambda.py -$ echo ' import testlayer; testlayer.util()' >> /tmp/testlambda.py -$ echo ' print("Debug output from Lambda function")' >> /tmp/testlambda.py -$ (cd /tmp; zip testlambda.zip testlambda.py) -$ awslocal lambda create-function \ +```bash +echo 'def handler(*args, **kwargs):' > /tmp/testlambda.py +echo ' import testlayer; testlayer.util()' >> /tmp/testlambda.py +echo ' print("Debug output from Lambda function")' >> /tmp/testlambda.py +(cd /tmp; zip testlambda.zip testlambda.py) +awslocal lambda create-function \ --function-name func1 \ --runtime python3.8 \ --role arn:aws:iam::000000000000:role/lambda-role \ @@ -247,7 +469,7 @@ $ awslocal lambda create-function \ --timeout 30 \ --zip-file fileb:///tmp/testlambda.zip \ --layers $LAYER_ARN -{{< / command >}} +``` Here, we've defined a Lambda function called `handler()` that imports the `util()` function from our `layer1` Lambda Layer. We then used the [`CreateFunction` API](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html) to create this Lambda function in LocalStack, specifying the `layer1` Lambda Layer as a dependency. 
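The shell packaging steps above can also be scripted. Below is a sketch that builds the same `testlayer.zip` layout using only Python's standard library (the `awslocal lambda publish-layer-version` call is unchanged and omitted here):

```python
import os
import tempfile
import zipfile

workdir = tempfile.mkdtemp()

# Layer layout: Lambda expects Python modules under a top-level `python/` folder.
module_path = os.path.join(workdir, "python", "testlayer.py")
os.makedirs(os.path.dirname(module_path))
with open(module_path, "w") as f:
    f.write('def util():\n    print("Output from Lambda layer util function")\n')

# Zip with the `python/` prefix preserved, like `zip -r testlayer.zip python`.
zip_path = os.path.join(workdir, "testlayer.zip")
with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(module_path, arcname="python/testlayer.py")

with zipfile.ZipFile(zip_path) as zf:
    print(zf.namelist())  # ['python/testlayer.py']
```

The resulting archive can be published exactly as in the shell example, via `awslocal lambda publish-layer-version --zip-file fileb://<path>`.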
@@ -269,14 +491,14 @@ This account is managed by LocalStack on AWS. To grant access to your layer, run the following command: -{{< command >}} -$ aws lambda add-layer-version-permission \ +```bash +aws lambda add-layer-version-permission \ --layer-name test-layer \ --version-number 1 \ --statement-id layerAccessFromLocalStack \ --principal 886468871268 \ --action lambda:GetLayerVersion -{{< / command >}} +``` Replace `test-layer` and `1` with the name and version number of your layer, respectively. @@ -287,13 +509,13 @@ After granting access, the next time you reference the layer in one of your loca LocalStack uses a [custom implementation](https://github.com/localstack/lambda-runtime-init/) of the [AWS Lambda Runtime Interface Emulator](https://github.com/aws/aws-lambda-runtime-interface-emulator) to match the behavior of AWS Lambda as closely as possible while providing additional features -such as [hot reloading]({{< ref "hot-reloading" >}}). +such as [hot reloading](/aws/tooling/lambda-tools/hot-reloading). We ship our custom implementation as a Golang binary, which gets copied into each Lambda container under `/var/rapid/init`. This init binary is used as the entry point for every Lambda container. Our custom implementation offers additional configuration options, but these configurations are primarily intended for LocalStack developers and could change in the future. -The LocalStack [configuration]({{< ref "configuration" >}}) `LAMBDA_DOCKER_FLAGS` can be used to configure all Lambda containers, +The LocalStack [configuration](/aws/capabilities/config/configuration) `LAMBDA_DOCKER_FLAGS` can be used to configure all Lambda containers, for example `LAMBDA_DOCKER_FLAGS=-e LOCALSTACK_INIT_LOG_LEVEL=debug`. Some noteworthy configurations include: - `LOCALSTACK_INIT_LOG_LEVEL` defines the log level of the Golang binary. 
@@ -309,23 +531,23 @@ The full list of configurations is defined in the Golang function LocalStack provides various tools to help you develop, debug, and test your AWS Lambda functions more efficiently. - **Hot reloading**: With Lambda hot reloading, you can continuously apply code changes to your Lambda functions without needing to redeploy them manually. - To learn more about how to use hot reloading with LocalStack, check out our [hot reloading documentation]({{< ref "hot-reloading" >}}). + To learn more about how to use hot reloading with LocalStack, check out our [hot reloading documentation](/aws/capabilities/lambda-tools/hot-reloading). - **Remote debugging**: LocalStack's remote debugging functionality allows you to attach a debugger to your Lambda function using your preferred IDE. - To get started with remote debugging in LocalStack, see our [debugging documentation]({{< ref "debugging" >}}). + To get started with remote debugging in LocalStack, see our [debugging documentation](/aws/capabilities/lambda-tools/remote-debugging). - **Lambda VS Code Extension**: LocalStack's Lambda VS Code Extension supports deploying and invoking Python Lambda functions through AWS SAM or AWS CloudFormation. - To get started with the Lambda VS Code Extension, see our [Lambda VS Code Extension documentation]({{< ref "user-guide/lambda-tools/vscode-extension" >}}). + To get started with the Lambda VS Code Extension, see our [Lambda VS Code Extension documentation](/aws/tooling/lambda-tools/vscode-extension). - **API for querying Lambda runtimes**: LocalStack offers a metadata API to query the list of Lambda runtimes via `GET http://localhost.localstack.cloud:4566/_aws/lambda/runtimes`. It returns the [Supported Runtimes](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html) matching AWS parity (i.e., excluding deprecated runtimes) and offers additional filters for `deprecated` runtimes and `all` runtimes (`GET /_aws/lambda/runtimes?filter=all`). 
## Resource Browser -The LocalStack Web Application provides a [Resource Browser]({{< ref "/user-guide/web-application/resource-browser/" >}}) for managing Lambda resources. +The LocalStack Web Application provides a [Resource Browser](/aws/capabilities/web-app/resource-browser) for managing Lambda resources. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Lambda** under the **Compute** section. The Resource Browser displays [Functions](https://app.localstack.cloud/resources/lambda/functions) and [Layers](https://app.localstack.cloud/resources/lambda/layers) resources. You can click on individual resources to view their details. -Lambda Resource Browser +![Lambda Resource Browser](/images/aws/lambda-resource-browser.png) The Resource Browser allows you to perform the following actions: @@ -336,16 +558,16 @@ The Resource Browser allows you to perform the following actions: ## Migrating to Lambda v2 -{{< callout >}} +:::note The legacy Lambda implementation has been removed since LocalStack 3.0 (Docker `latest` since 2023-11-09). -{{}} +::: As part of the [LocalStack 2.0 release](https://discuss.localstack.cloud/t/new-lambda-implementation-in-localstack-2-0/258), the Lambda provider has been migrated to `v2` (formerly known as `asf`). With the new implementation, the following changes have been introduced: - To run Lambda functions in LocalStack, mount the Docker socket into the LocalStack container. Add the following Docker volume mount to your LocalStack startup configuration: `/var/run/docker.sock:/var/run/docker.sock`. - You can find an example of this configuration in our official [`docker-compose.yml` file]({{< ref "/getting-started/installation/#starting-localstack-with-docker-compose" >}}). 
+ You can find an example of this configuration in our official [`docker-compose.yml` file](/aws/getting-started/installation/#starting-localstack-with-docker-compose). - The `v2` provider discontinues Lambda Executor Modes such as `LAMBDA_EXECUTOR=local`. Previously, this mode was used as a fallback when the Docker socket was unavailable in the LocalStack container, but many users unintentionally used it instead of the configured `LAMBDA_EXECUTOR=docker`. The new provider now behaves similarly to the old `docker-reuse` executor and does not require such configuration. @@ -358,7 +580,7 @@ With the new implementation, the following changes have been introduced: The ARM containers for compatible runtimes are based on Amazon Linux 2, and ARM-compatible hosts can create functions with the `arm64` architecture. - Lambda functions in LocalStack resolve AWS domains, such as `s3.amazonaws.com`, to the LocalStack container. This domain resolution is DNS-based and can be disabled by setting `DNS_ADDRESS=0`. - For more information, refer to [Transparent Endpoint Injection]({{< ref "user-guide/tools/transparent-endpoint-injection" >}}). + For more information, refer to [Transparent Endpoint Injection](/aws/capabilities/networking/transparent-endpoint-injection). Previously, LocalStack provided patched AWS SDKs to redirect AWS API calls transparently to LocalStack. - The new provider may generate more exceptions due to invalid input. For instance, while the old provider accepted arbitrary strings (such as `r1`) as Lambda roles when creating a function, the new provider validates role ARNs using a regular expression that requires them to be in the format `arn:aws:iam::000000000000:role/lambda-role`. @@ -369,7 +591,7 @@ With the new implementation, the following changes have been introduced: The configuration `LAMBDA_SYNCHRONOUS_CREATE=1` can force synchronous function creation, but it is not recommended. 
- LocalStack's Lambda implementation allows you to customize the Lambda execution environment using the [Lambda Extensions API](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-extensions-api.html). This API allows for advanced monitoring, observability, or developer tooling, providing greater control and flexibility over your Lambda functions.

- Lambda functions can also be run on hosts with [multi-architecture support]({{< ref "/references/arm64-support/#lambda-multi-architecture-support" >}}), allowing you to leverage LocalStack's Lambda API to develop and test Lambda functions with high parity.
+ Lambda functions can also be run on hosts with [multi-architecture support](), allowing you to leverage LocalStack's Lambda API to develop and test Lambda functions with high parity.

The following configuration options from the old provider are discontinued in the new provider:

@@ -416,21 +638,27 @@ However, many users inadvertently used the local executor mode instead of the in

If you encounter the following error message, you may be using the local executor mode:

-{{< tabpane lang="bash" >}}
-{{< tab header="LocalStack Logs" lang="shell" >}}
+
+
+```bash
Lambda 'arn:aws:lambda:us-east-1:000000000000:function:my-function:$LATEST' changed to failed. Reason: Docker not available
...
raise DockerNotAvailable("Docker not available")
-{{< /tab >}}
-{{< tab header="AWS CLI" lang="shell" >}}
+```
+
+
+```bash
An error occurred (ResourceConflictException) when calling the Invoke operation (reached max retries: 0): The operation cannot be performed at this time.
The function is currently in the following state: Failed
-{{< /tab >}}
-{{< tab header="SAM" lang="shell" >}}
+```
+
+
+```bash
Error: Failed to create/update the stack: sam-app, Waiter StackCreateComplete failed: Waiter encountered a terminal failure state: For expression "Stacks[].StackStatus" we matched expected path: "CREATE_FAILED" at least once
-{{< /tab >}}
-{{< /tabpane >}}
+```
+
+

To fix this issue, add the Docker volume mount `/var/run/docker.sock:/var/run/docker.sock` to your LocalStack startup.
Refer to our [sample `docker-compose.yml` file](https://github.com/localstack/localstack/blob/master/docker-compose.yml) as an example.

### Function in Pending state

If you receive a `ResourceConflictException` when trying to invoke a function, it is currently in a `Pending` state and cannot be executed yet.
-To wait until the function becomes `active`, you can use the following command:

-{{< command >}}
-$ awslocal lambda get-function --function-name my-function
+The invocation fails with an error similar to the following:
+
+```bash
An error occurred (ResourceConflictException) when calling the Invoke operation (reached max retries: 0): The operation cannot be performed at this time.
The function is currently in the following state: Pending
+```
+
+To wait until the function becomes `active`, you can use the following command:

-$ awslocal lambda wait function-active-v2 --function-name my-function
-{{< / command >}}
+```bash
+awslocal lambda wait function-active-v2 --function-name my-function
+```

Alternatively, you can check the function state using the [`GetFunction` API](https://docs.aws.amazon.com/lambda/latest/dg/API_GetFunction.html):

-{{< command >}}
-$ awslocal lambda get-function --function-name my-function
+```bash
+awslocal lambda get-function --function-name my-function
+```
+
+The output will be similar to the following:
+
+```json
 {
 "Configuration": {
 ...
 "State": "Pending",
 ...
 }
 }
+```

-$ awslocal lambda get-function --function-name my-function
+Once the function becomes active, run the same command again:
+
+```bash
+awslocal lambda get-function --function-name my-function
+```
+
+The output will be similar to the following:
+
+```json
 {
 "Configuration": {
 ...
 "State": "Active",
 ...
 }
 }
-{{< / command >}}
+```

If the function is still in the `Pending` state, the output will include a `"State": "Pending"` field and a `"StateReason": "The function is being created."` message.
Once the function is active, the `"State"` field will change to `"Active"` and the `"LastUpdateStatus"` field will indicate the status of the last update.
From 411592cc106b29c8e88d1233c29ff3b4b47567b9 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:25:28 +0530 Subject: [PATCH 59/80] revamp amb --- .../docs/aws/services/managedblockchain.md | 47 +++++++++++-------- 1 file changed, 28 insertions(+), 19 deletions(-) diff --git a/src/content/docs/aws/services/managedblockchain.md b/src/content/docs/aws/services/managedblockchain.md index c87ea6f0..065d0c8c 100644 --- a/src/content/docs/aws/services/managedblockchain.md +++ b/src/content/docs/aws/services/managedblockchain.md @@ -1,16 +1,16 @@ --- title: "Managed Blockchain (AMB)" -linkTitle: "Managed Blockchain (AMB)" -description: > - Get started with Managed Blockchain (AMB) on LocalStack +description: Get started with Managed Blockchain (AMB) on LocalStack tags: ["Ultimate"] --- +## Introduction + Managed Blockchain (AMB) is a managed service that enables the creation and management of blockchain networks, such as Hyperledger Fabric, Bitcoin, Polygon and Ethereum. Blockchain enables the development of applications in which multiple entities can conduct transactions and exchange data securely and transparently, eliminating the requirement for a central, trusted authority. LocalStack allows you to use the AMB APIs to develop and deploy decentralized applications in your local environment. -The supported APIs are available on our [API Coverage Page]({{< ref "coverage_managedblockchain" >}}), which provides information on the extent of AMB integration with LocalStack. +The supported APIs are available on our [API Coverage Page](), which provides information on the extent of AMB integration with LocalStack. ## Getting started @@ -24,8 +24,8 @@ We will demonstrate how to create a blockchain network, a node, and a proposal. You can create a blockchain network using the [`CreateNetwork`](https://docs.aws.amazon.com/managed-blockchain/latest/APIReference/API_CreateNetwork.html) API. 
Run the following command to create a network named `OurBlockchainNet` which uses the Hyperledger Fabric with the following configuration: -{{< command >}} -$ awslocal managedblockchain create-network \ +```bash +awslocal managedblockchain create-network \ --cli-input-json '{ "Name": "OurBlockchainNet", "Description": "OurBlockchainNetDesc", @@ -63,13 +63,16 @@ $ awslocal managedblockchain create-network \ } } }' - +``` + +The output will be similar to the following: + +```json { "NetworkId": "n-X24AF1AK2GC6MDW11HYW5I5DQC", "MemberId": "m-6VWBWHP2Y15F7TQ2DS093RTCW2" } - -{{< / command >}} +``` Copy the `NetworkId` and `MemberId` values from the output of the above command, as we will need them in the next step. @@ -78,8 +81,8 @@ Copy the `NetworkId` and `MemberId` values from the output of the above command, You can create a node using the [`CreateNode`](https://docs.aws.amazon.com/managed-blockchain/latest/APIReference/API_CreateNode.html) API. Run the following command to create a node with the following configuration: -{{< command >}} -$ awslocal managedblockchain create-node \ +```bash +awslocal managedblockchain create-node \ --node-configuration '{ "InstanceType": "bc.t3.small", "AvailabilityZone": "us-east-1a", @@ -100,12 +103,15 @@ $ awslocal managedblockchain create-node \ }' \ --network-id n-X24AF1AK2GC6MDW11HYW5I5DQC \ --member-id m-6VWBWHP2Y15F7TQ2DS093RTCW2 - +``` + +The output will be similar to the following: + +```json { "NodeId": "nd-77K8AI0O5BEQD1IW4L8OGKMXV7" } - -{{< / command >}} +``` Replace the `NetworkId` and `MemberId` values in the above command with the values you copied in the previous step. @@ -114,16 +120,19 @@ Replace the `NetworkId` and `MemberId` values in the above command with the valu You can create a proposal using the [`CreateProposal`](https://docs.aws.amazon.com/managed-blockchain/latest/APIReference/API_CreateProposal.html) API. 
Run the following command to create a proposal with the following configuration: -{{< command >}} -$ awslocal managedblockchain create-proposal \ +```bash +awslocal managedblockchain create-proposal \ --actions "Invitations=[{Principal=000000000000}]" \ --network-id n-X24AF1AK2GC6MDW11HYW5I5DQC \ --member-id m-6VWBWHP2Y15F7TQ2DS093RTCW2 - +``` + +The output will be similar to the following: + +```json { "ProposalId": "p-NK0PSLDPETJQX01Q4OLBRHP8CZ" } - -{{< / command >}} +``` Replace the `NetworkId` and `MemberId` values in the above command with the values you copied in the previous step. From 429e839ba83bba15d82c21beaeefb7d9c3877c03 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:28:15 +0530 Subject: [PATCH 60/80] revamp mediastore --- src/content/docs/aws/services/mediastore.md | 23 ++++++++++----------- 1 file changed, 11 insertions(+), 12 deletions(-) diff --git a/src/content/docs/aws/services/mediastore.md b/src/content/docs/aws/services/mediastore.md index 1e4c0704..eff6dfa9 100644 --- a/src/content/docs/aws/services/mediastore.md +++ b/src/content/docs/aws/services/mediastore.md @@ -1,6 +1,5 @@ --- title: Elemental MediaStore -linkTitle: Elemental MediaStore description: Get started with Elemental MediaStore on LocalStack tags: ["Ultimate"] --- @@ -12,7 +11,7 @@ It provides a reliable way to store, manage, and serve media assets, such as aud MediaStore seamlessly integrates with other AWS services like Elemental MediaConvert, Elemental MediaLive, Elemental MediaPackage, and CloudFront. LocalStack allows you to use the Elemental MediaStore APIs as a high-performance storage solution for media content in your local environment. -The supported APIs are available on our [API Coverage Page]({{< ref "coverage_mediastore" >}}), which provides information on the extent of Elemental MediaStore integration with LocalStack. 
+The supported APIs are available on our [API Coverage Page](), which provides information on the extent of Elemental MediaStore integration with LocalStack.

## Getting started

@@ -26,9 +25,9 @@ We will demonstrate how you can create a MediaStore container, upload an asset,

You can create a container using the [`CreateContainer`](https://docs.aws.amazon.com/mediastore/latest/apireference/API_CreateContainer.html) API.
Run the following command to create a container and retrieve the `Endpoint` value which should be used in subsequent requests:

-{{< command >}}
-$ awslocal mediastore create-container --container-name mycontainer
-{{< / command >}}
+```bash
+awslocal mediastore create-container --container-name mycontainer
+```

You should see the following output:

@@ -50,13 +49,13 @@ This action will transfer the file to the specified path, `/myfolder/myfile.txt`
Provide the `endpoint` obtained in the previous step for the operation to be successful.
Run the following command to upload the file:

-{{< command >}}
-$ awslocal mediastore-data put-object \
+```bash
+awslocal mediastore-data put-object \
 --endpoint http://mediastore-mycontainer.mediastore.localhost.localstack.cloud:4566 \
 --body myfile.txt \
 --path /myfolder/myfile.txt \
 --content-type binary/octet-stream
-{{< / command >}}
+```

You should see the following output:

@@ -74,12 +73,12 @@ In this process, you need to specify the endpoint, the path for downloading the
The downloaded file will then be accessible at the specified output path.
Run the following command to download the file: -{{< command >}} -$ awslocal mediastore-data get-object \ +```bash +awslocal mediastore-data get-object \ --endpoint http://mediastore-mycontainer.mediastore.localhost.localstack.cloud:4566 \ --path /myfolder/myfile.txt \ /tmp/out.txt -{{< / command >}} +``` You should see the following output: @@ -96,4 +95,4 @@ You should see the following output: ## Troubleshooting The Elemental MediaStore service requires the use of a custom HTTP/HTTPS endpoint. -In case you encounter any issues, please consult our [Networking documentation]({{< ref "references/network-troubleshooting" >}}) for assistance. +In case you encounter any issues, please consult our [Networking documentation](/aws/capabilities/networking/) for assistance. From 144704f741d16b633b4711627a0b94627e9c33fc Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:31:27 +0530 Subject: [PATCH 61/80] revamp memorydb --- src/content/docs/aws/services/memorydb.md | 39 ++++++++++++----------- 1 file changed, 21 insertions(+), 18 deletions(-) diff --git a/src/content/docs/aws/services/memorydb.md b/src/content/docs/aws/services/memorydb.md index 18222654..81b44ea1 100644 --- a/src/content/docs/aws/services/memorydb.md +++ b/src/content/docs/aws/services/memorydb.md @@ -1,6 +1,5 @@ --- title: "MemoryDB for Redis" -linkTitle: "MemoryDB for Redis" tags: ["Ultimate"] description: Get started with MemoryDB on LocalStack --- @@ -11,7 +10,7 @@ MemoryDB is a fully managed, Redis-compatible, in-memory database tailored for w It streamlines the deployment and management of in-memory databases within the AWS cloud environment, acting as a replacement for using a cache in front of a database for improved durability and performance. LocalStack provides support for the main MemoryDB APIs surrounding cluster creation, allowing developers to utilize the MemoryDB functionalities in their local development environment. 
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_memorydb" >}}), which provides information on the extent of MemoryDB's integration with LocalStack. +The supported APIs are available on our [API Coverage Page](), which provides information on the extent of MemoryDB's integration with LocalStack. ## Getting started @@ -25,42 +24,46 @@ We will demonstrate how you can create a MemoryDB cluster and connect to it. You can create a MemoryDB cluster using the [`CreateCluster`](https://docs.aws.amazon.com/memorydb/latest/APIReference/API_CreateCluster.html) API. Run the following command to create a cluster: -{{< command >}} -$ awslocal memorydb create-cluster \ +```bash +awslocal memorydb create-cluster \ --cluster-name my-redis-cluster \ --node-type db.t4g.small \ --acl-name open-access -{{< /command>}} +``` Once it becomes available, you will be able to use the cluster endpoint for Redis operations. Run the following command to retrieve the cluster endpoint using the [`DescribeClusters`](https://docs.aws.amazon.com/memorydb/latest/APIReference/API_DescribeClusters.html) API: -{{< command >}} -$ awslocal memorydb describe-clusters --query "Clusters[0].ClusterEndpoint" +```bash +awslocal memorydb describe-clusters --query "Clusters[0].ClusterEndpoint" +``` + +The output will be similar to the following: + +```json { "Address": "127.0.0.1", "Port": 36739 } -{{< /command >}} +``` -The cache cluster uses a random port of the [external service port range]({{< ref "external-ports" >}}) in regular execution and a port between 36739 and 46738 in container mode. +The cache cluster uses a random port of the [external service port range]() in regular execution and a port between 36739 and 46738 in container mode. 
Use this port number to connect to the Redis instance using the `redis-cli` command line tool: -{{< command >}} -$ redis-cli -p 4510 ping +```bash +redis-cli -p 4510 ping PONG -$ redis-cli -p 4510 set foo bar +redis-cli -p 4510 set foo bar OK -$ redis-cli -p 4510 get foo +redis-cli -p 4510 get foo "bar" -{{< / command >}} +``` You can also check the cluster configuration using the [`cluster nodes`](https://redis.io/commands/cluster-nodes) command: -{{< command >}} -$ redis-cli -c -p 4510 cluster nodes -... -{{< / command >}} +```bash +redis-cli -c -p 4510 cluster nodes +``` ## Container mode From e9da30769d3f89737c344b6ee9245626749ea2df Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:32:00 +0530 Subject: [PATCH 62/80] revamp mq --- src/content/docs/aws/services/mq.md | 39 ++++++++++++++++------------- 1 file changed, 21 insertions(+), 18 deletions(-) diff --git a/src/content/docs/aws/services/mq.md b/src/content/docs/aws/services/mq.md index 86ab8fe5..1940836b 100644 --- a/src/content/docs/aws/services/mq.md +++ b/src/content/docs/aws/services/mq.md @@ -1,6 +1,5 @@ --- title: "MQ" -linkTitle: "MQ" description: Get started with MQ on LocalStack tags: ["Base"] --- @@ -12,7 +11,7 @@ It facilitates the exchange of messages between various components of distribute AWS MQ supports popular messaging protocols like MQTT, AMQP, and STOMP, making it suitable for a wide range of messaging use cases. LocalStack allows you to use the MQ APIs to implement pub/sub messaging, request/response patterns, or distributed event-driven architectures in your local environment. -The supported APIs are available on our [API Coverage Page]({{< ref "coverage_mq" >}}), which provides information on the extent of MQ integration with LocalStack. +The supported APIs are available on our [API Coverage Page](), which provides information on the extent of MQ integration with LocalStack. 
## Getting started

@@ -26,8 +25,8 @@ We will demonstrate how to create an MQ broker and send a message to a sample qu
 You can create a broker using the [`CreateBroker`](https://docs.aws.amazon.com/amazon-mq/latest/api-reference/brokers.html#brokerspost) API.
 Run the following command to create a broker named `test-broker` with the following configuration:
 
-{{< command >}}
-$ awslocal mq create-broker \
+```bash
+awslocal mq create-broker \
     --broker-name test-broker \
     --deployment-mode SINGLE_INSTANCE \
     --engine-type ACTIVEMQ \
@@ -36,22 +35,28 @@ $ awslocal mq create-broker \
     --auto-minor-version-upgrade \
     --publicly-accessible \
     --users='{"ConsoleAccess": true, "Groups": ["testgroup"],"Password": "QXwV*$iUM9USHnVv&!^7s3c@", "Username": "admin"}'
-
+```
+
+The output will be similar to the following:
+
+```json
 {
     "BrokerArn": "arn:aws:mq:us-east-1:000000000000:broker:test-broker:b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545",
     "BrokerId": "b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545"
 }
-
-{{< / command >}}
+```

### Describe the broker

You can use the [`DescribeBroker`](https://docs.aws.amazon.com/amazon-mq/latest/api-reference/brokers.html#brokersget) API to get more detailed information about the broker.
Run the following command to get information about the broker we created above:

-{{< command >}}
-$ awslocal mq describe-broker --broker-id
-
+```bash
+awslocal mq describe-broker --broker-id b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545
+```
+
+The output will be similar to the following:
+
+```json
-b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545
 {
     "BrokerArn": "arn:aws:mq:us-east-1:000000000000:broker:test-broker:b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545",
@@ -73,26 +78,23 @@ b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545
     "HostInstanceType": "mq.t2.micro",
     "Tags": {}
 }
-
-{{< / command >}}
+```

### Send a message

Now that the broker is actively listening, we can use curl to send a message to a sample queue.
Run the following command to send a message to the `orders.input` queue: -{{< command >}} -$ curl -XPOST -d "body=message" http://admin:admin@localhost:4513/api/message\?destination\=queue://orders.input -{{< / command >}} +```bash +curl -XPOST -d "body=message" http://admin:admin@localhost:4513/api/message\?destination\=queue://orders.input +``` ## Resource Browser The LocalStack Web Application provides a Resource Browser for managing MQ brokers. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **MQ** under the **App Integration** section. -MQ Resource Browser -
-
+![MQ Resource Browser](/images/aws/mq-resource-browser.png)

The Resource Browser allows you to perform the following actions:

From 04426f75db750b03ae50535639ca364b29d3b11e Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Wed, 18 Jun 2025 23:33:47 +0530
Subject: [PATCH 63/80] revamp msk

---
 src/content/docs/aws/services/msk.md | 77 +++++++++++++---------------
 1 file changed, 37 insertions(+), 40 deletions(-)

diff --git a/src/content/docs/aws/services/msk.md b/src/content/docs/aws/services/msk.md
index e1c7e33b..aa4845fb 100644
--- a/src/content/docs/aws/services/msk.md
+++ b/src/content/docs/aws/services/msk.md
@@ -1,6 +1,5 @@
 ---
 title: "Managed Streaming for Kafka (MSK)"
-linkTitle: "Managed Streaming for Kafka (MSK)"
 description: Get started with Managed Streaming for Kafka (MSK) on LocalStack
 tags: ["Ultimate"]
 persistence: supported with limitations
 ---
@@ -13,7 +12,7 @@ MSK offers a centralized platform to facilitate seamless communication between v
 MSK also features automatic scaling and built-in monitoring, allowing users to build robust, high-throughput data pipelines.
 
 LocalStack allows you to use the MSK APIs in your local environment to spin up Kafka clusters on the local machine, create topics for exchanging messages, and define event source mappings that trigger Lambda functions when messages are received on a certain topic.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_kafka" >}}), which provides information on the extent of MSK's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of MSK's integration with LocalStack.
## Getting started @@ -43,13 +42,13 @@ Create the file and add the following content to it: Run the following command to create the cluster: -{{< command >}} -$ awslocal kafka create-cluster \ +```bash +awslocal kafka create-cluster \ --cluster-name "EventsCluster" \ --broker-node-group-info file://brokernodegroupinfo.json \ --kafka-version "2.8.0" \ --number-of-broker-nodes 3 -{{< / command >}} +``` The output of the command looks similar to this: @@ -65,10 +64,10 @@ The cluster creation process might take a few minutes. You can describe the cluster using the [`DescribeCluster`](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#DescribeCluster) API. Run the following command, replacing `ClusterArn` with the Amazon Resource Name (ARN) you obtained above when you created cluster. -{{< command >}} -$ awslocal kafka describe-cluster \ +```bash +awslocal kafka describe-cluster \ --cluster-arn "arn:aws:kafka:us-east-1:000000000000:cluster/EventsCluster/b154d18a-8ecb-4691-96b2-50348357fc2f-25" -{{< / command >}} +``` The output of the command looks similar to this: @@ -104,22 +103,22 @@ To use LocalStack MSK, you can download and utilize the Kafka command line inter To download Apache Kafka, execute the following commands. -{{< command >}} -$ wget https://archive.apache.org/dist/kafka/2.8.0/kafka_2.12-2.8.0.tgz -$ tar -xzf kafka_2.12-2.8.0.tgz -{{< / command >}} +```bash +wget https://archive.apache.org/dist/kafka/2.8.0/kafka_2.12-2.8.0.tgz +tar -xzf kafka_2.12-2.8.0.tgz +``` Navigate to the **kafka_2.12-2.8.0** directory. 
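If you are scripting these steps, the `ZookeeperConnectString` saved from the `DescribeCluster` response can be pulled out of the JSON programmatically. The following is a minimal sketch with the Python standard library, assuming the usual `ClusterInfo` response shape; the values embedded here are illustrative:

```python
import json

# Illustrative fragment of a DescribeCluster response; in a real script this
# would come from `awslocal kafka describe-cluster --cluster-arn ...`.
response = json.loads("""
{
  "ClusterInfo": {
    "ClusterName": "EventsCluster",
    "State": "ACTIVE",
    "ZookeeperConnectString": "localhost:4510"
  }
}
""")

zookeeper = response["ClusterInfo"]["ZookeeperConnectString"]
print(zookeeper)  # localhost:4510
```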
Execute the following command, replacing `ZookeeperConnectString` with the value you saved after running the [`DescribeCluster`](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#DescribeCluster) API: -{{< command >}} -$ bin/kafka-topics.sh \ +```bash +bin/kafka-topics.sh \ --create \ --zookeeper localhost:4510 \ --replication-factor 1 \ --partitions 1 \ --topic LocalMSKTopic -{{< / command >}} +``` After executing the command, your output should resemble the following: @@ -135,13 +134,13 @@ Create a folder named `/tmp` on the client machine, and navigate to the bin fold Run the following command, replacing `java_home` with the path of your `java_home`. For this instance, the java_home path is `/Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home`. -{{< callout >}} +:::note The following step is optional and may not be required, depending on the operating system environment being used. -{{< /callout >}} +::: -{{< command >}} -$ cp java_home/lib/security/cacerts /tmp/kafka.client.truststore.jks -{{< / command >}} +```bash +cp java_home/lib/security/cacerts /tmp/kafka.client.truststore.jks +``` While you are still in the `bin` folder of the Apache Kafka installation on the client machine, create a text file named `client.properties` with the following contents: @@ -151,10 +150,10 @@ ssl.truststore.location=/tmp/kafka.client.truststore.jks Run the following command, replacing `ClusterArn` with the Amazon Resource Name (ARN) you have. -{{< command >}} -$ awslocal kafka get-bootstrap-brokers \ +```bash +awslocal kafka get-bootstrap-brokers \ --cluster-arn ClusterArn -{{< / command >}} +``` To proceed with the following commands, save the value associated with the string named `BootstrapBrokerStringTls` from the JSON result obtained from the previous command. 
It should look like this: @@ -167,12 +166,12 @@ It should look like this: Now, navigate to the bin folder and run the next command, replacing `BootstrapBrokerStringTls` with the value you obtained: -{{< command >}} -$ ./kafka-console-producer.sh \ +```bash +./kafka-console-producer.sh \ --broker-list BootstrapBrokerStringTls \ --producer.config client.properties \ --topic LocalMSKTopic -{{< / command >}} +``` To send messages to your Apache Kafka cluster, enter any desired message and press Enter. You can repeat this process twice or thrice, sending each line as a separate message to the Kafka cluster. @@ -182,13 +181,13 @@ Keep the connection to the client machine open, and open a separate connection t In this new connection, navigate to the `bin` folder and run a command, replacing `BootstrapBrokerStringTls` with the value you saved earlier. This command will allow you to interact with the Apache Kafka cluster using the saved value for secure communication. -{{< command >}} -$ ./kafka-console-consumer.sh \ +```bash +./kafka-console-consumer.sh \ --bootstrap-server BootstrapBrokerStringTls \ --consumer.config client.properties \ --topic LocalMSKTopic \ --from-beginning -{{< / command >}} +``` You should start seeing the messages you entered earlier when you used the console producer command. These messages are TLS encrypted in transit. @@ -201,13 +200,13 @@ The configuration for this mapping sets the starting position of the topic to `L Run the following command to use the [`CreateEventSourceMapping`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateEventSourceMapping.html) API by specifying the Event Source ARN, the topic name, the starting position, and the Lambda function name. 
-{{< command >}} -$ awslocal lambda create-event-source-mapping \ +```bash +awslocal lambda create-event-source-mapping \ --event-source-arn arn:aws:kafka:us-east-1:000000000000:cluster/EventsCluster \ --topics LocalMSKTopic \ --starting-position LATEST \ --function-name my-kafka-function -{{< / command >}} +``` Upon successful completion of the operation to create the Lambda Event Source Mapping, you can expect the following response: @@ -240,24 +239,22 @@ You can delete the local MSK cluster using the [`DeleteCluster`](https://docs.aw To do so, you must first obtain the ARN of the cluster you want to delete. Run the following command to list all the clusters in the region: -{{< command >}} -$ awslocal kafka list-clusters --region us-east-1 -{{< / command >}} +```bash +awslocal kafka list-clusters --region us-east-1 +``` To initiate the deletion of a cluster, select the corresponding `ClusterARN` from the list of clusters, and then execute the following command: -{{< command >}} +```bash awslocal kafka delete-cluster --cluster-arn ClusterArn -{{< / command >}} +``` ## Resource Browser The LocalStack Web Application provides a Resource Browser for managing MSK clusters. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Kafka** under the **Analytics** section. -MSK Resource Browser -
-
+![MSK Resource Browser](/images/aws/msk-resource-browser.png) The Resource Browser allows you to perform the following actions: From f2352f29bb3d90396107bc51d57bf746f6c453b3 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:34:36 +0530 Subject: [PATCH 64/80] revamp mwaa --- src/content/docs/aws/services/mwaa.md | 46 ++++++++++++--------------- 1 file changed, 21 insertions(+), 25 deletions(-) diff --git a/src/content/docs/aws/services/mwaa.md b/src/content/docs/aws/services/mwaa.md index 457b9801..f15a2794 100644 --- a/src/content/docs/aws/services/mwaa.md +++ b/src/content/docs/aws/services/mwaa.md @@ -1,8 +1,6 @@ --- title: "Managed Workflows for Apache Airflow (MWAA)" -linkTitle: "Managed Workflows for Apache Airflow (MWAA)" -description: > - Get started with Managed Workflows for Apache Airflow (MWAA) on LocalStack +description: Get started with Managed Workflows for Apache Airflow (MWAA) on LocalStack tags: ["Ultimate"] --- @@ -12,7 +10,7 @@ Managed Workflows for Apache Airflow (MWAA) is a fully managed service by AWS th MWAA leverages the familiar Airflow features and integrations while integrating with S3, Glue, Redshift, Lambda, and other AWS services to build data pipelines and orchestrate data processing workflows in the cloud. LocalStack allows you to use the MWAA APIs in your local environment to allow the setup and operation of data pipelines. -The supported APIs are available on the [API coverage page]({{< ref "coverage_mwaa" >}}). +The supported APIs are available on the [API coverage page](). ## Getting started @@ -26,34 +24,34 @@ We will demonstrate how to create an Airflow environment and access the Airflow Create a S3 bucket that will be used for Airflow resources. Run the following command to create a bucket using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command. 
-{{< command >}} -$ awslocal s3 mb s3://my-mwaa-bucket -{{< /command >}} +```bash +awslocal s3 mb s3://my-mwaa-bucket +``` ### Create an Airflow environment You can now create an Airflow environment, using the [`CreateEnvironment`](https://docs.aws.amazon.com/mwaa/latest/API/API_CreateEnvironment.html) API. Run the following command, by specifying the bucket ARN we created earlier: -{{< command >}} -$ awslocal mwaa create-environment --dag-s3-path /dags \ +```bash +awslocal mwaa create-environment --dag-s3-path /dags \ --execution-role-arn arn:aws:iam::000000000000:role/airflow-role \ --network-configuration {} \ --source-bucket-arn arn:aws:s3:::my-mwaa-bucket \ --airflow-version 2.10.1 \ --airflow-configuration-options agent.code=007,agent.name=bond \ --name my-mwaa-env -{{< /command >}} +``` ### Access the Airflow UI The Airflow UI can be accessed via the URL in the `WebserverUrl` attribute of the response of the `GetEnvironment` operation. The username and password are always set to `localstack`. -{{< command >}} -$ awslocal mwaa get-environment --name my-mwaa-env --query Environment.WebserverUrl +```bash +awslocal mwaa get-environment --name my-mwaa-env --query Environment.WebserverUrl "http://localhost.localstack.cloud:4510" -{{< /command >}} +``` LocalStack also prints this information in the logs: @@ -91,9 +89,9 @@ Just upload your DAGs to the designated S3 bucket path, configured by the `DagS3 For example, the command below uploads a sample DAG named `sample_dag.py` to your S3 bucket named `my-mwaa-bucket`: -{{< command >}} -$ awslocal s3 cp sample_dag.py s3://my-mwaa-bucket/dags -{{< /command >}} +```bash +awslocal s3 cp sample_dag.py s3://my-mwaa-bucket/dags +``` LocalStack syncs new and changed objects in the S3 bucket to the Airflow container every 30 seconds. The polling interval can be changed using the [`MWAA_S3_POLL_INTERVAL`]({{< ref "configuration#mwaa" >}}) config option. 
@@ -105,9 +103,9 @@ LocalStack seamlessly supports plugins packaged according to [AWS specifications To integrate your custom plugins into the MWAA environment, upload the packaged `plugins.zip` file to the designated S3 bucket path: -{{< command >}} -$ awslocal s3 cp plugins.zip s3://my-mwaa-bucket/plugins.zip -{{< /command >}} +```bash +awslocal s3 cp plugins.zip s3://my-mwaa-bucket/plugins.zip +``` ## Installing Python dependencies @@ -124,9 +122,9 @@ botocore==1.20.54 Once you have your `requirements.txt` file ready, upload it to the designated S3 bucket, configured for use by the MWAA environment. Make sure to upload the file to `/requirements.txt` in the bucket: -{{< command >}} -$ awslocal s3 cp requirements.txt s3://my-mwaa-bucket/requirements.txt -{{< /command >}} +```bash +awslocal s3 cp requirements.txt s3://my-mwaa-bucket/requirements.txt +``` After the upload, the environment will be automatically updated, and your Apache Airflow setup will be equipped with the new dependencies. It is important to note that, unlike [AWS](https://docs.aws.amazon.com/mwaa/latest/userguide/connections-packages.html), LocalStack does not install any provider packages by default. @@ -143,9 +141,7 @@ This information must be explicitly passed in operators, hooks, and sensors. The LocalStack Web Application provides a Resource Browser for managing MWAA Environments. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **MWAA** under the **App Integration** section. -

-MWAA Resource Browser -

+![MWAA Resource Browser](/images/aws/mwaa-resource-browser.png) The Resource Browser allows you to perform the following actions: From 3523070f8e9eeb84e3685a0e109bdb7f84c8a984 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:41:32 +0530 Subject: [PATCH 65/80] revamp mwaa --- src/content/docs/aws/services/mwaa.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/aws/services/mwaa.md b/src/content/docs/aws/services/mwaa.md index f15a2794..abeaf3e8 100644 --- a/src/content/docs/aws/services/mwaa.md +++ b/src/content/docs/aws/services/mwaa.md @@ -94,7 +94,7 @@ awslocal s3 cp sample_dag.py s3://my-mwaa-bucket/dags ``` LocalStack syncs new and changed objects in the S3 bucket to the Airflow container every 30 seconds. -The polling interval can be changed using the [`MWAA_S3_POLL_INTERVAL`]({{< ref "configuration#mwaa" >}}) config option. +The polling interval can be changed using the [`MWAA_S3_POLL_INTERVAL`](/aws/capabilities/config/configuration/#mwaa) config option. 
## Installing custom plugins From 098a9e76ba076d66b9aea47329a0c65dd676598b Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:45:20 +0530 Subject: [PATCH 66/80] revamp neptune --- .../{filesystem-layout.mdx => filesystem.mdx} | 0 src/content/docs/aws/services/neptune.md | 69 ++++++++++--------- 2 files changed, 37 insertions(+), 32 deletions(-) rename src/content/docs/aws/capabilities/config/{filesystem-layout.mdx => filesystem.mdx} (100%) diff --git a/src/content/docs/aws/capabilities/config/filesystem-layout.mdx b/src/content/docs/aws/capabilities/config/filesystem.mdx similarity index 100% rename from src/content/docs/aws/capabilities/config/filesystem-layout.mdx rename to src/content/docs/aws/capabilities/config/filesystem.mdx diff --git a/src/content/docs/aws/services/neptune.md b/src/content/docs/aws/services/neptune.md index 58f84699..c9c8a2c8 100644 --- a/src/content/docs/aws/services/neptune.md +++ b/src/content/docs/aws/services/neptune.md @@ -1,8 +1,6 @@ --- title: "Neptune" -linkTitle: "Neptune" -description: > - Get started with Neptune on LocalStack +description: Get started with Neptune on LocalStack tags: ["Ultimate"] --- @@ -13,7 +11,7 @@ It is designed for storing and querying highly connected data for applications t Neptune supports popular graph query languages like Gremlin and SPARQL, making it compatible with a wide range of graph applications and tools. LocalStack allows you to use the Neptune APIs in your local environment to support both property graph and RDF graph models. -The supported APIs are available on our [API coverage page]({{< ref "coverage_neptune" >}}), which provides information on the extent of Neptune's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Neptune's integration with LocalStack. 
The following versions of Neptune engine are supported by LocalStack: @@ -52,11 +50,11 @@ We will demonstrate the following with AWS CLI & Python: To create a Neptune cluster you can use the [`CreateDBCluster`](https://docs.aws.amazon.com/neptune/latest/userguide/api-clusters.html#CreateDBCluster) API. Run the following command to create a Neptune cluster: -{{< command >}} -$ awslocal neptune create-db-cluster \ +```bash +awslocal neptune create-db-cluster \ --engine neptune \ --db-cluster-identifier my-neptune-db -{{< / command >}} +``` You should see the following output: @@ -77,13 +75,13 @@ You should see the following output: To add an instance you can use the [`CreateDBInstance`](https://docs.aws.amazon.com/neptune/latest/userguide/api-instances.html#CreateDBInstance) API. Run the following command to create a Neptune instance: -{{< command >}} -$ awslocal neptune create-db-instance \ +```bash +awslocal neptune create-db-instance \ --db-cluster-identifier my-neptune-db \ --db-instance-identifier my-neptune-instance \ --engine neptune \ --db-instance-class db.t3.medium -{{< / command >}} +``` In LocalStack the `Endpoint` for the `DBCluster` and the `Endpoint.Address` of the `DBInstance` will be the same and can be used to connect to the graph database. @@ -159,37 +157,44 @@ When LocalStack starts with [IAM enforcement enabled]({{< ref "/user-guide/secur Start LocalStack with `LOCALSTACK_ENFORCE_IAM=1` to create a Neptune cluster with IAM DB authentication enabled. -{{< command >}} -$ LOCALSTACK_ENFORCE_IAM=1 localstack start -{{< /command >}} +```bash +LOCALSTACK_ENFORCE_IAM=1 localstack start +``` You can then create a cluster. -{{< command >}} -$ awslocal neptune create-db-cluster \ +```bash +awslocal neptune create-db-cluster \ --engine neptune \ --db-cluster-identifier myneptune-db \ --enable-iam-database-authentication -{{< /command >}} +``` After the cluster is deployed, the Gremlin server will reject unsigned queries. 
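A signed query carries an AWS Signature Version 4 (SigV4) signature derived from your credentials. The sketch below illustrates only the SigV4 key-derivation step, using the Python standard library; the credential and date values are dummies (LocalStack accepts test credentials), and in practice a client library computes the full canonical request and signature for you:

```python
import hashlib
import hmac

def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    # SigV4 derives a per-date, per-region, per-service signing key
    # from the long-term secret key.
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")

# Dummy values for illustration; "neptune-db" is the Neptune data-plane service name.
signing_key = derive_signing_key("test", "20250618", "us-east-1", "neptune-db")
signature = hmac.new(signing_key, b"<string-to-sign>", hashlib.sha256).hexdigest()
print(len(signature))  # 64-character hex signature
```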
-{{< command >}} -$ curl "https://localhost.localstack.cloud:4510/gremlin?gremlin=g.V()" -v -... +```bash +curl "https://localhost.localstack.cloud:4510/gremlin?gremlin=g.V()" -v +``` + +The output will be similar to the following: + +```bash - Request completely sent off < HTTP/1.1 403 Forbidden - no chunk, no close, no size. Assume close to signal end ... - -{{< /command >}} +``` Use the Python package [awscurl](https://pypi.org/project/awscurl/) to make your first signed query. -{{< command >}} -$ awscurl "https://localhost.localstack.cloud:4510/gremlin?gremlin=g.V().count()" -H "Accept: application/json" | jq . - +```bash +awscurl "https://localhost.localstack.cloud:4510/gremlin?gremlin=g.V().count()" -H "Accept: application/json" | jq . +``` + +The output will be similar to the following: + +```json { "requestId": "729c3e7b-50b3-4df7-b0b6-d1123c4e81df", "status": { @@ -216,16 +221,16 @@ $ awscurl "https://localhost.localstack.cloud:4510/gremlin?gremlin=g.V().count() } } } - -{{< /command >}} +``` -{{< callout "note" >}} +:::note If Gremlin Server is installed in your LocalStack environment, you must delete it and restart LocalStack. -You can find your LocalStack volume location on the [LocalStack filesystem documentation]({{< ref "/references/filesystem/#localstack-volume" >}}). -{{< command >}} -$ rm -rf /lib/tinkerpop -{{< /command >}} -{{< /callout >}} +You can find your LocalStack volume location on the [LocalStack filesystem documentation](/aws/capabilities/config/filesystem/#localstack-volume). 
+ +```bash +rm -rf /lib/tinkerpop +``` +::: ## Resource Browser From a7ccef5cfa1f2801a2b51e467bfcf875e0b41509 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:48:48 +0530 Subject: [PATCH 67/80] revamp opensearch --- .../{opensearch.md => opensearch.mdx} | 146 +++++++++--------- 1 file changed, 72 insertions(+), 74 deletions(-) rename src/content/docs/aws/services/{opensearch.md => opensearch.mdx} (83%) diff --git a/src/content/docs/aws/services/opensearch.md b/src/content/docs/aws/services/opensearch.mdx similarity index 83% rename from src/content/docs/aws/services/opensearch.md rename to src/content/docs/aws/services/opensearch.mdx index 9a94b81d..0bb8b7e1 100644 --- a/src/content/docs/aws/services/opensearch.md +++ b/src/content/docs/aws/services/opensearch.mdx @@ -1,8 +1,6 @@ --- title: "OpenSearch Service" -linkTitle: "OpenSearch Service" -description: > - Get started with OpenSearch Service on LocalStack +description: Get started with OpenSearch Service on LocalStack tags: ["Free"] --- @@ -12,7 +10,7 @@ OpenSearch Service is an open-source search and analytics engine, offering devel OpenSearch Service also offers log analytics, real-time application monitoring, and clickstream analysis. LocalStack allows you to use the OpenSearch Service APIs in your local environment to create, manage, and operate the OpenSearch clusters. -The supported APIs are available on our [API coverage page]({{< ref "coverage_opensearch" >}}), which provides information on the extent of OpenSearch's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of OpenSearch's integration with LocalStack. The following versions of OpenSearch Service are supported by LocalStack: @@ -42,9 +40,9 @@ To create an OpenSearch Service cluster, you can use the [`CreateDomain`](https: OpenSearch Service domain is synonymous with an OpenSearch cluster. 
Execute the following command to create a new OpenSearch domain: -{{< command >}} -$ awslocal opensearch create-domain --domain-name my-domain -{{< / command >}} +```bash +awslocal opensearch create-domain --domain-name my-domain +``` Each time you establish a cluster using a new version of OpenSearch, the corresponding OpenSearch binary must be downloaded, a process that might require some time to complete. In the LocalStack log you will see something like, where you can see the cluster starting up in the background. @@ -52,10 +50,10 @@ In the LocalStack log you will see something like, where you can see the cluster You can open the LocalStack logs, to see that the OpenSearch Service cluster is being created in the background. You can use the [`DescribeDomain`](https://docs.aws.amazon.com/opensearch-service/latest/APIReference/API_DescribeDomain.html) API to check the status of the cluster: -{{< command >}} -$ awslocal opensearch describe-domain \ +```bash +awslocal opensearch describe-domain \ --domain-name my-domain | jq ".DomainStatus.Processing" -{{< / command >}} +``` The `Processing` attribute will be `false` once the cluster is up and running. Once the cluster is up, you can interact with the cluster. @@ -66,15 +64,15 @@ You can now interact with the cluster at the cluster API endpoint for the domain Run the following command to get the cluster health: -{{< command >}} -$ curl http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566 -{{< / command >}} +```bash +curl http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566 +``` You can verify that the cluster is up and running by checking the cluster health: -{{< command >}} -$ curl -s http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health | jq . -{{< / command >}} +```bash +curl -s http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health | jq . 
+``` The following output will be visible on your terminal: @@ -108,7 +106,7 @@ The strategy can be configured via the `OPENSEARCH_ENDPOINT_STRATEGY` environmen | ------- | ----------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- | | `domain` | `...localhost.localstack.cloud:4566` | The default strategy employing the `localhost.localstack.cloud` domain for routing to localhost. | | `path` | `localhost:4566///` | An alternative strategy useful if resolving LocalStack's localhost domain poses difficulties. | -| `port` | `localhost:` | Directly exposes cluster(s) via ports from [the external service port range]({{< ref "external-ports" >}}). | +| `port` | `localhost:` | Directly exposes cluster(s) via ports from [the external service port range](). | Irrespective of the originating service for the clusters, the domain of each cluster consistently aligns with its engine type, be it OpenSearch or Elasticsearch. Consequently, OpenSearch clusters incorporate `opensearch` within their domains (e.g., `my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566`), while Elasticsearch clusters feature `es` in their domains (e.g., `my-domain.us-east-1.es.localhost.localstack.cloud:4566`). 
@@ -121,16 +119,16 @@ Moreover, you can opt for custom domains, though it's important to incorporate t Run the following command to create a new OpenSearch domain with a custom endpoint: -{{< command >}} -$ awslocal opensearch create-domain --domain-name my-domain \ +```bash +awslocal opensearch create-domain --domain-name my-domain \ --domain-endpoint-options '{ "CustomEndpoint": "http://localhost:4566/my-custom-endpoint", "CustomEndpointEnabled": true }' -{{< / command >}} +``` After the domain processing is complete, you can access the cluster using the custom endpoint: -{{< command >}} -$ curl http://localhost:4566/my-custom-endpoint/_cluster/health -{{< / command >}} +```bash +curl http://localhost:4566/my-custom-endpoint/_cluster/health +``` ## Re-using a single cluster instance @@ -146,20 +144,22 @@ As a result, we advise caution when considering this approach and generally reco OpenSearch will be organized in your state directory as follows: -{{< command >}} -$ tree -L 4 ./volume/state -./volume/state -├── opensearch -│ └── arn:aws:es:us-east-1:000000000000:domain -│ ├── my-cluster-1 -│ │ ├── backup -│ │ ├── data -│ │ └── tmp -│ ├── my-cluster-2 -│ │ ├── backup -│ │ ├── data -│ │ └── tmp -{{< /command >}} +import { Tabs, TabItem, FileTree } from '@astrojs/starlight/components'; + + +- volume + - state + - opensearch + - arn:aws:es:us-east-1:000000000000:domain + - my-cluster-1 + - backup + - data + - tmp + - my-cluster-2 + - backup + - data + - tmp + ## Advanced Security Options @@ -208,15 +208,15 @@ Save it in a file named `opensearch_domain.json`. 
To provision it, use the following `awslocal` CLI command, assuming the aforementioned CLI input has been stored in a file named `opensearch_domain.json`: -{{< command >}} -$ awslocal opensearch create-domain --cli-input-json file://./opensearch_domain.json -{{< /command >}} +```bash +awslocal opensearch create-domain --cli-input-json file://./opensearch_domain.json +``` Once the domain setup is complete (`Processing: false`), the cluster can only be accessed with the given master user credentials, via HTTP basic authentication: -{{< command >}} -$ curl -u 'admin:really-secure-passwordAa!1' http://secure-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health -{{< /command >}} +```bash +curl -u 'admin:really-secure-passwordAa!1' http://secure-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health +``` The following output will be visible on your terminal: @@ -232,22 +232,23 @@ It's important to note that any unauthorized requests will yield an HTTP respons And you can directly use the official OpenSearch Dashboards Docker image to analyze data in your OpenSearch domain within LocalStack! When using OpenSearch Dashboards with LocalStack, you need to make sure to: -- Enable the [advanced security options]({{< ref "#advanced-security-options" >}}) and set a username and a password. +- Enable the [advanced security options](#advanced-security-options) and set a username and a password. This is required by OpenSearch Dashboards. - Ensure that the OpenSearch Dashboards Docker container uses the LocalStack DNS. - You can find more information on how to connect your Docker container to Localstack in our [Network Troubleshooting guide]({{< ref "references/network-troubleshooting/endpoint-url/#from-your-container" >}}). + You can find more information on how to connect your Docker container to Localstack in our [Network Troubleshooting guide](). 
First, you need to make sure to start LocalStack in a specific Docker network: -{{< command >}} -$ localstack start --network ls -{{< /command >}} + +```bash +localstack start --network ls +``` Now you can provision a new OpenSearch domain. -Make sure to enable the [advanced security options]({{< ref "#advanced-security-options" >}}): +Make sure to enable the [advanced security options](#advanced-security-options): -{{< command >}} -$ awslocal opensearch create-domain --cli-input-json file://./opensearch_domain.json -{{< /command >}} +```bash +awslocal opensearch create-domain --cli-input-json file://./opensearch_domain.json +``` Now you can start another container for OpenSearch Dashboards, which is configured such that: - The port for OpenSearch Dashboards is mapped (`5601`). @@ -257,10 +258,9 @@ Now you can start another container for OpenSearch Dashboards, which is configur - The OpenSearch credentials are set. - The version of OpenSearch Dashboards is the same as the OpenSearch domain. -{{< command >}} +```bash docker inspect localstack-main | \ jq -r '.[0].NetworkSettings.Networks | to_entries | .[].value.IPAddress' -# prints 172.22.0.2 docker run --rm -p 5601:5601 \ --network ls \ @@ -268,7 +268,7 @@ docker run --rm -p 5601:5601 \ -e "OPENSEARCH_HOSTS=http://secure-domain.us-east-1.opensearch.localhost.localstack.cloud:4566" \ -e "OPENSEARCH_USERNAME=admin" -e 'OPENSEARCH_PASSWORD=really-secure-passwordAa!1' \ opensearchproject/opensearch-dashboards:2.11.0 -{{< /command >}} +``` Once the container is running, you can reach OpenSearch Dashboards at `http://localhost:5601` and you can log in with your OpenSearch domain credentials. 
@@ -329,45 +329,43 @@ volumes: You can start the Docker Compose environment using the following command: -{{< command >}} -$ docker-compose up -d -{{< /command >}} +```bash +docker-compose up -d +``` You can now create an OpenSearch cluster using the `awslocal` CLI: -{{< command >}} -$ awslocal opensearch create-domain --domain-name my-domain -{{< /command >}} +```bash +awslocal opensearch create-domain --domain-name my-domain +``` If the `Processing` status shows as `true`, the cluster isn't fully operational yet. You can use the `describe-domain` command to retrieve the current status: -{{< command >}} -$ awslocal opensearch describe-domain --domain-name my-domain -{{< /command >}} +```bash +awslocal opensearch describe-domain --domain-name my-domain +``` You can now verify cluster health and set up indices: -{{< command >}} -$ curl my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health | jq -{{< /command >}} +```bash +curl my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health | jq +``` The output will provide insights into the cluster's health and version information. Finally create an example index using the following command: -{{< command >}} -$ curl -X PUT my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/my-index -{{< /command >}} +```bash +curl -X PUT my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/my-index +``` ## Resource Browser The LocalStack Web Application provides a Resource Browser for managing OpenSearch domains. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **OpenSearch Service** under the **Analytics** section. -OpenSearch Resource Browser -
-
+![OpenSearch Resource Browser](/images/aws/opensearch-resource-browser.png) The Resource Browser allows you to perform the following actions: @@ -390,4 +388,4 @@ The `CustomEndpointOptions` in LocalStack offers the flexibility to utilize arbi ## Troubleshooting If you encounter difficulties resolving subdomains while employing the `OPENSEARCH_ENDPOINT_STRATEGY=domain` (the default setting), it's advisable to investigate whether your DNS configuration might be obstructing rebind queries. -For further insights on addressing this issue, refer to the section on [DNS rebind protection]({{< ref "dns-server#dns-rebind-protection" >}}). +For further insights on addressing this issue, refer to the section on [DNS rebind protection](/aws/tooling/dns-server#dns-rebind-protection). From 9752e707ad8a8ebcd67702cd8e9fcfed6e4b560f Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:52:28 +0530 Subject: [PATCH 68/80] revamp organizations, pca, pinpoint --- .../docs/aws/services/organizations.md | 75 +++++----- src/content/docs/aws/services/pca.md | 139 +++++++++++------- src/content/docs/aws/services/pinpoint.md | 62 ++++---- 3 files changed, 157 insertions(+), 119 deletions(-) diff --git a/src/content/docs/aws/services/organizations.md b/src/content/docs/aws/services/organizations.md index 73ab5ee2..85b906e8 100644 --- a/src/content/docs/aws/services/organizations.md +++ b/src/content/docs/aws/services/organizations.md @@ -1,15 +1,16 @@ --- title: "Organizations" -linkTitle: "Organizations" tags: ["Ultimate"] description: Get started with AWS Organizations on LocalStack --- +## Introduction + Amazon Web Services Organizations is an account management service that allows you to consolidate multiple different AWS accounts into an organization. It allows you to manage different accounts in a single organization and consolidate billing. 
With Organizations, you can also attach different policies to your organizational units (OUs) or individual accounts in your organization.

-Organizations is available over LocalStack Pro and the supported APIs are available over our [configuration page]({{< ref "configuration" >}}).
+Organizations is available in LocalStack Pro and the supported APIs are listed on our [configuration page](/aws/capabilities/config/configuration).

## Getting started

@@ -18,69 +19,69 @@ This guide is intended for users who wish to get more acquainted with Organizati
To get started, start your LocalStack instance using your preferred method:

1. Create a new local AWS Organization with the feature set flag set to `ALL`:
-   {{< command >}}
-   $ awslocal organizations create-organization --feature-set ALL
-   {{< /command >}}
+   ```bash
+   awslocal organizations create-organization --feature-set ALL
+   ```
2. You can now run the `describe-organization` command to see the details of your organization:
-   {{< command >}}
-   $ awslocal organizations describe-organization
-   {{< /command >}}
+   ```bash
+   awslocal organizations describe-organization
+   ```
3. You can now create an AWS account that would be a member of your organization:
-   {{< command >}}
-   $ awslocal organizations create-account \
+   ```bash
+   awslocal organizations create-account \
    --email example@example.com \
    --account-name "Test Account"
-   {{< /command >}}
+   ```
   Since LocalStack essentially mocks AWS, the account creation is instantaneous.
   You can now run the `list-accounts` command to see the details of your organization:
-   {{< command >}}
-   $ awslocal organizations list-accounts
-   {{< /command >}}
+   ```bash
+   awslocal organizations list-accounts
+   ```
4. You can also remove a member account from your organization:
-   {{< command >}}
-   $ awslocal organizations remove-account-from-organization --account-id <account-id>
-   {{< /command >}}
+   ```bash
+   awslocal organizations remove-account-from-organization --account-id <account-id>
+   ```
5. 
To close an account in your organization, you can run the `close-account` command:
-   {{< command >}}
-   $ awslocal organizations close-account --account-id 000000000000
-   {{< /command >}}
+   ```bash
+   awslocal organizations close-account --account-id 000000000000
+   ```
6. You can use organizational units (OUs) to group accounts together to administer as a single unit.
   To create an OU, you can run:
-   {{< command >}}
-   $ awslocal organizations list-roots
-   $ awslocal organizations list-children \
+   ```bash
+   awslocal organizations list-roots
+   awslocal organizations list-children \
    --parent-id <root-id> \
    --child-type ORGANIZATIONAL_UNIT
-   $ awslocal organizations create-organizational-unit \
+   awslocal organizations create-organizational-unit \
    --parent-id <parent-id> \
    --name New-Child-OU
-   {{< /command >}}
+   ```
7. Before you can create and attach a policy to your organization, you must enable a policy type.
   To enable a policy type, you can run:
-   {{< command >}}
-   $ awslocal organizations enable-policy-type \
+   ```bash
+   awslocal organizations enable-policy-type \
    --root-id <root-id> \
    --policy-type BACKUP_POLICY
-   {{< /command >}}
+   ```
   To disable a policy type, you can run:
-   {{< command >}}
-   $ awslocal organizations disable-policy-type \
+   ```bash
+   awslocal organizations disable-policy-type \
    --root-id <root-id> \
    --policy-type BACKUP_POLICY
-   {{< /command >}}
+   ```
8. To view the policies that are attached to your organization, you can run:
-   {{< command >}}
-   $ awslocal organizations list-policies --filter SERVICE_CONTROL_POLICY
-   {{< /command >}}
+   ```bash
+   awslocal organizations list-policies --filter SERVICE_CONTROL_POLICY
+   ```
9. 
To delete an organization, you can run:
-   {{< command >}}
-   $ awslocal organizations delete-organization
-   {{< /command >}}
+   ```bash
+   awslocal organizations delete-organization
+   ```
diff --git a/src/content/docs/aws/services/pca.md b/src/content/docs/aws/services/pca.md
index ec778e85..ffee90ed 100644
--- a/src/content/docs/aws/services/pca.md
+++ b/src/content/docs/aws/services/pca.md
@@ -1,6 +1,5 @@
 ---
-title: "Private Certificate Authority (ACM PCA)"
-linkTitle: "Private Certificate Authority (ACM PCA)"
+title: Private Certificate Authority (ACM PCA)
 description: Get started with Private Certificate Authority (ACM PCA) on LocalStack
 tags: ["Ultimate"]
 ---
@@ -12,7 +11,7 @@ ACM PCA extends ACM's certificate management capabilities to private certificate
LocalStack allows you to use the ACM PCA APIs to create, list, and delete private certificates.
You can create, describe, tag, and list tags for a CA using ACM PCA.

-The supported APIs are available on our [API coverage page]({{< ref "coverage_acm-pca" >}}), which provides information on the extent of ACM PCA's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of ACM PCA's integration with LocalStack.

## Getting started

@@ -24,8 +23,8 @@ We will follow the procedure to create and install a certificate for a single-le
Start by creating a new Certificate Authority with ACM PCA using the [`CreateCertificateAuthority`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_CreateCertificateAuthority.html) API.
This command sets up a new CA with specified configurations for key algorithm, signing algorithm, and subject information.
-{{< command >}} -$ awslocal acm-pca create-certificate-authority \ +```bash +awslocal acm-pca create-certificate-authority \ --certificate-authority-configuration '{ "KeyAlgorithm":"RSA_2048", "SigningAlgorithm":"SHA256WITHRSA", @@ -37,22 +36,29 @@ $ awslocal acm-pca create-certificate-authority \ } }' \ --certificate-authority-type "ROOT" - +``` + +The output will be similar to the following: + +```json { "CertificateAuthorityArn": "arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff" } - -{{< /command >}} +``` Note the `CertificateAuthorityArn` from the output as it will be needed for subsequent commands. To retrieve the detailed information about the created Certificate Authority, use the [`DescribeCertificateAuthority`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_DescribeCertificateAuthority.html) API. This command returns the detailed information about the CA, including the CA's ARN, status, and configuration. -{{< command >}} -$ awslocal acm-pca describe-certificate-authority \ +```bash +awslocal acm-pca describe-certificate-authority \ --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff - +``` + +The output will be similar to the following: + +```json { "CertificateAuthority": { "Arn": "arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff", @@ -79,8 +85,7 @@ $ awslocal acm-pca describe-certificate-authority \ "UsageMode": "SHORT_LIVED_CERTIFICATE" } } - -{{< /command >}} +``` Note the `PENDING_CERTIFICATE` status. In the following steps, we will create and attach a certificate for this CA. @@ -89,27 +94,30 @@ In the following steps, we will create and attach a certificate for this CA. 
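Most of the commands that follow take the CA ARN returned above. When scripting against these APIs, it can help to pull the region, account ID, or resource ID back out of such an ARN; a small, generic sketch (the parsing rule is just the standard `arn:partition:service:region:account:resource` layout):

```python
def parse_arn(arn: str) -> dict:
    # Split an ARN of the form arn:partition:service:region:account:resource.
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError(f"not a valid ARN: {arn!r}")
    keys = ("partition", "service", "region", "account", "resource")
    return dict(zip(keys, parts[1:]))

ca_arn = ("arn:aws:acm-pca:eu-central-1:000000000000:"
          "certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff")
parsed = parse_arn(ca_arn)
assert parsed["service"] == "acm-pca"
assert parsed["region"] == "eu-central-1"
assert parsed["account"] == "000000000000"
```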
Use the [`GetCertificateAuthorityCsr`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_GetCertificateAuthorityCsr.html) operation to obtain the Certificate Signing Request (CSR) for the CA. -{{< command >}} -$ awslocal acm-pca get-certificate-authority-csr \ +```bash +awslocal acm-pca get-certificate-authority-csr \ --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \ --output text | tee ca.csr -{{< /command >}} +``` Next, issue the certificate for the CA using this CSR. -{{< command >}} -$ awslocal acm-pca issue-certificate \ +```bash +awslocal acm-pca issue-certificate \ --csr fileb://ca.csr \ --signing-algorithm SHA256WITHRSA \ --template-arn arn:aws:acm-pca:::template/RootCACertificate/V1 \ --validity Value=10,Type=YEARS \ --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff - +``` + +The output will be similar to the following: + +```json { "CertificateArn": "arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff/certificate/17ef7bbf3cc6471ba3ef0707119b8392" } - -{{< /command >}} +``` The CA certificate is now created and its ARN is indicated by the `CertificateArn` parameter. @@ -117,31 +125,36 @@ The CA certificate is now created and its ARN is indicated by the `CertificateAr Finally, we retrieve the signed certificate with [`GetCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_GetCertificate.html) and import it using [`ImportCertificateAuthorityCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_ImportCertificateAuthorityCertificate.html). 

-{{< command >}}
-$ awslocal acm-pca get-certificate \
+```bash
+awslocal acm-pca get-certificate \
    --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
    --certificate-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff/certificate/17ef7bbf3cc6471ba3ef0707119b8392 \
    --output text | tee cert.pem
-{{< /command >}}
+```
+
+Next, import the retrieved certificate into the CA:

-{{< command >}}
-$ awslocal acm-pca import-certificate-authority-certificate \
+```bash
+awslocal acm-pca import-certificate-authority-certificate \
    --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
    --certificate fileb://cert.pem
-{{< /command >}}
+```

The CA is now ready for use.
You can verify this by checking its status:

-{{< command >}}
-$ awslocal acm-pca describe-certificate-authority \
+```bash
+awslocal acm-pca describe-certificate-authority \
    --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
    --query CertificateAuthority.Status \
    --output text
-
+```
+
+The output will be:
+
+```bash
ACTIVE
-
-{{< /command >}}
+```

The CA certificate can be retrieved at a later point using [`GetCertificateAuthorityCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_GetCertificateAuthorityCertificate.html).
In general, this operation returns both the certificate and the certificate chain.
@@ -154,16 +167,20 @@ With the private CA set up, you can now issue end-entity certificates.
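Because `GetCertificateAuthorityCertificate` can hand back the certificate and its chain concatenated in one PEM bundle, splitting that bundle into individual certificates is a common follow-up step. A dependency-free sketch — the two-entry bundle below is a dummy placeholder, not real certificate material:

```python
def split_pem_bundle(bundle: str) -> list:
    # Collect each -----BEGIN/END CERTIFICATE----- block as its own string.
    certs, current, inside = [], [], False
    for line in bundle.splitlines():
        stripped = line.strip()
        if stripped == "-----BEGIN CERTIFICATE-----":
            inside, current = True, [line]
        elif stripped == "-----END CERTIFICATE-----" and inside:
            current.append(line)
            certs.append("\n".join(current))
            inside = False
        elif inside:
            current.append(line)
    return certs

dummy_bundle = (
    "-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n"
    "-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n"
)
parts = split_pem_bundle(dummy_bundle)
assert len(parts) == 2
```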
Using [OpenSSL](https://openssl-library.org/), create a CSR and the private key: -{{< command >}} -$ openssl req -out local-csr.pem -new -newkey rsa:2048 -nodes -keyout local-pkey.pem -{{< /command >}} +```bash +openssl req -out local-csr.pem -new -newkey rsa:2048 -nodes -keyout local-pkey.pem +``` You may inspect the CSR using the following command. It should resemble the illustrated output. -{{< command >}} -$ openssl req -in local-csr.pem -text -noout - +```bash +openssl req -in local-csr.pem -text -noout +``` + +The output will be similar to the following: + +```bash Certificate Request: Data: Version: 1 (0x0) @@ -182,34 +199,41 @@ Certificate Request: Signature Value: 3e:23:12:26:45:af:39:35:5d:d7:b4:40:fb:1a:08:c7:16:c3: ... - -{{< /command >}} +``` Next, using [`IssueCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_IssueCertificate.html) you can generate the end-entity certificate. Note that there is no [certificate template](https://docs.aws.amazon.com/privateca/latest/userguide/UsingTemplates.html) specified which causes the end-entity certificate to be issued by default. -{{< command >}} -$ awslocal acm-pca issue-certificate \ +```bash +awslocal acm-pca issue-certificate \ --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \ --csr fileb://local-csr.pem \ --signing-algorithm "SHA256WITHRSA" \ --validity Value=365,Type="DAYS" - +``` + +The output will be similar to the following: + +```json { "CertificateArn": "arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff/certificate/079d0a13daf943f6802d365dd83658c7" } - -{{< /command >}} +``` ### Verify Certificates Using OpenSSL, you can verify that the end-entity certificate was indeed signed by the CA. In the following command, `local-cert.pem` refers to the end-entity certificate and `cert.pem` refers to the CA certificate. 
-{{< command >}} -$ openssl verify -CAfile cert.pem local-cert.pem +```bash +openssl verify -CAfile cert.pem local-cert.pem +``` + +The output will be: + +```bash local-cert.pem: OK -{{< /command >}} +``` ### Tag the Certificate Authority @@ -217,20 +241,24 @@ Tagging resources in AWS helps in managing and identifying them. Use the [`TagCertificateAuthority`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_TagCertificateAuthority.html) API to tag the created Certificate Authority. This command adds the specified tags to the specified CA. -{{< command >}} -$ awslocal acm-pca tag-certificate-authority \ +```bash +awslocal acm-pca tag-certificate-authority \ --certificate-authority-arn arn:aws:acm-pca:us-east-1:000000000000:certificate-authority/f38ee966-bc23-40f8-8143-e981aee73600 \ --tags Key=Admin,Value=Alice -{{< /command >}} +``` After tagging your Certificate Authority, you may want to view these tags. You can use the [`ListTags`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_ListTags.html) API to list all the tags associated with the specified CA. 
-{{< command >}} -$ awslocal acm-pca list-tags \ +```bash +awslocal acm-pca list-tags \ --certificate-authority-arn arn:aws:acm-pca:us-east-1:000000000000:certificate-authority/f38ee966-bc23-40f8-8143-e981aee73600 \ --max-results 10 - +``` + +The output will be similar to the following: + +```json { "Tags": [ { @@ -243,5 +271,4 @@ $ awslocal acm-pca list-tags \ } ] } - -{{< /command >}} +``` diff --git a/src/content/docs/aws/services/pinpoint.md b/src/content/docs/aws/services/pinpoint.md index d11d67f1..d8d33105 100644 --- a/src/content/docs/aws/services/pinpoint.md +++ b/src/content/docs/aws/services/pinpoint.md @@ -1,16 +1,15 @@ --- title: "Pinpoint" -linkTitle: "Pinpoint" description: Get started with Pinpoint on LocalStack tags: ["Ultimate"] persistence: supported --- -{{< callout "warning" >}} +:::danger Amazon Pinpoint will be [retired on 30 October 2026](https://docs.aws.amazon.com/pinpoint/latest/userguide/migrate.html). It will be removed from LocalStack soon after this date. -{{< /callout >}} +::: ## Introduction @@ -18,7 +17,7 @@ Pinpoint is a customer engagement service to facilitate communication across mul Pinpoint allows developers to create and manage customer segments based on various attributes, such as user behavior and demographics, while integrating with other AWS services to send targeted messages to customers. LocalStack allows you to mock the Pinpoint APIs in your local environment. -The supported APIs are available on our [API coverage page]({{< ref "coverage_pinpoint" >}}), which provides information on the extent of Pinpoint's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Pinpoint's integration with LocalStack. 
## Getting started @@ -32,10 +31,10 @@ We will demonstrate how to create a Pinpoint application, retrieve all applicati Create a Pinpoint application using the [`CreateApp`](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id.html) API. Execute the following command: -{{< command >}} -$ awslocal pinpoint create-app \ +```bash +awslocal pinpoint create-app \ --create-application-request Name=ExampleCorp,tags={"Stack"="Test"} -{{< /command >}} +``` The following output would be retrieved: @@ -55,9 +54,9 @@ The following output would be retrieved: You can list all applications using the [`GetApps`](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps.html) API. Execute the following command: -{{< command >}} -$ awslocal pinpoint get-apps -{{< /command >}} +```bash +awslocal pinpoint get-apps +``` The following output would be retrieved: @@ -81,10 +80,10 @@ The following output would be retrieved: You can list all tags for the application using the [`GetApp`](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id.html) API. Execute the following command: -{{< command >}} -$ awslocal pinpoint list-tags-for-resource \ +```bash +awslocal pinpoint list-tags-for-resource \ --resource-arn arn:aws:mobiletargeting:us-east-1:000000000000:apps/4487a55ac6fb4a2699a1b90727c978e7 -{{< /command >}} +``` Replace the `resource-arn` with the ARN of the application you created earlier. 
The following output would be retrieved: @@ -111,8 +110,8 @@ Instead it provides alternative ways to retrieve the actual OTP code as illustra Begin by making a OTP request: -{{< command >}} -$ awslocal pinpoint send-otp-message \ +```bash +awslocal pinpoint send-otp-message \ --application-id fff5a801e01643c18a13a763e22a8fbf \ --send-otp-message-request-parameters '{ "BrandName": "LocalStack Community", @@ -124,19 +123,27 @@ $ awslocal pinpoint send-otp-message \ "AllowedAttempts": 3, "ValidityPeriod": 2 }' - +``` + +The output will be similar to the following: + +```json { "MessageResponse": { "ApplicationId": "fff5a801e01643c18a13a763e22a8fbf" } } - -{{< /command >}} +``` You can use the debug endpoint `/_aws/pinpoint//` to retrieve the OTP message details: -{{< command >}} -$ curl http://localhost:4566/_aws/pinpoint/fff5a801e01643c18a13a763e22a8fbf/liftoffcampaign | jq . +```bash +curl http://localhost:4566/_aws/pinpoint/fff5a801e01643c18a13a763e22a8fbf/liftoffcampaign | jq . +``` + +The output will be similar to the following: + +```json { "AllowedAttempts": 3, "BrandName": "LocalStack Community", @@ -150,7 +157,7 @@ $ curl http://localhost:4566/_aws/pinpoint/fff5a801e01643c18a13a763e22a8fbf/lift "CreatedTimestamp": "2024-10-17T05:38:24.070Z", "Code": "655745" } -{{< /command >}} +``` The OTP code is also printed in an `INFO` level message in the LocalStack log output: @@ -160,22 +167,25 @@ The OTP code is also printed in an `INFO` level message in the LocalStack log ou Finally, the OTP code can be verified using: -{{< command >}} -$ awslocal pinpoint verify-otp-message \ +```bash +awslocal pinpoint verify-otp-message \ --application-id fff5a801e01643c18a13a763e22a8fbf \ --verify-otp-message-request-parameters '{ "ReferenceId": "liftoffcampaign", "DestinationIdentity": "+1224364860", "Otp": "655745" }' - +``` + +The output will be similar to the following: + +```json { "VerificationResponse": { "Valid": true } } - -{{< /command >}} +``` When validating OTP 
codes, LocalStack checks for the number of allowed attempts and the validity period. Unlike AWS, there is no lower limit for validity period. From 37da02776755ce4dcaee4e7b2af1d4456eb5642c Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:53:30 +0530 Subject: [PATCH 69/80] revamp pipes --- src/content/docs/aws/services/pipes.md | 51 ++++++++++++-------------- 1 file changed, 24 insertions(+), 27 deletions(-) diff --git a/src/content/docs/aws/services/pipes.md b/src/content/docs/aws/services/pipes.md index 100bb41a..73f0f945 100644 --- a/src/content/docs/aws/services/pipes.md +++ b/src/content/docs/aws/services/pipes.md @@ -1,6 +1,5 @@ --- title: "EventBridge Pipes" -linkTitle: "EventBridge Pipes" description: Get started with EventBridge Pipes on LocalStack tags: ["Free"] persistence: supported with limitations @@ -16,12 +15,12 @@ In contrast, EventBridge Event Bus offers a one-to-many integration where an eve LocalStack allows you to use the Pipes APIs in your local environment to create Pipes with SQS queues and Kinesis streams as source and target. You can also filter events using EventBridge event patterns and enrich events using Lambda. -The supported APIs are available on our [API coverage page]({{< ref "coverage_pipes" >}}), which provides information on the extent of Pipe's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Pipe's integration with LocalStack. -{{< callout >}} +:::note The implementation of EventBridge Pipes is currently in **preview** stage and under active development. If you would like support for more APIs or report bugs, please make an issue on [GitHub](https://github.com/localstack/localstack/issues/new/choose). -{{< /callout >}} +::: ## Getting started @@ -35,29 +34,29 @@ We will demonstrate how to create a Pipe with SQS queues as source and target, a Create two SQS queues that will be used as source and target for the Pipe. 
Run the following command to create a queue using the [`CreateQueue`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html) API: -{{< command >}} -$ awslocal sqs create-queue --queue-name source-queue -$ awslocal sqs create-queue --queue-name target-queue -{{< /command >}} +```bash +awslocal sqs create-queue --queue-name source-queue +awslocal sqs create-queue --queue-name target-queue +``` You can fetch their queue ARNs using the [`GetQueueAttributes`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueAttributes.html) API: -{{< command >}} -$ SOURCE_QUEUE_ARN=$(awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/source-queue --attribute-names QueueArn --output text) -$ TARGET_QUEUE_ARN=$(awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/target-queue --attribute-names QueueArn --output text) -{{< /command >}} +```bash +SOURCE_QUEUE_ARN=$(awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/source-queue --attribute-names QueueArn --output text) +TARGET_QUEUE_ARN=$(awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/target-queue --attribute-names QueueArn --output text) +``` ### Create a Pipe You can now create a Pipe, using the [`CreatePipe`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_CreatePipe.html) API. 
Run the following command, by specifying the source and target queue ARNs we created earlier: -{{< command >}} -$ awslocal pipes create-pipe --name sample-pipe \ +```bash +awslocal pipes create-pipe --name sample-pipe \ --source $SOURCE_QUEUE_ARN \ --target $TARGET_QUEUE_ARN \ --role-arn arn:aws:iam::000000000000:role/pipes-role -{{< /command >}} +``` The following output would be retrieved: @@ -76,9 +75,9 @@ The following output would be retrieved: You can use the [`DescribePipe`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_DescribePipe.html) API to get information about the Pipe: -{{< command >}} -$ awslocal pipes describe-pipe --name sample-pipe -{{< /command >}} +```bash +awslocal pipes describe-pipe --name sample-pipe +``` The following output would be retrieved: @@ -110,29 +109,27 @@ The following output would be retrieved: You can now send events to the source queue, which will be routed to the target queue. Run the following command to send an event to the source queue: -{{< command >}} -$ awslocal sqs send-message \ +```bash +awslocal sqs send-message \ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/source-queue \ --message-body "message-1" -{{< /command >}} +``` ### Receive events from the target queue You can fetch the message from the target queue using the [`ReceiveMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) API: -{{< command >}} -$ awslocal sqs receive-message \ +```bash +awslocal sqs receive-message \ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/target-queue -{{< /command >}} +``` ## Resource Browser The LocalStack Web Application provides a Resource Browser for managing EventBridge Pipes. 
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **EventBridge Pipes** under the **App Integration** section. -EventBridge Pipes Resource Browser -
-
+![EventBridge Pipes Resource Browser](/images/aws/pipes-resource-browser.png) The Resource Browser for EventBridge Pipes in LocalStack allows you to perform the following actions: From 8953a83c3b0a202f24b887b479cad84562b8c7f1 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:54:53 +0530 Subject: [PATCH 70/80] revamp qldb --- src/content/docs/aws/services/qldb.md | 86 ++++++++++++++++----------- 1 file changed, 52 insertions(+), 34 deletions(-) diff --git a/src/content/docs/aws/services/qldb.md b/src/content/docs/aws/services/qldb.md index 9401e79a..d52dde13 100644 --- a/src/content/docs/aws/services/qldb.md +++ b/src/content/docs/aws/services/qldb.md @@ -5,10 +5,10 @@ tags: ["Ultimate"] description: Get started with Quantum Ledger Database (QLDB) on LocalStack --- -{{< callout "warning" >}} +:::danger Amazon QLDB will be [retired on 31 July 2025](https://docs.aws.amazon.com/qldb/latest/developerguide/what-is.html). It will be removed from LocalStack soon after this date. -{{< /callout >}} +::: ## Introduction @@ -22,7 +22,7 @@ and scalable way to maintain a complete and verifiable history of data changes over time. LocalStack allows you to use the QLDB APIs in your local environment to create and manage ledgers. -The supported APIs are available on the [API coverage page]({{< ref "/references/coverage/coverage_qldb/index.md" >}} "QLDB service coverage page"), which provides information on the extent of QLDB's integration with LocalStack. +The supported APIs are available on the [API coverage page](), which provides information on the extent of QLDB's integration with LocalStack. ## Getting started @@ -54,9 +54,11 @@ the [Releases](https://github.com/awslabs/amazon-qldb-shell/releases) section of QLDB provides ledger databases, which are centralized, immutable, and cryptographically verifiable journals of transactions. 
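The "cryptographically verifiable" property means every revision in the journal is hash-chained to what came before, so past data cannot be silently rewritten. The sketch below illustrates the idea only — QLDB actually maintains a Merkle-tree-based journal digest, not this exact scheme:

```python
import hashlib
import json

def revision_hash(document: dict, previous_hash: str) -> str:
    # Each revision's hash covers the document plus the previous revision's hash.
    payload = json.dumps(document, sort_keys=True) + previous_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

h1 = revision_hash({"VIN": "KM8SRDHF6EU074761"}, previous_hash="")
h2 = revision_hash({"VIN": "KM8SRDHF6EU074761", "RegNum": 1722}, previous_hash=h1)

# Tampering with revision 1 changes its hash, which invalidates revision 2's hash too.
tampered_h1 = revision_hash({"VIN": "TAMPERED"}, previous_hash="")
assert revision_hash({"VIN": "KM8SRDHF6EU074761", "RegNum": 1722}, tampered_h1) != h2
```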
-{{< command >}} -$ awslocal qldb create-ledger --name vehicle-registration --permissions-mode ALLOW_ALL -{{< / command >}} +```bash +awslocal qldb create-ledger --name vehicle-registration --permissions-mode ALLOW_ALL +``` + +The output will be similar to the following: ```bash { @@ -69,7 +71,7 @@ $ awslocal qldb create-ledger --name vehicle-registration --permissions-mode ALL } ``` -{{< callout >}} +:::note - Permissions mode – the following options are available in AWS: @@ -89,13 +91,13 @@ To allow PartiQL commands, you must create IAM permissions policies for specific table resources and PartiQL actions, in addition to the `SendCommand` API permission for the ledger. -{{< /callout >}} +::: The following command can be used directly to write PartiQL statements against a QLDB ledger: -{{< command >}} -$ qldb --qldb-session-endpoint http://localhost:4566 --ledger vehicle-registration -{{< / command >}} +```bash +qldb --qldb-session-endpoint http://localhost:4566 --ledger vehicle-registration +``` The user can continue from here to create tables, populate and interrogate them. @@ -104,9 +106,11 @@ The user can continue from here to create tables, populate and interrogate them. PartiQL is a query language designed for processing structured data, allowing you to perform various data manipulation tasks using familiar SQL-like syntax. -{{< command >}} +```bash qldb> CREATE TABLE VehicleRegistration -{{< / command >}} +``` + +The output will be: ```bash { @@ -131,7 +135,7 @@ qldb> CREATE TABLE VehicleRegistration The `VehicleRegistration` table was created. 
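The record inserted in the next step is a nested, JSON-like document (QLDB stores documents as Amazon Ion). As a rough illustration of its shape, here is an abridged version as a plain dictionary — the `PersonId` value is a placeholder, and Ion timestamp literals such as the backticked 2017-09-14T are approximated as strings:

```python
registration = {
    "VIN": "KM8SRDHF6EU074761",
    "RegNum": 1722,
    "ValidFromDate": "2017-09-14",  # Ion timestamp `2017-09-14T`, kept as a string here
    "ValidToDate": "2020-06-25",
    "Owners": {
        "PrimaryOwner": {"PersonId": "P-0001"},  # placeholder id
        "SecondaryOwners": [],
    },
}

# Nested fields are addressed the same way PartiQL does: r.Owners.PrimaryOwner.PersonId
assert registration["Owners"]["PrimaryOwner"]["PersonId"] == "P-0001"
```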
Now it's time to add some items: -{{< command >}} +```bash qldb> INSERT INTO VehicleRegistration VALUE { 'VIN' : 'KM8SRDHF6EU074761', @@ -149,7 +153,9 @@ qldb> INSERT INTO VehicleRegistration VALUE 'ValidFromDate' : `2017-09-14T`, 'ValidToDate' : `2020-06-25T` } -{{< / command >}} +``` + +The output will be: ```bash { @@ -162,9 +168,11 @@ documentId: "3TYR9BamzyqHWBjYOfHegE" The table can be interrogated based on the inserted registration number: -{{< command >}} +```bash qldb> SELECT * FROM VehicleRegistration WHERE RegNum=1722 -{{< / command >}} +``` + +The output will be: ```bash { @@ -193,9 +201,10 @@ queries. Supposed the vehicle is sold and changes owners, this information needs to be updated with a new person ID. -{{< command >}} +```bash qldb> UPDATE VehicleRegistration AS r SET r.Owners.PrimaryOwner.PersonId = '112233445566NO' WHERE r.VIN = 'KM8SRDHF6EU074761' -{{< / command >}} +``` + The command will return the updated document ID. ```bash @@ -206,9 +215,12 @@ The command will return the updated document ID. ``` The next step is to check on the updates made to the `PersonId` field of the `PrimaryOwner`: -{{< command >}} + +```bash qldb> SELECT r.Owners FROM VehicleRegistration AS r WHERE r.VIN = 'KM8SRDHF6EU074761' -{{< / command >}} +``` + +The output will be: ```bash { @@ -236,9 +248,11 @@ You can see all revisions of a document that you inserted, updated, and deleted built-in History function. First the unique `id` of the document must be found. -{{< command >}} +```bash qldb> SELECT r_id FROM VehicleRegistration AS r BY r_id WHERE r.VIN = 'KM8SRDHF6EU074761' -{{< / command >}} +``` + +The output will be: ```bash { @@ -250,9 +264,11 @@ r_id: "3TYR9BamzyqHWBjYOfHegE" Then, the `id` is used to query the history function. 
-{{< command >}} +```bash qldb> SELECT h.data.VIN, h.data.City, h.data.Owners FROM history(VehicleRegistration) AS h WHERE h.metadata.id = '3TYR9BamzyqHWBjYOfHegE' -{{< / command >}} +``` + +The output will be: ```bash { @@ -298,9 +314,11 @@ Unused ledgers can be deleted. You'll notice that directly running the following command will lead to an error message. -{{< command >}} -$ awslocal qldb delete-ledger --name vehicle-registration -{{< / command >}} +```bash +awslocal qldb delete-ledger --name vehicle-registration +``` + +The output will be: ```bash An error occurred (ResourcePreconditionNotMetException) when calling the DeleteLedger operation: Preventing deletion @@ -309,9 +327,11 @@ of ledger vehicle-registration with DeletionProtection enabled This can be adjusted using the `update-ledger` command in the AWS CLI to remove the deletion protection of the ledger: -{{< command >}} -$ awslocal qldb update-ledger --name vehicle-registration --no-deletion-protection -{{< / command >}} +```bash +awslocal qldb update-ledger --name vehicle-registration --no-deletion-protection +``` + +The output will be: ```bash { @@ -330,9 +350,7 @@ Now the `delete-ledger` command can be repeated without errors. The LocalStack Web Application provides a Resource Browser for managing QLDB ledgers. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **QLDB** under the **Database** section. -QLDB Resource Browser -
-
+![QLDB Resource Browser](/images/aws/qldb-resource-browser.png) The Resource Browser allows you to perform the following actions: From cdd84f482ede5b4aa7a8b17d340fde32ae0dc03c Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:55:25 +0530 Subject: [PATCH 71/80] revamp ram --- src/content/docs/aws/services/ram.md | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/src/content/docs/aws/services/ram.md b/src/content/docs/aws/services/ram.md index 4d1db6dd..4978ac67 100644 --- a/src/content/docs/aws/services/ram.md +++ b/src/content/docs/aws/services/ram.md @@ -1,6 +1,5 @@ --- title: "Resource Access Manager (RAM)" -linkTitle: "Resource Access Manager (RAM)" description: Get started with RAM on LocalStack tags: ["Ultimate"] --- @@ -9,7 +8,7 @@ tags: ["Ultimate"] Resource Access Manager (RAM) helps resources to be shared across AWS accounts, within or across organizations. On AWS, RAM is an abstraction on top of AWS Identity and Access Management (IAM) which can manage resource-based policies to supported resource types. -The API operations supported by LocalStack can be found on the [API coverage page]({{< ref "coverage_ram" >}}). +The API operations supported by LocalStack can be found on the [API coverage page](), which provides information on the extent of RAM's integration with LocalStack. 
## Getting started @@ -18,21 +17,21 @@ This section will illustrate how to create permissions and resource shares using ### Create a permission -{{< command >}} -$ awslocal ram create-permission \ +```bash +awslocal ram create-permission \ --name example \ --resource-type appsync:apis \ --policy-template '{"Effect": "Allow", "Action": "appsync:SourceGraphQL"}' -{{< /command >}} +``` ### Create a resource share -{{< command >}} -$ awslocal ram create-resource-share \ +```bash +awslocal ram create-resource-share \ --name example-resource-share \ --principals arn:aws:organizations::000000000000:organization/o-truopwybwi \ --resource-arn arn:aws:appsync:eu-central-1:000000000000:apis/wcgmjril5wuyvhmpildatuaat3 -{{< /command >}} +``` ## Current Limitations From 3b2d34dbf62262d1bdf3fb6430fa972459d9f305 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Wed, 18 Jun 2025 23:57:19 +0530 Subject: [PATCH 72/80] revamp rds --- src/content/docs/aws/services/rds.md | 95 ++++++++++++++-------------- 1 file changed, 46 insertions(+), 49 deletions(-) diff --git a/src/content/docs/aws/services/rds.md b/src/content/docs/aws/services/rds.md index 75ee825d..16d2fbfc 100644 --- a/src/content/docs/aws/services/rds.md +++ b/src/content/docs/aws/services/rds.md @@ -1,6 +1,5 @@ --- title: "Relational Database Service (RDS)" -linkTitle: "Relational Database Service (RDS)" description: Get started with Relational Database Service (RDS) on LocalStack tags: ["Base"] persistence: supported with limitations @@ -13,15 +12,15 @@ RDS allows you to deploy and manage various relational database engines like MyS RDS handles routine database tasks such as provisioning, patching, backup, recovery, and scaling. LocalStack allows you to use the RDS APIs in your local environment to create and manage RDS clusters and instances for testing & integration purposes. 
-The supported APIs are available on our [API coverage page]({{< ref "coverage_rds" >}}), which provides information on the extent of RDS's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of RDS's integration with LocalStack. -{{< callout >}} +:::note We’ve introduced a new native RDS provider in LocalStack and made it the default. This replaces Moto-based CRUD operations with a more reliable setup. RDS state created in version 4.3 or earlier using Cloud Pods or standard persistence will not be compatible with the new provider introduced in version 4.4. Recreating the RDS state is recommended for compatibility. -{{< /callout >}} +::: ## Getting started @@ -42,14 +41,14 @@ To create an RDS cluster, you can use the [`CreateDBCluster`](https://docs.aws.a The following command creates a new cluster with the name `db1` and the engine `aurora-postgresql`. Instances for the cluster must be added manually. -{{< command >}} -$ awslocal rds create-db-cluster \ +```bash +awslocal rds create-db-cluster \ --db-cluster-identifier db1 \ --engine aurora-postgresql \ --database-name test \ --master-username myuser \ --master-user-password mypassword -{{< / command >}} +``` You should see the following output: @@ -67,13 +66,13 @@ You should see the following output: To add an instance you can run the following command: -{{< command >}} -$ awslocal rds create-db-instance \ +```bash +awslocal rds create-db-instance \ --db-instance-identifier db1-instance \ --db-cluster-identifier db1 \ --engine aurora-postgresql \ --db-instance-class db.t3.large -{{< / command >}} +``` ### Create a SecretsManager secret @@ -81,8 +80,8 @@ To create a `SecretsManager` secret, you can use the [`CreateSecret`](https://do Before creating the secret, you need to create a JSON file containing the credentials for the database. The following command creates a file called `mycreds.json` with the credentials for the database. 
-{{< command >}} -$ cat << 'EOF' > mycreds.json +```bash +cat << 'EOF' > mycreds.json { "engine": "aurora-postgresql", "username": "myuser", @@ -92,15 +91,15 @@ $ cat << 'EOF' > mycreds.json "port": "4510" } EOF -{{< / command >}} +``` Run the following command to create the secret: -{{< command >}} -$ awslocal secretsmanager create-secret \ +```bash +awslocal secretsmanager create-secret \ --name dbpass \ --secret-string file://mycreds.json -{{< / command >}} +``` You should see the following output: @@ -121,13 +120,13 @@ Make sure to replace the `secret-arn` with the ARN from the secret you just crea The following command executes a query against the database. The query returns the value `123`. -{{< command >}} -$ awslocal rds-data execute-statement \ +```bash +awslocal rds-data execute-statement \ --database test \ --resource-arn arn:aws:rds:us-east-1:000000000000:cluster:db1 \ --secret-arn arn:aws:secretsmanager:us-east-1:000000000000:secret:dbpass-cfnAX \ --include-result-metadata --sql 'SELECT 123' -{{< / command >}} +``` You should see the following output: @@ -165,9 +164,9 @@ You should see the following output: Alternative clients, such as `psql`, can also be employed to interact with the database. You can retrieve the hostname and port of your created instance either from the preceding output or by using the [`DescribeDbInstances`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) API. -{{< command >}} -$ psql -d test -U test -p 4513 -h localhost -W -{{< / command >}} +```bash +psql -d test -U test -p 4513 -h localhost -W +``` ## Supported DB engines @@ -185,10 +184,10 @@ It's important to note that the selection of minor versions is not available. The latest major version will be installed within the Docker environment. If you wish to prevent the installation of customized versions, adjusting the `RDS_PG_CUSTOM_VERSIONS` environment variable to `0` will enforce the use of the default PostgreSQL version 17. 
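For example, to pin the provider to the default engine, the variable can be set as part of the LocalStack configuration when starting the container (a sketch; assumes the `localstack` CLI is used to start LocalStack):

```bash
# Disable per-version PostgreSQL installs; always use the bundled default engine
RDS_PG_CUSTOM_VERSIONS=0 localstack start
```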
-{{< callout >}} +:::note While the [`DescribeDbCluster`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) and [`DescribeDbInstances`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) APIs will still reflect the initially defined `engine-version`, the actual installed PostgreSQL engine might differ. This can have implications, particularly when employing a Terraform configuration, where unexpected changes should be avoided. -{{< /callout >}} +::: Instances and clusters with the PostgreSQL engine have the capability to both create and restore snapshots. @@ -205,10 +204,10 @@ A MySQL community server will be launched in a new Docker container upon request The `engine-version` will serve as the tag for the Docker image, allowing you to freely select the desired MySQL version from those available on the [official MySQL Docker Hub](https://hub.docker.com/_/mysql). If you have a specific image in mind, you can also use the environment variable `MYSQL_IMAGE=`. -{{< callout >}} +:::note The `arm64` MySQL images are limited to newer versions. For more information about availability, check the [MySQL Docker Hub repository](https://hub.docker.com/_/mysql). -{{< /callout >}} +::: It's essential to understand that the `MasterUserPassword` you define for the database cluster/instance will be used as the `MYSQL_ROOT_PASSWORD` environment variable for the `root` user within the MySQL container. The user specified in `MasterUserName` will use the same password and will have complete access to the database. @@ -255,11 +254,11 @@ In this example, you will be able to verify the IAM authentication process for R The following command creates a new database instance with the name `mydb` and the engine `postgres`. The database will be created with a single instance, which will be used as the master instance. -{{< command >}} -$ MASTER_USER=hello -$ MASTER_PW='MyPassw0rd!' 
-$ DB_NAME=test -$ awslocal rds create-db-instance \ +```bash +MASTER_USER=hello +MASTER_PW='MyPassw0rd!' +DB_NAME=test +awslocal rds create-db-instance \ --master-username $MASTER_USER \ --master-user-password $MASTER_PW \ --db-instance-identifier mydb \ @@ -267,38 +266,38 @@ $ awslocal rds create-db-instance \ --db-name $DB_NAME \ --enable-iam-database-authentication \ --db-instance-class db.t3.small -{{< / command >}} +``` ### Connect to the database You can retrieve the hostname and port of your created instance either from the preceding output or by using the [`DescribeDbInstances`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) API. Run the following command to retrieve the host and port of the instance: -{{< command >}} -$ PORT=$(awslocal rds describe-db-instances --db-instance-identifier mydb | jq -r ".DBInstances[0].Endpoint.Port") -$ HOST=$(awslocal rds describe-db-instances --db-instance-identifier mydb | jq -r ".DBInstances[0].Endpoint.Address") -{{< / command >}} +```bash +PORT=$(awslocal rds describe-db-instances --db-instance-identifier mydb | jq -r ".DBInstances[0].Endpoint.Port") +HOST=$(awslocal rds describe-db-instances --db-instance-identifier mydb | jq -r ".DBInstances[0].Endpoint.Address") +``` Next, you can connect to the database using the master username and password: -{{< command >}} -$ PGPASSWORD=$MASTER_PW psql -d $DB_NAME -U $MASTER_USER -p $PORT -h $HOST -w -c 'CREATE USER myiam WITH LOGIN' -$ PGPASSWORD=$MASTER_PW psql -d $DB_NAME -U $MASTER_USER -p $PORT -h $HOST -w -c 'GRANT rds_iam TO myiam' -{{< / command >}} +```bash +PGPASSWORD=$MASTER_PW psql -d $DB_NAME -U $MASTER_USER -p $PORT -h $HOST -w -c 'CREATE USER myiam WITH LOGIN' +PGPASSWORD=$MASTER_PW psql -d $DB_NAME -U $MASTER_USER -p $PORT -h $HOST -w -c 'GRANT rds_iam TO myiam' +``` ### Create a token You can create a token for the user you generated using the 
[`generate-db-auth-token`](https://docs.aws.amazon.com/cli/latest/reference/rds/generate-db-auth-token.html) command: -{{< command >}} -$ TOKEN=$(awslocal rds generate-db-auth-token --username myiam --hostname $HOST --port $PORT) -{{< / command >}} +```bash +TOKEN=$(awslocal rds generate-db-auth-token --username myiam --hostname $HOST --port $PORT) +``` You can now connect to the database utilizing the user you generated and the token obtained in the previous step as the password: -{{< command >}} -$ PGPASSWORD=$TOKEN psql -d $DB_NAME -U myiam -w -p $PORT -h $HOST -{{< / command >}} +```bash +PGPASSWORD=$TOKEN psql -d $DB_NAME -U myiam -w -p $PORT -h $HOST +``` ## Global Database Support @@ -369,9 +368,7 @@ In addition to the `aws_*` extensions described in the sections above, LocalStac The LocalStack Web Application provides a Resource Browser for managing RDS instances and clusters. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **RDS** under the **Database** section. -RDS Resource Browser -
-
+![RDS Resource Browser](/images/aws/rds-resource-browser.png) The Resource Browser allows you to perform the following actions: From 7f5dcd38f2a656fcee657ec415e4a29fa99cf1f1 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Thu, 19 Jun 2025 00:01:01 +0530 Subject: [PATCH 73/80] do a bunch --- src/content/docs/aws/services/redshift.md | 71 ++++++++--------- .../docs/aws/services/resourcegroups.md | 23 +++--- src/content/docs/aws/services/route53.md | 61 ++++++++------- .../docs/aws/services/route53resolver.md | 77 +++++++++++-------- 4 files changed, 120 insertions(+), 112 deletions(-) diff --git a/src/content/docs/aws/services/redshift.md b/src/content/docs/aws/services/redshift.md index 507b9e8c..5cd19eda 100644 --- a/src/content/docs/aws/services/redshift.md +++ b/src/content/docs/aws/services/redshift.md @@ -1,6 +1,5 @@ --- title: "Redshift" -linkTitle: "Redshift" description: Get started with Redshift on LocalStack tags: ["Free", "Ultimate"] --- @@ -12,12 +11,12 @@ RedShift is fully managed by AWS and serves as a petabyte-scale service which al The query results can be saved to an S3 Data Lake while additional analytics can be provided by Athena or SageMaker. LocalStack allows you to use the RedShift APIs in your local environment to analyze structured and semi-structured data across local data warehouses and data lakes. -The supported APIs are available on our [API coverage page]({{< ref "coverage_redshift" >}}), which provides information on the extent of RedShift's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of RedShift's integration with LocalStack. -{{< callout "Note" >}} +:::note Users on Free plan can use RedShift APIs in LocalStack for basic mocking and testing. For advanced features like Redshift Data API and other emulation capabilities, please refer to the Ultimate plan. 
-{{< /callout >}} +::: ## Getting started @@ -51,108 +50,106 @@ You will also create a Glue database, connection, and crawler to populate the Gl You can create a RedShift cluster using the [`CreateCluster`](https://docs.aws.amazon.com/redshift/latest/APIReference/API_CreateCluster.html) API. The following command will create a RedShift cluster with the variables defined above: -{{< command >}} -$ awslocal redshift create-cluster \ +```bash +awslocal redshift create-cluster \ --cluster-identifier $REDSHIFT_CLUSTER_IDENTIFIER \ --db-name $REDSHIFT_DATABASE_NAME \ --master-username $REDSHIFT_USERNAME \ --master-user-password $REDSHIFT_PASSWORD \ --node-type n1 -{{< / command >}} +``` You can fetch the status of the cluster using the [`DescribeClusters`](https://docs.aws.amazon.com/redshift/latest/APIReference/API_DescribeClusters.html) API. Run the following command to extract the URL of the cluster: -{{< command >}} -$ REDSHIFT_URL=$(awslocal redshift describe-clusters \ +```bash +REDSHIFT_URL=$(awslocal redshift describe-clusters \ --cluster-identifier $REDSHIFT_CLUSTER_IDENTIFIER | jq -r '(.Clusters[0].Endpoint.Address) + ":" + (.Clusters[0].Endpoint.Port|tostring)') -{{< / command >}} +``` ### Create a Glue database, connection, and crawler You can create a Glue database using the [`CreateDatabase`](https://docs.aws.amazon.com/glue/latest/webapi/API_CreateDatabase.html) API. The following command will create a Glue database: -{{< command >}} -$ awslocal glue create-database \ +```bash +awslocal glue create-database \ --database-input "{\"Name\": \"$GLUE_DATABASE_NAME\"}" -{{< / command >}} +``` You can create a connection to the RedShift cluster using the [`CreateConnection`](https://docs.aws.amazon.com/glue/latest/webapi/API_CreateConnection.html) API. 
The following command will create a Glue connection with the RedShift cluster: -{{< command >}} -$ awslocal glue create-connection \ +```bash +awslocal glue create-connection \ --connection-input "{\"Name\":\"$GLUE_CONNECTION_NAME\", \"ConnectionType\": \"JDBC\", \"ConnectionProperties\": {\"USERNAME\": \"$REDSHIFT_USERNAME\", \"PASSWORD\": \"$REDSHIFT_PASSWORD\", \"JDBC_CONNECTION_URL\": \"jdbc:redshift://$REDSHIFT_URL/$REDSHIFT_DATABASE_NAME\"}}" -{{< / command >}} +``` Finally, you can create a Glue crawler using the [`CreateCrawler`](https://docs.aws.amazon.com/glue/latest/webapi/API_CreateCrawler.html) API. The following command will create a Glue crawler: -{{< command >}} -$ awslocal glue create-crawler \ +```bash +awslocal glue create-crawler \ --name $GLUE_CRAWLER_NAME \ --database-name $GLUE_DATABASE_NAME \ --targets "{\"JdbcTargets\": [{\"ConnectionName\": \"$GLUE_CONNECTION_NAME\", \"Path\": \"$REDSHIFT_DATABASE_NAME/%/$REDSHIFT_TABLE_NAME\"}]}" \ --role r1 -{{< / command >}} +``` ### Create table in RedShift You can create a table in RedShift using the [`CreateTable`](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_NEW.html) API. The following command will create a table in RedShift: -{{< command >}} -$ REDSHIFT_STATEMENT_ID=$(awslocal redshift-data execute-statement \ +```bash +REDSHIFT_STATEMENT_ID=$(awslocal redshift-data execute-statement \ --cluster-identifier $REDSHIFT_CLUSTER_IDENTIFIER \ --database $REDSHIFT_DATABASE_NAME \ --sql \ "create table $REDSHIFT_TABLE_NAME(salesid integer not null, listid integer not null, sellerid integer not null, buyerid integer not null, eventid integer not null, dateid smallint not null, qtysold smallint not null, pricepaid decimal(8,2), commission decimal(8,2), saletime timestamp)" | jq -r .Id) -{{< / command >}} +``` You can check the status of the statement using the [`DescribeStatement`](https://docs.aws.amazon.com/redshift-data/latest/APIReference/API_DescribeStatement.html) API. 
The following command will check the status of the statement: -{{< command >}} -$ wait "awslocal redshift-data describe-statement \ +```bash +wait "awslocal redshift-data describe-statement \ --id $REDSHIFT_STATEMENT_ID" ".Status" "FINISHED" -{{< / command >}} +``` ### Run the crawler You can run the crawler using the [`StartCrawler`](https://docs.aws.amazon.com/glue/latest/webapi/API_StartCrawler.html) API. The following command will run the crawler: -{{< command >}} -$ awslocal glue start-crawler \ +```bash +awslocal glue start-crawler \ --name $GLUE_CRAWLER_NAME -{{< / command >}} +``` You can wait for the crawler to finish using the [`GetCrawler`](https://docs.aws.amazon.com/glue/latest/webapi/API_GetCrawler.html) API. The following command will wait for the crawler to finish: -{{< command >}} -$ wait "awslocal glue get-crawler \ +```bash +wait "awslocal glue get-crawler \ --name $GLUE_CRAWLER_NAME" ".Crawler.State" "READY" -{{< / command >}} +``` You can finally retrieve the schema of the table using the [`GetTable`](https://docs.aws.amazon.com/glue/latest/webapi/API_GetTable.html) API. The following command will retrieve the schema of the table: -{{< command >}} -$ awslocal glue get-table \ +```bash +awslocal glue get-table \ --database-name $GLUE_DATABASE_NAME \ --name "${REDSHIFT_DATABASE_NAME}_${REDSHIFT_SCHEMA_NAME}_${REDSHIFT_TABLE_NAME}" -{{< / command >}} +``` ## Resource Browser The LocalStack Web Application provides a Resource Browser for managing RedShift clusters. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **RedShift** under the **Analytics** section. -RedShift Resource Browser -
-
+![RedShift Resource Browser](/images/aws/redshift-resource-browser.png) The Resource Browser allows you to perform the following actions: diff --git a/src/content/docs/aws/services/resourcegroups.md b/src/content/docs/aws/services/resourcegroups.md index b3729229..58649d89 100644 --- a/src/content/docs/aws/services/resourcegroups.md +++ b/src/content/docs/aws/services/resourcegroups.md @@ -1,6 +1,5 @@ --- title: "Resource Groups" -linkTitle: "Resource Groups" tags: ["Free"] description: > Get started with Resource Groups on LocalStack @@ -14,7 +13,7 @@ Resource Groups in AWS provide two types of queries that developers can use to b With Tag-based queries, developers can organize resources based on common attributes or characteristics, while CloudFormation stack-based queries allow developers to group resources that are deployed together as part of a CloudFormation stack. LocalStack allows you to use the Resource Groups APIs in your local environment to group and categorize resources based on criteria such as tags, resource types, regions, or custom attributes. -The supported APIs are available on our [API coverage page]({{< ref "coverage_resource-groups" >}}), which provides information on the extent of Resource Group's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Resource Group's integration with LocalStack. ## Getting Started @@ -34,11 +33,11 @@ A tag-based group is created based on a query of type `TAG_FILTERS_1_0`. Use the [`CreateGroup`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_CreateGroup.html) API to create a Resource Group. 
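The `--resource-query` value used in the command below is doubly encoded: the `Query` field is itself a JSON document serialized into a string. If the hand-written escaping becomes hard to maintain, you can build the value with `jq`, which this guide already uses elsewhere — a sketch, with example filter values:

```shell
# Inner query: the resource types and tags the group should match (example values)
INNER='{"ResourceTypeFilters":["AWS::EC2::Instance"],"TagFilters":[{"Key":"Stage","Values":["Test"]}]}'

# Wrap it as a TAG_FILTERS_1_0 resource query; --arg serializes the inner
# document into the Query string and handles all quoting for you
jq -cn --arg q "$INNER" '{Type: "TAG_FILTERS_1_0", Query: $q}'
```

The printed JSON can then be passed directly as the `--resource-query` argument.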
Run the following command to create a Resource Group named `my-resource-group`: -{{< command >}} -$ awslocal resource-groups create-group \ +```bash +awslocal resource-groups create-group \ --name my-resource-group \ --resource-query '{"Type":"TAG_FILTERS_1_0","Query":"{\"ResourceTypeFilters\":[\"AWS::EC2::Instance\"],\"TagFilters\":[{\"Key\":\"Stage\",\"Values\":[\"Test\"]}]}"}' -{{< /command >}} +``` You can also specify `AWS::AllSupported` as the `ResourceTypeFilters` value to include all supported resource types in the group. @@ -47,27 +46,27 @@ You can also specify `AWS::AllSupported` as the `ResourceTypeFilters` value to i To update a Resource Group, use the [`UpdateGroup`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_UpdateGroup.html) API. Execute the following command to update the Resource Group `my-resource-group`: -{{< command >}} +```bash awslocal resource-groups update-group \ --group-name my-resource-group \ --description "EC2 S3 buckets and RDS DBs that we are using for the test stage" -{{< /command >}} +``` Furthermore, you can also update the query and tags associated with a Resource Group using the [`UpdateGroup`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_UpdateGroup.html) API. Run the following command to update the query and tags of the Resource Group `my-resource-group`: -{{< command >}} +```bash awslocal resource-groups update-group-query \ --group-name my-resource-group \ --resource-query '{"Type":"TAG_FILTERS_1_0","Query":"{\"ResourceTypeFilters\":[\"AWS::EC2::Instance\",\"AWS::S3::Bucket\",\"AWS::RDS::DBInstance\"],\"TagFilters\":[{\"Key\":\"Stage\",\"Values\":[\"Test\"]}]}"}' -{{< /command >}} +``` ### Delete a Resource Group To delete a Resource Group, use the [`DeleteGroup`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_DeleteGroup.html) API. 
Run the following command to delete the Resource Group `my-resource-group`: -{{< command >}} -$ awslocal resource-groups delete-group \ +```bash +awslocal resource-groups delete-group \ --group-name my-resource-group -{{< /command >}} +``` diff --git a/src/content/docs/aws/services/route53.md b/src/content/docs/aws/services/route53.md index 8496ab94..454ebc37 100644 --- a/src/content/docs/aws/services/route53.md +++ b/src/content/docs/aws/services/route53.md @@ -1,6 +1,5 @@ --- title: "Route 53" -linkTitle: "Route 53" description: Get started with Route 53 on LocalStack persistence: supported tags: ["Free"] @@ -14,14 +13,14 @@ In addition to basic DNS functionality, Route 53 offers advanced features like h Route 53 integrates seamlessly with other AWS services, such as route traffic to CloudFront distributions, S3 buckets configured for static website hosting, EC2 instances, and more. LocalStack allows you to use the Route53 APIs in your local environment to create hosted zones and to manage DNS entries. -The supported APIs are available on our [API coverage page]({{< ref "coverage_route53" >}}), which provides information on the extent of Route53's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Route53's integration with LocalStack. LocalStack also integrates with its DNS server to respond to DNS queries with these domains. -{{< callout "note">}} +:::note LocalStack CLI does not publish port `53` anymore by default. Use the CLI flag `--host-dns` to expose the port on the host. This would be required if you want to reach out to Route53 domain names from your host machine, using the LocalStack DNS server. 
-{{< /callout >}}
+:::
 
 ## Getting started
 
@@ -35,12 +34,12 @@ We will demonstrate how to create a hosted zone and query the DNS record with th
 
 You can create a hosted zone for `example.com` using the [`CreateHostedZone`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_CreateHostedZone.html) API.
 Run the following command:
 
-{{< command >}}
-$ zone_id=$(awslocal route53 create-hosted-zone \
+```bash
+zone_id=$(awslocal route53 create-hosted-zone \
   --name example.com \
   --caller-reference r1 | jq -r '.HostedZone.Id')
-$ echo $zone_id
-{{< / command >}}
+echo $zone_id
+```
 
 The following output would be retrieved:
 
 ```bash
 {
@@ -53,11 +52,11 @@ The following output would be retrieved:
 
 You can now change the resource record sets for the hosted zone `example.com` using the [`ChangeResourceRecordSets`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html) API.
 Run the following command:
 
-{{< command >}}
-$ awslocal route53 change-resource-record-sets \
+```bash
+awslocal route53 change-resource-record-sets \
   --hosted-zone-id $zone_id \
   --change-batch 'Changes=[{Action=CREATE,ResourceRecordSet={Name=test.example.com,Type=A,ResourceRecords=[{Value=1.2.3.4}]}}]'
-{{< / command >}}
+```
 
 The following output would be retrieved:
 
 ```bash
 {
@@ -73,19 +72,19 @@ The following output would be retrieved:
 
 ## DNS resolution
 
-LocalStack Pro supports the ability to respond to DNS queries for your Route53 domain names, with our [integrated DNS server]({{< ref "user-guide/tools/dns-server" >}}).
+LocalStack Pro supports the ability to respond to DNS queries for your Route53 domain names, with our [integrated DNS server](/aws/tooling/dns-server).
 
-{{< callout >}}
-To follow the example below you must [configure your system DNS to use the LocalStack DNS server]({{< ref "user-guide/tools/dns-server#system-dns-configuration" >}}). 
-{{< /callout >}} +:::note +To follow the example below you must [configure your system DNS to use the LocalStack DNS server](/aws/tooling/dns-server#system-dns-configuration). +::: ### Query a DNS record You can query the DNS record using `dig` via the built-in DNS server by running the following command: -{{< command >}} -$ dig @localhost test.example.com -{{< / command >}} +```bash +dig @localhost test.example.com +``` The following output would be retrieved: @@ -101,7 +100,7 @@ test.example.com. 300 IN A 1.2.3.4 The DNS name `localhost.localstack.cloud`, along with its subdomains like `mybucket.s3.localhost.localstack.cloud`, serves an internal routing purpose within LocalStack. It facilitates communication between a LocalStack compute environment (such as a Lambda function) and the LocalStack APIs, as well as your containerised applications with the LocalStack APIs. -For example configurations, see the [Network Troubleshooting guide]({{< ref "references/network-troubleshooting/endpoint-url/#from-your-container" >}}). +For example configurations, see the [Network Troubleshooting guide](). For most use-cases, the default configuration of the internal LocalStack DNS name requires no modification. It functions seamlessly in typical scenarios. @@ -115,12 +114,12 @@ This can be accomplished using Route53. Create a hosted zone for the domain `localhost.localstack.cloud` using the [`CreateHostedZone` API](https://docs.aws.amazon.com/Route53/latest/APIReference/API_CreateHostedZone.html) API. 
Run the following command: -{{< command >}} -$ zone_id=$(awslocal route53 create-hosted-zone \ +```bash +zone_id=$(awslocal route53 create-hosted-zone \ --name localhost.localstack.cloud \ --caller-reference r1 | jq -r .HostedZone.Id) -$ echo $zone_id -{{< / command >}} +echo $zone_id +``` The following output would be retrieved: @@ -131,11 +130,11 @@ The following output would be retrieved: You can now use the [`ChangeResourceRecordSets`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html) API to create a record set for the domain `localhost.localstack.cloud` using the `zone_id` retrieved in the previous step. Run the following command to accomplish this: -{{< command >}} -$ awslocal route53 change-resource-record-sets \ +```bash +awslocal route53 change-resource-record-sets \ --hosted-zone-id $zone_id \ --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"localhost.localstack.cloud","Type":"A","ResourceRecords":[{"Value":"5.6.7.8"}]}},{"Action":"CREATE","ResourceRecordSet":{"Name":"*.localhost.localstack.cloud","Type":"A","ResourceRecords":[{"Value":"5.6.7.8"}]}}]}' -{{< / command >}} +``` The following output would be retrieved: @@ -151,10 +150,10 @@ The following output would be retrieved: You can now verify that the DNS name `localhost.localstack.cloud` and its subdomains resolve to the IP address: -{{< command >}} -$ dig @127.0.0.1 bucket1.s3.localhost.localstack.cloud -$ dig @127.0.0.1 localhost.localstack.cloud -{{< / command >}} +```bash +dig @127.0.0.1 bucket1.s3.localhost.localstack.cloud +dig @127.0.0.1 localhost.localstack.cloud +``` The following output would be retrieved: @@ -176,7 +175,7 @@ localhost.localstack.cloud. 300 IN A 5.6.7.8 The LocalStack Web Application provides a Route53 for creating hosted zones and to manage DNS entries. 
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Route53** under the **Analytics** section. -Route53 Resource Browser +![Route53 Resource Browser](/images/aws/route53-resource-browser.png) The Resource Browser allows you to perform the following actions: diff --git a/src/content/docs/aws/services/route53resolver.md b/src/content/docs/aws/services/route53resolver.md index dfebf69c..9ba53178 100644 --- a/src/content/docs/aws/services/route53resolver.md +++ b/src/content/docs/aws/services/route53resolver.md @@ -1,6 +1,5 @@ --- title: "Route 53 Resolver" -linkTitle: "Route 53 Resolver" description: Get started with Route 53 Resolver on LocalStack persistence: supported tags: ["Free"] @@ -13,7 +12,7 @@ Route 53 Resolver forwards DNS queries for domain names to the appropriate DNS s Route 53 Resolver can be used to resolve domain names between your VPC and your network, and to resolve domain names between your VPCs. LocalStack allows you to use the Route 53 Resolver endpoints in your local environment. -The supported APIs are available on our [API coverage page]({{< ref "coverage_route53resolver" >}}), which provides information on the extent of Route 53 Resolver's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Route 53 Resolver's integration with LocalStack. 
## Getting started

@@ -26,15 +25,15 @@ We will demonstrate how to create a resolver endpoint, list the endpoints, and d

Fetch the default VPC ID using the following command:

-{{< command >}}
-$ VPC_ID=$(awslocal ec2 describe-vpcs --query 'Vpcs[?IsDefault==`true`].VpcId' --output text)
-{{< / command >}}
+```bash
+VPC_ID=$(awslocal ec2 describe-vpcs --query 'Vpcs[?IsDefault==`true`].VpcId' --output text)
+```

-Fetch the default VPC's security group ID using the following command:
+Fetch the subnet IDs of the default VPC using the following command:

-{{< command >}}
-$ awslocal ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID --query 'Subnets[].SubnetId'
-{{< / command >}}
+```bash
+awslocal ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID --query 'Subnets[].SubnetId'
+```

You should see the following output:

@@ -49,34 +48,48 @@ You should see the following output:
 ]
 ```

-Choose two subnets from the list above and fetch the CIDR block of the subnets which tells you the range of IP addresses within it:
+Choose two subnets from the list above and fetch their CIDR blocks, which tell you the range of IP addresses within each subnet. Let's fetch the CIDR block of the subnet `subnet-957d6ba6`:
+
+```bash
+awslocal ec2 describe-subnets --subnet-ids subnet-957d6ba6 --query 'Subnets[*].CidrBlock'
+```
+
+The following output would be retrieved:

-{{< command >}}
-$ awslocal ec2 describe-subnets --subnet-ids subnet-957d6ba6 --query 'Subnets[*].CidrBlock'
-
+```bash
 [
     "172.31.16.0/20"
 ]
-
-$ awslocal ec2 describe-subnets --subnet-ids subnet-bdd58a47 --query 'Subnets[*].CidrBlock'
-
+```
+
+Similarly, fetch the CIDR block of the subnet `subnet-bdd58a47`:
+
+```bash
+awslocal ec2 describe-subnets --subnet-ids subnet-bdd58a47 --query 'Subnets[*].CidrBlock'
+```
+
+The following output would be retrieved:
+
+```bash
 [
     "172.31.0.0/20"
 ]
-
-{{< / command >}}
+```

Save the CIDR blocks of the subnets as you will need them later.
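The input file used when creating the resolver endpoint in the next step can be generated directly from these values. The following is a sketch only: the endpoint name and `CreatorRequestId` are arbitrary placeholders, and the `Ip` values are simply addresses picked from inside the two example CIDR blocks above — substitute your own subnet IDs, security group ID, and free IPs.

```bash
# Write the CreateResolverEndpoint input file referenced in the next step.
# All identifiers below are the example values from this guide.
cat > create-outbound-resolver-endpoint.json << 'EOF'
{
  "CreatorRequestId": "outbound-endpoint-001",
  "Direction": "OUTBOUND",
  "Name": "my-outbound-endpoint",
  "SecurityGroupIds": ["sg-39936e572e797b360"],
  "IpAddresses": [
    { "Ip": "172.31.16.10", "SubnetId": "subnet-957d6ba6" },
    { "Ip": "172.31.0.10", "SubnetId": "subnet-bdd58a47" }
  ]
}
EOF

# Sanity-check that the file is valid JSON before passing it to the CLI.
python3 -m json.tool create-outbound-resolver-endpoint.json > /dev/null && echo "valid JSON"
```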
Lastly fetch the security group ID of the default VPC: -{{< command >}} -$ awslocal ec2 describe-security-groups \ +```bash +awslocal ec2 describe-security-groups \ --filters Name=vpc-id,Values=$VPC_ID \ --query 'SecurityGroups[0].GroupId' - +``` + +The following output would be retrieved: + +```bash sg-39936e572e797b360 - -{{< / command >}} +``` Save the security group ID as you will need it later. @@ -114,10 +127,10 @@ Replace the `Ip` and `SubnetId` values with the CIDR blocks and subnet IDs you f You can now use the [`CreateResolverEndpoint`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_CreateResolverEndpoint.html) API to create an outbound resolver endpoint. Run the following command: -{{< command >}} -$ awslocal route53resolver create-resolver-endpoint \ +```bash +awslocal route53resolver create-resolver-endpoint \ --cli-input-json file://create-outbound-resolver-endpoint.json -{{< / command >}} +``` The following output would be retrieved: @@ -147,9 +160,9 @@ The following output would be retrieved: You can list the resolver endpoints using the [`ListResolverEndpoints`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_ListResolverEndpoints.html) API. Run the following command: -{{< command >}} -$ awslocal route53resolver list-resolver-endpoints -{{< / command >}} +```bash +awslocal route53resolver list-resolver-endpoints +``` The following output would be retrieved: @@ -182,10 +195,10 @@ The following output would be retrieved: You can delete the resolver endpoint using the [`DeleteResolverEndpoint`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_DeleteResolverEndpoint.html) API. 
Run the following command: -{{< command >}} -$ awslocal route53resolver delete-resolver-endpoint \ +```bash +awslocal route53resolver delete-resolver-endpoint \ --resolver-endpoint-id rslvr-out-5d61abaff9de06b99 -{{< / command >}} +``` Replace `rslvr-out-5d61abaff9de06b99` with the ID of the resolver endpoint you want to delete. @@ -195,7 +208,7 @@ The LocalStack Web Application provides a Route53 Resolver for creating and mana You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Route53** under the **Analytics** section. Navigate to the **Resolver Endpoints** tab to view the resolver endpoints. -Route53Resolver Resource Browser +![Route53Resolver Resource Browser](/images/aws/route53-resolver-resource-browser.png) The Resource Browser allows you to perform the following actions: From 9fc4db9191d56ee3bec304e529e43a21d829f511 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Thu, 19 Jun 2025 00:06:27 +0530 Subject: [PATCH 74/80] revamp s3 docs --- .../docs/aws/services/{s3.md => s3.mdx} | 129 ++++++++++-------- 1 file changed, 72 insertions(+), 57 deletions(-) rename src/content/docs/aws/services/{s3.md => s3.mdx} (84%) diff --git a/src/content/docs/aws/services/s3.md b/src/content/docs/aws/services/s3.mdx similarity index 84% rename from src/content/docs/aws/services/s3.md rename to src/content/docs/aws/services/s3.mdx index 118d83d4..430c4e5b 100644 --- a/src/content/docs/aws/services/s3.md +++ b/src/content/docs/aws/services/s3.mdx @@ -1,6 +1,5 @@ --- title: "Simple Storage Service (S3)" -linkTitle: "Simple Storage Service (S3)" description: Get started with Amazon S3 on LocalStack persistence: supported tags: ["Free"] @@ -14,13 +13,13 @@ Each object or file within S3 encompasses essential attributes such as a unique S3 can store unlimited objects, allowing you to store, retrieve, and manage your data in a highly adaptable and reliable manner. 
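Every object is addressed by the combination of a bucket and its unique key. A small sketch of how an `s3://` URI decomposes into those two parts — the bucket and key names here are illustrative:

```bash
# Split an S3 URI into its bucket and key using shell parameter expansion.
uri="s3://sample-bucket/photos/2024/image.jpg"

without_scheme="${uri#s3://}"     # drop the s3:// prefix
bucket="${without_scheme%%/*}"    # everything before the first slash
key="${without_scheme#*/}"        # everything after the first slash

echo "bucket: ${bucket}"
echo "key:    ${key}"
```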
LocalStack allows you to use the S3 APIs in your local environment to create new buckets, manage your S3 objects, and test your S3 configurations locally. -The supported APIs are available on our [API coverage page]({{< ref "coverage_s3" >}}), which provides information on the extent of S3's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of S3's integration with LocalStack. ## Getting started This guide is designed for users new to S3 and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script. -Start your LocalStack container using your [preferred method]({{< ref "getting-started/installation" >}}). +Start your LocalStack container using your preferred method. We will demonstrate how you can create an S3 bucket, manage S3 objects, and generate pre-signed URLs for S3 objects. ### Create an S3 bucket @@ -28,16 +27,16 @@ We will demonstrate how you can create an S3 bucket, manage S3 objects, and gene You can create an S3 bucket using the [`CreateBucket`](https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html) API. Run the following command to create an S3 bucket named `sample-bucket`: -{{< command >}} -$ awslocal s3api create-bucket --bucket sample-bucket -{{< / command >}} +```bash +awslocal s3api create-bucket --bucket sample-bucket +``` You can list your S3 buckets using the [`ListBuckets`](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-buckets.html) API. Run the following command to list your S3 buckets: -{{< command >}} -$ awslocal s3api list-buckets -{{< / command >}} +```bash +awslocal s3api list-buckets +``` On successful creation of the S3 bucket, you will see the following output: @@ -62,20 +61,20 @@ To upload a file to your S3 bucket, you can use the [`PutObject`](https://docs.a Download a random image from the internet and save it as `image.jpg`. 
Run the following command to upload the file to your S3 bucket: -{{< command >}} -$ awslocal s3api put-object \ +```bash +awslocal s3api put-object \ --bucket sample-bucket \ --key image.jpg \ --body image.jpg -{{< / command >}} +``` You can list the objects in your S3 bucket using the [`ListObjects`](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-objects.html) API. Run the following command to list the objects in your S3 bucket: -{{< command >}} -$ awslocal s3api list-objects \ +```bash +awslocal s3api list-objects \ --bucket sample-bucket -{{< / command >}} +``` If your image has been uploaded successfully, you will see the following output: @@ -99,14 +98,17 @@ If your image has been uploaded successfully, you will see the following output: Run the following command to upload a file named `index.html` to your S3 bucket: -{{< command >}} +```bash +awslocal s3api put-object --bucket sample-bucket --key index.html --body index.html +``` -$ awslocal s3api put-object --bucket sample-bucket --key index.html --body index.html +The following output would be retrieved: +```bash { "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"" } -{{< / command >}} +``` ### Generate a pre-signed URL for S3 object @@ -115,9 +117,9 @@ Pre-signed URL allows anyone to retrieve the S3 object with an HTTP GET request. Run the following command to generate a pre-signed URL for your S3 object: -{{< command >}} -$ awslocal s3 presign s3://sample-bucket/image.jpg -{{< / command >}} +```bash +awslocal s3 presign s3://sample-bucket/image.jpg +``` You will see a generated pre-signed URL for your S3 object. You can use [curl](https://curl.se/) or [`wget`](https://www.gnu.org/software/wget/) to retrieve the S3 object using the pre-signed URL. @@ -143,13 +145,13 @@ By default, most SDKs will try to use **Virtual-Hosted style** requests and prep However, if the endpoint is not prefixed by `s3.`, LocalStack will not be able to understand the request and it will most likely result in an error. 
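Concretely, the two addressing styles place the bucket name in different parts of the URL. A sketch using the bucket and object from the examples above, assuming the default edge port `4566`:

```bash
# Build the two URL forms for the same object.
bucket="sample-bucket"
key="image.jpg"
endpoint="s3.localhost.localstack.cloud:4566"

path_style="http://${endpoint}/${bucket}/${key}"          # bucket in the path
virtual_hosted="http://${bucket}.${endpoint}/${key}"      # bucket in the hostname

echo "Path style:           ${path_style}"
echo "Virtual-hosted style: ${virtual_hosted}"
```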
You can either change the endpoint to an S3-specific one, or configure your SDK to use **Path style** requests instead. -Check out our [SDK documentation]({{< ref "sdks" >}}) to learn how you can configure AWS SDKs to access LocalStack and S3. +Check out our [SDK documentation](/aws/integrations/aws-sdks) to learn how you can configure AWS SDKs to access LocalStack and S3. -{{< callout "tip" >}} +:::note While using [AWS SDKs](https://aws.amazon.com/developer/tools/#SDKs), you would need to configure the `ForcePathStyle` parameter to `true` in the S3 client configuration to use **Path style** requests. If you want to use virtual host addressing of buckets, you can remove `ForcePathStyle` from the configuration. -The `ForcePathStyle` parameter name can vary between SDK and languages, please check our [SDK documentation]({{< ref "sdks" >}}) -{{< /callout >}} +The `ForcePathStyle` parameter name can vary between SDK and languages, please check our [SDK documentation](/aws/integrations/aws-sdks) +::: If your endpoint is not prefixed with `s3.`, all requests are treated as **Path style** requests. Using the `s3.localhost.localstack.cloud` endpoint URL is recommended for all requests aimed at S3. @@ -168,12 +170,17 @@ Follow this step-by-step guide to configure CORS rules on your S3 bucket. Run the following command on your terminal to create your S3 bucket: -{{< command >}} -$ awslocal s3api create-bucket --bucket cors-bucket +```bash +awslocal s3api create-bucket --bucket cors-bucket +``` + +The following output would be retrieved: + +```bash { "Location": "/cors-bucket" } -{{< / command >}} +``` Next, create a JSON file with the CORS configuration. The file should have the following format: @@ -191,22 +198,22 @@ The file should have the following format: } ``` -{{< callout >}} +:::note Note that this configuration is a sample, and you can tailor it to fit your needs better, for example, restricting the **AllowedHeaders** to specific ones. 
-{{< /callout >}} +::: Save the file locally with a name of your choice, for example, `cors-config.json`. Run the following command to apply the CORS configuration to your S3 bucket: -{{< command >}} -$ awslocal s3api put-bucket-cors --bucket cors-bucket --cors-configuration file://cors-config.json -{{< / command >}} +```bash +awslocal s3api put-bucket-cors --bucket cors-bucket --cors-configuration file://cors-config.json +``` You can further verify that the CORS configuration was applied successfully by running the following command: -{{< command >}} -$ awslocal s3api get-bucket-cors --bucket cors-bucket -{{< / command >}} +```bash +awslocal s3api get-bucket-cors --bucket cors-bucket +``` On applying the configuration successfully, you should see the same JSON configuration file you created earlier. Your S3 bucket is configured to allow cross-origin resource sharing, and if you try to send requests from your local application running on [localhost:3000](http://localhost:3000), they should be successful. @@ -233,10 +240,10 @@ We can edit the JSON file `cors-config.json` you created earlier with the follow You can now run the same steps as before to update the CORS configuration and verify if it is applied correctly: -{{< command >}} -$ awslocal s3api put-bucket-cors --bucket cors-bucket --cors-configuration file://cors-config.json -$ awslocal s3api get-bucket-cors --bucket cors-bucket -{{< / command >}} +```bash +awslocal s3api put-bucket-cors --bucket cors-bucket --cors-configuration file://cors-config.json +awslocal s3api get-bucket-cors --bucket cors-bucket +``` You can try again to upload files in your bucket from the [LocalStack Web Application](https://app.localstack.cloud) and it should work. @@ -245,17 +252,22 @@ You can try again to upload files in your bucket from the [LocalStack Web Applic LocalStack provides a Docker image for S3, which you can use to run S3 in a Docker container. 
The image is available on [Docker Hub](https://hub.docker.com/r/localstack/localstack) and can be pulled using the following command:

-{{< command >}}
-$ docker pull localstack/localstack:s3-latest
-{{< /command >}}
+```bash
+docker pull localstack/localstack:s3-latest
+```

The S3 Docker image only supports the S3 APIs and does not include other services like Lambda, DynamoDB, etc.
You can run the S3 Docker image using any of the following commands:

-{{< tabpane lang="shell" >}}
-{{< tab header="LocalStack CLI" lang="shell" >}}
+import { Tabs, TabItem } from '@astrojs/starlight/components';
+
+<Tabs>
+<TabItem label="LocalStack CLI">
+```bash
IMAGE_NAME=localstack/localstack:s3-latest localstack start
-{{< /tab >}}
-{{< tab header="Docker Compose" lang="yml" >}}
+```
+</TabItem>
+<TabItem label="Docker Compose">
+```yaml
services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
@@ -267,22 +279,25 @@ services:
     volumes:
       - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
       - "/var/run/docker.sock:/var/run/docker.sock"
-{{< /tab >}}
-{{< tab header="Docker" lang="shell" >}}
+```
+</TabItem>
+<TabItem label="Docker">
+```bash
docker run \
  --rm \
  -p 4566:4566 \
  localstack/localstack:s3-latest
-{{< /tab >}}
-{{< /tabpane >}}
+```
+</TabItem>
+</Tabs>

The S3 Docker image has similar parity with the S3 APIs supported by LocalStack Docker image.
-You can use similar [configuration options]({{< ref "configuration/#s3" >}}) to alter the behaviour of the S3 Docker image, such as `DEBUG` or `S3_SKIP_SIGNATURE_VALIDATION`.
+You can use similar [configuration options](/aws/capabilities/config/configuration/#s3) to alter the behaviour of the S3 Docker image, such as `DEBUG` or `S3_SKIP_SIGNATURE_VALIDATION`.

-{{< callout >}}
+:::note
The S3 Docker image does not support persistence, and all data is lost when the container is stopped.
To use persistence or save the container state as a Cloud Pod, you need to use the [`localstack/localstack-pro`](https://hub.docker.com/r/localstack/localstack-pro) image.
-{{< /callout >}} +::: ## SSE-C Encryption @@ -303,10 +318,10 @@ However, LocalStack does not support the actual encryption and decryption of obj ## Resource Browser -The LocalStack Web Application provides a [Resource Browser]({{< ref "resource-browser" >}}) for managing S3 buckets & configurations. +The LocalStack Web Application provides a [Resource Browser](/aws/capabilities/web-app/resource-browser) for managing S3 buckets & configurations. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **S3** under the **Storage** section. -S3 Resource Browser +![S3 Resource Browser](/images/aws/s3-resource-browser.png) The Resource Browser allows you to perform the following actions: @@ -324,4 +339,4 @@ The following code snippets and sample applications provide practical examples o - [Serverless Transcription application using Transcribe, S3, Lambda, SQS, and SES](https://github.com/localstack/sample-transcribe-app) - [Query data in S3 Bucket with Amazon Athena, Glue Catalog & CloudFormation](https://github.com/localstack/query-data-s3-athena-glue-sample) - [Serverless Image Resizer with Lambda, S3, SNS, and SES](https://github.com/localstack/serverless-image-resizer) -- [Host a static website locally using Simple Storage Service (S3) and Terraform with LocalStack]({{< ref "s3-static-website-terraform" >}}) +- [Host a static website locally using Simple Storage Service (S3) and Terraform with LocalStack]() From d2897653f24f2aca305bf2c407f8ef254af04a32 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Thu, 19 Jun 2025 00:07:45 +0530 Subject: [PATCH 75/80] revamp sagemaker --- src/content/docs/aws/services/sagemaker.md | 51 +++++++++++----------- 1 file changed, 25 insertions(+), 26 deletions(-) diff --git a/src/content/docs/aws/services/sagemaker.md b/src/content/docs/aws/services/sagemaker.md index 6ff02246..5bafd5ab 100644 --- 
a/src/content/docs/aws/services/sagemaker.md +++ b/src/content/docs/aws/services/sagemaker.md @@ -1,6 +1,5 @@ --- title: "SageMaker" -linkTitle: "SageMaker" description: Get started with SageMaker on LocalStack tags: ["Ultimate"] --- @@ -11,13 +10,13 @@ Amazon SageMaker is a fully managed service provided by Amazon Web Services (AWS It streamlines the machine learning development process, reduces the time and effort required to build and deploy models, and offers the scalability and flexibility needed for large-scale machine learning projects in the AWS cloud. LocalStack provides a local version of the SageMaker API, which allows running jobs to create machine learning models (e.g., using PyTorch) and to deploy them. -The supported APIs are available on our [API coverage page]({{< ref "coverage_sagemaker" >}}), which provides information on the extent of Sagemaker's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Sagemaker's integration with LocalStack. -{{< callout >}} +:::note LocalStack supports custom-built models in SageMaker. You can push your Docker image to LocalStack's Elastic Container Registry (ECR) and use it in SageMaker. LocalStack will use the local ECR image to create a SageMaker model. -{{< /callout >}} +::: ## Getting started @@ -29,46 +28,46 @@ We will demonstrate an application illustrating running a machine learning job u - Creates a SageMaker Endpoint for accessing the model - Invokes the endpoint directly on the container via Boto3 -{{< callout >}} +:::note SageMaker is a fairly comprehensive API for now. Currently a subset of the functionality is provided locally, but new features are being added on a regular basis. 
-{{< /callout >}} +::: ### Download the sample application You can download the sample application from [GitHub](https://github.com/localstack/localstack-pro-samples/tree/master/sagemaker-inference) or by running the following commands: -{{< command >}} -$ mkdir localstack-samples && cd localstack-samples -$ git init -$ git remote add origin -f git@github.com:localstack/localstack-pro-samples.git -$ git config core.sparseCheckout true -$ echo sagemaker-inference >> .git/info/sparse-checkout -$ git pull origin master -{{< /command >}} +```bash +mkdir localstack-samples && cd localstack-samples +git init +git remote add origin -f git@github.com:localstack/localstack-pro-samples.git +git config core.sparseCheckout true +echo sagemaker-inference >> .git/info/sparse-checkout +git pull origin master +``` ### Set up the environment After downloading the sample application, you can set up your Docker Client to pull the AWS Deep Learning images by running the following command: -{{< command >}} -$ aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com -{{< /command >}} +```bash +aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com +``` Since the images are quite large (several gigabytes), it's a good idea to pull the images using Docker in advance. -{{< command >}} -$ docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:1.5.0-cpu-py3 -{{< /command >}} +```bash +docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:1.5.0-cpu-py3 +``` ### Run the sample application Start your LocalStack container using your preferred method. 
Run the sample application by executing the following command:

-{{< command >}}
-$ python3 main.,py
-{{< /command >}}
+```bash
+python3 main.py
+```

You should see the following output:

@@ -92,19 +91,19 @@ You can also invoke a serverless endpoint, by navigating to `main.py` and uncomm

## Resource Browser

-The LocalStack Web Application provides a [Resource Browser]({{< ref "resource-browser" >}}) for managing Lambda resources.
+The LocalStack Web Application provides a [Resource Browser](/aws/capabilities/web-app/resource-browser) for managing Sagemaker resources.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Sagemaker** under the **Compute** section.

-The Resource Browser displays Models, Endpoint Configurations and Endpoint.
+The Resource Browser displays Models, Endpoint Configurations, and Endpoints.
You can click on individual resources to view their details.

-Sagemaker Resource Browser
+![Sagemaker Resource Browser](/images/aws/sagemaker-resource-browser.png)

The Resource Browser allows you to perform the following actions:

-- **Create and Remove Models**: You can remove existing model and create a new model with the required configuration
+- **Create and Remove Models**: You can remove an existing model and create a new model with the required configuration.

-  Sagemaker Resource Browser
+  ![Sagemaker Create Model](/images/aws/sagemaker-create-model.png)

-- **Endpoint Configurations & Endpoints**: You can create endpoints from the resource browser that hosts your deployed machine learning model.
-  You can also create endpoint configuration that specifies the type and number of instances that will be used to serve your model on an endpoint.
+- **Endpoint Configurations & Endpoints**: You can create endpoints from the resource browser that host your deployed machine learning model.
+  You can also create an endpoint configuration that specifies the type and number of instances that will be used to serve your model on an endpoint.
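Once an endpoint exists, it can also be invoked from the CLI. The sketch below only assembles and prints the command: the endpoint name `my-endpoint` and the JSON payload are placeholder assumptions, and actually executing it requires a running LocalStack with a deployed model.

```bash
# Assemble an invoke call for a SageMaker endpoint (dry run — printed, not executed).
endpoint_name="my-endpoint"
payload='{"inputs": [1.0, 2.0, 3.0]}'

cmd="awslocal sagemaker-runtime invoke-endpoint --endpoint-name ${endpoint_name} --content-type application/json --body '${payload}' output.json"
echo "$cmd"

# Uncomment to execute against a running LocalStack with a deployed model:
# eval "$cmd"
```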
From 4bab28140bd91632a3d787f23f00a5f4e4d143f6 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Thu, 19 Jun 2025 00:09:56 +0530 Subject: [PATCH 76/80] do more --- src/content/docs/aws/services/scheduler.md | 40 ++++---- .../docs/aws/services/secretsmanager.md | 75 ++++++++------- .../docs/aws/services/serverlessrepo.md | 28 +++--- .../docs/aws/services/servicediscovery.md | 94 ++++++++++--------- 4 files changed, 119 insertions(+), 118 deletions(-) diff --git a/src/content/docs/aws/services/scheduler.md b/src/content/docs/aws/services/scheduler.md index 09ebf360..d81aeeaa 100644 --- a/src/content/docs/aws/services/scheduler.md +++ b/src/content/docs/aws/services/scheduler.md @@ -1,6 +1,5 @@ --- title: "EventBridge Scheduler" -linkTitle: "EventBridge Scheduler" description: Get started with EventBridge Scheduler on LocalStack tags: ["Free"] --- @@ -12,7 +11,7 @@ You can use EventBridge Scheduler to create schedules that run at a specific tim You can also use EventBridge Scheduler to create schedules that run within a flexible time window. LocalStack allows you to use the Scheduler APIs in your local environment to create and run schedules. -The supported APIs are available on our [API coverage page]({{< ref "coverage_scheduler" >}}), which provides information on the extent of EventBridge Scheduler's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of EventBridge Scheduler's integration with LocalStack. ## Getting started @@ -26,18 +25,18 @@ We will demonstrate how you can create a new schedule, list all schedules, and t You can create a new SQS queue using the [`CreateQueue`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html) API. 
Run the following command to create a new SQS queue: -{{< command >}} -$ awslocal sqs create-queue --queue-name local-notifications -{{< /command >}} +```bash +awslocal sqs create-queue --queue-name local-notifications +``` You can fetch the Queue ARN using the [`GetQueueAttributes`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueAttributes.html) API. Run the following command to fetch the Queue ARN by specifying the Queue URL: -{{< command >}} -$ awslocal sqs get-queue-attributes \ +```bash +awslocal sqs get-queue-attributes \ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/local-notifications \ --attribute-names All -{{< /command >}} +``` Save the Queue ARN for later use. @@ -46,13 +45,13 @@ Save the Queue ARN for later use. You can create a new schedule using the [`CreateSchedule`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_CreateSchedule.html) API. Run the following command to create a new schedule: -{{< command >}} -$ awslocal scheduler create-schedule \ +```bash +awslocal scheduler create-schedule \ --name sqs-templated-schedule \ --schedule-expression 'rate(5 minutes)' \ --target '{"RoleArn": "arn:aws:iam::000000000000:role/schedule-role", "Arn":"arn:aws:sqs:us-east-1:000000000000:local-notifications", "Input": "test" }' \ --flexible-time-window '{ "Mode": "OFF"}' -{{< /command >}} +``` The following output is displayed: @@ -67,9 +66,9 @@ The following output is displayed: You can list all schedules using the [`ListSchedules`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_ListSchedules.html) API. 
Run the following command to list all schedules:

-{{< command >}}
-$ awslocal scheduler list-schedules
-{{< /command >}}
+```bash
+awslocal scheduler list-schedules
+```

The following output is displayed:

@@ -96,19 +95,18 @@ The following output is displayed:

You can tag a schedule using the [`TagResource`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_TagResource.html) API.
Run the following command to tag a schedule:

-{{< command >}}
-$ awslocal scheduler tag-resource \
+```bash
+awslocal scheduler tag-resource \
     --resource-arn arn:aws:scheduler:us-east-1:000000000000:schedule/default/sqs-templated-schedule \
     --tags Key=Name,Value=Test
-{{< /command >}}
+```

You can view the tags associated with a schedule using the [`ListTagsForResource`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_ListTagsForResource.html) API.
Run the following command to list the tags associated with a schedule:

-{{< command >}}
-$ awslocal scheduler list-tags-for-resource \
-    --resource-arn arn:aws:scheduler:us-east-1:000000000000:schedule/default/sqs-templated-schedule
-{{< /command >}}
+```bash
+awslocal scheduler list-tags-for-resource \
+    --resource-arn arn:aws:scheduler:us-east-1:000000000000:schedule/default/sqs-templated-schedule
+```

The following output is displayed:

diff --git a/src/content/docs/aws/services/secretsmanager.md b/src/content/docs/aws/services/secretsmanager.md
index bc05e433..16e93d04 100644
--- a/src/content/docs/aws/services/secretsmanager.md
+++ b/src/content/docs/aws/services/secretsmanager.md
@@ -1,6 +1,5 @@
 ---
 title: "Secrets Manager"
-linkTitle: "Secrets Manager"
 description: Get started with Secrets Manager on LocalStack
 persistence: supported
 tags: ["Free"]
@@ -13,7 +12,7 @@ Secrets Manager integrates seamlessly with AWS services, making it easier to man
 Secrets Manager supports automatic secret rotation, replacing long-term secrets with short-term ones to mitigate the risk of compromise without requiring application updates.
LocalStack allows you to use the Secrets Manager APIs in your local environment to manage, retrieve, and rotate secrets. -The supported APIs are available on our [API coverage page]({{< ref "coverage_secretsmanager" >}}), which provides information on the extent of Secrets Manager's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Secrets Manager's integration with LocalStack. ## Getting started @@ -26,52 +25,52 @@ We will demonstrate how to create a secret, get the secret value, and rotate the Before your create a secret, create a file named `secrets.json` and add the following content: -{{}} -$ touch secrets.json -$ cat > secrets.json << EOF +```bash +touch secrets.json +cat > secrets.json << EOF { "username": "admin", "password": "password" } EOF -{{}} +``` You can now create a secret using the [`CreateSecret`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_CreateSecret.html) API. Execute the following command to create a secret named `test-secret`: -{{}} -$ awslocal secretsmanager create-secret \ +```bash +awslocal secretsmanager create-secret \ --name test-secret \ --description "LocalStack Secret" \ --secret-string file://secrets.json -{{}} +``` Upon successful execution, the output will provide you with the ARN of the newly created secret. This identifier will be useful for further operations or integrations. The following output would be retrieved: -{{}} +```bash { "ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:test-secret-pyfjVP", "Name": "test-secret", "VersionId": "a50c6752-3343-4eb0-acf3-35c74f00f707" } -{{}} +``` ### Describe the secret To retrieve the details of the secret you created earlier, you can use the [`DescribeSecret`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_DescribeSecret.html) API. 
Execute the following command: -{{}} -$ awslocal secretsmanager describe-secret \ +```bash +awslocal secretsmanager describe-secret \ --secret-id test-secret -{{}} +``` The following output would be retrieved: -{{}} +```bash { "ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:test-secret-pyfjVP", "Name": "test-secret", @@ -84,29 +83,29 @@ The following output would be retrieved: }, "CreatedDate": 1692882479.857329 } -{{}} +``` You can also get a list of the secrets available in your local environment that have **Secret** in the name using the [`ListSecrets`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_ListSecrets.html) API. Execute the following command: -{{}} -$ awslocal secretsmanager list-secrets \ +```bash +awslocal secretsmanager list-secrets \ --filters Key=name,Values=Secret -{{}} +``` ### Get the secret value To retrieve the value of the secret you created earlier, you can use the [`GetSecretValue`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html) API. Execute the following command: -{{}} -$ awslocal secretsmanager get-secret-value \ +```bash +awslocal secretsmanager get-secret-value \ --secret-id test-secret -{{}} +``` The following output would be retrieved: -{{}} +```bash { "ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:test-secret-pyfjVP", "Name": "test-secret", @@ -117,16 +116,16 @@ The following output would be retrieved: ], "CreatedDate": 1692882479.857329 } -{{}} +``` You can tag your secret using the [`TagResource`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_TagResource.html) API. Execute the following command: -{{}} -$ awslocal secretsmanager tag-resource \ +```bash +awslocal secretsmanager tag-resource \ --secret-id test-secret \ --tags Key=Environment,Value=Development -{{}} +``` ### Rotate the secret @@ -136,15 +135,15 @@ You can copy the code from a [Secrets Manager template](https://docs.aws.amazon. 
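A rotation function must handle the four steps that Secrets Manager passes in the invocation event (`Step`, `SecretId`, and `ClientRequestToken`). The skeleton below is a minimal stand-in, not the actual AWS template code — the real templates linked above additionally generate, stage, test, and finalize the new secret value:

```bash
# Write a minimal rotation-handler skeleton to lambda_function.py,
# the file zipped and deployed in the next step.
cat > lambda_function.py << 'EOF'
def lambda_handler(event, context):
    step = event["Step"]          # createSecret | setSecret | testSecret | finishSecret
    token = event["ClientRequestToken"]
    secret_id = event["SecretId"]

    if step == "createSecret":
        pass  # generate and stage a new secret version (AWSPENDING)
    elif step == "setSecret":
        pass  # push the staged value to the target service
    elif step == "testSecret":
        pass  # verify the staged value works
    elif step == "finishSecret":
        pass  # promote the staged version to AWSCURRENT
    else:
        raise ValueError(f"Unknown rotation step: {step}")
EOF

python3 -m py_compile lambda_function.py && echo "lambda_function.py compiles"
```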
Zip the Lambda function and create a Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html) API.
Execute the following command:

-{{}}
-$ zip my-function.zip lambda_function.py
-$ awslocal lambda create-function \
+```bash
+zip my-function.zip lambda_function.py
+awslocal lambda create-function \
     --function-name my-rotation-function \
     --runtime python3.9 \
     --zip-file fileb://my-function.zip \
     --handler my-handler \
     --role arn:aws:iam::000000000000:role/service-role/rotation-lambda-role
-{{}}
+```

You can now set a resource policy on the Lambda function to allow Secrets Manager to invoke it using the [`AddPermission`](https://docs.aws.amazon.com/lambda/latest/dg/API_AddPermission.html) API.

-Please note that this is not required with the default LocalStack settings, since
+Please note that this is not required with the default LocalStack settings, since LocalStack does not enforce IAM policies by default.

Execute the following command:

-{{}}
-$ awslocal lambda add-permission \
+```bash
+awslocal lambda add-permission \
     --function-name my-rotation-function \
     --action lambda:InvokeFunction \
     --statement-id SecretsManager \
     --principal secretsmanager.amazonaws.com
-{{}}
+```

You can now create a rotation schedule for the secret using the [`RotateSecret`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_RotateSecret.html) API.
Execute the following command:

-{{}}
-$ awslocal secretsmanager rotate-secret \
-    --secret-id MySecret \
+```bash
+awslocal secretsmanager rotate-secret \
+    --secret-id test-secret \
     --rotation-lambda-arn arn:aws:lambda:us-east-1:000000000000:function:my-rotation-function \
-    --rotation-rules "{\"ScheduleExpression\": \"cron(0 16 1,15 *?*)\", \"Duration\": \"2h\"}"
-{{}}
+    --rotation-rules "{\"ScheduleExpression\": \"cron(0 16 1,15 * ? *)\", \"Duration\": \"2h\"}"
+```

## Resource Browser

The LocalStack Web Application provides a Resource Browser for managing secrets in your local environment.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Secrets Manager** under the **Security Identity Compliance** section. -Secrets Manager Resource Browser +![Secrets Manager Resource Browser](/images/aws/secrets-manager-resource-browser.png)

The Resource Browser allows you to perform the following actions: diff --git a/src/content/docs/aws/services/serverlessrepo.md b/src/content/docs/aws/services/serverlessrepo.md index c7f3f860..93ab1d1c 100644 --- a/src/content/docs/aws/services/serverlessrepo.md +++ b/src/content/docs/aws/services/serverlessrepo.md @@ -1,8 +1,6 @@ --- title: "Serverless Application Repository" -linkTitle: "Serverless Application Repository" -description: > - Get started with Serverless Application Repository on LocalStack +description: Get started with Serverless Application Repository on LocalStack tags: ["Ultimate"] --- @@ -13,7 +11,7 @@ Using Serverless Application Repository, developers can build & publish applicat Serverless Application Repository provides a user-friendly interface to search, filter, and browse through a diverse catalog of serverless applications. LocalStack allows you to use the Serverless Application Repository APIs in your local environment to create, update, delete, and list serverless applications and components. -The supported APIs are available on our [API coverage page]({{< ref "coverage_serverlessrepo" >}}), which provides information on the extent of Serverless Application Repository's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of Serverless Application Repository's integration with LocalStack. ## Getting started @@ -26,9 +24,9 @@ We will demonstrate how to create a SAM application that comprises a Hello World To create a sample SAM application using the `samlocal` CLI, execute the following command: -{{< command >}} -$ samlocal init --runtime python3.9 -{{< /command >}} +```bash +samlocal init --runtime python3.9 +``` This command downloads a sample SAM application template and generates a `template.yml` file in the current directory. The template includes a Lambda function and an API Gateway endpoint that supports a `GET` operation. 
@@ -53,11 +51,11 @@ Metadata:
 
 Once the Metadata section is added, run the following command to create the Lambda function deployment package and the packaged SAM template:
 
-{{< command >}}
+```bash
 samlocal package \
     --template-file template.yaml \
     --output-template-file packaged.yaml
-{{< /command >}}
+```
 
 This command generates a `packaged.yaml` file in the current directory containing the packaged SAM template.
 The packaged template will be similar to the original template file, but it will now include a `CodeUri` property for the Lambda function, as shown in the example below:
 
@@ -74,9 +72,9 @@ Resources:
 
 To retrieve the Application ID for your SAM application, you can utilize the [`awslocal`](https://github.com/localstack/awscli-local) CLI by running the following command:
 
-{{< command >}}
+```bash
 awslocal serverlessrepo list-applications
-{{< /command >}}
+```
 
 In the output, you will observe the `ApplicationId` property, which is the Application ID for your SAM application, along with other properties such as the `Author`, `Description`, `Name`, `SpdxLicenseId`, and `Version` providing further details about your application.
 
@@ -84,20 +82,20 @@ In the output, you will observe the `ApplicationId` property in the output, whic
 
 To publish your application to the Serverless Application Repository, execute the following command:
 
-{{< command >}}
+```bash
 samlocal publish \
     --template packaged.yaml \
     --region us-east-1
-{{< /command >}}
+```
 
 ### Delete the SAM application
 
 To remove a SAM application from the Serverless Application Repository, you can use the following command:
 
-{{< command >}}
+```bash
 awslocal serverlessrepo delete-application \
     --application-id <application-id>
-{{< /command >}}
+```
 
 Replace `<application-id>` with the Application ID of your SAM application that you retrieved in the previous step.
diff --git a/src/content/docs/aws/services/servicediscovery.md b/src/content/docs/aws/services/servicediscovery.md
index acc5c8f0..078714cf 100644
--- a/src/content/docs/aws/services/servicediscovery.md
+++ b/src/content/docs/aws/services/servicediscovery.md
@@ -1,8 +1,6 @@
 ---
 title: "Service Discovery"
-linkTitle: "Service Discovery"
-description: >
-  Get started with Service Discovery on LocalStack
+description: Get started with Service Discovery on LocalStack
 tags: ["Ultimate"]
 ---
 
@@ -13,7 +11,7 @@ Service Discovery allows for a centralized mechanism for dynamically registering
 Service discovery uses Cloud Map API actions to manage HTTP and DNS namespaces for services, enabling automatic registration and discovery of services running in the cluster.
 
 LocalStack allows you to use the Service Discovery APIs in your local environment to monitor and manage your services across various environments and network topologies.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_servicediscovery" >}}), which provides information on the extent of Service Discovery's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Service Discovery's integration with LocalStack.
 
 ## Getting Started
 
@@ -29,11 +27,11 @@ This API allows you to define a custom name for your namespace and specify the V
 
 To create the private Cloud Map service discovery namespace, execute the following command:
 
-{{< command >}}
-$ awslocal servicediscovery create-private-dns-namespace \
+```bash
+awslocal servicediscovery create-private-dns-namespace \
     --name tutorial \
     --vpc <vpc-id>
-{{< /command >}}
+```
 
 Ensure that you replace `<vpc-id>` with the actual ID of the VPC you intend to use for the namespace.
 Upon running this command, you will receive an output containing an `OperationId`.
@@ -41,10 +39,10 @@ This identifier can be used to check the status of the operation.
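Namespace creation is asynchronous, so in application code the status check is typically wrapped in a small polling loop. Below is a sketch: the `get_operation` callable stands in for the Cloud Map `GetOperation` call (e.g. `lambda op_id: client.get_operation(OperationId=op_id)` with boto3); the helper itself is illustrative and not part of any SDK.

```python
import time

def wait_for_operation(get_operation, operation_id, timeout=30.0, interval=1.0):
    """Poll an asynchronous Cloud Map operation until it reaches a terminal state."""
    deadline = time.monotonic() + timeout
    while True:
        # GetOperation responses wrap the details in an "Operation" object.
        operation = get_operation(operation_id)["Operation"]
        status = operation["Status"]
        if status == "SUCCESS":
            return operation
        if status == "FAIL":
            raise RuntimeError(operation.get("ErrorMessage", "operation failed"))
        if time.monotonic() >= deadline:
            raise TimeoutError(f"operation {operation_id} still {status} after {timeout}s")
        time.sleep(interval)
```

On success, the returned operation carries the `Targets` map, including the `NAMESPACE` ID used in the next step.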
To verify the status of the operation, execute the following command:
 
-{{< command >}}
-$ awslocal servicediscovery get-operation \
+```bash
+awslocal servicediscovery get-operation \
     --operation-id <operation-id>
-{{< /command >}}
+```
 
 The output will consist of a `NAMESPACE` ID, which you will need to create a service within the namespace.
 
@@ -55,12 +53,12 @@ This service represents a specific component or resource in your application.
 
 To create a service within the namespace, execute the following command:
 
-{{< command >}}
-$ awslocal servicediscovery create-service \
+```bash
+awslocal servicediscovery create-service \
     --name myapplication \
     --dns-config "NamespaceId="<namespace-id>",DnsRecords=[{Type="A",TTL="300"}]" \
     --health-check-custom-config FailureThreshold=1
-{{< /command >}}
+```
 
 Upon successful execution, the output will provide you with the Service ID and the Amazon Resource Name (ARN) of the newly created service.
 These identifiers will be useful for further operations or integrations.
 
@@ -72,10 +70,10 @@ To integrate the service you created earlier with an ECS (Elastic Container Serv
 
 Start by creating an ECS cluster using the [`CreateCluster`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CreateCluster.html) API.
 Execute the following command:
 
-{{< command >}}
-$ awslocal ecs create-cluster \
+```bash
+awslocal ecs create-cluster \
     --cluster-name tutorial
-{{< /command >}}
+```
 
 ### Register a task definition
 
@@ -120,10 +118,10 @@ Create a file named `fargate-task.json` and add the following content:
 
 Register the task definition using the [`RegisterTaskDefinition`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RegisterTaskDefinition.html) API.
Execute the following command:
 
-{{< command >}}
-$ awslocal ecs register-task-definition \
+```bash
+awslocal ecs register-task-definition \
     --cli-input-json file://fargate-task.json
-{{< /command >}}
+```
 
 ### Create an ECS service
 
@@ -131,20 +129,26 @@ To create an ECS service, you will need to retrieve the `securityGroups` and `su
 You can obtain this information by using the [`DescribeVpcs`](https://docs.aws.amazon.com/vpc/latest/APIReference/API_DescribeVpcs.html) API.
 Execute the following command to retrieve the details of all VPCs:
 
-{{< command >}}
-$ awslocal ec2 describe-vpcs
-{{< /command >}}
+```bash
+awslocal ec2 describe-vpcs
+```
 
 The output will include a list of VPCs.
 Locate the VPC that was used to create the Cloud Map namespace and make a note of its `VpcId` value.
 
 Next, execute the following commands to retrieve the `securityGroups` and `subnets` associated with the VPC:
 
-{{< command >}}
-$ awslocal ec2 describe-security-groups --filters Name=vpc-id,Values=vpc-<id> --query 'SecurityGroups[*].[GroupId, GroupName]' --output text
+```bash
+awslocal ec2 describe-security-groups \
+    --filters Name=vpc-id,Values=vpc-<id> \
+    --query 'SecurityGroups[*].[GroupId, GroupName]' \
+    --output text
 
-$ awslocal ec2 describe-subnets --filters Name=vpc-id,Values=vpc-<id> --query 'Subnets[*].[SubnetId, CidrBlock]' --output text
-{{< /command >}}
+awslocal ec2 describe-subnets \
+    --filters Name=vpc-id,Values=vpc-<id> \
+    --query 'Subnets[*].[SubnetId, CidrBlock]' \
+    --output text
+```
 
 Replace `vpc-<id>` with the actual `VpcId` value of the VPC you identified earlier.
 Make a note of the `GroupId` and `SubnetId` values.
 
@@ -177,20 +181,20 @@ Create a new file named `ecs-service-discovery.json` and add the following conte
 
 Create your ECS service using the [`CreateService`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CreateService.html) API.
Execute the following command:
 
-{{< command >}}
-$ awslocal ecs create-service \
+```bash
+awslocal ecs create-service \
     --cli-input-json file://ecs-service-discovery.json
-{{< /command >}}
+```
 
 ### Verify the service
 
 You can use the Service Discovery service ID to verify that the service was created successfully.
 Execute the following command:
 
-{{< command >}}
-$ awslocal servicediscovery list-instances \
+```bash
+awslocal servicediscovery list-instances \
     --service-id <service-id>
-{{< /command >}}
+```
 
 The output will consist of the resource ID, and you can further use the [`DiscoverInstances`](https://docs.aws.amazon.com/cloud-map/latest/api/API_DiscoverInstances.html) API.
 This API allows you to query the DNS records associated with the service and perform various operations.
 
@@ -212,31 +216,33 @@ Both `list-services` and `list-namespaces` support `EQ` (default condition if no
 
 Both conditions only support a single value to match by.
 The following examples demonstrate how to use filters with these operations:
 
-{{< command >}}
-$ awslocal servicediscovery list-namespaces \
+```bash
+awslocal servicediscovery list-namespaces \
     --filters "Name=HTTP_NAME,Values=['example-namespace'],Condition=EQ"
-{{< /command >}}
+```
 
-{{< command >}}
-$ awslocal servicediscovery list-services \
+```bash
+awslocal servicediscovery list-services \
     --filters "Name=NAMESPACE_ID,Values=['id_to_match']"
-{{< /command >}}
+```
 
 The command `discover-instances` supports parameters and optional parameters as filter criteria.
 All conditions in parameters must match for an instance to be returned.
 If one or more conditions in optional parameters match, only the matching subset is returned; if no conditions in optional parameters match, all unfiltered results are returned.
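The matching rules described above can be sketched in a few lines of Python. This is an illustrative model of the behaviour, not LocalStack's implementation; instances are represented as plain dictionaries of custom attributes:

```python
def discover_instances(instances, query_parameters=None, optional_parameters=None):
    """Model of the discover-instances filter rules.

    Required query parameters must all match; optional parameters narrow the
    result only when at least one instance matches them, otherwise the
    unfiltered result is returned.
    """
    query_parameters = query_parameters or {}
    optional_parameters = optional_parameters or {}

    def matches(instance, conditions):
        return all(instance.get(key) == value for key, value in conditions.items())

    # Required parameters: every condition must match.
    result = [inst for inst in instances if matches(inst, query_parameters)]

    # Optional parameters: keep the matching subset if there is one.
    if optional_parameters:
        subset = [inst for inst in result if matches(inst, optional_parameters)]
        if subset:
            return subset
    return result
```

The two CLI examples that follow correspond to calling this model with `query_parameters` and `optional_parameters` respectively.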
This command will only return instances where the parameter `env` is equal to `fuu`:
-{{< command >}}
-$ awslocal servicediscovery discover-instances \
+
+```bash
+awslocal servicediscovery discover-instances \
     --namespace-name example-namespace \
     --service-name example-service \
     --query-parameters "env"="fuu"
-{{< /command >}}
+```
 
 This command instead will return all instances where the optional parameter `env` is equal to `bar`, but if no instances match, all instances are returned:
-{{< command >}}
-$ awslocal servicediscovery discover-instances \
+
+```bash
+awslocal servicediscovery discover-instances \
     --namespace-name example-namespace \
     --service-name example-service \
     --optional-parameters "env"="bar"
-{{< /command >}}
+```

From 36686f2b10a6f52c29aaf38b4067c137468f8bf3 Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Thu, 19 Jun 2025 00:11:35 +0530
Subject: [PATCH 77/80] revamp ses

---
 src/content/docs/aws/services/ses.md | 101 +++++++++++++++------------
 1 file changed, 55 insertions(+), 46 deletions(-)

diff --git a/src/content/docs/aws/services/ses.md b/src/content/docs/aws/services/ses.md
index f4509be4..bff9e9a5 100644
--- a/src/content/docs/aws/services/ses.md
+++ b/src/content/docs/aws/services/ses.md
@@ -1,6 +1,5 @@
 ---
 title: "Simple Email Service (SES)"
-linkTitle: "Simple Email Service (SES)"
 description: Get started with Amazon Simple Email Service (SES) on LocalStack
 tags: ["Free", "Base"]
 persistence: supported
@@ -11,12 +10,12 @@ persistence: supported
 
 ## Introduction
 
 Simple Email Service (SES) is an emailing service that can be integrated with other cloud-based services.
 It provides APIs to facilitate email templating, sending bulk emails, and more.
 
-The supported APIs are available on the API coverage page for [SESv1]({{< ref "coverage_ses" >}}) and [SESv2]({{< ref "coverage_sesv2" >}}).
+The supported APIs are available on the API coverage page for [SESv1]() and [SESv2]().
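SES-style templated email substitutes `{{placeholder}}` tokens in a stored template with per-destination data. As a rough illustration of that substitution (a simplification for this guide, not the service's actual implementation — the helper below is not part of any SDK):

```python
import re

def render_template(template, data):
    """Substitute SES-style {{placeholder}} tokens with values from `data`.

    Unknown placeholders are replaced with an empty string; this is a
    deliberate simplification for illustration.
    """
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda match: str(data.get(match.group(1), "")),
        template,
    )

# Usage: render a subject line for a single recipient.
subject = render_template("Hello {{name}}, your order {{order_id}} shipped!",
                          {"name": "Jane", "order_id": "42"})
```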
-{{< callout "Note" >}}
+:::note
 Users on the Free plan can use SES V1 APIs in LocalStack for basic mocking and testing.
 For advanced features like SMTP integration and other emulation capabilities, please refer to the Ultimate plan.
-{{< /callout >}}
+:::
 
 ## Getting Started
 
@@ -30,38 +29,43 @@ A verified identity appears as part of the 'From' field in the sent email.
 
 A single email identity can be added using the `VerifyEmailIdentity` operation.
 
-{{< command >}}
-$ awslocal ses verify-email-identity --email hello@example.com
+```bash
+awslocal ses verify-email-identity --email-address hello@example.com
+```
 
-$ awslocal ses list-identities
+```bash
+awslocal ses list-identities
+```
+
+The following output is displayed:
+
+```bash
 {
     "Identities": [
         "hello@example.com"
     ]
 }
-{{< /command >}}
+```
 
-{{< callout >}}
+:::note
 On AWS, verifying email identities or domain identities requires additional steps, like changing DNS configuration or clicking verification links respectively.
 In LocalStack, identities are automatically verified.
-{{< /callout >}}
+:::
 
 Next, emails can be sent using the `SendEmail` operation.
 
-{{< command >}}
-$ awslocal ses send-email \
+```bash
+awslocal ses send-email \
     --from "hello@example.com" \
     --message 'Body={Text={Data="This is the email body"}},Subject={Data="This is the email subject"}' \
     --destination 'ToAddresses=jeff@aws.com'
+```
+
+The following output is displayed:
+
+```bash
 {
     "MessageId": "labpqxukegeaftfh-ymaouvvy-ribr-qeoy-izfp-kxaxbfcfsgbh-wpewvd"
 }
-{{< /command >}}
+```
 
-{{< callout >}}
+:::note
 In LocalStack Community, all operations are mocked and no real emails are sent.
 In LocalStack Pro, it is possible to send real emails via an SMTP server.
-{{< /callout >}}
+:::
 
 ## Retrieve Sent Emails
 
 Sent messages can be retrieved in the following ways:
 
 - **API endpoint:** LocalStack provides a service endpoint (`/_aws/ses`) which can be used to return in-memory saved messages.
   A `GET` call returns all messages.
Query parameters `id` and `email` can be used to filter by message ID and message source respectively. - {{< command >}} -$ curl --silent localhost.localstack.cloud:4566/_aws/ses?email=hello@example.com | jq . -{ - "messages": [ + + ```bash + curl --silent localhost.localstack.cloud:4566/_aws/ses?email=hello@example.com | jq . + ``` + + The following output is displayed: + + ```bash { - "Id": "dqxhhgoutkmylpbc-ffuqlkjs-ljld-fckp-hcph-wcsrkmxhhldk-pvadjc", - "Region": "eu-central-1", - "Destination": { - "ToAddresses": [ - "jeff@aws.com" - ] - }, - "Source": "hello@example.com", - "Subject": "This is the email subject", - "Body": { - "text_part": "This is the email body", - "html_part": null - }, - "Timestamp": "2023-09-11T08:37:13" + "messages": [ + { + "Id": "dqxhhgoutkmylpbc-ffuqlkjs-ljld-fckp-hcph-wcsrkmxhhldk-pvadjc", + "Region": "eu-central-1", + "Destination": { + "ToAddresses": [ + "jeff@aws.com" + ] + }, + "Source": "hello@example.com", + "Subject": "This is the email subject", + "Body": { + "text_part": "This is the email body", + "html_part": null + }, + "Timestamp": "2023-09-11T08:37:13" + } + ] } - ] -} - {{< /command >}} + ``` A `DELETE` call clears all messages from the memory. The query parameter `id` can be used to delete only a specific message. - {{< command >}} - $ curl -X DELETE localhost.localstack.cloud:4566/_aws/ses?id=dqxhhgoutkmylpbc-ffuqlkjs-ljld-fckp-hcph-wcsrkmxhhldk-pvadjc - {{< /command >}} -- **Filesystem:** All messages are saved to the state directory (see [filesystem layout]({{< ref "filesystem" >}})). + + ```bash + curl -X DELETE localhost.localstack.cloud:4566/_aws/ses?id=dqxhhgoutkmylpbc-ffuqlkjs-ljld-fckp-hcph-wcsrkmxhhldk-pvadjc + ``` +- **Filesystem:** All messages are saved to the state directory (see [filesystem layout](/aws/capabilities/config/filesystem)). The files are saved as JSON in the `ses/` subdirectory and named by the message ID. ## SMTP Integration LocalStack Pro supports sending emails via an SMTP server. 
To enable this, set the connection parameters and access credentials for the server in the configuration.
-Refer to the [Configuration]({{< ref "configuration#emails" >}}) guide for details.
+Refer to the [Configuration](/aws/capabilities/config/configuration/#emails) guide for details.
 
-{{< callout "tip" >}}
+:::note
 If you do not have access to a live SMTP server, you can use tools like [MailDev](https://github.com/maildev/maildev) or [smtp4dev](https://github.com/rnwood/smtp4dev).
 These run as Docker containers on your local machine.
 Make sure they run in the same Docker network as the LocalStack container.
-{{< /callout >}}
+:::
 
 ## Resource Browser
 
 The LocalStack Web Application provides a Resource Browser for managing email identities and introspecting sent emails.
 
-SES Resource Browser
-
+![SES Resource Browser](/images/aws/ses-resource-browser.png)
 
 The Resource Browser allows you to perform the following actions:
 - **Create Email Identity**: Create an email identity by clicking **Create Identity** and specifying the email address.

From 75cd35cb450a2ae31d8fb28b166fdd4e63ac8407 Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Thu, 19 Jun 2025 00:19:32 +0530
Subject: [PATCH 78/80] more

---
 src/content/docs/aws/services/shield.md |  27 +-
 src/content/docs/aws/services/sns.md    | 257 ++++++++++++------
 .../docs/aws/services/{sqs.md => sqs.mdx}   | 234 +++++++++-------
 3 files changed, 322 insertions(+), 196 deletions(-)
 rename src/content/docs/aws/services/{sqs.md => sqs.mdx} (86%)

diff --git a/src/content/docs/aws/services/shield.md b/src/content/docs/aws/services/shield.md
index 3b32f837..92fffcd5 100644
--- a/src/content/docs/aws/services/shield.md
+++ b/src/content/docs/aws/services/shield.md
@@ -1,6 +1,5 @@
 ---
 title: "Shield"
-linkTitle: "Shield"
 description: Get started with Shield on LocalStack
 tags: ["Ultimate"]
 ---
@@ -12,7 +11,7 @@ Shield provides always-on detection and inline mitigations that minimize applica
 Shield detection and mitigation is designed to protect against threats, including ones that are not known to the service at the time of detection.
 
 LocalStack allows you to use the Shield APIs in your local environment, and provides a simple way to mock and test the Shield service locally.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_shield" >}}), which provides information on the extent of Shield's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Shield's integration with LocalStack.
## Getting Started @@ -26,11 +25,11 @@ We will demonstrate how to create a Shield protection, list all protections, and To create a Shield protection, use the [`CreateProtection`](https://docs.aws.amazon.com/cli/latest/reference/shield/create-protection.html) API. The following command creates a Shield protection for a resource: -{{< command >}} -$ awslocal shield create-protection \ +```bash +awslocal shield create-protection \ --name "my-protection" \ --resource-arn "arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/app/my-alb/1234567890" -{{< /command >}} +``` The output should look similar to the following: @@ -45,9 +44,9 @@ The output should look similar to the following: To list all Shield protections, use the [`ListProtections`](https://docs.aws.amazon.com/cli/latest/reference/shield/list-protections.html) API. The following command lists all Shield protections: -{{< command >}} -$ awslocal shield list-protections -{{< /command >}} +```bash +awslocal shield list-protections +``` The output should look similar to the following: @@ -69,10 +68,10 @@ The output should look similar to the following: To describe a Shield protection, use the [`DescribeProtection`](https://docs.aws.amazon.com/cli/latest/reference/shield/describe-protection.html) API. The following command describes a Shield protection: -{{< command >}} -$ awslocal shield describe-protection \ +```bash +awslocal shield describe-protection \ --protection-id "67908d33-16c0-443d-820a-31c02c4d5976" -{{< /command >}} +``` Replace the protection ID with the ID of the protection you want to describe. The output should look similar to the following: @@ -93,10 +92,10 @@ The output should look similar to the following: To delete a Shield protection, use the [`DeleteProtection`](https://docs.aws.amazon.com/cli/latest/reference/shield/delete-protection.html) API. 
The following command deletes a Shield protection: -{{< command >}} -$ awslocal shield delete-protection \ +```bash +awslocal shield delete-protection \ --protection-id "67908d33-16c0-443d-820a-31c02c4d5976" -{{< /command >}} +``` ## Current Limitations diff --git a/src/content/docs/aws/services/sns.md b/src/content/docs/aws/services/sns.md index 7b091368..e052249f 100644 --- a/src/content/docs/aws/services/sns.md +++ b/src/content/docs/aws/services/sns.md @@ -1,6 +1,5 @@ --- title: "Simple Notification Service (SNS)" -linkTitle: "Simple Notification Service (SNS)" description: Get started with Simple Notification Service (SNS) on LocalStack persistence: supported tags: ["Free"] @@ -12,7 +11,7 @@ Simple Notification Service (SNS) is a serverless messaging service that can dis SNS employs the Publish/Subscribe, an asynchronous messaging pattern that decouples services that produce events from services that process events. LocalStack allows you to use the SNS APIs in your local environment to coordinate the delivery of messages to subscribing endpoints or clients. -The supported APIs are available on our [API coverage page]({{< ref "coverage_sns" >}}), which provides information on the extent of SNS's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of SNS's integration with LocalStack. ## Getting started @@ -27,68 +26,68 @@ We will demonstrate how to create an SNS topic, publish messages, and subscribe To create an SNS topic, use the [`CreateTopic`](https://docs.aws.amazon.com/sns/latest/api/API_CreateTopic.html) API. 
Run the following command to create a topic named `localstack-topic`:
 
-{{< command >}}
-$ awslocal sns create-topic --name localstack-topic
-{{< /command >}}
+```bash
+awslocal sns create-topic --name localstack-topic
+```
 
 You can set attributes on the SNS topic you created previously using the [`SetTopicAttributes`](https://docs.aws.amazon.com/sns/latest/api/API_SetTopicAttributes.html) API.
 Run the following command to set the `DisplayName` attribute for the topic:
 
-{{< command >}}
-$ awslocal sns set-topic-attributes \
+```bash
+awslocal sns set-topic-attributes \
     --topic-arn arn:aws:sns:us-east-1:000000000000:localstack-topic \
     --attribute-name DisplayName \
     --attribute-value MyTopicDisplayName
-{{< /command >}}
+```
 
 You can list all the SNS topics using the [`ListTopics`](https://docs.aws.amazon.com/sns/latest/api/API_ListTopics.html) API.
 Run the following command to list all the SNS topics:
 
-{{< command >}}
-$ awslocal sns list-topics
-{{< /command >}}
+```bash
+awslocal sns list-topics
+```
 
 ### Get attributes and publish messages to SNS topic
 
 You can get attributes for a single SNS topic using the [`GetTopicAttributes`](https://docs.aws.amazon.com/sns/latest/api/API_GetTopicAttributes.html) API.
 Run the following command to get the attributes for the SNS topic:
 
-{{< command >}}
-$ awslocal sns get-topic-attributes \
+```bash
+awslocal sns get-topic-attributes \
     --topic-arn arn:aws:sns:us-east-1:000000000000:localstack-topic
-{{< /command >}}
+```
 
 You can change the `topic-arn` to the ARN of the SNS topic you created previously.
 
 To publish messages to the SNS topic, create a new file named `message.txt` in your current directory and add some content.
Run the following command to publish messages to the SNS topic using the [`Publish`](https://docs.aws.amazon.com/sns/latest/api/API_Publish.html) API:
 
-{{< command >}}
-$ awslocal sns publish \
+```bash
+awslocal sns publish \
     --topic-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic" \
     --message file://message.txt
-{{< /command >}}
+```
 
 ### Subscribing to SNS topics and setting subscription attributes
 
 You can subscribe to the SNS topic using the [`Subscribe`](https://docs.aws.amazon.com/sns/latest/api/API_Subscribe.html) API.
 Run the following command to subscribe to the SNS topic:
 
-{{< command >}}
-$ awslocal sns subscribe \
+```bash
+awslocal sns subscribe \
     --topic-arn arn:aws:sns:us-east-1:000000000000:localstack-topic \
     --protocol email \
     --notification-endpoint test@gmail.com
-{{< /command >}}
+```
 
 You can configure the SNS subscription attributes using the `SubscriptionArn` returned by the previous step.
 For example, run the following command to set the `RawMessageDelivery` attribute for the subscription:
 
-{{< command >}}
-$ awslocal sns set-subscription-attributes \
+```bash
+awslocal sns set-subscription-attributes \
     --subscription-arn arn:aws:sns:us-east-1:000000000000:localstack-topic:b6f5e924-dbb3-41c9-aa3b-589dbae0cfff \
     --attribute-name RawMessageDelivery --attribute-value true
-{{< /command >}}
+```
 
 ### Working with SQS subscriptions for SNS
 
@@ -96,32 +95,54 @@ The getting started covers email subscription, but SNS can integrate with many A
 A common technology to integrate with is SQS.
First we need to ensure we create an SQS queue named `my-queue`: -{{< command >}} -$ awslocal sqs create-queue --queue-name my-queue + +```bash +awslocal sqs create-queue --queue-name my-queue +``` + +The following output is displayed: + +```bash { "QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue" } -{{< /command >}} +``` Subscribe the SQS queue to the topic we created previously: -{{< command >}} -$ awslocal sns subscribe --topic-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic" --protocol sqs --notification-endpoint "arn:aws:sqs:us-east-1:000000000000:my-queue" + +```bash +awslocal sns subscribe \ + --topic-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic" \ + --protocol sqs \ + --notification-endpoint "arn:aws:sqs:us-east-1:000000000000:my-queue" +``` + +The following output is displayed: + +```bash { "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:localstack-topic:636e2a73-0dda-4e09-9fdf-77f113d0edd8" } -{{< /command >}} +``` Sending a message to the queue, via the topic -{{< command >}} + +```bash $ awslocal sns publish --topic-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic" --message "hello" { "MessageId": "5a1593ce-411b-44dc-861d-907daa05353b" } -{{< /command >}} +``` Check that our message has arrived: -{{< command >}} -$ awslocal sqs receive-message --queue-url "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue" + +```bash +awslocal sqs receive-message \ + --queue-url "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue" +``` + +The following output is displayed: + { "Messages": [ { @@ -132,15 +153,19 @@ $ awslocal sqs receive-message --queue-url "http://sqs.us-east-1.localhost.local } ] } - -{{< /command >}} +``` To remove the subscription you need the subscription ARN which you can find by listing the subscriptions. 
You can list all the SNS subscriptions using the [`ListSubscriptions`](https://docs.aws.amazon.com/sns/latest/api/API_ListSubscriptions.html) API. Run the following command to list all the SNS subscriptions: -{{< command >}} -$ awslocal sns list-subscriptions +```bash +awslocal sns list-subscriptions +``` + +The following output is displayed: + +```bash { "Subscriptions": [ { @@ -152,12 +177,14 @@ $ awslocal sns list-subscriptions } ] } -{{< /command >}} +``` Then, use the ARN to unsubscribe -{{< command >}} -$ awslocal sns unsubscribe --subscription-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic:636e2a73-0dda-4e09-9fdf-77f113d0edd8" -{{< /command >}} + +```bash +awslocal sns unsubscribe \ + --subscription-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic:636e2a73-0dda-4e09-9fdf-77f113d0edd8" +``` ## Developer endpoints @@ -193,9 +220,13 @@ You can also call `DELETE /_aws/sns/platform-endpoint-messages` to clear the mes In this example, we will create a platform endpoint in SNS and publish a message to it. 
Run the following commands to create a platform endpoint: -{{< command >}} -$ awslocal sns create-platform-application --name app-test --platform APNS --attributes {} -{{< /command >}} +```bash +awslocal sns create-platform-application \ + --name app-test \ + --platform APNS \ + --attributes {} +``` + An example response is shown below: ```json @@ -205,9 +236,14 @@ An example response is shown below: ``` Using the `PlatformApplicationArn` from the previous call: -{{< command >}} -$ awslocal sns create-platform-endpoint --platform-application-arn "arn:aws:sns:us-east-1:000000000000:app/APNS/app-test" --token my-fake-token -{{< /command >}} + +```bash +awslocal sns create-platform-endpoint \ + --platform-application-arn "arn:aws:sns:us-east-1:000000000000:app/APNS/app-test" \ + --token my-fake-token +``` + +The following output is displayed: ```json { @@ -217,9 +253,14 @@ $ awslocal sns create-platform-endpoint --platform-application-arn "arn:aws:sns: Publish a message to the platform endpoint: -{{< command >}} -$ awslocal sns publish --target-arn "arn:aws:sns:us-east-1:000000000000:endpoint/APNS/app-test/c25f353e-856b-4b02-a725-6bde35e6e944" --message '{"APNS_PLATFORM": "{\"aps\": {\"content-available\": 1}}"}' --message-structure json -{{< /command >}} +```bash +awslocal sns publish \ + --target-arn "arn:aws:sns:us-east-1:000000000000:endpoint/APNS/app-test/c25f353e-856b-4b02-a725-6bde35e6e944" \ + --message '{"APNS_PLATFORM": "{\"aps\": {\"content-available\": 1}}"}' \ + --message-structure json +``` + +The following output is displayed: ```json { @@ -229,9 +270,11 @@ $ awslocal sns publish --target-arn "arn:aws:sns:us-east-1:000000000000:endpoint Retrieve the messages published to the platform endpoint using [curl](https://curl.se/): -{{< command >}} -$ curl "http://localhost:4566/_aws/sns/platform-endpoint-messages" | jq . -{{< /command >}} +```bash +curl "http://localhost:4566/_aws/sns/platform-endpoint-messages" | jq . 
+``` + +The following output is displayed: ```json { @@ -253,13 +296,17 @@ $ curl "http://localhost:4566/_aws/sns/platform-endpoint-messages" | jq . With those same filters, you can reset the saved messages at `DELETE /_aws/sns/platform-endpoint-messages`. Run the following command to reset the saved messages: -{{< command >}} -$ curl -X "DELETE" "http://localhost:4566/_aws/sns/platform-endpoint-messages" -{{< /command >}} +```bash +curl -X "DELETE" "http://localhost:4566/_aws/sns/platform-endpoint-messages" +``` + We can now check that the messages have been properly deleted: -{{< command >}} -$ curl "http://localhost:4566/_aws/sns/platform-endpoint-messages" | jq . -{{< /command >}} + +```bash +curl "http://localhost:4566/_aws/sns/platform-endpoint-messages" | jq . +``` + +The following output is displayed: ```json { @@ -298,9 +345,12 @@ In this example, we will publish a message to a phone number and retrieve it: Publish a message to a phone number: -{{< command >}} -$ awslocal sns publish --phone-number "" --message "Hello World!" -{{< /command >}} +```bash +awslocal sns publish \ + --phone-number "" \ + --message "Hello World!" +``` + An example response is shown below: ```json @@ -311,9 +361,11 @@ An example response is shown below: Retrieve the message published using [curl](https://curl.se/) and [jq](https://jqlang.github.io/jq/): -{{< command >}} -$ curl "http://localhost:4566/_aws/sns/sms-messages" | jq . -{{< /command >}} +```bash +curl "http://localhost:4566/_aws/sns/sms-messages" | jq . +``` + +The following output is displayed: ```json { @@ -339,13 +391,17 @@ You can reset the saved messages at `DELETE /_aws/sns/sms-messages`. Using the query parameters, you can also selectively reset messages only in one region or from one phone number. 
Run the following command to reset the saved messages:
-{{< command >}}
-$ curl -X "DELETE" "http://localhost:4566/_aws/sns/sms-messages"
-{{< /command >}}
+```bash
+curl -X "DELETE" "http://localhost:4566/_aws/sns/sms-messages"
+```
+
We can now check that the messages have been properly deleted:
-{{< command >}}
-$ curl "http://localhost:4566/_aws/sns/sms-messages" | jq .
-{{< /command >}}
+
+```bash
+curl "http://localhost:4566/_aws/sns/sms-messages" | jq .
+```
+
+The following output is displayed:

```json
{
@@ -388,9 +444,11 @@ In this example, we will subscribe to an external SNS integration not confirming

Create an SNS topic, and create a subscription to an external HTTP SNS integration:

-{{< command >}}
+```bash
awslocal sns create-topic --name "test-external-integration"
-{{< /command >}}
+```
+
+The following output is displayed:

```json
{
@@ -399,9 +457,16 @@ awslocal sns create-topic --name "test-external-integration"
```

We now create an HTTP SNS subscription to an external endpoint:
-{{< command >}}
-awslocal sns subscribe --topic-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration" --protocol https --notification-endpoint "https://api.opsgenie.com/v1/json/amazonsns?apiKey=b13fd59a-9" --return-subscription-arn
-{{< /command >}}
+
+```bash
+awslocal sns subscribe \
+  --topic-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration" \
+  --protocol https \
+  --notification-endpoint "https://api.opsgenie.com/v1/json/amazonsns?apiKey=b13fd59a-9" \
+  --return-subscription-arn
+```
+
+The following output is displayed:

```json
{
@@ -411,9 +476,13 @@ awslocal sns subscribe --topic-arn "arn:aws:sns:us-east-1:000000000000:test-exte
```

Now, we can check the `PendingConfirmation` status of our subscription, showing our endpoint did not confirm the subscription. 
You will need to use the `SubscriptionArn` from the response of your subscribe call: -{{< command >}} -awslocal sns get-subscription-attributes --subscription-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8" -{{< /command >}} + +```bash +awslocal sns get-subscription-attributes \ + --subscription-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8" +``` + +The following output is displayed: ```json { @@ -431,9 +500,12 @@ awslocal sns get-subscription-attributes --subscription-arn "arn:aws:sns:us-east ``` To manually confirm the subscription, we will fetch its token with our developer endpoint: -{{< command >}} + +```bash curl "http://localhost:4566/_aws/sns/subscription-tokens/arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8" | jq . -{{< /command >}} +``` + +The following output is displayed: ```json { @@ -443,9 +515,14 @@ curl "http://localhost:4566/_aws/sns/subscription-tokens/arn:aws:sns:us-east-1:0 ``` We can now use this token to manually confirm the subscription: -{{< command >}} -awslocal sns confirm-subscription --topic-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration" --token 75732d656173742d312f3b875fb03b875fb03b875fb03b875fb03b875fb03b87 -{{< /command >}} + +```bash +awslocal sns confirm-subscription \ + --topic-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration" \ + --token 75732d656173742d312f3b875fb03b875fb03b875fb03b875fb03b875fb03b87 +``` + +The following output is displayed: ```json { @@ -454,9 +531,13 @@ awslocal sns confirm-subscription --topic-arn "arn:aws:sns:us-east-1:00000000000 ``` We can now finally verify the subscription has been confirmed: -{{< command >}} -awslocal sns get-subscription-attributes --subscription-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8" -{{< /command >}} + +```bash +awslocal 
sns get-subscription-attributes \ + --subscription-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8" +``` + +The following output is displayed: ```json { @@ -481,7 +562,7 @@ SNS will now publish messages to your HTTP endpoint, even if it did not confirm The LocalStack Web Application provides a Resource Browser for managing SNS topics. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **SNS** under the **App Integration** section. -SNS Resource Browser +![SNS Resource Browser](/images/aws/sns-resource-browser.png) The Resource Browser allows you to perform the following actions: diff --git a/src/content/docs/aws/services/sqs.md b/src/content/docs/aws/services/sqs.mdx similarity index 86% rename from src/content/docs/aws/services/sqs.md rename to src/content/docs/aws/services/sqs.mdx index 0b46990b..a90b4929 100644 --- a/src/content/docs/aws/services/sqs.md +++ b/src/content/docs/aws/services/sqs.mdx @@ -1,8 +1,6 @@ --- title: "Simple Queue Service (SQS)" description: Get started with Simple Queue Service (SQS) on LocalStack -aliases: -- /aws/sqs/ persistence: supported tags: ["Free"] --- @@ -14,7 +12,7 @@ It allows you to decouple different components of your applications by enabling SQS allows you to reliably send, store, and receive messages with support for standard and FIFO queues. LocalStack allows you to use the SQS APIs in your local environment to integrate and decouple distributed systems via hosted queues. -The supported APIs are available on our [API coverage page]({{< ref "coverage_sqs" >}}), which provides information on the extent of SQS's integration with LocalStack. +The supported APIs are available on our [API coverage page](), which provides information on the extent of SQS's integration with LocalStack. 
## Getting started

@@ -28,16 +26,16 @@ We will demonstrate how to create an SQS queue, retrieve queue attributes and UR

To create an SQS queue, use the [`CreateQueue`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html) API.
Run the following command to create a queue named `localstack-queue`:

-{{< command >}}
-$ awslocal sqs create-queue --queue-name localstack-queue
-{{< / command >}}
+```bash
+awslocal sqs create-queue --queue-name localstack-queue
+```

You can list all queues in your account using the [`ListQueues`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ListQueues.html) API.
Run the following command to list all queues in your account:

-{{< command >}}
-$ awslocal sqs list-queues
-{{< / command >}}
+```bash
+awslocal sqs list-queues
+```

You will see the following output:

@@ -54,9 +52,11 @@ You need to pass the `queue-url` and `attribute-names` parameters.

Run the following command to retrieve the queue attributes:

-{{< command >}}
-$ awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue --attribute-names All
-{{< / command >}}
+```bash
+awslocal sqs get-queue-attributes \
+  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue \
+  --attribute-names All
+```

### Sending and receiving messages from the queue

To send a message to an SQS queue, you can use the [`SendMessage`](https://docs.a

Run the following command to send a message to the queue:

-{{< command >}}
-$ awslocal sqs send-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue --message-body "Hello World"
-{{< / command >}}
+```bash
+awslocal sqs send-message \
+  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue \
+  --message-body "Hello World"
+```

It will return the MD5 hash of the Message Body and a 
Message ID.
You will see output similar to the following:

@@ -82,9 +84,10 @@ You will see output similar to the following:

You can receive messages from the queue using the [`ReceiveMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) API.
Run the following command to receive messages from the queue:

-{{< command >}}
-$ awslocal sqs receive-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue
-{{< / command >}}
+```bash
+awslocal sqs receive-message \
+  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue
+```

You will see the Message ID, MD5 hash of the Message Body, Receipt Handle, and the Message Body in the output.

@@ -95,18 +98,21 @@ You need to pass the `queue-url` and `receipt-handle` parameters.

Run the following command to delete a message from the queue:

-{{< command >}}
-$ awslocal sqs delete-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue --receipt-handle <receipt-handle>
-{{< / command >}}
+```bash
+awslocal sqs delete-message \
+  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue \
+  --receipt-handle <receipt-handle>
+```

Replace `<receipt-handle>` with the receipt handle you received in the previous step.

If you have sent multiple messages to the queue, you can purge the queue using the [`PurgeQueue`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_PurgeQueue.html) API. 
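As an aside, the `MD5OfMessageBody` value that `SendMessage` returns is the plain MD5 hex digest of the message body, so you can verify it client-side after sending. A minimal sketch in Python (the helper name is ours; `"Hello World"` is the body sent above):

```python
import hashlib

def verify_md5_of_body(message_body: str, md5_from_response: str) -> bool:
    """Check that the MD5 returned by SendMessage matches the body we sent."""
    digest = hashlib.md5(message_body.encode("utf-8")).hexdigest()
    return digest == md5_from_response

# MD5("Hello World") is b10a8db164e0754105b7a99be72e3fe5
print(verify_md5_of_body("Hello World", "b10a8db164e0754105b7a99be72e3fe5"))  # True
```

This is a cheap integrity check when debugging client code against LocalStack, since a mismatch usually means the body was mutated (for example, re-encoded) before being sent.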
Run the following command to purge the queue: -{{< command >}} -$ awslocal sqs purge-queue --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue -{{< / command >}} +```bash +awslocal sqs purge-queue \ + --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue +``` ## Dead-letter queue testing @@ -115,10 +121,15 @@ Here's an end-to-end example of how to use message move tasks to test DLQ redriv First, create three queues. One will serve as original input queue, one as DLQ, and the third as target for DLQ redrive. -{{< command >}} -$ awslocal sqs create-queue --queue-name input-queue -$ awslocal sqs create-queue --queue-name dead-letter-queue -$ awslocal sqs create-queue --queue-name recovery-queue +```bash +awslocal sqs create-queue --queue-name input-queue +awslocal sqs create-queue --queue-name dead-letter-queue +awslocal sqs create-queue --queue-name recovery-queue +``` + +The following output is displayed: + +```json { "QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue" } @@ -128,27 +139,36 @@ $ awslocal sqs create-queue --queue-name recovery-queue { "QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/recovery-queue" } -{{< /command >}} +``` Configure `dead-letter-queue` to be a DLQ for `input-queue`: -{{< command >}} -$ awslocal sqs set-queue-attributes \ ---queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue \ ---attributes '{ + +```bash +awslocal sqs set-queue-attributes \ + --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue \ + --attributes '{ "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:000000000000:dead-letter-queue\",\"maxReceiveCount\":\"1\"}" }' -{{< /command >}} +``` Send a message to the input queue: -{{< command >}} -$ awslocal sqs send-message --queue-url 
http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue --message-body '{"hello": "world"}'
-{{< /command >}}
+
+```bash
+awslocal sqs send-message \
+  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue \
+  --message-body '{"hello": "world"}'
+```

Receive the message twice to provoke a move into the dead-letter queue:
-{{< command >}}
-$ awslocal sqs receive-message --visibility-timeout 0 --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue
-$ awslocal sqs receive-message --visibility-timeout 0 --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue
-{{< /command >}}
+
+```bash
+awslocal sqs receive-message \
+  --visibility-timeout 0 \
+  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue
+awslocal sqs receive-message \
+  --visibility-timeout 0 \
+  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue
+```

In the localstack logs you should see something like the following line, indicating the message was moved to the DLQ:

@@ -157,15 +177,23 @@ In the localstack logs you should see something like the following line, indicat
```

Now, start a message move task to asynchronously move the messages from the DLQ into the recovery queue:
-{{< command >}}
-$ awslocal sqs start-message-move-task \
-    --source-arn arn:aws:sqs:us-east-1:000000000000:dead-letter-queue \
-    --destination-arn arn:aws:sqs:us-east-1:000000000000:recovery-queue
-{{< /command >}}
+
+```bash
+awslocal sqs start-message-move-task \
+  --source-arn arn:aws:sqs:us-east-1:000000000000:dead-letter-queue \
+  --destination-arn arn:aws:sqs:us-east-1:000000000000:recovery-queue
+```

Listing the message move tasks should yield something like:
-{{< command >}}
-$ awslocal sqs list-message-move-tasks --source-arn arn:aws:sqs:us-east-1:000000000000:dead-letter-queue
+
+```bash
+awslocal sqs list-message-move-tasks \
+  --source-arn arn:aws:sqs:us-east-1:000000000000:dead-letter-queue
+```
+
+The following output is displayed:
+
+```json
{
"Results": [
{
@@ -178,10 +206,11 @@ $ awslocal sqs list-message-move-tasks --source-arn arn:aws:sqs:us-east-1:000000
}
]
}
-{{< /command >}}
+```

Receiving messages from the recovery queue should now show us the original message:
-{{< command >}}
-$ awslocal sqs receive-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/recovery-queue
+
+```bash
+awslocal sqs receive-message \
+  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/recovery-queue
+```
+
+The following output is displayed:
+
+```json
{
"Messages": [
{
@@ -193,7 +222,7 @@ $ awslocal sqs receive-message --queue-url http://sqs.us-east-1.localhost.locals
}
]
}
-{{< /command >}}
+```

## SQS Query API

@@ -204,9 +233,9 @@ With LocalStack, you can conveniently test SQS Query API calls without the need

For instance, you can use a basic [curl](https://curl.se/) command to send a `SendMessage` command along with a MessageBody attribute:

-{{< command >}}
-$ curl "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue?Action=SendMessage&MessageBody=hello%2Fworld"
-{{< / command >}}
+```bash
+curl "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue?Action=SendMessage&MessageBody=hello%2Fworld"
+```

You will see the following output:

@@ -229,9 +258,9 @@ Adding the `Accept: application/json` header will make the server return JSON:

To receive JSON responses from the server, include the `Accept: application/json` header in your request. 
Here's an example using the [curl](https://curl.se/) command:

-{{< command >}}
-$ curl -H "Accept: application/json" "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue?Action=SendMessage&MessageBody=hello%2Fworld"
-{{< / command >}}
+```bash
+curl -H "Accept: application/json" "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue?Action=SendMessage&MessageBody=hello%2Fworld"
+```

The response will be in JSON format:

@@ -288,11 +317,11 @@ You can enable this behavior in LocalStack by setting the `SQS_ENABLE_MESSAGE_RE

In AWS, valid values for message retention range from 60 seconds (1 minute) to 1,209,600 seconds (14 days).
In LocalStack, we do not put constraints on the value, which can be helpful for test scenarios.

-{{< callout >}}
-Note that, if you enable this option, [persistence]({{< ref "user-guide/state-management/persistence" >}}) or [cloud pods]({{}}) for SQS may not work as expected.
+:::note
+Note that if you enable this option, [persistence](/aws/capabilities/state-management/persistence) or [cloud pods](/aws/capabilities/state-management/cloud-pods) for SQS may not work as expected.
The reason is that LocalStack does not adjust timestamps when restoring state, so time appears to pass between LocalStack runs.
Consequently, when you restart LocalStack after a period that is longer than the message retention period, LocalStack will remove all those messages when SQS starts.
-{{}}
+:::

### Disable CloudWatch Metrics Reporting

@@ -337,7 +366,7 @@ Our Lambda implementation automatically resolves these URLs to the LocalStack co

When your code runs within different containers, like ECS tasks or your own custom ones, it's advisable to establish your Docker network setup.
You can follow these steps:

-1. Override the `LOCALSTACK_HOST` variable as outlined in our [network troubleshooting guide]({{< ref "endpoint-url" >}}).
+1. Override the `LOCALSTACK_HOST` variable as outlined in our [network troubleshooting guide](). 
2. Ensure that your containers can resolve `LOCALSTACK_HOST` to the LocalStack container within the Docker network. 3. We recommend employing `SQS_ENDPOINT_STRATEGY=path`, which generates queue URLs in the format `http:///queue/...`. @@ -359,24 +388,29 @@ The endpoint ignores any additional parameters from the `ReceiveMessage` operati You can call the `/_aws/sqs/messages` endpoint in two different ways: 1. Using the query argument `QueueUrl`, like this: - {{< command >}} - $ http://localhost.localstack.cloud:4566/_aws/sqs/messages?QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue - {{< / command >}} + ```bash + http://localhost.localstack.cloud:4566/_aws/sqs/messages?QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue + ``` 2. Utilizing the path-based endpoint, as shown in this example: - {{< command >}} - $ http://localhost.localstack.cloud:4566/_aws/sqs/messages/us-east-1/000000000000/my-queue - {{< / command >}} + ```bash + http://localhost.localstack.cloud:4566/_aws/sqs/messages/us-east-1/000000000000/my-queue + ``` #### XML response You can directly call the endpoint to obtain the raw AWS XML response. 
-{{< tabpane >}}
-{{< tab header="curl" lang="bash" >}}
+import { Tabs, TabItem } from '@astrojs/starlight/components';
+
+<Tabs>
+<TabItem label="curl">
+
+```bash
curl "http://localhost.localstack.cloud:4566/_aws/sqs/messages?QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
-{{< /tab >}}
-{{< tab header="Python Requests" lang="python" >}}
+```
+
+</TabItem>
+<TabItem label="Python Requests">
+
+```python
import requests

response = requests.get(
    url="http://localhost.localstack.cloud:4566/_aws/sqs/messages",
    params={"QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"},
)
print(response.text)  # outputs the response XML
-{{< /tab >}}
-{{< / tabpane >}}
+```
+
+</TabItem>
+</Tabs>

An example response is shown below:

@@ -448,21 +483,25 @@ An example response is shown below:

You can include the `Accept: application/json` header in your request if you prefer a JSON response.

-{{< tabpane >}}
-{{< tab header="curl" lang="bash" >}}
+<Tabs>
+<TabItem label="curl">
+
+```bash
curl -H "Accept: application/json" \
    "http://localhost.localstack.cloud:4566/_aws/sqs/messages?QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
-{{< /tab >}}
-{{< tab header="Python Requests" lang="python" >}}
+```
+
+</TabItem>
+<TabItem label="Python Requests">
+
+```python
import requests

response = requests.get(
    url="http://localhost.localstack.cloud:4566/_aws/sqs/messages",
-    params={"QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue""},
+    params={"QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"},
+    headers={"Accept": "application/json"},
)
-print(response.text)  # outputs the response XML
+print(response.text)  # outputs the response JSON
-{{< /tab >}}
-{{< / tabpane >}}
+```
+
+</TabItem>
+</Tabs>

An example response is shown below:

@@ -532,18 +571,22 @@ An example response is shown below:

Since the `/_aws/sqs/messages` endpoint is compatible with the SQS `ReceiveMessage` operation, you can use the endpoint as the endpoint URL parameter in your AWS client call. 
-{{< tabpane >}}
-{{< tab header="aws-cli" lang="bash" >}}
+<Tabs>
+<TabItem label="aws-cli">
+
+```bash
aws --endpoint-url=http://localhost.localstack.cloud:4566/_aws/sqs/messages sqs receive-message \
    --queue-url=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue
-{{< /tab >}}
-{{< tab header="Boto3" lang="python" >}}
+```
+
+</TabItem>
+<TabItem label="Boto3">
+
+```python
import boto3

sqs = boto3.client("sqs", endpoint_url="http://localhost.localstack.cloud:4566/_aws/sqs/messages")
response = sqs.receive_message(QueueUrl="http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue")
print(response)
-{{< /tab >}}
-{{< / tabpane >}}
+```
+
+</TabItem>
+</Tabs>

An example response is shown below:

@@ -582,22 +625,25 @@ An example response is shown below:

The developer endpoint also supports showing invisible and delayed messages via the query arguments `ShowInvisible` and `ShowDelayed`.

-{{< tabpane >}}
-{{< tab header="curl" lang="bash" >}}
+<Tabs>
+<TabItem label="curl">
+
+```bash
curl -H "Accept: application/json" \
-    "http://localhost.localstack.cloud:4566/_aws/sqs/messages?ShowInvisible=true&ShowDelayed=true&QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue
+    "http://localhost.localstack.cloud:4566/_aws/sqs/messages?ShowInvisible=true&ShowDelayed=true&QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
-{{< /tab >}}
-{{< tab header="Python Requests" lang="python" >}}
+```
+
+</TabItem>
+<TabItem label="Python Requests">
+
+```python
import requests
-
+
+queue_url = "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
+
response = requests.get(
    "http://localhost.localstack.cloud:4566/_aws/sqs/messages",
    params={"QueueUrl": queue_url, "ShowInvisible": True, "ShowDelayed": True},
    headers={"Accept": "application/json"},
)
print(response.text)
-{{< /tab >}}
-{{< / tabpane >}}
+```
+
+</TabItem>
+</Tabs>

This will also include messages that currently have an active visibility timeout or were delayed and are not actually in the queue yet.
Here's an example:

@@ -627,7 +673,7 @@ Here's an example:

The LocalStack Web Application provides a Resource Browser for managing SQS queues.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **SQS** under the **App Integration** section. 
-SQS Resource Browser +![SQS Resource Browser](/images/aws/sqs-resource-browser.png) The Resource Browser allows you to perform the following actions: From 6a5326a3ac56a02ce88190cec1c3bcc98429e337 Mon Sep 17 00:00:00 2001 From: HarshCasper Date: Thu, 19 Jun 2025 00:26:56 +0530 Subject: [PATCH 79/80] get all done --- src/content/docs/aws/services/ssm.md | 31 ++++---- .../{stepfunctions.md => stepfunctions.mdx} | 78 ++++++++++--------- src/content/docs/aws/services/sts.md | 45 ++++++----- src/content/docs/aws/services/support.md | 27 ++++--- src/content/docs/aws/services/swf.md | 66 ++++++++-------- src/content/docs/aws/services/textract.md | 21 +++-- src/content/docs/aws/services/timestream.md | 39 +++++----- src/content/docs/aws/services/transcribe.md | 65 +++++++++------- src/content/docs/aws/services/transfer.md | 3 +- .../docs/aws/services/verifiedpermissions.md | 28 +++---- src/content/docs/aws/services/waf.md | 48 +++++++----- src/content/docs/aws/services/xray.md | 53 +++++++------ 12 files changed, 262 insertions(+), 242 deletions(-) rename src/content/docs/aws/services/{stepfunctions.md => stepfunctions.mdx} (94%) diff --git a/src/content/docs/aws/services/ssm.md b/src/content/docs/aws/services/ssm.md index 60560f38..0e513fc0 100644 --- a/src/content/docs/aws/services/ssm.md +++ b/src/content/docs/aws/services/ssm.md @@ -1,6 +1,5 @@ --- title: "Systems Manager (SSM)" -linkTitle: "Systems Manager (SSM)" description: Get started with Systems Manager (SSM) on LocalStack tags: ["Free"] persistence: supported @@ -12,7 +11,7 @@ Systems Manager (SSM) is a management service provided by Amazon Web Services th SSM simplifies tasks related to system and application management, patching, configuration, and automation, allowing you to maintain the health and compliance of your environment. LocalStack allows you to use the SSM APIs in your local environment to run operational tasks on the Dockerized instances. 
The supported APIs are available on our [API coverage page](), which provides information on the extent of SSM's integration with LocalStack.

## Getting started

@@ -27,20 +26,20 @@ To get started, pull the `ubuntu:focal` image from Docker Hub and tag it as `loc
LocalStack uses a naming scheme to recognise and manage the containers and images associated with it.
The containers are named `localstack-ec2.<instance-id>`, while images are tagged `localstack-ec2/<ami-name>:<ami-id>`.

-{{< command >}}
-$ docker pull ubuntu:focal
-$ docker tag ubuntu:focal localstack-ec2/ubuntu-focal-docker-ami:ami-00a001
-{{< / command >}}
+```bash
+docker pull ubuntu:focal
+docker tag ubuntu:focal localstack-ec2/ubuntu-focal-docker-ami:ami-00a001
+```

LocalStack's Docker backend treats Docker images with the above naming scheme as AMIs.
The AMI ID is the last part of the image tag, `ami-00a001` in this case.

You can run an EC2 instance using the [`RunInstances`](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_RunInstances.html) API.
Execute the following command to create an EC2 instance using the `ami-00a001` AMI.

-{{< command >}}
-$ awslocal ec2 run-instances \
+```bash
+awslocal ec2 run-instances \
    --image-id ami-00a001 --count 1
-{{< / command >}}
+```

The following output would be retrieved:

@@ -71,12 +70,12 @@ You can copy the `InstanceId` value and use it in the following commands.

You can use the [`SendCommand`](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html) API to send a command to the EC2 instance. 
-{{< command >}} -$ awslocal ssm send-command --document-name "AWS-RunShellScript" \ +```bash +awslocal ssm send-command --document-name "AWS-RunShellScript" \ --document-version "1" \ --instance-ids i-abf6920789a06dd84 \ --parameters "commands='cat lsb-release',workingDirectory=/etc" -{{< / command >}} +``` The following output would be retrieved: @@ -101,11 +100,11 @@ You can copy the `CommandId` value and use it in the following commands. You can use the [`GetCommandInvocation`](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetCommandInvocation.html) API to retrieve the command output. The following command retrieves the output of the command sent in the previous step. -{{< command >}} -$ awslocal ssm get-command-invocation \ +```bash +awslocal ssm get-command-invocation \ --command-id 23547a9b-6993-4967-9446-f96b9b5dac70 \ --instance-id i-abf6920789a06dd84 -{{< / command >}} +``` Change the `CommandId` and `InstanceId` values to the ones you received in the previous step. The following output would be retrieved: @@ -127,7 +126,7 @@ The following output would be retrieved: The LocalStack Web Application provides a Resource Browser for managing SSM System Parameters. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Simple Systems Manager (SSM)** under the **Management/Governance** section. 
-SSM Resource Browser
+![SSM Resource Browser](/images/aws/ssm-resource-browser.png)

The Resource Browser allows you to perform the following actions:

diff --git a/src/content/docs/aws/services/stepfunctions.md b/src/content/docs/aws/services/stepfunctions.mdx
similarity index 94%
rename from src/content/docs/aws/services/stepfunctions.md
rename to src/content/docs/aws/services/stepfunctions.mdx
index d5ae8d21..b8e2f136 100644
--- a/src/content/docs/aws/services/stepfunctions.md
+++ b/src/content/docs/aws/services/stepfunctions.mdx
@@ -1,9 +1,7 @@
---
title: "Step Functions"
-linkTitle: "Step Functions"
tags: ["Free"]
-description: >
-  Get started with Step Functions on LocalStack
+description: Get started with Step Functions on LocalStack
---

## Introduction

@@ -13,7 +11,7 @@ It provides a JSON-based structured language called Amazon States Language (ASL)
This makes it easier to build and maintain complex, distributed applications.

LocalStack allows you to use the Step Functions APIs in your local environment to create, execute, update, and delete state machines locally.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_stepfunctions" >}}), which provides information on the extent of Step Function's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Step Functions' integration with LocalStack.

## Getting started

@@ -28,8 +26,8 @@ You can create a state machine using the [`CreateStateMachine`](https://docs.aws
The API requires the name of the state machine, the state machine definition, and the role ARN that the state machine will assume to call AWS services. 
Run the following command to create a state machine: -{{< command >}} -$ awslocal stepfunctions create-state-machine \ +```bash +awslocal stepfunctions create-state-machine \ --name "CreateAndListBuckets" \ --definition '{ "Comment": "Create bucket and list buckets", @@ -51,7 +49,7 @@ $ awslocal stepfunctions create-state-machine \ } }' \ --role-arn "arn:aws:iam::000000000000:role/stepfunctions-role" -{{< /command >}} +``` The output of the above command is the ARN of the state machine: @@ -68,10 +66,10 @@ You can execute the state machine using the [`StartExecution`](https://docs.aws. The API requires the state machine's ARN and the state machine's input. Run the following command to execute the state machine: -{{< command >}} -$ awslocal stepfunctions start-execution \ +```bash +awslocal stepfunctions start-execution \ --state-machine-arn "arn:aws:states:us-east-1:000000000000:stateMachine:CreateAndListBuckets" -{{< /command >}} +``` The output of the above command is the execution ARN: @@ -87,10 +85,10 @@ The output of the above command is the execution ARN: To check the status of the execution, you can use the [`DescribeExecution`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html) API. Run the following command to describe the execution: -{{< command >}} -$ awslocal stepfunctions describe-execution \ +```bash +awslocal stepfunctions describe-execution \ --execution-arn "arn:aws:states:us-east-1:000000000000:execution:CreateAndListBuckets:bf7d2138-e96f-42d1-b1f9-41f0c1c7bc3e" -{{< /command >}} +``` Replace the `execution-arn` with the ARN of the execution you want to describe. @@ -161,10 +159,10 @@ LocalStack can also serve as a drop-in replacement for [AWS Step Functions Local It supports test cases with mocked Task states and maintains compatibility with existing Step Functions Local configurations. 
This functionality is extended in LocalStack by providing access to the latest Step Functions features such as [JSONata and Variables](https://blog.localstack.cloud/aws-step-functions-made-easy/), as well as the ability to mix mocked responses with real service interactions emulated by LocalStack.

-{{< callout >}}
+:::note
LocalStack does not validate response formats.
Ensure the payload structure in the mocked responses matches what the real service expects.
-{{< /callout >}}
+:::

### Identify a State Machine for Mocked Integrations

@@ -287,9 +285,9 @@ In the example above:
- `Return`: Simulates a successful response by returning a predefined payload.
- `Throw`: Simulates a failure by returning an `Error` and an optional `Cause`.

-{{< callout >}}
+:::note
Each entry must have **either** `Return` or `Throw`, but cannot have both.
-{{< /callout >}}
+:::

Here is a complete example of the `MockedResponses` section:

@@ -390,12 +388,17 @@ Set the `SFN_MOCK_CONFIG` environment variable to the path of your mock configur

If you're running LocalStack in Docker, mount the file and pass the variable as shown below:

-{{< tabpane >}}
-{{< tab header="LocalStack CLI" lang="shell" >}}
+import { Tabs, TabItem } from '@astrojs/starlight/components';
+
+<Tabs>
+<TabItem label="LocalStack CLI">
+
+```bash
LOCALSTACK_SFN_MOCK_CONFIG=/tmp/MockConfigFile.json \
localstack start --volume /path/to/MockConfigFile.json:/tmp/MockConfigFile.json
-{{< /tab >}}
-{{< tab header="Docker Compose" lang="yaml" >}}
+```
+
+</TabItem>
+<TabItem label="Docker Compose">
+
+```yaml
services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
@@ -411,8 +414,9 @@ services:
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./MockConfigFile.json:/tmp/MockConfigFile.json"
-{{< /tab >}}
-{{< /tabpane >}}
+```
+
+</TabItem>
+</Tabs>

### Run Test Cases with Mocked Integrations

Create the state machine to match the name defined in the mock configuration file.
In this example, create the `LambdaSQSIntegration` state 
machine using: -{{< command >}} -$ awslocal stepfunctions create-state-machine \ +```bash +awslocal stepfunctions create-state-machine \ --definition file://LambdaSQSIntegration.json \ --name "LambdaSQSIntegration" \ --role-arn "arn:aws:iam::000000000000:role/service-role/testrole" -{{< /command >}} +``` After the state machine is created and correctly named, you can run test cases defined in the mock configuration file using the [`StartExecution`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartExecution.html) API. @@ -435,22 +439,22 @@ This tells LocalStack to apply the corresponding mocked responses from the confi For example, to run the `BaseCase` test case: -{{< command >}} -$ awslocal stepfunctions start-execution \ +```bash +awslocal stepfunctions start-execution \ --state-machine arn:aws:states:us-east-1:000000000000:stateMachine:LambdaSQSIntegration#BaseCase \ --input '{"name": "John", "surname": "smith"}' \ --name "MockExecutionBaseCase" -{{< /command >}} +``` During execution, any state mapped in the mock config will use the predefined response. States without mock entries invoke the actual emulated service as usual. 
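Before starting a mocked execution, it can help to sanity-check the mock configuration file. The short script below is a sketch (the `validate_mock_config` helper is hypothetical, not part of LocalStack) that enforces the rule stated above: each mocked step must define either `Return` or `Throw`, never both and never neither.

```python
import json

def validate_mock_config(config):
    """Check that every mocked step defines exactly one of Return/Throw."""
    problems = []
    for name, steps in config.get("MockedResponses", {}).items():
        for step, body in steps.items():
            # Exactly one of Return/Throw must be present per mocked step.
            if ("Return" in body) == ("Throw" in body):
                problems.append(f"{name}[{step}]: define either Return or Throw, not both/neither")
    return problems

# Inline sample mirroring the MockedResponses layout described above.
sample = json.loads("""
{
  "MockedResponses": {
    "MockedLambdaSuccess": {
      "0": {"Return": {"StatusCode": 200, "Payload": {"statusCode": 200}}}
    },
    "BrokenEntry": {
      "0": {"Return": {"StatusCode": 200},
            "Throw": {"Error": "Lambda.TimeoutError", "Cause": "timed out"}}
    }
  }
}
""")

print(validate_mock_config(sample))
```

Running this against a real `MockConfigFile.json` before mounting it into the container catches malformed entries early, instead of surfacing them as execution failures.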
You can inspect the execution using the [`DescribeExecution`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html) API: -{{< command >}} -$ awslocal stepfunctions describe-execution \ +```bash +awslocal stepfunctions describe-execution \ --execution-arn "arn:aws:states:us-east-1:000000000000:execution:LambdaSQSIntegration:MockExecutionBaseCase" -{{< /command >}} +``` The sample output shows the execution details, including the state machine ARN, execution ARN, status, start and stop dates, input, and output: @@ -475,10 +479,10 @@ The sample output shows the execution details, including the state machine ARN, You can also use the [`GetExecutionHistory`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_GetExecutionHistory.html) API to retrieve the execution history, including the events and their details. -{{< command >}} -$ awslocal stepfunctions get-execution-history \ +```bash +awslocal stepfunctions get-execution-history \ --execution-arn "arn:aws:states:us-east-1:000000000000:execution:LambdaSQSIntegration:MockExecutionBaseCase" -{{< /command >}} +``` This will return the full execution history, including entries that indicate how mocked responses were applied to Lambda and SQS states. @@ -522,9 +526,7 @@ The LocalStack Web Application includes a **Resource Browser** for managing Step To access it, open the LocalStack Web UI in your browser, navigate to the **Resource Browser** section, and click **Step Functions** under **App Integration**. -Step Functions Resource Browser -
-
+![Step Functions Resource Browser](/images/aws/stepfunctions-resource-browser.png)

 The Resource Browser allows you to perform the following actions:

diff --git a/src/content/docs/aws/services/sts.md b/src/content/docs/aws/services/sts.md
index c8e0dd52..4d30b154 100644
--- a/src/content/docs/aws/services/sts.md
+++ b/src/content/docs/aws/services/sts.md
@@ -1,6 +1,5 @@
 ---
 title: "Security Token Service (STS)"
-linkTitle: "Security Token Service (STS)"
 description: Get started with Security Token Service on LocalStack
 persistence: supported
 tags: ["Free"]
@@ -13,7 +12,7 @@ STS implements fine-grained access control and reduce the exposure of your long-
 The temporary credentials, known as security tokens, can be used to access AWS services and resources based on the permissions specified in the associated policies.

 LocalStack allows you to use the STS APIs in your local environment to request security tokens, manage permissions, integrate with identity providers, and more.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_sts" >}}), which provides information on the extent of STS's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of STS's integration with LocalStack.

 ## Getting started

@@ -28,18 +27,18 @@ You can create an IAM User and Role using the [`CreateUser`](https://docs.aws.am
 The IAM User will be used to assume the IAM Role.
 Run the following command to create an IAM User, named `localstack-user`:

-{{< command >}}
-$ awslocal iam create-user \
+```bash
+awslocal iam create-user \
     --user-name localstack-user
-{{< /command >}}
+```

 You can generate long-term access keys for the IAM user using the [`CreateAccessKey`](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateAccessKey.html) API.
Run the following command to create an access key for the IAM user:

-{{< command >}}
-$ awslocal iam create-access-key \
+```bash
+awslocal iam create-access-key \
     --user-name localstack-user
-{{< /command >}}
+```

 The following output would be retrieved:

@@ -58,9 +57,9 @@ The following output would be retrieved:
 Using STS, you can also fetch temporary credentials for this user using the [`GetSessionToken`](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html) API.
 Run the following command using your long-term credentials to get your temporary credentials:

-{{< command >}}
-$ awslocal sts get-session-token
-{{< /command >}}
+```bash
+awslocal sts get-session-token
+```

 The following output would be retrieved:

@@ -80,11 +79,11 @@ The following output would be retrieved:
 You can now create an IAM Role, named `localstack-role`, using the [`CreateRole`](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateRole.html) API.
 Run the following command to create the IAM Role:

-{{< command >}}
-$ awslocal iam create-role \
+```bash
+awslocal iam create-role \
     --role-name localstack-role \
     --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::000000000000:root"},"Action":"sts:AssumeRole"}]}'
-{{< /command >}}
+```

 The following output would be retrieved:

@@ -115,22 +114,22 @@ The following output would be retrieved:
 You can attach the policy to the IAM role using the [`AttachRolePolicy`](https://docs.aws.amazon.com/IAM/latest/APIReference/API_AttachRolePolicy.html) API.
Run the following command to attach the policy to the IAM role: -{{< command >}} -$ awslocal iam attach-role-policy \ +```bash +awslocal iam attach-role-policy \ --role-name localstack-role \ --policy-arn arn:aws:iam::aws:policy/AdministratorAccess -{{< /command >}} +``` ### Assume an IAM Role You can assume an IAM Role using the [`AssumeRole`](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) API. Run the following command to assume the IAM Role: -{{< command >}} -$ awslocal sts assume-role \ +```bash +awslocal sts assume-role \ --role-arn arn:aws:iam::000000000000:role/localstack-role \ --role-session-name localstack-session -{{< /command >}} +``` The following output would be retrieved: @@ -157,9 +156,9 @@ You can use the temporary credentials in your applications for temporary access. You can get the caller identity to identify the principal your current credentials are valid for using the [`GetCallerIdentity`](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html) API. Run the following command to get the caller identity for the credentials set in your environment: -{{< command >}} -$ awslocal sts get-caller-identity -{{< /command >}} +```bash +awslocal sts get-caller-identity +``` The following output would be retrieved: diff --git a/src/content/docs/aws/services/support.md b/src/content/docs/aws/services/support.md index 63e61890..bf82d4d6 100644 --- a/src/content/docs/aws/services/support.md +++ b/src/content/docs/aws/services/support.md @@ -1,6 +1,5 @@ --- title: "Support" -linkTitle: "Support" description: Get started with Support on LocalStack persistence: supported tags: ["Free"] @@ -14,12 +13,12 @@ You can further automate your support workflow using various AWS services, such LocalStack allows you to use the Support APIs in your local environment to create and manage new cases, while testing your configurations locally. 
LocalStack provides a mock implementation of the Support Center, backed by [Moto](https://docs.getmoto.org/en/latest/docs/services/support.html), and does not create real cases in AWS.

-The supported APIs are available on our [API coverage page]({{< ref "coverage_support" >}}), which provides information on the extent of Support API's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Support API's integration with LocalStack.

-{{< callout >}}
-For technical support with LocalStack, you can reach out through our [support channels]({{< ref "help-and-support" >}}).
+:::note
+For technical support with LocalStack, you can reach out through our [support channels](/aws/getting-started/help-support).
 It's important to note that LocalStack doesn't offer a programmatic interface to create support cases, and this documentation is only intended to demonstrate how you can use and mock the AWS Support APIs in your local environment.
-{{< /callout >}}
+:::

 ## Getting started

@@ -33,13 +32,13 @@ We will demonstrate how you can create a case in the mock Support Center using t
 To create a support case, you can use the [`CreateCase`](https://docs.aws.amazon.com/goto/WebAPI/support-2013-04-15/CreateCase) API.
 The following example creates a case with the subject "Test case" and the description "This is a test case" in the category "General guidance".

-{{< command >}}
-$ awslocal support create-case \
+```bash
+awslocal support create-case \
    --subject "Test case" \
    --service-code "general-guidance" \
    --category-code "general-guidance" \
    --communication-body "This is a test case"
-{{< / command >}}
+```

 The following output would be retrieved:

@@ -54,9 +53,9 @@ The following output would be retrieved:
 To list all support cases, you can use the [`DescribeCases`](https://docs.aws.amazon.com/awssupport/latest/APIReference/API_DescribeCases.html) API.
The following example lists all the support cases in your account.

-{{< command >}}
-$ awslocal support describe-cases
-{{< / command >}}
+```bash
+awslocal support describe-cases
+```

 The following output would be retrieved:

@@ -89,10 +88,10 @@ The following output would be retrieved:
 To resolve a support case, you can use the [`ResolveCase`](https://docs.aws.amazon.com/goto/WebAPI/support-2013-04-15/ResolveCase) API.
 The following example resolves the case created in the previous step.

-{{< command >}}
-$ awslocal support resolve-case \
+```bash
+awslocal support resolve-case \
    --case-id "case-12345678910-2020-kEa16f90bJE766J4"
-{{< / command >}}
+```

 Replace the case ID with the ID of the case you want to resolve.
 The following output would be retrieved:

diff --git a/src/content/docs/aws/services/swf.md b/src/content/docs/aws/services/swf.md
index 04038589..878c9f28 100644
--- a/src/content/docs/aws/services/swf.md
+++ b/src/content/docs/aws/services/swf.md
@@ -1,8 +1,6 @@
 ---
 title: "Simple Workflow Service (SWF)"
-linkTitle: "Simple Workflow Service (SWF)"
-description: >
-  Get started with Simple Workflow Service (SWF) on LocalStack
+description: Get started with Simple Workflow Service (SWF) on LocalStack
 tags: ["Free"]
 ---

@@ -13,7 +11,7 @@ SWF allows you to define workflows in a way that's separate from the actual appl
 SWF also provides a programming framework to design, coordinate, and execute workflows that involve multiple tasks, steps, and decision points.

 LocalStack allows you to use the SWF APIs in your local environment to monitor and manage workflow design, task coordination, activity implementation, and error handling.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_swf" >}}), which provides information on the extent of SWF's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of SWF's integration with LocalStack.
## Getting started @@ -27,19 +25,19 @@ We will demonstrate how to register an SWF domain and workflow using the AWS CLI You can register an SWF domain using the [`RegisterDomain`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_RegisterDomain.html) API. Execute the following command to register a domain named `test-domain`: -{{< command >}} -$ awslocal swf register-domain \ +```bash +awslocal swf register-domain \ --name test-domain \ --workflow-execution-retention-period-in-days 1 -{{< /command >}} +``` You can use the [`DescribeDomain`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_DescribeDomain.html) API to verify that the domain was registered successfully. Run the following command to describe the `test-domain` domain: -{{< command >}} -$ awslocal swf describe-domain \ +```bash +awslocal swf describe-domain \ --name test-domain -{{< /command >}} +``` The following output would be retrieved: @@ -61,31 +59,31 @@ The following output would be retrieved: You can list all registered domains using the [`ListDomains`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_ListDomains.html) API. Run the following command to list all registered domains: -{{< command >}} -$ awslocal swf list-domains --registration-status REGISTERED -{{< /command >}} +```bash +awslocal swf list-domains --registration-status REGISTERED +``` To deprecate a domain, use the [`DeprecateDomain`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_DeprecateDomain.html) API. 
Run the following command to deprecate the `test-domain` domain: -{{< command >}} -$ awslocal swf deprecate-domain \ +```bash +awslocal swf deprecate-domain \ --name test-domain -{{< /command >}} +``` You can now list the deprecated domains using the `--registration-status DEPRECATED` flag: -{{< command >}} -$ awslocal swf list-domains --registration-status DEPRECATED -{{< /command >}} +```bash +awslocal swf list-domains --registration-status DEPRECATED +``` ### Registering a workflow You can register a workflow using the [`RegisterWorkflowType`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_RegisterWorkflowType.html) API. Execute the following command to register a workflow named `test-workflow`: -{{< command >}} -$ awslocal swf register-workflow-type \ +```bash +awslocal swf register-workflow-type \ --domain test-domain \ --name test-workflow \ --default-task-list name=test-task-list \ @@ -93,16 +91,16 @@ $ awslocal swf register-workflow-type \ --default-execution-start-to-close-timeout 60 \ --default-child-policy TERMINATE \ --workflow-version "1.0" -{{< /command >}} +``` You can use the [`DescribeWorkflowType`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_DescribeWorkflowType.html) API to verify that the workflow was registered successfully. Run the following command to describe the `test-workflow` workflow: -{{< command >}} -$ awslocal swf describe-workflow-type \ +```bash +awslocal swf describe-workflow-type \ --domain test-domain \ --workflow-type name=test-workflow,version=1.0 -{{< /command >}} +``` The following output would be retrieved: @@ -132,8 +130,8 @@ The following output would be retrieved: You can register an activity using the [`RegisterActivityType`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_RegisterActivityType.html) API. 
Execute the following command to register an activity named `test-activity`: -{{< command >}} -$ awslocal swf register-activity-type \ +```bash +awslocal swf register-activity-type \ --domain test-domain \ --name test-activity \ --default-task-list name=test-task-list \ @@ -142,16 +140,16 @@ $ awslocal swf register-activity-type \ --default-task-schedule-to-start-timeout 30 \ --default-task-schedule-to-close-timeout 30 \ --activity-version "1.0" -{{< /command >}} +``` You can use the [`DescribeActivityType`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_DescribeActivityType.html) API to verify that the activity was registered successfully. Run the following command to describe the `test-activity` activity: -{{< command >}} -$ awslocal swf describe-activity-type \ +```bash +awslocal swf describe-activity-type \ --domain test-domain \ --activity-type name=test-activity,version=1.0 -{{< /command >}} +``` The following output would be retrieved: @@ -182,14 +180,14 @@ The following output would be retrieved: You can start a workflow execution using the [`StartWorkflowExecution`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_StartWorkflowExecution.html) API. 
Execute the following command to start a workflow execution for the `test-workflow` workflow: -{{< command >}} -$ awslocal swf start-workflow-execution \ +```bash +awslocal swf start-workflow-execution \ --domain test-domain \ --workflow-type name=test-workflow,version=1.0 \ --workflow-id test-workflow-id \ --task-list name=test-task-list \ --input '{"foo": "bar"}' -{{< /command >}} +``` The following output would be retrieved: diff --git a/src/content/docs/aws/services/textract.md b/src/content/docs/aws/services/textract.md index 0f218da3..f5a12dbc 100644 --- a/src/content/docs/aws/services/textract.md +++ b/src/content/docs/aws/services/textract.md @@ -1,6 +1,5 @@ --- title: "Textract" -linkTitle: "Textract" description: Get started with Textract on LocalStack tags: ["Ultimate"] persistence: supported @@ -10,7 +9,7 @@ Textract is a machine learning service that automatically extracts text, forms, It simplifies the process of extracting valuable information from a variety of document types, enabling applications to quickly analyze and understand document content. LocalStack allows you to mock Textract APIs in your local environment. -The supported APIs are available on our [API coverage page]({{< ref "coverage_textract" >}}), providing details on the extent of Textract's integration with LocalStack. +The supported APIs are available on our [API coverage page](), providing details on the extent of Textract's integration with LocalStack. ## Getting started @@ -24,10 +23,10 @@ We will demonstrate how to perform basic Textract operations, such as mocking te You can use the [`DetectDocumentText`](https://docs.aws.amazon.com/textract/latest/dg/API_DetectDocumentText.html) API to identify and extract text from a document. 
Execute the following command: -{{< command >}} -$ awslocal textract detect-document-text \ +```bash +awslocal textract detect-document-text \ --document '{"S3Object":{"Bucket":"your-bucket","Name":"your-document"}}' -{{< /command >}} +``` The following output would be retrieved: @@ -48,10 +47,10 @@ The following output would be retrieved: You can use the [`StartDocumentTextDetection`](https://docs.aws.amazon.com/textract/latest/dg/API_StartDocumentTextDetection.html) API to asynchronously detect text in a document. Execute the following command: -{{< command >}} -$ awslocal textract start-document-text-detection \ +```bash +awslocal textract start-document-text-detection \ --document-location '{"S3Object":{"Bucket":"bucket","Name":"document"}}' -{{< /command >}} +``` The following output would be retrieved: @@ -68,10 +67,10 @@ Save the `JobId` value to use in the next command. You can use the [`GetDocumentTextDetection`](https://docs.aws.amazon.com/textract/latest/dg/API_GetDocumentTextDetection.html) API to retrieve the results of a document text detection job. Execute the following command: -{{< command >}} -$ awslocal textract get-document-text-detection \ +```bash +awslocal textract get-document-text-detection \ --job-id "501d7251-1249-41e0-a0b3-898064bfc506" -{{< /command >}} +``` Replace `501d7251-1249-41e0-a0b3-898064bfc506` with the `JobId` value retrieved from the previous command. 
The following output would be retrieved:

diff --git a/src/content/docs/aws/services/timestream.md b/src/content/docs/aws/services/timestream.md
index bf5c3cf3..ad29b52b 100644
--- a/src/content/docs/aws/services/timestream.md
+++ b/src/content/docs/aws/services/timestream.md
@@ -1,6 +1,5 @@
 ---
 title: "Timestream"
-linkTitle: "Timestream"
 description: Get started with Timestream on LocalStack
 tags: ["Ultimate"]
 persistence: supported
@@ -15,7 +14,7 @@ LocalStack contains basic support for Timestream time series databases, includin
 * Writing records to tables
 * Querying timeseries data from tables

-The supported APIs are available on our API Coverage Page ([Timestream-Query]({{< ref "coverage_timestream-query" >}})/[Timestream-Write]({{< ref "coverage_timestream-write" >}})), which provides information on the extent of Timestream integration with LocalStack.
+The supported APIs are available on our API Coverage Page ([Timestream-Query]()/[Timestream-Write]()), which provides information on the extent of Timestream integration with LocalStack.
## Getting Started @@ -23,22 +22,28 @@ The following example illustrates the basic operations, using the [`awslocal`](h First, we create a test database and table: -{{< command >}} -$ awslocal timestream-write create-database --database-name testDB -$ awslocal timestream-write create-table --database-name testDB --table-name testTable -{{}} +```bash +awslocal timestream-write create-database --database-name testDB +awslocal timestream-write create-table --database-name testDB --table-name testTable +``` We can then add a few records with a timestamp, measure name, and value to the table: -{{< command >}} -$ awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"60","TimeUnit":"SECONDS","Time":"1636986409"}]' -$ awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"80","TimeUnit":"SECONDS","Time":"1636986412"}]' -$ awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"70","TimeUnit":"SECONDS","Time":"1636986414"}]' -{{}} +```bash +awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"60","TimeUnit":"SECONDS","Time":"1636986409"}]' +awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"80","TimeUnit":"SECONDS","Time":"1636986412"}]' +awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"70","TimeUnit":"SECONDS","Time":"1636986414"}]' +``` Finally, we can run a query to retrieve the timeseries data (or aggregate values) from the table: -{{< command >}} -$ awslocal timestream-query query --query-string "SELECT CREATE_TIME_SERIES(time, measure_value::double) as cpu FROM testDB.timeStreamTable 
WHERE measure_name='cpu'"
+
+```bash
+awslocal timestream-query query --query-string "SELECT CREATE_TIME_SERIES(time, measure_value::double) as cpu FROM testDB.testTable WHERE measure_name='cpu'"
+```
+
+The following output would be retrieved:
+
+```json
 {
   "Rows": [{
     "Data": [{
@@ -49,16 +54,14 @@ $ awslocal timestream-query query --query-string "SELECT CREATE_TIME_SERIES(time
 }
 },
 ...
-{{}}
+```

 ## Resource Browser

 The LocalStack Web Application provides a Resource Browser for managing Timestream databases.
 You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Timestream** under the **Database** section.

-Timestream Resource Browser
-
-
+![Timestream Resource Browser](/images/aws/timestream-resource-browser.png)

 The Resource Browser allows you to perform the following actions:

@@ -70,6 +73,6 @@ The Resource Browser allows you to perform the following actions:

 ## Current Limitations

-LocalStack's Timestream implementation is under active development and only supports a limited set of operations, please refer to the API Coverage pages for an up-to-date list of implemented and tested functions within [Timestream-Query]({{< ref "coverage_timestream-query" >}}) and [Timestream-Write]({{< ref "coverage_timestream-write" >}}).
+LocalStack's Timestream implementation is under active development and only supports a limited set of operations. Please refer to the API Coverage pages for an up-to-date list of implemented and tested functions within [Timestream-Query]() and [Timestream-Write]().

 If you have a use case that uses Timestream but doesn't work with our implementation yet, we encourage you to [get in touch](https://localstack.cloud/contact/), so we can streamline any operations you rely on.

diff --git a/src/content/docs/aws/services/transcribe.md b/src/content/docs/aws/services/transcribe.md
index 331c4c58..732cb17c 100644
--- a/src/content/docs/aws/services/transcribe.md
+++ b/src/content/docs/aws/services/transcribe.md
@@ -1,6 +1,5 @@
 ---
 title: "Transcribe"
-linkTitle: "Transcribe"
 description: Get started with Amazon Transcribe on LocalStack
 persistence: supported
 tags: ["Free"]
@@ -12,12 +11,12 @@ Transcribe is a service provided by AWS that offers automatic speech recognition
 It enables developers to convert spoken language into written text, making it valuable for a wide range of applications, from transcription services to voice analytics.

 LocalStack allows you to use the Transcribe APIs for offline speech-to-text jobs in your local environment.
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_transcribe" >}}), which provides information on the extent of Transcribe integration with LocalStack. +The supported APIs are available on our [API Coverage Page](), which provides information on the extent of Transcribe integration with LocalStack. LocalStack Transcribe uses an offline speech-to-text library called [Vosk](https://alphacephei.com/vosk/). It requires an active internet connection to download the language model. Once the language model is downloaded, subsequent transcriptions for the same language can be performed offline. -Language models typically have a size of around 50 MiB and are saved in the cache directory (see [Filesystem Layout]({{< ref "filesystem" >}})). +Language models typically have a size of around 50 MiB and are saved in the cache directory (see [Filesystem Layout](/aws/capabilities/config/filesystem)). ## Getting Started @@ -31,29 +30,33 @@ We will demonstrate how to create a transcription job and view the transcript in You can create an S3 bucket using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command. Run the following command to create a bucket named `foo` to upload a sample audio file named `example.wav`: -{{< command >}} -$ awslocal s3 mb s3://foo -$ awslocal s3 cp ~/example.wav s3://foo/example.wav -{{< / command >}} +```bash +awslocal s3 mb s3://foo +awslocal s3 cp ~/example.wav s3://foo/example.wav +``` ### Create a transcription job You can create a transcription job using the [`StartTranscriptionJob`](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_StartTranscriptionJob.html) API. 
Run the following command to create a transcription job named `example` for the audio file `example.wav`:

-{{< command >}}
-$ awslocal transcribe start-transcription-job \
+```bash
+awslocal transcribe start-transcription-job \
     --transcription-job-name example \
     --media MediaFileUri=s3://foo/example.wav \
     --language-code en-IN
-{{< / command >}}
+```

 You can list the transcription jobs using the [`ListTranscriptionJobs`](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_ListTranscriptionJobs.html) API.
 Run the following command to list the transcription jobs:

-{{< command >}}
-$ awslocal transcribe list-transcription-jobs
-
+```bash
+awslocal transcribe list-transcription-jobs
+```
+
+The following output would be retrieved:
+
+```bash
 {
     "TranscriptionJobSummaries": [
         {
@@ -65,17 +68,20 @@ $ awslocal transcribe list-transcription-jobs
         }
     ]
 }
-
-{{< / command >}}
+```

 ### View the transcript

 After the job is complete, the transcript can be retrieved from the S3 bucket using the [`GetTranscriptionJob`](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_GetTranscriptionJob.html) API.
 Run the following command to get the transcript:

-{{< command >}}
-$ awslocal transcribe get-transcription-job --transcription-job example
-
+```bash
+awslocal transcribe get-transcription-job --transcription-job-name example
+```
+
+The following output would be retrieved:
+
+```bash
 {
     "TranscriptionJob": {
         "TranscriptionJobName": "example",
@@ -93,13 +99,20 @@ $ awslocal transcribe get-transcription-job --transcription-job example
         "CompletionTime": "2022-08-17T14:04:57.400000+05:30",
     }
 }
-
-$ awslocal s3 cp s3://foo/7844aaa5.json .
-$ jq .results.transcripts[0].transcript 7844aaa5.json
-
+```
+
+You can then view the transcript by running the following command:
+
+```bash
+awslocal s3 cp s3://foo/7844aaa5.json .
+jq .results.transcripts[0].transcript 7844aaa5.json +``` + +The following output would be retrieved: + +```bash "it is just a question of getting rid of the illusion that we are separate from nature" - -{{< / command >}} +``` ## Audio Formats @@ -150,9 +163,7 @@ The following languages and dialects are supported: The LocalStack Web Application provides a Resource Browser for managing Transcribe Transcription Jobs. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Transcribe Service** under the **Machine Learning** section. -Transcribe Resource Browser -
-
+![Transcribe Resource Browser](/images/aws/transcribe-resource-browser.png)

 The Resource Browser allows you to perform the following actions:

diff --git a/src/content/docs/aws/services/transfer.md b/src/content/docs/aws/services/transfer.md
index 468c6959..442c5df0 100644
--- a/src/content/docs/aws/services/transfer.md
+++ b/src/content/docs/aws/services/transfer.md
@@ -2,8 +2,6 @@
 title: "Transfer"
-linkTitle: "Transfer"
 tags: ["Ultimate"]
-description: >
-  Get started with Amazon Transfer on LocalStack
+description: Get started with Transfer on LocalStack
 ---

 ## Introduction

diff --git a/src/content/docs/aws/services/verifiedpermissions.md b/src/content/docs/aws/services/verifiedpermissions.md
index c94cf47c..f371cc00 100644
--- a/src/content/docs/aws/services/verifiedpermissions.md
+++ b/src/content/docs/aws/services/verifiedpermissions.md
@@ -1,6 +1,5 @@
 ---
 title: "Verified Permissions"
-linkTitle: "Verified Permissions"
 description: Get started with Verified Permissions on LocalStack
 tags: ["Ultimate"]
 ---
@@ -12,7 +11,7 @@ It helps secure applications by moving authorization logic outside the app and m
 It checks if a principal can take an action on a resource in a specific context in your application.

 LocalStack allows you to use the Verified Permissions APIs in your local environment to test your authorization logic, with integrations with other AWS services like Cognito.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_verifiedpermissions" >}}), which provides information on the extent of Verified Permissions' integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Verified Permissions' integration with LocalStack.
## Getting started @@ -26,11 +25,11 @@ We will demonstrate how to create a Verified Permissions Policy Store, add a pol To create a Verified Permissions Policy Store, use the [`CreatePolicyStore`](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_CreatePolicyStore.html) API. Run the following command to create a Policy Store with Schema validation settings set to `OFF`: -{{< command >}} -$ awslocal verifiedpermissions create-policy-store \ +```bash +awslocal verifiedpermissions create-policy-store \ --validation-settings mode=OFF \ --description "A local Policy Store" -{{< /command >}} +``` The above command returns the following response: @@ -46,9 +45,9 @@ The above command returns the following response: You can list all the Verified Permissions policy stores using the [`ListPolicyStores`](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_ListPolicyStores.html) API. Run the following command to list all the Verified Permissions policy stores: -{{< command >}} -$ awslocal verifiedpermissions list-policy-stores -{{< /command >}} +```bash +awslocal verifiedpermissions list-policy-stores +``` ### Create a Policy @@ -66,11 +65,12 @@ Create a JSON file named `static_policy.json` with the following content: ``` You can then run this command to create the policy: -{{< command >}} -$ awslocal verifiedpermissions create-policy \ + +```bash +awslocal verifiedpermissions create-policy \ --definition file://static_policy.json \ --policy-store-id q5PCScu9qo4aswMVc0owNN -{{< /command >}} +``` Replace the policy store ID with the ID of the policy store you created previously. @@ -106,13 +106,13 @@ You should see the following output: We can now make use of the Policy Store and the Policy to start authorizing requests. To authorize a request using Verified Permissions, use the [`IsAuthorized`](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_IsAuthorized.html) API. 
-{{< command >}} -$ awslocal verifiedpermissions is-authorized \ +```bash +awslocal verifiedpermissions is-authorized \ --policy-store-id q5PCScu9qo4aswMVc0owNN \ --principal entityType=User,entityId=alice \ --action actionType=Action,actionId=view \ --resource entityType=Album,entityId=trip -{{< /command >}} +``` You should get the following output, indicating that your request was allowed: diff --git a/src/content/docs/aws/services/waf.md b/src/content/docs/aws/services/waf.md index ca7b2ce3..5ca30116 100644 --- a/src/content/docs/aws/services/waf.md +++ b/src/content/docs/aws/services/waf.md @@ -1,6 +1,5 @@ --- title: "Web Application Firewall (WAF)" -linkTitle: "Web Application Firewall (WAF)" description: Get started with Web Application Firewall (WAF) on LocalStack tags: ["Ultimate"] --- @@ -11,7 +10,7 @@ Web Application Firewall (WAF) is a service provided by Amazon Web Services (AWS WAFv2 is the latest version of WAF, and it allows you to specify a single set of rules to protect your web applications, APIs, and mobile applications from common attack patterns, such as SQL injection and cross-site scripting. LocalStack allows you to use the WAFv2 APIs for offline web application firewall jobs in your local environment. -The supported APIs are available on our [API Coverage Page]({{< ref "coverage_wafv2" >}}), which provides information on the extent of WAFv2 integration with LocalStack. +The supported APIs are available on our [API Coverage Page](), which provides information on the extent of WAFv2 integration with LocalStack. ## Getting started @@ -25,13 +24,17 @@ We will walk you through creating, listing, tagging, and viewing tags for Web Ac Start by creating a Web Access Control List (WebACL) using the [`CreateWebACL`](https://docs.aws.amazon.com/waf/latest/APIReference/API_CreateWebACL.html) API. 
Run the following command to create a WebACL named `TestWebAcl`:

-{{< command >}}
-$ awslocal wafv2 create-web-acl \
+```bash
+awslocal wafv2 create-web-acl \
    --name TestWebAcl \
    --scope REGIONAL \
    --default-action Allow={} \
    --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=TestWebAclMetrics
-
+```
+
+You should see the following output:
+
+```json
{
    "Summary": {
        "Name": "TestWebAcl",
@@ -40,8 +43,7 @@ $ awslocal wafv2 create-web-acl \
        "ARN": "arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d"
    }
}
-
-{{< command >}}
+```

Note the `Id` and `ARN` from the output, as they will be needed for subsequent commands.

@@ -50,9 +52,13 @@ Note the `Id` and `ARN` from the output, as they will be needed for subsequent c
To view all the WebACLs you have created, use the [`ListWebACLs`](https://docs.aws.amazon.com/waf/latest/APIReference/API_ListWebACLs.html) API.
Run the following command to list the WebACLs:

-{{< command >}}
-$ awslocal wafv2 list-web-acls --scope REGIONAL
-
+```bash
+awslocal wafv2 list-web-acls --scope REGIONAL
+```
+
+You should see the following output:
+
+```json
{
    "NextMarker": "Not Implemented",
    "WebACLs": [
@@ -64,8 +70,7 @@ $ awslocal wafv2 list-web-acls --scope REGIONAL
        }
    ]
}
-
-{{< command >}}
+```

### Tag a WebACL

@@ -73,20 +78,24 @@ Tagging resources in AWS WAF helps you manage and identify them.
Use the [`TagResource`](https://docs.aws.amazon.com/waf/latest/APIReference/API_TagResource.html) API to add tags to a WebACL.
Run the following command to add a tag to the WebACL created in the previous step:

-{{< command >}}
-$ awslocal wafv2 tag-resource \
+```bash
+awslocal wafv2 tag-resource \
    --resource-arn arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d \
    --tags Key=Name,Value=AWSWAF
-{{< /command >}}
+```

After tagging your resources, you may want to view these tags.
Use the [`ListTagsForResource`](https://docs.aws.amazon.com/waf/latest/APIReference/API_ListTagsForResource.html) API to list the tags for a WebACL.
Run the following command to list the tags for the WebACL created in the previous step:

-{{< command >}}
-$ awslocal wafv2 list-tags-for-resource \
+```bash
+awslocal wafv2 list-tags-for-resource \
    --resource-arn arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d
-
+```
+
+You should see the following output:
+
+```json
{
    "TagInfoForResource": {
        "ResourceARN": "arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d",
@@ -98,5 +107,4 @@ $ awslocal wafv2 list-tags-for-resource \
        ]
    }
}
-
-{{< command >}}
+```
diff --git a/src/content/docs/aws/services/xray.md b/src/content/docs/aws/services/xray.md
index 16f28f68..ac5073b6 100644
--- a/src/content/docs/aws/services/xray.md
+++ b/src/content/docs/aws/services/xray.md
@@ -1,6 +1,5 @@
---
title: "X-Ray"
-linkTitle: "X-Ray"
description: Get started with X-Ray on LocalStack
tags: ["Ultimate"]
---
@@ -20,7 +19,7 @@ The X-Ray API can then be used to retrieve traces originating from different app
LocalStack allows you to use the X-Ray APIs to send and retrieve trace segments in your local environment.

-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_xray" >}}),
+The supported APIs are available on our [API Coverage Page](),
which provides information on the extent of X-Ray integration with LocalStack.

## Getting started

@@ -41,35 +40,37 @@ You can generates a unique trace ID and constructs a JSON document with trace in
It then sends this trace segment to the AWS X-Ray API using the [PutTraceSegments](https://docs.aws.amazon.com/xray/latest/api/API_PutTraceSegments.html) API.
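For reference, the trace ID assembled by the shell commands in this section follows the format `1-<epoch time as 8 hex digits>-<24 random hex digits>`. A minimal sketch of building such a segment document in Python (the helper names here are illustrative, not part of any X-Ray SDK):

```python
import json
import os
import time

def make_trace_id() -> str:
    # X-Ray trace ID: "1", the epoch time as 8 hex digits,
    # and 24 random hex digits, joined with hyphens.
    return f"1-{int(time.time()):08x}-{os.urandom(12).hex()}"

def make_segment_doc(name: str, duration: float = 3.0) -> str:
    start = time.time()
    segment = {
        "trace_id": make_trace_id(),
        "id": os.urandom(8).hex(),  # segment IDs are 16 hex digits
        "start_time": start,
        "end_time": start + duration,
        "name": name,
    }
    return json.dumps(segment)

print(make_segment_doc("test.elasticbeanstalk.com"))
```

The printed JSON can then be passed to `put-trace-segments` just like the shell-built `$DOC`.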
Run the following commands in your terminal:

-{{< command >}}
-$ START_TIME=$(date +%s)
-$ HEX_TIME=$(printf '%x\n' $START_TIME)
-$ GUID=$(dd if=/dev/random bs=12 count=1 2>/dev/null | od -An -tx1 | tr -d ' \t\n')
-$ TRACE_ID="1-$HEX_TIME-$GUID"
-$ END_TIME=$(($START_TIME+3))
-$ DOC=$(cat <
+```bash
+START_TIME=$(date +%s)
+HEX_TIME=$(printf '%x\n' $START_TIME)
+GUID=$(dd if=/dev/random bs=12 count=1 2>/dev/null | od -An -tx1 | tr -d ' \t\n')
+TRACE_ID="1-$HEX_TIME-$GUID"
+END_TIME=$(($START_TIME+3))
+DOC=$(cat <
+echo "Sending trace segment to X-Ray API: $DOC"
+awslocal xray put-trace-segments --trace-segment-documents "$DOC"
+```
+
+You should see the following output:
+
+```json
Sending trace segment to X-Ray API: {"trace_id": "1-6501ee11-056ec85fafff21f648e2d3ae", "id": "6226467e3f845502", "start_time": 1694625297.37518, "end_time": 1694625300.4042, "name": "test.elasticbeanstalk.com"}
{
    "UnprocessedTraceSegments": []
}
-
-{{< /command >}}

### Retrieve trace summaries

You can now retrieve the trace summaries from the last 10 minutes using the [GetTraceSummaries](https://docs.aws.amazon.com/xray/latest/api/API_GetTraceSummaries.html) API.
Run the following commands in your terminal:

-{{< command >}}
-$ EPOCH=$(date +%s)
-$ awslocal xray get-trace-summaries --start-time $(($EPOCH-600)) --end-time $(($EPOCH))
-
+```bash
+EPOCH=$(date +%s)
+awslocal xray get-trace-summaries --start-time $(($EPOCH-600)) --end-time $(($EPOCH))
+```
+
+You should see the following output:
+
+```json
{
    "TraceSummaries": [
        {
@@ -88,17 +89,20 @@ $ awslocal xray get-trace-summaries --start-time $(($EPOCH-600)) --end-time $(($
    "TracesProcessedCount": 1,
    "ApproximateTime": 1694625413.0
}
-
-{{< command >}}
+```

### Retrieve full trace

You can retrieve the full trace by providing the `TRACE_ID` using the [BatchGetTraces](https://docs.aws.amazon.com/xray/latest/api/API_BatchGetTraces.html) API.
Run the following commands in your terminal (use the same terminal as for the first command):

-{{< command >}}
-$ awslocal xray batch-get-traces --trace-ids $TRACE_ID
-
+```bash
+awslocal xray batch-get-traces --trace-ids $TRACE_ID
+```
+
+You should see the following output:
+
+```json
{
    "Traces": [
        {
@@ -114,8 +118,7 @@ $ awslocal xray batch-get-traces --trace-ids $TRACE_ID
    ],
    "UnprocessedTraceIds": []
}
-
-{{< command >}}
+```

## Examples

From 376aba4dc480bbbe99f6433ba0c1942ad0fad371 Mon Sep 17 00:00:00 2001
From: HarshCasper
Date: Thu, 19 Jun 2025 00:29:06 +0530
Subject: [PATCH 80/80] last pieces

---
 src/content/docs/aws/services/firehose.md | 66 +++++++++++------------
 src/content/docs/aws/services/neptune.md  |  6 +--
 2 files changed, 34 insertions(+), 38 deletions(-)

diff --git a/src/content/docs/aws/services/firehose.md b/src/content/docs/aws/services/firehose.md
index e66924a7..fe50cb01 100644
--- a/src/content/docs/aws/services/firehose.md
+++ b/src/content/docs/aws/services/firehose.md
@@ -1,14 +1,12 @@
---
title: "Data Firehose"
-linkTitle: "Data Firehose"
-description: >
-  Get started with Data Firehose on LocalStack
+description: Get started with Data Firehose on LocalStack
tags: ["Free"]
---

-{{< callout >}}
+:::note
This service was formerly known as 'Kinesis Data Firehose'.
-{{< /callout >}}
+:::

## Introduction

@@ -16,7 +14,7 @@ Data Firehose is a service provided by AWS that allows you to extract, transform
With Data Firehose, you can ingest and deliver real-time data from different sources as it automates data delivery, handles buffering and compression, and scales according to the data volume.

LocalStack allows you to use the Data Firehose APIs in your local environment to load and transform real-time data.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_firehose" >}}), which provides information on the extent of Data Firehose's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Data Firehose's integration with LocalStack.

## Getting started

@@ -30,9 +28,9 @@ We will demonstrate how to use Firehose to load Kinesis data into Elasticsearch
You can create an Elasticsearch domain using the [`create-elasticsearch-domain`](https://docs.aws.amazon.com/cli/latest/reference/es/create-elasticsearch-domain.html) command.
Execute the following command to create a domain named `es-local`:

-{{< command >}}
-$ awslocal es create-elasticsearch-domain --domain-name es-local
-{{< / command >}}
+```bash
+awslocal es create-elasticsearch-domain --domain-name es-local
+```

Save the value of the `Endpoint` field from the response, as it will be required further down to confirm the setup.

@@ -43,17 +41,17 @@ Now let us create our target S3 bucket and our source Kinesis stream:
Before creating the stream, we need to create an S3 bucket to store our backup data.
You can do this using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command:

-{{< command >}}
-$ awslocal s3 mb s3://kinesis-activity-backup-local
-{{< / command >}}
+```bash
+awslocal s3 mb s3://kinesis-activity-backup-local
+```

You can now use the [`CreateStream`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_CreateStream.html) API to create a Kinesis stream named `kinesis-es-local-stream` with two shards:

-{{< command >}}
-$ awslocal kinesis create-stream \
+```bash
+awslocal kinesis create-stream \
    --stream-name kinesis-es-local-stream \
    --shard-count 2
-{{< / command >}}
+```

### Create a Firehose delivery stream

@@ -64,20 +62,20 @@ Within the `kinesis-stream-source-configuration`, it is required to specify the
The `elasticsearch-destination-configuration` sets vital parameters, which include the access role, `DomainARN` of the Elasticsearch domain where you wish to publish, and the settings including the `IndexName` and `TypeName` for the Elasticsearch setup.
Additionally, to back up all documents to S3, the `S3BackupMode` parameter is set to `AllDocuments`, which is accompanied by `S3Configuration`.

-{{< callout >}}
+:::note
Within LocalStack's default configuration, IAM roles remain unverified and no strict validation is applied on ARNs.
However, when operating within the AWS environment, you need to check the access rights of the specified role for the task.
-{{< /callout >}} +::: You can use the [`CreateDeliveryStream`](https://docs.aws.amazon.com/firehose/latest/APIReference/API_CreateDeliveryStream.html) API to create a Firehose delivery stream named `activity-to-elasticsearch-local`: -{{< command >}} -$ awslocal firehose create-delivery-stream \ +```bash +awslocal firehose create-delivery-stream \ --delivery-stream-name activity-to-elasticsearch-local \ --delivery-stream-type KinesisStreamAsSource \ --kinesis-stream-source-configuration "KinesisStreamARN=arn:aws:kinesis:us-east-1:000000000000:stream/kinesis-es-local-stream,RoleARN=arn:aws:iam::000000000000:role/Firehose-Reader-Role" \ --elasticsearch-destination-configuration "RoleARN=arn:aws:iam::000000000000:role/Firehose-Reader-Role,DomainARN=arn:aws:es:us-east-1:000000000000:domain/es-local,IndexName=activity,TypeName=activity,S3BackupMode=AllDocuments,S3Configuration={RoleARN=arn:aws:iam::000000000000:role/Firehose-Reader-Role,BucketARN=arn:aws:s3:::kinesis-activity-backup-local}" -{{< / command >}} +``` On successful execution, the command will return the `DeliveryStreamARN` of the created delivery stream: @@ -93,10 +91,10 @@ Before testing the integration, it's necessary to confirm if the local Elasticse You can use the [`describe-elasticsearch-domain`](https://docs.aws.amazon.com/cli/latest/reference/es/describe-elasticsearch-domain.html) command to check the status of the Elasticsearch cluster. Run the following command: -{{< command >}} -$ awslocal es describe-elasticsearch-domain \ +```bash +awslocal es describe-elasticsearch-domain \ --domain-name es-local | jq ".DomainStatus.Processing" -{{< / command >}} +``` Once the command returns `false`, you can move forward with data ingestion. The data can be added to the source Kinesis stream or directly to the Firehose delivery stream. 
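When records are sent straight to the delivery stream, the `Data` value is the JSON payload base64-encoded. As a quick sketch (Python used here purely for illustration), the blob that appears in the direct `put-record` example decodes as follows:

```python
import base64
import json

# Firehose carries each record's payload as a base64-encoded blob.
payload = json.dumps({"target": "Hello world"}).encode("utf-8")
encoded = base64.b64encode(payload).decode("ascii")
print(encoded)  # eyJ0YXJnZXQiOiAiSGVsbG8gd29ybGQifQ==
```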
@@ -104,32 +102,32 @@ The data can be added to the source Kinesis stream or directly to the Firehose d You can add data to the Kinesis stream using the [`PutRecord`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html) API. The following command adds a record to the stream: -{{< command >}} -$ awslocal kinesis put-record \ +```bash +awslocal kinesis put-record \ --stream-name kinesis-es-local-stream \ --data '{ "target": "barry" }' \ --partition-key partition -{{< / command >}} +``` -{{< callout "tip" >}} +:::note For users using AWS CLI v2, consider adding `--cli-binary-format raw-in-base64-out` to the command mentioned above. -{{< /callout >}} +::: You can use the [`PutRecord`](https://docs.aws.amazon.com/firehose/latest/APIReference/API_PutRecord.html) API to add data to the Firehose delivery stream. The following command adds a record to the stream: -{{< command >}} -$ awslocal firehose put-record \ +```bash +awslocal firehose put-record \ --delivery-stream-name activity-to-elasticsearch-local \ --record '{ "Data": "eyJ0YXJnZXQiOiAiSGVsbG8gd29ybGQifQ==" }' -{{< / command >}} +``` To review the entries in Elasticsearch, you can employ [curl](https://curl.se/) for simplicity. Remember to replace the URL with the `Endpoint` field from the initial `create-elasticsearch-domain` operation. 
-{{< command >}} -$ curl -s http://es-local.us-east-1.es.localhost.localstack.cloud:443/activity/_search | jq '.hits.hits' -{{< / command >}} +```bash +curl -s http://es-local.us-east-1.es.localhost.localstack.cloud:443/activity/_search | jq '.hits.hits' +``` You will get an output similar to the following: diff --git a/src/content/docs/aws/services/neptune.md b/src/content/docs/aws/services/neptune.md index c9c8a2c8..dfca2833 100644 --- a/src/content/docs/aws/services/neptune.md +++ b/src/content/docs/aws/services/neptune.md @@ -142,7 +142,7 @@ if __name__ == '__main__': Amazon Neptune resources with IAM DB authentication enabled require all requests to use AWS Signature Version 4. -When LocalStack starts with [IAM enforcement enabled]({{< ref "/user-guide/security-testing" >}}), the Neptune database checks user permissions before granting access. +When LocalStack starts with [IAM enforcement enabled](/aws/capabilities/security-testing/iam-policy-enforcement), the Neptune database checks user permissions before granting access. The following Gremlin query actions are available for database engine versions `1.3.2.0` and higher: ```json @@ -237,9 +237,7 @@ rm -rf /lib/tinkerpop The LocalStack Web Application provides a Resource Browser for managing Neptune databases and clusters. You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Neptune** under the **Database** section. -Neptune Resource Browser -
-
+![Neptune Resource Browser](/images/aws/neptune-resource-browser.png) The Resource Browser allows you to perform the following actions: